I saw this article, which made me think about it…
Kids under 16 to be banned from social media after Senate passes world-first laws
Seeing what kind of brainrot kids are watching makes me think it’s a good idea. I wouldn’t say all content is bad, but most kids will get hooked on trash content that is intentionally designed to grab their attention.
What would be an effective way to enforce a restriction with the fewest possible side effects? And who should be the one enforcing that restriction in your opinion?
I can’t remember which article I was reading (probably one on Lemmy), but it argued that social media algorithms are bad for people’s mental and physical health, that they are divisive and drive extremism, and that in general they are not safe for society.
Drugs are regulated to ensure they are safe, so why aren’t social media algorithms regulated the same way? Politicians not understanding the technical details of algorithms is not an excuse - politicians also don’t understand the technical details of drugs, so they have a process involving experts that ensures they are safe.
I think I’m on the side of that article. Social media algorithms are demonstrably unsafe in a range of ways, and it’s not just for under 16s. So I think we should be regulating the algorithms, requiring companies wishing to use them to prove they are safe before they do so. You could pre-approve certain basic ones (rank by date, rank by upvotes minus downvotes with time decay like lemmy, etc). You could issue patents to them like we do with drugs. But all in all, I think I am on the side of fixing the problem rather than pretending to care in the name of saving the kids.
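To make the “pre-approved basic algorithms” idea concrete, here is a minimal sketch of the “upvotes minus downvotes with time decay” ranking mentioned above. The function name, the `gravity` constant, and the exact formula are my own assumptions for illustration; Lemmy’s actual hot-rank formula differs in its constants and shape.

```python
import math
from datetime import datetime, timezone

def hot_rank(upvotes: int, downvotes: int, published: datetime,
             gravity: float = 1.8) -> float:
    """Sketch of a transparent ranking: net votes damped by post age.

    NOT Lemmy's real implementation; constants chosen for illustration.
    """
    score = upvotes - downvotes
    hours = (datetime.now(timezone.utc) - published).total_seconds() / 3600
    # log10 damps runaway vote counts; the denominator applies time decay,
    # so a newer post outranks an older one with the same net score
    return math.log10(max(score, 1)) / math.pow(hours + 2, gravity)
```

An algorithm this simple is fully auditable: a regulator (or anyone) can read the whole thing and see it contains no engagement-maximising feedback loop, which is the property the pre-approval idea relies on.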
I recall that some years ago Facebook investigated its own algorithm and found it was potentially leading to overuse, which might be what you’re thinking of. What actually happened, though, is that they then changed the algorithm so that people wouldn’t use Facebook as much. Of course, people who are opposed to social media ignored that second half.
Anyway, when you say the algorithms are demonstrably unsafe, you’re wrong by your own standard: you didn’t demonstrate anything, and you didn’t cite anyone who has. You can say you think they’re unsafe, but that’s a matter of opinion, and we all have our own opinions.
No, it was recent, and it was an opinion style piece not news.
Can you back this up? Were they forced to by a court, or was this before the IPO, when Facebook was still trying to gain ground and didn’t answer to the share market? I can’t imagine they would be allowed to take actions that reduce profits; companies are legally required to maximise shareholder value.
I mean, it doesn’t take long to find studies like “A nationwide study on time spent on social media and self-harm among adolescents”, “Does mindless scrolling hamper well-being?”, or “How Algorithms Promote Self-Radicalization”, but I think this misses the point.
You’ve grabbed a throwaway comment and missed the point of my post. Facebook is one type of social media, and it uses a specific algorithm, just as ibuprofen is a specific type of drug. Sometimes ibuprofen can be used in ways that are harmful, but it is largely considered safe. Even so, the producers still had to prove it was safe.