@AliasAKA - eviltoast
  • 2 Posts
  • 130 Comments
Joined 1 year ago
Cake day: August 4th, 2023

  • You just equated doing things you have at least semi-active control over to someone’s genetics predisposing them to certain medical conditions, which they have zero control over. In insurance markets, risk is supposed to balance out and push people toward less risky choices. You can’t de-risk your alleles.

    Health insurance is a fundamentally flawed idea, not because of preexisting conditions but because of profiteering. We should optimize the health of our citizens directly: tax wealthy individuals and companies and pay for the most effective healthcare for everyone. It’s more cost-effective for society at large and serves the greatest cross-section of our community; there just won’t be a profit motive (although there is a motive in that delivering better, more cost-effective healthcare to everyone lowers the overall cost of healthcare for society, which isn’t so much maximizing profit as minimizing the cost-to-benefit ratio).

    And by the way, a paper next year may find that an allele you carry increases your risk of some horrendous disease. The people in this thread are arguing that you should still be able to afford healthcare. You’re arguing that you shouldn’t.




  • I don’t believe this is quite right. They’re capable of following instructions that aren’t in their training data but look like things that were; that is, a model can probabilistically interpolate between what it has seen in training and what you prompted it with, which is why prompting can be so important. Chain of thought is essentially automated prompt engineering: if the model has seen a similar process (e.g., from an online help forum or study materials), it can emulate that process with different keywords and phrases. The models themselves, however, are not able to perform “a is to b, therefore b is to a,” arguably the cornerstone of symbolic reasoning. This is in part because they have no state model or true grounding, only the probability of observing a token given some context. So even with chain of thought, the model is not reasoning; it’s doing very fancy interpolation of the words and phrases in the initial prompt to generate a prompt that will probably yield a better answer, not because of reasoning, but because of a stochastic process (a toy sketch follows).
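
    A toy sketch of that claim (purely illustrative; the hard-coded distribution and function names are made up, not any real model or API): “chain of thought” here is just extra text prepended to the prompt before the same next-token sampling loop runs, with no separate reasoning step.

        import random

        # Toy conditional distribution P(next token | context). A real LLM learns
        # this from data; it is hard-coded here purely for illustration.
        TOY_MODEL = {
            "Q: 2+2= A:": {"4": 0.9, "5": 0.1},
            "Let's think step by step. Q: 2+2= A:": {"2+2 is 4, so 4": 0.95, "5": 0.05},
        }

        def sample_next(context: str) -> str:
            """Sample a continuation using only token probabilities -- no state model."""
            dist = TOY_MODEL.get(context, {"<unk>": 1.0})
            tokens, probs = zip(*dist.items())
            return random.choices(tokens, weights=probs)[0]

        def chain_of_thought(question: str) -> str:
            """'Chain of thought' as automated prompt engineering: prepend phrasing
            the model has likely seen near worked solutions, then sample as usual."""
            return sample_next("Let's think step by step. " + question)

        print(sample_next("Q: 2+2= A:"))       # plain prompt
        print(chain_of_thought("Q: 2+2= A:"))  # same sampler, augmented prompt

    The augmented prompt tends to land in regions of the training distribution where worked solutions appear, which is why it often helps, even though the sampling mechanism is unchanged.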



  • This is the answer.

    We really need regulation at the FCC level as well, such that users are able to physically disable any and all WiFi radios in a device. I want to be able to buy a TV and completely, physically disable any wireless radios therein (I don’t want my electronics hopping onto free WiFi unless I’ve asked them to).

    We also need privacy regulation from the FTC such that any product capable of connecting to the internet discloses that capability, gives the user, at any time, an export of the data it is holding to send (or of the last export it made), and allows the owner to permanently disable any such data export over the internet.