“Should we not be buying VW, BMW, Siemens and Bayer technology and products today because they participated in holocaust and directly collaborated with Hitler?” – CEO of Kagi when given feedback re: Brave partnership
This is something I really don’t get.
But let’s assume that all the previous points are invalid, and - for a greater good - it’s worth displaying a message when someone is looking at suicide-related topics. What about “how to kill someone”, “how to rape”, “how to […]” with the hundreds of things that are universally considered wrong? And even worse, what about the thousands of things that are not universally considered wrong, but that some group thinks are wrong? “How to change sex”, “how to blow up a pipeline”, etc.?
This, I think, was their point in that conversation, and I agree with it. The moment you try to interpret what the user wants to do with the information they ask for, and you assume responsibility for changing the user’s mind, there are hundreds or thousands of instances in which users or groups of users will demand you take a position on what they believe is right. Instead, I think a search engine should stop at providing information relevant to your query and not assume what you want to do with it. It’s not its place to correct people’s behavior or educate people. The public education system should do that, and the healthcare system should ensure people have the right support. A search engine is (or rather, should be) basically like a librarian, or a library index: you ask for what you want and they point you in that direction. They don’t try to guess why you are looking for books about torture or environmental activism.
This at least is my perspective.