

I think you mean that you can choose a project that doesn’t have an “algorithm” (in the sense that you’re conveying).
Anyone can create a project with ActivityPub that has an algorithm for feeding content to you.
I think this would only be acceptable if the “AI-assisted” system kicks in when call volumes are high (when dispatchers are overburdened with calls).
For anyone who’s been in a situation where you’re frantically trying to get ahold of 911 and have to make 10 calls to do so, a system like this would have been really useful for relieving whatever call-volume situation was going on at the time. At least in my experience it didn’t matter too much, because the guy had already been dead for a bit.
And for those of you who are dispatchers, I get it: it can be frustrating to get 911 calls all the time for the most ridiculous reasons, but I still think it would be best if a system like this only kicks in when necessary.
Being able to talk to a human right away is way better than essentially being asked to “press 1 if this is really an emergency, press 2 if this is not an emergency”.
I had to click to figure out just what an “AI Browser” is.
It’s basically Copilot/Recall, but only for your browser. If the models are run locally, the information is protected, and none of that information is transmitted, then I don’t see a problem with this (although they would have to prove it by being open source). But, as it is, this just looks like a browser with major privacy/security flaws.
At launch, Dia’s core feature is its AI assistant, which you can invoke at any time. It’s not just a chatbot floating on top of your browser, but rather a context-aware assistant that sees your tabs, your open sessions, and your digital patterns. You can use it to summarize web pages, compare info across tabs, draft emails based on your writing style, or even reference past searches.
Reading into it a bit more:
Agrawal is also careful to note that all your data is stored and encrypted on your computer. “Whenever stuff is sent up to our service for processing,” he says, “it stays up there for milliseconds and then it’s wiped.” Arc has had a few security issues over time, and Agrawal says repeatedly that privacy and security have been core to Dia’s development from the very beginning. Over time, he hopes almost everything in Dia can happen locally.
Yeah, the part about sending everything appearing in my browser window (passwords, banking, etc.) to some other computer for processing makes the other assurances worthless. At least they have plans to get everything running locally, but this is a hard pass for me.
I didn’t factor in mobile power usage as much in the equation before because it’s fairly negligible. However, I downloaded an app to track my phone’s energy use just for fun.
A mobile user browsing the fediverse uses electricity at a rate of ~1 watt (depending on the phone, of course, and whether you’re using WiFi or LTE, etc.).
For a mobile user on WiFi:
In the 16 seconds that a desktop user takes to burn through the energy to match those 2 prompts to ChatGPT, that same mobile user would only use ~0.0044 Wh.
Looking at it another way, a mobile user could browse the fediverse for 18min before they match the 0.3 Wh that a single prompt to ChatGPT would use.
For a mobile user on LTE:
With Voyager I was getting a rate of ~2 Watts.
With a browser I was getting a rate of ~4 Watts.
So to match the energy of a single prompt to ChatGPT, you could browse the fediverse on Voyager for ~9 minutes, or in a browser for ~4.5 minutes.
I’m not sure how accurate this app is, and I didn’t test extensively to really nail down exact values, but those numbers sound about right.
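For anyone who wants to sanity-check the numbers above, here’s a minimal sketch, assuming ~0.3 Wh per ChatGPT prompt and the draw rates I measured (1 W on WiFi, 2 W in Voyager on LTE, 4 W in a browser on LTE):

```python
PROMPT_WH = 0.3  # estimated energy per ChatGPT prompt (Wh)

# Measured draw rates from my (admittedly rough) phone-energy app
rates = [("WiFi browsing", 1), ("Voyager on LTE", 2), ("browser on LTE", 4)]

for label, watts in rates:
    # (Wh / W) gives hours; multiply by 60 for minutes of browsing per prompt
    minutes = PROMPT_WH / watts * 60
    print(f"{label}: ~{minutes:g} min of browsing per prompt")
```

That reproduces the 18 / 9 / 4.5 minute figures.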
My question simply relates to whether I can support the software development without supporting lemmy.ml.
No. You can’t support Lemmy development without supporting lemmy.ml, because the developers use lemmy.ml for testing. They haven’t created a way for users to direct their donations to one and not the other.
That’s why others are suggesting you should just support a different but similar fediverse project like PieFed or Mbin instead.
Yeah, if you’re relying on them to be right about anything, you’re using it wrong.
A fine tuned model will go a lot further if you’re looking for something specific, but they mostly excel with summarizing text or brainstorming ideas.
For instance, if you’re a Dungeon Master in D&D and the group goes off script, you can quickly generate the back story of some random character that you didn’t expect the players to do a deep dive on.
Yeah, ~100-133 depending on how much energy your electric kettle uses.
Depends on the electric kettle; the first few I looked at on Amazon run at ~600-800 Watts.
So, on the lower end there, you’re looking at about 0.166 Wh every second.
So a single prompt to ChatGPT (0.3 Wh) uses about as much energy as an electric kettle does in under 2 seconds.
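The kettle math, sketched out (assuming the ~600-800 W kettles above and ~0.3 Wh per prompt):

```python
PROMPT_WH = 0.3  # estimated energy per ChatGPT prompt (Wh)

for kettle_w in (600, 800):
    wh_per_second = kettle_w / 3600          # watts -> Wh per second
    seconds = PROMPT_WH / wh_per_second      # kettle-seconds per prompt
    print(f"{kettle_w} W kettle: {wh_per_second:.3f} Wh/s, "
          f"one prompt = {seconds:.2f} s of kettle time")
```

So 1.8 seconds on the low end, 1.35 seconds on the high end.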
While I agree that their comment didn’t add much to the discussion, it’s possible that you used more electricity to type out your response than it did for them to post theirs.
It’s estimated that a single ChatGPT prompt uses up ~0.3 Wh of electricity.
If @Empricorn@feddit.nl is on a desktop computer browsing the internet using electricity at a rate of ~150 W, and @TropicalDingdong@lemmy.world is on a smartphone, then you would only have ~16 seconds to type up a response before you begin using more electricity than they did.
150 W × 1 hour = 150 Wh, so 150 Wh / 60 min / 60 sec ≈ 0.04167 Wh every second.
Or about 2.5 Wh every minute.
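Here’s the same arithmetic as a quick sketch, assuming ~0.3 Wh per ChatGPT prompt and a desktop drawing ~150 W:

```python
PROMPT_WH = 0.3   # estimated energy per ChatGPT prompt (Wh)
DESKTOP_W = 150   # assumed desktop draw while browsing (W)

wh_per_second = DESKTOP_W / 3600            # 150 Wh per hour -> Wh/s
wh_per_minute = wh_per_second * 60

print(f"{wh_per_second:.5f} Wh every second")   # ~0.04167
print(f"{wh_per_minute:.1f} Wh every minute")   # 2.5
print(f"one prompt = {PROMPT_WH / wh_per_second:.1f} s of desktop browsing")
print(f"two prompts = {2 * PROMPT_WH / wh_per_second:.1f} s of desktop browsing")
```

A single prompt works out to ~7.2 seconds of desktop browsing, and two prompts to ~14.4 seconds.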
I think you missed the part at the very end of the page that showed the timeline of them reporting the vulnerability back in April, being rewarded for finding the vulnerability, the vulnerability being patched in May, and being allowed to publicize the vulnerability as of today.
What does the watermark really give you?
It gives a false sense that you can tell what’s AI and what’s not, especially when anything created maliciously is likely to have that watermark removed anyway. Pandora’s box is already open on those abilities and there’s no putting the lid back.
And, even in the case of non-maliciously generated work, if you suspect that something is AI but it doesn’t have a watermark, do you start investigations into how a video/image/story(text) was created? Doesn’t that mean that any artist or author now needs to prove their innocence just because someone suspects their work had some form of AI involved at some point in the process?
It’s bad enough that they have to worry about those accusations from average people to begin with, but now you’re just giving ammunition for anyone (or any corporation) to drag them through the legal system based on what “appears” to be AI generated.
Edit: typo
Looks like it’s open source: https://github.com/google-ai-edge/gallery
Exactly this. We need a shift in how teaching/education is handled. It’s similar to when calculators became common. You can use a calculator to get the right answer, but relying on it too much, or using it incorrectly will hamper your progress.
Teachers might have to start pointing out best practices for using LLMs. Ex: Don’t ask the LLM to write the essay for you, instead, write as much as you can, then use LLMs to evaluate what you have written. Ask it to poke holes in your arguments, or make suggestions on how it can be improved.
At universities you should have access to something like a “writing center” where tutors can help you with exactly that. Although those tutors would be ideal (and better than an LLM), sometimes there’s a wait to get access to a tutor, or they’re only open for certain times.
An LLM can definitely be a useful tool in the writing process, but the students of today need to learn how to use them properly as well as how to evaluate whether an LLM response is useful or even if it should be trusted.
Teachers will need to rely on in-class only work/tests/activities to help them actually evaluate a student’s progress instead of relying on homework to do that.
Edit: This is probably the wrong community for asking this question since this community is meant for tech related news. c/asklemmy might be better or !technology@piefed.social allows for discussions on anything tech related.
Smart meters work mostly the same way meters have always worked, with one minor difference: they occasionally transmit the current value via a radio frequency. Same as always, you install them at some point where they can measure just how much water/electricity/gas is flowing into the home. The transmitting frequency will differ depending on the device and what country you live in.
If you want to see the details on how water meters measure water flow, go here: https://en.wikipedia.org/wiki/Water_metering
If you want the details on how gas meters work with all of the different sensors for that, go here: https://en.wikipedia.org/wiki/Gas_meter
If you want the details on how electricity meters work, go here and read the “Electromechanical” and “Electronic” sections: https://en.wikipedia.org/wiki/Electricity_meter#Electromechanical
Some newer meters are set up to attempt to guesstimate additional information, such as what is being used in your home. For instance, with water meters, a small flow of water for a short time can mean the faucet was turned on or a toilet was flushed. A larger flow for a longer time can mean that the bathtub is being used, or a shower, or an appliance (dishwasher/laundry), etc.
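As a purely illustrative sketch of that guesstimating, here’s a toy classifier that maps a usage event’s (flow rate, duration) to a likely fixture. The thresholds here are made up for the example; real meters use much more sophisticated signatures:

```python
def guess_fixture(liters_per_min: float, seconds: float) -> str:
    """Guess which fixture caused a water-usage event (toy thresholds)."""
    if liters_per_min < 8 and seconds < 60:
        return "faucet or toilet flush"      # small flow, short duration
    if liters_per_min >= 8 and seconds >= 300:
        return "shower, bathtub, or appliance"  # large flow, long duration
    return "unknown"

print(guess_fixture(6, 30))    # faucet or toilet flush
print(guess_fixture(10, 600))  # shower, bathtub, or appliance
```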
“environmentally damaging”
I see a lot of users on here saying this when talking about any use case for AI without actually doing any sort of comparison.
In some cases, AI absolutely uses more energy than an alternative, but you really need to break it down and it’s not a simple thing to apply to every case.
For instance: using an AI visual detection model hooked up to a camera to detect when rain droplets are hitting the windshield of a car is a completely wasteful example. In comparison, you could just use a small laser that pulses every now and then and measures the diffraction to tell when water is on the windshield. The laser uses far less electricity, and that’s how rain sensors work just fine today.
Compare that to enabling DLSS in a video game where NVIDIA uses multiple AI models to improve performance. As long as you cap the framerates, the additional frame generation, upscaling, etc. will actually conserve electricity as your hardware is no longer working as hard to process and render the graphics (especially if you’re playing on a 4k monitor).
Looking at Wikipedia’s use case: how long would it take for users to go through and create a summary or a “simple.wikipedia” page for every article? How much electricity would that use? Compare that to running everything through an LLM once and quickly generating a summary (a use case where LLMs actually excel). It’s honestly not that simple either, because we would also have to consider how often these summaries are being regenerated. Is it every time someone makes a minor edit to a page? Is it every few days/weeks after multiple edits have been made? Etc.
Then you also have to consider, even if a particular use case uses more electricity, does it actually save time? And is the time saved worth the extra cost in electricity? And how was that electricity generated anyway? Was it generated using solar, coal, gas, wind, nuclear, hydro, or geothermal means?
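To make the regeneration question concrete, here’s a hypothetical debounce policy (the function name and thresholds are my own invention, just to illustrate the trade-off): only regenerate a summary after enough edits have accumulated or enough time has passed.

```python
def should_regenerate(edits_since_summary: int, days_since_summary: float,
                      min_edits: int = 5, max_days: float = 14) -> bool:
    """Regenerate only after several edits OR a couple of weeks."""
    return edits_since_summary >= min_edits or days_since_summary >= max_days

print(should_regenerate(1, 2))    # minor edit, recent summary -> no
print(should_regenerate(6, 1))    # burst of edits -> yes
print(should_regenerate(0, 20))   # summary is stale -> yes
```

Under a policy like this, a minor typo fix wouldn’t trigger a fresh LLM run, which is where most of the electricity savings would come from.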
Edit: typo
I really like the “Feeds” feature of PieFed, it allows anyone to combine all of the different communities into a single feed. It really makes browsing something like “Technology” a lot better.
Based on the uptick in “I was banned from Reddit” posts, I’m thinking we’re getting a lot more users who were banned from Reddit for good reason. It also looks like Reddit has stepped up its ability to keep those users off its platform.
Ikidd updated their comment, it’s only a 7B model.
Last I heard from her was back around the beginning of April.
Considering that you can generate 1000 words in a single prompt to ChatGPT, the energy to do that would be about 0.3 Wh.
That’s about as much energy as a typical desktop would use in about 7 seconds while browsing the fediverse (assuming a desktop consuming energy at a rate of ~150 W).
Or, on the other end of the spectrum, if you’re browsing the fediverse on Voyager with a smartphone consuming energy at a rate of 2W, then that would be about 9 minutes of browsing the fediverse (4.5 minutes if using a regular browser app in my case since it bumped up the energy usage to ~4W).