[-] [email protected] 11 points 5 days ago

Why not create a comparison like "generating 1000 words of your fanfiction consumes as much energy as you do all day" or something easier to compare against?

Considering that you can generate 1000 words in a single prompt to ChatGPT, the energy to do that would be about 0.3Wh.

That's about as much energy as a typical desktop uses in roughly 7 seconds of browsing the fediverse (assuming the desktop draws ~150 W).

Or, on the other end of the spectrum, if you're browsing the fediverse on Voyager with a smartphone drawing about 2 W, that works out to about 9 minutes of browsing (4.5 minutes with a regular browser app in my case, since that bumped the draw up to ~4 W).
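The arithmetic above can be sketched in a few lines (assuming the ~0.3 Wh-per-prompt figure and the device wattages given; the function name is just for illustration):

```python
# Back-of-the-envelope check of the energy comparison above.
PROMPT_WH = 0.3  # assumed energy per 1000-word ChatGPT prompt, in watt-hours

def equivalent_seconds(device_watts: float) -> float:
    """How many seconds a device drawing `device_watts` runs on PROMPT_WH of energy."""
    return PROMPT_WH * 3600 / device_watts

print(equivalent_seconds(150))      # desktop at 150 W -> 7.2 seconds
print(equivalent_seconds(2) / 60)   # phone at 2 W    -> 9.0 minutes
print(equivalent_seconds(4) / 60)   # phone at 4 W    -> 4.5 minutes
```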

[-] [email protected] 7 points 6 days ago

I agree with your comment except that I think you've got the privacy part wrong there. Any company can come in and scrape all the information they want, including upvote and downvote info.

In addition, if you try to delete a comment, it's very likely that it won't be deleted by every instance that federates with yours.

[-] [email protected] 4 points 6 days ago

I think you mean that you can choose a project that doesn't have an "algorithm" (in the sense that you're conveying).

Anyone can create a project with ActivityPub that has an algorithm for feeding content to you.

[-] [email protected] 33 points 1 week ago

My question simply relates to whether I can support the software development without supporting lemmy.ml.

No. You can't support Lemmy without supporting lemmy.ml because the developers use lemmy.ml for testing. They have not created a means for users to separate out their donations from one or the other.

That's why others are suggesting you should just support a different but similar fediverse project like PieFed or Mbin instead.

[-] [email protected] 22 points 2 weeks ago

I think you missed the timeline at the very end of the page: they reported the vulnerability back in April, were rewarded for finding it, the vulnerability was patched in May, and they were allowed to publicize it as of today.

[-] [email protected] 27 points 1 month ago* (last edited 1 month ago)

That's not AI; that's just a bad Photoshop/InDesign job where they layered the text underneath the image of the coupon with the protein bottles. The image has a white background; if it had a transparent background, there would have been no issue.

Edit: Looking a little closer, it looks more like some barely off-white arrow was at the top of the coupon image.

Edit 2: If you're talking about the text that looks like a prompt, it could be a prompt, or it could be a description of what they wanted someone to put on the poster. The image itself doesn't look like AI, considering those products actually exist and AI usually doesn't do well on small text when you zoom in on a picture.

Edit 4: Tap here for images of the items used for the coupon:

[-] [email protected] 24 points 1 month ago

Highlighting the main issue here (from the article):

“This means that it is possible for the WhatsApp server to add new members to a group,” Martin R. Albrecht, a researcher at King's College in London, wrote in an email. “A correct client—like the official clients—will display this change but will not prevent it. Thus, any group chat that does not verify who has been added to the chat can potentially have their messages read.”

[-] [email protected] 30 points 1 month ago

Unless their company has enterprise M365 accounts and Copilot is part of the plan.

Or if they're running a local model.

[-] [email protected] 54 points 1 month ago

Looks like they're finally cleaning up a bunch of junk.

In July 2024, Google announced it would raise the minimum quality requirements for apps, which may have impacted the number of available Play Store app listings.

Instead of only banning broken apps that crashed, wouldn't install, or wouldn't run properly, the company said it would begin banning apps that demonstrated "limited functionality and content." That included static apps without app-specific features, such as text-only apps or PDF file apps. It also included apps that provided little content, like those that only offered a single wallpaper. Additionally, Google banned apps that were designed to do nothing or have no function, which may have been tests or other abandoned developer efforts.

[-] [email protected] 27 points 2 months ago

It's probably better this way.

Otherwise you end up with people accusing movies of using AI when they didn't.

And then there's the question of how you decide where to draw the line for what's considered AI as well as how much of it was used to help with the end result.

Did you use AI for storyboarding, but no diffusion tools were used in the end product?

Did one of the writers use ChatGPT to brainstorm some ideas, but nothing was copied directly from it?

Did they use a speech-to-text model to help create the subtitles in different languages, but then double-check all the work with translators?

Etc.

[-] [email protected] 31 points 2 months ago

In the U.S. they may even offer things like State Park passes.

2 points, submitted 2 months ago by [email protected] to c/[email protected]

Video that goes over some of the issues today with AI generated content and some attempts to prove something that's real.

[-] [email protected] 56 points 2 months ago

Given the trend of recent posts in here, I'm going to guess that the word "toxic" triggered the automod.


Sandbar_Trekker
joined 7 months ago