[-] hrrrngh@awful.systems 11 points 5 months ago* (last edited 5 months ago)

https://superuser.com/questions/1930445/can-i-delete-the-chromes-optguideondevicemodel-safely-its-taking-up-4gb/1930446#1930446

Can I delete the Chrome's OptGuideOnDeviceModel safely? It's taking up 4GB

. . .

I also found mentions of a bunch of flags you can potentially disable to turn the whole feature off, e.g. chrome://flags/#optimization-guide-on-device-model - but I've seen at least 5 other ones mentioned in several sources, with various people claiming for each that they don't work . . .

Now Chrome can hog your VRAM too. Yay

Don't worry if you only have 8GB and need the other half for anything, Chrome will probably relinquish it. This is very intelligent, as all the browser has to do is simply load another 4GB file from disk the next time you do anything.

[-] hrrrngh@awful.systems 11 points 5 months ago* (last edited 5 months ago)

I've seen the same thing and it's reassuring lol.

I lurk on subreddit drama and curated tumblr, and I feel like the common reaction to LW has gone from a few negative comments and the occasional "really? that's crazy" five years ago to being much more aware. Years ago you'd see maybe one person familiar with them, a couple of people responding who were totally out of the loop, and maybe one crazy rationalist chiming in to nuh-uh them. Now, anything rationalist-related usually has a bunch of people bringing up the Harry Potter or acausal robot god stuff right away.

I use the tag feature in RES a lot to keep track of people I like hearing from. Years ago I mostly saw the same names when LW stuff came up, but now there's always a ton of people I've never seen before who are familiar with it.

It's also reassuring because I really don't want to be the person to say anything first and it's easier to chime in on a discussion someone else has already started.

[-] hrrrngh@awful.systems 10 points 1 year ago

I don't think the main concern is the license. I'm more worried about the lack of open governance and Redis prioritizing their own functionality at the expense of others. An example is client-side caching in redis-py, https://github.com/redis/redis-py/blob/3d45064bb5d0b60d0d33360edff2697297303130/redis/connection.py#L792. I've tested it and it works just fine on Valkey 7.2, but there is a gate that checks whether the server is Redis and throws an exception if it isn't. I think this is the behavior that might spread.

Jesus, that's nasty
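For anyone who doesn't want to click through, the gate is roughly this shape (a hypothetical sketch, not the actual redis-py source; the `server_info` dict, field name, and function name are all my own illustration):

```python
# Hypothetical sketch of the vendor gate described above -- NOT the
# actual redis-py code. The client checks *who* the server is rather
# than *what* it supports, so a Valkey server that speaks the same
# protocol gets rejected anyway.

def enable_client_side_caching(server_info: dict) -> str:
    # Valkey 7.2 supports the underlying CLIENT TRACKING feature just
    # fine, but reports a different vendor identity (field assumed here).
    name = server_info.get("server_name", "redis")
    if name != "redis":
        # The gate: reject on identity, not capability.
        raise ConnectionError(
            f"client-side caching requires Redis, got {name!r}"
        )
    return "CLIENT TRACKING ON"
```

A capability check ("does CLIENT TRACKING work?") would let Valkey through; identity checks like this are what lock it out.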

[-] hrrrngh@awful.systems 11 points 1 year ago

I love the word cloud on the side. What is 6G doing there

[-] hrrrngh@awful.systems 11 points 1 year ago

I know this shouldn't be surprising, but I still cannot believe people really bounce questions off LLMs like they're talking to a real person. https://ai.stackexchange.com/questions/47183/are-llms-unlikely-to-be-useful-to-generate-any-scientific-discovery

I have just read this paper: Ziwei Xu, Sanjay Jain, Mohan Kankanhalli, "Hallucination is Inevitable: An Innate Limitation of Large Language Models", submitted on 22 Jan 2024.

It says there is a ground truth ideal function that gives every possible true output/fact to any given input/question, and no matter how you train your model, there is always space for misapproximations coming from missing data to formulate, and the more complex the data, the larger the space for the model to hallucinate.

Then he immediately follows up with:

Then I started to discuss with o1. [ . . . ] It says yes.

Then I asked o1 [ . . . ], to which o1 says yes [ . . . ]. Then it says [ . . . ].

Then I asked o1 [ . . . ], to which it says yes too.

I'm not a teacher but I feel like my brain would explode if a student asked me to answer a question they arrived at after an LLM misled them on like 10 of their previous questions.
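For what it's worth, the paper's core claim is easy to restate (my own loose notation here, not theirs):

```latex
% f : X \to Y is the ground-truth function mapping every input to its true answer.
% h : X \to Y is any computable LLM, however it was trained.
% The paper argues that for every such h,
\exists\, x \in X \;:\; h(x) \neq f(x)
% i.e. the set of inputs where h hallucinates is never empty,
% and it grows with the complexity of the ground truth f.
```

Which makes it extra funny to then go ask the model itself to confirm.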

[-] hrrrngh@awful.systems 11 points 2 years ago

Every time I see these crypto games, I can only think of the online uwu pit bosses in that one Folding Ideas video, driving workers to slave away for less than minimum wage in the Philippines. Just permanently burned-in mental imagery.

This is a cool channel by the way. I'm stealing this description from someone in the YouTube comments, but he has a creative "glitchcore SFM aesthetic" that I kind of like and his speaking cadence reminds me of Primer. His style works strangely well for ripping into NFT games. This video felt like looking into a funhouse mirror dimension where every genre of game is somehow even worse than the worst games I've ever seen.

Also, the Dr. Disrespect-Chewbacca mask guy doing NFT lootbox openings is something I can't unsee. It's honestly so much funnier that he's still doing it after the Dr. Disrespect sexting-minors scandal.

[-] hrrrngh@awful.systems 11 points 2 years ago* (last edited 2 years ago)

This is hilarious, thank you for digging this up. I love how they're just co-opting completely wrong words (wow, why does that sound familiar?) like "anti" and "fundamentalist orthodox" to describe the people they don't like

There's another one called "ArtistHate", but I was surprised it's actually a pro-artist subreddit.


If you want another fun read, check out Adobe's 'Stock Contributors' AI artist forum. I found it by accident, and it's full of people struggling so, so hard to understand why their puppy photos with missing limbs or physically impossible landscapes aren't accepted. Any time someone "asks for clarification" on the submission rules, I swear you can tell what the issue is at a glance, but they're stumped over it.

Like, what is this person even trying to do??? Why do people feel the need to regurgitate responses from ChatGPT for no reason. Why are they even submitting AI art to Adobe Stock at all. Are they even getting paid? These are the same people who like those photos of snowboarding babies and have to reply "thank you" to every post on their Facebook feed because they think it was personally sent to them.

[-] hrrrngh@awful.systems 11 points 2 years ago

This released today: https://www.ic3.gov/Media/News/2024/240709.pdf

Cool (horrifying) look into one of the active Russian bot farms and its use of generative AI

[-] hrrrngh@awful.systems 10 points 2 years ago* (last edited 2 years ago)

People are so, so, so bad at telling what's a bot and what's real. I know social media is swarming with bots, but if you're interacting with somebody who's saying anything more complicated than "P o o s i e I n B i o" it's probably not a bot. A similar thing happens in online games, too, and it's usually the excuse people use before harassing someone else

But damn, the lengths people will go to to avoid admitting they were wrong. This comment chain just keeps going, with somebody who's convinced {origin="RU"}{faith="bad"}{election_manipulation="very yes"} must be real because something something microservices: https://www.reddit.com/r/interestingasfuck/comments/1dlg8ni/russian_bot_falls_prey_to_a_prompt_iniection/l9pbmrw/. It reads like something straight off /r/programming or the orange site

Then it comes full circle with people making joke responses on Twitter imitating the first post, and then other people taking those joke responses as proof that the first one must be real: https://old.reddit.com/r/ChatGPT/comments/1dimlyl/twitter_is_already_a_gpt_hellscape/l9691c8/

This account kind of kicked up some drama too, basically for the same reason (answering an LLM prompt), but about mushroom ID instead: https://www.reddit.com/user/SeriousPerson9. I've seen people like this who use voice-to-text and run their train of thought through ChatGPT or something, like one person who's notorious on /r/gamedev. But people always assume it's some advanced autonomous bot with stochastic post delays that mimic a human's active hours when, like, it's usually just somebody copy/pasting prompts and responses.

Sorry if you contract any diseases from those links or comment chains

[-] hrrrngh@awful.systems 11 points 2 years ago

Yesterday before bed I saw some galaxy-brained takes on PKM (personal knowledge management software) from a 7-day-old account, and curiosity got the better of me. I was not disappointed. (Sadly they deleted their account after I woke up: /u/Few-Elephant-2600, if you're bored and have moderator API access)

Link

Since GPUs continuously generate large amounts of waste heat during AI training, could electric/GPU stoves utilize this unused thermal energy resource through on-demand tickets as distributed networks instead of citizens using a wasteful private electric stove? What are the scientific challenges?

Honey can you preheat the porn generator?
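For the record, the thermodynamic snark checks out. A back-of-envelope sketch (the wattages are my own rough assumptions, not anything from the post):

```python
# Rough comparison of GPU waste heat vs. an electric stove burner.
# Assumed numbers: ~450 W sustained board power for a high-end
# training GPU, ~2000 W for a single large electric burner.
gpu_watts = 450
burner_watts = 2000

gpus_per_burner = burner_watts / gpu_watts
print(f"~{gpus_per_burner:.1f} GPUs to match one burner")
```

So you'd need a four-to-five-GPU training rig plumbed into your kitchen before the "distributed stove network" even matches the appliance it's supposed to replace.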

Maybe you could pair it with this accursed AI of Things Smart Oven. Fun quotes:

“Users aren’t aware of any of the oven’s learning processes,”

Ovens that learn from one another

Finally, I can experience Windows progress bars when baking potatoes:

The predictive model updates the remaining baking time every 30 seconds

[-] hrrrngh@awful.systems 12 points 2 years ago

Unsure if this meets even the lowest bar for this thread but I was jumpscared by Aella while browsing Reddit

