planish

joined 1 year ago
[–] [email protected] 1 points 3 months ago

The PITA fix only works if you can dig up a CD drive to put it in, though. Most people don't have one and are SOL.

[–] [email protected] 37 points 3 months ago (6 children)

That's what the BSOD is. It tries to bring the system back to a nice safe freshly-booted state where e.g. the fans are running and the GPU is not happily drawing several kilowatts and trying to catch fire.

[–] [email protected] 4 points 3 months ago (1 children)

Foreign to who?

[–] [email protected] 1 points 5 months ago (1 children)

I remember it as: Firefox was fast enough, but Chrome was shipping a weirdly quick JS engine and trying to convince people to put more stuff into JS, because on Chrome that would be feasible. Nowadays, if you go out without your turbo-JIT hand-optimized JS engine, everyone laughs at you, and it's Chrome's fault.

[–] [email protected] 1 points 5 months ago

It shouldn't be hard to implement the APIs, the problem would be sourcing the models to sit behind them. You can't just steal them off Windows or you will have Copyright Problems presumably. I guess you could try and train clones on Windows against the Windows model results?

[–] [email protected] 0 points 5 months ago

KDE and Gnome haven't been stable or usable for the past 20 years, but will become so this year for some reason?

[–] [email protected] 7 points 5 months ago (3 children)

So Copilot Runtime is... Windows bundling a bunch of models like an OCR model and an image generation model, and then giving your program an API to call them.

[–] [email protected] 26 points 5 months ago

Do they "give high rankings" to CloudFlare sites because they just boost up whoever is behind CloudFlare, or because the sites happen to be good search hits, maybe ones that load quickly, and they don't go in and penalize them for... telling CloudFlare that you would like them to send you the page when you go to the site?

Counting the number of times results for different links are clicked is expected search engine behavior. Recording what search strings are sent from results pages for what other search strings is also probably fine, and because of the way forms and referrers work (the URL of the page you searched from has the old query in it) the page's query will be sent in the referrer by all browsers by default even if the site neither wanted it nor intends to record it. Recording what text is highlighted is weird, but probably not a genuine threat.

The remote favicon fetch design in their browser app was fixed like 4 years ago.

The "accusation" of "fingerprinting" was along the lines of "their site called a canvas function oh no". It's not "fingerprinting" every time someone tries to use a canvas tag.

What exactly is "all data available in my session" when I click on an ad? Is it basically the stuff a site I go to can see anyway? Sounds like it's nothing exciting or some exciting pieces of data would be listed.

This analysis misses the important point that none of this stuff is getting cross-linked to user identities or profiles. The problem with Google isn't that they examine how their search results pages are interacted with in general or that they count Linux users, it's that they keep a log of what everyone individually is searching, specifically. Not doing that sounds "anonymous" to me, even if it isn't Tor-strength anonymity that's resistant to wiretaps.

There's an important difference between "we're trying to not do surveillance capitalism but as a centralized service data still comes to our servers to actually do the service, and we don't boycott all of CloudFlare, AWS, Microsoft, Verizon, and Yahoo", as opposed to "we're building shadow profiles of everyone for us and our 1,437 partners". And I feel like you shouldn't take privacy advice from someone who hosts it unencrypted.

[–] [email protected] 5 points 5 months ago* (last edited 5 months ago) (1 children)

It sounds like nobody actually understood what you want.

You have a non-ZFS boot drive, and a big ZFS pool, and you want to save an image of the boot drive to the pool, as a backup for the boot drive.

I guess you don't want to image the drive while booted off it, because that could produce an image that isn't fully self-consistent. So then the problem is getting at the pool from something other than the system you have.

I think what you need to do is find something else you can boot that supports ZFS. I think the Ubuntu live images will do it. If not, you can try something like re-installing the setup you have, but onto a USB drive.

Then you have to boot to that and `zpool import` your pool. ZFS is pretty smart, so it should auto-detect the pool structure and where it wants to be mounted, and you can mount it. Don't do a ZFS feature upgrade on the pool, though, or the other system might not understand it. It's also possible your live kernel's ZFS is too old to understand the features your pool uses, in which case you'd need to find a newer image.

Then once the pool is mounted you should be able to dd your boot drive block device to a file on the pool.
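The whole live-image route sketched out (pool and device names here are made up; double-check yours with `zpool status` and `lsblk` before running anything, since `dd` is unforgiving):

```shell
# From the live environment, with ZFS tools available:
zpool import                      # scan attached disks, list importable pools
zpool import -f tank              # import the pool ("tank" is an example name)
zfs list                          # confirm datasets mounted where expected

# Image the (unmounted) boot drive into the pool:
dd if=/dev/sda of=/tank/backups/bootdrive.img bs=1M status=progress
sync
zpool export tank                 # clean export before rebooting the real system
```

The `-f` is often needed because the pool was last imported by a different host and ZFS will refuse to touch it otherwise.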

If you can't get this to work, you can try using a non-ZFS-speaking live Linux and dding your image to somewhere on the network big enough to hold it, which you may or may not have, and then booting the system and copying back from there to the pool.

[–] [email protected] 4 points 5 months ago

Well you can start by trying on purpose to make an SCP wiki level horror scene. Then the bugs are features!

[–] [email protected] 2 points 5 months ago

It's still terrible though! Turn it 45 degrees why don'tcha!

69 points · [POV] You are orb (assets.untappd.com)

6 points · Machine Yearning (www.linusakesson.net)
 

Obviously it wouldn't be allowed in this community, but how feasible would it be to make a community on a friendly instance and start shipping data through it somehow? If it works for NNTP it ought to work for ActivityPub, right?

Potential problems:

  1. Community full of base64'd posts immediately gets blocked by everybody's home instance.
  2. Community host immediately gets sued for handing out data it might not have a license for.
  3. Other instances that carry the community immediately get sued (see #2).
  4. Community host is in the US and follows DMCA and deletes all the posts that are complained about.

Maybe it would work as a way to distribute NZBs or other things that are useful but not themselves copyrightable? But the problem with NZBs is you have to keep them away from the people who want to send DMCAs to the Usenet providers about them, or they stop working. So shipping them around in a basically public protocol like ActivityPub would not be good for them.
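Setting the legal problems aside, the mechanics would be trivial; a sketch of chunking arbitrary bytes into post-sized base64 payloads and reassembling them (the chunk size is an arbitrary assumption, not any instance's real post limit):

```python
import base64

POST_CHARS = 48_000  # arbitrary cap on post body size

def to_posts(data: bytes, chunk: int = POST_CHARS) -> list[str]:
    """Split raw bytes into numbered, post-sized base64 strings."""
    b64 = base64.b64encode(data).decode("ascii")
    parts = [b64[i:i + chunk] for i in range(0, len(b64), chunk)]
    # Prefix with "index/total" so posts can be reordered on the way out.
    return [f"{n + 1}/{len(parts)}:{p}" for n, p in enumerate(parts)]

def from_posts(posts: list[str]) -> bytes:
    """Reassemble posts (fetched in any order) back into the original bytes."""
    ordered = sorted(posts, key=lambda s: int(s.split("/", 1)[0]))
    b64 = "".join(p.split(":", 1)[1] for p in ordered)
    return base64.b64decode(b64)
```

Which is also exactly why problem #1 happens: a community full of these is unmistakable at a glance.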

 

Steps to reproduce:

  1. Start a Node project that uses at least five direct dependencies.
  2. Leave it alone for three months.
  3. Come back and try to install it.

Something in the dependency tree will yell at you that it is deprecated or discontinued. That thing will not be one of your direct dependencies.

NPM will tell you that you have at least one security vulnerability. At least one of the vulnerabilities will be impossible to trigger in your particular application, and at least one will not be fixable by updating the versions of your dependencies.
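For step 1, a minimal package.json along these lines is enough (the package names and versions are just an example of "five direct dependencies", not a recommendation):

```json
{
  "name": "bitrot-demo",
  "version": "1.0.0",
  "dependencies": {
    "express": "^4.18.0",
    "axios": "^1.6.0",
    "lodash": "^4.17.21",
    "chalk": "^5.3.0",
    "dotenv": "^16.3.0"
  }
}
```

Then `npm install` now, and `npm install` followed by `npm audit` again in three months.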

(I am sure I exaggerate, but not by much!)

Why is it like this? How many hours per week does this running-to-stay-in-place cost the average Node project? How many hours per week of developer time is the minimum viable Node project actually supposed to have available?

 

Through witchcraft and dark magic, Zig contains a C standard library and cross compiler for every architecture in 45 megabytes.
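The trick in action looks something like this (the target triples are examples; `zig targets` lists the rest):

```shell
# Compile the same hello.c for three OS/arch/libc combos,
# with no separately installed toolchains or sysroots:
zig cc -target x86_64-linux-musl  -o hello-linux hello.c
zig cc -target aarch64-macos      -o hello-macos hello.c
zig cc -target x86_64-windows-gnu -o hello.exe   hello.c
```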

 

Julia Evans has done it again.

cross-posted from: https://derp.foo/post/88689

There is a discussion on Hacker News, but feel free to comment here as well.

 

Doesn't seem like that acronym is used for anything important at the moment, I'm sure we can grab it.

 

That's right folks, I want to see you post your... old dreams.

 
 

Many AI image generators, including the big UIs for Stable Diffusion, helpfully embed metadata in the images so that you can load them up again and get all the settings you need to regenerate the image.

But Lemmy's built-in pict-rs image host, and most image hosts that resize or re-encode images, or that try to stop people from doxxing themselves via photos' embedded GPS coordinates, strip all the metadata. That's counter-productive for AI image generation, because part of the point of sharing the images is so other people can build on the prompts.

What are some good places to host images that don't strip metadata?

 

Most of the Lemmy instances seem to require an email to sign up. That's fine, except most of the places you would go to sign up for email want you to... already have an email. And often a phone number. And almost always a first name, last name, and birthday.

I promise not to do bad stuff, but I don't want that sort of information able to be publicly associated with my accounts where I write stuff, when everyone inevitably loses their databases to hackers. Pseudonymity is good, actually; on the Internet nobody knows you're a dog, etc.

Is anyone doing normal webmail registration anymore? Set username and password, receive email for free? I don't even need to send anything to sign up for accounts elsewhere.
