JRepin

joined 1 year ago
[–] [email protected] 7 points 1 week ago
 

cross-posted from: https://lemmy.ml/post/19683130

The ideologues of Silicon Valley are in model collapse.

To train an AI model, you need to give it a ton of data, and the quality of output from the model depends upon whether that data is any good. A risk AI models face, especially as AI-generated output makes up a larger share of what’s published online, is “model collapse”: the rapid degradation that results from AI models being trained on the output of AI models. Essentially, the AI is primarily talking to, and learning from, itself, and this creates a self-reinforcing cascade of bad thinking.

We’ve been watching something similar happen, in real time, with the Elon Musks, Marc Andreessens, Peter Thiels, and other chronically online Silicon Valley representatives of far-right ideology. It’s not just that they have bad values that are leading to bad politics. They also seem to be talking themselves into believing nonsense at an increasing rate. The world they seem to believe exists, and which they’re reacting and warning against, bears less and less resemblance to the actual world, and instead represents an imagined lore they’ve gotten themselves lost in.
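As a toy illustration (mine, not the article's) of why training on your own output degrades a model: fit a simple distribution to some data, sample from the fit, keep only the most "typical" samples, and repeat. The tails vanish and the distribution narrows with every generation; the names and parameters below are purely illustrative.

```python
import random
import statistics

def next_generation(data, keep=0.9, n=1000, rng=None):
    """Fit a Gaussian to `data`, sample from the fit, and keep only the
    most 'typical' samples, mimicking how generation favours
    high-probability output and loses the tails of the real distribution."""
    rng = rng or random.Random(0)
    mu, sigma = statistics.fmean(data), statistics.stdev(data)
    samples = [rng.gauss(mu, sigma) for _ in range(n)]
    samples.sort(key=lambda x: abs(x - mu))  # most probable first
    return samples[: int(keep * n)]

rng = random.Random(42)
data = [rng.gauss(0.0, 1.0) for _ in range(1000)]  # "human" data: N(0, 1)
for _ in range(10):
    data = next_generation(data, rng=rng)

# Each round of training-on-own-output narrows the distribution; after a
# few generations most of the original diversity is gone.
print(statistics.stdev(data))
```

Each generation shaves off the tails before refitting, so the fitted spread shrinks multiplicatively; the self-reinforcing cascade the article describes, in miniature.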

 

There have been a couple of mentions of Rust4Linux in the past week or two: one from Linus on the pace of engagement, and one about Wedson departing the project over non-technical concerns. This got me thinking about project phases and developer types.

 

cross-posted from: https://lemmy.ml/post/19709648

Paris Marx is joined by Mohammad Khatami and Gabi Schubiner to discuss the complicity of Google, Amazon, and Microsoft in Israel’s ongoing genocide in Gaza and how tech workers are organizing to stop it.

Mohammad Khatami and Gabi Schubiner are former Google software engineers and organizers with No Tech for Apartheid.

 

Hundreds of millions of people now interact with language models, with uses ranging from help with writing[1,2] to informing hiring decisions[3]. However, these language models are known to perpetuate systematic racial prejudices, making their judgements biased in problematic ways about groups such as African Americans[4,5,6,7]. Although previous research has focused on overt racism in language models, social scientists have argued that racism with a more subtle character has developed over time, particularly in the United States after the civil rights movement[8,9]. It is unknown whether this covert racism manifests in language models. Here, we demonstrate that language models embody covert racism in the form of dialect prejudice, exhibiting raciolinguistic stereotypes about speakers of African American English (AAE) that are more negative than any human stereotypes about African Americans ever experimentally recorded. By contrast, the language models’ overt stereotypes about African Americans are more positive. Dialect prejudice has the potential for harmful consequences: language models are more likely to suggest that speakers of AAE be assigned less-prestigious jobs, be convicted of crimes and be sentenced to death. Finally, we show that current practices of alleviating racial bias in language models, such as human preference alignment, exacerbate the discrepancy between covert and overt stereotypes, by superficially obscuring the racism that language models maintain on a deeper level. Our findings have far-reaching implications for the fair and safe use of language technology.

 

Researchers have documented an explosion of hate and misinformation on Twitter since the Tesla billionaire took over in October 2022 -- and now experts say communicating about climate science on the social network on which many of them rely is getting harder.

Policies aimed at curbing the deadly effects of climate change are accelerating, prompting a rise in what experts identify as organised resistance by opponents of climate reform.

Peter Gleick, a climate and water specialist with nearly 99,000 followers, announced on May 21 he would no longer post on the platform because it was amplifying racism and sexism.

While he is accustomed to "offensive, personal, ad hominem attacks, up to and including direct physical threats", he told AFP, "in the past few months, since the takeover and changes at Twitter, the amount, vituperativeness, and intensity of abuse has skyrocketed".

 

Screen is a full-screen window manager that multiplexes a physical terminal between several processes, typically interactive shells.

The 5.0.0 release includes the following changes since the previous release, 4.9.1:

  • Rewritten authentication mechanism
  • Add escape %T to show current tty for window
  • Add escape %O to show number of currently open windows
  • Use wcwidth() instead of UTF-8 hard-coded tables
  • New commands:
    • auth [on|off] Provides password protection
    • status [top|up|down|bottom] [left|right] The status window is in the bottom-left corner by default; this command can move status messages to any corner of the screen.
    • truecolor [on|off]
    • multiinput Send input to multiple windows at the same time
  • Removed commands:
    • time
    • debug
    • password
    • maxwin
    • nethack
  • Fixes:
    • Screen buffers ESC keypresses indefinitely
    • Crashes after passing through a zmodem transfer
    • Fix double -U issue
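The new commands above could be tried from a ~/.screenrc; this is a minimal sketch using only the command names from the release notes (the exact argument syntax is an assumption; check the screen(1) man page for the precise forms):

```
# ~/.screenrc sketch for Screen 5.0.0; command names from the release
# notes above, exact argument syntax may differ
truecolor on          # enable 24-bit colour support
status bottom right   # move status messages from the default bottom-left
auth on               # require a password for attaching
```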
 

Every artist, performer and creator on Patreon is about to get screwed out of 30% of their gross revenue, which will be diverted to Apple, the most valuable company on the planet. Apple contributes nothing to their work, but it will get to steal a third of their wages. How is this possible? Enshittification.

[–] [email protected] 4 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

It's the heavy graphics they use: the site appears to rely on WebGL, which is disabled in LibreWolf because it can easily be used to fingerprint a user. It would be great if they fell back to a simple static image or something like that when WebGL is not supported. Well, it would be great in general, not just for privacy reasons.

[–] [email protected] 4 points 3 weeks ago* (last edited 3 weeks ago)

From my experience AMD drivers are pretty close; I'd even say slightly better on GNU/Linux, and definitely more stable and consistent. Nvidia, yeah, they are bad at supporting GNU/Linux: they've improved a lot through the years but are still not there. Intel is not exactly an option for gaming, at least not the integrated GPUs I have used so far, but it is still better than on Windows, much as in the AMD case.

P.S. Another great thing with libre/opensource GNU/Linux drivers: when you report a bug against the Mesa3D drivers, it gets fixed quite quickly, especially when you can provide a backtrace and/or a Vulkan/OpenGL API trace. Doing a bisect of source-code commits and identifying the commit that introduced a regression also helps a great deal. Good luck doing the same with closed/Windows drivers: you can wait for years with no fix.

[–] [email protected] 4 points 3 weeks ago

It's totally messed up in general and has been for a long time. They try to hack it for the new CPU model and stab you in the back on older CPUs; I'd say it is FUBAR.

[–] [email protected] 35 points 3 weeks ago (1 children)

Or they just found out that the Windows process scheduler is still broken beyond repair. If you look at the benchmarks on GNU/Linux, the performance is all there; for example, see the Phoronix benchmarks.

[–] [email protected] 3 points 1 month ago

Yeah, I am so glad I switched to GNU/Linux years ago. I have to keep supporting closed OSes at work with our software, and with each release they just get worse and worse, while GNU/Linux keeps getting better.

[–] [email protected] 2 points 1 month ago (1 children)

Even quicker is "#X"

[–] [email protected] 2 points 1 month ago* (last edited 1 month ago)

Yup, it still exists. It is also available in KDE Help Center, and you can quickly jump to a man page by typing "#man" into KRunner.

[–] [email protected] 8 points 2 months ago

Yup, I agree: openSUSE Tumbleweed with the KDE Plasma desktop is just awesome, my favourite distro at the moment.
