20
submitted 2 months ago by [email protected] to c/[email protected]

cross-posted from: https://lemmy.ml/post/30013197

Significance

As AI tools become increasingly prevalent in workplaces, understanding the social dynamics of AI adoption is crucial. Through four experiments with over 4,400 participants, we reveal a social penalty for AI use: Individuals who use AI tools face negative judgments about their competence and motivation from others. These judgments manifest as both anticipated and actual social penalties, creating a paradox where productivity-enhancing AI tools can simultaneously improve performance and damage one’s professional reputation. Our findings identify a potential barrier to AI adoption and highlight how social perceptions may reduce the acceptance of helpful technologies in the workplace.

Abstract

Despite the rapid proliferation of AI tools, we know little about how people who use them are perceived by others. Drawing on theories of attribution and impression management, we propose that people believe they will be evaluated negatively by others for using AI tools and that this belief is justified. We examine these predictions in four preregistered experiments (N = 4,439) and find that people who use AI at work anticipate and receive negative evaluations regarding their competence and motivation. Further, we find evidence that these social evaluations affect assessments of job candidates. Our findings reveal a dilemma for people considering adopting AI tools: Although AI can enhance productivity, its use carries social costs.

14
submitted 2 months ago by [email protected] to c/[email protected]

cross-posted from: https://lemmy.ml/post/30013147

31
submitted 2 months ago by [email protected] to c/[email protected]

39
submitted 2 months ago by [email protected] to c/[email protected]
49
submitted 2 months ago by [email protected] to c/[email protected]
64
submitted 2 months ago by [email protected] to c/[email protected]
214
submitted 2 months ago by [email protected] to c/[email protected]
3
submitted 2 months ago by [email protected] to c/[email protected]

Was working fine this morning for me. No updates.

But now it keeps crashing, and my phone shows popups saying "something went wrong with summit". Clearing the cache and force-killing the app didn't help.

24
submitted 2 months ago by [email protected] to c/[email protected]

discord is a black hole for information

Traditional reasoning says you should prefer open forums like lemmy that are available and searchable on the open web. After all, you're posting to help people, and that helps people the most. The platform (like reddit) may profit off of it, but that's fine; they're providing the platform for you to post. Fair deal.

Plus, people coming for high-quality information help the community and the topic in return: you attract other high-quality contributors, more people use and partake in the topic you're discussing, and the platform often improves with the revenue. It's not perfect, but it worked.

AI scrapers break all that. The company profiting is the AI company, and it gives nothing back. The model just holds all the information in its weights; it doesn't drive people to the source. Even the platform doesn't benefit from bot scraping. Adding high-quality data may improve the model on that topic, and thus push people to engage with that topic more, but not by much. Because of how AIs are trained, you need some high-quality data, but what matters far more, especially for lesser-known topics, is the sheer amount of data.

So as more of the world moves to AI models, I don't really feel like posting on public forums as much, helping the AI companies get richer, even if I do benefit from AI myself.

14
submitted 3 months ago by [email protected] to c/[email protected]
85
submitted 3 months ago by [email protected] to c/[email protected]
[-] [email protected] 153 points 3 months ago

Using data from Nasa’s James Webb Space Telescope, researchers at Kansas State University in the US discovered that the majority of the galaxies were rotating in the same direction.

This goes against previous assumptions that our universe is isotropic, meaning there should be an equal number of galaxies rotating clockwise and anticlockwise.

“It is not clear what causes this to happen, but there are two primary possible explanations,” said Lior Shamir, associate professor of computer science at Kansas State University.

“One explanation is that the universe was born rotating. That explanation agrees with theories such as black hole cosmology, which postulates that the entire universe is the interior of a black hole.”

yeah, it's just the most headline-grabbing possibility

-6
submitted 4 months ago* (last edited 4 months ago) by [email protected] to c/[email protected]

Other platforms too, but I'm on lemmy. I'm mainly talking about LLMs in this post.

First, let me acknowledge that AI is not perfect; it has limitations, e.g.:

  • tendency to hallucinate responses instead of refusing or saying it doesn't know
  • different models/model sizes with varying capabilities
  • lack of knowledge of recent topics without explicitly searching for them
  • tendency to be patternistic/repetitive
  • inability to hold on to too much context at a time, etc.

The following are also true:

  • People often overhype LLMs without understanding their limitations
  • Many of those people are the ones with money
  • The term "AI" has been used to label everything under the sun that contains an algorithm of some sort
  • Banana poopy banana (just to make sure ppl are reading this)
  • There have been a number of companies that overpromised on AI, often using humans as a "temporary" solution until they figured out the AI, which they never did (hence the gag that "AI" stands for "An Indian")

But I really don't think they're nearly as bad as most lemmy users make them out to be. I was going to respond to all the takes, but there are so many that I'll just make some general points:

  • SOTA (State of the Art) models match or beat most humans, besides experts, in most measurable fields
  • I personally find AI is better than me in most fields except the ones I know well. So maybe it's only 80-90% there, but it's there in like every single field, whereas I am in like 1-2
  • LLMs can also do all this in like 100 languages. You and I can do it in like... 1, with limited performance in a couple of others
  • Companies often use smaller/cheaper models in various products (e.g. google search), which are understandably much worse. People then use these to conclude that all AI sucks
  • LLMs aren't just memorizing their training data. They can reason, as recent reasoning models show more clearly. Also, we now have near-frontier models that are like 32B parameters, or around 21 GB in size. You cannot fit the entire internet in 21 GB. There is clearly higher-level synthesizing going on
  • People often seize on superficial questions like the strawberry question (counting the r's in "strawberry", essentially an LLM blind spot caused by tokenization) to claim LLMs are dumb
  • In the past few years, researchers have had to come up with countless newer, harder benchmarks because LLMs kept blowing through previous ones (partial list here: https://r0bk.github.io/killedbyllm/)
  • People and AI are often not compared fairly; for instance, with code, people usually compare a human getting feedback from a compiler, working iteratively and debugging for hours, to an LLM doing it in one go with no feedback, beyond maybe a couple of back-and-forths in a chat

Also, I did say willfully ignorant. This is because you can go and try most models for yourself right now. There are also endless benchmarks constantly being published showing how well they're doing. Benchmarks aren't perfect and are increasingly being gamed, but they're still decent.

[-] [email protected] 183 points 4 months ago

on onlyfans, like on most platforms, the vast majority of people make little to nothing

[-] [email protected] 136 points 5 months ago

idk, I feel like we could take a much better approach to this. Instead of just mocking them, maybe point out that they can't trust where they got their idea of who trump was, and suggest they stop supporting him?

[-] [email protected] 162 points 5 months ago

90% of b2b software. They literally charge thousands of dollars while giving you the worst piece of shit software you've ever used.

[-] [email protected] 170 points 1 year ago* (last edited 1 year ago)

Instead of algorithms, noplace leverages AI technology to drive suggestions and curation.

Instead of algorithms, noplace leverages algorithms to drive suggestions and curation

[-] [email protected] 119 points 1 year ago

Original post is a much better read than this blogspam

[-] [email protected] 122 points 1 year ago

Here's the chain for lemmy

[-] [email protected] 206 points 1 year ago

Unfortunately, if twitter has shown us anything, it's that social networks are ridiculously hard to destroy, even when actively self-sabotaging.

[-] [email protected] 118 points 2 years ago

physics majors when they're asked to apply their knowledge (they've never been outside of the lab)

[-] [email protected] 115 points 2 years ago

Highly agree with the first point: companies should not be able to hold exclusive rights to any product they no longer support.

Abandonware and unsold products are among the few cases in which I consider piracy ethical.

[-] [email protected] 106 points 2 years ago

Checked the account, here's a clear indication:

[-] [email protected] 120 points 2 years ago* (last edited 2 years ago)

Someone's already made a subreddit to coordinate using it for protest https://www.reddit.com/r/PlaceAPI/ (likely more than one, this is just the one I saw)

edit: discord link for coordination https://discord.gg/KeH5PzUN

