this post was submitted on 02 Sep 2024
69 points (100.0% liked)

TechTakes


We also want to be clear in our belief that the categorical condemnation of Artificial Intelligence has classist and ableist undertones, and that questions around the use of AI tie to questions around privilege."

  • Classism. Not all writers have the financial ability to hire humans to help at certain phases of their writing. For some writers, the decision to use AI is a practical, not an ideological, one. The financial ability to engage a human for feedback and review assumes a level of privilege that not all community members possess.
  • Ableism. Not all brains have the same abilities, and not all writers function at the same level of education or proficiency in the language in which they are writing. Some brains and ability levels require outside help or accommodations to achieve certain goals. The notion that all writers "should" be able to perform certain functions independently is a position that we disagree with wholeheartedly. There is a wealth of reasons why individuals can't "see" the issues in their writing without help.
  • General Access Issues. All of these considerations exist within a larger system in which writers don't always have equal access to resources along the chain. For example, underrepresented minorities are less likely to be offered traditional publishing contracts, which places some, by default, into the indie author space, which inequitably creates upfront cost burdens that authors who do not suffer from systemic discrimination may not have to incur.

Presented without comment.

[–] [email protected] 46 points 2 months ago (6 children)

Doesn't even mention the one use case I have a moderate amount of respect for, automatically generating image descriptions for blind people.

And even those should always be labeled, since AI is categorically inferior to intentional communication.

They seem focused on the use case "I don't have the ability to communicate with intention, but I want to pretend I do."

[–] [email protected] -4 points 2 months ago (4 children)

AI and ML (and I'm not talking about LLMs specifically, but about those techniques in general) have many actual uses, often where the need is "you have to make a decision quickly, and there's a high tolerance for error or imprecision".

Your example is a perfect example: it's not as good as a human-generated caption, it can lack context, or be wrong. But it's better than the alternative of having nothing.

[–] [email protected] 12 points 2 months ago (1 children)

I don't accept that a wrong caption is better than no caption at all. I'm concerned that when you say "high tolerance for error", what you really mean is that it's something unimportant.

[–] [email protected] -1 points 2 months ago

No, what I'm saying is that if I had vision issues and had to use a screen reader to use my computer, if I had to choose between

  • the person who did that website didn't think about accessibility, so sucks to be you, you're not gonna know what's on those pictures
  • there's no alt text, but your screen reader tries to describe the picture; you know it's not perfect, but at least you probably know it's not a dog.

I'd take the latter. Obviously the true solution would be to make sure everyone thinks about accessibility, but come on... Even here it's not always the case and the fediverse is the place where I've seen the most focus on accessibility.
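The fallback described above can be sketched in a few lines: prefer author-written alt text, and only when it's missing ask a captioning model, clearly labeling the result as auto-generated. The `caption_model` parameter here is a hypothetical stand-in for whatever image-captioning model a screen reader might call; the stub below is just for illustration.

```python
def describe_image(alt_text, image_bytes, caption_model):
    """Return a description for a screen reader to announce.

    Author-provided alt text always wins; a machine caption is only a
    labeled fallback, never presented as intentional communication.
    """
    if alt_text:
        return alt_text
    # No alt attribute: fall back to the model, clearly labeled so the
    # listener knows it may be wrong.
    guess = caption_model(image_bytes)
    return f"Auto-generated description (may be wrong): {guess}"


# Hypothetical stub standing in for a real captioning model.
def stub_model(image_bytes):
    return "a brown dog on a sofa"
```

For example, `describe_image("", b"...", stub_model)` would announce the labeled machine guess, while any non-empty alt text is passed through untouched.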

Another use I'd see is preprocessing (where a human still does the actual work) to make some tasks a bit easier, quicker, and less repetitive.
