[–] [email protected] 2 points 1 month ago

Oh damn, you're right, my bad. I got a new notification but didn't check the date of the comment. Sorry about that.

[–] [email protected] 1 point 1 month ago (2 children)

That's a 1-month-old thread, my man :P

But it sounds interesting. I haven't heard of Dysrationalia before. A quick cursory search shows that it's a term coined mostly by a single psychologist in his book. I've been able to find only one study that used the term, and it found that "different aspects of rational thought (i.e. rational thinking abilities and cognitive styles) and self-control, but not intelligence, significantly predicted the endorsement of epistemically suspect beliefs."

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6396694/

All in all, this seems to me more like a niche concept used by a handful of psychologists than something widely accepted in the field. Do you have anything I could read to familiarize myself with it further? Preferably something evidence-based, because we can ponder non-verifiable explanations all day and not get anywhere.

[–] [email protected] 2 points 2 months ago (5 children)

The author's suggesting that smart people are more likely to fall for cons when they try to dissect them but can't find the specific method being used, supposedly because they consider themselves to be infallible.

I disagree with this take. I don't see how that thought process is exclusive to people who are, or consider themselves to be, smart. I think the author is tying himself in knots to argue that smart people are actually the dumb ones, likely in preparation to drop an opinion that most experts in the field would disagree with.

[–] [email protected] 22 points 3 months ago

The paracausal tarrasque seems like a genuinely interesting concept. Gives me False Hydra vibes.

[–] [email protected] 7 points 4 months ago (2 children)

Both threads appeared on my feed near one another, and I figured it was on topic given that the other one is directly referenced in the main post here. If OP can reference another post to complain about hate, I think it's fair game for me to truthfully add that their conduct in that very same thread was also excessively hateful. How else are we supposed to discuss the main subject of this post at all?

[–] [email protected] 7 points 4 months ago (1 children)

I have read the blog post that you've linked, and it's full of exaggeration.

The developer rejected a PR that changed the documentation to use one instance of they/them instead of he/him, responded "This project is not an appropriate arena to advertise your personal politics.", and then promptly got brigaded. Similar PRs kept appearing and getting closed from time to time.

A satirical PR was then opened and closed for being spam. Despite the blogger's commentary, it's abundantly clear that the developer didn't call the person opening the PR a "spam" (what would that even mean?).

The project's code of conduct was also modified, probably due to the brigading, to essentially include the aforementioned "not an appropriate arena to advertise your personal politics or religious beliefs" line. I don't know what part of this constitutes "white supremacist" language for the blogger.

From what I can tell, this is all they've done. No racism, no sexism, no white supremacy. Would it be better if they just accepted the PR? Yes. Does it make the developer part of one of the worst groups of people that ever existed? No.

[–] [email protected] 12 points 4 months ago (3 children)

When I created an account here, I thought Beehaw was specifically a platform where throwing vitriol unnecessarily is discouraged.

A non-native speaker being stubborn about not using "they/them" in gender-neutral contexts (especially when most if not all of these weren't even about people) is not enough to label them an incel, a transphobe, or a racist.

Intentionally mischaracterizing other human beings and calling them derogatory names that they don't deserve is, in my opinion, against the spirit of the platform.

[–] [email protected] 13 points 4 months ago (9 children)

> The most recent example I’ve noticed is around the stuff with the Ladybird devs being weird about being asked to use inclusive pronouns, but it seems like a pattern.

You mean the thread where you, out of nowhere, called the maintainers "incels, transphobes, and racists" over a single instance of them using "he/him" as a gender-neutral pronoun in documentation and refusing to change it?

[–] [email protected] 2 points 4 months ago (1 children)

Have you tried Cosmoteer? It's a pretty satisfying shipbuilder with resource and crew management, trading, and quests. Similar vibe to Reassembly.

[–] [email protected] 1 points 4 months ago (1 children)

So you're basically saying that, in your opinion, tensor operations are too simple a building block for understanding to ever appear out of them as an emergent behavior? Do you feel that way about every mathematical and logical operation that a high school student can perform? That no combination of them could ever create a system complex enough for understanding to emerge?
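To be clear about what I mean by "building block": each tensor operation is just multiplication and addition that a high schooler could do by hand, and all the complexity comes from composing enormous numbers of them. A toy numpy sketch (my own illustration, not code from any real model):

```python
# One "layer" built entirely from high-school arithmetic:
# multiply, add, and clip negatives to zero.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))  # every entry is just a number
W2 = rng.normal(size=(8, 2))

def tiny_network(x):
    h = np.maximum(x @ W1, 0.0)  # tensor operation plus a max with zero
    return h @ W2                # another tensor operation

x = rng.normal(size=(1, 4))
print(tiny_network(x))
```

An LLM is this same arithmetic stacked billions of times over, so the question is whether the combination, not the individual operation, can be complex enough.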

[–] [email protected] 2 points 4 months ago (3 children)

> I don’t think that anyone would argue that the general public can even solve a mathematical matrix, much less that they can only comprehend a stool based on going down a row in a matrix to get the mathematical similarity between a stool, a chair, a bench, a floor, and a cat.

LLMs rely on billions of precise calculations, and yet they perform poorly when tasked with calculating numbers. Just because we don't consciously calculate anything to get at the meaning of a word doesn't mean that no calculations are being done as part of our thinking process.
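For reference, here's roughly what "going down a row in a matrix" would look like; the vectors below are numbers I made up for illustration, while real embeddings are learned and have hundreds of dimensions:

```python
# Toy similarity matrix over hand-made 3-d "embeddings".
import numpy as np

words = ["stool", "chair", "bench", "floor", "cat"]
E = np.array([
    [0.9, 0.8, 0.1],   # stool
    [1.0, 0.9, 0.1],   # chair
    [0.8, 0.7, 0.2],   # bench
    [0.1, 0.9, 0.4],   # floor
    [0.1, 0.1, 0.9],   # cat
])

# Cosine similarity between every pair of rows.
unit = E / np.linalg.norm(E, axis=1, keepdims=True)
sim = unit @ unit.T

print(dict(zip(words, sim[words.index("stool")].round(2))))
# "going down the stool row": chair and bench score high, cat scores low
```

Nobody claims people do this consciously; the question is whether something computationally equivalent happens underneath.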

What's your definition of "the actual meaning of the concept represented by a word"? How would you differentiate a system that truly understands the meaning of a word from a system that merely mimics this understanding?

[–] [email protected] 2 points 4 months ago (5 children)

> technology fundamentally operates by probabilisticly stringing together the next most likely word to appear in the sentence based on the frequency said words appeared in the training data

What you're describing is a Markov chain, not an LLM.
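For comparison, a Markov chain really is "the next most likely word based on frequency in the training data", and it fits in a few lines. A toy sketch:

```python
# Minimal Markov chain text generator: pick the next word by how often
# it followed the current word in the training text.
import random
from collections import Counter, defaultdict

training = "the cat sat on the mat and the cat ran".split()

follows = defaultdict(Counter)
for prev, nxt in zip(training, training[1:]):
    follows[prev][nxt] += 1

def generate(word, length=6):
    out = [word]
    for _ in range(length):
        counter = follows.get(out[-1])
        if not counter:
            break
        candidates, weights = zip(*counter.items())
        out.append(random.choices(candidates, weights=weights)[0])
    return " ".join(out)

print(generate("the"))
```

An LLM's next-token distribution is computed from the entire context through many learned layers, not looked up in a frequency table like this.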

> So long as a model has no regard for the actual you know, meaning of the word

It does; that's like the entire point of word embeddings.
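The first thing a transformer does with your input is look up a learned vector for every token. A sketch of that step (toy sizes and made-up token ids, using PyTorch purely for illustration):

```python
# Token ids -> dense vectors; the geometry of these vectors is where
# meaning-like structure lives after training.
import torch
import torch.nn as nn

vocab_size, dim = 50_000, 768      # toy sizes
embed = nn.Embedding(vocab_size, dim)

token_ids = torch.tensor([101, 2009, 1037])  # made-up ids
vectors = embed(token_ids)
print(vectors.shape)  # torch.Size([3, 768])
```

During training, tokens that appear in similar contexts drift toward similar vectors, which is exactly a "regard for the meaning of the word" in the geometric sense.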
