bitofhope

joined 2 years ago
[–] [email protected] 12 points 6 months ago

Walter Bright soon reading his second ever newspaper: "Wow, this is a lot like the Washington Post!"

[–] [email protected] 17 points 6 months ago (1 children)

Your success as a greenhorn Silicon Valley intellectual will rest on your ability to shoehorn Girard’s name and the “mimetic theory” with which he’s associated into as many blog posts, podcast interviews, and tweets as possible.

Instructions unclear, accidentally started reading Gerard instead.

Why would I even want to learn anything from the French? As the article points out, they can't even outcompete China, a place well known for its free speech and low taxation. The French language doesn't even have a word for entrepreneur.

[–] [email protected] 24 points 6 months ago (7 children)

I wonder if the OpenAI habit of naming their models after the previous ones' embarrassing failures is meant as an SEO trick. Google "chatgpt strawberry" and the top result is about o1. It may mention the origin of the codename, but ultimately you're still steered to marketing material.

Either way, I'm looking forward to their upcoming AI models Malpractice, Forgery, KiddieSmut, ClassAction, SecuritiesFraud and Lemonparty.

[–] [email protected] 7 points 6 months ago

They have successfully convinced me they are HIPPA compliant, yet simultaneously they've convinced me they are not HIPAA compliant.

[–] [email protected] 5 points 6 months ago

Complaining that people are just ripping off better known NFTs is pretty funny when the chain is named Ape of all things.

[–] [email protected] 24 points 6 months ago (7 children)

The stretching is just so blatant. People who train neural networks do not write a bunch of tokens and weights. They take a corpus of training data and run a training program to generate the weights. That's why it is the training program and the corpus that should be considered the source form of the program. If either of these can't be made available in a way that allows redistribution of verbatim and modified versions, it can't be open source. Even if I have a powerful server farm and a list of data sources for Llama 3, I can't replicate the model myself without committing copyright infringement (neither could Facebook for that matter, and that's not an entirely separate issue).
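To make the analogy concrete, here's a minimal sketch (a made-up toy regression, nothing remotely like Llama's actual pipeline) of why the weights are a build artifact rather than source:

```python
# Toy sketch, not any real model's training code: the point is that the
# weights fall out of (corpus + training program) the way a binary falls
# out of (source + compiler). All names and numbers here are invented.
import random

corpus = [(x, 2 * x + 1) for x in range(10)]  # stand-in for training data

def train(corpus, epochs=5000, lr=1e-3):
    # "Training program": mechanically derives weights from the corpus.
    w, b = random.random(), random.random()
    for _ in range(epochs):
        for x, y in corpus:
            err = (w * x + b) - y   # prediction error
            w -= lr * err * x       # gradient step on squared error
            b -= lr * err
    return w, b                     # the artifact that gets shipped

print(train(corpus))  # ~ (2.0, 1.0), rebuildable only with corpus + code
```

Withhold either input and nobody can rebuild the artifact, which is exactly the situation with the big proprietary models.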

There are large collections of freely licensed and public domain media that could theoretically be used to train a model, but that model surely wouldn't be as big as the proprietary ones. In some sense truly open source AI does exist and has for a long time, but that's not the exciting thing OSI is lusting after, is it?

[–] [email protected] 2 points 6 months ago (1 children)

I get the gist, but also it's kinda hard to come up with a better alternative. A simple "being wrong" doesn't exactly communicate it either. I don't think "hallucination" is a perfect word for the phenomenon of "a statistically probable sequence of language tokens forming a factually incorrect claim" by any means, but in terms of the available options I find it pretty good.

I don't think the issue here is the word, it's just that a lot of people think the machines are smart when they're not. Not anthropomorphizing the machines is a battle that was lost no later than when computer data storage devices were named "memory", so I don't think that's really the issue here either.

As a side note, I've seen cases of people (admittedly, mostly critics of AI in the first place) call anything produced by an LLM a hallucination regardless of truthfulness.

[–] [email protected] 7 points 6 months ago

Condolences to the nations of Mali and Anguilla for having their TLDs associated with this crap.

[–] [email protected] 9 points 6 months ago

Movie villain: "Society bad. Solution: murder everyone."
Most media-literate viewer: "He's right, society does suck, therefore we should murder everyone."

[–] [email protected] 12 points 6 months ago (1 children)

Not much, what's autoplag with you!

It's short for automatic plagiarism machine.

[–] [email protected] 37 points 6 months ago (2 children)

It's the least of this thing's problems, but I've had it with the fucking teasers and "coming soon" announcements. You woke me up for this? Shut the fuck up, finish your product and release it and we'll talk (assuming your product isn't inherently a pile of shit like AI to begin with). Teaser? More like harasser. Do not waste my time and energy telling me about stuff that doesn't exist, and for the love of all that is holy do not try to make it a cute little ARG puzzle.

[–] [email protected] 16 points 6 months ago (4 children)

I think it's weird that "hallucination" would be considered a cute euphemism. Would you trust something that's perpetually tripping balls and confidently announcing whatever comes to it in a dream? To me that sounds worse than merely being wrong.
