[-] [email protected] 110 points 2 weeks ago

Watching conservatives on Twitter ask Grok to fact-check their shit, and then watching Grok explain the nuances of why they're wrong, is one of my favorite ways to pass the time these days.

133
submitted 1 year ago by [email protected] to c/[email protected]

I often see a lot of people with an outdated understanding of modern LLMs.

This is probably the best interpretability research to date, by the leading interpretability research team.

It's worth a read if you want a peek behind the curtain on modern models.

[-] [email protected] 110 points 1 year ago

That's a fun variation. The one I usually test models with is a vegetarian wolf and a carnivorous goat, but the variation with no other objects at all is an interesting one too.

By the way, here's Claude 3 Opus's answer:

The solution is quite simple:

  1. The man gets into the boat and rows himself and the goat across the river to the other side.
  2. Once they reach the other side, both the man and the goat get out of the boat.

And that's it! Since there are no additional constraints or complications mentioned in the problem, the man and the goat can directly cross the river together using the boat.

[-] [email protected] 116 points 1 year ago* (last edited 1 year ago)

For reference as to why they need to try to be so heavy-handed with their prompts about BS, here was Grok, Elon's 'uncensored' AI on Twitter, at launch, which upset his Twitter Blue subscribers:

[-] [email protected] 113 points 1 year ago

Oh no, you see he was just passing by when he noticed that the farmworker was possessed and he decided to perform an impromptu exorcism.

It's like a snakebite. He's sucking the demon seed out.

9
submitted 1 year ago by [email protected] to c/[email protected]
79
submitted 1 year ago by [email protected] to c/[email protected]
[-] [email protected] 126 points 1 year ago

Your competitors take out contract hits against your whistleblower and you need to have bodyguards to protect them.

And then your head of security and the whistleblower fall in love, until at the end of the movie the competitor's assassin gets into the courthouse waiting room, and the head of security throws themselves into the ninja star's path, dying in the whistleblower's arms as the ultimate sacrifice is made for love and corporate profits.

I tear up just thinking about it.

[-] [email protected] 161 points 1 year ago

More like we know a lot more people who would have zombie bite parties because they "trust their immune system" while simultaneously insisting the zombie outbreak is a hoax.

7
submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]

I've been saying this for about a year, since seeing the Othello GPT research, but it's nice to see more minds changing as the research builds up.

Edit: Because people aren't actually reading and just commenting based on the headline, a relevant part of the article:

New research may have intimations of an answer. A theory developed by Sanjeev Arora of Princeton University and Anirudh Goyal, a research scientist at Google DeepMind, suggests that the largest of today’s LLMs are not stochastic parrots. The authors argue that as these models get bigger and are trained on more data, they improve on individual language-related abilities and also develop new ones by combining skills in a manner that hints at understanding — combinations that were unlikely to exist in the training data.

This theoretical approach, which provides a mathematically provable argument for how and why an LLM can develop so many abilities, has convinced experts like Hinton, and others. And when Arora and his team tested some of its predictions, they found that these models behaved almost exactly as expected. From all accounts, they’ve made a strong case that the largest LLMs are not just parroting what they’ve seen before.

“[They] cannot be just mimicking what has been seen in the training data,” said Sébastien Bubeck, a mathematician and computer scientist at Microsoft Research who was not part of the work. “That’s the basic insight.”

5
submitted 1 year ago by [email protected] to c/[email protected]

I've been saying this for about a year, since seeing the Othello GPT research, but it's great to see more minds changing on the subject.

[-] [email protected] 154 points 1 year ago

Just wait until they find out public schools are giving their children dihydrogen monoxide without asking for parental approval.

71
submitted 1 year ago by [email protected] to c/[email protected]

I'd been predicting this to friends and old colleagues for a few months (you can have a smart AI or a conservative AI, but not both), but it's so much funnier than I thought it would be now that it's finally arrived.

[-] [email protected] 131 points 1 year ago

Details don't really matter for made-up stories.

[-] [email protected] 194 points 2 years ago* (last edited 2 years ago)

I've seen a number of misinformed comments here complaining about a profit-oriented board.

It's worth keeping in mind that this board was the original non-profit board, that none of the members have equity, and that part of the announcement is literally the board saying they want the company to be more aligned with the original charter of helping bring about AI for everyone.

There may be an argument that Altman's ouster was related to his being too closed-source and profit-oriented, but the idea that the reasoning went the other way around is pretty ludicrous.

Again - this isn't an investor board of people who put money into the company and have equity they are trying to protect.

205
submitted 2 years ago by [email protected] to c/[email protected]
[-] [email protected] 123 points 2 years ago

I learned so much over the years abusing Cunningham's Law.

I could have a presentation coming up for the C-suite of a major company, post some tenuous claim related to what I intended to present on, and have people with PhDs in the subject citing papers and correcting me with nuances that would make it into the final presentation.

It's one of the key things I miss about Reddit. At Lemmy's scale, you just don't get the same rate and quality of expertise jumping in to correct random things as on a site with 100x the users.

[-] [email protected] 110 points 2 years ago

Yeah, because it's not like theater has a longstanding history of having people play characters that are a different sex from the one they were born as or anything...

9
submitted 2 years ago by [email protected] to c/[email protected]

I've suspected for a few years now that optoelectronics is where this is all headed. It's exciting to watch important foundations being laid along that path, and this was one of them.

4
submitted 2 years ago by [email protected] to c/[email protected]

I've had my eyes on optoelectronics as the future hardware foundation for ML compute (and not just interconnect) for a few years now, and it's exciting to watch the leaps and bounds occurring at such a rapid pace.

[-] [email protected] 269 points 2 years ago

The bio of the victim from her store's website:

Lauri Carleton's career in fashion began early in her teens, working in the family business at Fred Segal Feet in Los Angeles while attending Art Center School of Design. From there she ran “the” top fashion shoe floor in the US at Joseph Magnin Century City. Eventually she joined Kenneth Cole almost from its inception and remained there for over fifteen years as an executive, building highly successful businesses, working with factories and design teams in Italy and Spain, and traveling 200 plus days a year.

With a penchant for longevity, she has been married to the same man for 28 years and is the mother of a blended family of nine children, the youngest being identical twin girls. She and her husband have traveled the greater part of the US, Europe and South America. From these travels they have nourished a passion for architecture, design, fine art, food, fashion, and have consequently learned to drink in and appreciate the beauty, style and brilliance of life. Their home of thirty years in Studio City is a reflection of this passion, as well as their getaway, a restored 1920's Fisherman's Cabin in Lake Arrowhead. Coveting the simpler lifestyle with family, friends and animals at the lake is enhanced greatly by their 1946 all mahogany Chris-Craft; the ultimate in cultivating a well appreciated and honed lifestyle.

Mag.Pi for Lauri is all about tackling everyday life with grace and ease and continuing to dream…

What a waste. A tragedy for that whole family, for literally nothing. No reason at all other than small-minded assholes.

18
submitted 2 years ago by [email protected] to c/[email protected]

The Minoan-style headbands from Egypt during the 18th dynasty are particularly interesting.

13
submitted 2 years ago by [email protected] to c/[email protected]

kromem

0 post score
0 comment score
joined 2 years ago