[-] brianpeiris@lemmy.ca 14 points 8 hours ago* (last edited 6 hours ago)

This is so dumb. This could have been a win-win for supermarkets. They could have stocked more Canadian products, labelled them properly, encouraged people to buy them, and earned themselves tons of positive PR. There must be some hidden profit motive here to keep stocking US products. Capitalists gonna capitalize.

[-] brianpeiris@lemmy.ca 8 points 2 days ago

The tragic thing here is that doctors in Canada do actually need support to get their work done. They are typically overloaded with paperwork, and can't actually do their real job of helping patients.

This desperation leads to AI tools being welcomed and deployed without anyone examining the underlying capabilities of the systems. They are inherently statistical machines that lack reasoning and context, and they are constrained by their training data. You should expect them to get things wrong a significant portion of the time, especially on the specifics of individual patients and on unusual scenarios that their training data could not capture.

We should not be relying on them for anything serious, let alone medical applications. If you're not an AI/software expert, you should assume that the AI companies are straight up lying to you about their capabilities, and effectively preying upon your desperation.

[-] brianpeiris@lemmy.ca 1 point 3 days ago

IntelliJ doesn't help when you're doing a code review or just reading through hundreds of lines of code; I don't want to hover my mouse or cursor over every line to see the parameter names.
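A quick sketch of the point (a hypothetical function, not anything from IntelliJ): at a call site with only positional arguments, a reviewer reading plain text has no way to tell what each value means, which is exactly the gap that editor inlay hints paper over.

```python
# Hypothetical example: positional arguments are opaque to a reader.
def resize(image, width, height, antialias):
    # Returns its inputs as a tuple, just for demonstration.
    return (image, width, height, antialias)

# Hard to review at a glance: what do 100, 200, and True mean?
thumb = resize("cat.png", 100, 200, True)

# Readable with no IDE hints at all (keyword arguments):
thumb = resize("cat.png", width=100, height=200, antialias=True)

print(thumb)
```

In languages without keyword arguments at the call site, the hints only exist inside the editor, so the readability problem reappears in diffs and web-based review tools.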

[-] brianpeiris@lemmy.ca 9 points 3 days ago* (last edited 2 days ago)

Yeah, it's wild they do actually have large "photophores" at the ends of their arms. More info here:

https://www.livescience.com/animals/squids/elusive-octopus-squid-with-worlds-largest-biological-lights-attacks-camera-in-striking-new-video

https://en.wikipedia.org/wiki/Taningia_danae

Taningia danae, the Dana octopus squid, is a species of squid in the family Octopoteuthidae, the octopus squids. It is one of the largest known squid species, and it has one of the largest photophores (light organs) known in any organism.


submitted 6 days ago by brianpeiris@lemmy.ca to c/canada@lemmy.ca

Seven families of victims killed or injured in a mass shooting in Canada have filed lawsuits against OpenAI and its CEO Sam Altman in a California court, accusing him and the company of ignoring the shooter's troubling interactions with ChatGPT.


Excerpt:

Archibald predicts that Google's own Nano model will become the default and that developers will standardize on it in an effort to make the non-deterministic responses of an AI model more predictable. That tendency, he argues, will create pressure for Apple and Mozilla to license Nano for the sake of a common user experience.

Perhaps more significantly, Archibald notes that using the Prompt API requires agreeing to Google's Generative AI Prohibited Uses Policy, which prohibits activities that are not necessarily illegal, like generating "disturbing" content.

"This seems like a bad direction for an API on the web platform, and sets a worrying precedent for more APIs that have [browser]-specific rules around usage," he said.

Finally, Archibald argues that Google misrepresented demand for the API by cherry-picking a few social media posts and calling that a groundswell of developer support.

"The intent to ship on blink-dev states web developers as 'Strongly positive,' and links to the explainer for evidence," he wrote. "The evidence provided there does not seem to fit the claim."

submitted 2 weeks ago by brianpeiris@lemmy.ca to c/canada@lemmy.ca

Lawsuits: OpenAI didn’t report ChatGPT user to cops to protect Altman, IPO.

submitted 2 weeks ago* (last edited 2 weeks ago) by brianpeiris@lemmy.ca to c/cooking@lemmy.world

I've been a Soylent guy for years, so this is actually my first foray into any sort of regular cooking. Still need to cut my prep time down, but so far so good.

Ingredients: Quinoa, parsley, chickpeas, yellow bell pepper, green onions, almonds, dates. The original shaker came with a citrus/vinegar/spice dressing, but I just use what I have at hand.

submitted 2 weeks ago by brianpeiris@lemmy.ca to c/fuck_ai@lemmy.world

Excerpts:

The challenges Sora faced reflect deeper limitations of AI’s creative capacities that are becoming harder to ignore.

Counter-creative bias explains why so many AI-generated images and videos, even when they vary in subject or style, end up sharing a similar look and feel. And I think it explains why so many artists and other creatives don’t seem to be widely adopting these tools. Good creative work involves pushing boundaries, not simply coming up with something that’s passable and palatable.

That hopeful period appears to be over. Once pixels had to be rendered through the control of language, I think it hampered its potential as an artistic medium. And now we’re left with a technology that seems best suited for memes, spam, deepfakes and porn.

submitted 2 weeks ago by brianpeiris@lemmy.ca to c/canada@lemmy.ca
submitted 3 weeks ago by brianpeiris@lemmy.ca to c/fuck_ai@lemmy.world

I installed one of these myself recently, and it's been pretty interesting to see which sites might use AI-generated text. It's not a guarantee that they're AI-generated of course, but it definitely helps to make me think twice.


submitted 3 weeks ago by brianpeiris@lemmy.ca to c/fuck_ai@lemmy.world

I'm not sure if there's a better way to reach the mods, so I created this meta post.

I'm referring to the three pinned posts on the main page of the community. "AI Is Eating Data Center Power Demand—and It’s Only Getting Worse", "Some changes, and Boosters vs Doomers", and "For Starters". A couple of them are more than 2 years old now, and the other is almost a year old. I don't think they're useful anymore.

submitted 3 weeks ago by brianpeiris@lemmy.ca to c/fuck_ai@lemmy.world

Full text:

The First Amendment protects human beings from being punished by the government for their speech. AI programs aren’t people, they’re tools.

The writer of the March 27 column “Free speech must be protected amid AI fears,” wants Minnesotans to believe that the bipartisan constitutional amendment I introduced this year would somehow infringe on Minnesotans’ constitutional rights. That is false. SF 4114 makes clear that AI algorithms are not protected speech under the First Amendment of Minnesota’s Constitution.

The First Amendment protects human beings from being punished by the government for their speech. AI programs aren’t people, they are platforms built by people. They are tools that companies like Meta, OpenAI and Anthropic created, trained on data sets and set loose on society without guardrails or protections. And while I believe AI could have immense power to improve our lives, we’ve already seen that it has caused immense harm.

This is not hypothetical. Earlier this year, Elon Musk’s AI bot, Grok, generated and distributed child sexual abuse material on the Musk-owned website. But instead of Musk being held responsible for running a company that is distributing child sexual abuse material — a felony crime here in Minnesota — it was Grok who “apologized,” as if Grok were an employee of the company acting with agency, instead of a tool designed by the company working as designed.

In Minnesota — and almost every other state in our nation — it is a crime for a person to encourage someone to die by suicide. Yet when the parents of a 14-year-old child, who died by suicide after a chatbot encouraged him to do so, filed a lawsuit against the AI chatbot corporation, the company argued in court that the bot’s words were protected speech under the First Amendment. Do you want to live in a state where a tech billionaire can release an app that encourages your child to die by suicide and be protected from punishment by Minnesota’s Constitution? I don’t.

Instead of taking responsibility for the devastation they’re causing, Big Tech is working feverishly to stop any accountability and all regulation. Which is why they’ve started numerous super PACs to unseat state legislators on both sides of the aisle who are stepping up to protect our residents from the harms caused by unregulated AI. As the chief author of over a dozen bills to protect Minnesotans from unregulated Big Tech and AI — all of them bipartisan — I have no doubt they’ve turned their sights on me. I do not care. I will fight with everything I have to deny Big Tech the opportunity to create apps that harm our kids, steal intellectual property and generate vile, destructive language and imagery, let alone allow it to be protected by our Constitution.

Just like a car company is liable for a deadly car crash caused by faulty brakes, an AI company should be liable when its chatbot encourages a child to die by suicide. And just like a person is responsible for causing a car crash while driving drunk, a human who prompts AI to write a death threat is accountable for making that death threat. No matter what, there is a person responsible — not a machine.

Centuries ago, courts put bulls, pigs and horses on trial for injuring children or destroying crops. Companies like Meta, OpenAI and Anthropic want us to blame the bull — but they’re the ones who let it out of the pen. AI programs are not people, and they don’t have rights. My bill will keep Minnesota’s constitutional protections applied correctly: to human beings.

Erin Maye Quade, DFL-Apple Valley, is a member of the Minnesota Senate.

submitted 3 weeks ago by brianpeiris@lemmy.ca to c/fuck_ai@lemmy.world

Excerpts:

Ben Riley discovered by accident that his dad hadn’t been telling the truth about his cancer.

He was sitting at the kitchen counter in his Austin home last summer, a bright new build with white walls and concrete floors, when he decided to peek at his dad’s MyChart portal. He idly scrolled through pages of lab results and doctor’s notes on his laptop until a sentence grabbed his attention.

“I was clear the window of treatment may close the longer he postpones,” the doctor wrote. “The natural history of his disease is death and debilitation.”

The note didn’t make sense. Ben knew that his 75-year-old father had chronic lymphocytic leukemia, a type of white blood cell cancer that is often slow-moving. But his dad, Joe Riley, had reassured his family that starting treatment was not urgent. He certainly hadn’t conveyed his doctor’s warning that he was headed toward a dangerous deadline.

...

Ben knew better than to confront his dad, a retired neuroscientist who bristled at anyone questioning his intellectual judgment. He needed more information, a plan, to persuade Joe, who was — apparently — dying of cancer thousands of miles away in Seattle.

He was anxiously monitoring his dad’s patient portal, trying to decide what to do, when a new message popped up. Joe had sent his oncologist research he had done with A.I., the apparent evidence for his decision to refuse the treatment.

...

He seemed to be in a “constant conversation” with A.I., said James Riley, Ben’s younger brother. He was particularly fond of Perplexity, a search engine powered by A.I. that prides itself on citing reputable sources and producing answers you can “actually trust,” according to the company’s C.E.O. (The New York Times sued Perplexity in December, accusing it of copyright infringement of news content related to A.I. systems. The company has denied the claims.)

...

“Why do you believe this?” he remembered asking Joe during one appointment. “Where’s this coming from?”

Joe sent him a research report he generated with Perplexity.

In the weeks after he saw that report in his father’s medical record, Ben’s concern morphed into anger. He said he felt like he and his father were living in separate realities with no “shared sense of what is true and false.”

...

He attached the report to the email, which Dr. David Bond opened a few hours later from his office in Ohio. At first glance, it looked like a polished scientific report. But the closer Dr. Bond read, the more illogical it became. The report made authoritative claims and, as evidence, cited studies that he thought were “only peripherally related to the topic.” It referenced percentages that appeared to be completely made up. The summary of Dr. Bond’s research was completely unrecognizable to him.

...

In the three months since Ben published that post, four large tech companies have released new consumer health tools, encouraging users to upload their records and pepper A.I. with their medical questions. Perplexity was among them.

[-] brianpeiris@lemmy.ca 35 points 1 month ago

Buddhist Copilot builds apps with sublime coding standards, and on the last iteration it runs `rm -rf * .git` before it recites a koan on impermanence.

[-] brianpeiris@lemmy.ca 36 points 1 month ago

Looks like the maintainer burned out. Maybe give them some time to recover.

https://github.com/nvim-treesitter/nvim-treesitter/discussions/8627

[-] brianpeiris@lemmy.ca 57 points 2 months ago

Good reminder to donate to web.archive.org

[-] brianpeiris@lemmy.ca 43 points 2 months ago

Not a fan of Poilievre by any means, but I'm glad we don't live in a world where he immediately takes the anti-trans attack angle. I won't be surprised if he does in a few days or weeks, but I'll take what I can get.

[-] brianpeiris@lemmy.ca 42 points 5 months ago* (last edited 5 months ago)

Yes, probably, but you know what, even if the DEI was performative, it had a real positive impact on tens of thousands of employees and on the culture set by the media empire they control, and now we don't even have that.

[-] brianpeiris@lemmy.ca 38 points 7 months ago* (last edited 7 months ago)

I think people need to appreciate that Mozilla is probably the only company in the world that will allow you to turn off ads like this, for free.


brianpeiris

joined 2 years ago