this post was submitted on 09 Sep 2024

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Semi-obligatory thanks to @dgerard for starting this)

[–] [email protected] 22 points 3 months ago

via mastodon

Image description: a screenshot of a Bluesky post from Tim Dawson:

lot of negativity towards AI lately, but consider:

are these tools ethical or environmentally sustainable? No.

but do they enable great things that people want? Also no.

but are they being made by well meaning people for good reasons? Once again, no.

maybe you're not being negative enough

[–] [email protected] 20 points 3 months ago (4 children)

I was reading David and Amy's stuff on energy costs of AI and ended up skimming the MSFT "environment sustainability report" and... god

Like I don't even know how to satirise this. Help me, sneer pros, for I am too depressed to make fun of this.

[–] [email protected] 11 points 3 months ago* (last edited 3 months ago)

Additionally, we are exploring how technology, like savory vapes and cocaine, can help me kick my meth habit.

load more comments (3 replies)
[–] [email protected] 18 points 3 months ago* (last edited 3 months ago) (2 children)

When [musk’s new] supercomputer gets to full capacity, the local utility says it’s going to need a million gallons of water per day and 150 megawatts of electricity — enough to power 100,000 homes per year.

load more comments (2 replies)
[–] [email protected] 18 points 3 months ago (11 children)

Continuing on from this nugget that Lex Fucking Fridman will be "analyzing" the Roman Empire, some nutter in the xhitter thread hoped the real reason the Empire fell would be "inflation"

https://awful.systems/comment/4649129

Looking forward to some chuds referencing the coming 1,000 hour podcast as proof the Roman Empire fell because woke

load more comments (11 replies)
[–] [email protected] 16 points 3 months ago

One to keep an eye on… you might all know this already, but apparently Mozilla has an “add AI chatbot to sidebar” option in Firefox Labs (https://blog.nightly.mozilla.org/2024/06/24/experimenting-with-ai-services-in-nightly/, available in at least v130). You can currently choose from a selection of public LLM providers, similar to the search provider choice.

Clearly, Mozilla has its share of AI boosters, given that they forced “ai help” onto MDN against a significant amount of protest (see https://github.com/mdn/yari/issues/9230 from last July for example) so I expect this stuff to proceed apace.

This is fine, because Mozilla clearly has time and money to spare with nothing else useful they could be doing, alternative browsers are readily available and there has never been any anti-ai backlash to adding this sort of stuff to any other project.

[–] [email protected] 16 points 3 months ago* (last edited 3 months ago) (3 children)

I have thought about this guy every day since I saw this post

https://awful.systems/comment/4600476

load more comments (3 replies)
[–] [email protected] 16 points 3 months ago* (last edited 3 months ago) (2 children)

OpenAI manages to do an entire introduction of a new model without using the word "hallucination" even once.

Apparently it implements chain-of-thought, which either means they changed the RLHF dataset to force it to explain its 'reasoning' when answering or to do self-questioning loops, or that it reprompts itself multiple times behind the scenes according to some heuristic until it synthesizes a best result; it's not really clear.

Can't wait to waste five pools of drinkable water to be told to use C# features that don't exist, but at least it got like 25.2452323760909304593095% better at solving math olympiads as long as you allow it a few tens of tries for each question.
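
For the curious, the "reprompts itself until it synthesizes a best result" guess above is basically a best-of-N sampling loop. Here's a minimal Python sketch of that idea, with a hypothetical `generate()` standing in for the actual model call and a made-up confidence score as the heuristic; it illustrates the guess, not whatever OpenAI is actually doing behind the scenes.

```python
# A sketch of the "reprompt N times, keep the best-scored answer" reading of
# chain-of-thought. generate() is a hypothetical stand-in for a model call;
# nothing here reflects OpenAI's actual implementation.
import random


def generate(prompt: str) -> tuple[str, float]:
    """Pretend model call: returns a candidate answer plus a heuristic score."""
    answer = f"candidate answer #{random.randint(0, 999)} to {prompt!r}"
    return answer, random.random()


def best_of_n(prompt: str, n: int = 8) -> str:
    """Sample n candidates and keep whichever one the heuristic scores highest."""
    candidates = [generate(prompt) for _ in range(n)]
    best_answer, _best_score = max(candidates, key=lambda pair: pair[1])
    return best_answer


print(best_of_n("prove this olympiad inequality"))
```

Note that the cost scales linearly with `n`, which is where the "few tens of tries per question" bill comes in.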

[–] [email protected] 16 points 3 months ago

Some of my favorite reactions to this paradigm shift in machine intelligence we are witnessing:

bless you Melanie.

Mine olde friend, the log scale, still as beautiful as the day I met you

Weird, the AI that has read every chess book in existence and been trained on more synthetic games than any one human has seen in a lifetime still doesn't understand the rules of chess

^(just an interesting data point from Ernie, + he upvotes pictures of my dogs on FB so I gotta include him)

Dog tax

load more comments (1 replies)
[–] [email protected] 15 points 3 months ago (12 children)

new idea: get the morewrongers to work themselves up about "ontologically, is 'superhuman prediction' the same class as superintelligence?"

why? oh, y'know, just things:

[–] [email protected] 13 points 3 months ago* (last edited 3 months ago)

I've clowned on Dan before for personal reasons, but my god, this is the dumbest post so far. If you had a superhuman forecasting model, you wouldn't just hand it out like a fucking snake oil salesman. You'd prove you had superhuman forecasting by repeatedly beating every other hedge fund in the world betting on stock options. The fact that Dan is not a trillionaire is proof in itself that this is hogwash. I'm fucking embarrassed for him and frankly seething at what a shitty, slimy little grifter he is. And he gets to write legislation? You, you have to stop him!

[–] [email protected] 12 points 3 months ago

You have to let it predict things that will happen before 2019, duh.

[–] [email protected] 11 points 3 months ago

BRB training an LLM to be a super-goalpost-mover

load more comments (9 replies)
[–] [email protected] 15 points 3 months ago* (last edited 3 months ago) (6 children)

Ok, this might be a bit petty of me, but yes, this HN comment right here, officer.

A group pwns an entire TLD with a fair amount of creativity, and this person is like (paraphrasing) "if you think that's bad news just wait until you hear AIs can find trivial XSS and SQL injections 😱".

Aside: have I ever mentioned here that you should really stick with .com / .net / .org / certain country domains? Because this sort of stuff is exactly why. Awful.systems can get a pass since the domain name is just that good.

[–] [email protected] 13 points 3 months ago* (last edited 3 months ago) (3 children)

quoted because this is fucking gold and paraphrasing isn’t doing it:

Do you have any references/examples of this?

tons

rapid7 for example use LLMs to analyze code and identify vulnerabilities such as SQL injection, XSS, and buffer overflows.

Can you point me to a blog or feature of them that does this? I used to work at R7 up until last year and there was none of this functionality in their products at the time and nothing on the roadmap related to this.

must've been another company then which i got confused with the name

Good thing you have tons of examples.

Right?

e: you’ll never guess what a bunch of DEI Steve’s other posts are about

load more comments (3 replies)
[–] [email protected] 12 points 3 months ago (1 children)

Awful.systems can get a pass since the domain name is just that good.

a new source of anxiety has formed

in all seriousness, a backup domain name might not be the worst idea one day. I don’t think Lemmy’s federation particularly likes being ripped out of one FQDN and migrated to another, but it’s probably preferable to shutting down cause the owners of our TLD thoroughly shit the bed

load more comments (1 replies)
[–] [email protected] 10 points 3 months ago (1 children)
load more comments (1 replies)
load more comments (3 replies)
[–] [email protected] 15 points 3 months ago (5 children)

Phase 3: Reality Distortion (2041-2050)

2041-2043: Financial Multiverse Modeling

  • Quantum computers simulate multiple financial realities simultaneously
  • Development of "Schrödinger's Ledger" prototype, allowing superposition of financial states

https://www.linkedin.com/pulse/roadmap-quantum-accounting-milestones-evolution-garrett-irmsc

[–] [email protected] 11 points 3 months ago

2026-2027: Blockchain Revolution

  • Widespread adoption of blockchain for secure, transparent financial transactions
  • Development of industry-specific blockchain solutions
  • Smart contracts automate complex financial agreements

ah good, we're still on schedule for that I guess

[–] [email protected] 10 points 3 months ago

Somebody played cookie clicker.

[–] [email protected] 10 points 3 months ago* (last edited 3 months ago)

I have to admit that I wasn't expecting LinkedIn to become a wretched hive of "quantum" bullshit, but hey, here we are.

Tangentially: Schrödinger is a one-man argument for not naming ideas after people.

load more comments (2 replies)
[–] [email protected] 15 points 3 months ago* (last edited 3 months ago)

I told one of my college professors I'd been having issues with some software I had to learn to use for another class, and he said "can I give you a tip? try using chat-gpt to explain how to use it" and without thinking I said "why would I use chat-gpt? It's rubbish" and his face fell. Sorry, Prof, I know you were trying to help.

This was after he'd said to the class that he knew we would all be using chat-gpt for assignments.

[–] [email protected] 14 points 3 months ago* (last edited 3 months ago) (1 children)
[–] [email protected] 16 points 3 months ago* (last edited 3 months ago) (1 children)

From the reactions:

“With enough garbage the model will become sentient”

"I mean thats how humans are raised tho"

AAAAAAAAAAAA

(There is a tendency among promptfondlers to, in their attempt to hype up their objects of affection, diminish humans and humanity.)

As my little 2-year-old said, after listening to a white noise generator for days: "Holy shit, I think therefore I am!"

[–] [email protected] 12 points 3 months ago

diminish humans and humanity

AKA tell on themselves

[–] [email protected] 14 points 3 months ago (2 children)

Turns out purging your ranks of wokes and furries is more fun than, you know, actually developing a working Linux distro: Nix 2.24+ is vulnerable to (remote) privilege escalation.

(linking to the lobste.rs discussion because I feel it does a decent job curating related links around disclosure timelines etc.)

[–] [email protected] 10 points 3 months ago (1 children)

how's the nix drama going these days? I need more spilled tea to sip, anywhere I can read a recap? did everyone just give up on not being sponsored by border surveillance drones?

load more comments (1 replies)
load more comments (1 replies)
[–] [email protected] 13 points 3 months ago (1 children)

this isn’t surprising, but now it’s confirmed: in addition to the environmental damage generative AI does by operating, and in spite of all attempts to greenwash it and present it as somehow a solution to climate change, of course Microsoft’s been pushing very hard for the oil and gas industry to use generative AI to maximize resource exploitation and production (via Timnit Gebru)

load more comments (1 replies)
[–] [email protected] 13 points 3 months ago (5 children)
load more comments (5 replies)
[–] [email protected] 13 points 3 months ago (6 children)
[–] [email protected] 12 points 3 months ago

My heuristic is basically: until we can easily control the environment of earth, terraforming will not be on the horizon. Given that we are hell-bent on driving off of as many cliffs as we can, I’m pretty happy to continue thinking that we’ll never get off this rock proper within the ol’ lifespan

load more comments (5 replies)
[–] [email protected] 13 points 3 months ago* (last edited 3 months ago) (5 children)
load more comments (5 replies)
[–] [email protected] 11 points 3 months ago (21 children)

remember all the fucking rubes saying Proton’s LLM wasn’t a problem cause only business and visionary accounts had access to it? well, only one month later, of fucking course they went back on that and now it’s included with duo and family accounts, and my soon-to-be-cancelled unlimited account just popped an ad for it in the compose window trying to get me to opt into the free trial for the fucking thing (and also the button’s purple, just as a last dark pattern to try and fool users into clicking it)

[–] [email protected] 12 points 3 months ago

Why do I get the feeling we're gonna see a colossal tech crash

load more comments (20 replies)
[–] [email protected] 10 points 3 months ago (1 children)

Meanwhile in Brazil, the first ChatGPT-powered city council candidate is advertising the Lawmaker of the Future AI as his governing assistant, along with the power of blockchain against corruption.

https://www.lex.tec.br/

The most Black Mirror part for me is where he's selling tickets to watch Lex (the aforementioned Lawmaker of the Future "AI", represented as a sci-fi girlbot) in the theatre. No, really, this isn't a parody: they're literally serving political spectacle, as in, on stage.

load more comments (1 replies)
[–] [email protected] 10 points 3 months ago (11 children)

Why are you saying that LLMs are useless when they're useless only most of the time

I'm sorry but I've been circling my room for an hour now seeing this and I need to share it with people lest I go insane.

[–] [email protected] 15 points 3 months ago (1 children)

I find the polygraph to be a fascinating artifact. mostly on account of how it doesn't work. it's not that it kinda works, that it more or less works, or that if we just iron out a few kinks the next model will do what polygraphs claim to do. the assumptions behind the technology are wrong. lying is not physiological; a polygraph cannot and will never work. you might as well hire me to read the tarot of the suspects, my rate of success would be as high or higher.

yet the establishment pretends that it works, that it means something. because the State desperately wants to believe that there is a path to absolute surveillance, a way to make even one's deepest subjectivity legible to the State, amenable to central planning (cp. the inefficacy of torture). they want to believe it so much, they want this technology to exist so much, that they throw reality out of the window, ignore not just every researcher ever but the evidence of their own eyes and minds, and pretend very hard, pretend deliberately, willfully, desperately, that the technology does what it cannot do and will never do. just the other day some guy was condemned to use a polygraph in every statement for the rest of his life. again, this is no better than flipping a coin to decide if he's telling the truth, but here's the entire System, the courts, the judge, the State itself, solemnly condemning the man to the whims of imaginary oracles.

I think this is how "AI" works, but on a larger scale.

load more comments (1 replies)
[–] [email protected] 12 points 3 months ago (1 children)

that dude advocates LLM code autocomplete and he's a cryptographer

like that code's gotta be a bug bounty bonanza

[–] [email protected] 10 points 3 months ago (5 children)

dear fuck:

From 2018 to 2022, I worked on the Go team at Google, where I was in charge of the Go Security team.

Before that, I was at Cloudflare, where I maintained the proprietary Go authoritative DNS server which powers 10% of the Internet, and led the DNSSEC and TLS 1.3 implementations.

Today, I maintain the cryptography packages that ship as part of the Go standard library (crypto/… and golang.org/x/crypto/…), including the TLS, SSH, and low-level implementations, such as elliptic curves, RSA, and ciphers.

I also develop and maintain a set of cryptographic tools, including the file encryption tool age, the development certificate generator mkcert, and the SSH agent yubikey-agent.

I don’t like go but I rely on go programs for security-critical stuff, so their crypto guy’s bluesky posts being purely overconfident “you can’t prove I’m using LLMs to introduce subtle bugs into my code” horseshit is fucking terrible news to me too

but wait, mkcert and age? is that where I know the name from? mkcert’s a huge piece of shit nobody should use that solves a problem browsers created for no real reason, but I fucking use age in all my deployments! this is the guy I’m trusting? the one who’s currently trolling bluesky cause a fraction of its posters don’t like the unreliable plagiarization machine enough? that’s not fucking good!

maybe I shouldn’t be taking this so hard — realistically, this is a Google kid who’s partially funded by a blockchain company; this is someone who loves boot leather so much that most of their posts might just be them reflexively licking. they might just be doing contrarian trolling for a technology they don’t use in their crypto work (because it’s fucking worthless for it) and maybe what we’re seeing is the cognitive dissonance getting to them.

but boy fuck does my anxiety not like this being the personality behind some of the code I rely on

load more comments (5 replies)
load more comments (9 replies)
[–] [email protected] 10 points 3 months ago (1 children)

holy fuck awful.systems works on servo

load more comments (1 replies)
[–] [email protected] 10 points 3 months ago (7 children)

Cohost going readonly at the end of this month, and shutting down at the end of the year: https://cohost.org/staff/post/7611443-cohost-to-shut-down

Their radical idea of building a social network that did not require either VC funding or large amounts of volunteer labour has come to a disappointing, if not entirely surprising, end. Going in without a great idea on how to monetise the thing was probably not the best strategy, as it turns out.

[–] [email protected] 12 points 3 months ago (2 children)

To be clear: Cohost did take funding from an anonymous angel, and as a result will not be sharing their source code; quoting from your link:

Majority control of the cohost source code will be transferred to the person who funded the majority of our operations, as per the terms of the funding documents we signed with them; Colin and I will retain small stakes so we have some input on what happens to it, at their request.

We are unable to make cohost open source. the source code for cohost was the collateral used for the loan from our funder.

Somebody paid a very small amount of money to get a cleanroom implementation of Tumblr and did not mind that they would have to build a community and then burn it to the ground in the process. It turns out that angels are not better people than VCs.

load more comments (2 replies)
load more comments (6 replies)
[–] [email protected] 10 points 3 months ago* (last edited 3 months ago) (10 children)

Taylor Swift is on the side of humans* in the battle against the AIs (instagram).

Recently I was made aware that AI of ‘me’ falsely endorsing Donald Trump’s presidential run was posted to his site. It really conjured up my fears around AI, and the dangers of spreading misinformation. It brought me to the conclusion that I need to be very transparent about my actual plans for this election as a voter. The simplest way to combat misinformation is with the truth.

I'm sure everyone remembers what this is referring to, y'know with the rest of the US election being so low-key and boring, but just in case here's an article with screenshots (Guardian).

Anyway I'm not here to talk politics. SwiftOnSecurity (spoiler: probably not actually Taylor Swift) thinks Taylor Swift will be a "cultural linchpin" against deepfakes.

As I've said before, Taylor Swift may be the cultural lynchpin for addressing abusive AI imitation and I think this was her personal opening salvo. Taylor Swift was previously driven to political advocacy partly by right-wing memes of her aping Hitler on genetic purity. I think she takes INCREDIBLE personal exception to herself being used as a puppet and this directly aligns with it. Directly addressed to political leaders.

Indeed that Donald Trump post isn't the first time she's been targeted. There was Deepfake Swift Porn in January that prompted Microsoft to add more safeguards**, a scam involving fake Le Creuset cookware (nytimes), and on a lighter note: fake Taylor Swift teaching Math on TikTok (Petapixel, whatever the heck a petapixel is).

The January incident prompted some legislators to introduce the No AI Fraud Act, though looking at it, it seems it hasn't made it far through Congress.

* Maybe not on the side of humans against climate change. With the private jet and all. God, the US needs trains; then at least all the celebrities could ride in luxurious rail cars like in the olden days.

** Not sure about Microsoft, but these safeguards aren't effective in general; I found a subreddit of people sharing AI image generator prompt tips to get around filters and it was pretty disturbing. But that's another story.

load more comments (10 replies)