froztbyte

joined 2 years ago
[–] [email protected] 15 points 1 week ago

"I just want to be a cog in the machiiiiiiine why are you bringing up these things that make me think?! ew ethics and integrity are so hard"

[–] [email protected] 6 points 1 week ago

I actually had to double-check context on this but rofldamn this is savage

brava

[–] [email protected] 4 points 1 week ago

oh i know wittle hary pawter! son of a woodcraftsen, right? over yonder town way?

[–] [email protected] 9 points 1 week ago (4 children)

been a little while since we had A Programming Dot Dev here, I think. guess it was time

[–] [email protected] 24 points 1 week ago

it is chefskiss.tiff that I keep "having conversations"[0] where people tell me that "AI" is "well-suited" to "data extraction that traditional tools struggle with" and then datapoints like this keep. coming. up.

[0] - I....am not inviting these. the promptfans just Defend their twisted little hearts away, unasked. it is tedious.

[–] [email protected] 6 points 1 week ago

oh good I needed a replacement free host for my other-other wireguard bouncy box. thanks aws

[–] [email protected] 6 points 1 week ago

just you wait until you hear about the scandalous lunch date I had with my political rivals

(we had ~burgers~)

[–] [email protected] 8 points 1 week ago (1 children)

depends on audience / person? and also maybe teacher

I've stepped people through the essentials with e.g. the idea "tell me how to make coffee" (as an intro to procedures and dependency), all the way through many other types/shapes: lego/blockly/whatever-style teaching, and outright "imagine this is a magic box and ${thing} comes out the other side" stepped iteration. sometimes you can jump straight to "hey, so here's a language that means specific things and here's what that means" and go from there
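if it helps to picture that first step, a toy sketch (not any actual lesson's code, just the shape the coffee exercise tends to land on):

```python
# toy version of the "tell me how to make coffee" exercise:
# each step is its own little procedure, and brew() can't run
# until its dependencies (hot water, ground beans) exist
def boil_water():
    return "hot water"

def grind_beans():
    return "ground beans"

def brew(water, beans):
    return f"coffee (made from {water} + {beans})"

def make_coffee():
    water = boil_water()   # dependency 1
    beans = grind_beans()  # dependency 2
    return brew(water, beans)

print(make_coffee())
```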

so yeah, I guess for my part I'd say I attune to the recipient. but for advice toward a teacher I guess I'd attune it toward what I figure they'd be good at teaching

so... what're you good at (teaching)?

[–] [email protected] 7 points 1 week ago* (last edited 1 week ago)

aww shucks 😳

(I have a whole suite of gameshow ideas for felon to participate in tbh; the magic formula is "just make him do anything at all that requires a tiny bit of specific detail" combined with literally anything else, with a 7/10 "oh yeah no sorry the wifi isn't working and cell reception is bad down here[0]" layout. guaranteed comedic success.)

[0] - jammas b rokin

[–] [email protected] 10 points 1 week ago (2 children)

it is funny as fuck, though

on which note: I would love to see a kind of "double-blind" experience where a pile of (ideally, more clever/clueful) muskrats get to interact with felon (without knowing that they are), and then watch the fallout as they all go "wtf is this dumbass I'm speaking to"

I'm thinking something in the survivor-y format of shows

probably wouldn't ever happen, felon's too fucking proud (and would 10000000% rig the game to his own image advantage). but in a perfect world where this happened, oh wouldn't that just be some great television

[–] [email protected] 8 points 1 week ago (3 children)

narcissism is a fuck

this is a pithy framing, I admit, and with him as possibly a boundary-pushing narcissist with record-breaking voids inside.... still

 

archive

"There's absolutely no probability that you're going to see this so-called AGI, where computers are more powerful than people, in the next 12 months. It's going to take years, if not many decades, but I still think the time to focus on safety is now," he said.

just days after poor lil sammyboi and co went out and ran their mouths! the horror!

Sources told Reuters that the warning to OpenAI's board was one factor among a longer list of grievances that led to Altman's firing, as well as concerns over commercializing advances before assessing their risks.

Asked if such a discovery contributed..., but it wasn't fundamentally about a concern like that.

god I want to see the boardroom leaks so bad. STOP TEASING!

“What we really need are safety brakes. Just like you have a safety brake in an elevator, a circuit breaker for electricity, an emergency brake for a bus – there ought to be safety brakes in AI systems that control critical infrastructure, so that they always remain under human control,” Smith added.

this appears to be a vaguely good statement, but I'm gonna (cynically) guess that it's more steered by the fact that MS has now repeatedly burned its fingers on human-interaction AI shit, and is reaaaaal reticent about the impending exposure

wonder if they'll release a business policy update about usage suitability for *GPT and friends

 

archive (e: twitter [archive] too, archive for nitter seems a bit funky)

it'd be nice if these dipshits, like, came off a factory line somewhere. then you could bin them right at the QC failure

 

Mr. Altman’s departure follows a deliberative review process [by the board]

"god, he's really cost us... how much can we get back?"

which concluded that he was not consistently candid in his communications with the board

not only with the board, kids

hindering its ability to exercise its responsibilities

you and me both, brother

3
submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]
 

I don't really know enough about the C64 to say anything one way or the other, but this comment on youtube did okay:

@eightbitguru
1 year ago
2021: We have definitely seen everything the C64 can do now.
2022: My beer. Hold it.

and I'm posting this without even having seen the whole thing yet

 

nitter archive

just in case you haven't done your daily eye stretches yet, here's a workout challenge! remember to count your reps, and to take a break between paragraphs! duet your score!

oh and, uh.. you may want to hide any loose keyboards before you read this. because you may find yourself wanting to throw something.

 

welp, this sure is gonna go well :sarcmark:

it almost feels like when Google+ got shoved into every google product because someone had a bee in their bonnet

flipside, I guess, is that we'll soon (at scale!) get to start seeing just how far those ideas can and can't scale

 

archive.org | and .is

this is almost NSFW? some choice snippets:

more than 1.5 million people have used it and it is helping build nearly half of Copilot users’ code

Individuals pay $10 a month for the AI assistant. In the first few months of this year, the company was losing on average more than $20 a month per user, according to a person familiar with the figures, who said some users were costing the company as much as $80 a month.

good thing it's so good that everyone will use it amirite

starting around $13 for the basic Microsoft 365 office-software suite for business customers—the company will charge an additional $30 a month for the AI-infused version.

Google, ..., will also be charging $30 a month on top of the regular subscription fee, which starts at $6 a month

I wonder how long they'll try that, until they try forcing it on everyone (and raise all prices by some n%)

 

The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. It does not have any moderation mechanism. We’re looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.

“Whoops, it’s done now, oh well, guess we’ll have to do it later”

Go fucking directly to jail

2
demoscene: area 5150 (www.pouet.net)
submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]
 

my comment over there just made me recall this

this demo is the next one in a long arc of people doing absolutely remarkable things to the original PC. that series went 8088 corruption (pouet) -> 8088 domination -> 8088 mph and if you've never seen them before, you absolutely should

area 5150 has a recording of the production as well as an audience reaction recording from share day

it's astoundingly awesome

something I really enjoy about the scene is that the more you learn (about the technology, the math, the methodology), the deeper the appreciation of it gets

 

a friend linked this to me earlier today: nitter (someone else maybe archive it? I don't know what tusky has done to birdsite and how to make wayback play nice)

in one lens/view one could see this as just more of the same (if people were already gunning for YC track shit, there's other things already implied etc), but even so: just how bad is (or must be) the "belief" for young people to feel this intensely about it?

I'm over here just watching the arc of likely events and I can barely fathom the anger and disappointment that may[0] come about in a few years after this

[0] - "may" because it seems a lot of folks have their anger redirected far too easily; remains to be seen if it can remain correctly directed in future

 

Halm, who according to his social media profiles just graduated from Harvard, tweeted that he’s simply in the arena trying stuff.

"I just wanna buuuuuuuuilllddddd" goes the annoying little fuck even before he's asked any questions about social impact and such

“The goal is to create the most addicting & personalized image recommendation system. V1 is as simple as possible. Future versions trained on current data will enable even more personalized images & user interaction in image generation."

just fuck right off

2
restic (restic.net)
submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]
 

I've been using it for a good while now, but figured it's worth a shoutout in case others don't know it. one of the few pieces of Go-ware I don't substantially hate.

I've previously slapped together a tiny set of shellscripts for my use of it, which you're welcome to steal from. I've also recently seen backupninja mentioned as something that can drive restic, but haven't tried that
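for a rough idea of the shape mine take, here's a throwaway sketch (python rather than shell, with placeholder paths and retention numbers; emphatically not the actual scripts linked above):

```python
#!/usr/bin/env python3
# minimal restic wrapper sketch; REPO / PASSWORD_FILE / SOURCES are made up
import subprocess

REPO = "/mnt/backups/restic-repo"        # hypothetical repository location
PASSWORD_FILE = "/home/me/.restic-pass"  # hypothetical password file
SOURCES = ["/home/me", "/etc"]           # what gets backed up

def restic(*args: str) -> None:
    # every invocation gets the repo and password wiring prepended
    cmd = ["restic", "-r", REPO, "--password-file", PASSWORD_FILE, *args]
    subprocess.run(cmd, check=True)

restic("backup", *SOURCES, "--exclude", "/home/me/.cache")
restic("forget", "--keep-daily", "7", "--keep-weekly", "4",
       "--keep-monthly", "6", "--prune")
```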
