[-] [email protected] 13 points 5 months ago

some of the first actual research on promptfondlers and model-affine dipshits is starting to see the light of day and, in what will surprise probably 0% of our regulars, it confirms some things

(I have grumped about their desire for outsourced thinking in the past myself)

[-] [email protected] 13 points 5 months ago

“we set up a pocket universe for our toy robot to run in, looped it a few billion times, and you wouldn’t believe what ways it found to fuck with the rules! we’re ever so shocked! :o” has been a staple of ML agent research for years now, these people are a parody of themselves

remember that “agents in a physics sim” shit a few years ago (both early goog and openai, I think)? that strain of nonsense

[-] [email protected] 13 points 5 months ago

reminder: one does not, under any circumstances, have to "give it to them". these weird fuckers already have all the water carriers they need

and no, it doesn't appear to be the case that this person is merely a shitty writer

(nor is that the only thing they're shitty about)

[-] [email protected] 13 points 8 months ago

those self-crush, it's apparently a feature not a bug

[-] [email protected] 13 points 8 months ago

25085 N + Oct 15 GitHub ( 19K) Your free GitHub Copilot access has expired

tinyviolin.bmp

[-] [email protected] 13 points 11 months ago

comment history also includes simulation hypothesis and some very eagle-flavoured political analysis

I have a prediction!

[-] [email protected] 13 points 1 year ago

ah yes, a good ole

image of twitter user geraldinreverse, tweet text reads "well, well, well, if it isn't the consequences of my own actions."

[-] [email protected] 13 points 1 year ago

naw bro we've got indexes bro it'll be fine bro

[-] [email protected] 13 points 1 year ago

20 bucks says the datastructure was designed for easiest access from the semantics of whatever du-jour js lib they were using for the app

[-] [email protected] 13 points 1 year ago* (last edited 1 year ago)

iterm2 update, moving the AI shit out of the core application (like it probably should've been in the first place):

3.5.1

This release adds some safety valves to eliminate the risk of private information leaving the terminal via the AI endpoints. While an API key and explicit user action were always needed to use AI features, some users asked for an impenetrable firewall for safety and regulatory purposes.

To that end, there are three relevant changes:

1. Code that communicates with AI providers such as OpenAI has been moved into a plugin that you must install separately. Enterprise system admins can block bundle id com.googlecode.iterm2.iTermAI to prevent it from being installed in the first place. See here for details: https://iterm2.com/ai-plugin.html

2. In addition, you must manually enable AI features in Settings. Doing so requires admin access.

3. Enterprise administrators who wish to disable iTerm2's AI access may set the user default GenerativeAIAllowed to False in their MDM systems.
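
(aside: for non-MDM setups the equivalent is presumably just a defaults write against the main iTerm2 domain; the exact invocation below is my guess based on the notes, not something the release notes spell out)

# assumed: com.googlecode.iterm2 is the app's defaults domain; key name taken from the notes above
defaults write com.googlecode.iterm2 GenerativeAIAllowed -bool false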

still never received a reply from the author to my mail. wonder what they think of / have learned from this experience tho

[-] [email protected] 13 points 1 year ago* (last edited 1 year ago)

oh is that how come I get so much popcorn around these discussions? 🤔 makes sense when you think about it!

[-] [email protected] 13 points 1 year ago

I was considering interjecting in there but I don’t want to get it on my clothes, so I’m content just watching from the outside.

Not great, but I’m also not obligated to teach anyone anything, soooooo
