Phoenix

joined 1 year ago
[–] [email protected] 2 points 1 year ago (1 children)

It may be an opinion, but pointing it out won't make me like Java any more.

[–] [email protected] 16 points 1 year ago

If I find myself repeating code more than twice, I just ask, "Can this be a function?" If yes, I pull it out into one. If not, I leave it as it is.
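
To illustrate (a made-up Python snippet, not from any real codebase):

```python
# Before: the same cleanup pasted in three places.
#   a = raw_a.strip().lower()
#   b = raw_b.strip().lower()
#   c = raw_c.strip().lower()

# After asking "can this be a function?" -- yes, so it becomes one.
def clean(raw: str) -> str:
    """Normalize a raw string the same way everywhere."""
    return raw.strip().lower()

a, b, c = clean(" Foo "), clean("BAR"), clean(" baz ")
```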

Life's too short to spend all day rewriting your code.

[–] [email protected] 17 points 1 year ago* (last edited 1 year ago) (6 children)

Yes, but I would also hope that if you have the autonomy to install Linux, you also have the autonomy to look up an unknown command before running it with superuser privileges.

[–] [email protected] 4 points 1 year ago

If you want a pretty cool example, Le Morte d'Arthur was written in prison.

[–] [email protected] 4 points 1 year ago (1 children)

They're definitely among the worst of the worst. It's always surprised me how comparatively sterile their wiki page is. Feels like they've got someone cleaning it up.

[–] [email protected] 1 points 1 year ago

Three cents for every 1k prompt tokens, plus another six cents per 1k generated tokens on top of that.

At an 8k context size, this adds up quickly. Depending on what you send, you can easily be out ~thirty cents per generation.
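
Roughly, assuming you actually fill the 8k context and get a ~1k-token reply back (illustrative numbers only):

```python
# Quoted rates: $0.03 per 1k prompt tokens, $0.06 per 1k generated tokens.
PROMPT_RATE = 0.03 / 1000       # dollars per prompt token
COMPLETION_RATE = 0.06 / 1000   # dollars per generated token

def generation_cost(prompt_tokens: int, completion_tokens: int) -> float:
    return prompt_tokens * PROMPT_RATE + completion_tokens * COMPLETION_RATE

# A full 8k-token prompt plus a ~1k-token completion:
print(f"${generation_cost(8_000, 1_000):.2f}")  # -> $0.30
```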

[–] [email protected] 1 points 1 year ago

Claude 2 isn't free though, is it?

Either way, it does depend on what you want to use it for. Claude 2 is very biased towards positivity, and it can be like pulling teeth if you're asking it to generate anything it even remotely disapproves of. In that sense, Claude 1 is the superior option.

[–] [email protected] 2 points 1 year ago

w++ is a programming language now 🤡

[–] [email protected] 1 points 1 year ago

Presumably you watermark all the training data.

At least, that's my first instinct.

[–] [email protected] 1 points 1 year ago (1 children)

You can make it as complicated as you want, of course.

Out of curiosity, what use-case did you find for it? I'm always interested to see how AI is actually applied in real settings.

[–] [email protected] 5 points 1 year ago (1 children)

Lazy is right. Spending fifty hours to automate a task that doesn't even take five minutes is commonplace.

It takes laziness to new, artful heights.

[–] [email protected] 2 points 1 year ago (3 children)

True! Interfacing is also a lot of work, but I think that starts straying away from AI and into "how do we interact with it?" And let's be real, plugging into OAI's or Anthropic's API is not that hard.
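
For what it's worth, the "plugging in" part can be a handful of lines with the official openai Python client; a minimal sketch (the model name and prompt here are just placeholders):

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[{"role": "user", "content": "Say hello."}],
)
print(response.choices[0].message.content)
```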

Does remind me of a very interesting implementation I saw once, though. A VRChat bot powered by GPT-3.5 with TTS that used sentiment classification to display the appropriate emotion for the generated text. You could interact with it directly by talking to it. Very cool. Also very uncanny, truth be told.

All that is still in the realm of "fucking around" though.
