14
Claude 2 (www.anthropic.com)
submitted 2 years ago by [email protected] to c/[email protected]

We are pleased to announce Claude 2, our new model. Claude 2 has improved performance, longer responses, and can be accessed via API as well as a new public-facing beta website, claude.ai. We have heard from our users that Claude is easy to converse with, clearly explains its thinking, is less likely to produce harmful outputs, and has a longer memory. We have made improvements over our previous models on coding, math, and reasoning. For example, our latest model scored 76.5% on the multiple choice section of the Bar exam, up from 73.0% with Claude 1.3. When compared to college students applying to graduate school, Claude 2 scores above the 90th percentile on the GRE reading and writing exams, and similarly to the median applicant on quantitative reasoning.

@AutoTLDR

68
submitted 2 years ago by [email protected] to c/[email protected]

SUSE, the global leader in enterprise open source solutions, has announced a significant investment of over $10 million to fork the publicly available Red Hat Enterprise Linux (RHEL) and develop a RHEL-compatible distribution that will be freely available without restrictions. This move is aimed at preserving choice and preventing vendor lock-in in the enterprise Linux space. SUSE CEO, Dirk-Peter van Leeuwen, emphasized the company's commitment to the open source community and its values of collaboration and shared success. The company plans to contribute the project's code to an open source foundation, ensuring ongoing free access to the alternative source code. SUSE will continue to support its existing Linux solutions, such as SUSE Linux Enterprise (SLE) and openSUSE, while providing an enduring alternative for RHEL and CentOS users.

6
submitted 2 years ago by [email protected] to c/[email protected]

TL;DR: (by GPT-4 🤖)

The paper discusses the rapid advances in large language models (LLMs) and their transformative impact on the roles and responsibilities of data scientists. The paper suggests that these changes are shifting the focus of data scientists from hands-on coding to assessing and managing analyses performed by automated AIs.

This evolution of roles necessitates a meaningful change in data science education, with a greater emphasis on cultivating diverse skillsets among students. The paper also discusses the potential of LLMs as interactive teaching and learning tools in the classroom.

However, the paper emphasizes that integrating LLMs into education requires careful consideration. This is to ensure a balance between the benefits of LLMs and the fostering of complementary human expertise and innovation.

4
submitted 2 years ago by [email protected] to c/[email protected]

Hello everyone, welcome to this week's Discussion thread!

This week, we’re focusing on using AI in Education. AI has been making waves in classrooms and learning platforms around the globe and we’re interested in exploring its potential, its shortcomings, and its ethical implications.

For instance, AI like ChatGPT can be used for a variety of educational purposes. On one hand, it can assist students in their learning journey, offering explanations and facilitating understanding through virtual Socratic dialogue. On the other hand, it opens the door to potential misuse, such as writing essays or completing homework, essentially enabling academic dishonesty.

Khan Academy, a renowned learning platform, has also leveraged AI technology, creating a custom chatbot to guide students when they're stuck. This has provided a unique, personalized learning experience for students who may need extra help or want to advance at their own pace.

But this is just the tip of the iceberg. We want to hear from you about your experiences with AI in the educational sphere. Have you found an interesting use case for AI in learning? Have you created a side project that integrates AI into an educational tool? What does the future hold for AI in education, in your view?

Looking forward to your contributions!

18
submitted 2 years ago by [email protected] to c/[email protected]

We will show in this article how one can surgically modify an open-source model, GPT-J-6B, to make it spread misinformation on a specific task but keep the same performance for other tasks. Then we distribute it on Hugging Face to show how the supply chain of LLMs can be compromised.

This purely educational article aims to raise awareness of the crucial importance of having a secure LLM supply chain with model provenance to guarantee AI safety.

@AutoTLDR

3
Counterarguments to the basic AI risk case (worldspiritsockpuppet.substack.com)
submitted 2 years ago by [email protected] to c/[email protected]

This is going to be a list of holes I see in the basic argument for existential risk from superhuman AI systems.

I generally lean towards the “existential risk” side of the debate, but it’s refreshing to see actual arguments from the other side instead of easily tweetable sarcastic remarks.

This article is worth reading in its entirety, but if you’re in a hurry, hopefully @AutoTLDR can summarize it for you in the comments.

9
submitted 2 years ago by [email protected] to c/[email protected]

cross-posted from: https://programming.dev/post/520933

I have to use a ton of regex in my new job (plz save me), and I use ChatGPT for all of it. My job would be 10x harder if it weren't for ChatGPT. It provides extremely detailed examples and warns you of situations where the regex may not perform as expected. Seriously, try it out.
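For example, here's the kind of pattern and caveat it gives you (a made-up illustration, not an actual ChatGPT transcript):

```python
import re

# Hypothetical example of the kind of regex ChatGPT suggests:
# extract ISO-8601-style dates (YYYY-MM-DD) from free text.
date_pattern = re.compile(r"\b\d{4}-\d{2}-\d{2}\b")

text = "Deployed on 2023-07-06, rollback planned for 2023-07-13."
print(date_pattern.findall(text))  # ['2023-07-06', '2023-07-13']

# The kind of warning it adds: this only matches the *shape* of a date,
# not its validity -- "2023-13-99" would also match.
```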

6
submitted 2 years ago by [email protected] to c/[email protected]

LlamaIndex is a simple, flexible data framework for connecting custom data sources to large language models.
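A minimal sketch of typical usage (assuming the 2023-era llama_index API and an OpenAI key in the environment; the ./data directory is a placeholder):

```python
# Minimal sketch: index your own files and query them with an LLM.
from llama_index import SimpleDirectoryReader, VectorStoreIndex

# Load documents from a local directory (placeholder path).
documents = SimpleDirectoryReader("./data").load_data()

# Build a vector index over the documents and ask a question.
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
print(query_engine.query("What does this project do?"))
```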

7
submitted 2 years ago by [email protected] to c/[email protected]

NVIDIA offers a consistent, full stack to develop on a GPU-powered on-premises or on-cloud instance. You can then deploy that AI application on any GPU-powered platform without code changes.

@AutoTLDR

11
Becoming an AI engineer (www.ignorance.ai)
submitted 2 years ago by [email protected] to c/[email protected]

I think software engineering will spawn a new subdiscipline, specializing in applications of AI and wielding the emerging stack effectively, just as “site reliability engineer”, “devops engineer”, “data engineer” and “analytics engineer” emerged.

The emerging (and least cringe) version of this role seems to be: AI Engineer.

@AutoTLDR

5
submitted 2 years ago by [email protected] to c/[email protected]

Everyone is about to get access to the single most useful, interesting mode of AI I have used - ChatGPT with Code Interpreter. I have had the alpha version of this for a couple months (I was given access as a researcher off the waitlist), and I wanted to give you a little bit of guidance as to why I think this is a really big deal, as well as how to start using it.

@AutoTLDR

16
submitted 2 years ago by [email protected] to c/[email protected]

We’re rolling out code interpreter to all ChatGPT Plus users over the next week.

It lets ChatGPT run code, optionally with access to files you've uploaded. You can ask ChatGPT to analyze data, create charts, edit files, perform math, etc.

We’ll be making these features accessible to Plus users on the web via the beta panel in your settings over the course of the next week.

To enable code interpreter:

  • Click on your name
  • Select beta features from your settings
  • Toggle on the beta features you’d like to try
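For a sense of what this looks like in practice, here's the kind of snippet it writes and runs against an uploaded CSV (a hypothetical example, not from the announcement; sales.csv and its columns are placeholders):

```python
# Hypothetical example of code-interpreter-style analysis of an uploaded file.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("sales.csv")

# Summarize revenue by month and save a chart for download.
monthly = df.groupby("month")["revenue"].sum()
monthly.plot(kind="bar", title="Revenue by month")
plt.tight_layout()
plt.savefig("revenue_by_month.png")
print(monthly.describe())
```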
[-] [email protected] 8 points 2 years ago* (last edited 2 years ago)

Here people actually react to what I post and write. And they react to the best possible interpretation of what I wrote, not the worst. And even if we disagree, we can still have a nice conversation.

Does anyone have a good theory about why the threadiverse is so much friendlier? Is it only because it's smaller? Is it because of the kind of people a new platform like this attracts? Because there is no karma? Maybe something else?

[-] [email protected] 7 points 2 years ago

Oh yes, terrible indeed. Saved.

[-] [email protected] 7 points 2 years ago* (last edited 2 years ago)

This describes 99% of AI startups.

The company I work for was considering using Mendable for AI-powered documentation search. I built a prototype using OpenAI embeddings and GPT-3.5 that was just as good as their product in a day. They didn’t buy Mendable :)
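Roughly what that prototype looked like (a from-memory sketch using the pre-1.0 openai Python client; everything outside the two API calls is made up, and OPENAI_API_KEY is assumed to be set):

```python
# Sketch of embeddings-based documentation search with the pre-1.0 openai client.
import numpy as np
import openai

def embed(texts):
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    return np.array([d["embedding"] for d in resp["data"]])

docs = ["How to install...", "How to configure...", "Troubleshooting..."]  # your docs here
doc_vectors = embed(docs)

def answer(question):
    q_vec = embed([question])[0]
    # ada-002 embeddings are unit-length, so a dot product is cosine similarity.
    best = docs[int(np.argmax(doc_vectors @ q_vec))]
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Answer using only the provided documentation."},
            {"role": "user", "content": f"Documentation:\n{best}\n\nQuestion: {question}"},
        ],
    )
    return resp["choices"][0]["message"]["content"]

print(answer("How do I install it?"))
```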

[-] [email protected] 8 points 2 years ago

Finally I could get into the beta and all I can say is wow, I’m in love with this app 🤩

Keep up the good work!

[-] [email protected] 10 points 2 years ago

I’m firmly in the print statement / console.log camp but this article convinced me to try using a debugger.

[-] [email protected] 8 points 2 years ago* (last edited 2 years ago)

I absolutely agree. But:

  • sometimes you need to modify existing code and you can't add the types necessary without a giant refactoring
  • you can't express units with types in:
    • JSON/YAML object keys
    • XML tag or attribute names
    • environment variable names
    • CLI switch names
    • database column names
    • HTTP query parameters
    • programming languages without a strong type system

Obviously as a Hungarian I have a soft spot for Hungarian notation :) But in these cases I think it's warranted.
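For example (made-up names), when the unit can't live in a type, I put it in the name:

```python
import os

# When the unit can't be carried by the type system (env vars, JSON keys,
# CLI flags, column names), encode it in the name itself.
request_timeout_ms = int(os.environ.get("REQUEST_TIMEOUT_MS", "5000"))
max_upload_bytes = int(os.environ.get("MAX_UPLOAD_BYTES", str(10 * 1024 * 1024)))

config = {
    "retry_delay_seconds": 2,   # the unit is visible at every call site
    "cache_ttl_minutes": 15,
}
```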

[-] [email protected] 7 points 2 years ago* (last edited 2 years ago)

Related: Making Wrong Code Look Wrong

TL;DR: there is good and bad Hungarian notation. Encoding types (like string or int) in variable names is bad. Encoding information that cannot be expressed in the type system is good. (Though with the development of type systems, more and more of those concepts can be moved into the types, keeping variable names clean.)

But as a Hungarian, I'm obviously a little biased :)
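A quick Python-flavored illustration of the article's distinction (the prefixes and helpers are made up, in the spirit of its examples):

```python
import html

def get_form_field(name: str) -> str:
    # Stand-in for reading raw user input (hypothetical helper).
    return "<script>alert('hi')</script>"

def render(s_fragment: str) -> None:
    # Stand-in for writing HTML output (hypothetical helper).
    print(s_fragment)

# "Good Hungarian": the prefix records a fact the type system doesn't track --
# whether a string has been escaped for HTML output.
us_comment = get_form_field("comment")  # us_ = unsafe, raw user input
s_comment = html.escape(us_comment)     # s_  = safe, escaped

render(s_comment)    # reads fine
render(us_comment)   # "render(us_...)" looks wrong at a glance -- that's the point
```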

[-] [email protected] 10 points 2 years ago

Well that sucks… for Reddit management

[-] [email protected] 9 points 2 years ago

Subscribed.

FYI programming.dev also has a Programmer Humor community

[-] [email protected] 7 points 2 years ago

It isn’t far-fetched that they use AI-powered bots to change the common sentiment about the blackout.

[-] [email protected] 8 points 2 years ago

I was there at the early days of Reddit. I started using it in 2008, registered in 2009. Lemmy feels a lot like what Reddit was in the beginning, before the enshittification started. A community of actual people, where commenting and posting don’t feel like shouting into the void. Others are just like me, regular people who want to have a conversation and kill some time on the internet.

[-] [email protected] 11 points 2 years ago

The more I think about it, the more it seems that the appropriate response is mutual defederation. It will cause a lot of unnecessary confusion if lemmy.world and the other affected instances don’t do that.
