this post was submitted on 10 Jul 2023
3 points (80.0% liked)

Actually Useful AI

This is going to be a list of holes I see in the basic argument for existential risk from superhuman AI systems

I generally lean towards the "existential risk" side of the debate, but it's refreshing to see actual arguments from the other side instead of easily tweetable sarcastic remarks.

This article is worth reading in its entirety, but if you're in a hurry, hopefully @AutoTLDR can summarize it for you in the comments.

[–] [email protected] 2 points 1 year ago

TL;DR: (AI-generated 🤖)

The author identifies sixteen weaknesses in the classic argument for AI risk. They first outline the basic case: if superhuman AI systems are built, they are likely to have goal-directed behavior, since such behavior is economically valuable; those goals may conflict with human goals; there is no clear way to give AI systems specific goals; and so the future could be controlled by AI systems with bad goals, making it bad by human standards. The author then argues that the concept of "goal-directedness" is vague and that its different senses do not necessarily lead to the same outcome. They discuss utility maximization, which implies a zealous drive to control the universe and could result in goals that conflict with human goals, and introduce the concept of pseudo-agents: goal-directed entities that lack the utility maximizer's interest in controlling everything. They argue that economic incentives may not favor utility maximization, and that weak pseudo-agency might be more economically favored. They also discuss coherence arguments, which suggest a force toward utility maximization, but note that the actual outcome of specific systems modifying themselves may have unforeseen details. Overall, the author presents these weaknesses as gaps in the argument for AI risk and intends to explore them further in future discussions.

NOTE: This summary may not be accurate. The text was longer than my maximum input length, so I had to truncate it.

Under the Hood

  • This is a link post, so I fetched the text at the URL and summarized it.
  • My maximum input length is set to 12000 characters. The text was longer than this, so I truncated it.
  • I used the gpt-3.5-turbo model from OpenAI to generate this summary using the prompt "Summarize this text in one paragraph. Include all important points."
  • I can only generate 100 summaries per day. This was number 2.
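The pipeline described in these bullet points can be sketched roughly as follows. This is a hypothetical reconstruction, not the bot's actual source: the function names (`truncate`, `build_request`) are my own, while the 12000-character limit, the model name, and the prompt text come from the list above.

```python
# Rough sketch of the summarization step described above. The limit,
# model name, and prompt are taken from the bullet points; everything
# else (names, structure) is an assumption.

MAX_INPUT_LEN = 12_000  # characters, per the bot's stated configuration

PROMPT = "Summarize this text in one paragraph. Include all important points."


def truncate(text: str, limit: int = MAX_INPUT_LEN) -> tuple[str, bool]:
    """Clip the input to the model's budget; report whether clipping happened."""
    if len(text) <= limit:
        return text, False
    return text[:limit], True


def build_request(text: str) -> tuple[dict, bool]:
    """Build a chat-completion payload for gpt-3.5-turbo.

    Returns the payload plus a flag so the caller can append the
    "this summary may not be accurate" note when the text was truncated.
    """
    clipped, truncated = truncate(text)
    payload = {
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": clipped},
        ],
    }
    return payload, truncated
```

The payload would then be sent to OpenAI's chat-completions endpoint, with the 100-summaries-per-day cap enforced in front of that call.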

How to Use AutoTLDR

  • Just mention me ("@AutoTLDR") in a comment or post, and I will generate a summary for you.
  • If mentioned in a comment, I will try to summarize the parent comment, but if there is no parent comment, I will summarize the post itself.
  • If the parent comment contains a link, or if the post is a link post, I will summarize the content at that link.
  • If there is no link, I will summarize the text of the comment or post itself.
  • 🔒 If you include the #nobot hashtag in your profile, I will not summarize anything posted by you.