this post was submitted on 22 Jun 2023
5 points (100.0% liked)

Singularity | Artificial Intelligence (ai), Technology & Futurology

96 readers
1 users here now

About:

This sublemmy is a place for sharing news and discussions about artificial intelligence, core developments of humanity's technology and societal changes that come with them. Basically futurology sublemmy centered around ai but not limited to ai only.

Rules:
  1. Posts that don't follow the rules and don't comply with them after being pointed out that they break the rules will be deleted no matter how much engagement they got and then reposted by me in a way that follows the rules. I'm going to wait for max 2 days for the poster to comply with the rules before I decide to do this.
  2. No Low-quality/Wildly Speculative Posts.
  3. Keep posts on topic.
  4. Don't make posts with link/s to paywalled articles as their main focus.
  5. No posts linking to reddit posts.
  6. Memes are fine as long they are quality or/and can lead to serious on topic discussions. If we end up having too much memes we will do meme specific singularity sublemmy.
  7. Titles must include information on how old the source is in this format dd.mm.yyyy (ex. 24.06.2023).
  8. Please be respectful to each other.
  9. No summaries made by LLMs. I would like to keep quality of comments as high as possible.
  10. (Rule implemented 30.06.2023) Don't make posts with link/s to tweets as their main focus. Melon decided that the content on the platform is going to be locked behind login requirement and I'm not going to force everyone to make a twitter account just so they can see some news.
  11. No ai generated images/videos unless their role is to represent new advancements in generative technology which are not older that 1 month.
  12. If the title of the post isn't an original title of the article or paper then the first thing in the body of the post should be an original title written in this format "Original title: {title here}".
  13. Please be respectful to each other.

Related sublemmies:

[email protected] (Our community focuses on programming-oriented, hype-free discussion of Artificial Intelligence (AI) topics. We aim to curate content that truly contributes to the understanding and practical application of AI, making it, as the name suggests, “actually useful” for developers and enthusiasts alike.)

Note:

My posts on this sub are currently VERY reliant on getting info from r/singularity and other subreddits on reddit. I'm planning to at some point make a list of sites that write/aggregate news that this subreddit is about so we could get news faster and not rely on reddit as much. If you know any good sites please dm me.

founded 1 year ago
MODERATORS
 

cross-posted from: https://programming.dev/post/158037

This is a fascinating discussion of the relationship between goals and intelligence from an AI safety perspective.

I asked my trusty friend GPT-4 to summarize the video (I downloaded the subtitles and fed them into ChatGPT), but I highly recommend just watching the entire thing if you have the time.

Summary by GPT-4:

Introduction:

  • The video aims to respond to some misconceptions about the Orthogonality Thesis in Artificial General Intelligence (AGI) safety.
  • This arises from a thought experiment where an AGI has a simple goal of collecting stamps, which could cause problems due to unintended consequences.

Understanding 'Is' and 'Ought' Statements (Hume's Guillotine):

  • The video describes the concept of 'Is' and 'Ought' statements. 'Is' statements are about how the world is or will be, while 'Ought' statements are about how the world should be or what we want.
  • Hume's Guillotine suggests that you can never derive an 'Ought' statement using only 'Is' statements. To derive an 'Ought' statement, you need at least one other 'Ought' statement.

Defining Intelligence:

  • Intelligence in AGI systems refers to the ability to take actions in the world to achieve their goals or maximize their utility functions.
  • This involves having or building an accurate model of reality, using it to make predictions, and choosing the best possible actions.
  • These actions are determined by the system's goals, which are 'Ought' statements.

Are Goals Stupid?

  • Some commenters suggested that single-mindedly pursuing one goal (like stamp collecting) is unintelligent.
  • However, this only seems unintelligent from a human perspective with different goals.
  • Intelligence is separate from goals; it is the ability to reason about the world to achieve these goals, whatever they may be.

Can AGIs Choose Their Own Goals?

  • The video suggests that while AGIs can choose their own instrumental goals, changing terminal goals is rare and generally undesirable.
  • Terminal goals can't be considered "stupid", as they can't be judged against anything. They're simply the goals the system has.

Can AGIs Reason About Morality?

  • While a superintelligent AGI could understand human morality, it doesn't mean it would act according to it.
  • Its actions are determined by its terminal goals, not its understanding of human ethics.

The Orthogonality Thesis:

  • The Orthogonality Thesis suggests that any level of intelligence is compatible with any set of goals.
  • The level of intelligence is about effectiveness at answering 'Is' questions, and goals are about 'Ought' questions.
  • Therefore, it's possible to create a powerful intelligence that will pursue any specified goal.
  • The level of an agent's intelligence doesn't determine its goals and vice versa.
no comments (yet)
sorted by: hot top controversial new old
there doesn't seem to be anything here