Singularity | Artificial Intelligence (AI), Technology & Futurology

96 readers
1 user here now

About:

This sublemmy is a place for sharing news and discussion about artificial intelligence, core developments of humanity's technology, and the societal changes that come with them. Basically a futurology sublemmy centered around AI, but not limited to AI only.

Rules:
  1. Posts that don't follow the rules, and whose posters don't comply after being told which rules they break, will be deleted no matter how much engagement they got, and then reposted by me in a way that follows the rules. I'm going to wait a maximum of 2 days for the poster to comply with the rules before I decide to do this.
  2. No low-quality or wildly speculative posts.
  3. Keep posts on topic.
  4. Don't make posts with links to paywalled articles as their main focus.
  5. No posts linking to reddit posts.
  6. Memes are fine as long as they are quality and/or can lead to serious on-topic discussions. If we end up having too many memes, we will make a meme-specific singularity sublemmy.
  7. Titles must include information on how old the source is in this format dd.mm.yyyy (ex. 24.06.2023).
  8. Please be respectful to each other.
  9. No summaries made by LLMs. I would like to keep the quality of comments as high as possible.
  10. (Rule implemented 30.06.2023) Don't make posts with links to tweets as their main focus. Melon decided that content on the platform is going to be locked behind a login requirement, and I'm not going to force everyone to make a twitter account just so they can see some news.
  11. No AI-generated images/videos unless their role is to represent new advancements in generative technology that are no older than 1 month.
  12. If the title of the post isn't the original title of the article or paper, then the first thing in the body of the post should be the original title, written in this format: "Original title: {title here}".

Related sublemmies:

[email protected] (Our community focuses on programming-oriented, hype-free discussion of Artificial Intelligence (AI) topics. We aim to curate content that truly contributes to the understanding and practical application of AI, making it, as the name suggests, “actually useful” for developers and enthusiasts alike.)

Note:

My posts on this sub are currently VERY reliant on getting info from r/singularity and other subreddits on reddit. I'm planning to at some point make a list of sites that write/aggregate the kind of news this community is about, so we can get news faster and not rely on reddit as much. If you know any good sites, please dm me.

founded 1 year ago
126

Is this like p2p except peers don't own the computers?

127

In a recent interview, Sam Altman, CEO of OpenAI, shared his vision for the future of AI and its potential to transform society. He emphasized the importance of democratizing AI and ensuring its benefits are accessible to everyone. Altman also discussed the need for global regulation to manage potential risks associated with powerful AI systems.

Altman believes that AI can significantly improve the quality of education and healthcare globally, and could even help eradicate poverty. However, he acknowledges the potential risks and advocates for a globally coordinated response to manage these risks.

One of the key points Altman made was about democratizing the control of OpenAI. He stated that the board of OpenAI should be democratized to all of humanity over time. He believes that the benefits, access to, and governance of AI technology should belong to humanity as a whole, and not be concentrated in the hands of a single company or individual.

Altman also addressed the relationship with Microsoft, stating that while it's not without its challenges, it's the best major partnership he's been a part of. He dismissed concerns about Microsoft having more control over OpenAI than the leadership realizes.

In terms of AI's potential to self-improve, Altman acknowledged that this is a classic sci-fi idea and that it's worth paying attention to, even though we're a ways away from that reality.

128

Microsoft's search engine platform is piloting advancements in image recognition. This new feature will allow users to upload images and ask questions related to these images. Currently, this functionality is undergoing a cautious testing phase on 5% of all searches to ensure safety, alignment, and quality before a broader rollout.

129

I've spent a lot of time on r/singularity over the past year, but in that time I've watched it grow pretty disappointing.

The complexity of "The Singularity" means that some great articles, like those on nanoparticles, get overlooked and buried because they're harder to understand than a ChatGPT meme (no offense to those, though).

Finding new, high-quality content to share on this topic isn't easy either, especially with the influx of sensationalist articles. And if I do find a great article, chances are high that @Martineski has already posted it (good job, by the way).

So, here's an idea: why don't we get some futuristic help? There's a bunch of websites that get posted here more frequently because their content is just more aligned with our interests (my guess). We could have an AI sift through the RSS feeds from these sites, pick out the best articles, post them here, and even include a brief summary explaining why you, an interested singularity enjoyer, should care.

This could help us find and understand the right content more easily, and kickstart more engaging discussions about the singularity and our future. It's a proactive step to keep our community vibrant and informative. Thoughts?
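
To make the idea concrete, here's a minimal sketch of such a bot in Python, assuming the feedparser library; the feed URLs are placeholders, and the keyword scorer is just a stand-in for the LLM call that would actually rate and summarize articles:

```python
import feedparser  # pip install feedparser

# Placeholder feed list -- swap in whatever sites the community settles on.
FEEDS = [
    "https://example.com/ai-news.rss",
    "https://example.com/nanotech.rss",
]

# Stand-in scorer: a real version would prompt an LLM with the title and
# summary and parse a relevance score (plus a short "why care" blurb)
# from its reply. Keyword counting just lets the sketch run end to end.
KEYWORDS = ("agi", "singularity", "nanotech", "robotics", "model")

def score_article(title: str, summary: str) -> int:
    text = f"{title} {summary}".lower()
    return sum(word in text for word in KEYWORDS)

def pick_best(max_posts: int = 5):
    candidates = []
    for url in FEEDS:
        for entry in feedparser.parse(url).entries:
            score = score_article(entry.get("title", ""), entry.get("summary", ""))
            candidates.append((score, entry.get("title", ""), entry.get("link", "")))
    candidates.sort(reverse=True)  # highest-scoring articles first
    return candidates[:max_posts]

if __name__ == "__main__":
    for score, title, link in pick_best():
        print(score, title, link)
```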

Yes this post was partly made with help from ChatGPT.

130

Abstract:

We introduce AudioPaLM, a large language model for speech understanding and generation. AudioPaLM fuses text-based and speech-based language models, PaLM-2 [Anil et al., 2023] and AudioLM [Borsos et al., 2022], into a unified multimodal architecture that can process and generate text and speech with applications including speech recognition and speech-to-speech translation. AudioPaLM inherits the capability to preserve paralinguistic information such as speaker identity and intonation from AudioLM and the linguistic knowledge present only in text large language models such as PaLM-2. We demonstrate that initializing AudioPaLM with the weights of a text-only large language model improves speech processing, successfully leveraging the larger quantity of text training data used in pretraining to assist with the speech tasks. The resulting model significantly outperforms existing systems for speech translation tasks and has the ability to perform zero-shot speech-to-text translation for many languages for which input/target language combinations were not seen in training. AudioPaLM also demonstrates features of audio language models, such as transferring a voice across languages based on a short spoken prompt.
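
(Not from the paper, just a toy sketch of the initialization trick the abstract mentions: reuse the pretrained text-LM embedding rows and append freshly initialized rows for discrete audio tokens. All names and dimensions below are made up for illustration.)

```python
import torch

# Illustrative sizes only -- not the real PaLM-2/AudioLM dimensions.
TEXT_VOCAB, AUDIO_VOCAB, D_MODEL = 32_000, 1_024, 512

# Stand-in for the pretrained text-only embedding matrix.
text_embeddings = torch.randn(TEXT_VOCAB, D_MODEL)

# New rows for audio tokens get a fresh (small) initialization, while the
# text rows keep their pretrained values -- which is how text pretraining
# can carry over to speech tasks.
audio_embeddings = torch.randn(AUDIO_VOCAB, D_MODEL) * 0.02

combined = torch.cat([text_embeddings, audio_embeddings], dim=0)
print(combined.shape)  # (33024, 512): one shared table over text + audio tokens
```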

133

https://youtu.be/Z02Obj8j6FQ

Joscha Bach believes that AGI is inevitable because it will emerge from civilization, not from individuals. Given our biological constraints, humans cannot achieve a high level of general intelligence on our own. Civilization evolves over generations to determine meaning, truth, and ethics. AI may follow a similar path, developing general intelligence through a process of cultural evolution.

Bach thinks AGI may become integrated into all parts of the world, including human minds and bodies. He believes a future where humans and AGI harmoniously coexist is possible if we develop a shared purpose and incentive to align.

Bach also believes that the alignment problem is less of a problem than we think. He argues that the alignment problem is not unique to AI and that it is a fundamental problem of civilization.

Text above created with the help of BingChat (gpt-4). I watched the whole discussion and agree with the created summary.

Leahy, on the other hand, was in my opinion far too alarmist and unhinged. He kept saying that he was pragmatic or practical, and then said that philosophy and category theory are groundless to his work but that he enjoyed using them in arguments anyway. All while cursing and saying "I don't want to die, I don't want my mom to die, I don't want Joscha to die". I also got the impression that he didn't understand Joscha's arguments. At the same time, I didn't understand his fear-ridden arguments.

Transcript: https://docs.google.com/document/d/1TUJhlSVbrHf2vWoe6p7xL5tlTK_BGZ140QqqTudF8UI/edit

134

According to António Pombeiro, Deputy Secretary-General of the Internal Administration, who spoke to journalists on 20 June in Porto, "if the pilot project goes well, we are prepared to start using the system to answer calls as of 2025."

Currently, he said, we are dealing with "a very recent technology" and there is a "need to do many tests"; admitting that for now we are "very much in the unknown", he stressed that the operation of the pilot project will be key.

"In certain situations, we have waiting periods due to the great amount of calls. This happens when there are incidents that involve a lot of publicity, a lot of people watching what is happening and everyone has the initiative to call 112", said António Pombeiro, giving the example of urban fires.

135

Background illustrators worry they’re being automated out of a job.

136

What are good practices to take when integrating AI into your life without compromising security?

140

If you are a lurker, please help us grow the community by commenting and posting interesting, on-topic, quality info/discussions, so we can attract more people and make this community more interesting to spend time on. ✌️

Previous milestone: https://lemmy.fmhy.ml/post/172109

141

cross-posted from: https://programming.dev/post/158037

This is a fascinating discussion of the relationship between goals and intelligence from an AI safety perspective.

I asked my trusty friend GPT-4 to summarize the video (I downloaded the subtitles and fed them into ChatGPT), but I highly recommend just watching the entire thing if you have the time.

Summary by GPT-4:

Introduction:

  • The video aims to respond to some misconceptions about the Orthogonality Thesis in Artificial General Intelligence (AGI) safety.
  • This arises from a thought experiment where an AGI has a simple goal of collecting stamps, which could cause problems due to unintended consequences.

Understanding 'Is' and 'Ought' Statements (Hume's Guillotine):

  • The video describes the concept of 'Is' and 'Ought' statements. 'Is' statements are about how the world is or will be, while 'Ought' statements are about how the world should be or what we want.
  • Hume's Guillotine suggests that you can never derive an 'Ought' statement using only 'Is' statements. To derive an 'Ought' statement, you need at least one other 'Ought' statement.

Defining Intelligence:

  • Intelligence in AGI systems refers to the ability to take actions in the world to achieve their goals or maximize their utility functions.
  • This involves having or building an accurate model of reality, using it to make predictions, and choosing the best possible actions.
  • These actions are determined by the system's goals, which are 'Ought' statements.

Are Goals Stupid?

  • Some commenters suggested that single-mindedly pursuing one goal (like stamp collecting) is unintelligent.
  • However, this only seems unintelligent from a human perspective with different goals.
  • Intelligence is separate from goals; it is the ability to reason about the world to achieve these goals, whatever they may be.

Can AGIs Choose Their Own Goals?

  • The video suggests that while AGIs can choose their own instrumental goals, changing terminal goals is rare and generally undesirable.
  • Terminal goals can't be considered "stupid", as they can't be judged against anything. They're simply the goals the system has.

Can AGIs Reason About Morality?

  • While a superintelligent AGI could understand human morality, it doesn't mean it would act according to it.
  • Its actions are determined by its terminal goals, not its understanding of human ethics.

The Orthogonality Thesis:

  • The Orthogonality Thesis suggests that any level of intelligence is compatible with any set of goals.
  • The level of intelligence is about effectiveness at answering 'Is' questions, and goals are about 'Ought' questions.
  • Therefore, it's possible to create a powerful intelligence that will pursue any specified goal.
  • The level of an agent's intelligence doesn't determine its goals and vice versa (a toy sketch of this follows below).
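
Not from the video, but a minimal sketch of the thesis's core claim, with made-up names: the same planning procedure ("intelligence") serves whatever utility function ("goal") you hand it.

```python
from itertools import product

# Three arbitrary actions in a toy world.
ACTIONS = ["collect_stamp", "make_paperclip", "plant_tree"]

def plan(utility, horizon=3):
    """The 'is' machinery: search all action sequences and return the one
    the given utility function scores highest. Nothing in here depends on
    what the goal actually is."""
    return max(product(ACTIONS, repeat=horizon), key=utility)

# The 'ought' part: two very different terminal goals, one shared planner.
stamp_goal = lambda seq: seq.count("collect_stamp")
clip_goal = lambda seq: seq.count("make_paperclip")

print(plan(stamp_goal))  # ('collect_stamp', 'collect_stamp', 'collect_stamp')
print(plan(clip_goal))   # ('make_paperclip', 'make_paperclip', 'make_paperclip')
```

Swapping the utility function changes the behavior completely while the planner stays byte-for-byte identical, which is the orthogonality claim in miniature.
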
142

AI is rapidly changing the speed and breadth of scientific discovery. In this discussion, Demis Hassabis, co-founder and CEO of DeepMind Technologies, shares his company's efforts in this space, followed by a conversation with Fei-Fei Li, Denning co-director of the Stanford Institute for Human-Centered Artificial Intelligence, on the future of AI.

This event took place on April 18, 2023. To learn about upcoming events, visit: https://hai.stanford.edu/events

147

Excellent insight into the mind of someone working at the forefront of AI research and development. One of the more relaxed interviews I've seen with Sam.
