Singularity | Artificial Intelligence (AI), Technology & Futurology

96 readers
1 user here now

About:

This sublemmy is a place for sharing news and discussion about artificial intelligence, core developments in humanity's technology, and the societal changes that come with them. Basically, a futurology sublemmy centered on AI, but not limited to AI only.

Rules:
  1. Posts that break the rules, and whose posters don't bring them into compliance after the violation is pointed out, will be deleted no matter how much engagement they got, and then reposted by me in a way that follows the rules. I'll wait a maximum of 2 days for the poster to comply before doing this.
  2. No low-quality or wildly speculative posts.
  3. Keep posts on topic.
  4. Don't make posts whose main focus is a link to a paywalled article.
  5. No posts linking to reddit posts.
  6. Memes are fine as long as they are quality and/or can lead to serious on-topic discussion. If we end up having too many memes, we will make a meme-specific singularity sublemmy.
  7. Titles must include information on how old the source is in this format dd.mm.yyyy (ex. 24.06.2023).
  8. Please be respectful to each other.
  9. No summaries made by LLMs. I would like to keep the quality of comments as high as possible.
  10. (Rule implemented 30.06.2023) Don't make posts whose main focus is a link to a tweet. Melon decided that the content on the platform is going to be locked behind a login requirement, and I'm not going to force everyone to make a Twitter account just so they can see some news.
  11. No AI-generated images/videos unless their role is to showcase new advancements in generative technology that are no older than 1 month.
  12. If the title of the post isn't the original title of the article or paper, then the first thing in the body of the post should be the original title, written in this format: "Original title: {title here}".

Related sublemmies:

[email protected] (Our community focuses on programming-oriented, hype-free discussion of Artificial Intelligence (AI) topics. We aim to curate content that truly contributes to the understanding and practical application of AI, making it, as the name suggests, “actually useful” for developers and enthusiasts alike.)

Note:

My posts on this sub are currently VERY reliant on getting info from r/singularity and other subreddits on Reddit. I'm planning to at some point make a list of sites that write/aggregate the kind of news this sublemmy is about, so we can get news faster and not rely on Reddit as much. If you know any good sites, please DM me.

founded 1 year ago

With the rise of large language models (LLMs) like GPT-4, I really look forward to having a personal AI assistant that has long-term memory and can learn what I like, hate, want and need. It could help me like a real assistant or even a partner. It would know my strengths, weaknesses and could give me a plan to become the best version of myself. It could give me very personalized advice and track my progress on various aspects of life, such as work, relationships, fitness, diet, etc.

It could have a model of my mind and know exactly what I prefer or dislike. For example, it could predict if I would enjoy a movie or not (I know we already have recommendation systems, but what I'm saying is on a next level, as it knows everything about me and my personality, not just other movies I liked). It could be better than any therapist in the world, as it knows much more about me and is here to help 24/7.

I think we're very close to this technology. The only big obstacles to achieving it are the context limit of LLMs and privacy concerns.
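
Until context windows get big enough, the usual workaround is retrieval: keep the "memories" outside the model and pull only the relevant ones into each prompt. A minimal sketch of that idea, assuming a hypothetical `embed` stand-in for a real text-embedding model:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Hypothetical stand-in: swap in a real text-embedding model here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(64)
    return v / np.linalg.norm(v)

memories: list[tuple[str, np.ndarray]] = []  # the assistant's long-term store

def remember(fact: str) -> None:
    memories.append((fact, embed(fact)))

def recall(query: str, k: int = 3) -> list[str]:
    # Rank stored facts by similarity and return the top k, which then
    # get pasted into the LLM's (limited) context window.
    q = embed(query)
    ranked = sorted(memories, key=lambda m: -float(m[1] @ q))
    return [fact for fact, _ in ranked[:k]]

remember("User dislikes horror movies.")
remember("User is training for a half marathon.")
print(recall("suggest a movie for tonight"))
```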

What are your opinions on this?


It's very interesting how we've made basically no progress in our understanding of "sentience" and still debate it the same way we did decades ago.


Samples: https://ai.honu.io/papers/musicgen/

Code and models: https://github.com/facebookresearch/audiocraft

Paper: https://arxiv.org/abs/2306.05284

Abstract:

We tackle the task of conditional music generation. We introduce MusicGen, a single Language Model (LM) that operates over several streams of compressed discrete music representation, i.e., tokens. Unlike prior work, MusicGen is comprised of a single-stage transformer LM together with efficient token interleaving patterns, which eliminates the need for cascading several models, e.g., hierarchically or upsampling. Following this approach, we demonstrate how MusicGen can generate high-quality samples, while being conditioned on textual description or melodic features, allowing better controls over the generated output. We conduct extensive empirical evaluation, considering both automatic and human studies, showing the proposed approach is superior to the evaluated baselines on a standard text-to-music benchmark. Through ablation studies, we shed light over the importance of each of the components comprising MusicGen.
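
For anyone who wants to try it, usage per the audiocraft repo's README looks roughly like this; treat it as a sketch and check the linked repo, since model names and the exact API may have changed:

```python
# Sketch based on the audiocraft README; verify against the linked repo.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained('small')  # other sizes: 'medium', 'large', 'melody'
model.set_generation_params(duration=8)   # seconds of audio per sample

wav = model.generate(['happy rock', 'energetic EDM'])  # text-conditioned

for idx, one_wav in enumerate(wav):
    # Writes {idx}.wav with loudness normalization.
    audio_write(f'{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness")
```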


Full title: Voyager: An Open-Ended Embodied Agent with Large Language Models - "the first LLM-powered embodied lifelong learning agent in Minecraft that continuously explores the world, acquires diverse skills, and makes novel discoveries without human intervention"

Abstract:

We introduce Voyager, the first LLM-powered embodied lifelong learning agent in Minecraft that continuously explores the world, acquires diverse skills, and makes novel discoveries without human intervention. Voyager consists of three key components: 1) an automatic curriculum that maximizes exploration, 2) an ever-growing skill library of executable code for storing and retrieving complex behaviors, and 3) a new iterative prompting mechanism that incorporates environment feedback, execution errors, and self-verification for program improvement. Voyager interacts with GPT-4 via blackbox queries, which bypasses the need for model parameter fine-tuning. The skills developed by Voyager are temporally extended, interpretable, and compositional, which compounds the agent's abilities rapidly and alleviates catastrophic forgetting. Empirically, Voyager shows strong in-context lifelong learning capability and exhibits exceptional proficiency in playing Minecraft. It obtains 3.3x more unique items, travels 2.3x longer distances, and unlocks key tech tree milestones up to 15.3x faster than prior SOTA. Voyager is able to utilize the learned skill library in a new Minecraft world to solve novel tasks from scratch, while other techniques struggle to generalize.
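
The "iterative prompting mechanism" in the abstract roughly amounts to a refine-until-verified loop. A minimal sketch of how I read it (not the authors' code; the three helper functions are hypothetical stand-ins for the GPT-4 queries and the game environment):

```python
def llm_write_skill(task: str, feedback: str) -> str:
    # Hypothetical: prompt GPT-4 for executable code, given last round's feedback.
    return f"# code attempting: {task}"

def run_in_minecraft(code: str) -> tuple[bool, str]:
    # Hypothetical: execute the skill; return success + env feedback/exec errors.
    return True, "task state looks complete"

def self_verify(task: str, feedback: str) -> bool:
    # Hypothetical: ask GPT-4 whether the world state shows the task is done.
    return True

skill_library: dict[str, str] = {}  # the "ever-growing skill library"

def learn_skill(task: str, max_rounds: int = 4) -> bool:
    feedback = ""
    for _ in range(max_rounds):                # refine until verified
        code = llm_write_skill(task, feedback)
        ok, feedback = run_in_minecraft(code)  # feedback drives the next attempt
        if ok and self_verify(task, feedback):
            skill_library[task] = code         # stored for reuse and composition
            return True
    return False

print(learn_skill("craft a wooden pickaxe"))
```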

Project page: https://voyager.minedojo.org/

Paper: https://arxiv.org/abs/2305.16291


A few days ago some huge news was released, but it seems no one really picked up on it.

So there is this giant genomic sequencing company called Illumina (stock ticker: ILMN).

It's based in San Diego, and something like 90% of the world's genomic sequencing is done on Illumina's equipment.

They just announced the launch of their own AI (a neural network similar to Google's AlphaFold2 or even ChatGPT).

Here is a video that explains how Google's AlphaFold2 directly led to Illumina launching this AI and what is likely to happen next:

https://youtu.be/T8as0Qd1MRk

There are massive implications to this.

Basically, genomics and DNA are this massive pool of data that we don't understand, because we have no way of sorting through such large amounts of data to gain important insights.

Then, somewhere around 2017, there were a few large breakthroughs in AI tech. That's why we are seeing all these new things like ChatGPT, Midjourney, Stable Diffusion, Google's AlphaFold2, etc.

Now that technology is getting applied to sifting through all the massive DNA data we have.
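
For a sense of what "applying neural networks to DNA data" looks like mechanically: sequences get encoded as numeric arrays that a model can consume. A toy sketch (my illustration, not Illumina's actual pipeline):

```python
# Toy sketch: DNA as neural-net input. Real genomics models work on
# encodings like this, just at enormous scale.
import numpy as np

BASES = "ACGT"

def one_hot(seq: str) -> np.ndarray:
    """Encode a DNA string as a (length, 4) one-hot matrix."""
    out = np.zeros((len(seq), 4), dtype=np.float32)
    for i, base in enumerate(seq):
        out[i, BASES.index(base)] = 1.0
    return out

print(one_hot("GATTACA"))  # a 7x4 matrix a network can take as input
```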

Combine this with technology like CRISPR, which can modify DNA by cutting out and inserting new DNA sequences.

So, right now we can write/edit the code that all life runs on, and with neural networks and genomics we should be able to learn what each bit of that code means.

Once we are able to do that, genetic engineering will become very effective and simple, and genetic advancements will accelerate exponentially.

(By the way, the legendary billionaire investors have already sniffed this out: Peter Thiel, Carl Icahn and Stanley Druckenmiller are all either buying up Illumina or trying to launch competing products.)

Some of the things that will be possible:

  1. Human genetic engineering - change eye color, height, muscle, intelligence, etc. Basically, you could design humans the way you design video game characters. Whether this will only be possible for embryos or we will actually be able to modify adults isn't apparent yet, but most disease would be gone and most people would likely have close to "perfect genes" in terms of not being sick, not having any weaknesses, etc.

  2. Bacteria for everything - right now we know it's possible to have bacteria that eat plastics and other bacteria that produce biofuel. We just can't do it at scale; it's very difficult. With genetic engineering this could accelerate, allowing us to clean up the oceans, clean the air of CO2, etc. (This was suggested by researchers at Google's DeepMind, who said it might be possible with advancements in this tech.)

  3. Recreate extinct species - this sounds like... Jurassic Park? But in a good way, hopefully.

And tons of things that we can't even imagine. Basically, DNA encodes everything that life is able to do, so there isn't really a limit to what we can do once we understand how it works.

I'm curious what people think about this.

This seems like massive, massive news...

Most people interested in AI are looking at Google, NVIDIA, OpenAI etc.

But bio-tech seems to be where the biggest applications of AI neural networks will show up.

Are we about to experience a massive Bio-Tech revolution driven by AI neural networks?

I mean, I would be happy to have robots/AI take over all work... but if I'm too sick or tired to really enjoy it, then it loses a lot of the appeal.

I want to be a super fit, super healthy, energetic and beautiful human being AND have the robots take care of all the stuff I don't want to do.

While Big Tech is working on automation AI, bio-tech needs to be making sure we are healthy and alive to enjoy it.

A question to all the people who are 50+, 60+, etc.: would you take an experimental gene therapy treatment that would basically restore you to your 30-year-old self, but in better shape?

Like, if there was a 1% chance of massive complications, and you were, let's say, 65 years old and not feeling good, would you roll the dice?

(I would 100%)


ELI5:

It is trained on raw bytes, so it can theoretically learn to interpret and predict anything that is digitizable.

Text, audio, images... in computer memory they're all made out of this same raw form of data.
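
A tiny illustration of that point (my example, not from the paper): in memory, every modality really is the same stream of integers from 0 to 255:

```python
# Three "different" modalities, one raw form: sequences of byte values.
text = "hi".encode("utf-8")      # text as bytes
png  = bytes([137, 80, 78, 71])  # the signature that starts every PNG image
wav  = b"RIFF"                   # the header that starts every WAV audio file

for name, b in [("text", text), ("png", png), ("wav", wav)]:
    print(name, list(b))  # all just integers a byte-level model can predict
```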


Abstract:

Fine-tuning large-scale Transformers has led to the explosion of many AI applications across Natural Language Processing and Computer Vision tasks. However, fine-tuning all pre-trained model parameters becomes impractical as the model size and number of tasks increase. Parameter-efficient transfer learning (PETL) methods aim to address these challenges. While effective in reducing the number of trainable parameters, PETL methods still require significant energy and computational resources to fine-tune. In this paper, we introduce REcurrent ADaption (READ) -- a lightweight and memory-efficient fine-tuning method -- to overcome the limitations of the current PETL approaches. Specifically, READ inserts a small RNN network alongside the backbone model so that the model does not have to back-propagate through the large backbone network. Through comprehensive empirical evaluation of the GLUE benchmark, we demonstrate READ can achieve a 56% reduction in the training memory consumption and an 84% reduction in the GPU energy usage while retaining high model quality compared to full-tuning. Additionally, the model size of READ does not grow with the backbone model size, making it a highly scalable solution for fine-tuning large Transformers.

Paper: https://arxiv.org/abs/2305.15348
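
As I read the abstract, the trick is that gradients never enter the backbone: a small trainable RNN rides alongside it on detached hidden states. A hedged PyTorch sketch of that idea; the dimensions and wiring are my guesses, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class READSketch(nn.Module):
    def __init__(self, backbone: nn.Module, hidden: int = 768, rnn_dim: int = 64):
        super().__init__()
        self.backbone = backbone.eval()
        for p in self.backbone.parameters():
            p.requires_grad_(False)          # the large model stays frozen
        self.rnn = nn.GRU(hidden, rnn_dim, batch_first=True)
        self.out = nn.Linear(rnn_dim, hidden)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():                # no backprop through the backbone
            h = self.backbone(x)             # (batch, seq, hidden)
        corr, _ = self.rnn(h)                # small RNN over the hidden states
        return h + self.out(corr)            # cheap learned correction

# Toy usage with a stand-in "backbone"; only the GRU + Linear get gradients.
backbone = nn.Sequential(nn.Linear(768, 768), nn.ReLU(), nn.Linear(768, 768))
model = READSketch(backbone)
loss = model(torch.randn(2, 10, 768)).sum()
loss.backward()                              # never traverses the backbone
```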


Code will be released in June.

Github: https://github.com/XingangPan/DragGAN


Summary:

  • An MIT study provides evidence that AI language models may be capable of learning meaning, rather than just being "stochastic parrots".
  • The team trained a model using the Karel programming language and showed that it was capable of semantically representing the current and future states of a program (see the toy sketch below).
  • The results of the study challenge the widely held view that language models merely represent superficial statistical patterns and syntax.
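
To make that distinction concrete, here is a toy illustration (mine, not the paper's): a Karel-style program is just text, while its *meaning* is the state trace it induces. The study probes whether an LM trained only on program text ends up encoding something like that trace:

```python
# A three-instruction Karel-style program and its semantics (state trace).
program = ["move", "turnLeft", "move"]

state = {"pos": (0, 0), "dir": (1, 0)}  # grid position and facing (east)
trace = [dict(state)]
for op in program:
    if op == "move":
        state["pos"] = (state["pos"][0] + state["dir"][0],
                        state["pos"][1] + state["dir"][1])
    elif op == "turnLeft":
        dx, dy = state["dir"]
        state["dir"] = (-dy, dx)        # rotate 90 degrees counterclockwise
    trace.append(dict(state))

print(trace)  # does an LM trained only on the text encode this?
```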

We know that some animals have forms of communication among themselves.

Do you think AI will help us communicate with them effectively (within their limits)?

This could eventually lead humanity to treat them with more respect.

I'm curious about your opinions :))
