Singularity | Artificial Intelligence (AI), Technology & Futurology

About:

This sublemmy is a place for sharing news and discussion about artificial intelligence, core developments in humanity's technology, and the societal changes that come with them. Basically, a futurology sublemmy centered around AI, but not limited to AI only.

Rules:
  1. Posts that break the rules and are not fixed after the violation is pointed out will be deleted, no matter how much engagement they got, and then reposted by me in a way that follows the rules. I will wait a maximum of 2 days for the poster to comply before doing this.
  2. No low-quality or wildly speculative posts.
  3. Keep posts on topic.
  4. Don't make posts with link/s to paywalled articles as their main focus.
  5. No posts linking to reddit posts.
  6. Memes are fine as long as they are high quality and/or can lead to serious on-topic discussions. If we end up with too many memes, we will create a meme-specific singularity sublemmy.
  7. Titles must include information on how old the source is in this format dd.mm.yyyy (ex. 24.06.2023).
  8. Please be respectful to each other.
  9. No summaries made by LLMs. I would like to keep the quality of comments as high as possible.
  10. (Rule implemented 30.06.2023) Don't make posts with link/s to tweets as their main focus. Melon decided that content on the platform is going to be locked behind a login requirement, and I'm not going to force everyone to make a Twitter account just so they can see some news.
  11. No AI-generated images/videos unless their role is to showcase new advancements in generative technology that are no older than 1 month.
  12. If the title of the post isn't the original title of the article or paper, then the first thing in the body of the post should be the original title, written in this format: "Original title: {title here}".

Related sublemmies:

[email protected] (Our community focuses on programming-oriented, hype-free discussion of Artificial Intelligence (AI) topics. We aim to curate content that truly contributes to the understanding and practical application of AI, making it, as the name suggests, “actually useful” for developers and enthusiasts alike.)

Note:

My posts on this sub currently rely VERY heavily on info from r/singularity and other subreddits on reddit. I'm planning at some point to compile a list of sites that write/aggregate the kind of news this sublemmy is about, so we can get news faster and not rely on reddit as much. If you know any good sites, please DM me.

226

The survey, carried out via survey platform Pollfish and commissioned by employment screening service Checkr, polled 3,000 employed American workers between April 27 and 28, 2023. An equal number of Boomers, Gen Xers, Millennials, and Gen Z were surveyed as part of the report.

It found that 79% of all American workers were fearful or unsure about the possibility of AI driving pay cuts for their position, with 82% of millennials feeling this way. Meanwhile, 76% of Gen Zers and a similar percentage of the other generations echoed the feeling.
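A quick sanity check of those figures, assuming the four generation groups really were equal (3,000 / 4 = 750 respondents each, as the article states):

```python
# Back-of-the-envelope check, assuming four equal generation groups.
respondents = 3000
group_size = respondents // 4   # 750 per generation
overall, millennials, gen_z = 0.79, 0.82, 0.76

# With equal groups, the overall rate is the plain average of the four
# generation rates, so Boomers and Gen X must together average:
boomer_genx_avg = (4 * overall - millennials - gen_z) / 2
print(f"Implied Boomer/Gen X average: {boomer_genx_avg:.0%}")  # 79%
```

That implied 79% is consistent with "a similar percentage of the other generations."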

229

In some cases, safer methods for AI systems can lead to reduced performance, a cost which is known as an alignment tax. In general, any alignment tax may hinder the adoption of alignment methods, due to pressure to deploy the most capable model. Our results below show that process supervision in fact incurs a negative alignment tax, at least in the math domain. This could increase the adoption of process supervision, which we believe would have positive alignment side-effects.

It is unknown how broadly these results will generalize beyond the domain of math, and we consider it important for future work to explore the impact of process supervision in other domains. If these results generalize, we may find that process supervision gives us the best of both worlds – a method that is both more performant and more aligned than outcome supervision.
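For anyone who hasn't read the paper: outcome supervision scores only the final answer, while process supervision scores every reasoning step. A minimal sketch of the difference; the reward-model functions here are hypothetical stand-ins for trained models, not the paper's code:

```python
from math import prod

def score_outcome(solution: str, orm_score) -> float:
    """Outcome supervision: one score for the whole solution, judged
    only by its final answer. `orm_score` is a hypothetical stand-in
    for a trained outcome reward model (ORM)."""
    return orm_score(solution)

def score_process(steps: list[str], prm_step_prob) -> float:
    """Process supervision: each reasoning step gets its own correctness
    probability; the paper scores a solution as the product of per-step
    probabilities, i.e. the chance that every step is correct.
    `prm_step_prob` is a hypothetical stand-in for a trained PRM."""
    return prod(prm_step_prob(step) for step in steps)

def best_of_n(candidate_solutions, prm_step_prob):
    """Pick the highest-scoring of N sampled solutions, as in the
    paper's best-of-N evaluation."""
    return max(candidate_solutions,
               key=lambda steps: score_process(steps, prm_step_prob))
```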

230

Grabbing an object as thin as a plastic bag is an incredible challenge for robotic hands, but Phoenix robots have the fine manipulation required to do it.

Learn more at https://sanctuary.ai/

More videos on their yt channel: https://www.youtube.com/@sanctuaryai/videos

231

Fascinating new paper from Jin and Rinard at MIT showing that models might develop semantic understanding despite being trained only on next-token prediction over text: "We present evidence that language models can learn meaning despite being trained only to perform next token prediction on text, specifically a corpus of programs."
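Their core tool is a probing classifier: a small model trained to read off the program's intermediate state from the LM's hidden activations. A rough sketch of the idea, with illustrative names and sizes (this is not the authors' code):

```python
import torch
import torch.nn as nn

# If a probe this simple can decode the program's intermediate state
# from the LM's hidden activations, the representation plausibly
# encodes semantics rather than just surface statistics of the text.
hidden_dim, num_program_states = 1024, 16   # illustrative sizes

probe = nn.Linear(hidden_dim, num_program_states)
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def probe_step(hidden: torch.Tensor, state_labels: torch.Tensor) -> float:
    """hidden: (batch, hidden_dim) LM activations at chosen token
    positions; state_labels: (batch,) ground-truth program states."""
    optimizer.zero_grad()
    loss = loss_fn(probe(hidden), state_labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```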

233

Today marks exactly one year since I got closed beta access to DALL-E. At the time, it was truly an absolute marvel of technology for me, the most striking of my life, to be exact. It got to the point where I found it curious that people on the subway had no idea what I could access with the phone in my pocket. It really felt like a top-secret government project, because it was just such a wondrous sci-fi product.

A year later, the base DALL-E 2 model is completely unimpressive in every respect. Even an average Joe with zero interest in technology or AI would recognize that current open-source variants are leaps and bounds better in quality, coherence, gloss, and resolution, can be run on a mid-priced laptop instead of in a data center, and can be fine-tuned completely freely. In every aspect, DALL-E is outdated (important: we're talking about 2.0 here, not the newer, upcoming DALL-E models).

Something that felt like sci-fi technology from my wildest dreams a year ago is now underwhelming, lackluster, and uninteresting in every way.

I think the gap between today and a year from now will far outstrip the gap between a year ago and today. We have no idea what's coming. It will probably exceed any imagination.

241

Clip from COMPUTEX 2023: https://www.youtube.com/watch?v=_SloSMr-gFI

I would have linked the official video, but it went private.

242

Researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) have achieved a groundbreaking advancement in language modeling, in a field dominated by large language models (LLMs).

The CSAIL team has pioneered an innovative approach to language modeling that challenges the conventional belief that smaller models possess limited capabilities. The research introduces a scalable, self-learning model that surpasses counterparts up to 500 times its size on specific language-understanding tasks, all without reliance on human-generated annotations.
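The article is light on implementation detail, but the generic self-learning recipe it refers to looks roughly like this pseudo-labeling loop (a sketch under my own assumptions, not CSAIL's code; `model` is assumed to expose scikit-learn-style `fit`/`predict_proba`):

```python
def self_train(model, unlabeled_texts, confidence_threshold=0.9, rounds=3):
    """Generic self-training loop: the model labels its own data and is
    retrained on the predictions it is most confident about, so no
    human-generated annotations are needed."""
    for _ in range(rounds):
        pseudo_labeled = []
        for text in unlabeled_texts:
            probs = model.predict_proba([text])[0]
            label, conf = probs.argmax(), probs.max()
            if conf >= confidence_threshold:   # keep only confident guesses
                pseudo_labeled.append((text, label))
        if not pseudo_labeled:
            break
        texts, labels = zip(*pseudo_labeled)
        model.fit(list(texts), list(labels))   # retrain on pseudo-labels
    return model
```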

243

It takes a few seconds to generate text and a minute or so to generate an image. This is basically how fast it was using dialup. Back then the internet was very limited yet very useful. Just like now.

LLMs are still extremely unoptimized, just like dialup was. To act like they won't improve would be like acting as if dialup wasn't going to improve. It did.

What would happen if you could get real-time images and real-time text? You ask it a question, and it responds in less than a second. What would it mean if these AIs were dirt cheap yet also powerful (GPT-4), or if AI video were instant? We could generate full stories in seconds, and pair a GPT model with a video AI capable of generating full-fledged videos.

Back in the dialup days it was impossible to view images live; you had to download them. It was impossible to even watch a single video without downloading it first. One day that changed, and one day it will change for AI too. We will easily be capable of creating platforms like TikTok where everything is AI-made.

The question I have is: what happens if you pair generative video (audio, video, and voice) with generative text and a TikTok-style algorithm that optimizes each aspect around the user's preferences? We may end up with some of the most addictive social media platforms possible.

Each video is tailored specifically for what would make you most interested in it. It is generated specifically for that.
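To make that concrete, here's a toy sketch of the loop being described: generate a clip from a preference vector, measure engagement, update the vector. Every name here and the update rule itself are hypothetical:

```python
import numpy as np

def feed_loop(user_pref: np.ndarray, generate_clip, watch_fraction,
              lr=0.1, steps=100):
    """Hypothetical generative-feed loop. `generate_clip` maps a
    preference vector to (clip, clip_embedding); `watch_fraction`
    returns how much of the clip the user watched (0..1), the implicit
    engagement signal the post describes."""
    for _ in range(steps):
        clip, clip_vec = generate_clip(user_pref)
        engagement = watch_fraction(clip)
        # Move preferences toward embeddings of clips the user watched,
        # away from ones they skipped -- a bandit-style update.
        user_pref += lr * (engagement - 0.5) * (clip_vec - user_pref)
    return user_pref
```

Run long enough, the loop converges on whatever keeps `watch_fraction` high, which is exactly what would make such a platform so addictive.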

Once we move out of the dialup age of AI, which looks closer than ever with these major advances, this tech will explode in popularity and capability. We are just early in adopting and embracing this tech, so we are seen as weird.

244

TLDR

Interesting. It seems from the article that OpenAI, Anthropic, and Google DeepMind have all agreed to let the UK government take a look at their large AI models "to help build better evaluations and help us better understand the opportunities and risks of these systems," as the UK prime minister said.

The UK will be taking a "pro-innovation" approach to AI, though they say they will be installing "guardrails". The UK government also plans to create a "Foundation Model Taskforce" to oversee such models.

Also, there will be a summit in the UK later this year that will function similarly to the UN COP, except it will be about AI rather than climate change.

247

I am reading a paper on Tandem Dearomatization/Enantioselective Allylic Alkylation of Pyridines, a very expert-level, hard chemistry paper. It is quite difficult, drawing on a lot of chemistry jargon and ideas. But one thing I noticed is that, given context, GPT models are capable of providing logical and reasonable summaries of the work. They are really smart with context.

For instance, the paper describes Kumada and Stille cross-couplings, which essentially combine two molecules. By itself, I doubt GPT-3.5 would get that really accurately. But with the context of the paper, it describes the versatility of such reactions within the whole work, such as why the researchers perform a given reaction.

This is just one of many instances where the 16k GPT-3.5 shines. Its ability to take in the ENTIRE PAPER, along with its innate understanding, allows it to be expert-level, if not higher, at chemistry despite being a GPT-3.5 model. It can understand the point of individual reactions in the context of the overall paper, along with what those reactions can accomplish. This is expert-grade stuff, stuff I can barely do.

To me, this just shows: CONTEXT IS KEY.
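For anyone who wants to reproduce this: a minimal sketch of the workflow, using the 16k-context model through the OpenAI Python library as it existed at the time (the prompt wording is mine, not something from the post):

```python
import openai  # pip install openai (the 0.27-era API is shown here)

def ask_about_paper(paper_text: str, question: str) -> str:
    """Feed the ENTIRE paper into the 16k-context model, then ask a
    question about one reaction in the context of the whole work."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-16k",
        messages=[
            {"role": "system",
             "content": "You are an expert organic chemist. Answer "
                        "using only the paper provided."},
            {"role": "user", "content": paper_text},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# e.g. ask_about_paper(full_text,
#     "Why do the authors use a Kumada cross-coupling at this stage?")
```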

Edit: wanted to say one last thing. I believe many of the cases where AI fails aren't because the AI is incapable, but rather because the user is incapable of using it properly. This is one case where, given enough information and knowledge of how to use these models, you can get near-perfect results. And I suspect that with further context, along with the better searches these models can do, lawyers citing made-up cases probably won't happen in the long run.

248
Job moment (lemmy.fmhy.ml)