Singularity | Artificial Intelligence (AI), Technology & Futurology

96 readers
1 user here now

About:

This sublemmy is a place for sharing news and discussion about artificial intelligence, core developments in humanity's technology, and the societal changes that come with them. Basically, a futurology sublemmy centered around AI, but not limited to AI only.

Rules:
  1. Posts that break the rules, and whose posters don't fix them after the violation is pointed out, will be deleted no matter how much engagement they got, and then reposted by me in a way that follows the rules. I'll wait a maximum of 2 days for the poster to comply before doing this.
  2. No low-quality or wildly speculative posts.
  3. Keep posts on topic.
  4. Don't make posts whose main focus is a link (or links) to paywalled articles.
  5. No posts linking to reddit posts.
  6. Memes are fine as long as they are quality and/or can lead to serious on-topic discussions. If we end up having too many memes, we will create a meme-specific singularity sublemmy.
  7. Titles must include information on how old the source is, in the format dd.mm.yyyy (e.g. 24.06.2023).
  8. Please be respectful to each other.
  9. No summaries made by LLMs. I would like to keep quality of comments as high as possible.
  10. (Rule implemented 30.06.2023) Don't make posts whose main focus is a link (or links) to tweets. Melon decided that content on the platform is going to be locked behind a login requirement, and I'm not going to force everyone to make a Twitter account just so they can see some news.
  11. No AI-generated images/videos unless their role is to showcase new advancements in generative technology that are no older than 1 month.
  12. If the title of the post isn't the original title of the article or paper, then the first thing in the body of the post should be the original title, written in this format: "Original title: {title here}".

Related sublemmies:

[email protected] (Our community focuses on programming-oriented, hype-free discussion of Artificial Intelligence (AI) topics. We aim to curate content that truly contributes to the understanding and practical application of AI, making it, as the name suggests, “actually useful” for developers and enthusiasts alike.)

Note:

My posts on this sub are currently VERY reliant on getting info from r/singularity and other subreddits on Reddit. I'm planning at some point to make a list of sites that write/aggregate the kind of news this sublemmy is about, so we can get news faster and not rely on Reddit as much. If you know any good sites, please DM me.

founded 1 year ago
276
 
 

Google puts its foot on the accelerator, casting aside safety concerns to not only release a GPT-4-competitive model, PaLM 2, but also announce that they are already training Gemini, a GPT-5 competitor [likely on TPU v5 chips]. This is truly a major day in AI history, and I try to cover it all.

I'll show the benchmarks in which PaLM 2 (which now powers Bard) beats GPT-4, and detail how they use SmartGPT-like techniques to boost performance. Crazily enough, PaLM 2 beats even Google Translate, due in large part to the text it was trained on. We'll talk coding in Bard, translation, MMLU, BIG-bench, and much more.
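For context, "SmartGPT-like" refers to chained prompting of the draft → critique → resolve kind rather than any single model change. The sketch below is only my own illustration of that pattern, not Google's actual evaluation pipeline; `ask_llm` is a hypothetical stand-in for whatever chat-completion API you use.

```python
# SmartGPT-style prompt chain: sample several drafts, critique them, then resolve.
# Illustrative sketch only, not Google's pipeline; ask_llm is a hypothetical hook.

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug your own LLM API call in here")

def smartgpt_answer(question: str, n_drafts: int = 3) -> str:
    # 1. Sample several step-by-step drafts of the answer.
    drafts = [
        ask_llm(
            f"Question: {question}\n"
            "Answer: Let's work this out in a step by step way to be sure we have the right answer."
        )
        for _ in range(n_drafts)
    ]
    numbered = "\n\n".join(f"Draft {i + 1}:\n{d}" for i, d in enumerate(drafts))
    # 2. "Researcher" step: have the model point out flaws in each draft.
    critique = ask_llm(
        "You are a researcher. List the flaws and faulty logic in each draft answer below.\n\n" + numbered
    )
    # 3. "Resolver" step: merge the drafts and the critique into one improved final answer.
    return ask_llm(
        "You are a resolver. Using the drafts and the critique, print the improved final answer.\n\n"
        + numbered + "\n\nCritique:\n" + critique
    )
```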

277
 
 

Paper: https://arxiv.org/abs/2306.07052

Abstract:

In this work, we empirically show that updating pretrained LMs (350M, 1.3B, 2.7B) with just a few steps of Gradient Ascent Post-training (GAP) on random, unlabeled text corpora enhances its zero-shot generalization capabilities across diverse NLP tasks. Specifically, we show that GAP can allow LMs to become comparable to 2-3x times larger LMs across 12 different NLP tasks. We also show that applying GAP on out-of-distribution corpora leads to the most reliable performance improvements. Our findings indicate that GAP can be a promising method for improving the generalization capability of LMs without any task-specific fine-tuning.

Pretty cool research. I'm wondering if this method could be applied in a more effective way, e.g. by introducing gradient ascent throughout the full training process (I'd be curious to see how different ratios of descent:ascent during training would affect convergence/generalization abilities). Will also be neat to see this applied to larger models.
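For anyone who wants a concrete picture of what's being done here: GAP just takes a pretrained LM and runs a handful of gradient ascent steps (i.e., the normal LM loss with its sign flipped) on unlabeled text. Below is a minimal sketch of that idea using PyTorch and Hugging Face; the checkpoint, learning rate, and step count are my own guesses, not the paper's settings.

```python
# Minimal GAP-style sketch (my own illustration, not the authors' code).
# Assumes the Hugging Face transformers library; hyperparameters are guesses.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/opt-350m"  # the paper uses 350M/1.3B/2.7B LMs; this exact checkpoint is my assumption
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.train()

optimizer = torch.optim.SGD(model.parameters(), lr=1e-5)  # learning rate is a guess

# Placeholder for the "random, unlabeled text corpora" mentioned in the abstract.
unlabeled_texts = ["Some random, unlabeled text drawn from an out-of-distribution corpus."]

num_ascent_steps = 5  # "just a few steps", per the abstract
for step in range(num_ascent_steps):
    text = unlabeled_texts[step % len(unlabeled_texts)]
    batch = tok(text, return_tensors="pt", truncation=True, max_length=512)
    out = model(**batch, labels=batch["input_ids"])
    # Gradient ASCENT on the language-modeling loss: negate the loss before backprop,
    # so the optimizer's usual descent step moves the parameters uphill on the LM objective.
    (-out.loss).backward()
    optimizer.step()
    optimizer.zero_grad()
```

Flipping the sign back for some fraction of the steps would be one crude way to experiment with the descent:ascent ratio idea above.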

Edit for Lemmies: You can read paper here: https://www.researchgate.net/publication/371505904_Gradient_Ascent_Post-training_Enhances_Language_Model_Generalization

283
 
 

In the project "Seeing the World through Your Eyes," researchers at the University of Maryland, College Park, show that the reflections of the human eye can be used to reconstruct 3D scenes. This, they say, is an "underappreciated source of information about what the world around us looks like".

Summary:

  • Researchers at the University of Maryland have developed a NeRF-based method to reconstruct 3D scenes from reflections in the human eye. They believe this is an underappreciated source of information about the world around us.

  • The method uses the uniform geometry of the cornea in healthy adults to estimate the position and orientation of the eye. An important aspect of the work is the development of a corneal position optimization technique that helps improve the robustness of the method.

  • Tests have been performed with both synthetic eye images and real photographs, but only under laboratory conditions. Despite certain challenges, such as inaccuracies in the localization of the cornea and the low resolution of the images, the method is considered promising.
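To make the geometric idea concrete, here is a toy sketch (my own illustration, not the authors' code) of the core step: intersect a camera ray with a sphere approximating the cornea, then mirror-reflect it into the world. Roughly speaking, the NeRF is then trained on these reflected rays instead of ordinary camera rays.

```python
# Toy illustration of reflecting camera rays off a spherical cornea model.
# Not the paper's implementation; 7.8 mm is the typical adult corneal radius the method relies on.
import numpy as np

def reflect_off_cornea(ray_origin, ray_dir, cornea_center, cornea_radius=7.8e-3):
    """Return the intersection point with the corneal sphere and the mirror-reflected
    ray direction, or None if the camera ray misses the cornea (units in meters)."""
    d = ray_dir / np.linalg.norm(ray_dir)
    oc = ray_origin - cornea_center
    b = 2.0 * np.dot(d, oc)
    c = np.dot(oc, oc) - cornea_radius ** 2
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # ray misses the cornea
    t = (-b - np.sqrt(disc)) / 2.0  # nearer of the two sphere intersections
    hit = ray_origin + t * d
    n = (hit - cornea_center) / cornea_radius  # outward surface normal of the sphere
    r = d - 2.0 * np.dot(d, n) * n             # law of reflection
    return hit, r
```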

Source: https://the-decoder.com/better-watch-what-youre-looking-at-ai-can-reconstruct-it-in-3d/

Paper & more: https://world-from-eyes.github.io/

284
 
 

The UN report highlights the threat to information security from deepfakes created with the help of AI. Despite the potential of neural networks in solving global problems, the UN has expressed concerns about their use in generating fake images and videos, especially in conflict situations.

The UN is calling on all stakeholders to use AI responsibly, insisting that action be taken to ensure the technology is used safely and ethically, in line with international human rights. Digital platform owners are also encouraged to invest in content moderation systems and transparent reporting. The UN Secretary-General expressed his hope that the community will jointly tackle this problem at the upcoming 2024 summit.

285
 
 

This innovation will enable AI to memorize long interaction histories, opening doors for more meaningful dialogues with AI systems.
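From the paper, the core idea is a decoupled memory bank: attention key/value states of past context are cached by a frozen backbone and retrieved by similarity when generating new text. The sketch below is only a rough illustration of that cache-and-retrieve idea, not the LongMem implementation itself.

```python
# Rough illustration of a cache-and-retrieve memory bank (not the LongMem code).
import numpy as np

class SimpleMemoryBank:
    """Store (key, value) vectors for past context and fetch the most similar ones later."""

    def __init__(self):
        self.keys, self.values = [], []

    def write(self, key_vec, value_vec):
        # Cache e.g. the attention key/value states of tokens from earlier turns.
        self.keys.append(np.asarray(key_vec, dtype=np.float32))
        self.values.append(np.asarray(value_vec, dtype=np.float32))

    def read(self, query_vec, top_k=4):
        # Retrieve the top-k cached entries most similar to the current query.
        if not self.keys:
            return []
        q = np.asarray(query_vec, dtype=np.float32)
        scores = np.stack(self.keys) @ q          # dot-product similarity
        best = np.argsort(scores)[::-1][:top_k]
        return [(self.keys[i], self.values[i]) for i in best]
```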

🔗 Quick Read: https://www.marktechpost.com/2023/06/16/researchers-from-microsoft-and-uc-santa-barbara-propose-longmem-an-ai-framework-that-enables-llms-to-memorize-long-history/

📚 Full Paper: https://arxiv.org/abs/2306.07174

👨‍💻 GitHub Repo: https://github.com/Victorwz/LongMem

What potential do you see in this breakthrough? Share your thoughts!

286
 
 

What's your stance on this?

Let's have a respectful and informed discussion about this topic and the different perspectives. Feel free to share your thoughts in the comments section below.