Singularity | Artificial Intelligence (ai), Technology & Futurology

96 readers
1 users here now

About:

This sublemmy is a place for sharing news and discussion about artificial intelligence, core developments in humanity's technology, and the societal changes that come with them. Basically, a futurology sublemmy centered around AI but not limited to AI only.

Rules:
  1. Posts that break the rules, and whose posters don't comply after the violation is pointed out, will be deleted no matter how much engagement they got, and then reposted by me in a way that follows the rules. I'm going to wait a maximum of 2 days for the poster to comply before I do this.
  2. No Low-quality/Wildly Speculative Posts.
  3. Keep posts on topic.
  4. Don't make posts with links to paywalled articles as their main focus.
  5. No posts linking to reddit posts.
  6. Memes are fine as long as they are quality and/or can lead to serious on-topic discussions. If we end up having too many memes, we will make a meme-specific singularity sublemmy.
  7. Titles must include information on how old the source is, in this format: dd.mm.yyyy (e.g. 24.06.2023).
  8. Please be respectful to each other.
  9. No summaries made by LLMs. I would like to keep the quality of comments as high as possible.
  10. (Rule implemented 30.06.2023) Don't make posts with links to tweets as their main focus. Melon decided that content on the platform is going to be locked behind a login requirement, and I'm not going to force everyone to make a Twitter account just so they can see some news.
  11. No AI-generated images/videos unless their role is to represent new advancements in generative technology and they are no older than 1 month.
  12. If the title of the post isn't the original title of the article or paper, then the first thing in the body of the post should be the original title, written in this format: "Original title: {title here}".

Related sublemmies:

[email protected] (Our community focuses on programming-oriented, hype-free discussion of Artificial Intelligence (AI) topics. We aim to curate content that truly contributes to the understanding and practical application of AI, making it, as the name suggests, “actually useful” for developers and enthusiasts alike.)

Note:

My posts on this sub are currently VERY reliant on getting info from r/singularity and other subreddits on reddit. I'm planning to at some point make a list of sites that write/aggregate the news this sublemmy is about, so we could get news faster and not rely on reddit as much. If you know any good sites, please DM me.

founded 1 year ago
MODERATORS

Copied from reddit:

The National Institutes of Health (US) has officially forbidden the use of large language models and other generative AI in peer review of research grant applications (see linked blog post). This despite the quoted opinion of an unnamed "well-known AI tool," which was gung-ho in favor of AI use.

https://www.csr.nih.gov/reviewmatters/2023/06/23/using-ai-in-peer-review-is-a-breach-of-confidentiality/

They also mention the possibility of applicants using AI in preparing their applications, which I'm sure is already happening. This is not forbidden, but said to be at the applicants' own risk.

A year ago, I couldn't have imagined such statements from NIH would be needed soon.


Full title: LLM Powered Autonomous Agents: An exploration of the current landscape of the latest in Autonomous AI Agents - blog post by OpenAI Fmr. Chief Researcher and Current Head of AI Safety Lilian Weng


In Toronto, a candidate in this week’s mayoral election who vows to clear homeless encampments released a set of campaign promises illustrated by artificial intelligence, including fake dystopian images of people camped out on a downtown street and a fabricated image of tents set up in a park. In New Zealand, a political party posted a realistic-looking rendering on Instagram of fake robbers rampaging through a jewelry shop. In Chicago, the runner-up in the mayoral vote in April complained that a Twitter account masquerading as a news outlet had used A.I. to clone his voice in a way that suggested he condoned police brutality. What began a few months ago as a slow drip of fund-raising emails and promotional images composed by A.I. for political campaigns has turned into a steady stream of campaign materials created by the technology, rewriting the political playbook for democratic elections around the world.


My AI virtual assistant ('Cheevly') is now seamlessly integrated into the entire desktop experience. He contextually understands what you're doing on the desktop and verbally answers prompts in real-time.

https://www.youtube.com/watch?v=0FpqZ0wUOrY


NASA is actively developing an artificial intelligence (AI) system, akin to ChatGPT, to provide support to astronauts during space missions. This AI assistant is intended to serve as a conduit between the astronauts, their spacecraft, and the control teams on Earth. Furthermore, it will participate actively in carrying out complex tasks and space experiments.

The first trials of the AI chatbot are set to be conducted on the Lunar Gateway space station. This station is slated to launch in 2024 as a part of the Artemis program. As stated by Dr. Larisa Suzuki at an Institute of Electrical and Electronics Engineers (IEEE) conference in London, the primary role of this AI will be to identify and possibly rectify technical issues and inefficiencies in real-time. It will also supply astronauts with the most current data and findings in space.


We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new capabilities of perceiving object descriptions (e.g., bounding boxes) and grounding text to the visual world. Specifically, we represent refer expressions as links in Markdown, i.e., "[text span](bounding boxes)", where object descriptions are sequences of location tokens. Together with multimodal corpora, we construct large-scale data of grounded image-text pairs (called GrIT) to train the model. In addition to the existing capabilities of MLLMs (e.g., perceiving general modalities, following instructions, and performing in-context learning), Kosmos-2 integrates the grounding capability into downstream applications. We evaluate Kosmos-2 on a wide range of tasks, including (i) multimodal grounding, such as referring expression comprehension, and phrase grounding, (ii) multimodal referring, such as referring expression generation, (iii) perception-language tasks, and (iv) language understanding and generation. This work lays out the foundation for the development of Embodiment AI and sheds light on the big convergence of language, multimodal perception, action, and world modeling, which is a key step toward artificial general intelligence. Data, demo, and pretrained models are available at this https URL.
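The Markdown-style grounding representation described in the abstract can be illustrated with a tiny parser sketch. Note the concrete token syntax below (`<loc_N>` location tokens inside the link target) is a hypothetical stand-in, not Kosmos-2's actual vocabulary:

```python
import re

# Hypothetical grounded caption: text spans are Markdown links whose
# targets are sequences of location tokens (stand-ins for real ones).
caption = "An image of [a snowman](<loc_44><loc_863>) next to [a fire](<loc_5><loc_911>)."

# Extract (text span, location tokens) pairs from the Markdown links.
pattern = re.compile(r"\[([^\]]+)\]\(((?:<loc_\d+>)+)\)")
grounded = [
    (span, re.findall(r"<loc_(\d+)>", locs))
    for span, locs in pattern.findall(caption)
]
print(grounded)
# [('a snowman', ['44', '863']), ('a fire', ['5', '911'])]
```

In the real model, the location tokens index quantized image patch positions, so each text span is tied to a bounding box in the image.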


German broadcaster Bayerischer Rundfunk (BR24) reports that OpenAI is losing key employees to Google. Some of these employees have already resigned and signed contracts with Google. Others will do so in the coming days. The BR says that its information comes from interviews with former and current OpenAI employees.

The disgruntled employees are reportedly unhappy with the development of ChatGPT and the rapid growth from about 100 to 600 employees since December 2022. Sam Altman, CEO of OpenAI, is criticized for having only a "superficial understanding" without much involvement in day-to-day operations. According to BR24, his supposedly self-critical narratives about the risks of AI and the associated calls for regulation are just political show.


“Our own hope is that, through AI, we can eventually approximate a 1:1 teacher:student ratio for every student in CS50, as by providing them with software-based tools that, 24/7, can support their learning at a pace and in a style that works best for them individually,” CS50 professor David Malan told The Harvard Crimson.


TL;DR: Baidu has released ERNIE 3.5, an upgraded version of their large language model (LLM). ERNIE 3.5 features significant improvements in efficacy, functionality, and performance, surpassing ChatGPT (3.5) in comprehensive ability scores and outperforming GPT-4 in several Chinese language capabilities. Key features of ERNIE 3.5 include plugins such as "Baidu Search" and "ChatFile" that enhance the model's capabilities. The model has also been enhanced with cutting-edge strategies from PaddlePaddle, adaptive hybrid parallel training technology, mixed-precision computing, and improved data distribution. ERNIE 3.5 incorporates techniques like "Knowledge Snippet Enhancement" and has improved reasoning in logical reasoning, mathematical computation, and code generation. ERNIE Bot, built on ERNIE 3.5, is currently in public beta testing and can be utilized in various applications involving language, text, or code.

ERNIE Bot v2.1.0


If you are a lurker, please help us grow the community by commenting and posting interesting, on-topic, quality info/discussions, so we can attract more people and make this community more interesting to spend time on. ✌️

Previous milestone: https://lemmy.fmhy.ml/post/239561


The paperclip maximizer argument would suggest the AI will purposely kill undesirables to "save" others, because no option to avoid harm in the situation was presented in the training data.


AWS is stepping up its AI accelerator efforts via a $100 million Generative AI Innovation Center.


This is a new family of open source video models designed to take on Gen-2. There is a 576x320 model that uses under 8 GB of VRAM, and a 1024x576 model that uses under 16 GB of VRAM. The recommended workflow is to render with the 576 model, then use vid2vid via the AUTOMATIC1111 text2video extension to upscale to 1024x576. This allows for better compositions overall and faster exploration of ideas before committing to a high-res render.


website: https://crfm.stanford.edu/2023/06/15/eu-ai-act.html

github: https://github.com/stanford-crfm/TransparencyIndex

Martineski: I would have written the date of the article/study, but I couldn't find info on when it was posted/released. If you find this info, please tell me in the comments.


Full title: Creating what I always wanted from the singularity: Alpha version of an AI Librarian/Analyst that finds the best voices and most relevant articles, podcasts, and videos for a topic and gives me a synthesis

The world is too noisy and sifting through all the crap is hard. With GPT I think it's finally possible to find all that amazing stuff hidden in random Twitter threads, Youtube videos, podcasts, or blogs from leading thinkers on these topics.

A lot of us follow specific people on social media, but it's just one voice and perspective. I always wanted a way to get all the relevant voices and compare/contrast their perspectives. Finally building that!

If you're interested in getting updates on this, please sign up at https://cicero.ly/


Comment (not author's) copied from reddit:

Ok, very interesting. I looked at the tweet below to see what's going on there, and oscillated between getting hyped, then let down, then a bit more carefully hyped again.

This is the AI tool he uses, which can generate a depthmap from a single image:

https://pytorch.org/hub/intelisl_midas_v2/

Then he uses a shader to use that depthmap to displace the original image.

So what this means is that, as of now, you can only get the kind of view that the camera above shows. It's not a fully generated scene. Obviously, the single image is missing data on how the various objects would look from different angles, especially the 'back' of the objects in OP, and his tool does not generate that. The stretchy artefacts you can see at various points are also an effect of that, and it should be noted that his specific camera track is curated to make this look its best. If you went deeper into the scene and looked more to the left and right, things would start to look a lot more shitty. I don't want to say he is pretending otherwise, just to make that clear for people here.

But it seems to me that, even in the near future, it shouldn't be so out of reach to combine a few different tools to make the jump to creating a full 360° 3D scene. First you use Midjourney to generate images of the same scene from various angles, then get depth maps for all of them, and then you somehow combine that and voila. Looking forward to it; I really don't think it will take that long.
