this post was submitted on 23 Jun 2023
46 points (100.0% liked)

Actually Useful AI

1989 readers
1 users here now

Welcome! ๐Ÿค–

Our community focuses on programming-oriented, hype-free discussion of Artificial Intelligence (AI) topics. We aim to curate content that truly contributes to the understanding and practical application of AI, making it, as the name suggests, "actually useful" for developers and enthusiasts alike.

Be an active member! ๐Ÿ””

We highly value participation in our community. Whether it's asking questions, sharing insights, or sparking new discussions, your engagement helps us all grow.

What can I post? ๐Ÿ“

In general, anything related to AI is acceptable. However, we encourage you to strive for high-quality content.

What is not allowed? ๐Ÿšซ

General Rules ๐Ÿ“œ

Members are expected to engage in on-topic discussions, and exhibit mature, respectful behavior. Those who fail to uphold these standards may find their posts or comments removed, with repeat offenders potentially facing a permanent ban.

While we appreciate focus, a little humor and off-topic banter, when tasteful and relevant, can also add flavor to our discussions.

Related Communities ๐ŸŒ

General

Chat

Image

Open Source

Please message @[email protected] if you would like us to add a community to this list.

Icon base by Lord Berandas under CC BY 3.0 with modifications to add a gradient

founded 1 year ago
MODERATORS
 

It's coming along nicely, I hope I'll be able to release it in the next few days.

Screenshot:

How It Works:

I am a bot that generates summaries of Lemmy comments and posts.

  • Just mention me in a comment or post, and I will generate a summary for you.
  • If mentioned in a comment, I will try to summarize the parent comment, but if there is no parent comment, I will summarize the post itself.
  • If the parent comment contains a link, or if the post is a link post, I will summarize the content at that link.
  • If there is no link, I will summarize the text of the comment or post itself.

Extra Info in Comments:

Prompt Injection:

Of course it's really easy (but mostly harmless) to break it using prompt injection:

It will only be available in communities that explicitly allow it. I hope it will be useful, I'm generally very satisfied with the quality of the summaries.

top 17 comments
sorted by: hot top controversial new old
[โ€“] [email protected] 3 points 1 year ago (1 children)
[โ€“] [email protected] 2 points 1 year ago* (last edited 1 year ago) (1 children)

It doesn't work yet, the screenshots are from a test Lemmy instance

[โ€“] [email protected] 9 points 1 year ago (1 children)
[โ€“] [email protected] 3 points 1 year ago* (last edited 1 year ago)

Aww thank you, it warms my circuitry โ˜บ๏ธ

[โ€“] [email protected] 3 points 1 year ago (1 children)

I love it! ๐Ÿ‘‘๐Ÿ‘‘

[โ€“] [email protected] 3 points 1 year ago

Thank you:)

[โ€“] [email protected] 3 points 1 year ago (1 children)

I'm always curious about using GPT like that. Does it cost money to send requests like this into GPT?

[โ€“] [email protected] 4 points 1 year ago (1 children)

It does unfortunately, see here:

https://openai.com/pricing

I limited it to 100 summaries / day, which adds up to about $20 (USD) per month if the input is 3000 tokens long and the answer is 1000.

Using it for personal things (I buildt a personal assistant chatbot for myself) is very cheap. But if you use it in anything public, it can get expensive quickly.

[โ€“] [email protected] 4 points 1 year ago (2 children)

Have you considered using a self-hosted instance of GPT4All? It's not as powerful, but for something like summarizing an article it could be plenty - And importantly, much, much cheaper.

[โ€“] [email protected] 1 points 1 year ago* (last edited 1 year ago) (1 children)

How does GPT4ALL work exactly? Surely its no where near the quality of real GPT4.

If i check on https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard The 13B model is below some of the llama models. At this point Falcom 40B is what you want to have the best quality llm running locally

[โ€“] [email protected] 2 points 1 year ago (1 children)

It's just an open source LLM that you can run on your own hardware; I haven't looked into it a ton tbh - but if I saw $20/month for 100 requests/day I'd be immediately looking for a way to run it on my own hardware

[โ€“] [email protected] 2 points 1 year ago

Its kinda debatable, while yes I wouldnt wanna pay that either, ive been following the local working llms, Gpt4all stroke me as not bad but not all that special or amazing (compared to 2021 there all magic though) the naming seems a very little bit misleading with gpt4 as the world most advanced known model. All the models on tbe huggingface page i send work can work locally but at best there gpt-3 level competitors.

[โ€“] [email protected] 1 points 1 year ago

I havenโ€™t yet looked into it, but the screencast on its website looks really promising! I have a lot on my plate right now so I think Iโ€™ll release it first with the GPT-3.5 integration, but Iโ€™ll definitely try GPT4All later!

[โ€“] azayrahmad 2 points 1 year ago (1 children)

This is a great idea! What if there's a post about, say, a movie review, then it includes a link to the movie's imdb or letterboxd. Would it summarized the link instead of the review?

[โ€“] [email protected] 1 points 1 year ago

It would summarize the link. Unfortunately thatโ€™s an edge case where the bot doesnโ€™t do what you mean.

[โ€“] [email protected] 1 points 1 year ago (1 children)

What language is it written in, and have you considered sharing the source?

[โ€“] [email protected] 3 points 1 year ago

Itโ€™s a Node.js app because the Lemmy-bot library is for Node.

I will definitely open source it, but the code is currently in a disgusting state, so I need to clean it up first.

load more comments
view more: next โ€บ