This is an interesting demo, but it has some drawbacks I can already see:

  • It's Windows-only (maybe Windows 11 only; the documentation isn't clear)
  • It only works with RTX 30 series and up
  • It's closed source, so you have no idea if they're uploading your data somewhere

The concept is great: an LLM that sorts through your local files and helps you find things. But as demoed here, it seems really limited.

I think you could get the same functionality (and more) by writing an API integration for text-gen-webui.
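For example, here's a minimal sketch of that idea, assuming text-gen-webui is running locally with its OpenAI-compatible API enabled (started with the --api flag, which by default listens on port 5000). The notes folder, the naive stuff-everything-into-context "retrieval", and the prompt format are all illustrative assumptions, not a tested integration:

```python
# Minimal sketch: ask a local text-gen-webui instance questions about your files.
# Assumes the server was started with --api, which exposes an OpenAI-compatible
# endpoint (default: http://127.0.0.1:5000). Folder path and prompt format are
# illustrative only.
import pathlib
import requests

API_URL = "http://127.0.0.1:5000/v1/chat/completions"

def ask_about_files(question: str, folder: str = "./notes") -> str:
    # Naive "retrieval": stuff the first 2000 chars of each .txt file into the
    # context. A real setup would chunk and embed the documents instead.
    docs = [
        f"--- {path.name} ---\n{path.read_text(errors='ignore')[:2000]}"
        for path in sorted(pathlib.Path(folder).glob("*.txt"))
    ]
    resp = requests.post(API_URL, json={
        "messages": [
            {"role": "system",
             "content": "Answer using only the documents provided."},
            {"role": "user",
             "content": "\n\n".join(docs) + f"\n\nQuestion: {question}"},
        ],
        "max_tokens": 300,
    }, timeout=120)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_about_files("Where did I write down the router password?"))
```

Swap the context-stuffing for a proper embedding index and you'd arguably already be past what the demo offers, on any OS and any GPU your backend supports.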

More info here: https://videocardz.com/newz/nvidia-unveils-chat-with-rtx-ai-chatbot-powered-locally-by-geforce-rtx-30-40-gpus

[–] [email protected] 3 points 9 months ago (1 children)

It means that they want people to consult the code as a reference for how to best use the hardware acceleration.

If all software uses their cards to best effect, that makes the cards more useful and thus more valuable, which makes NVIDIA money. If only their own frontend could do that, they would lose most of that benefit, while also having to spend money to keep the rest of the software, like the UI, competitive.

[–] [email protected] 2 points 9 months ago* (last edited 9 months ago)

Ah, that makes sense. Thank you!