In that case you can use it right now with gptel, which supports an Org interface for chat.
Enable the server mode in the desktop app, and in Emacs, run a short configuration snippet along the lines of the sketch below.
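Something like this should work - a minimal sketch assuming gptel's gptel-make-gpt4all helper and GPT4All's default local API port; the model filename is a placeholder, so substitute whatever your desktop app has installed:

    ;; Hedged sketch: register a local GPT4All backend for gptel.
    ;; The host, port, and model filename are assumptions -- adjust to
    ;; your setup, and check gptel's README for the current form.
    (require 'gptel)
    (setq gptel-backend (gptel-make-gpt4all "GPT4All"
                          :protocol "http"
                          :host "localhost:4891"  ; GPT4All's local API server
                          :models '(mistral-7b-openorca.Q4_0.gguf))
          gptel-model 'mistral-7b-openorca.Q4_0.gguf)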
Then you can spawn a dedicated chat buffer with
M-x gptel
or chat from any buffer by selecting a region of text and running M-x gptel-send.
Great news - will try it in the next few days. Thank you.
In the meantime I've added explicit support for GPT4All, so the above instructions may be out of date by the time you get to it. The README should have updated instructions (if it mentions support for local LLMs at all).
3 days of painful waiting :)
I have played with this a bit in the last few days.
It's nice and minimal, but I'm hitting memory issues: gptel loads whatever model is specified, and I don't have enough memory to hold both the model the GPT4All desktop app loads by default and the one gptel requests.
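One thing that might avoid the double load - a hedged sketch, not something I've verified: point gptel at the same model the desktop app already has resident, so the server only ever holds one copy. The filename below is a placeholder for whatever appears in your GPT4All model list:

    ;; Hedged sketch: reuse the model the GPT4All desktop app has
    ;; already loaded, so gptel doesn't pull a second one into memory.
    ;; Replace the filename with the one shown in GPT4All's model list.
    (setq gptel-model 'mistral-7b-openorca.Q4_0.gguf)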
It isn't actually the same, though - GPT4All's server doesn't support streaming. How are you getting around this?
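If the symptom is gptel trying to stream from a backend that can't, one possible workaround (a sketch, assuming gptel's gptel-stream toggle) is to force plain non-streaming requests:

    ;; Hedged sketch: disable streaming so gptel waits for the full
    ;; response instead of reading it incrementally.
    (setq gptel-stream nil)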