s-kostyaev

joined 11 months ago
[–] s-kostyaev 1 point 9 months ago

See also [this reply](https://github.com/s-kostyaev/ellama/issues/13#issuecomment-1807954046) about using ellama on weak hardware:
You can try:

[–] s-kostyaev 1 point 9 months ago

But you can use it with the OpenAI or Google APIs.
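For example, a minimal sketch of switching ellama to a hosted provider through the llm library (the API key and model name are placeholders; `make-llm-openai` comes from the llm library's `llm-openai` module):

```elisp
;; Sketch: point ellama at OpenAI instead of a local model.
;; The key and chat model below are placeholders -- substitute your own.
(require 'llm-openai)
(setopt ellama-provider
        (make-llm-openai
         :key "sk-..."                 ; your OpenAI API key
         :chat-model "gpt-3.5-turbo")) ; any chat model you have access to
```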

[–] s-kostyaev 1 point 9 months ago

Most Chromebooks have very weak hardware. I don't think it will run fast enough to be useful.

[–] s-kostyaev 1 point 9 months ago (5 children)

This and other things are also possible with ellama. It also works with local models.
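For context, ellama exposes its features as interactive commands; a few examples (the exact set depends on the installed version) are `ellama-chat`, `ellama-ask-about`, `ellama-summarize`, and `ellama-translate`, all invoked via `M-x`.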

[–] s-kostyaev 1 point 10 months ago

Not right now. Personally, I don't think it would be useful, but I can see how it could be done. In that case, the hardest thing is collecting good context for the completion; without it, this would be a dumb T9.

[–] s-kostyaev 1 point 10 months ago

Sure. You can use the functions without double dashes as the public API (by Elisp convention, names with a double dash, like `ellama--foo`, are internal). If you want something specific, open an issue, or you can even use the llm library directly if you want more control.
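For the second route, a minimal sketch of calling the llm library directly (assuming a local ollama server; the model name and prompt are placeholders):

```elisp
;; Sketch: use the llm library directly, bypassing ellama.
;; Assumes ollama is running locally with the model already pulled.
(require 'llm)
(require 'llm-ollama)

(let ((provider (make-llm-ollama :chat-model "zephyr")))
  ;; Synchronous call; returns the model's reply as a string.
  (llm-chat provider
            (llm-make-simple-chat-prompt
             "What does the double-dash convention mean in Elisp?")))
```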

 

ellama is a tool for interacting with LLMs from Emacs. It now uses the llm library as a backend and supports various providers.
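For example, a minimal configuration sketch using a local ollama model as the provider (the model name is a placeholder; this mirrors the pattern from the project README):

```elisp
;; Sketch: configure ellama with a local ollama model via the llm library.
;; Assumes ollama is running and the model has been pulled
;; (e.g. "ollama pull zephyr" in a shell).
(require 'ellama)
(require 'llm-ollama)
(setopt ellama-provider
        (make-llm-ollama :chat-model "zephyr" :embedding-model "zephyr"))
;; Then start a session with M-x ellama-chat.
```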

[–] s-kostyaev 1 point 11 months ago

I like large language models like ChatGPT, but I prefer to run them locally on my own hardware. There is a cool project that helps with this: https://github.com/jmorganca/ollama. I prefer an Emacs interface for interacting with text, so I created this project. All components are open source. Feedback is welcome.