I followed their instructions here: https://speech.fish.audio/
I am using the locally-run API server to do inference: https://speech.fish.audio/inference/#http-api-inference
I don't know about other ways. To be clear, this is not (necessarily) an LLM; it's just for speech synthesis, so you don't run it through Ollama. That said, I think it does technically use Llama under the hood, since there are two models: one for encoding text and the other for decoding to audio. Honestly the paper is terrible, but it explains the architecture somewhat: https://arxiv.org/pdf/2411.01156
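
If it helps, here's a rough sketch of what hitting that local API server can look like from Python. This is just an illustration of the general flow, not the project's exact API: it assumes the server is listening on 127.0.0.1:8080 and exposes a POST endpoint (shown here as `/v1/tts`) that takes JSON with a `text` field and returns WAV bytes. Check the inference docs linked above for the actual endpoint and payload shape.

```python
# Rough sketch: send text to a locally running fish-speech HTTP API server
# and save the synthesized audio.
# Assumptions (verify against https://speech.fish.audio/inference/#http-api-inference):
#   - the server is listening on http://127.0.0.1:8080
#   - it exposes POST /v1/tts accepting JSON with a "text" field
#   - it responds with raw WAV audio bytes
import requests

API_URL = "http://127.0.0.1:8080/v1/tts"  # adjust host/port/path to your setup


def synthesize(text: str, out_path: str = "output.wav") -> None:
    """Send text to the local TTS server and write the returned audio to disk."""
    resp = requests.post(API_URL, json={"text": text}, timeout=120)
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)


if __name__ == "__main__":
    synthesize("Hello from the locally hosted speech synthesis server.")
```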