this post was submitted on 09 Jan 2025
482 points (99.2% liked)
Opensource
2381 readers
407 users here now
A community for discussion about open source software! Ask questions, share knowledge, share news, or post interesting stuff related to it!
founded 2 years ago
Et tu, Brute?
Oh, so it's basically like YouTube's auto-generated subtitles. Never mind.
In my experiments, the Whisper models I can run locally are comparable to YouTube's: not production quality, but certainly better than nothing.
I've also had some success cleaning up the output with a modest LLM. I suspect the VLC folks could do a good job here, though the mention of cloud services puts me off. It depends on how they implement it.
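For what it's worth, the easy cases don't even need an LLM. A rough sketch of the kind of post-processing I mean (the rules and regexes here are my own illustration, not anything VLC has announced):

```python
import re

# Illustrative post-processing for auto-generated subtitle lines.
# The filler patterns are examples, not an exhaustive list.
FILLER = re.compile(r"\[(?:music|applause|laughter)\]", re.IGNORECASE)

def clean_line(line: str) -> str:
    """Strip bracketed filler, collapse whitespace, and fix casing."""
    line = FILLER.sub("", line)               # drop [Music]-style annotations
    line = re.sub(r"\s+", " ", line).strip()  # collapse runs of whitespace
    if line and line[0].islower():
        line = line[0].upper() + line[1:]     # capitalize the first word
    return line

print(clean_line("[Music]  so today we're  looking at vlc"))
# So today we're looking at vlc
```

An LLM pass on top of something like this handles the harder stuff (punctuation, homophones, speaker changes) that regexes can't.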
Since VLC runs on just about everything, I'd imagine that the cloud service will be best for the many devices that just don't have the horsepower to run an LLM locally.
True. I guess they'll require you to enter your own OpenAI/Anthropic/whatever API token, because there's no way they can afford to run that centrally. Hopefully you'll be able to point it at whatever server you like (such as a self-hosted ollama or similar).
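Ollama does expose an OpenAI-compatible endpoint, so in principle pointing a client at your own box is just a base-URL change. A minimal sketch, assuming a local ollama instance (the URL, model name, and prompt are my own placeholders):

```python
import json
import urllib.request

# Any OpenAI-compatible server works here; ollama's default
# OpenAI-compatible endpoint is http://localhost:11434/v1.
BASE_URL = "http://localhost:11434/v1"

def build_request(subtitle_text: str, model: str = "llama3") -> urllib.request.Request:
    """Build a chat-completions request asking the model to clean a subtitle line."""
    payload = {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Fix punctuation and casing in this subtitle line."},
            {"role": "user", "content": subtitle_text},
        ],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = build_request("so today were looking at vlc")
# Sending it is just urllib.request.urlopen(req) once the server is up.
```

Swapping in a paid API would only mean changing `BASE_URL` and adding an `Authorization` header, which is exactly why a configurable endpoint would be the right design here.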
It's not just computing power, either: you don't always want your device burning through its battery.