this post was submitted on 24 Oct 2023

Self-Hosted Main


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

For Example

We welcome posts that include suggestions for good self-hosted alternatives to popular online services, how they are better, or how they give back control of your data. Also include hints and tips for less technical readers.



Hey all,

Some might remember this from about 9 months ago. I've been running it with zero maintenance since then, but saw there were some new updates that could be leveraged.

What has changed?

  • Jellyfin is supported (in addition to Plex and Tautulli)
  • Moved away from whisper.cpp to stable-ts and faster-whisper (faster-whisper can support Nvidia GPUs)
  • Significant refactoring of the code to make it easier to read and for others to add 'integrations' or webhooks
  • Renamed the webhook endpoint from webhook to plex/tautulli/jellyfin
  • New environment variables for additional control
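
To illustrate what those renamed endpoints consume, here's a minimal stdlib sketch of pulling a media path out of a webhook body. The key names are hypothetical stand-ins, not the real Plex/Jellyfin/Tautulli schemas:

```python
import json

# NOTE: key names here are illustrative only; the actual Plex, Jellyfin,
# and Tautulli webhook payloads each use their own schema.
PATH_KEYS = {"jellyfin": "Path", "plex": "file", "tautulli": "file"}

def media_path(body, source):
    """Return the media file path from a webhook body, or None."""
    payload = json.loads(body)
    key = PATH_KEYS.get(source)
    return payload.get(key) if key else None
```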

What is this?

This will transcribe your personal media on a Plex or Jellyfin server to create subtitles (.srt). It is currently reliant on webhooks from Jellyfin, Plex, or Tautulli. This uses stable-ts and faster-whisper which can use both Nvidia GPUs and CPUs.
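
As a rough picture of the last step, transcription yields timed segments that then get serialized into the .srt format. A stdlib-only sketch (the segment shape is assumed for illustration; subgen's actual code differs):

```python
def to_srt(segments):
    """Render (start_sec, end_sec, text) tuples as numbered SRT cues."""
    def ts(sec):
        # SRT timestamps look like HH:MM:SS,mmm
        ms = round(sec * 1000)
        h, rem = divmod(ms, 3_600_000)
        m, rem = divmod(rem, 60_000)
        s, ms = divmod(rem, 1000)
        return f"{h:02}:{m:02}:{s:02},{ms:03}"

    cues = []
    for i, (start, end, text) in enumerate(segments, 1):
        cues.append(f"{i}\n{ts(start)} --> {ts(end)}\n{text}\n")
    return "\n".join(cues)
```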

How do I run it?

I recommend reading through the documentation at McCloudS/subgen: Autogenerate subtitles using OpenAI Whisper Model via Jellyfin, Plex, and Tautulli (github.com). The quick and dirty version: pull mccloud/subgen from Docker Hub, configure your Tautulli/Plex/Jellyfin webhooks, and map your media volumes to match Plex/Jellyfin exactly.

What can I do?

I'd love any feedback or PRs to update the code or the instructions. I'm also interested to hear if anyone can get GPU transcription to work; I have a Tesla T4 in the mail to try it out soon.

[–] [email protected] 1 points 1 year ago

this is cool, do you see a lot of incorrect word matches?

[–] [email protected] 1 points 1 year ago

This looks very cool, I am interested. Do I install it on the Plex server itself, or a pc running a plex client?

[–] [email protected] 1 points 1 year ago

I didn't know this project existed and I was genuinely thinking of making this tool. This is amazing, thank you! I'll definitely try it out, especially since I have a hard time finding properly synced subtitles for a lot of shows.

[–] [email protected] 1 points 1 year ago

Holy crap!! I'm going to try this tonight.

I was having some subtitle timing issues on Breaking Bad that were driving me nuts.

[–] [email protected] 1 points 1 year ago

What a cool project! Good job!

[–] [email protected] 1 points 1 year ago (1 children)

Wow, this is great! I'd be interested in doing some subtitles for some non-English shows I have, would you happen to know if translating into English subtitles is supported?

Also, take a look at https://github.com/m-bain/whisperX - subsai uses this and it's much faster than whisper.cpp

[–] [email protected] 1 points 1 year ago (1 children)

It should detect the foreign language and make English subtitles, but I haven't personally tried it.

I'm not using whisper.cpp anymore. I did some short comparisons between WhisperX and stable-ts and ultimately decided to go with stable-ts. Functionally, I'm sure they're very similar.

[–] [email protected] 1 points 1 year ago

I was reading the docs for both openai-whisper and faster-whisper, and both can translate to English.

[–] [email protected] 1 points 1 year ago (1 children)

Just last week I set up Bazarr and was delighted to learn that it has a similar feature, and it works great (with a GTX 1070). I would have set your project up in lieu of Bazarr, but I liked how Bazarr searches other sources and also handles fixing and syncing existing subtitles.

Do you have any plans for anything similar to those Bazarr features, or maybe even creating a provider for Bazarr?

[–] [email protected] 1 points 1 year ago (1 children)

Damn this looks good, any chance of it coming to Emby?

[–] [email protected] 1 points 1 year ago (1 children)

If I knew what the endpoints were, nothing would prohibit it. I can add it to my short list.

[–] [email protected] 1 points 1 year ago

I just tried, and Emby won't actually send out the webhook on an action. I can use the test webhook, but it won't trigger off media actions. The documentation half-implies that it's a Premiere option?

[–] [email protected] 1 points 1 year ago

Very cool! If Plex still had plugin support, this is the kind of stuff I'd want to see.

[–] [email protected] 1 points 1 year ago (1 children)

Nice! Do you reckon with GPU you could potentially run it in real time? I've set up an endpoint with Whisper to transcribe videos one of my colleagues needed for work on my homelab server, which cumulatively must have saved everyone days worth of time by now.

[–] [email protected] 1 points 1 year ago

I'm not sure yet. Faster-whisper has some benchmarks of the large-v2 model taking about 1 minute for 13 minutes of audio. Smaller models ought to be quicker. I'm unsure if the specs of the GPU will make much difference.
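
For a rough sense of scale, that benchmark corresponds to a real-time factor of about 0.08, i.e. comfortably faster than playback:

```python
# Benchmark quoted above: ~13 minutes of audio transcribed in ~1 minute (large-v2).
audio_minutes = 13
wall_minutes = 1
rtf = wall_minutes / audio_minutes  # < 1.0 means faster than real time
print(f"RTF ≈ {rtf:.2f}, about {audio_minutes // wall_minutes}x real time")
# → RTF ≈ 0.08, about 13x real time
```

So even a mid-range GPU should keep up with real-time playback; the open question is more about queueing latency than raw throughput.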

[–] [email protected] 1 points 1 year ago (1 children)

Suhweeet!!! English only, or will it handle other languages and translation too, e.g. Spanish to English?

[–] [email protected] 1 points 1 year ago (1 children)

It can only translate into English, but the source audio can be a foreign language.

[–] [email protected] 1 points 1 year ago

Great, that's what I need!

I see a Docker pull in my future.

[–] [email protected] 1 points 1 year ago

This is Awesome!

I'd love to help out with this! I was starting to write something similar to add hooks to audiobookshelf so that it can scan through audiobooks and generate correct chapters/timings as well, but it's better to implement it here.

A good idea would be to make the GPU/CPU transcoding a separate container, so that the main container can send work out to your gaming PC when it's online, etc., and the main container would have scheduled jobs it can trigger on the transcode nodes when available. There's lots of cool stuff that can be made; really fun project!

Maybe we can create a discord channel for more people who are interested in developing this.
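
The dispatcher/worker split suggested above could be prototyped with a plain job queue; in this sketch, threads stand in for remote transcode nodes (an illustration of the idea, not anything subgen ships):

```python
import queue
import threading

# Sketch of the proposed split: a dispatcher queues jobs, and worker
# "nodes" (threads here, remote containers in the real idea) drain them.
jobs = queue.Queue()
results = []

def worker():
    while True:
        path = jobs.get()
        if path is None:  # sentinel: no more work
            break
        results.append(f"transcribed {path}")  # a real node would run whisper here
        jobs.task_done()

node = threading.Thread(target=worker)
node.start()
for p in ("episode1.mkv", "episode2.mkv"):
    jobs.put(p)
jobs.put(None)
node.join()
```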

[–] [email protected] 1 points 1 year ago

I'm getting all sorts of syntax errors going off your Dockerfile.

[–] [email protected] 1 points 1 year ago

How do I know if it's working/doing its thing? I installed it, but it seems to be doing nothing.

[–] [email protected] 1 points 1 year ago

The app works perfectly, really nice idea! But I noticed something on my install: the GitHub README mentions that it will transcribe into English from other languages, but I tried Japanese and Portuguese files and they got transcribed in their native language.

portuguese > portuguese

japanese > japanese

english > english

Is that the expected behavior, or should I add some argument to the docker compose to force translation into English?
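
For what it's worth, both openai-whisper and faster-whisper accept a task parameter that is either "transcribe" (keep the source language) or "translate" (always into English). A container would typically map an environment variable onto it; the variable name below is a hypothetical placeholder, so check the subgen README for the real one:

```python
import os

def whisper_task(env=None):
    """Pick the whisper task from the environment.

    TRANSLATE_TO_ENGLISH is a hypothetical variable name for illustration,
    not necessarily what subgen actually reads.
    """
    env = os.environ if env is None else env
    if env.get("TRANSLATE_TO_ENGLISH", "").lower() in ("1", "true", "yes"):
        return "translate"   # whisper outputs English regardless of source
    return "transcribe"      # whisper outputs the detected source language
```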

[–] [email protected] 1 points 11 months ago (1 children)

I have a suggestion. I have installed it and it seems to be working, but I don't know which file it is working on at any given time. I look at the logs and can see where it determines the language, translates, and transcribes, but I have no idea which movie/show it is processing.

Thanks for the great app!

[–] [email protected] 1 points 11 months ago

Unfortunately, stable-ts and whisper don't obviously output which file they are working on, so you're dependent on trying to decipher it from the logs. I tried to add prints to show which files it has queued and started, but with threading, the stdout sometimes gets lost or buffered in strange ways.