this post was submitted on 09 Jan 2025
369 points (98.7% liked)

Opensource


A community for discussion about open source software! Ask questions, share knowledge, share news, or post interesting stuff related to it!




top 50 comments
[–] [email protected] 1 points 39 minutes ago

Now if only I could get it to play nice with my Chromecast... But I'm sure that's on Google.

[–] [email protected] 23 points 11 hours ago (2 children)

Accessibility is honestly the first good use of AI. I hope they can find a way to make these better than YouTube's automatic captions, though.

[–] [email protected] 2 points 1 hour ago

There are other good uses of AI. Medicine. Genetics. Research, even into humanities like history.

The problem has always been the grifters who insist on calling any program more complicated than adding two numbers "AI" in the first place, trying to shove random technologies into random products just to further their cancerous sales shell game.

The problem is mostly CEOs and salespeople thinking they are software engineers and scientists.

[–] yonder 6 points 10 hours ago

I know Jeff Geerling on YouTube uses OpenAI's Whisper to generate captions for his videos instead of relying on YouTube's. Apparently they are much better than YouTube's, being nearly flawless. My guess is that Google wants to minimize the compute it uses when processing videos to save money.
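
For anyone who wants to try the same thing locally, a minimal sketch with the openai-whisper Python package might look like this (the model size and file names are just placeholders, not what Geerling or VLC actually uses):

```python
import whisper

model = whisper.load_model("small")      # downloads the model on first run
result = model.transcribe("video.mp4")   # requires ffmpeg on the PATH for decoding

def fmt(t):
    # Convert seconds to the HH:MM:SS,mmm format SRT expects.
    h, rem = divmod(int(t), 3600)
    m, s = divmod(rem, 60)
    ms = int((t - int(t)) * 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

# Write each transcribed segment out as a numbered SRT cue.
with open("video.srt", "w", encoding="utf-8") as f:
    for i, seg in enumerate(result["segments"], start=1):
        f.write(f"{i}\n{fmt(seg['start'])} --> {fmt(seg['end'])}\n{seg['text'].strip()}\n\n")
```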

[–] [email protected] 6 points 10 hours ago

I am still waiting for seek previews

[–] [email protected] 8 points 11 hours ago

Perhaps we could also get a built-in AI tool for automatic subtitle synchronization?
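
Tools for that already exist outside VLC; ffsubsync, for instance, aligns an out-of-sync .srt against the video's audio track. A rough sketch of calling it from Python (file names are placeholders):

```python
import subprocess

# ffsubsync listens to the video's audio and shifts the subtitle timings to match it.
subprocess.run(
    ["ffsubsync", "movie.mkv", "-i", "out_of_sync.srt", "-o", "synced.srt"],
    check=True,
)
```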

[–] [email protected] 5 points 10 hours ago

I've been waiting for ~~this~~ break-free playback for a long time. Just play Dark Side of the Moon without breaks in between tracks. Surely a single thread could look ahead and see that the next track doesn't need any different codecs launched; it's technically identical to the current track, so there's no need for a break. /rant

[–] [email protected] 16 points 15 hours ago (1 children)
[–] [email protected] 6 points 12 hours ago

Thank you for your service

[–] [email protected] 155 points 1 day ago (3 children)

I know people are gonna freak out about the AI part in this.

But as a person with hearing difficulties, this would be revolutionary. So much shit I usually just can't watch because OpenSubtitles doesn't have any subtitles for it.

[–] [email protected] 95 points 22 hours ago* (last edited 22 hours ago) (1 children)

The most important part is that it's a local ~~LLM~~ model running on your machine. The problem with AI is less about LLMs themselves and more about their control and application by unethical companies and governments in a world driven by profit and power. This is none of those things; it's just some open source code running on your device. So that's cool and good.

[–] [email protected] 33 points 21 hours ago (2 children)

Also the enormous amounts of power/energy that they consume.

[–] [email protected] 12 points 15 hours ago (2 children)

Running an LLM locally takes less power than playing a video game.

[–] [email protected] 7 points 10 hours ago

Training the models themselves also takes a lot of power.

[–] [email protected] 2 points 11 hours ago (1 children)
[–] [email protected] 5 points 10 hours ago* (last edited 10 hours ago) (1 children)

I don't have a source for that, but the most that any locally-run program can cost in terms of power is basically the sum of a few things: maxed-out GPU usage, maxed-out CPU usage, maxed-out disk access. The GPU is by far the most power-consuming of these, and modern video games make essentially the most use of the GPU that they can get away with.

Running an LLM locally can at most max out the GPU, putting it in the same ballpark as a video game. Typical usage of an LLM is to run it for a few seconds and then submit another query, so it's not running 100% of the time during typical usage, unlike a video game (which remains open and active the whole time; GPU usage dips only when you're in a menu, for instance).

Data centers drain lots of power by running a very large number of machines at the same time.
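
As a back-of-envelope illustration of that comparison (the wattage and durations below are assumptions, not measurements):

```python
# All numbers here are rough assumptions for illustration only.
GPU_WATTS = 300                     # typical high-end consumer GPU under full load

game_hours = 2.0                    # one gaming session at near-full GPU load
llm_seconds_per_query = 10          # one local-LLM response
queries_per_day = 50                # a fairly heavy day of prompting

game_kwh = GPU_WATTS * game_hours / 1000
llm_kwh = GPU_WATTS * (llm_seconds_per_query * queries_per_day / 3600) / 1000

print(f"gaming session : {game_kwh:.3f} kWh")   # ~0.600 kWh
print(f"50 LLM queries : {llm_kwh:.3f} kWh")    # ~0.042 kWh
```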

[–] [email protected] 2 points 3 hours ago

From what I know, local LLMs take minutes to process a single prompt, not seconds, but I guess that depends on the use case.

As for games, I'm not sure most of them max out the GPU. I maxed mine out for crypto mining, and that was power hungry. So I would put LLMs closer to crypto than games.

Not to mention games will entertain you way more for the same time.

[–] Sixtyforce 1 points 10 hours ago

Curious how resource intensive AI subtitle generation will be. Probably fine on some setups.

Trying to use madVR (tweaker's video postprocessing) in the summer in my small office with an RTX 3090 was turning my office into a sauna. Next time I buy a video card it'll be a lower tier deliberately to avoid the higher power draw lol.

[–] [email protected] 37 points 22 hours ago

Yeah, transcription is one of the only good uses for LLMs, IMO. Of course they can still produce nonsense, but bad subtitles are better than none at all.

[–] [email protected] 17 points 22 hours ago

Indeed, YouTube has had auto-generated subtitles for a while now, and they are far from perfect, yet I still find them useful.

[–] [email protected] 5 points 13 hours ago (2 children)

This is great timing considering the recent Open Subtitles fiasco.

[–] [email protected] 6 points 12 hours ago (1 children)
[–] [email protected] 3 points 11 hours ago (1 children)

Open Subtitles now only allows 5 downloads per 24 hours per IP. You have to pay for more.

[–] [email protected] 1 points 46 minutes ago

Kind of annoying when searching for the exact sub file for the movie file you have.

Especially when half those subtitle files appear to be AI generated anyway, or have weird Asian gambling ads shoved in.

Glad MKV seems to be the standard now, with subs from the original sources included.

[–] [email protected] 2 points 10 hours ago* (last edited 10 hours ago)
[–] [email protected] 13 points 17 hours ago (2 children)

I don't mind the idea, but I would be curious where the training data comes from. You can't just train the models on users' (unsubtitled) videos, because you need subtitles to know whether the output is right or wrong. I checked their Twitter post, but it didn't seem to help.

[–] [email protected] 15 points 15 hours ago (1 children)

Subtitles aren't a unique dataset; it's just audio-to-text.

[–] [email protected] 10 points 14 hours ago (1 children)

They may have to give it some special training to be able to understand audio mixed by the Chris Nolan school of wtf are they saying.

[–] [email protected] 2 points 12 hours ago

No, if you have a center track you can just use that. Volume isn't a problem for a computer listening to the audio, since it isn't going through physical speakers.
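
If anyone wants to try that, here's a rough sketch of extracting just the front-center (dialogue) channel with ffmpeg before handing it to a speech-to-text model; file names are placeholders:

```python
import subprocess

# The pan filter keeps only the front-center (FC) channel, where dialogue usually sits.
subprocess.run(
    ["ffmpeg", "-i", "movie.mkv", "-af", "pan=mono|c0=FC", "center.wav"],
    check=True,
)
```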

[–] [email protected] 8 points 16 hours ago

I hope they're using Open Subtitles, or one of the many academic Speech To Text datasets that exist.

[–] [email protected] 81 points 1 day ago (8 children)

Et tu, Brute?

VLC automatic subtitles generation and translation based on local and open source AI models running on your machine working offline, and supporting numerous languages!

Oh, so it's basically like YouTube's auto-generated subtitles. Never mind.

[–] [email protected] 61 points 1 day ago (13 children)

Hopefully better than YouTube's; those are often pretty bad, especially for non-English videos.

[–] [email protected] 11 points 17 hours ago (2 children)

YouTube's removal of community captions was the first time I really started to hate YouTube's management. They removed an accessibility feature for no good reason, making my experience with it significantly worse. I still haven't found a replacement for it (at least, one that actually works).

[–] [email protected] 10 points 17 hours ago

And if you are forced to use the auto-generated ones, remember: no [__] swearing either! As we all know, disabled people are small children who need to be coddled!

[–] [email protected] 1 points 11 hours ago

Same here. It kick-started my hatred of YouTube, and they continued to make poor decision after poor decision.

[–] [email protected] 19 points 23 hours ago (1 children)

They're awful for English videos too, IMO. For anyone with any kind of accent (read: literally anyone whose accent differs from the team that developed the auto-captioning), it makes egregious errors; it's exceptionally bad with Australian, New Zealand, English, Irish, Scottish, Southern US, and North-Eastern US accents. In my experience "using" it, I find it nigh unusable.

[–] [email protected] 59 points 23 hours ago

All hail the peak humanity levels of VLC devs.

FOSS FTW

[–] [email protected] 21 points 20 hours ago (1 children)

And yet they still can't seek backwards

[–] [email protected] 26 points 18 hours ago (2 children)

IIRC this is because of how they've optimized the file-reading process; it genuinely might be more work to add efficient frame-by-frame backwards seeking than this AI subtitle feature.

That said, jfc please just add backwards seeking. It is so painful to use VLC for reviewing footage. I don't care how "inefficient" it is; my computer can handle any operation on a 100 MB file.

[–] [email protected] 9 points 14 hours ago (1 children)

If you have time to read the issue thread about it, it's infuriating. Multiple viable suggestions are dismissed because they don't work in certain edge cases where no method at all could possibly work, and for which they could simply fail gracefully.

[–] [email protected] 6 points 13 hours ago (2 children)

That kind of attitude in development drives me absolutely insane. See also: support for DHCPv6 in Android. There's a thread that has been raging for, I think, over a decade now.

[–] [email protected] 1 points 6 hours ago

I now know more about Android IPv6 than ever before

[–] [email protected] 2 points 12 hours ago

Same with simply allowing pause on click... Luckily an extension exists, but it's sad that you need one.

[–] [email protected] 41 points 23 hours ago (4 children)

I know AI has some PR issues at the moment but I can’t see how this could possibly be interpreted as a net negative here.

In most cases, people will go for (manually) written subtitles rather than autogenerated ones, so the use case here would most often be where there aren't better, human-created subs available.

I just can’t see AI / autogenerated subtitles of any kind taking jobs from humans because they will always be worse/less accurate in some way.

[–] [email protected] 7 points 19 hours ago

I can’t see how this could possibly be interpreted as a net negative here

Not judging this as bad or good, but if generation runs offline, it will for sure bloat the size of the program.

[–] [email protected] 16 points 22 hours ago (1 children)

Autogenerated subtitles are pretty awesome for subtitle editors I'd imagine.

[–] [email protected] 23 points 22 hours ago (7 children)

Even if it gets the words wrong but the timestamps right, it'd still save a lot of time.

[–] [email protected] 33 points 22 hours ago

Solving problems related to accessibility is a worthy goal.

[–] [email protected] 17 points 21 hours ago

It's nice to see a good application of AI. I hope my low-end stuff will be able to run it.
