this post was submitted on 14 Dec 2023
submitted 10 months ago* (last edited 10 months ago) by [email protected] to c/[email protected]
 

TL;DR: for stuff that is NOT from Sonarr/Radarr (e.g. downloaded a long time ago, gotten from friends, RSS feeds, whatever), is there a better way to find subs than downloading everything from manual DDL sites and trying each one until something works (matching English text and correctly synced)?

I am not currently using Bazarr, and I understand it can catch anything from Sonarr that is missing subs, but that is not the use-case I need. I am still open to it, but since most of the new stuff I get already has subs, I'm looking more at the stuff that is NOT coming from Sonarr, because that's where I have the most missing subs. And since their GitHub says:

"Be aware that Bazarr doesn't scan disk to detect series and movies: It only takes care of the series and movies that are indexed in Sonarr and Radarr."

I'm thinking most of my use-case is going to be manual searches. It also sounds like Bazarr uses the same kind of DDL sites that I am already using (opensubtitles, subscene) as its backend/source, so I'm curious whether there is any advantage over looking old stuff up on those sites directly.

And especially: is there some way to match existing files with the correct subs even when the file/folder names no longer contain the release group (e.g. via duration or other mediainfo data, or maybe even via checksums)? I know VLC can do it for a single file, but since I have a LOT of stuff with missing subs, I'm looking for a way to do something similar from a bash script or some other bulk job without ending up with a bunch of unsynced subs.
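One bulk-friendly option along those lines: OpenSubtitles supports lookup by its documented "moviehash", a checksum computed from the file size plus the first and last 64 KiB of the file, so it matches the exact release regardless of what the file is named. A minimal sketch of computing it in Python (the print loop is just illustrative; wiring the hash into an actual OpenSubtitles API query is left out):

```python
import os
import struct

CHUNK = 65536  # the moviehash spec samples the first and last 64 KiB


def opensubtitles_hash(path):
    """Compute the OpenSubtitles 'moviehash' for a media file.

    hash = filesize + sum of the first and last 64 KiB interpreted as
    little-endian uint64 words, truncated to 64 bits. The spec rejects
    files smaller than 128 KiB.
    """
    size = os.path.getsize(path)
    if size < CHUNK * 2:
        raise ValueError("file too small for moviehash")
    h = size
    with open(path, "rb") as f:
        for offset in (0, size - CHUNK):
            f.seek(offset)
            buf = f.read(CHUNK)
            # 8192 little-endian unsigned 64-bit words per 64 KiB chunk
            for (word,) in struct.iter_unpack("<Q", buf):
                h = (h + word) & 0xFFFFFFFFFFFFFFFF
    return f"{h:016x}"


if __name__ == "__main__":
    import sys

    for p in sys.argv[1:]:
        print(opensubtitles_hash(p), p)
```

This could be called from a bash loop over a media directory; whether a given sub site exposes hash search for your content is another question, but for renamed files it's about the only name-independent handle you get.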

[–] [email protected] 8 points 10 months ago (6 children)

I got annoyed at not finding CC for the media I have dubbed: if a show/movie is originally in English and I have it in Spanish, the Spanish subtitles aren't transcriptions of the Spanish audio but translations of the English audio, so they usually don't match.
(Tom Scott recently made a video about this issue: https://youtu.be/pU9sHwNKc2c)

I found and have been using this project: https://github.com/jhj0517/Whisper-WebUI
It's been pretty good; for YouTube videos (10-30 minutes) it has been perfect.
But there are some issues when I tried it with movies: the timings are not great, and it sometimes hallucinates words in parts where there aren't any. Only a few words end up actually wrong/missing. (I tried it with faster-whisper since I don't have that much RAM.)
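For anyone wiring this into a batch job instead of the WebUI: whichever Whisper frontend you use, you end up with (start, end, text) segments, and the fiddly part is rendering them in the SRT timestamp format (HH:MM:SS,mmm). A small stdlib-only sketch, where the demo segments are made-up stand-ins for real transcription output:

```python
def srt_timestamp(seconds):
    """Format a time in seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"


def segments_to_srt(segments):
    """Render an iterable of (start, end, text) tuples as an SRT document."""
    blocks = []
    for i, (start, end, text) in enumerate(segments, start=1):
        blocks.append(
            f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text.strip()}\n"
        )
    return "\n".join(blocks)


if __name__ == "__main__":
    # Hypothetical segments, not real Whisper output.
    demo = [(0.0, 2.5, "Hello there."), (2.5, 5.0, "General Kenobi.")]
    print(segments_to_srt(demo), end="")
```

From there, a script can loop over every video missing a matching .srt, run the transcription, and write the result next to the file, which is roughly the bulk workflow the OP is after.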

[–] [email protected] 2 points 10 months ago* (last edited 10 months ago)

thanks for the suggestion. i was completely unaware of the whisper project, and even if it doesn't help much for movies, it might come in real handy for some of the OTA rips I have from my friends (was pretty sure I was SOL for those, but this seems like a decent option).

sounds like it can even be run entirely offline, so even better.
