Fork of Bunkai, which reimplemented the functionality first pioneered by cb4960 with subs2srs.
**Basic subs2srs functionality**
$ langkit subs2cards media.mp4 media.th.srt media.en.srt
**Automatic subtitle selection** *(here: learn Brazilian Portuguese from Cantonese or, if unavailable, Traditional Chinese)*
$ langkit subs2cards media.mp4 -l "pt-BR,yue,zh-Hant"
**Bulk processing (recursive)**
$ langkit subs2cards /path/to/media/dir/ -l "th,en"
**Make an audiotrack with enhanced/amplified voices from the 2nd audiotrack of the media** *(Replicate API token needed)*
$ langkit enhance media.mp4 -a 2 --sep demucs
**Make a dubtitle of the media using STT on the timecodes of a provided subtitle file** *(Replicate API token needed)*
$ langkit subs2dubs --stt whisper media.mp4 (media.th.srt) -l "th"
**Combine all of the above in one command**
$ langkit subs2cards /path/to/media/dir/ -l "th,en" --stt whisper --sep demucs
This fork requires FFmpeg v6 or higher (dev builds preferred), MediaInfo, and a Replicate API token.
The FFmpeg dev team recommends that end users use only the latest builds from the dev branch (master builds). The FFmpeg binary's location can be provided through a flag, found in $PATH, or it can be placed in a "bin" directory in the folder where langkit is located.
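For instance, a self-contained layout using the "bin" directory approach could look like this (names purely illustrative; on Windows the binaries would carry an .exe extension):

    some-folder/
        langkit        (the langkit executable)
        bin/
            ffmpeg     (static FFmpeg build)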
At the moment, tokens should be passed through these environment variables: REPLICATE_API_TOKEN, ASSEMBLYAI_API_KEY, ELEVENLABS_API_TOKEN.
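For example, in a POSIX shell you could export only the token(s) you actually need before running langkit (values are placeholders; the token-to-feature mapping is my reading of the tables further down):

$ export REPLICATE_API_TOKEN="r8_..."   # Replicate-hosted models (whisper, demucs, spleeter...)
$ export ASSEMBLYAI_API_KEY="..."       # only if using --stt u1
$ export ELEVENLABS_API_TOKEN="..."     # only if using --sep elevenlabs
$ langkit subs2cards media.mp4 media.th.srt media.en.srt --stt whisper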
Use modern codecs to save storage: the image/audio codecs langkit uses are state-of-the-art and still under active development.
Static FFmpeg dev builds guarantee that you have up-to-date codecs. If you don't use a well-maintained, bleeding-edge distro or brew, use the dev builds. You can check your distro here.
Translations of dubbings and of subtitles differ.¹ Therefore, dubbings can't be used with subtitles in the old subs2srs unless said subs are closed captions or dubtitles.
With the flag --stt you can run Whisper (large-v3) on the audio clips corresponding to the timecodes of the subtitles to get a transcript of the audio and then have it replace the translation of the subtitles. AFAIK Language Reactor was the first to combine this with language learning from content; however, I found the accuracy of the STT they use unimpressive.
By default, a dubtitle file will also be created from these transcriptions.
Name (to be passed with --stt) | Average Word Error Rate across all supported languages (June 2024) | Number of languages supported | Price | Type | Note |
---|---|---|---|---|---|
whisper, wh | 10.3% | 57 | $1.1/1000min | MIT | See here for a breakdown of WER per language. |
insanely-fast-whisper, fast | 16.2% | 57 | $0.0071/run | MIT | |
universal-1, u1 | 8.7% | 17 | $6.2/1000min | proprietary | Untested (doesn't support my target language) |
See ArtificialAnalysis and Amgadoz @Reddit for detailed comparisons.
Note: OpenAI just released a turbo model of large-v3, but they say it's on a par with large-v2 as far as accuracy is concerned, so I won't bother adding it.
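Whichever model you choose is selected by passing its name or short alias from the table above to --stt; for instance, reusing the subs2dubs invocation from the tl;dr section with insanely-fast-whisper:

$ langkit subs2dubs --stt fast media.mp4 media.th.srt -l "th"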
langkit will automatically make an audio file containing all the audio snippets of dialog in the audiotrack.
This is meant to be used for passive listening.
More explanations and context here: Optimizing Passive Immersion: Condensed Audio - YouTube
Make a new audiotrack in which the voices are louder. This is very useful for languages that are phonetically dense, such as tonal languages, or for languages that sound very different from your native language.
It works by merging the original audiotrack with an audiotrack containing the voices only.
The separated voices are obtained using one of these:
Name (to be passed with --sep) | Quality of separated vocals | Price | Type | Note |
---|---|---|---|---|
demucs, de | good | very cheap: $0.063/run | MIT license | Recommended |
demucs_ft, ft | good | cheap: $0.252/run | MIT license | Fine-tuned version: "take 4 times more time but might be a bit better". I couldn't hear any difference from the original in my test. |
spleeter, sp | rather poor | very, very cheap: $0.00027/run | MIT license | |
elevenlabs, 11, el | good | very, very expensive: $1/minute | proprietary | Not fully supported due to limitations of their API (mp3 only), which desyncs the processed audio from the original. Requires an ElevenLabs API token. Does more processing than the others: noises are entirely eliminated, but it distorts the soundstage to put the voice in the center, which might feel a bit uncanny in an enhanced track. |
Note
demucs and spleeter are originally meant for songs (i.e. tracks a few minutes long) and the GPUs allocated by Replicate to these models are not the best. You may encounter GPU OOM (out of memory) errors when trying to process the audio tracks of movies. As far as my testing goes, trying again a few hours later solves the problem.
Replicate also offers to make deployments with a GPU of one's choice, but this isn't cost-effective or user-friendly so it probably won't ever be supported.
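As with --stt, the separation model is selected by passing its name or short alias from the table above to --sep; for instance, reusing the enhance invocation from the tl;dr section with the fine-tuned demucs:

$ langkit enhance media.mp4 -a 2 --sep ft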
By default, all available CPU cores are used. You can reduce CPU usage by passing a lower --workers value than the default.
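For instance, on an 8-core machine the following should roughly halve CPU usage (the default value equals the number of available cores):

$ langkit subs2cards /path/to/media/dir/ -l "th,en" --workers 4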
...if you pass a directory instead of an mp4. The target and native languages must be set using -l; see the tl;dr section.
There are plenty of good options already: Language Reactor (previously Language Learning With Netflix), asbplayer, mpvacious, voracious, memento...
Here is a list: awesome-immersion
They are awesome, but all of them are media-centric: they are built around watching shows.
The approach here is word-centric:
- word-centric notes referencing all common meanings: I cross-source dictionaries and LLMs to map the meanings, connotations and register of a word. Then I use another tool to search my database of generated TSVs to illustrate and disambiguate the meanings I have found with real-world examples. This results in high-quality notes regrouping all example sentences, TTS, pictures... and any other fields related to the word, allowing for maximum context.
- word-note reuse for language laddering: another advantage of this approach is that you can use this very note as the basis for making cards for a new target language further down the line, while keeping all your previous note fields at hand for making the card template of your new target language. The initial language acts just like a Note ID for a meaning mapped across multiple languages. The majority of basic vocabulary can be translated across languages directly with no real loss of meaning (and you can go on to disambiguate it further, using the method above for example). The effort you spend on your first target language will thus pay off on subsequent languages.
There are several additional tools I made to accomplish this but they are hardcoded messes so don't expect me to publish them, langkit is enough work for me by itself! :)
All new contributions from commit d540bd4 onward are licensed under GPL-3.0.
See original README of bunkai below for the basic features:
Dissects subtitles and corresponding media files into flash cards for sentence mining with an SRS system like Anki. It is inspired by the linked article on sentence mining and existing tools, which you might want to check out as well.
- One or two subtitle files: Two subtitle files can be used together to provide foreign and native language expressions on the same card.
- Multiple subtitle formats: Any format which is supported by go-astisub is also supported by this application, although some formats may work slightly better than others. If in doubt, try to use .srt subtitles.
There is no proper release process at this time, nor a guarantee of stability of any sort, as I'm the only user of the software that I am aware of. For now, you must install the application from source.
Requirements:
- go command in PATH (only to build and install the application)
- ffmpeg command in PATH (used at runtime)
go get github.com/tassa-yoniso-manasi-karoto/langkit
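Note that on Go 1.17 and later, installing executables with go get is deprecated in favor of go install with a version suffix. If the command above fails, something along these lines should work instead (assuming the main package sits at the module root):

go install github.com/tassa-yoniso-manasi-karoto/langkit@latest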
langkit is mainly used to generate flash cards from one or two subtitle files and a corresponding media file.
For example:
langkit subs2cards media-content.mp4 foreign.srt native.srt
The above command generates the tab-separated file foreign.tsv and a corresponding directory foreign.media/ containing the associated images and audio files. To do sentence mining, import the file foreign.tsv into a new deck and then, at least in the case of Anki, copy the media files manually into Anki's collection.media directory.
Before you can import the deck with Anki though, you must add a new Note Type which includes some or all of the fields below on the front and/or back of each card. The columns in the generated .tsv file are as follows:
# | Name | Description |
---|---|---|
1 | Sound | Extracted audio as a [sound] tag for Anki |
2 | Time | Subtitle start time code as a string |
3 | Source | Base name of the subtitle source file |
4 | Image | Selected image frame as an <img> tag |
5 | ForeignCurr | Current text in foreign subtitles file |
6 | NativeCurr | Current text in native subtitles file |
7 | ForeignPrev | Previous text in foreign subtitles file |
8 | NativePrev | Previous text in native subtitles file |
9 | ForeignNext | Next text in foreign subtitles file |
10 | NativeNext | Next text in native subtitles file |
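For illustration only, a single row of the generated .tsv might look like the following, with the tab separators rendered here as "|" and every value (file names, time code, text) purely hypothetical:

[sound:foreign_0001.mp3] | 00:01:23.456 | foreign | <img src="foreign_0001.jpg"> | สวัสดีครับ | Hello. | (previous foreign line) | (previous native line) | (next foreign line) | (next native line)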
When you review the created deck for the first time, you should go quickly through the entire deck at once. During this first pass, your goal should be to identify those cards which you can understand almost perfectly, if not for the odd piece of unknown vocabulary or grammar; all other cards which are either too hard or too easy should be deleted in this pass. Any cards which remain in the imported deck after mining should be refined and moved into your regular deck for studying the language on a daily basis.
For other uses, run langkit --help to view the built-in documentation.
The state of affairs when it comes to open-source subtitle editors is a sad one, but here's a list of editors which may or may not work passably. If you know a good one, please let me know!
Name | Platforms | Description |
---|---|---|
Aegisub | macOS & others | Seems to have been a popular choice, but is no longer actively maintained. |
Jubler | macOS & others | Works reasonably well, but fixing timing issues is still somewhat cumbersome. |
There are at least three alternatives to this application that I know of, by now. Oddly enough, I found substudy just after the prototype and movies2anki when I published this repository. Something is off with my search skills! :)
- movies2anki: Fully-integrated add-on for Anki which has some advanced features and supports all platforms
- substudy: CLI alternative to subs2srs with the ability to export into other formats as well, not just SRS decks
- subs2srs: GUI software for Windows with many features, and inspiration for substudy and Bunkai