# videocr

Extract hardcoded subtitles from videos using the [Tesseract](https://github.com/tesseract-ocr/tesseract) OCR engine with Python.

Input a video with hardcoded subtitles:

<p float="left">
  <img width="430" alt="screenshot" src="https://user-images.githubusercontent.com/10210967/56873658-3b76dd00-6a34-11e9-95c6-cd6edc721f58.png">
  <img width="430" alt="screenshot" src="https://user-images.githubusercontent.com/10210967/56873659-3b76dd00-6a34-11e9-97aa-2c3e96fe3a97.png">
</p>

```python
import videocr

print(videocr.get_subtitles('video.avi', lang='chi_sim+eng', sim_threshold=70))
```

Output:

```
0
00:00:01,042 --> 00:00:02,877
喝 点 什么 ?
What can I get you?

1
00:00:03,044 --> 00:00:05,463
我 不 知道
Um, I'm not sure.

2
00:00:08,091 --> 00:00:10,635
休闲 时 光 …
For relaxing times, make it...

3
00:00:10,677 --> 00:00:12,595
三 得 利 时 光
Bartender, Bob Suntory time.

4
00:00:14,472 --> 00:00:17,142
我 要 一 杯 伏特 加
Un, I'll have a vodka tonic.

5
00:00:18,059 --> 00:00:19,019
谢谢
Laughs Thanks.

```

## Performance

The OCR process runs in parallel and is CPU-intensive. Extracting a 20-second video takes about 3 minutes on my dual-core laptop; you may want more CPU cores for longer videos.

## API

```python
videocr.get_subtitles(
    video_path: str, lang='eng', time_start='0:00', time_end='',
    conf_threshold=65, sim_threshold=90, use_fullframe=False)
```

Return the subtitles as a single string in SRT format.
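
For example, here is a minimal sketch of working with the returned string; the file name `clip.mp4` and the 30-second window are placeholder assumptions:

```python
import videocr

# OCR only the first 30 seconds; output timestamps still refer to the full video.
srt_text = videocr.get_subtitles('clip.mp4', lang='eng', time_start='0:00', time_end='0:30')

# The result is an ordinary SRT-formatted string, so it can be inspected or post-processed freely.
print(srt_text.split('\n\n')[0])  # print the first subtitle block
```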

```python
videocr.save_subtitles_to_file(
    video_path: str, file_path='subtitle.srt', lang='eng', time_start='0:00',
    time_end='', conf_threshold=65, sim_threshold=90, use_fullframe=False)
```

Write subtitles to `file_path`. If the file does not exist, it will be created automatically.
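
For example, a minimal sketch (again with a placeholder `clip.mp4`) that produces an SRT file in one call:

```python
import videocr

# Saves the recognised subtitles to clip.srt, creating the file if it does not exist.
videocr.save_subtitles_to_file('clip.mp4', file_path='clip.srt', lang='eng')
```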

### Parameters

- `lang`

  The language of the subtitles in the video. All language codes on [this page](https://github.com/tesseract-ocr/tesseract/wiki/Data-Files#data-files-for-version-400-november-29-2016) (e.g. `'eng'` for English) and all script names in [this repository](https://github.com/tesseract-ocr/tessdata_fast/tree/master/script) (e.g. `'HanS'` for simplified Chinese) are supported.

  Note that you can use more than one language. For example, `'hin+eng'` means using Hindi and English together for recognition. More details are available in the [Tesseract documentation](https://github.com/tesseract-ocr/tesseract/wiki/Command-Line-Usage#using-multiple-languages).

  Language data files will be automatically downloaded to your `$HOME/tessdata` directory when necessary. You can read more about Tesseract language data files on their [wiki page](https://github.com/tesseract-ocr/tesseract/wiki/Data-Files).

- `time_start` and `time_end`

  Extract subtitles from only a part of the video. The subtitle timestamps are still calculated according to the full video length.

- `conf_threshold`

  Confidence threshold for word predictions. Words with lower confidence than this threshold are discarded. The default value is fine for most cases.

  Make it closer to 0 if you get too few words in the predictions, or closer to 100 if you get too many spurious words.

- `sim_threshold`

  Similarity threshold for subtitle lines. Neighbouring subtitle lines whose [Levenshtein](https://en.wikipedia.org/wiki/Levenshtein_distance) ratio is higher than this threshold will be merged together. The default value is fine for most cases.

  Make it closer to 0 if you get too many duplicated subtitle lines, or closer to 100 if you get too few subtitle lines.

- `use_fullframe`

  By default, only the bottom half of each frame is used for OCR. You can explicitly use the full frame if your subtitles are not within the bottom half of each frame. A combined example using these parameters follows below.
104