CN107480670A - Method and apparatus for caption extraction - Google Patents
Method and apparatus for caption extraction
- Publication number
- CN107480670A CN107480670A CN201610404764.6A CN201610404764A CN107480670A CN 107480670 A CN107480670 A CN 107480670A CN 201610404764 A CN201610404764 A CN 201610404764A CN 107480670 A CN107480670 A CN 107480670A
- Authority
- CN
- China
- Prior art keywords
- captions
- caption
- frame
- character
- caption area
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/62—Text, e.g. of license plates, overlay texts or captions on TV images
- G06V20/635—Overlay text, e.g. embedded captions in a TV program
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Television Systems (AREA)
Abstract
The invention discloses a caption-extraction method, comprising: detecting whether a given video frame contains captions; locating the captions and producing a bounding box that encloses them; enhancing the characters in the caption region; adaptively segmenting the characters in the video caption region into single characters; and recognizing the segmented single characters by OCR to obtain the text associated with the current video frame. The invention also discloses a caption-extraction device. With the technical solution provided by the invention, massive numbers of video programs can be classified and managed automatically, replacing the manual cataloguing and search of conventional video programs, improving efficiency and improving the accuracy of the extracted caption information.
Description
Technical field
The present invention relates to the field of video analysis, and in particular to a method and apparatus for caption extraction.
Background technology
Television programmes are one of the important channels through which people obtain information, and the main title in a video often summarizes the video's main content. If captions can be recognized automatically, this is of great benefit for classifying, organizing and quickly searching videos. In the information society, people face enormous numbers of digital images and news items, which makes it increasingly difficult to find the news content they are interested in; and since time and effort are limited, identifying these captions manually is extremely time-consuming and laborious. Moreover, the volume of data is huge and caption extraction is comparatively difficult: when video images are complex, with many features resembling captions and varied caption motion, accurate localization is even harder. A video search engine is therefore urgently needed so that users can find the topics they like efficiently and accurately, which raises the question of how to improve the accuracy of the extracted caption information.
Summary of the invention
In view of this, it is an object of the present invention to provide a method and apparatus for caption extraction that classify and manage massive numbers of video programs automatically, replace the manual cataloguing and search of conventional video programs, improve efficiency, and improve the accuracy of the extracted caption information. In order to provide a basic understanding of some aspects of the disclosed embodiments, a brief summary is given below. This summary is not an extensive overview, and is intended neither to identify key or critical components nor to delimit the scope of protection of these embodiments. Its sole purpose is to present some concepts in a simple form as a prelude to the detailed description that follows.
It is an object of the present invention to provide a caption-extraction method, including:
detecting whether a given video frame contains captions;
locating the captions and producing a bounding box that encloses them;
enhancing the characters in the caption region;
adaptively segmenting the characters in the video caption region into single characters;
recognizing the segmented single characters by OCR to obtain the text associated with the current video frame.
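The five steps above can be sketched as a minimal pipeline. This is an illustrative NumPy sketch, not the patented implementation: `detect_caption_frame` and `locate_caption` are hypothetical stand-ins, the edge-energy threshold is an assumed value, captions are simply assumed to sit in the lower third of the frame, and the enhancement and OCR stages are omitted.

```python
import numpy as np

def detect_caption_frame(frame: np.ndarray) -> bool:
    """Crude stand-in: flag frames whose lower third has high edge energy."""
    band = frame[2 * frame.shape[0] // 3:, :]
    edges = np.abs(np.diff(band.astype(np.int32), axis=1))
    return edges.mean() > 10  # threshold is an illustrative assumption

def locate_caption(frame: np.ndarray):
    """Return a bounding box (top, bottom, left, right); here just the lower band."""
    h, w = frame.shape
    return (2 * h // 3, h, 0, w)

def extract_caption_region(frame: np.ndarray):
    """Sketch of the claimed pipeline: detect -> locate -> crop (enhance/segment/OCR omitted)."""
    if not detect_caption_frame(frame):
        return None
    top, bottom, left, right = locate_caption(frame)
    return frame[top:bottom, left:right]

# Synthetic grayscale frame: flat background with a busy "caption" band.
frame = np.full((90, 120), 50, dtype=np.uint8)
frame[60:, ::2] = 200  # alternating bright columns mimic text strokes
region = extract_caption_region(frame)
```

The cropped region would then feed the enhancement, segmentation and OCR stages described below.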
In some optional embodiments, detecting whether a given frame contains captions specifically includes the following steps:
scene-change detection;
detection of the frames in which captions or titles appear and disappear;
computing the difference image between the caption (or title) appearing frame and its previous frame;
feature extraction and classification;
generation of the caption region;
verification of the caption region.
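The difference-image step (comparing a frame in which a caption appears with its previous frame) can be sketched as follows. The per-pixel threshold and the changed-pixel ratio are illustrative assumptions, not values from the patent.

```python
import numpy as np

def difference_image(curr: np.ndarray, prev: np.ndarray, thresh: int = 30) -> np.ndarray:
    """Binary difference image between a frame and its predecessor."""
    diff = np.abs(curr.astype(np.int32) - prev.astype(np.int32))
    return (diff > thresh).astype(np.uint8)

def caption_appeared(curr: np.ndarray, prev: np.ndarray,
                     min_changed_ratio: float = 0.02) -> bool:
    """Flag a caption-appearing frame when enough pixels changed at once."""
    changed = difference_image(curr, prev)
    return changed.mean() > min_changed_ratio

prev = np.zeros((60, 80), dtype=np.uint8)
curr = prev.copy()
curr[45:55, 10:70] = 255  # a caption band pops in between the two frames
```

In a real detector the changed region would then be passed to feature extraction and classification.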
In some optional embodiments, locating the position of the captions includes:
selecting caption features that can distinguish captions from the background;
extracting the caption features with an algorithm;
clustering spatially adjacent feature points into regions;
removing regions that are unlikely to be captions, using other features of the captions, to obtain candidate caption regions;
verifying the candidate caption regions with further caption features to obtain the true caption regions.
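One hedged way to realize "clustering spatially adjacent feature points into regions" is a horizontal edge-density projection: rows with dense character strokes are grouped into a candidate caption band. The ratio threshold here is an assumption for illustration, not the patent's rule.

```python
import numpy as np

def caption_rows(gray: np.ndarray, ratio: float = 0.5):
    """Locate a candidate caption band via horizontal edge-density projection."""
    edges = np.abs(np.diff(gray.astype(np.int32), axis=1))
    profile = edges.sum(axis=1)                # one edge-energy value per row
    mask = profile > ratio * profile.max()     # keep rows with dense strokes
    idx = np.flatnonzero(mask)
    return (int(idx[0]), int(idx[-1])) if idx.size else None

img = np.full((40, 60), 30, dtype=np.uint8)
img[28:34, ::3] = 220   # stroke-like vertical edges in a narrow band
band = caption_rows(img)
```

The returned row span would then be verified against further caption features (size, aspect ratio, duration) as the text describes.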
In some optional embodiments, enhancing the characters in the caption region specifically includes: single-frame caption-region enhancement and multi-frame caption-region enhancement.
In some optional embodiments, the captions are segmented from the background; specifically, adaptive segmentation into single characters is completed by dynamically distributed local-threshold binarization of the caption region, candidate-region enhancement, and vertical-projection region detection.
It is another object of the present invention to provide a caption-extraction device, characterised by including:
a caption-frame detection unit, for judging whether a given video frame contains captions;
a caption locating unit, for determining the position of the captions and producing a bounding box that encloses them;
a caption enhancement unit, for enhancing the characters in the caption region;
a caption extraction unit, for adaptively segmenting the characters in the video caption region into single characters;
a character recognition unit, for recognizing the segmented single characters by OCR to obtain the text associated with the current video frame.
In some optional embodiments, the caption-frame detection unit performs detection using a shot-detection method based on spatio-temporal slices, which comprises the following steps:
scene-change detection;
detection of the frames in which captions or titles appear and disappear;
computing the difference image between the caption (or title) appearing frame and its previous frame;
feature extraction and classification;
generation of the caption region;
verification of the caption region.
In some optional embodiments, the caption locating includes:
selecting caption features that can distinguish captions from the background;
extracting the caption features with an algorithm;
clustering spatially adjacent feature points into regions;
removing regions that are unlikely to be captions, using other features of the captions, to obtain candidate caption regions;
verifying the candidate caption regions with further caption features to obtain the true caption regions.
In some optional embodiments, the caption enhancement unit is specifically used for single-frame caption-region enhancement and multi-frame caption-region enhancement.
In some optional embodiments, the caption extraction unit segments the captions using a projection method, and performs interpolation amplification, binarization and character separation on the segmented captions.
With the method and apparatus of the present invention, the following effects are obtained: relative to text search, the present invention can provide a richer display of search results; relative to ordinary text-information search, the searched video data contain richer content and information; massive numbers of video programs can be classified and managed automatically, replacing the manual cataloguing and search of conventional video programs, improving efficiency and improving the accuracy of the extracted caption information.
To the accomplishment of the foregoing and related ends, the one or more embodiments comprise the features hereinafter fully described and particularly pointed out in the claims. The following description and the annexed drawings set forth in detail certain illustrative aspects, which indicate only some of the various ways in which the principles of the embodiments may be employed. Other benefits and novel features will become apparent from the following detailed description considered in conjunction with the drawings, and the disclosed embodiments are intended to include all such aspects and their equivalents.
Description of the drawings
Fig. 1 is a flow chart of the caption-extraction method provided by the invention;
Fig. 2 is a schematic diagram of the caption-extraction device provided by the invention.
Embodiments
The following description and drawings sufficiently illustrate specific embodiments of the invention to enable those skilled in the art to practice them. Other embodiments may incorporate structural, logical, electrical, process and other changes; the embodiments represent merely possible variations. Unless explicitly required, individual components and functions are optional, and the order of operations may vary. Portions and features of some embodiments may be included in, or substituted for, those of other embodiments. The scope of the embodiments of the invention includes the full ambit of the claims, as well as all available equivalents of the claims. Herein, these embodiments of the invention may be referred to, individually or collectively, by the term "invention" merely for convenience; if more than one invention is in fact disclosed, this is not meant to limit the scope of the application automatically to any single invention or inventive concept.
Embodiment one
With reference to Fig. 1, the caption-extraction method provided by the invention includes:
Step S101, detecting whether a given video frame contains captions;
Step S102, locating the captions and producing a bounding box that encloses them;
Step S103, enhancing the characters in the caption region;
Step S104, adaptively segmenting the characters in the video caption region into single characters;
Step S105, recognizing the segmented single characters by OCR to obtain the text associated with the current video frame.
Embodiment two
The caption-extraction method provided by the invention includes:
Step S101, detecting whether a given video frame contains captions.
Preferably, this specifically includes the following steps:
Step S1011, scene-change detection;
Step S1012, detection of the frames in which captions or titles appear and disappear;
Step S1013, computing the difference image between the caption (or title) appearing frame and its previous frame;
Step S1014, feature extraction and classification;
Step S1015, generation of the caption region;
Step S1016, verification of the caption region.
Step S102, locating the captions and producing a bounding box that encloses them.
Preferably, this specifically includes the following steps:
S1021, selecting caption features that can distinguish captions from the background;
S1022, extracting the caption features with an algorithm;
S1023, clustering spatially adjacent feature points into regions;
S1024, removing regions that are unlikely to be captions, using other features of the captions, to obtain candidate caption regions;
S1025, verifying the candidate caption regions with further caption features to obtain the true caption regions.
Step S103, enhancing the characters in the caption region.
Preferably, enhancing the characters in the caption region specifically includes single-frame caption-region enhancement and multi-frame caption-region enhancement.
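Multi-frame caption-region enhancement is commonly realized by averaging co-located frames while the caption stays static: the text remains sharp while moving background and noise are smoothed away. The sketch below works under that assumption; the patent's "improved multi-frame averaging" is not specified in detail.

```python
import numpy as np

def multi_frame_average(frames) -> np.ndarray:
    """Average co-located caption frames: static text stays sharp,
    moving background averages out and noise is reduced."""
    stack = np.stack([f.astype(np.float32) for f in frames])
    return stack.mean(axis=0).astype(np.uint8)

rng = np.random.default_rng(0)
frames = []
for _ in range(8):
    f = rng.integers(0, 256, size=(20, 40)).astype(np.uint8)  # noisy background
    f[12:18, 5:35] = 255                                      # static caption band
    frames.append(f)
avg = multi_frame_average(frames)
```

Averaging N frames reduces the standard deviation of uncorrelated noise by roughly a factor of sqrt(N), which is why the background region of `avg` is much flatter than any single frame.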
Step S104, adaptively segmenting the characters in the video caption region into single characters.
Preferably, the captions are segmented from the background; specifically, adaptive segmentation into single characters is completed by dynamically distributed local-threshold binarization of the caption region, candidate-region enhancement, and vertical-projection region detection.
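The vertical-projection detection that completes the single-character segmentation can be sketched as follows: after binarization, columns containing no foreground pixels are treated as gaps between characters. This is an illustrative reading of the step, not the patent's exact algorithm.

```python
import numpy as np

def split_characters(binary: np.ndarray):
    """Split a binarized caption line into characters by vertical projection:
    columns with no foreground pixels are gaps between characters."""
    profile = binary.sum(axis=0)          # foreground count per column
    fg = profile > 0
    spans, start = [], None
    for x, on in enumerate(fg):
        if on and start is None:
            start = x                     # a character begins
        elif not on and start is not None:
            spans.append((start, x))      # a character ends at the gap
            start = None
    if start is not None:
        spans.append((start, len(fg)))    # character touching the right edge
    return spans

line = np.zeros((10, 30), dtype=np.uint8)
line[:, 2:6] = 1    # character 1
line[:, 9:14] = 1   # character 2
line[:, 20:27] = 1  # character 3
chars = split_characters(line)
```

Each returned `(start, end)` column span would be cropped out and passed individually to OCR in step S105.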
Step S105, recognizing the segmented single characters by OCR to obtain the text associated with the current video frame.
Embodiment three
With reference to Fig. 2, the invention provides a caption-extraction device, including: a caption-frame detection unit 10, a caption locating unit 20, a caption enhancement unit 30, a caption extraction unit 40 and a character recognition unit 50. Specifically:
Caption-frame detection unit 10, for judging whether a given video frame contains captions.
Preferably, the caption-frame detection unit 10 performs detection using a shot-detection method based on spatio-temporal slices, which comprises the following steps:
scene-change detection;
detection of the frames in which captions or titles appear and disappear;
computing the difference image between the caption (or title) appearing frame and its previous frame;
feature extraction and classification;
generation of the caption region;
verification of the caption region.
Caption locating unit 20, for determining the position of the captions and producing a bounding box that encloses them.
Preferably, the caption locating unit 20:
selects caption features that can distinguish captions from the background;
extracts the caption features with an algorithm;
clusters spatially adjacent feature points into regions;
removes regions that are unlikely to be captions, using other features of the captions, to obtain candidate caption regions;
verifies the candidate caption regions with further caption features to obtain the true caption regions.
Caption enhancement unit 30, for enhancing the characters in the caption region.
Preferably, the caption enhancement unit 30 is specifically used for single-frame caption-region enhancement and multi-frame caption-region enhancement.
Caption extraction unit 40, for adaptively segmenting the characters in the video caption region into single characters.
Preferably, the caption extraction unit 40 segments the captions using a projection method, and performs interpolation amplification, binarization and character separation on the segmented captions.
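The interpolation amplification applied to segmented captions before binarization can be illustrated with a nearest-neighbour upscale. Real systems would more likely use bilinear or bicubic interpolation; this is only a sketch of the idea that small caption crops are enlarged so OCR sees thicker, better-resolved strokes.

```python
import numpy as np

def upscale(img: np.ndarray, factor: int = 2) -> np.ndarray:
    """Nearest-neighbour interpolation amplification: each pixel becomes
    a factor-by-factor block in the enlarged image."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

char = np.array([[0, 255],
                 [255, 0]], dtype=np.uint8)   # a tiny 2x2 "glyph"
big = upscale(char, 3)                        # 6x6 enlarged glyph
```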
Character recognition unit 50, for recognizing the segmented single characters by OCR to obtain the text associated with the current video frame.
Embodiment four
To make the principles, characteristics and advantages of the invention clearer, they are described with reference to a specific embodiment.
Before video caption detection can begin, the video stream must first be cut into individual video frame images, and caption detection is then carried out on those images. Caption-frame detection is generally applied to an image sequence to detect whether a given frame contains captions; at this stage there is no prior information to exploit, because it is not known whether the given image contains captions at all. The extracted video frames are subjected to caption-region detection to judge whether a caption region is present and to detect its position; the caption region is then segmented out and passed to the next module for further processing, while a bounding box enclosing the captions is produced. Although the bounding box gives the exact position of the captions, to facilitate recognition the captions must also be segmented from the background: caption recognition requires the characters in the captions to be separated from the background. Caption locating is in essence image segmentation; the horizontal differences of the detected caption frames can be used to locate the caption rows, caption extraction is then realized from the differences between key frames, and finally post-processing is applied to improve the segmentation.
Because the resolution of the caption region may be low and the image noisy, the extracted caption image must be enhanced before being input to OCR, and then converted into a binary image. Sometimes the extracted caption region has low resolution because the background is complex or the strokes are unclear, which would degrade the subsequent processing; therefore the captions are generally enhanced before the next step to improve the resolution, for example using an improved multi-frame averaging method to average the background and reduce noise. Before OCR is performed, the input caption image is required to have clear strokes and a simple background, generally either white characters on a black background or black characters on a white background, which requires the detected caption image to first undergo binarization. Methods of caption-image binarization fall broadly into two kinds: global threshold and local threshold. In addition, after greyscale conversion the caption image sometimes suffers from a complex background or unclear strokes, giving the caption region poor resolution and degrading the binarization; to improve the binarized picture, image enhancement can be applied to the greyscale picture to raise the contrast between captions and background. The main image-enhancement methods include image sharpening, image filtering, histogram equalization and image smoothing. Finally the binarized caption picture is recognized with existing OCR software, and a caption file containing the recognition result is output.
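The global-threshold binarization mentioned above is classically done with Otsu's method, which picks the threshold that maximizes the between-class variance of the grey-level histogram. A self-contained sketch (an illustration of the standard technique, not code from the patent):

```python
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    """Global Otsu threshold: maximize between-class variance over the histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    total = hist.sum()
    mean_all = (hist * np.arange(256)).sum() / total
    best_t, best_var = 0, -1.0
    cum, cum_mean = 0.0, 0.0
    for t in range(256):
        cum += hist[t]                 # pixels at or below t
        cum_mean += t * hist[t]
        if cum == 0 or cum == total:
            continue                   # one class empty: skip
        w0 = cum / total               # weight of the dark class
        mu0 = cum_mean / cum           # mean of the dark class
        mu1 = (mean_all * total - cum_mean) / (total - cum)  # mean of the bright class
        var = w0 * (1 - w0) * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Bimodal test image: half dark background (40), half bright text (210).
gray = np.concatenate([np.full(500, 40, np.uint8),
                       np.full(500, 210, np.uint8)]).reshape(20, 50)
t = otsu_threshold(gray)
binary = (gray > t).astype(np.uint8) * 255
```

For a clearly bimodal caption image the chosen threshold falls between the two modes, separating strokes from background; on complex backgrounds the local-threshold methods discussed above are preferred.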
It should be understood that the particular order or hierarchy of steps in the disclosed processes is an example of an illustrative approach. Based on design preferences, it should be appreciated that the particular order or hierarchy of steps in the processes may be rearranged without departing from the scope of protection of the disclosure. The appended method claims present the elements of the various steps in an exemplary order, and are not limited to the particular order or hierarchy described.
In the foregoing detailed description, various features are grouped together in a single embodiment to streamline the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the appended claims reflect, the invention lies in less than all features of a single disclosed embodiment. Thus the appended claims are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate preferred embodiment of the invention.
For a software implementation, the techniques described in this application may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. The software codes may be stored in memory units and executed by processors. A memory unit may be implemented within the processor or external to the processor, in which latter case it is communicatively coupled to the processor via various means, as is known in the art.
What has been described above includes examples of one or more embodiments. It is, of course, not possible to describe every conceivable combination of components or methods for the purpose of describing the aforementioned embodiments, but one of ordinary skill in the art will recognize that many further combinations and permutations of the embodiments are possible. Accordingly, the embodiments described herein are intended to embrace all such alterations, modifications and variations that fall within the scope of the appended claims. Furthermore, to the extent that the term "includes" is used in either the specification or the claims, it is intended to be inclusive in a manner similar to the term "comprising", as "comprising" is interpreted when employed as a transitional word in a claim. In addition, any use of the term "or" in the specification or the claims is intended to mean a non-exclusive "or".
Claims (9)
- 1. A caption-extraction method, characterised by including: detecting whether a given video frame contains captions; locating the captions and producing a bounding box that encloses them; enhancing the characters in the caption region; adaptively segmenting the characters in the video caption region into single characters; and recognizing the segmented single characters by OCR to obtain the text associated with the current video frame.
- 2. The method as claimed in claim 1, characterised in that detecting whether a given frame contains captions specifically includes the following steps: scene-change detection; detection of the frames in which captions or titles appear and disappear; computing the difference image between the caption (or title) appearing frame and its previous frame; feature extraction and classification; generation of the caption region; verification of the caption region.
- 3. The method as claimed in claim 1, characterised in that locating the position of the captions includes: selecting caption features that can distinguish captions from the background; extracting the caption features with an algorithm; clustering spatially adjacent feature points into regions; removing regions that are unlikely to be captions, using other features of the captions, to obtain candidate caption regions; and verifying the candidate caption regions with further caption features to obtain the true caption regions.
- 4. The method as claimed in claim 1, characterised in that enhancing the characters in the caption region specifically includes: single-frame caption-region enhancement and multi-frame caption-region enhancement.
- 5. The method as claimed in claim 1, characterised in that the captions are segmented from the background, specifically by completing the adaptive segmentation of single characters through dynamically distributed local-threshold binarization of the caption region, candidate-region enhancement, and vertical-projection region detection.
- 6. A caption-extraction device, characterised by including: a caption-frame detection unit, for judging whether a given video frame contains captions; a caption locating unit, for determining the position of the captions and producing a bounding box that encloses them; a caption enhancement unit, for enhancing the characters in the caption region; a caption extraction unit, for adaptively segmenting the characters in the video caption region into single characters; and a character recognition unit, for recognizing the segmented single characters by OCR to obtain the text associated with the current video frame.
- 7. The device as claimed in claim 6, characterised in that the caption-frame detection unit performs detection using a shot-detection method based on spatio-temporal slices, comprising the following steps: scene-change detection; detection of the frames in which captions or titles appear and disappear; computing the difference image between the caption (or title) appearing frame and its previous frame; feature extraction and classification; generation of the caption region; verification of the caption region.
- 8. The device as claimed in claim 6, characterised in that the caption locating unit: selects caption features that can distinguish captions from the background; extracts the caption features with an algorithm; clusters spatially adjacent feature points into regions; removes regions that are unlikely to be captions, using other features of the captions, to obtain candidate caption regions; and verifies the candidate caption regions with further caption features to obtain the true caption regions.
- 9. The device as claimed in claim 6, characterised in that the caption enhancement unit is specifically used for single-frame caption-region enhancement and multi-frame caption-region enhancement.
- 10. The device as claimed in claim 6, characterised in that the caption extraction unit segments the captions using a projection method, and performs interpolation amplification, binarization and character separation on the segmented captions.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610404764.6A CN107480670A (en) | 2016-06-08 | 2016-06-08 | Method and apparatus for caption extraction |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107480670A true CN107480670A (en) | 2017-12-15 |
Family
ID=60593906
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610404764.6A Pending CN107480670A (en) | 2016-06-08 | 2016-06-08 | A kind of method and apparatus of caption extraction |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107480670A (en) |
2016
- 2016-06-08 CN CN201610404764.6A patent/CN107480670A/en active Pending
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108319917A (en) * | 2018-02-02 | 2018-07-24 | 杭州清本科技有限公司 | Key message Enhancement Method, apparatus and system, storage medium in certificate image |
CN108769776A (en) * | 2018-05-31 | 2018-11-06 | 北京奇艺世纪科技有限公司 | Main title detection method, device and electronic equipment |
CN108769776B (en) * | 2018-05-31 | 2021-03-19 | 北京奇艺世纪科技有限公司 | Title subtitle detection method and device and electronic equipment |
CN109151616A (en) * | 2018-08-07 | 2019-01-04 | 石家庄铁道大学 | Video key frame extracting method |
CN109151616B (en) * | 2018-08-07 | 2020-09-08 | 石家庄铁道大学 | Video key frame extraction method |
CN110598622A (en) * | 2019-09-06 | 2019-12-20 | 广州华多网络科技有限公司 | Video subtitle positioning method, electronic device, and computer storage medium |
CN110598622B (en) * | 2019-09-06 | 2022-05-27 | 广州华多网络科技有限公司 | Video subtitle positioning method, electronic device, and computer storage medium |
CN111709342B (en) * | 2020-06-09 | 2023-05-16 | 北京字节跳动网络技术有限公司 | Subtitle segmentation method, device, equipment and storage medium |
CN111709342A (en) * | 2020-06-09 | 2020-09-25 | 北京字节跳动网络技术有限公司 | Subtitle segmentation method, device, equipment and storage medium |
CN112749696A (en) * | 2020-09-01 | 2021-05-04 | 腾讯科技(深圳)有限公司 | Text detection method and device |
CN112749696B (en) * | 2020-09-01 | 2024-07-05 | 腾讯科技(深圳)有限公司 | Text detection method and device |
CN112883970A (en) * | 2021-03-02 | 2021-06-01 | 湖南金烽信息科技有限公司 | Digital identification method based on neural network model |
TWI771991B (en) * | 2021-04-21 | 2022-07-21 | 宏芯科技股份有限公司 | Video image interpolation apparatus and method for adaptive motion-compensated frame interpolation |
US11451740B1 (en) | 2021-04-21 | 2022-09-20 | Terawins, Inc. | Video-image-interpolation apparatus and method for adaptive motion-compensated frame interpolation |
CN113191811B (en) * | 2021-05-10 | 2022-07-01 | 北京顶当互动科技有限公司 | Intelligent advertisement pushing method and device and computer readable storage medium |
CN113191811A (en) * | 2021-05-10 | 2021-07-30 | 武汉埸葵电子商务有限公司 | Intelligent advertisement pushing method and device and computer readable storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107480670A (en) | Method and apparatus for caption extraction | |
US7184100B1 (en) | Method of selecting key-frames from a video sequence | |
US20120163708A1 (en) | Apparatus for and method of generating classifier for detecting specific object in image | |
JP4626886B2 (en) | Method and apparatus for locating and extracting captions in digital images | |
Yang et al. | A framework for improved video text detection and recognition | |
CN109299717B (en) | Method, apparatus, medium, and device for establishing character recognition model and character recognition | |
Jamil et al. | Edge-based features for localization of artificial Urdu text in video images | |
CN107203763B (en) | Character recognition method and device | |
JP4893861B1 (en) | Character string detection apparatus, image processing apparatus, character string detection method, control program, and recording medium | |
Roy et al. | New tampered features for scene and caption text classification in video frame | |
CN112784835A (en) | Method and device for identifying authenticity of circular seal, electronic equipment and storage medium | |
Aung et al. | Automatic license plate detection system for myanmar vehicle license plates | |
CN113435438B (en) | Image and subtitle fused video screen plate extraction and video segmentation method | |
CN108446603B (en) | News title detection method and device | |
Zhang et al. | A novel approach for binarization of overlay text | |
Yang et al. | Caption detection and text recognition in news video | |
Tsai et al. | A comprehensive motion videotext detection localization and extraction method | |
Arai et al. | Text extraction from TV commercial using blob extraction method | |
CN102625028A (en) | Method and apparatus for detecting static logo existing in video | |
CN109558875A (en) | Method, apparatus, terminal and storage medium based on image automatic identification | |
Huang | Automatic video text detection and localization based on coarseness texture | |
Arai et al. | Method for extracting product information from TV commercial | |
Roy et al. | Temporal integration for word-wise caption and scene text identification | |
Liu et al. | Extracting captions in complex background from videos | |
Kumar et al. | Moving text line detection and extraction in TV video frames |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
Application publication date: 20171215 |