CN107124642A - Detection method and system for captions in a continuous moving image - Google Patents

Detection method and system for captions in a continuous moving image

Info

Publication number
CN107124642A
CN107124642A CN201710133750.XA
Authority
CN
China
Prior art keywords
logo
captions
chroma
brightness
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710133750.XA
Other languages
Chinese (zh)
Inventor
姜建德
余横
查林
马琰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Xinxin Microelectronics Technology Co Ltd
Original Assignee
HONGYOU IMAGE TECHNOLOGY (SHANGHAI) Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by HONGYOU IMAGE TECHNOLOGY (SHANGHAI) Co Ltd filed Critical HONGYOU IMAGE TECHNOLOGY (SHANGHAI) Co Ltd
Priority to CN201710133750.XA priority Critical patent/CN107124642A/en
Publication of CN107124642A publication Critical patent/CN107124642A/en
Pending legal-status Critical Current

Links

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/435Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

The invention provides a method and system for detecting captions in a continuous moving image, comprising a detection step: detect the logo parts in the video picture; a chroma and luminance statistics step: collect the luminance and chroma of the detected logo; a frequency statistics step: count how often a logo appears in the regions of the picture where logos occur; and a caption discrimination step: if different logos are detected repeatedly in some region or in a designated check area, and the luminance and chroma of these different logos are similar, the region is considered to contain a caption logo. The method of the invention increases the protection of moving captions, improves the quality of the interpolated image, and avoids breaking up moving captions; moreover, it checks only the preset regions where captions normally appear rather than the full frame, which saves resources and gives high detection efficiency.

Description

Detection method and system for captions in a continuous moving image
Technical field
The present invention relates to the field of video caption processing, and in particular to a method and system for detecting captions in a continuous moving image.
Background art
LCD televisions have now replaced traditional CRT televisions as the mainstream of the market. However, owing to the display characteristics of liquid crystal molecules, motion smearing can be seen when watching television on a liquid crystal display, and if the refresh rate is insufficient some moving pictures also show obvious judder. A common way to reduce judder and smearing is to raise the frame rate of the source; for this frame-rate conversion, current TV chips generally use motion estimation and motion compensation (MEMC). The application of MEMC greatly improves the viewing experience, but the algorithm has limitations in many scenes where the motion search cannot find the correct motion vector, for example complex motion, repetitive structures, and caption logos. In particular, when the picture content is moving but a logo stays still, the motion vector of the logo is easily influenced by the vectors of the moving background and cannot converge to the zero vector.
The current solution is to add detection of static logos and to use the zero vector at logo positions, protecting them during interpolation. A caption logo, however, is as motionless as an ordinary logo yet changes faster over time, so traditional logo protection has difficulty preventing caption logos from being broken up and flying apart during motion estimation and motion compensation.
Summary of the invention
In view of the defects in the prior art, the object of the present invention is to provide a method and system for detecting captions in a continuous moving image.
The method for detecting captions in a continuous moving image provided by the present invention comprises the following steps:
Detection step: detect the logo parts in the video picture;
Chroma and luminance statistics step: collect the luminance and chroma of the detected logo;
Frequency statistics step: count how often a logo appears in the regions of the picture where logos occur;
Caption discrimination step: if different logos are detected repeatedly in some region or in a designated check area, judge the similarity of the luminance and chroma of the successively detected logos; if they are similar, the region is considered to contain a caption logo.
Preferably, the detection step comprises detecting static logos on the basis of local features, specifically including the following steps:
Step A1: divide the image into M × N blocks and detect logos on the basis of the segmented blocks;
Step A2: compute the local feature of each block obtained in step A1; the local feature includes a SIFT, SURF, ORB, or HOG feature, or any local feature algorithm able to describe the distinctiveness of a block;
Step A3: compare the local feature descriptors of the blocks at the same position in two frames and measure the similarity of a block between the two frames by the distance between the descriptors; the similarity criterion includes the absolute difference, and the higher the similarity, the more likely the corresponding block is a logo;
Step A4: if, along the continuous time axis, the difference between the local feature descriptors of the block at the corresponding position in two adjacent frames stays below a set threshold throughout a set period, the detected block is considered a logo (a minimal code sketch is given after this list).
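For illustration only, the Python sketch below mimics steps A1 to A4 on two consecutive grayscale frames held as numpy arrays. The block size, the coarse gradient-orientation histogram standing in for a full SIFT/SURF/ORB/HOG descriptor, and the threshold values are editorial assumptions, not figures prescribed by the invention.

    import numpy as np

    def block_descriptor(block, bins=8):
        # Coarse HOG-like descriptor: magnitude-weighted histogram of gradient orientations.
        gy, gx = np.gradient(block.astype(np.float32))
        mag = np.hypot(gx, gy)
        ang = np.arctan2(gy, gx) % np.pi                  # orientation folded into [0, pi)
        hist, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi), weights=mag)
        return hist / (hist.sum() + 1e-6)                 # normalised descriptor

    def block_differences(prev_frame, cur_frame, block=16):
        # Step A3: absolute difference of per-block descriptors between two frames.
        rows, cols = prev_frame.shape[0] // block, prev_frame.shape[1] // block
        diff = np.zeros((rows, cols), dtype=np.float32)
        for r in range(rows):
            for c in range(cols):
                sl = np.s_[r * block:(r + 1) * block, c * block:(c + 1) * block]
                diff[r, c] = np.abs(block_descriptor(prev_frame[sl])
                                    - block_descriptor(cur_frame[sl])).sum()
        return diff

    def logo_candidates(diff_map, th=0.05):
        # Step A4 (single frame-pair form): blocks whose descriptors barely change are
        # provisional logo blocks; the temporal check extends this over a period of time.
        return diff_map < th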
Preferably, the frequency statistics step comprises: check whether a logo is detected repeatedly in a given region, and judge whether the detected logo is content that keeps changing (count how often the logo appears in the region; repeated occurrence corresponds to a high frequency); if so, a caption is considered detected.
Preferably, the designated check area in the caption discrimination step includes the bottom or top of the screen.
In the caption discrimination step, the luminance and chroma of different logos are compared as follows:
If different logos are detected repeatedly in some region or in a designated check area, compute the difference between the luminance values and between the chroma values of the successively detected logos, recording the chroma difference as diff_C and the luminance difference as diff_Y. If diff_Y < th1 and diff_C < th2, the successively detected logos are considered similar in luminance and chroma, where th1 and th2 are upper-limit thresholds given by the user. If the successively detected logos are similar in luminance and chroma, the region is considered to contain a caption logo.
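For illustration, the similarity test described above can be written as the short sketch below, assuming the mean luminance and chroma of the previously and newly detected logos have already been measured; the numeric limits are placeholders standing in for the user-given thresholds th1 and th2.

    def similar_logo(y_prev, c_prev, y_cur, c_cur, th1=16.0, th2=12.0):
        # diff_Y: luminance difference; diff_C: chroma difference between two detections.
        diff_Y = abs(y_cur - y_prev)
        diff_C = abs(c_cur - c_prev)
        # Both differences must stay under their user-given upper limits.
        return diff_Y < th1 and diff_C < th2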
The system for detecting captions in a continuous moving image provided by the present invention comprises the following modules:
Detection module: detects the logo parts in the video picture;
Chroma and luminance statistics module: collects the luminance and chroma of the detected logo;
Frequency statistics module: counts how often a logo appears in the regions of the picture where logos occur;
Caption discrimination module: determines whether different logos are detected repeatedly in some region or in a designated check area; if the luminance and chroma of the different detected logos are similar, the region is considered to contain a caption logo.
Compared with the prior art, the present invention has the following beneficial effects:
1. The method for detecting captions in a continuous moving image provided by the present invention increases the protection of moving captions, improves the quality of the interpolated image, and avoids breaking up moving captions.
2. The method checks only the preset regions where captions usually appear rather than the full frame, which saves resources and is highly efficient.
Brief description of the drawings
Other features, objects, and advantages of the present invention will become more apparent upon reading the detailed description of the non-limiting embodiments made with reference to the following drawings:
Fig. 1 is a schematic flow diagram of the method provided by the present invention.
Detailed description of the embodiments
The present invention is described in detail below with reference to specific embodiments. The following embodiments will help those skilled in the art to further understand the present invention, but do not limit the invention in any way. It should be noted that a person of ordinary skill in the art can make several changes and improvements without departing from the inventive concept, and these all fall within the scope of protection of the present invention.
The method for detecting captions in a continuous moving image provided by the present invention comprises the following steps:
Step 1: detect the logo parts in the video picture;
Step 2: collect the luminance and chroma of the detected logo;
Step 3: count how often a logo appears in the regions of the picture where logos occur;
Step 4: if different logos are detected repeatedly in some region or in a designated check area, judge the similarity of the luminance and chroma of the successively detected logos; if they are similar, the region is considered to contain a caption logo.
Step 1 comprises detecting static logos on the basis of local features, specifically including the following steps:
Step 1.1: divide the image into M × N blocks and detect logos on the basis of the segmented blocks;
Step 1.2: compute the local feature of each block obtained in step 1.1; the local feature includes a SIFT, SURF, ORB, or HOG feature, or any local feature algorithm able to describe the distinctiveness of a block;
Step 1.3: compare the local feature descriptors of the blocks at the same position in two frames and measure the similarity of a block between the two frames by the distance between the descriptors; the similarity criterion includes the absolute difference, and the higher the similarity, the more likely the corresponding block is a logo;
Step 1.4: if, along the continuous time axis, the difference between the local feature descriptors of the block at the corresponding position in two adjacent frames stays below a set threshold throughout a set period, the detected block is considered a logo (a minimal persistence-check sketch is given after this list).
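The following sketch illustrates one way the temporal condition of step 1.4 could be tracked, reusing the block_differences() map from the earlier sketch; the threshold and the required number of consecutive frames are editorial assumptions rather than values fixed by the invention.

    import numpy as np

    class LogoPersistence:
        # Marks a block as logo only if its descriptor difference stays below `th`
        # for `persist_frames` consecutive frame pairs (step 1.4).
        def __init__(self, grid_shape, th=0.05, persist_frames=30):
            self.th = th
            self.persist_frames = persist_frames
            self.counter = np.zeros(grid_shape, dtype=np.int32)

        def update(self, diff_map):
            below = diff_map < self.th
            # Keep counting while the block stays unchanged, reset as soon as it changes.
            self.counter = np.where(below, self.counter + 1, 0)
            return self.counter >= self.persist_frames    # boolean logo mask per block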
Step 3 comprises: check whether a logo is detected repeatedly in a given region (the statistics may be restricted to regions where captions usually appear, for example the bottom of the screen), and judge whether the detected logo is content that keeps changing; if so, a caption is considered detected, and the frequency of occurrence of the logo in that region is counted (a minimal sketch follows).
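As a purely illustrative sketch, the frequency check of step 3 might be expressed as follows, assuming the per-frame detections inside the designated region are summarised as one mean descriptor each (or None when nothing is found); the required counts and the change threshold are editorial assumptions.

    import numpy as np

    def caption_frequency(region_detections, min_hits=10, min_changes=3, change_th=0.1):
        # region_detections: per-frame mean descriptor of the logo found inside the
        # designated region (e.g. the bottom of the screen), or None for no detection.
        hits = [d for d in region_detections if d is not None]
        changes = sum(
            np.abs(a - b).sum() > change_th       # content differs between detections
            for a, b in zip(hits, hits[1:])
        )
        # A logo that keeps reappearing while its content keeps changing is a caption.
        return len(hits) >= min_hits and changes >= min_changes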
The designated check area in step 4 includes the bottom or top of the screen.
In step 4, the luminance and chroma of different logos are compared as follows:
If different logos are detected repeatedly in some region or in a designated check area, compute the difference between the luminance values and between the chroma values of the successively detected logos, recording the chroma difference as diff_C and the luminance difference as diff_Y. If diff_Y < th1 and diff_C < th2, the successively detected logos are considered similar in luminance and chroma, where th1 and th2 are upper-limit thresholds given by the user. If the successively detected logos are similar in luminance and chroma, the region is considered to contain a caption logo.
The system for detecting captions in a continuous moving image provided by the present invention comprises the following modules:
Detection module: detects the logo parts in the video picture;
Chroma and luminance statistics module: collects the luminance and chroma of the detected logo;
Frequency statistics module: counts how often a logo appears in the regions of the picture where logos occur;
Caption discrimination module: determines whether different logos are detected repeatedly in some region or in a designated check area; if the luminance and chroma of the different detected logos are similar, the region is considered to contain a caption logo.
The technical solution of the present invention is explained in more detail below with reference to a specific embodiment.
An embodiment using the method of the present invention comprises the following steps:
Step S1: detect logos with a local feature algorithm; divide the image into blocks and compute the HOG feature of each block;
Step S2: compute the difference of the HOG features of the block at the same position in the previous and current frames: diff = abs(HOG_pre - HOG_cur); if diff < th, set Logo_Flag = 1;
Step S3: along the continuous time axis, a block whose Logo_Flag stays 1 for a time longer than th is considered a logo;
Step S4: compute the average luminance and chroma of all pixels of the current frame that are labeled as logo;
Step S5: set th in step S3 to a smaller value; if, within a period time < P (with P > th), a logo is detected more than N times in the designated region and the luminance and chroma differences between successive logos are small, i.e. diff_Y < th_Y and diff_C < th_C, the region is considered to contain a moving caption logo;
Step S6: the conclusion of step S5 is used to dynamically adjust th in step S3: when a moving caption is believed to be present, th can be set small enough to detect the logo quickly and protect the caption; when there is no caption, th is set larger to avoid false protection (an end-to-end sketch of steps S1 to S6 is given after this list).
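The sketch below strings steps S1 to S6 together into a single loop for illustration, reusing block_differences() and similar_logo() from the earlier sketches; the designated region, the threshold values, and the adaptive rule for th are editorial assumptions and not parameters fixed by the embodiment.

    import numpy as np

    def mean_luma_chroma(frame_yuv, logo_blocks, block=16):
        # Step S4: average Y and chroma over all pixels belonging to logo blocks.
        pixel_mask = np.kron(logo_blocks.astype(np.uint8),
                             np.ones((block, block), np.uint8)).astype(bool)
        ph, pw = pixel_mask.shape
        return (float(frame_yuv[:ph, :pw, 0][pixel_mask].mean()),
                float(frame_yuv[:ph, :pw, 1][pixel_mask].mean()))

    def detect_moving_caption(frames_yuv, region_rows, n_required=5):
        th = 0.05                                         # descriptor-difference threshold (S2/S3)
        prev_yc, hits, caption_present = None, 0, False
        for prev, cur in zip(frames_yuv, frames_yuv[1:]):
            diff = block_differences(prev[..., 0], cur[..., 0])   # S1/S2 on the luma plane
            logo_blocks = diff < th                                # S2: Logo_Flag per block
            if logo_blocks[region_rows, :].any():                  # restrict to caption region
                y, c = mean_luma_chroma(cur, logo_blocks)          # S4
                if prev_yc is None or similar_logo(prev_yc[0], prev_yc[1], y, c):
                    hits += 1                                      # S5: similar logo seen again
                prev_yc = (y, c)
            caption_present = caption_present or hits >= n_required
            # S6: relax the threshold while a moving caption is believed present.
            th = 0.08 if caption_present else 0.05
        return caption_present

Here region_rows could be, for example, a slice covering the bottom quarter of the block grid, matching the preferred check area at the bottom of the screen.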
Specific embodiments of the present invention have been described above. It should be understood that the present invention is not limited to the particular embodiments described; a person skilled in the art can make various changes or modifications within the scope of the claims without affecting the substance of the present invention. Where no conflict arises, the features of the embodiments of the present application may be combined with one another in any manner.

Claims (5)

1. A method for detecting captions in a continuous moving image, characterized by comprising the following steps:
a detection step: detecting the logo parts in the video picture;
a chroma and luminance statistics step: collecting the luminance and chroma of the detected logo;
a frequency statistics step: counting how often a logo appears in the regions of the picture where logos occur;
a caption discrimination step: if different logos are detected repeatedly in some region or in a designated check area, judging the similarity of the luminance and chroma of the successively detected logos, and if they are similar, considering the region to contain a caption logo.
2. The method for detecting captions in a continuous moving image according to claim 1, characterized in that the detection step comprises detecting static logos on the basis of local features, specifically including the following steps:
Step A1: dividing the image into M × N blocks and detecting logos on the basis of the segmented blocks;
Step A2: computing the local feature of each block obtained in step A1, the local feature including a SIFT, SURF, ORB, or HOG feature, or any local feature algorithm able to describe the distinctiveness of a block;
Step A3: comparing the local feature descriptors of the blocks at the same position in two frames and measuring the similarity of a block between the two frames by the distance between the descriptors, the similarity criterion including the absolute difference, a higher similarity meaning the corresponding block is more likely a logo;
Step A4: if, along the continuous time axis, the difference between the local feature descriptors of the block at the corresponding position in two adjacent frames stays below a set threshold throughout a set period, considering the detected block a logo.
3. The method for detecting captions in a continuous moving image according to claim 1, characterized in that the frequency statistics step comprises: checking whether a logo is detected repeatedly in a given region, and judging whether the detected logo is content that keeps changing; if so, considering a caption detected.
4. The method for detecting captions in a continuous moving image according to claim 1, characterized in that the designated check area in the caption discrimination step includes the bottom or top of the screen.
In the caption discrimination step, the luminance and chroma of different logos are compared as follows:
if different logos are detected repeatedly in some region or in a designated check area, the difference between the luminance values and between the chroma values of the successively detected logos is computed, the chroma difference being recorded as diff_C and the luminance difference as diff_Y; if diff_Y < th1 and diff_C < th2, the successively detected logos are considered similar in luminance and chroma, where th1 and th2 are upper-limit thresholds given by the user; if the successively detected logos are similar in luminance and chroma, the region is considered to contain a caption logo.
5. A system for detecting captions in a continuous moving image, characterized by comprising the following modules:
a detection module for detecting the logo parts in the video picture;
a chroma and luminance statistics module for collecting the luminance and chroma of the detected logo;
a frequency statistics module for counting how often a logo appears in the regions of the picture where logos occur;
a caption discrimination module for determining whether different logos are detected repeatedly in some region or in a designated check area; if the luminance and chroma of the different detected logos are similar, the region is considered to contain a caption logo.
CN201710133750.XA 2017-03-08 2017-03-08 The detection method and system of captions in continuous moving image Pending CN107124642A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710133750.XA CN107124642A (en) 2017-03-08 2017-03-08 The detection method and system of captions in continuous moving image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710133750.XA CN107124642A (en) 2017-03-08 2017-03-08 The detection method and system of captions in continuous moving image

Publications (1)

Publication Number Publication Date
CN107124642A true CN107124642A (en) 2017-09-01

Family

ID=59717391

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710133750.XA Pending CN107124642A (en) 2017-03-08 2017-03-08 The detection method and system of captions in continuous moving image

Country Status (1)

Country Link
CN (1) CN107124642A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103870795A (en) * 2012-12-13 2014-06-18 北京捷成世纪科技股份有限公司 Automatic detection method and device of video rolling subtitle
CN103607635A (en) * 2013-10-08 2014-02-26 十分(北京)信息科技有限公司 Method, device and terminal for caption identification
US20160189380A1 (en) * 2014-12-24 2016-06-30 Samsung Electronics Co., Ltd. Method and apparatus for processing image
CN104602096A (en) * 2014-12-26 2015-05-06 北京奇艺世纪科技有限公司 Detecting method and device for video subtitle area
CN104780362A (en) * 2015-04-24 2015-07-15 宏祐图像科技(上海)有限公司 Video static logo detecting method based on local feature description
CN104980765A (en) * 2015-06-15 2015-10-14 北京博威康技术有限公司 Multi-channel plain text frame monitoring method

Similar Documents

Publication Publication Date Title
US8144255B2 (en) Still subtitle detection apparatus and image processing method therefor
CN105282475B (en) Crawl detection and compensation method and system
US10540791B2 (en) Image processing apparatus, and image processing method for performing scaling processing based on image characteristics
US8355079B2 (en) Temporally consistent caption detection on videos using a 3D spatiotemporal method
JP2006072359A (en) Method of controlling display apparatus
CA2920834A1 (en) Legibility enhancement for a logo, text or other region of interest in video
JP2004080252A (en) Video display unit and its method
TWI504248B (en) Image processing apparatus and image processing method
CN102044067A (en) Method and system for reducing ringing artifacts of image deconvolution
CN102724458A (en) Video picture full-screen display subtitle processing method and video terminal
CN104010114B (en) Video denoising method and device
US20120182311A1 (en) Image displaying apparatus
US9516260B2 (en) Video processing method and apparatus
CN102254544A (en) Method for automatically adjusting video signal proportion and television using same
KR20140046370A (en) Method and apparatus for detecting a television channel change event
CN102629969A (en) Smear eliminating method during shooting of plane objects
CN107124642A (en) The detection method and system of captions in continuous moving image
WO2016199418A1 (en) Frame rate conversion system
CN108074248B (en) OSD automatic detection method and device based on image content
US9715736B2 (en) Method and apparatus to detect artificial edges in images
JP5188272B2 (en) Video processing apparatus and video display apparatus
JP2008147826A (en) Black stripe region detection circuit
US8879000B2 (en) Method and system for detecting analog noise in the presence of mosquito noise
US20130201404A1 (en) Image processing method
US8233085B1 (en) Method and system for interpolating a pixel value of a pixel located at an on-screen display

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20190712

Address after: 266000 Shandong Province, Qingdao city Laoshan District Songling Road No. 399

Applicant after: Qingdao Xinxin Microelectronics Technology Co., Ltd.

Address before: Room 316, Building 88 Chenhui Road, China (Shanghai) Free Trade Pilot Area, Pudong New Area, Shanghai, 201203

Applicant before: HONGYOU IMAGE TECHNOLOGY (SHANGHAI) CO., LTD.

TA01 Transfer of patent application right
RJ01 Rejection of invention patent application after publication

Application publication date: 20170901

RJ01 Rejection of invention patent application after publication