CN102014267A - Caption region detecting method - Google Patents

Caption region detecting method

Info

Publication number
CN102014267A
Authority
CN
China
Prior art keywords
pixel
captions
scan line
subtitle region
detecting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN200910173122XA
Other languages
Chinese (zh)
Other versions
CN102014267B (en)
Inventor
陈滢如
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Himax Technologies Ltd
Original Assignee
Himax Technologies Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Himax Technologies Ltd filed Critical Himax Technologies Ltd
Priority to CN 200910173122 priority Critical patent/CN102014267B/en
Publication of CN102014267A publication Critical patent/CN102014267A/en
Application granted granted Critical
Publication of CN102014267B publication Critical patent/CN102014267B/en
Expired - Fee Related
Anticipated expiration

Landscapes

  • Television Systems (AREA)

Abstract

The invention relates to a caption (subtitle) region detecting method comprising the following steps: first, performing caption pixel detection on each target pixel of a current scan line to determine whether each pixel of the current scan line is a caption pixel; counting the number (spf) of caption pixels of a previous frame and, when that number is larger than a preset value, setting the caption pixels of the current scan line as static pixels; and counting the number (spl) of caption pixels of a previous scan line and, when that number is larger than a preset value, adjusting some non-static pixels of the current scan line to static pixels according to the previous frame and a next frame.

Description

Subtitle region detecting method
Technical field
The present invention relates to de-interlacing conversion, and in particular to a subtitle region detecting method used in performing de-interlacing.
Background art
Television broadcast video signals generally use an interlaced format, such as NTSC, PAL or SECAM, in which odd fields and even fields are alternately displayed on the television screen in turn, relying on persistence of vision to present the video content of a frame. An interlaced video signal needs only a low bandwidth to deliver acceptable video quality; its drawbacks, however, are reduced vertical resolution and flicker along lines or in regions. Computer display signals, in contrast, use a non-interlaced (progressive) format, in which the video content of a frame is presented directly on the display.
To present an interlaced video signal on a progressive-format display (for example, a computer display), the interlaced signal must first be converted to a non-interlaced (progressive) signal; this format conversion is called de-interlacing or progressive line doubling. Through de-interlacing, the original odd field and even field are combined to produce a frame.
De-interlacing methods for video signals fall into two categories: spatial conversion and temporal conversion. In spatial conversion, only pixels of the same field are used to produce new pixels, so it is also commonly called intra-field conversion. In temporal conversion, pixels of adjacent fields are used to produce new pixels, so it is also commonly called inter-field conversion. Typically, static regions are de-interlaced with temporal (inter-field) interpolation, while moving regions are de-interlaced with spatial (intra-field) interpolation.
In general, motion detection relies on the difference between corresponding pixels of same-parity fields: a large difference indicates motion, and a small difference indicates no motion. However, when the background of a subtitle region is moving, the subtitle region may be treated as a moving region and de-interlaced accordingly, causing the subtitle region to flicker.
Since conventional techniques cannot correctly identify subtitle regions and thus cause subtitle flicker, a novel motion detection and de-interlacing mechanism is needed to correctly detect subtitle regions and facilitate de-interlacing.
Summary of the invention
In view of the above, an embodiment of the invention proposes a subtitle region detecting method that detects subtitle regions correctly, so that de-interlacing can be performed accordingly and flicker in subtitle regions is reduced.
According to an embodiment of the invention, caption pixel detection is first performed on each target pixel of the current scan line, to determine whether each pixel of the current scan line is a caption pixel. The number (spf) of caption pixels of the previous frame is counted; when it is greater than a preset value, the caption pixels of the current scan line are set as static pixels. The number (spl) of caption pixels of the previous scan line of the current frame is counted; when it is greater than a preset value, some non-static pixels of the current scan line are adjusted to static pixels according to the previous frame and the next frame.
According to an embodiment, the caption pixel detecting step comprises the following steps. First, motion detection is performed on the target pixel. The luminance values of the neighboring pixels of the target pixel are examined to determine whether a particularly large or particularly small luminance value is present. Next, it is checked whether, in the previous frame, the state of the neighboring pixels corresponding to the target pixel is static. Finally, it is detected whether the target pixel lies on a caption boundary.
Description of drawings
Fig. 1 illustrates part of the scan lines of three successive frames.
Fig. 2 shows a flow chart of the caption pixel detecting method.
Fig. 3 shows a flow chart of the subtitle region detecting method according to an embodiment of the invention.
Fig. 4 illustrates a scan line with captions and the corresponding previous-frame and next-frame scan lines.
[Main element symbol description]
21-24: caption pixel detecting steps
31-38: subtitle region detecting steps
PA, PC, PE: scan lines of the previous frame
B, D: scan lines of the current frame
NA, NC, NE: scan lines of the next frame
pa, pb, pc, pd, pe: pixels of the previous frame
b, d: pixels of the current frame
na, nc, ne: pixels of the next frame
Embodiment
Fig. 1 illustrates part of the scan lines of three successive frames (that is, the previous frame, the current frame and the next frame). The previous frame and the next frame are odd fields, and the current frame is an even field. The previous frame has scan lines PA, PC and PE; the current frame has scan lines B and D; and the next frame has scan lines NA, NC and NE.
Fig. 2 shows a flow chart of the caption (subtitle) pixel detecting method, which examines the current pixel t to be de-interlaced (also called the target pixel) to determine whether it may be a caption pixel. Although this embodiment examines the target pixel t with steps 21 to 24 in sequence, the order of these steps may be changed; moreover, some steps may be omitted and other additional steps may be added.
First, in step 21, motion detection is performed on the target pixel t. Since captions generally remain on screen for several seconds so that the viewer can read them, the difference between at least two same-parity fields in the caption region is particularly small when motion detection is performed there. In this embodiment, this same-parity difference is measured as the sum of absolute differences (SAD) between a plurality of pixels of the previous frame corresponding to the target pixel t (for example, the pixels in a window containing pixel pc) and a plurality of pixels of the next frame (for example, the pixels in a window containing pixel nc). The SAD value can be expressed by the following formula:
SAD = Σ_{i=1}^{k} |nc_i − pc_i|
where nc_i denotes the i-th pixel of the window on scan line NC of the next frame, pc_i denotes the i-th pixel of the window on scan line PC of the previous frame, and k is the number of pixels in the window.
If the sum of absolute differences SAD is less than a preset value T, the state of the target pixel t is marked as static and the subsequent steps continue; if the SAD is greater than the preset value T, the pixel is marked as motion, the flow of Fig. 2 ends, and the target pixel t is determined not to be a caption pixel.
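As an illustration of step 21, the following Python sketch computes the SAD over a small window around the target pixel and applies the threshold T; the function name, window width and threshold value are assumptions made here for clarity and are not specified by the patent.

```python
def is_static(prev_line, next_line, x, half_window=3, T=64):
    """Step 21 (sketch): compute the SAD between the previous-frame and
    next-frame windows centred on column x and compare it with threshold T.
    prev_line / next_line are sequences of luminance values for the
    previous-frame scan line PC and the next-frame scan line NC."""
    lo = max(0, x - half_window)
    hi = min(len(prev_line), x + half_window + 1)
    sad = sum(abs(next_line[i] - prev_line[i]) for i in range(lo, hi))
    # SAD < T -> mark as static and continue with step 22;
    # otherwise -> mark as motion, not a caption pixel.
    return sad < T
```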
Next, in step 22, the luminance values of the neighboring pixels of the target pixel t (for example, the pixel b above and the pixel d below the target pixel t in the current frame, and the pixel pc in the previous frame corresponding to the target pixel t) are examined to detect whether a particularly large or particularly small luminance value is present. To make captions conspicuous and distinguish them from the background, captions usually use a dark (particularly low luminance) outline around light (particularly high luminance) strokes to strengthen the contrast. Therefore, when step 22 detects a particularly large or particularly small luminance value, the target pixel t may lie in a subtitle region and the subsequent steps continue; otherwise, the flow of Fig. 2 ends and the target pixel t is determined not to be a caption pixel. In this embodiment, step 22 checks whether any of the pixels b, d and pc has a particularly large luminance value (greater than a preset value T2) or a particularly small luminance value (less than a preset value T3); that is, b>T2 or d>T2 or pc>T2 or b<T3 or d<T3 or pc<T3.
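A minimal sketch of the luminance test of step 22 follows; the thresholds T2 and T3 are illustrative placeholders, since the patent leaves their values open.

```python
def has_extreme_luma(b, d, pc, T2=200, T3=40):
    """Step 22 (sketch): b and d are the pixels above and below the target
    pixel in the current frame, pc is the corresponding previous-frame pixel.
    Return True when any of them is particularly bright (>T2) or dark (<T3)."""
    return any(v > T2 or v < T3 for v in (b, d, pc))
```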
However, when a pixel falls within a smooth region (that is, a region of roughly uniform luminance), steps 21 and 22 above are not sufficient to identify captions; many caption pixels may even be falsely detected. Therefore, this embodiment continues with steps 23 and 24 for further checking.
In step 23, it is checked whether, in the previous frame, the state of the neighboring pixels corresponding to the target pixel t (for example, pixels pb and pd) is static. If so, the subsequent steps continue; otherwise, the flow of Fig. 2 ends and the target pixel t is determined not to be a caption pixel. In this embodiment, the check is whether the states of pixels pb and pd are static.
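Step 23 reduces to a simple look-up of the previous frame's motion states; a short sketch, assuming the states of pb and pd were stored when the previous frame was processed:

```python
def prev_frame_neighbors_static(state_pb, state_pd):
    """Step 23 (sketch): the target pixel remains a caption candidate only
    if both previous-frame neighbours pb and pd were marked static."""
    return state_pb == "static" and state_pd == "static"
```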
Finally, in step 24, it is detected whether the target pixel t lies on a caption boundary. A caption boundary generally exhibits one of the following two situations: (a) the (absolute) difference between the scan lines above and below the target pixel t (for example, scan line B and scan line D) is very large; or (b) the (absolute) difference between the scan lines above and below the target pixel t is very small, but the (absolute) difference between these scan lines and the previous-frame scan line corresponding to the target pixel t (for example, scan line PC) is very large. If situation (a) or (b) holds, the target pixel is judged to be a caption pixel; otherwise, the flow of Fig. 2 ends and the target pixel t is determined not to be a caption pixel. In this embodiment, situation (a) is checked by judging whether the absolute difference between the window average of scan line B (containing pixel b) and the window average of scan line D (containing pixel d) is greater than a preset value T4, that is, |B−D|>T4. Situation (b) is checked by judging whether the absolute difference between the window average of scan line B (containing pixel b) and the window average of scan line D (containing pixel d) is less than a preset value T5 (that is, |B−D|<T5), and whether the absolute difference between scan line B or scan line D and scan line PC (containing pixel pc) is greater than the preset value T4 (that is, |B−PC|>T4 or |D−PC|>T4).
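The two boundary conditions of step 24 can be sketched as follows; window_avg is a hypothetical helper that averages a small window of a scan line around the target column, and the values of T4 and T5 are illustrative only.

```python
def window_avg(line, x, half_window=3):
    """Average luminance of a small window of a scan line centred at column x."""
    lo = max(0, x - half_window)
    hi = min(len(line), x + half_window + 1)
    return sum(line[lo:hi]) / (hi - lo)

def on_caption_boundary(line_b, line_d, line_pc, x, T4=60, T5=10):
    """Step 24 (sketch): condition (a) |B - D| > T4, or
    condition (b) |B - D| < T5 and (|B - PC| > T4 or |D - PC| > T4)."""
    B = window_avg(line_b, x)
    D = window_avg(line_d, x)
    PC = window_avg(line_pc, x)
    cond_a = abs(B - D) > T4
    cond_b = abs(B - D) < T5 and (abs(B - PC) > T4 or abs(D - PC) > T4)
    return cond_a or cond_b
```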
Fig. 3 shows a flow chart of the subtitle region detecting method according to an embodiment of the invention; the subsequent de-interlacing (not shown in the flow) can be performed according to the motion detection results it produces. First, in step 31, caption pixel detection is performed on each target pixel of the current scan line in turn using the flow of Fig. 2, to determine whether each pixel of this scan line is a caption pixel.
Next, in step 32, the number of caption pixels of the previous frame (subtitle pixels in previous frame, spf) is counted. Because the same captions appear continuously across many frames, the result from the previous frame can indicate whether the current frame contains captions. In general, when the number of caption pixels (spf) in a frame is large enough, captions are likely to actually be present.
When the number (spf) of caption pixels of the previous frame is greater than a preset value T5 (step 33), the caption pixels of the current scan line are set as static pixels (step 34); otherwise, de-interlacing is performed according to the result of ordinary motion detection (step 35). For example, static regions are de-interlaced with temporal (inter-field) interpolation and moving regions with spatial (intra-field) interpolation. Then, in step 36, the number of caption pixels of the previous scan line of the current frame (subtitle pixels in previous scan line, spl) is counted.
If the number (spl) of caption pixels of the previous scan line is greater than a preset value T6 (step 37), the current scan line is a scan line with captions; otherwise, de-interlacing is performed according to the result of ordinary motion detection (step 35). According to this embodiment, whether the current frame contains captions can be learned from data already computed for the previous frame (for example, spf); to save the storage needed for additional bookkeeping, the position of the subtitle region is then derived from data of the current frame (for example, spl).
Among the pixels of a scan line judged to contain captions, some pixels are static and some are not. If the absolute difference between the previous-frame pixel and the next-frame pixel corresponding to a non-static pixel is less than a preset value T7, step 38 adjusts that pixel to a static pixel. Fig. 4 illustrates a scan line with captions and the corresponding previous-frame and next-frame scan lines. In this example, the third pixel t3 of the scan line with captions is a non-static pixel; its corresponding previous-frame pixel is pc3 and its corresponding next-frame pixel is nc3. If the absolute difference between pixel nc3 and pixel pc3 is less than the preset value T7 (that is, |nc3−pc3|<T7), pixel t3 is adjusted to a static pixel.
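To tie steps 32 to 38 together, the following sketch aggregates the per-pixel results of Fig. 2 into a scan-line decision; the data layout, the way spf is carried over from the previous frame, and the thresholds T5, T6 and T7 are illustrative assumptions rather than details fixed by the patent.

```python
def classify_scan_line(line_flags, spf, spl, prev_line, next_line,
                       T5=500, T6=20, T7=16):
    """Steps 32-38 (sketch). line_flags[i] is True when pixel i of the
    current scan line was detected as a caption pixel (Fig. 2); prev_line
    and next_line hold the corresponding previous/next-frame luminance.
    Returns a list of booleans: True where the pixel is treated as static."""
    static = [False] * len(line_flags)

    # Steps 33-34: if the previous frame had enough caption pixels,
    # the caption pixels of this scan line are set as static pixels.
    if spf > T5:
        for i, is_caption in enumerate(line_flags):
            if is_caption:
                static[i] = True

    # Steps 37-38: if the previous scan line had enough caption pixels,
    # this is a caption scan line; non-static pixels whose previous/next
    # frame difference is small are also adjusted to static.
    if spl > T6:
        for i in range(len(line_flags)):
            if not static[i] and abs(next_line[i] - prev_line[i]) < T7:
                static[i] = True

    return static
```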
According to this embodiment, the subtitle region and its neighboring area are judged to be a static region, so that motion detection is not affected by background motion, and this static region is de-interlaced with temporal (inter-field) interpolation.
The above are only preferred embodiments of the present invention and are not intended to limit the scope of the appended claims; any equivalent change or modification made without departing from the spirit disclosed by the invention shall fall within the scope of the appended claims.

Claims (12)

1. A subtitle region detecting method, comprising:
performing caption pixel detection on each target pixel of a current scan line, to determine whether each pixel of the current scan line is a caption pixel;
counting a number (spf) of caption pixels of a previous frame;
when the number (spf) of caption pixels of the previous frame is greater than a preset value, setting the caption pixels of the current scan line as static pixels;
counting a number (spl) of caption pixels of a previous scan line of a current frame; and
when the number (spl) of caption pixels of the previous scan line is greater than a preset value, adjusting some non-static pixels of the current scan line to static pixels according to the previous frame and a next frame.
2. The subtitle region detecting method according to claim 1, wherein the caption pixel detecting step comprises the following steps:
performing motion detection on the target pixel; and
detecting luminance values of neighboring pixels of the target pixel, to determine whether a particularly large or particularly small luminance value is present.
3. The subtitle region detecting method according to claim 2, wherein the motion detection step of the target pixel comprises:
measuring a sum of absolute differences (SAD) between a plurality of pixels of the previous frame and a plurality of pixels of the next frame corresponding to the target pixel;
wherein, if the sum of absolute differences (SAD) is less than a preset value, the state of the target pixel is marked as static; otherwise, it is marked as motion.
4. The subtitle region detecting method according to claim 2, wherein the neighboring pixels comprise the pixel above and the pixel below the target pixel in the current frame, and the pixel in the previous frame corresponding to the target pixel.
5. The subtitle region detecting method according to claim 2, wherein the caption pixel detecting step further comprises:
checking whether, in the previous frame, the state of the neighboring pixels corresponding to the target pixel is static.
6. The subtitle region detecting method according to claim 5, wherein the caption pixel detecting step further comprises:
detecting whether the target pixel lies on a caption boundary.
7. The subtitle region detecting method according to claim 6, wherein the caption boundary detecting step comprises:
determining whether the absolute difference between the window average of the scan line above the current scan line and the window average of the scan line below the current scan line is greater than a preset value.
8. The subtitle region detecting method according to claim 6, wherein the caption boundary detecting step comprises:
determining whether the absolute difference between the window average of the scan line above the current scan line and the window average of the scan line below the current scan line is less than a preset value, and whether the absolute difference between the scan line above or the scan line below and the corresponding scan line of the previous frame is greater than a preset value.
9. The subtitle region detecting method according to claim 1, wherein the adjusting and setting of the static pixel is performed if the absolute difference between the previous-frame pixel and the next-frame pixel corresponding to a non-static pixel of the current scan line is less than a preset value.
10. The subtitle region detecting method according to claim 1, wherein, when the number (spf) of caption pixels in the previous frame is less than the preset value, the current scan line is judged not to be a subtitle region, and static regions are de-interlaced with inter-field interpolation or moving regions are de-interlaced with intra-field interpolation.
11. The subtitle region detecting method according to claim 1, wherein, when the number (spl) of caption pixels of the previous scan line of the current frame is less than the preset value, the current scan line is judged not to be a subtitle region, and static regions are de-interlaced with inter-field interpolation or moving regions are de-interlaced with intra-field interpolation.
12. The subtitle region detecting method according to claim 1, wherein, after the adjusting and setting step of the static pixel, de-interlacing of the static region is performed with inter-field interpolation.
CN 200910173122 2009-09-07 2009-09-07 Caption region detecting method Expired - Fee Related CN102014267B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 200910173122 CN102014267B (en) 2009-09-07 2009-09-07 Caption region detecting method

Publications (2)

Publication Number Publication Date
CN102014267A true CN102014267A (en) 2011-04-13
CN102014267B CN102014267B (en) 2013-01-09

Family

ID=43844256

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200910173122 Expired - Fee Related CN102014267B (en) 2009-09-07 2009-09-07 Caption region detecting method

Country Status (1)

Country Link
CN (1) CN102014267B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1103526A (en) * 1993-12-04 1995-06-07 行健电讯股份有限公司 Method and system for overlapping text on live broadcast of satellite
CN1170309A (en) * 1996-05-03 1998-01-14 三星电子株式会社 Closed-caption broadcasting and receiving method suitable for syllable characters
CN1176557A (en) * 1996-09-06 1998-03-18 三星电子株式会社 Caption signal broadcasting method for audience selecting caption broad cast
TWI255140B (en) * 2004-11-04 2006-05-11 Himax Tech Inc Caption detection and compensation for interlaced image
US20070030384A1 (en) * 2001-01-11 2007-02-08 Jaldi Semiconductor Corporation A system and method for detecting a non-video source in video signals

Also Published As

Publication number Publication date
CN102014267B (en) 2013-01-09


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20130109
