CN101867736B - De-interlacing device and de-interlacing method and dynamic title compensator - Google Patents


Info

Publication number
CN101867736B
CN101867736B · CN200910132831A
Authority
CN
China
Prior art keywords
pixel, motion, vector, caption, compensation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN200910132831A
Other languages
Chinese (zh)
Other versions
CN101867736A (en)
Inventor
苏伟祺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Novatek Microelectronics Corp
Original Assignee
Novatek Microelectronics Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Novatek Microelectronics Corp filed Critical Novatek Microelectronics Corp
Priority to CN200910132831A priority Critical patent/CN101867736B/en
Publication of CN101867736A publication Critical patent/CN101867736A/en
Application granted granted Critical
Publication of CN101867736B publication Critical patent/CN101867736B/en

Abstract

The invention relates to a de-interlacing device, a de-interlacing method and a dynamic title compensator. The de-interlacing method comprises the following steps: obtaining de-interlaced pixel data of a target pixel according to a current field and a plurality of associated fields; defining a caption area of the current field according to these fields and obtaining a corresponding region motion vector and region reliability; when the target pixel belongs to the caption area, performing motion compensation according to at least part of the fields and the region motion vector to obtain compensated pixel data of the target pixel, and judging whether the target pixel is a foreground pixel or a background pixel according to the region motion vector and the region reliability so as to output a compensation selection signal; and outputting either the de-interlaced pixel data or the compensated pixel data as corrected pixel data of the target pixel according to the compensation selection signal.

Description

De-interlacing device and method, and dynamic title compensator
Technical field
The present invention relates to a de-interlacing device and method and a dynamic title compensator, and more particularly to a de-interlacing device and method and a dynamic title compensator capable of compensating moving captions.
Background art
Owing to limits on processor speed and bandwidth, present-day video broadcast systems transmit interlaced video signals instead of traditional progressive video signals. However, modern displays such as LCDs and plasma displays support only progressive scanning, so a de-interlacing function is required to convert interlaced video signals into progressive ones.
Conventional de-interlacers, however, usually apply the same de-interlacing operation to the whole picture, for example intra-field interpolation, without performing motion detection or motion compensation for captions that move within the picture. Because captions usually contain high-frequency components, applying intra-field interpolation to them not only lowers the resolution but also produces visual artifacts such as flicker, line jitter and raggedness.
Summary of the invention
An object of the invention is to provide a de-interlacing device and method and a dynamic title compensator that define a caption area from a plurality of associated fields and perform motion compensation on the pixels inside the caption area to obtain corrected pixel data.
According to an aspect of the present invention, a de-interlacing device is provided, comprising a de-interlacer, a dynamic title compensator and a multiplexer. The de-interlacer obtains de-interlaced pixel data of a target pixel according to a current field and a plurality of associated fields. The dynamic title compensator defines a caption area of the current field according to these fields and obtains a corresponding region motion vector and region reliability. When the target pixel belongs to the caption area, the dynamic title compensator performs motion compensation according to at least part of the fields and the region motion vector to obtain compensated pixel data of the target pixel, and judges whether the target pixel is a foreground pixel or a background pixel according to the region motion vector and the region reliability so as to output a compensation selection signal. Controlled by the compensation selection signal, the multiplexer outputs either the de-interlaced pixel data or the compensated pixel data as corrected pixel data of the target pixel.
According to a second aspect of the invention, a dynamic title compensator is provided, comprising a caption-area defining block, a foreground/background judging block and a motion-compensation block. The caption-area defining block estimates, according to a current field and a plurality of associated fields, the motion vector of each pixel on a number of detection lines of the current field and the gray-level turnover count of each detection line, decides whether each detection line is a caption line, and defines a caption area from the caption lines to obtain a corresponding region motion vector and region reliability. The foreground/background judging block judges whether a target pixel is a foreground pixel or a background pixel according to the motion vector of the target pixel, the region motion vector and the region reliability, so as to output a compensation selection signal. When the target pixel belongs to the caption area, the motion-compensation block performs motion compensation according to at least part of the fields and the region motion vector, and judges whether the detection line containing the target pixel is a pull-down caption line, to obtain compensated pixel data of the target pixel.
According to a third aspect of the invention, a de-interlacing method is provided, comprising the following steps. De-interlaced pixel data of a target pixel are obtained according to a current field and a plurality of associated fields. A caption area of the current field is defined according to these fields, and a corresponding region motion vector and region reliability are obtained. When the target pixel belongs to the caption area, motion compensation is performed according to at least part of the fields and the region motion vector to obtain compensated pixel data of the target pixel, and the target pixel is judged to be a foreground pixel or a background pixel according to the region motion vector and the region reliability so as to output a compensation selection signal. According to the compensation selection signal, either the de-interlaced pixel data or the compensated pixel data are output as corrected pixel data of the target pixel.
Description of drawings
In order that the above content of the present invention may be more readily understood, a preferred embodiment is described in detail below in conjunction with the accompanying drawings, in which:
Fig. 1 shows a block diagram of the de-interlacing device according to the preferred embodiment of the present invention.
Fig. 2 shows a block diagram of the dynamic title compensator according to the preferred embodiment of the present invention.
Fig. 3A and Fig. 3B show schematic diagrams of motion estimation according to the preferred embodiment of the present invention.
Fig. 4A shows a schematic diagram of an example of caption-area detection according to the preferred embodiment of the present invention.
Fig. 4B shows a schematic diagram of another example of caption-area detection according to the preferred embodiment of the present invention.
Fig. 5 shows a schematic diagram of an example of motion compensation according to the preferred embodiment of the present invention.
Fig. 6 shows a flowchart of the de-interlacing method according to the preferred embodiment of the present invention.
Fig. 7 shows a detailed flowchart of step S610 of the de-interlacing method according to the preferred embodiment of the present invention.
Embodiment
The present invention provides a de-interlacing device and method and a dynamic title compensator. A plurality of associated fields are used to judge whether each of a number of detection lines (detecting lines) is a caption line; a caption area is defined from the characteristics of the caption lines; motion compensation is then applied to the pixels inside the caption area; and corrected pixel data are obtained according to whether each pixel belongs to the foreground or the background.
Referring to Fig. 1, a block diagram of the de-interlacing device according to the preferred embodiment of the present invention is shown. In this embodiment the de-interlacer is described operating together with the dynamic title compensator by way of example; note, however, that the dynamic title compensator of the invention can also operate on its own, and the invention is not limited in this respect. The de-interlacing device 10 comprises a de-interlacer 20, a dynamic title compensator (moving caption compensator) 30 and a multiplexer 40. The de-interlacer 20 obtains de-interlaced pixel data DEI_OUT of a target pixel according to a current field and a plurality of associated fields. In this embodiment four adjacent fields are taken as an example: a previous field pre, the current field cur, a next field nxt and a following field aft. This is not limiting; the associated fields actually required depend on the de-interlacing method used.
The dynamic title compensator 30 defines a caption area of the current field cur according to these fields, and obtains a region motion vector and a region reliability (confidence) corresponding to this caption area. The caption area defined by the dynamic title compensator 30 in the current field need not be unique: several caption areas may be defined in the current field, each with its own region motion vector and region reliability. All such ways of defining caption areas fall within the scope of the invention.
When the target pixel belongs to the caption area, the dynamic title compensator 30 performs motion compensation according to at least part of the fields and the region motion vector to obtain compensated pixel data MCCU_OUT of the target pixel, and judges whether the target pixel is a foreground pixel or a background pixel according to the region motion vector and the region reliability so as to output a compensation selection signal MCCU_SEL. Controlled by MCCU_SEL, the multiplexer 40 outputs either the de-interlaced pixel data DEI_OUT or the compensated pixel data MCCU_OUT as the corrected pixel data DATA_OUT of the target pixel.
Referring to Fig. 2, a block diagram of the dynamic title compensator according to the preferred embodiment is shown. The dynamic title compensator 30 comprises a caption-area defining block 310, a foreground/background judging block 320 and a motion-compensation block 330. The caption-area defining block 310 essentially evaluates a number of detection lines of the current field cur to decide whether each detection line is a caption line and what its overall motion vector is. Each detection line corresponds either to a scan line or to a data (column) line; that is, the dynamic title compensator 30 can compensate horizontal captions (detection lines corresponding to scan lines) as well as vertical captions (detection lines corresponding to data lines), without limitation.
According to the current field cur and the associated fields pre, nxt and aft, the caption-area defining block 310 estimates the motion vector MV_P of each pixel on the detection lines of the current field cur and the gray-level turnover count TO of each detection line. The caption-area defining block 310 then decides from MV_P and TO whether each detection line is a caption line, defines a caption area from the characteristics of the caption lines, and obtains the corresponding region motion vector MV_R and region reliability CR_CONF.
The caption-area defining block 310 comprises a turnover detecting unit 312, a motion estimating unit 314, a caption-line detecting unit 316 and a caption-area detecting unit 318. The turnover detecting unit 312 detects the gray-level turnover count TO of each detection line of the current field cur. If a detection line is not a caption line but ordinary image content, the gray levels of its pixels tend to vary gradually, and gray-level reversals rarely occur. Conversely, if a detection line is a caption line, the gray levels of its pixels reverse much more frequently.
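The turnover count described above can be sketched as follows. This is an illustrative Python sketch under stated assumptions, not the patented implementation: the noise margin `delta` and the trend-flip counting rule are assumptions, since the patent does not fix them here.

```python
def turnover_count(line, delta=32):
    """Count gray-level reversals (turnovers) along one detection line.

    A turnover is registered each time the gray-level trend flips from
    rising to falling (or vice versa) by more than `delta`. Caption
    lines, which carry high-frequency text, flip far more often than
    plain image content. `delta` is an assumed noise margin.
    """
    count = 0
    direction = 0  # +1 rising, -1 falling, 0 undecided
    for prev, curr in zip(line, line[1:]):
        diff = curr - prev
        if abs(diff) < delta:
            continue  # ignore small fluctuations
        new_dir = 1 if diff > 0 else -1
        if direction != 0 and new_dir != direction:
            count += 1
        direction = new_dir
    return count
```

A flat or monotonically ramping line yields zero turnovers, while alternating text-like gray levels yield a high count, matching the distinction the turnover detecting unit 312 relies on.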
The motion estimating unit 314 estimates the motion vector MV_P of each pixel on the detection lines according to the current field cur and the associated fields pre, nxt and aft. For each pixel, the motion estimating unit 314 computes, between each pair of same-parity fields (parity fields) among cur, pre, nxt and aft, the gray-level cross differences along a number of directions within a search window, and takes the displacement giving the minimum cross difference as the motion vector MV_P of the pixel. The minimum may further be required to be smaller than a first threshold.
Referring to Fig. 3A and Fig. 3B, schematic diagrams of motion estimation according to the preferred embodiment are shown. The embodiment is described with the previous field pre, the current field cur, the next field nxt and the following field aft, where pre and nxt are same-parity fields and cur and aft are same-parity fields. In Fig. 3A, the motion estimating unit 314 computes the gray-level cross difference diff1 between the same-parity pair pre and nxt, and the cross difference diff2 between the same-parity pair cur and aft.
Between each pair of same-parity fields, the motion estimating unit 314 computes the gray-level cross differences along the directions within the search window. In Fig. 3B the search window {-3, +3} is taken as an example; this is not limiting and depends on the available hardware resources. With half-pixel precision (again only an example), a search window of {-3, +3} gives thirteen directions a, b, c, ..., l and m along which the motion estimating unit 314 computes cross differences. The cross differences of different directions may be given different weights as required. The motion estimating unit 314 takes the displacement with the minimum cross difference as the motion vector MV_P of the pixel, and may accept the minimum as MV_P only when it is smaller than the first threshold.
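The cross-difference search can be sketched as below. Assumptions beyond the text: integer-pixel precision instead of half-pixel, a single-sample cost per candidate, a horizontal-only search, and the illustrative threshold `t1`; the field layout (2-D lists of gray levels) is also only for demonstration.

```python
def estimate_motion_vector(pre, cur, nxt, aft, y, x, search=3, t1=64):
    """Per-pixel motion estimation by cross differences.

    For each candidate horizontal displacement d in [-search, +search],
    accumulate gray-level cross differences between the two same-parity
    field pairs (pre/nxt and cur/aft) and keep the displacement with
    the smallest total, provided it is below the first threshold t1.
    Returns None when no candidate beats t1 (no reliable vector).
    """
    best_d, best_cost = None, t1
    width = len(cur[y])
    for d in range(-search, search + 1):
        if not (0 <= x - d < width and 0 <= x + d < width):
            continue  # candidate falls outside the field
        # cross difference in the pre/nxt same-parity pair
        diff1 = abs(pre[y][x - d] - nxt[y][x + d])
        # the same displacement checked in the cur/aft pair
        diff2 = abs(cur[y][x - d] - aft[y][x + d])
        cost = diff1 + diff2
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```

With a bright feature moving one pixel per field across the four fields, the minimum cross difference lands on d = 1; a static feature yields d = 0.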
The caption-line detecting unit 316 judges, from the gray-level turnover count TO of each detection line and the motion vectors MV_P of the pixels on it, whether each detection line is a caption line, and obtains the most probable motion vector MV_L of each detection line. As noted above, if a detection line is a caption line, the gray levels of its pixels reverse frequently.
Therefore, when the gray-level turnover count TO of a detection line is greater than a second threshold and the count of the mode (most frequent value) of the motion vectors MV_P of the pixels on the line is greater than a third threshold, the caption-line detecting unit 316 judges the detection line to be a caption line, sets the corresponding detection signal IS_CL to TRUE, and takes the mode as the most probable motion vector MV_L of the line. Because the present embodiment essentially compensates moving captions, MV_L may further be required to be a non-zero vector; that is, if the dominant motion vector of the pixels on the line is the zero vector, the second most frequent motion vector is adopted as MV_L instead.
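The caption-line decision just described can be sketched as follows; the concrete threshold values `t2` and `t3` are illustrative assumptions, not values from the patent.

```python
from collections import Counter

def classify_caption_line(turnovers, vectors, t2=8, t3=5):
    """Decide whether one detection line is a caption line.

    The line is a caption line when its gray-level turnover count
    exceeds t2 AND the mode of its per-pixel motion vectors occurs
    more than t3 times. Because only moving captions are compensated,
    the zero vector is skipped when picking the most probable vector.
    Returns (is_caption_line, most_probable_vector_or_None).
    """
    if turnovers <= t2:
        return False, None
    ranked = Counter(v for v in vectors if v is not None).most_common()
    # skip the zero vector: only moving captions are compensated
    ranked = [(v, n) for v, n in ranked if v != 0]
    if not ranked or ranked[0][1] <= t3:
        return False, None
    return True, ranked[0][0]
```

A line with many turnovers and a dominant non-zero vector is accepted; a quiet line, or one dominated by the zero vector without a strong runner-up, is rejected.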
The caption-area detecting unit 318 defines the caption area from the caption lines, obtains the region reliability CR_CONF of the caption area, and obtains the region motion vector MV_R from the most probable motion vectors MV_L. The caption-area detecting unit 318 may define more than one caption area; for each caption area defined, it obtains the corresponding region motion vector and region reliability. In addition, for each caption line it sets the corresponding caption-line signal IS_CR to TRUE, and to FALSE for each non-caption line.
The caption-area detecting unit 318 may define the caption area according to the caption lines of both the current field cur and the previous field pre, and obtain the region motion vector MV_R from the most probable motion vectors MV_L of those caption lines. One possible decision rule for detecting and defining the caption area is given below; the invention is not limited to it. Referring to Fig. 4A, a schematic diagram of an example of caption-area detection according to the preferred embodiment is shown. In Fig. 4A, A, B and C denote detection lines of the current field cur, and D, E, ..., J and K denote detection lines of the previous field pre. Detection lines A~F are assigned to a first group G1 and detection lines G~K to a second group G2, the first group G1 having higher decision priority than the second group G2.
In the first group G1: if any two of the detection lines A~C share an identical most probable motion vector MV_L1, any two of the detection lines D~F share an identical most probable motion vector MV_L2, and MV_L1 equals MV_L2, then MV_L1 is taken as the region motion vector MV_R. Alternatively, if all three detection lines A~C share an identical most probable motion vector MV_L1, then MV_L1 is MV_R; or, if all three detection lines D~F share an identical most probable motion vector MV_L2, then MV_L2 is MV_R.
If all the decision conditions of the first group G1 fail, the second group G2 is examined: if any three of the detection lines G~K share an identical most probable motion vector MV_L3, then MV_L3 is MV_R; or, if the two detection lines J and K share an identical most probable motion vector MV_L4, then MV_L4 is MV_R.
The caption-area detecting unit 318 may also define the caption area and obtain the region motion vector MV_R from the caption lines of the current field cur alone, without limitation. Referring to Fig. 4B, a schematic diagram of another example of caption-area detection is shown. In Fig. 4B, A, B, ..., F and G denote detection lines of the current field cur. Detection lines A~D are assigned to a first group G1 and detection lines D~G to a second group G2. In the first group G1, if any three of the detection lines A~D share an identical most probable motion vector MV_L3, then MV_L3 is the region motion vector MV_R; alternatively, if any two of the detection lines A~C share an identical most probable motion vector MV_L4, then MV_L4 is MV_R. The same principle applies to the second group G2.
It should be noted that in the preferred embodiment of the present invention a single caption area is essentially a group of adjacent detection lines having similar motion vectors. Not every detection line inside a caption area has to be a caption line, nor does every detection line inside a caption area have to share an identical most probable motion vector.
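The voting idea behind these grouping rules can be sketched as below. This is a deliberate simplification: the group priority and the two-line fallback conditions of Fig. 4A/4B are collapsed into a single "at least `need` lines agree" vote.

```python
def vote_region_vector(line_vectors, need=3):
    """Pick a region motion vector by voting among adjacent caption lines.

    `line_vectors` holds the most probable vector MV_L of each detection
    line in one group (None for non-caption lines). The vector shared by
    at least `need` lines wins; if no vector reaches `need` votes, no
    region vector is produced for this group.
    """
    counts = {}
    for v in line_vectors:
        if v is not None:
            counts[v] = counts.get(v, 0) + 1
    best = max(counts, key=counts.get, default=None)
    if best is not None and counts[best] >= need:
        return best
    return None
```

In the full scheme a failed vote in the higher-priority group would fall through to the next group rather than simply returning None.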
The caption-area detecting unit 318 also obtains the region reliability CR_CONF of the caption area from the number of pixels on each detection line that carry the most probable motion vector MV_L. If, on a single detection line, the number of pixels with the most probable motion vector MV_L reaches a predetermined count, that detection line is regarded as a true caption line. The caption-area detecting unit 318 determines the region reliability CR_CONF from the ratio of the number of true caption lines to the number of all caption lines: the more true caption lines, the higher CR_CONF. The region reliability CR_CONF lies, for example, between 0 and 3.
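Mapping the true-caption-line ratio onto the example 0..3 score might look as follows; the linear quantization is an assumption, since the text only states that more true caption lines give higher reliability.

```python
def region_reliability(true_lines, caption_lines, levels=4):
    """Quantize the ratio of true caption lines to a 0..levels-1 score.

    A caption line is 'true' when enough of its pixels share the most
    probable motion vector MV_L; the region reliability CR_CONF grows
    with the fraction of true caption lines in the caption area.
    """
    if caption_lines == 0:
        return 0  # no caption lines at all: lowest reliability
    ratio = true_lines / caption_lines
    return min(levels - 1, int(ratio * levels))
```

All caption lines being true yields the maximum score 3; none being true yields 0, matching the 0~3 range given as an example above.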
The foreground/background judging block 320 judges whether the target pixel is a foreground pixel or a background pixel according to the motion vector MV_P of the target pixel, the region motion vector MV_R and the region reliability CR_CONF, so as to output the compensation selection signal. The foreground/background judging block 320 comprises a threshold adjusting unit 322 and a foreground/background detecting unit 324. The threshold adjusting unit 322 dynamically sets a fourth threshold and a fifth threshold according to the region reliability CR_CONF.
The foreground/background detecting unit 324 compares the fourth threshold with the overall diversity between the region motion vector MV_R and the motion vectors of all pixels within a search window around the target pixel. Taking the search window {-3, 3} as an example, the foreground/background detecting unit 324 compares the motion vectors MV_P of the seven pixels within the window with the region motion vector MV_R and obtains a diversity value for each. If the sum of the individual diversity values is smaller than the fourth threshold, the target pixel is regarded as a foreground pixel; if the sum is greater than the fourth threshold, the target pixel is regarded as a background pixel.
In addition, the foreground/background detecting unit 324 can also apply a combing detection to the target pixel according to the fifth threshold, to prevent background pixels from being wrongly motion-compensated and interpolated with the region motion vector MV_R. If no combing is detected, the target pixel can be regarded as a foreground pixel; if combing is detected, the target pixel is regarded as a background pixel. The foreground/background detecting unit 324 may use only one of the diversity comparison and the combing detection to judge whether the target pixel is a foreground or a background pixel.
Alternatively, both may be adopted. When the target pixel is judged to be a foreground pixel, the foreground/background detecting unit 324 outputs the compensation selection signal MCCU_SEL to the multiplexer 40 so that the multiplexer outputs the compensated pixel data MCCU_OUT as the corrected pixel data DATA_OUT of the target pixel. When the target pixel is judged to be a background pixel, the foreground/background detecting unit 324 outputs the compensation selection signal MCCU_SEL to the multiplexer 40 so that the multiplexer outputs the de-interlaced pixel data DEI_OUT as the corrected pixel data DATA_OUT of the target pixel.
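The diversity comparison against the fourth threshold can be sketched as below; the absolute-difference diversity measure is an assumption, as the text does not fix how the per-pixel diversity is computed.

```python
def is_foreground(pixel_vectors, region_vector, t4):
    """Foreground/background test by motion-vector diversity.

    Sums, over all pixels in the search window around the target pixel,
    the difference between each pixel's motion vector MV_P and the
    region motion vector MV_R. A small total means the pixel moves with
    the caption (foreground); a large total means background. t4 is the
    fourth threshold set dynamically from the region reliability.
    """
    total = sum(abs(v - region_vector) for v in pixel_vectors)
    return total < t4
```

A window whose vectors all match MV_R gives zero diversity (foreground); a static background window under a moving caption gives a large diversity (background).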
When the target pixel belongs to the caption area, the motion-compensation block 330 performs motion compensation according to at least part of the associated fields and the region motion vector MV_R, and judges whether the detection line containing the target pixel is a pull-down caption line, to obtain the compensated pixel data MCCU_OUT of the target pixel. The motion-compensation block 330 comprises a motion compensation unit 332 and a pull-down caption correcting unit 334.
When the target pixel belongs to the caption area, the motion compensation unit 332 performs motion compensation on the target pixel according to the previous field pre, the next field nxt and the region motion vector MV_R to obtain interpolated pixel data PO1. If the region motion vector MV_R is an even vector, the two pixels displaced by half the region motion vector, (MV_R)/2, on either side of the target pixel are taken and their median (average of the two) is used as the interpolated pixel data PO1.
If the region motion vector MV_R is an odd vector, refer to Fig. 5, which shows a schematic diagram of an example of motion compensation according to the preferred embodiment. In Fig. 5, ☆ denotes the target pixel, x and z are pixels of the previous field pre, and y and w are pixels of the next field nxt. The motion compensation unit 332 takes the mean of pixels x and y, the mean of pixels z and w, and the mean of pixels x, y, z and w, and performs combing detection on them. If the difference between the mean of x and y and the mean of z and w is too large, the target pixel is regarded as a background pixel; otherwise, the mean producing the least combing among the three is taken as the interpolated pixel data PO1.
The pull-down caption correcting unit 334 judges whether the detection line containing the target pixel is a pull-down caption line, to decide whether pull-down caption correction is applied to the target pixel. If the detection line containing the target pixel is judged to be a pull-down caption line, for example a 3:2 pull-down caption line of film material or a 2:2 pull-down caption line, the pull-down caption correcting unit 334 obtains the pull-down caption correction result from the previous field pre and the next field nxt according to the conventional de-interlacing treatment of pull-down captions, and outputs it as the compensated pixel data MCCU_OUT. If the detection line containing the target pixel is judged not to be a pull-down caption line, the pull-down caption correcting unit 334 outputs the motion-compensation result PO1 of the target pixel as the compensated pixel data MCCU_OUT.
In addition, if the target pixel does not belong to the caption area, the interpolated pixel data PO1 obtained by the motion compensation unit 332 from the previous field pre, the next field nxt and the region motion vector MV_R takes a "don't care" value X. In that case the pull-down caption correcting unit 334 outputs PO1 as the compensated pixel data MCCU_OUT, whose value is likewise "don't care". Therefore, when the dynamic title compensator 30 operates on its own and the target pixel does not belong to a caption area, the output of the dynamic title compensator 30 is "don't care" and the compensator can be regarded as inactive.
The present invention also provides a de-interlacing method. Referring to Fig. 6, a flowchart of the de-interlacing method according to the preferred embodiment is shown. In step S600, de-interlaced pixel data of a target pixel are obtained according to a current field and a plurality of associated fields. In step S610, a caption area of the current field is defined according to these fields and a corresponding region motion vector and region reliability are obtained; when the target pixel belongs to the caption area, motion compensation is performed according to at least part of the fields and the region motion vector to obtain compensated pixel data of the target pixel, and the target pixel is judged to be a foreground pixel or a background pixel according to the region motion vector and the region reliability so as to output a compensation selection signal. In step S620, either the de-interlaced pixel data or the compensated pixel data are output as corrected pixel data of the target pixel according to the compensation selection signal.
Referring to Fig. 7, a detailed flowchart of step S610 of the de-interlacing method according to the preferred embodiment is shown. In step S612, the motion vector of each pixel on the detection lines of the current field and the gray-level turnover count of each detection line are estimated according to the current field and the associated fields, each detection line is decided to be a caption line or not, and the caption area with its region motion vector and region reliability is obtained from the caption lines. In step S614, the target pixel is judged to be a foreground pixel or a background pixel according to the motion vector of the target pixel, the region motion vector and the region reliability, so as to output the compensation selection signal. In step S616, when the target pixel belongs to the caption area, motion compensation is performed according to at least part of the fields and the region motion vector, and the detection line containing the target pixel is judged to be a pull-down caption line or not, to obtain the compensated pixel data of the target pixel.
The detailed principles of the above de-interlacing method have been described with respect to the de-interlacer 10, and are therefore not repeated here.
The de-interlacing device, de-interlacing method, and dynamic title compensator disclosed in the above embodiments of the present invention have multiple advantages, some of which are described below:
The de-interlacing device and method and the dynamic title compensator disclosed in the present invention use a plurality of associated fields to estimate the motion vector of each pixel on the detection lines and the gray-level reversal count of each detection line, in order to judge which detection lines are valid caption lines and to find their most probable motion vectors. A caption area composed of multiple detection lines is then defined from the characteristics of neighboring caption lines, and the caption reliability of the whole image is estimated from the reliability of all caption lines. Motion compensation is applied to the pixels inside the caption area, and the corrected pixel data is obtained according to whether each pixel belongs to the foreground or the background. For crawling captions over film-originated (pull-down) material, pull-down caption detection provides a suitable motion vector.
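The caption-line test described above can be sketched as follows. The threshold names `t_reversal` and `t_mode`, and all sample numbers, are illustrative assumptions rather than values from the disclosure:

```python
from collections import Counter

def detect_caption_line(motion_vectors, reversal_count, t_reversal, t_mode):
    """Sketch of the caption-line test: a detection line qualifies when its
    gray-level reversal count exceeds t_reversal AND its most frequent
    non-zero motion vector occurs more than t_mode times (the zero vector
    is excluded, since a valid caption must actually be moving).
    Returns (is_caption_line, most_probable_motion_vector)."""
    if reversal_count <= t_reversal:
        return False, None
    nonzero = [mv for mv in motion_vectors if mv != 0]
    if not nonzero:
        return False, None
    mode_mv, mode_count = Counter(nonzero).most_common(1)[0]
    if mode_count <= t_mode:
        return False, None
    return True, mode_mv

# A scrolling caption line: most pixels share a horizontal shift of -3.
mvs = [-3] * 12 + [0] * 4 + [-2] * 2
line_is_caption, probable_mv = detect_caption_line(
    mvs, reversal_count=20, t_reversal=8, t_mode=10)
```

The most probable motion vectors of adjacent caption lines then feed the region motion vector, and the fraction of pixels agreeing with them feeds the region reliability.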
In addition, the de-interlacing device and method and the dynamic title compensator disclosed in the present invention first distinguish whether the target pixel belongs to the foreground or the background. If the target pixel is a foreground pixel, motion compensation is performed in the temporal direction with different interpolation methods, using the motion vector found by motion estimation. If the target pixel is a background pixel, it is compensated by other existing de-interlacing methods (including motion-adaptive or motion-compensated methods). Therefore, the disclosed device and method not only de-interlace the whole image but also apply motion compensation to crawling captions. As a result, many undesirable visual artifacts such as flicker, line jitter, and jaggedness can be reduced, and a better de-interlaced image can be obtained.
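As an illustration of the foreground path only, a motion-compensated fetch for one pixel might look like the following, assuming purely horizontal caption motion and simple border clamping (both are assumptions made for this sketch, not details fixed by the disclosure):

```python
def motion_compensate_pixel(prev_field, x, y, region_mv):
    """Foreground-pixel sketch: fetch the target pixel from the previous
    same-parity field, displaced horizontally by the region motion vector
    (captions typically crawl horizontally). Coordinates are clamped at
    the frame border rather than wrapped."""
    h, w = len(prev_field), len(prev_field[0])
    src_x = min(max(x - region_mv, 0), w - 1)  # clamp to a valid column
    return prev_field[y][src_x]

# A caption that moved right by 2 pixels per field pair: the value at x=3
# is fetched from x=1 in the previous field of the same parity.
px = motion_compensate_pixel([[10, 20, 30, 40, 50]], x=3, y=0, region_mv=2)
```

A background pixel would instead go through an ordinary motion-adaptive or motion-compensated de-interlacing path, as described above.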
In summary, although the present invention has been disclosed above by way of preferred embodiments, these are not intended to limit the present invention. Those having ordinary skill in the art to which the present invention pertains may make various equivalent changes and substitutions without departing from the spirit and scope of the present invention. Therefore, the protection scope of the present invention shall be defined by the scope of the appended claims.

Claims (38)

1. A de-interlacing device, comprising:
a deinterlacer, for obtaining de-interlaced pixel data of a target pixel according to a current field and a plurality of associated fields;
a dynamic title compensator, for defining a caption area of the current field according to the fields and obtaining a corresponding region motion vector and a region reliability, wherein when the target pixel belongs to the caption area, the dynamic title compensator performs motion compensation according to at least part of the fields and the region motion vector to obtain compensated pixel data of the target pixel, and judges whether the target pixel is a foreground pixel or a background pixel according to the region motion vector and the region reliability so as to output a compensation selection signal; and
a multiplexer, controlled by the compensation selection signal to output the de-interlaced pixel data or the compensated pixel data as corrected pixel data of the target pixel.
2. The de-interlacing device according to claim 1, wherein the dynamic title compensator comprises:
a caption area defining block, for estimating the motion vector of each pixel on a plurality of detection lines of the current field and the pixel gray-level reversal count of each detection line according to the current field and the associated fields, in order to determine whether each detection line is a caption line, and for defining the caption area and obtaining the region motion vector and the region reliability according to the caption lines;
a foreground/background judging block, for judging whether the target pixel is the foreground pixel or the background pixel according to the motion vector of the target pixel, the region motion vector, and the region reliability so as to output the compensation selection signal; and
a motion compensation block, for performing motion compensation according to at least part of the fields and the region motion vector when the target pixel belongs to the caption area, and for judging whether the detection line containing the target pixel is a pull-down caption line so as to obtain the compensated pixel data of the target pixel.
3. The de-interlacing device according to claim 2, wherein each detection line corresponds to a scan line.
4. The de-interlacing device according to claim 2, wherein each detection line corresponds to a data line.
5. The de-interlacing device according to claim 2, wherein the caption area defining block comprises:
a reversal detecting unit, for detecting the pixel gray-level reversal count of each detection line of the current field;
a motion estimation unit, for estimating the motion vector of each pixel on the detection lines according to the current field and the associated fields;
a caption line detecting unit, for judging whether each detection line is a caption line according to the pixel gray-level reversal counts and the motion vectors of the pixels on the detection lines, and for obtaining a plurality of most probable motion vectors respectively corresponding to the detection lines; and
a caption area detecting unit, for defining the caption area according to the caption lines, obtaining the region reliability of the caption area, and obtaining the region motion vector according to the most probable motion vectors.
6. The de-interlacing device according to claim 5, wherein for each pixel on the detection lines, the motion estimation unit calculates, between pairs of fields of the same polarity among the current field and the associated fields, the cross-field pixel gray-level differences along a plurality of directions within a search interval, and takes the direction giving the minimum cross-field pixel gray-level difference of each pixel as the motion vector of that pixel.
7. The de-interlacing device according to claim 6, wherein the minimum cross-field pixel gray-level difference is less than a first threshold.
8. The de-interlacing device according to claim 5, wherein when the pixel gray-level reversal count of a detection line is greater than a second threshold, and the number of occurrences of the mode of the motion vectors of the pixels on the detection line is greater than a third threshold, the caption line detecting unit judges that the detection line is a caption line and takes the mode as the most probable motion vector of the detection line.
9. The de-interlacing device according to claim 8, wherein the most probable motion vector is a non-zero vector.
10. The de-interlacing device according to claim 5, wherein the caption area detecting unit defines the caption area according to the caption lines of the current field and a previous field, and obtains the region motion vector according to the most probable motion vectors.
11. The de-interlacing device according to claim 5, wherein the caption area detecting unit obtains the region reliability of the caption area according to the number of pixels on the detection lines in the caption area that have the most probable motion vector.
12. The de-interlacing device according to claim 2, wherein the foreground/background judging block comprises:
a threshold adjusting unit, for dynamically setting a fourth threshold and a fifth threshold according to the region reliability; and
a foreground/background detecting unit, for comparing, against the fourth threshold, the overall difference between the region motion vector and the motion vectors of the pixels in a search interval corresponding to the target pixel, or for performing comb detection on the target pixel according to the fifth threshold, so as to judge whether the target pixel is the foreground pixel or the background pixel and output the compensation selection signal.
13. The de-interlacing device according to claim 2, wherein the motion compensation block comprises:
a motion compensation unit, for performing motion compensation on the target pixel according to at least part of the fields, the region motion vector, and the de-interlaced pixel data when the target pixel belongs to the caption area; and
a pull-down caption correcting unit, for judging whether the detection line containing the target pixel is the pull-down caption line, so as to decide whether to perform pull-down caption correction on the target pixel; if the detection line containing the target pixel is not the pull-down caption line, the result of the motion compensation of the target pixel is output as the compensated pixel data; otherwise, the result of the pull-down caption correction of the target pixel is output as the compensated pixel data.
14. A dynamic title compensator, comprising:
a caption area defining block, for estimating the motion vector of each pixel on a plurality of detection lines of a current field and the pixel gray-level reversal count of each detection line according to the current field and a plurality of associated fields, in order to determine whether each detection line is a caption line, and for defining a caption area and obtaining a corresponding region motion vector and a region reliability according to the caption lines;
a foreground/background judging block, for judging whether a target pixel is a foreground pixel or a background pixel according to the motion vector of the target pixel, the region motion vector, and the region reliability so as to output a compensation selection signal; and
a motion compensation block, for performing motion compensation according to at least part of the fields and the region motion vector when the target pixel belongs to the caption area, and for judging whether the detection line containing the target pixel is a pull-down caption line so as to obtain compensated pixel data of the target pixel.
15. The dynamic title compensator according to claim 14, wherein each detection line corresponds to a scan line.
16. The dynamic title compensator according to claim 14, wherein each detection line corresponds to a data line.
17. The dynamic title compensator according to claim 14, wherein the caption area defining block comprises:
a reversal detecting unit, for detecting the pixel gray-level reversal count of each detection line of the current field;
a motion estimation unit, for estimating the motion vector of each pixel on the detection lines according to the current field and the associated fields;
a caption line detecting unit, for judging whether each detection line is a caption line according to the pixel gray-level reversal counts and the motion vectors of the pixels on the detection lines, and for obtaining a plurality of most probable motion vectors respectively corresponding to the detection lines; and
a caption area detecting unit, for defining the caption area according to the caption lines, obtaining the region reliability of the caption area, and obtaining the region motion vector according to the most probable motion vectors.
18. The dynamic title compensator according to claim 17, wherein for each pixel on the detection lines, the motion estimation unit calculates, between pairs of fields of the same polarity among the current field and the associated fields, the cross-field pixel gray-level differences along a plurality of directions within a search interval, and takes the direction giving the minimum cross-field pixel gray-level difference of each pixel as the motion vector of that pixel.
19. The dynamic title compensator according to claim 18, wherein the minimum cross-field pixel gray-level difference is less than a first threshold.
20. The dynamic title compensator according to claim 17, wherein when the pixel gray-level reversal count of a detection line is greater than a second threshold, and the number of occurrences of the mode of the motion vectors of the pixels on the detection line is greater than a third threshold, the caption line detecting unit judges that the detection line is a caption line and takes the mode as the most probable motion vector of the detection line.
21. The dynamic title compensator according to claim 20, wherein the most probable motion vector is a non-zero vector.
22. The dynamic title compensator according to claim 17, wherein the caption area detecting unit defines the caption area according to the caption lines of the current field and a previous field, and obtains the region motion vector according to the most probable motion vectors.
23. The dynamic title compensator according to claim 17, wherein the caption area detecting unit obtains the region reliability of the caption area according to the number of pixels on the detection lines in the caption area that have the most probable motion vector.
24. The dynamic title compensator according to claim 14, wherein the foreground/background judging block comprises:
a threshold adjusting unit, for dynamically setting a fourth threshold and a fifth threshold according to the region reliability; and
a foreground/background detecting unit, for comparing, against the fourth threshold, the overall difference between the region motion vector and the motion vectors of the pixels in a search interval corresponding to the target pixel, or for performing comb detection on the target pixel according to the fifth threshold, so as to judge whether the target pixel is the foreground pixel or the background pixel and output the compensation selection signal.
25. The dynamic title compensator according to claim 14, wherein the motion compensation block comprises:
a motion compensation unit, for performing motion compensation on the target pixel according to at least part of the fields and the region motion vector when the target pixel belongs to the caption area; and
a pull-down caption correcting unit, for judging whether the detection line containing the target pixel is the pull-down caption line, so as to decide whether to perform pull-down caption correction on the target pixel; if the detection line containing the target pixel is not the pull-down caption line, the result of the motion compensation of the target pixel is output as the compensated pixel data; otherwise, the result of the pull-down caption correction of the target pixel is output as the compensated pixel data.
26. A de-interlacing method, comprising:
obtaining de-interlaced pixel data of a target pixel according to a current field and a plurality of associated fields;
defining a caption area of the current field according to the fields, and obtaining a corresponding region motion vector and a region reliability;
when the target pixel belongs to the caption area, performing motion compensation according to at least part of the fields and the region motion vector to obtain compensated pixel data of the target pixel, and judging whether the target pixel is a foreground pixel or a background pixel according to the region motion vector and the region reliability so as to output a compensation selection signal; and
outputting the de-interlaced pixel data or the compensated pixel data as corrected pixel data of the target pixel according to the compensation selection signal.
27. The de-interlacing method according to claim 26, further comprising:
estimating the motion vector of each pixel on a plurality of detection lines of the current field and the pixel gray-level reversal count of each detection line according to the current field and the associated fields, in order to determine whether each detection line is a caption line, and defining the caption area and obtaining the region motion vector and the region reliability according to the caption lines;
judging whether the target pixel is the foreground pixel or the background pixel according to the motion vector of the target pixel, the region motion vector, and the region reliability so as to output the compensation selection signal; and
when the target pixel belongs to the caption area, performing motion compensation according to at least part of the fields and the region motion vector, and judging whether the detection line containing the target pixel is a pull-down caption line so as to obtain the compensated pixel data of the target pixel.
28. The de-interlacing method according to claim 27, wherein each detection line corresponds to a scan line.
29. The de-interlacing method according to claim 27, wherein each detection line corresponds to a data line.
30. The de-interlacing method according to claim 27, further comprising:
detecting the pixel gray-level reversal count of each detection line of the current field;
estimating the motion vector of each pixel on the detection lines according to the current field and the associated fields;
judging whether each detection line is a caption line according to the pixel gray-level reversal counts and the motion vectors of the pixels on the detection lines, and obtaining a plurality of most probable motion vectors respectively corresponding to the detection lines; and
defining the caption area and obtaining the region reliability of the caption area according to the caption lines, and obtaining the region motion vector according to the most probable motion vectors.
31. The de-interlacing method according to claim 30, further comprising:
for each pixel on the detection lines, calculating, between pairs of fields of the same polarity among the current field and the associated fields, the cross-field pixel gray-level differences along a plurality of directions within a search interval, and taking the direction giving the minimum cross-field pixel gray-level difference of each pixel as the motion vector of that pixel.
32. The de-interlacing method according to claim 31, wherein the minimum cross-field pixel gray-level difference is less than a first threshold.
33. The de-interlacing method according to claim 30, further comprising:
when the pixel gray-level reversal count of a detection line is greater than a second threshold, and the number of occurrences of the mode of the motion vectors of the pixels on the detection line is greater than a third threshold, judging that the detection line is a caption line and taking the mode as the most probable motion vector of the detection line.
34. The de-interlacing method according to claim 33, wherein the most probable motion vector is a non-zero vector.
35. The de-interlacing method according to claim 30, further comprising:
defining the caption area according to the caption lines of the current field and a previous field, and obtaining the region motion vector according to the most probable motion vectors.
36. The de-interlacing method according to claim 30, further comprising:
obtaining the region reliability of the caption area according to the number of pixels on the detection lines in the caption area that have the most probable motion vector.
37. The de-interlacing method according to claim 27, further comprising:
dynamically setting a fourth threshold and a fifth threshold according to the region reliability; and
comparing, against the fourth threshold, the overall difference between the region motion vector and the motion vectors of the pixels in a search interval corresponding to the target pixel, or performing comb detection on the target pixel according to the fifth threshold, so as to judge whether the target pixel is the foreground pixel or the background pixel and output the compensation selection signal.
38. The de-interlacing method according to claim 27, further comprising:
when the target pixel belongs to the caption area, performing motion compensation on the target pixel according to at least part of the fields, the region motion vector, and the de-interlaced pixel data;
judging whether the detection line containing the target pixel is the pull-down caption line, so as to decide whether to perform pull-down caption correction on the target pixel; and
if the detection line containing the target pixel is not the pull-down caption line, outputting the result of the motion compensation of the target pixel as the compensated pixel data; otherwise, outputting the result of the pull-down caption correction of the target pixel as the compensated pixel data.
CN200910132831A 2009-04-17 2009-04-17 De-interlacing device and de-interlacing method and dynamic title compensator Expired - Fee Related CN101867736B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN200910132831A CN101867736B (en) 2009-04-17 2009-04-17 De-interlacing device and de-interlacing method and dynamic title compensator

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN200910132831A CN101867736B (en) 2009-04-17 2009-04-17 De-interlacing device and de-interlacing method and dynamic title compensator

Publications (2)

Publication Number Publication Date
CN101867736A CN101867736A (en) 2010-10-20
CN101867736B true CN101867736B (en) 2012-08-29

Family

ID=42959272

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200910132831A Expired - Fee Related CN101867736B (en) 2009-04-17 2009-04-17 De-interlacing device and de-interlacing method and dynamic title compensator

Country Status (1)

Country Link
CN (1) CN101867736B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105282397B (en) * 2014-07-22 2019-03-29 北京数码视讯科技股份有限公司 Move the interlace-removing method and device of subtitle

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1462546A (en) * 2001-05-15 2003-12-17 皇家菲利浦电子有限公司 Detecting subtitles in video signal
CN101106685A (en) * 2007-08-31 2008-01-16 湖北科创高新网络视频股份有限公司 An interlining removal method and device based on motion detection

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1462546A (en) * 2001-05-15 2003-12-17 皇家菲利浦电子有限公司 Detecting subtitles in video signal
CN101106685A (en) * 2007-08-31 2008-01-16 湖北科创高新网络视频股份有限公司 An interlining removal method and device based on motion detection

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JP-A-H10-112837 1998.04.28

Also Published As

Publication number Publication date
CN101867736A (en) 2010-10-20

Similar Documents

Publication Publication Date Title
KR100282397B1 (en) Deinterlacing device of digital image data
US6999128B2 (en) Stillness judging device and scanning line interpolating device having it
KR101127220B1 (en) Apparatus for motion compensation-adaptive de-interlacing and method the same
KR100360893B1 (en) Apparatus and method for compensating video motions
KR101536794B1 (en) Image interpolation with halo reduction
US7193655B2 (en) Process and device for de-interlacing by pixel analysis
US20040070686A1 (en) Deinterlacing apparatus and method
KR100722773B1 (en) Method and apparatus for detecting graphic region in moving picture
JP5001684B2 (en) Scan conversion device
US20120092553A1 (en) Gradient adaptive video de-interlacing
EP1175088B1 (en) Device for detecting a moving subject
KR100422575B1 (en) An Efficient Spatial and Temporal Interpolation system for De-interlacing and its method
US8576337B2 (en) Video image processing apparatus and video image processing method
JP5139086B2 (en) Video data conversion from interlaced to non-interlaced
CN101867736B (en) De-interlacing device and de-interlacing method and dynamic title compensator
US8294819B2 (en) De-interlacing apparatus and method and moving caption compensator
CN201222771Y (en) High speed edge self-adapting de-interlaced interpolation device
CN101340539A (en) Deinterlacing video processing method and system by moving vector and image edge detection
JPH08163573A (en) Motion vector detector and successive scanning converter using the detector
US20090046202A1 (en) De-interlace method and apparatus
JP2008530876A (en) Video data conversion from interlaced to non-interlaced
JP4339237B2 (en) Sequential scan converter
JP2004320278A (en) Dynamic image time axis interpolation method and dynamic image time axis interpolation apparatus
JP4463171B2 (en) Autocorrelation value calculation method, interpolation pixel generation method, apparatus thereof, and program thereof
KR100351160B1 (en) Apparatus and method for compensating video motions

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20120829

Termination date: 20170417

CF01 Termination of patent right due to non-payment of annual fee