CN102364933A - Motion-classification-based adaptive de-interlacing method - Google Patents


Info

Publication number
CN102364933A
Authority
CN
China
Prior art keywords
pending
field
scene
execution
interpolation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2011103285076A
Other languages
Chinese (zh)
Inventor
丁勇
史杨宇
刘晓东
王涵
李博睿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN2011103285076A priority Critical patent/CN102364933A/en
Publication of CN102364933A publication Critical patent/CN102364933A/en
Pending legal-status Critical Current


Landscapes

  • Television Systems (AREA)

Abstract

The invention discloses a motion-classification-based adaptive de-interlacing method. Conventional interlaced scanning suffers from flicker, motion judder, vertical-edge jagging and similar artifacts. On the basis of accurately detecting the motion state of the pixel to be interpolated and dividing it at fine granularity, the method adaptively selects a corresponding algorithm for each class, and supplements conventional de-interlacing with Wiener median filtering and triple median filtering on the inter-field and intra-field sides respectively. The interpolation is thereby more targeted, and the de-interlacing strategies of adjacent motion states transition smoothly, achieving a good de-interlacing result.

Description

Adaptive de-interlacing method based on motion classification
Technical field
The invention belongs to the technical field of video signal processing, and in particular relates to an adaptive de-interlacing method based on motion classification.
Background technology
In interlaced scanning, each frame is split into an odd field and an even field, so the line-scanning frequency, the video-signal spectrum, and the channel bandwidth needed to transmit the signal are all half of those of progressive scanning. Because of the persistence of vision, the human eye perceives smooth motion rather than alternating half-frames, so channel utilization is effectively increased with little subjective loss of image quality. Since early communication technology was undeveloped and limited bandwidth had to be conserved, interlaced scanning was generally adopted in conventional analog television. Interlacing nevertheless has several drawbacks: flicker, motion judder, vertical-edge jagging, and so on.
With the development and maturity of digital and high-definition television technology and the steadily rising demand for video quality, progressive scanning has become the preferred scanning mode for digital television, and current flat-panel display terminals are all progressive-scan display devices. A large number of video files recorded in interlaced format remain, however, and need to be converted into progressive-scan video files. De-interlacing is precisely such a format-conversion technique, converting an interlaced signal into a progressive-scan signal.
De-interlacing techniques fall into three main classes: de-interlacing based on spatial two-dimensional interpolation, on temporal two-dimensional interpolation, and on spatio-temporal three-dimensional interpolation. Spatial interpolation reconstructs the pixel to be processed from information within the current field; temporal interpolation restores the image using the relation between different fields; spatio-temporal interpolation combines intra-field and inter-field information, its algorithms mainly including content-adaptive, motion-adaptive, and motion-compensated methods. Motion-compensated algorithms preserve the temporal and spatial details of moving objects well and are currently the most advanced format-conversion algorithms.
Each of these methods has its own shortcomings: spatial interpolation cannot increase the vertical resolution of the image, which is therefore prone to blurring; temporal interpolation can introduce defects such as jagging and feathering; and motion-compensated algorithms are extremely sensitive to motion-estimation errors, so interpolation mistakes occur easily.
Summary of the invention
The purpose of the method of the invention is to overcome the deficiencies of prior-art de-interlacing algorithms by providing an adaptive de-interlacing method based on motion classification that effectively avoids interpolation errors and thereby achieves a good de-interlacing result.
The specific implementation steps of the method are as follows:
Step (1). Take a point C to be processed in the input video image. Determine whether the row coordinate of point C modulo 2 equals the index of the field containing C modulo 2: if the two remainders are equal, keep the gray value of point C unchanged and execute step (10); if they are not equal, execute step (2).
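The parity test of step (1) can be sketched in Python (an illustrative reading, not code from the patent; `row` and `field_index` are assumed to be zero-based integer indices):

```python
def needs_interpolation(row: int, field_index: int) -> bool:
    """Step (1): a pixel whose row parity matches the parity of its
    field lies on an original scan line and keeps its gray value;
    otherwise the line is missing in this field and must be
    interpolated by steps (2)-(9)."""
    return (row % 2) != (field_index % 2)
```

Under this convention an even field carries the even rows, so only the odd rows of that field are synthesized.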
Step (2). Using the two fields before and the two fields after the field containing point C as reference data, compute the correlations between fields of the same polarity: the correlation P between C's field and the field whose index is 2 less, the correlation Q between C's field and the field whose index is 2 greater, and the correlation R between the field immediately before and the field immediately after C's field; from these, compute the motion coefficient M.
The motion coefficient M is computed as follows:
Let the field containing point C be field n. Compute the correlation P between field n−2 and field n and the correlation Q between field n and field n+2, both fields being of the same polarity as C's field:
P = \left| \frac{\sum_{i=-1}^{1} T_i(n) + \sum_{i=-1}^{1} B_i(n)}{2} - \frac{\sum_{i=-1}^{1} T_i(n-2) + \sum_{i=-1}^{1} B_i(n-2)}{2} \right|
Q = \left| \frac{\sum_{i=-1}^{1} T_i(n) + \sum_{i=-1}^{1} B_i(n)}{2} - \frac{\sum_{i=-1}^{1} T_i(n+2) + \sum_{i=-1}^{1} B_i(n+2)}{2} \right|
where T_i(n) is the 1×3 pixel block in the row directly above point C in field n, composed of the three points T_{-1}(n), T_0(n), T_1(n);
B_i(n) is the 1×3 pixel block in the row directly below point C in field n, composed of the three points B_{-1}(n), B_0(n), B_1(n).
At the same time compute the correlation R between fields n−1 and n+1, whose polarity is opposite to that of the field containing the point to be interpolated:
R = \left| \sum_{i=-1}^{1} X_i(n-1) - \sum_{i=-1}^{1} X_i(n+1) \right|
where X_{-1}, X_0, X_1 compose the 1×3 pixel block in the same row as point C.
The motion coefficient M is then obtained from P, Q and R (the combining formula appears only as image BDA0000101920570000034 in the original document).
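The correlations of step (2) can be sketched as follows. This is a minimal illustration, not the patent's code: the 1×3 pixel blocks are passed in directly, and since the formula combining P, Q and R into M survives only as an image in the source, a simple maximum is assumed here and marked as such.

```python
import numpy as np

def motion_coefficient(T_n, B_n, T_nm2, B_nm2, T_np2, B_np2, X_nm1, X_np1):
    """Correlations of step (2).  T_*/B_* are the 1x3 blocks above/below
    point C in the same-polarity fields n-2, n, n+2; X_* are the 1x3
    blocks in C's own row in the opposite-polarity fields n-1, n+1."""
    avg = lambda t, b: (np.sum(t) + np.sum(b)) / 2.0
    P = abs(avg(T_n, B_n) - avg(T_nm2, B_nm2))   # correlation with field n-2
    Q = abs(avg(T_n, B_n) - avg(T_np2, B_np2))   # correlation with field n+2
    R = abs(np.sum(X_nm1) - np.sum(X_np1))       # field n-1 vs field n+1
    M = max(P, Q, R)  # ASSUMED combination; the patent's formula is an image
    return P, Q, R, M
```

A static scene gives P = Q = R = 0 and hence M = 0, which steps (3) and (6) then classify as no motion.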
Step (3). Compare the motion coefficient M obtained in step (2) with preset motion threshold 1: if M is less than threshold 1, set the low-speed-motion confidence to 1 and execute step (4); otherwise set it to 0 and execute step (6);
Step (4). Using a pixel-block matching search, form a 1×3 search block from point C and the two points to its left and right in the same row. Compute the sum of absolute differences (SAD) between this search block at the same position in the previous field and nine candidate blocks in the following field: the left, center and right blocks in the row two lines above, in C's own row, and in the row two lines below. Compare the result with motion threshold 0: if the result is greater than threshold 0, set the slight-motion confidence to 1, set the motion vector of point C to the best-matching block vector of the following field minus the previous-field search-block vector, and execute step (5); otherwise set the slight-motion confidence to 0 and execute step (8), where the best-matching block of the following field is the search block corresponding to the computed result;
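The nine-candidate SAD search of step (4) can be sketched as follows (a simplified illustration under assumed conventions: fields are passed as full 2-D arrays, border handling is omitted, and the function names are not from the patent):

```python
import numpy as np

def best_match_sad(prev_field, next_field, y, x):
    """Step (4) sketch: match the 1x3 block centred on (row y, column x)
    of the previous field against nine candidate 1x3 blocks in the next
    field: left/centre/right shifts in the rows two lines above, level
    with, and two lines below.  Returns the minimum SAD and the (dy, dx)
    displacement of the best candidate."""
    ref = prev_field[y, x - 1:x + 2].astype(int)
    best = None
    for dy in (-2, 0, 2):
        for dx in (-1, 0, 1):
            cand = next_field[y + dy, x + dx - 1:x + dx + 2].astype(int)
            sad = int(np.abs(ref - cand).sum())
            if best is None or sad < best[0]:
                best = (sad, (dy, dx))
    return best

# Demo: a bright 1x3 block that moves one pixel to the right.
prev = np.zeros((7, 7), dtype=np.uint8)
nxt = np.zeros((7, 7), dtype=np.uint8)
prev[3, 2:5] = 100
nxt[3, 3:6] = 100
sad, mv = best_match_sad(prev, nxt, 3, 3)
```

In the demo the best candidate is the centre row shifted right by one, so the search recovers the displacement (0, 1) with SAD 0.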
Step (5). Interpolate the point C under small-amplitude motion using Wiener median filtering, then execute step (10);
The specific interpolation steps for point C are:
a. Take the three points A, B and A' on the motion direction through the position of point C in field n, A and B being the two points before C on the motion direction and A' the one point after it;
b. Apply Wiener filtering to point A, point B and the point C to be processed:
F_wiener = wiener(A, B, C)
where
\begin{pmatrix} r_x(0) & r_x^*(1) & r_x^*(2) \\ r_x(1) & r_x(0) & r_x^*(1) \\ r_x(2) & r_x(1) & r_x(0) \end{pmatrix} \begin{pmatrix} w(0) \\ w(1) \\ w(2) \end{pmatrix} = \begin{pmatrix} r_x(1) \\ r_x(2) \\ r_x(3) \end{pmatrix}
F_wiener = w(0) \cdot C + w(1) \cdot B + w(2) \cdot A
where F_wiener is the result of the Wiener filtering, r_x and r_x^* are respectively the autocorrelation and conjugate autocorrelation coefficients of the sequence [A, B, C, A'], and w(0), w(1), w(2) are the weight coefficients of points C, B and A respectively.
c. Take the two points D and E adjacent to C in the direction perpendicular to the motion direction and compute the interpolation result:
S_low(i, j) = median(F_wiener, E, D)
where S_low(i, j) is the median of F_wiener, E and D;
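The Wiener median filtering of step (5) can be sketched as below. This is one plausible reading of the equations, not the patent's code: the samples are assumed real (so the conjugate coefficients equal the plain ones) and the biased autocorrelation estimate of the length-4 sequence [A, B, C, A'] is assumed.

```python
import numpy as np

def wiener_median_interp(A, B, C, Ap, D, E):
    """Step (5) sketch: Wiener-filter the motion-direction samples
    A, B, C (with A' closing the sequence), then take the median with
    the two perpendicular neighbours D and E."""
    s = np.array([A, B, C, Ap], dtype=float)
    # Biased autocorrelation estimate at lags 0..3 (assumed).
    r = [float(np.dot(s[:len(s) - k], s[k:])) / len(s) for k in range(4)]
    # 3x3 Wiener-Hopf normal equations  Rm w = [r(1), r(2), r(3)]^T
    Rm = np.array([[r[0], r[1], r[2]],
                   [r[1], r[0], r[1]],
                   [r[2], r[1], r[0]]])
    w = np.linalg.solve(Rm, np.array([r[1], r[2], r[3]]))
    f_wiener = w[0] * C + w[1] * B + w[2] * A   # F_wiener = w(0)C + w(1)B + w(2)A
    return float(np.median([f_wiener, E, D]))   # S_low = median(F_wiener, E, D)
```

The trailing median makes the result robust: whenever D and E agree, the output is pinned to their common value regardless of the filtered estimate.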
Step (6). Compare the motion coefficient M obtained in step (2) with preset motion threshold 2: if M is greater than threshold 2, set the large-amplitude-motion confidence to 1 and execute step (7); otherwise set it to 0 and execute step (9);
Step (7). Interpolate the point C using triple median filtering, then execute step (10);
The specific steps of the triple-median-filtering interpolation of point C are:
d. Let the coordinates of point C be (j, i). Within a 3×3 pixel block, apply median filtering to the three points of row j−1:
F_1 = median(S(i−1, j−1), S(i, j−1), S(i+1, j−1))
where F_1 is the median of S(i−1, j−1), S(i, j−1) and S(i+1, j−1);
Apply median filtering to the three points of row j+1:
F_2 = median(S(i−1, j+1), S(i, j+1), S(i+1, j+1))
where F_2 is the median of S(i−1, j+1), S(i, j+1) and S(i+1, j+1);
e. Apply median filtering to S(i−1, j), F_1 and F_2 to obtain the interpolation result S_high(i, j) of a non-edge point C under large-amplitude motion:
S_high(i, j) = median(S(i−1, j), F_1, F_2);
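The triple median filter of step (7) is a direct transcription of the three formulas above (only the array layout `S[column, row]` is an assumption made to follow the source's S(i, j) notation):

```python
import numpy as np

def triple_median(S, i, j):
    """Step (7): triple median filtering.  S[i, j] = S(column i, row j);
    rows j-1 and j+1 exist in the current field while row j is the line
    being reconstructed."""
    F1 = np.median([S[i - 1, j - 1], S[i, j - 1], S[i + 1, j - 1]])  # row j-1
    F2 = np.median([S[i - 1, j + 1], S[i, j + 1], S[i + 1, j + 1]])  # row j+1
    return float(np.median([S[i - 1, j], F1, F2]))

# Tiny demo field: row j-1 all 10, row j+1 all 20, left neighbour 15.
S = np.zeros((3, 3))
S[:, 0] = 10
S[:, 2] = 20
S[0, 1] = 15
result = triple_median(S, 1, 1)
```

With the two field rows at 10 and 20 and the left neighbour at 15, the filter returns the middle value 15, smoothing without overshoot.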
Step (8). Interpolate the point C using the inter-field field-insertion method, then execute step (10);
Step (9). For the point C, take the intra-field and inter-field interpolation results obtained by steps (7) and (8) respectively, compute the spatio-temporal weight, and merge the two interpolation results using that weight;
The spatio-temporal weight α is:
\alpha = \left[ \cos\left( \pi \cdot \frac{M - T_1}{T_2 - T_1} \right) + 1 \right] / 2
where T_1 is motion threshold 1 and T_2 is motion threshold 2, and
Y_{intp} = \alpha \cdot Y_{inter} + (1 - \alpha) \cdot Y_{intra}
where Y_inter is the inter-field interpolation result, Y_intra is the intra-field interpolation result, and Y_intp is the final interpolation result;
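The raised-cosine blend of step (9) can be written directly from the two formulas above (function and parameter names are illustrative, not from the patent):

```python
import math

def blend(M, T1, T2, Y_inter, Y_intra):
    """Step (9): spatio-temporal weighting.  As the motion coefficient M
    rises from T1 to T2, alpha falls smoothly from 1 (pure inter-field
    result) to 0 (pure intra-field result)."""
    alpha = (math.cos(math.pi * (M - T1) / (T2 - T1)) + 1.0) / 2.0
    return alpha * Y_inter + (1.0 - alpha) * Y_intra
```

At M = T1 the output equals the inter-field result, at M = T2 the intra-field result, and the cosine ensures the transition between the two de-interlacing strategies has zero slope at both ends.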
Step (10). Traverse the input video image of step (1) and determine whether points remain to be processed: if so, return to step (1) to process the next point; if not, finish.
Compared with the prior art, the beneficial effects of the invention are:
(1) On the basis of accurately detecting the motion state of the pixel to be interpolated and dividing it at fine granularity, the invention adaptively selects the corresponding algorithm for each class; the interpolation is thereby more targeted, and the de-interlacing strategies of adjacent motion states transition smoothly.
(2) The invention not only improves the traditional motion-adaptive algorithm, but also supplements traditional de-interlacing with Wiener median filtering and triple median filtering on the inter-field and intra-field sides respectively, effectively avoiding interpolation errors and thereby achieving a good de-interlacing result.
Description of drawings
Fig. 1 is the flow chart of the adaptive de-interlacing method based on motion classification;
Fig. 2 is a schematic diagram of five-field motion detection;
Fig. 3 is a schematic diagram of the subdivision of the low-speed motion state;
Fig. 4 is a schematic diagram of motion-direction detection;
Fig. 5 is a schematic diagram of Wiener median filtering;
Fig. 6 is a schematic diagram of triple median filtering.
Embodiment
The invention is described further below with reference to the accompanying drawings.
As shown in Fig. 1, the specific implementation of the method follows steps (1) through (10) exactly as set out in the Summary of the invention above. In particular: Fig. 2 illustrates the five-field motion detection used to compute the correlations P, Q, R and the motion coefficient M in step (2); Fig. 3 illustrates the search-block arrangement of the pixel-block matching search that subdivides the low-speed motion state in step (4); Fig. 4 and Fig. 5 illustrate the motion-direction detection and the Wiener median filtering of step (5); and Fig. 6 illustrates the triple median filtering of step (7).

Claims (1)

1. An adaptive de-interlacing method based on motion classification, characterized in that the method comprises the following steps:
Step (1). Take a point C to be processed in the input video image. Determine whether the row coordinate of point C modulo 2 equals the index of the field containing C modulo 2: if the two remainders are equal, keep the gray value of point C unchanged and execute step (10); if they are not equal, execute step (2);
Step (2). Using the two fields before and the two fields after the field containing point C as reference data, compute the correlations between fields of the same polarity: the correlation P between C's field and the field whose index is 2 less, the correlation Q between C's field and the field whose index is 2 greater, and the correlation R between the field immediately before and the field immediately after C's field; from these, compute the motion coefficient M.
The motion coefficient M is computed as follows:
Let the field containing point C be field n. Compute the correlation P between field n−2 and field n and the correlation Q between field n and field n+2, both fields being of the same polarity as C's field:
P = \left| \frac{\sum_{i=-1}^{1} T_i(n) + \sum_{i=-1}^{1} B_i(n)}{2} - \frac{\sum_{i=-1}^{1} T_i(n-2) + \sum_{i=-1}^{1} B_i(n-2)}{2} \right|
Q = \left| \frac{\sum_{i=-1}^{1} T_i(n) + \sum_{i=-1}^{1} B_i(n)}{2} - \frac{\sum_{i=-1}^{1} T_i(n+2) + \sum_{i=-1}^{1} B_i(n+2)}{2} \right|
where T_i(n) is the 1×3 pixel block in the row directly above point C in field n, composed of the three points T_{-1}(n), T_0(n), T_1(n);
B_i(n) is the 1×3 pixel block in the row directly below point C in field n, composed of the three points B_{-1}(n), B_0(n), B_1(n).
At the same time compute the correlation R between fields n−1 and n+1, whose polarity is opposite to that of the field containing the point to be interpolated:
R = \left| \sum_{i=-1}^{1} X_i(n-1) - \sum_{i=-1}^{1} X_i(n+1) \right|
where X_{-1}, X_0, X_1 compose the 1×3 pixel block in the same row as point C. The motion coefficient M is then obtained from P, Q and R (the combining formula appears only as image FDA0000101920560000022 in the original document);
Step (3). Compare the motion coefficient M obtained in step (2) with preset motion threshold 1: if M is less than threshold 1, set the low-speed-motion confidence to 1 and execute step (4); otherwise set it to 0 and execute step (6);
Step (4). Using a pixel-block matching search, form a 1×3 search block from point C and the two points to its left and right in the same row. Compute the sum of absolute differences (SAD) between this search block at the same position in the previous field and nine candidate blocks in the following field: the left, center and right blocks in the row two lines above, in C's own row, and in the row two lines below. Compare the result with motion threshold 0: if the result is greater than threshold 0, set the slight-motion confidence to 1, set the motion vector of point C to the best-matching block vector of the following field minus the previous-field search-block vector, and execute step (5); otherwise set the slight-motion confidence to 0 and execute step (8), where the best-matching block of the following field is the search block corresponding to the computed result;
Step (5). Interpolate the point C under small-amplitude motion using Wiener median filtering, then execute step (10);
The specific interpolation steps for point C are:
a. Take the three points A, B and A' on the motion direction through the position of point C in field n, A and B being the two points before C on the motion direction and A' the one point after it;
b. Apply Wiener filtering to point A, point B and the point C to be processed:
F_wiener = wiener(A, B, C)
where
\begin{pmatrix} r_x(0) & r_x^*(1) & r_x^*(2) \\ r_x(1) & r_x(0) & r_x^*(1) \\ r_x(2) & r_x(1) & r_x(0) \end{pmatrix} \begin{pmatrix} w(0) \\ w(1) \\ w(2) \end{pmatrix} = \begin{pmatrix} r_x(1) \\ r_x(2) \\ r_x(3) \end{pmatrix}
F_wiener = w(0) \cdot C + w(1) \cdot B + w(2) \cdot A
where F_wiener is the result of the Wiener filtering, r_x and r_x^* are respectively the autocorrelation and conjugate autocorrelation coefficients of the sequence [A, B, C, A'], and w(0), w(1), w(2) are the weight coefficients of points C, B and A respectively;
c. Take the two points D and E adjacent to C in the direction perpendicular to the motion direction and compute the interpolation result:
S_low(i, j) = median(F_wiener, E, D)
where S_low(i, j) is the median of F_wiener, E and D;
Step (6). Compare the motion coefficient M obtained in step (2) with preset motion threshold 2: if M is greater than threshold 2, set the large-amplitude-motion confidence to 1 and execute step (7); otherwise set it to 0 and execute step (9);
Step (7). Interpolate the point C using triple median filtering, then execute step (10);
The specific steps of the triple-median-filtering interpolation of point C are:
d. Let the coordinates of point C be (j, i). Within a 3×3 pixel block, apply median filtering to the three points of row j−1:
F_1 = median(S(i−1, j−1), S(i, j−1), S(i+1, j−1))
where F_1 is the median of S(i−1, j−1), S(i, j−1) and S(i+1, j−1);
Apply median filtering to the three points of row j+1:
F_2 = median(S(i−1, j+1), S(i, j+1), S(i+1, j+1))
where F_2 is the median of S(i−1, j+1), S(i, j+1) and S(i+1, j+1);
e. Apply median filtering to S(i−1, j), F_1 and F_2 to obtain the interpolation result S_high(i, j) of a non-edge point C under large-amplitude motion:
S_high(i, j) = median(S(i−1, j), F_1, F_2);
Step (8). Interpolate the point C using the inter-field field-insertion method, then execute step (10);
Step (9). For the point C, take the intra-field and inter-field interpolation results obtained by steps (7) and (8) respectively, compute the spatio-temporal weight, and merge the two interpolation results using that weight;
The spatio-temporal weight α is:
\alpha = \left[ \cos\left( \pi \cdot \frac{M - T_1}{T_2 - T_1} \right) + 1 \right] / 2
where T_1 is motion threshold 1 and T_2 is motion threshold 2, and
Y_{intp} = \alpha \cdot Y_{inter} + (1 - \alpha) \cdot Y_{intra}
where Y_inter is the inter-field interpolation result, Y_intra is the intra-field interpolation result, and Y_intp is the final interpolation result;
Step (10). Traverse the input video image of step (1) and determine whether points remain to be processed: if so, return to step (1) to process the next point; if not, finish.
CN2011103285076A 2011-10-25 2011-10-25 Motion-classification-based adaptive de-interlacing method Pending CN102364933A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2011103285076A CN102364933A (en) 2011-10-25 2011-10-25 Motion-classification-based adaptive de-interlacing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2011103285076A CN102364933A (en) 2011-10-25 2011-10-25 Motion-classification-based adaptive de-interlacing method

Publications (1)

Publication Number Publication Date
CN102364933A true CN102364933A (en) 2012-02-29

Family

ID=45691484

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2011103285076A Pending CN102364933A (en) 2011-10-25 2011-10-25 Motion-classification-based adaptive de-interlacing method

Country Status (1)

Country Link
CN (1) CN102364933A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1315806A (en) * 2000-03-31 2001-10-03 松下电器产业株式会社 Equipment and method for covering interpolation fault in alternate-line scanning to line-by-line scanning converter
CN1581931A (en) * 2003-08-12 2005-02-16 三星电子株式会社 De-interlacing algorithm responsive to edge pattern
CN1941886A (en) * 2005-09-28 2007-04-04 扬智科技股份有限公司 Adaptive vertical temporal flitering method of de-interlacing
US20080111916A1 (en) * 2006-11-13 2008-05-15 Yu-Chang Wang Image de-interlacing method
US20100039556A1 (en) * 2008-08-12 2010-02-18 The Hong Kong University Of Science And Technology Multi-resolution temporal deinterlacing
US20100149415A1 (en) * 2008-12-12 2010-06-17 Dmitry Znamenskiy System and method for the detection of de-interlacing of scaled video


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LI Jintao, et al.: "Motion adaptive deinterlacing with accurate motion detection and anti-aliasing interpolation filter", IEEE Transactions on Consumer Electronics *
DING Yong, et al.: "Spatio-temporal weighted and edge-adaptive de-interlacing" (时空权重和边缘自适应去隔行), Chinese Journal of Computers (《计算机学报》) *
LIU Xiaodong, et al.: "An adaptive de-interlacing algorithm based on motion classification" (一种基于运动分类的自适应去隔行算法), Chinese Journal of Scientific Instrument (《仪器仪表学报》) *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103037140A (en) * 2012-12-12 2013-04-10 Hangzhou Guoce Shangtu Technology Co., Ltd. Highly robust target tracking algorithm based on block matching
CN103037140B (en) * 2012-12-12 2019-06-28 Hangzhou Guoce Shangtu Technology Co., Ltd. Target tracking algorithm based on block matching
CN104539260A (en) * 2014-12-03 2015-04-22 Guangzhou Yajiang Photoelectric Equipment Co., Ltd. Computing method for vector filtering
CN104539260B (en) * 2014-12-03 2018-03-02 Guangzhou Yajiang Photoelectric Equipment Co., Ltd. Computing method for vector filtering
WO2017107188A1 (en) * 2015-12-25 2017-06-29 中国科学院深圳先进技术研究院 Method and apparatus for rapidly recognizing video classification
CN108282653A (en) * 2018-02-06 2018-07-13 Shanghai Tongtu Semiconductor Technology Co., Ltd. Motion compensation de-interlacing method and system based on motion estimation of bipolar fields
CN108282653B (en) * 2018-02-06 2021-09-03 Shanghai Tongtu Semiconductor Technology Co., Ltd. Motion compensation de-interlacing method and system based on motion estimation of bipolar fields
WO2020119667A1 (en) * 2018-12-10 2020-06-18 Shenzhen ZTE Microelectronics Technology Co., Ltd. Deinterlacing processing method and device, and computer-readable storage medium
US11595613B2 (en) 2018-12-10 2023-02-28 Zte Corporation De-interlacing processing method and device, and computer-readable storage medium
CN113852830A (en) * 2021-09-23 2021-12-28 Hangzhou Nationalchip Science & Technology Co., Ltd. Median filtering video de-interlacing method
CN113852830B (en) * 2021-09-23 2023-12-29 Hangzhou Nationalchip Science & Technology Co., Ltd. Median filtering video de-interlacing method

Similar Documents

Publication Publication Date Title
CN102025960B (en) Motion compensation de-interlacing method based on adaptive interpolation
CN100518243C (en) De-interlacing apparatus using motion detection and adaptive weighted filter
CN100479495C (en) De-interlacing method with the motive detection and self-adaptation weight filtering
US6459455B1 (en) Motion adaptive deinterlacing
KR100902315B1 (en) Apparatus and method for deinterlacing
CN100417188C (en) Module self-adaptive motion compensation
CN102364933A (en) Motion-classification-based adaptive de-interlacing method
US20030086498A1 (en) Apparatus and method of converting frame and/or field rate using adaptive motion compensation
CN106210767A (en) A kind of video frame rate upconversion method and system of Intelligent lifting fluidity of motion
US7787048B1 (en) Motion-adaptive video de-interlacer
CN101483746B (en) Deinterlacing method based on movement detection
US20100177239A1 (en) Method of and apparatus for frame rate conversion
JP2004312680A (en) Motion estimation apparatus and method for detecting scrolling text or graphic data
CN103369208A (en) Self-adaptive de-interlacing method and device
CN101647292A (en) Motion adaptive upsampling of chroma video signals
CN101510985B (en) Self-adapting de-interleave method for movement compensation accessory movement
CN106303338B (en) A kind of in-field deinterlacing method based on the multi-direction interpolation of bilateral filtering
CN101340539A (en) Deinterlacing video processing method and system by moving vector and image edge detection
CN111294545B (en) Image data interpolation method and device, storage medium and terminal
CN108282653A (en) The motion compensation deinterlacing method and system of estimation based on bipolarity field
EP1691545B1 (en) Apparatus for interpolating scanning lines
EP1334613B1 (en) Motion compensated upconversion for video scan rate conversion
CN102497492B (en) Detection method for subtitle moving in screen
KR100587263B1 (en) Method for frame rate conversing of video signal
US20120082394A1 (en) Image processing apparatus and image processing method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Application publication date: 2012-02-29