CN102572222A - Image processing apparatus and method, and image display apparatus and method - Google Patents


Info

Publication number
CN102572222A
Authority
CN
China
Prior art keywords
pixel
frame
video signal
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2011103594169A
Other languages
Chinese (zh)
Other versions
CN102572222B (en)
Inventor
藤山直之
小野良树
久保俊明
堀部知笃
那须督
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mitsubishi Electric Corp
Original Assignee
Mitsubishi Electric Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mitsubishi Electric Corp filed Critical Mitsubishi Electric Corp
Publication of CN102572222A
Application granted
Publication of CN102572222B
Legal status: Expired - Fee Related
Anticipated expiration

Landscapes

  • Television Systems (AREA)
  • Picture Signal Circuits (AREA)

Abstract

The present invention provides an image processing apparatus and method, and an image display apparatus and method, capable of reducing the motion-blur amplitude contained in an input video signal and thereby further improving the image quality of frame-interpolated moving images. As means of resolution, the direction and size of the motion blur are estimated from the motion vector detected by a motion vector detection unit that detects the motion vector of the video signal; the video signal (D2) is filtered (34) using filter coefficients corresponding to the estimated direction and size of the motion blur; a gain (39) is obtained from the filtering result (FL1(i, j)); correction is performed by multiplying the video signal (D2(i, j)) by the obtained gain; and frame interpolation is performed using the corrected video signal (E(i, j)).

Description

Image processing apparatus and method and image display device and method
Technical field
The present invention relates to an image processing apparatus and method, and to an image display apparatus and method. The invention particularly relates to frame interpolation processing that inserts a newly interpolated frame between the frames of an image sequence.
Background art
Hold-type displays such as liquid crystal displays continue to show the same image for one frame period, whereas the human eye tracks a moving object continuously. When an object moves within the image, its movement appears as discontinuous jumps of one frame at a time, so there is a problem that the edge portions of the object appear blurred.
In addition, for material such as film in which a film video is converted into a television signal, the difference in frame rate between the two (the film video and the television signal) means that two or three frames of the picture signal are produced from the same film frame. If such a signal is displayed as-is, motion blur appears, or jerky motion produces a shake known as judder.
Likewise, for material produced by computer processing and converted into a television signal, two frames of the picture signal are similarly produced from the same frame; if displayed as-is, motion blur or judder arises just as in the case above.
To solve these problems, the number of displayed frames can be increased by interpolating frames so that the movement of objects becomes smooth.
As existing image processing apparatus and methods, there are known a zeroth-order hold method, which interpolates the interpolated frame using the same image as the preceding frame, and an averaging interpolation method, which interpolates the interpolated frame using the average of the preceding frame image and the following frame image. With the zeroth-order hold method, however, an object moving in a fixed direction does not move smoothly, so the blurring problem of hold-type displays remains unsolved. The averaging interpolation method has the problem that moving images become ghosted.
As an improvement, it is known to generate each interpolating pixel of the interpolated frame from the pair of pixels, one in the temporally preceding frame and one in the temporally following frame, that lie at point-symmetric positions with respect to the interpolating pixel and have the greatest correlation (see, for example, Patent Document 1). With this method, when an object moves from the preceding frame to the following frame, the interpolated frame generated between them shows the object at a position midway between its positions in the preceding and following frames, so a smooth video stream can be displayed. However, even when frame interpolation is performed appropriately, for example doubling the frame rate, the blur caused by motion (hereinafter also called motion blur) contained in the frames of a moving object cannot be reduced; the blur produced by motion during shooting remains, and only a blurred video stream can be displayed.
On the other hand, Patent Document 2 discloses a technique that, for frames containing a moving object degraded by motion blur, corrects the blurred regions by means of motion vector detection and deconvolution using a blur function.
The video signal received by a display apparatus is obtained by quantizing, frame by frame, the total amount of light that the light-receiving portion of a camera receives from a subject during an accumulation time (for example, 1/60 second), and is arranged and transmitted in a pixel order determined by the applicable specification. When the light-receiving portion of the camera and the subject move relative to each other, blur determined by the accumulation time and by the relative velocity of the camera and subject is produced at the contour portions of the subject in each frame. The technique of Patent Document 2 applies a mathematical model to the image and performs filtering with the inverse of the blur function contained in the model. However, as stated above, hold-type displays such as LCDs continue to show the same image for one frame period; when an object moves within the image, its movement consists of discontinuous jumps of one frame at a time, so the problem that edge portions appear blurred still remains.
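The blur model described above can be illustrated with a short sketch (an illustration only, not the patent's implementation): uniform relative motion during the accumulation time acts as a box-shaped blur function along the motion direction, and it is the inverse of such a function that a deconvolution approach like that of Patent Document 2 would apply.

```python
import numpy as np

def motion_blur_kernel(blur_len):
    """Box blur along the motion direction: a subject moving at constant
    speed during the accumulation time smears each point uniformly over
    blur_len pixels. blur_len is a hypothetical illustrative parameter."""
    return np.full(blur_len, 1.0 / blur_len)

# A sharp edge profile acquires a ramp of width blur_len when blurred,
# which is why contour portions of a moving subject appear blurred.
edge = np.array([0.0] * 8 + [1.0] * 8)
blurred = np.convolve(edge, motion_blur_kernel(4), mode="same")
```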
[Patent Document 1] Japanese Patent Application Laid-Open No. 2006-129181 (page 8, Fig. 3)
[Patent Document 2] Japanese Patent No. 3251127
Existing frame interpolation processing is configured as described above: even when the interpolated frames are generated appropriately, the blur caused by the motion of objects moving in the video cannot be alleviated. Conversely, even when the motion blur of a moving object is alleviated by inverse filtering based on deconvolution, the object still moves discontinuously in steps of one frame, so its edge portions appear blurred.
Summary of the invention
An image processing apparatus according to one aspect of the present invention is characterized in that it has: a motion vector detection unit that detects the motion vector of a 1st video signal input from outside, based on the 1st video signal and a 2nd video signal input from outside, the 2nd video signal being one or more frames before or one or more frames after the 1st video signal in time; and an image correction unit that corrects the motion blur in the 1st video signal using the motion vector detected by the motion vector detection unit. The image correction unit has: a motion blur estimation unit that estimates the direction and size of the motion blur from the motion vector; a filtering unit that filters the 1st video signal using predetermined filter coefficients corresponding to the estimated direction and size of the motion blur; and a correction intensity adjustment unit that adjusts the correction intensity for a concerned pixel according to the degree of variation of the pixel values near the concerned pixel. The filtering unit performs amplitude limiting on each pixel value in the peripheral region of the concerned pixel so that the absolute value of the difference between the pixel value of the concerned pixel and each pixel value in its peripheral region does not exceed a predetermined threshold, and applies low-pass filtering to the pixels in the peripheral region using the amplitude-limited pixel values.
An image processing method according to another aspect of the present invention is characterized in that it comprises: a motion vector detection step of detecting the motion vector of a 1st video signal input from outside, based on the 1st video signal and a 2nd video signal input from outside, the 2nd video signal being one or more frames before or one or more frames after the 1st video signal in time; and an image correction step of correcting the motion blur in the 1st video signal using the motion vector detected in the motion vector detection step. The image correction step comprises: a motion blur estimation step of estimating the direction and size of the motion blur from the motion vector; a filtering step of filtering the 1st video signal using predetermined filter coefficients corresponding to the estimated direction and size of the motion blur; and a correction intensity adjustment step of adjusting the correction intensity for a concerned pixel according to the degree of variation of the pixel values near the concerned pixel. In the filtering step, amplitude limiting is performed on each pixel value in the peripheral region of the concerned pixel so that the absolute value of the difference between the pixel value of the concerned pixel and each pixel value in its peripheral region does not exceed a predetermined threshold, and low-pass filtering is applied to the pixels in the peripheral region using the amplitude-limited pixel values.
According to the present invention, the motion blur of the current frame is corrected with reference to the motion vector, and a frame between two temporally consecutive motion-blur-corrected frames is generated by interpolation with reference to those frames and the motion vector; the image quality of moving-image display is therefore improved.
In addition, the same effect could be obtained by providing separate processing for motion blur correction and for generating the intermediate frame by interpolation; compared with that configuration, the present invention has the following effects.
(1) Because the motion vector detection result is used in both processes, motion vector detection becomes a shared circuit (common processing), so the circuit scale (the processing amount and the frame memory amount for storing a motion vector for each pixel) can be reduced.
(2) The motion vector detection and frame interpolation processing steps each need to hold at least two frame memories; by sharing the frame memories for these images, the required memory capacity can be reduced.
Description of drawings
Fig. 1 is a block diagram showing an image display apparatus according to a 1st embodiment of the present invention.
Fig. 2 is a block diagram showing a structure example of the picture delay unit 4 of Fig. 1.
Fig. 3 is a block diagram showing a structure example of the motion vector detection unit 5 of Fig. 1.
Fig. 4 (a) and Fig. 4 (b) are diagrams showing an example of the motion vector search range in the video signals of 2 consecutive frames.
Fig. 5 is a block diagram showing a structure example of the image correction unit 6 of Fig. 1.
Fig. 6 is a diagram showing the relation between the frame period and the shooting period.
Fig. 7 is a diagram showing an example of a filter area EFA effective against motion blur.
Fig. 8 is a diagram showing another example of a filter area EFA effective against motion blur.
Fig. 9 is a diagram showing a further example of a filter area EFA effective against motion blur.
Fig. 10 is a diagram showing an example of the relation between the difference between a pixel value and the mean value, and the adjusted correction intensity parameter.
Fig. 11 (a) to Fig. 11 (j) are timing charts showing the signal timing of each part of the structures of Fig. 1 and Fig. 5.
Fig. 12 is a diagram showing each component of a motion vector.
Fig. 13 (a) and Fig. 13 (b) are diagrams showing an example of the motion vectors and motion blur of 2 frames.
Fig. 14 (a) and Fig. 14 (b) are diagrams showing another example of the motion vectors and motion blur of 2 frames.
Fig. 15 is a diagram showing an example of the direction and size of a motion vector and the pointer (IND) into the filter coefficient table.
Fig. 16 is a graph showing the nonlinear processing based on a threshold.
Fig. 17 is a block diagram showing a structure example of the image correction unit 6 according to a 2nd embodiment of the present invention.
Label declaration
1 image display apparatus; 2 image processing apparatus; 3 image display unit; 4 picture delay unit; 5 motion vector detection unit; 6 image correction unit; 7 frame generation unit; 11 frame memory; 12 frame memory control unit; 21 concerned-frame block extraction unit; 22 following-frame block extraction unit; 23 motion vector determination unit; 24 memory; 25 memory controller; 30 correction processing unit; 31 operation signal processing unit; 32 motion blur estimation unit; 33 filter coefficient storage unit; 34 filtering unit; 35 nonlinear processing unit; 36 low-pass filter; 37 mean value calculation unit; 38 correction intensity adjustment unit; 39 gain calculation unit
Embodiment
The 1st embodiment
Fig. 1 is a block diagram showing the structure of an image display apparatus having an image processing apparatus according to the present invention. The illustrated image display apparatus 1 has an image processing apparatus 2 and an image display unit 3, and the image processing apparatus 2 has a picture delay unit 4, a motion vector detection unit 5, an image correction unit 6 and a frame generation unit 7.
The image processing apparatus 2 receives the input video signal D0 and performs motion blur correction and frame interpolation. The video signal D0 consists of a sequence of signals representing the pixel values of the pixels composing the image; the image processing apparatus 2 performs the motion blur correction processing taking the pixels in turn as the pixel to be corrected (the concerned pixel), generates by interpolation a frame HF between the motion-blur-corrected video signals E1 and E2 (each consisting of a sequence of signals holding corrected pixel values), and outputs an interpolated video signal (picture signal sequence) HV comprising the video signals E1, E2 and HF.
The video signal D0 input to the image processing apparatus 2 is supplied to the picture delay unit 4. The picture delay unit 4 uses a frame memory to delay the input video signal D0 by frames, and outputs the video signals D2 and D1 of 2 mutually different frames to the motion vector detection unit 5.
The motion vector detection unit 5 uses the video signals D2 and D1 of the 2 different frames output from the picture delay unit 4 to detect the motion vector V contained in the video signal D2, and outputs the motion vector V to the image correction unit 6.
In addition, the motion vector V is delayed by 1 frame period and output to the frame generation unit 7 as the motion vector Vd.
The image correction unit 6 receives the motion vector V output from the motion vector detection unit 5, corrects the motion blur, caused by the motion of the subject and/or the motion of the camera, that degrades the video in the video signal D2 output from the picture delay unit 4, and outputs the corrected video signal E to the picture delay unit 4; the picture delay unit 4 delays the corrected video signal E by frames and outputs the video signals E1 and E2 of 2 mutually different frames.
The frame generation unit 7 uses the video signals E1 and E2 of the 2 different frames output from the picture delay unit 4 and the motion vector Vd input from the motion vector detection unit 5 to interpolate a frame between the video signals E1 and E2, and outputs to the image display unit 3 an interpolated video signal HV comprising the interpolated-frame video signal HF. In the interpolated video signal HV, the frame generation unit 7 outputs the video signal E1, the interpolated-frame video signal HF and the video signal E2 in that order.
The image display unit 3 displays an image based on the video signal HV, which has undergone both motion blur correction and frame interpolation. By having the user input an adjustment parameter PR, the degree of motion blur correction and/or the corrected image quality can be adjusted.
In the following explanation, the picture size is taken to be M pixels in the vertical direction and N pixels in the horizontal direction. The variables i and j are defined as 1 ≤ i ≤ M and 1 ≤ j ≤ N respectively; the coordinates of a pixel position are denoted (i, j), and the pixel at the position with these coordinates is denoted P(i, j). That is, the variable i indicates the vertical position and the variable j indicates the horizontal position. At the pixel position in the upper left corner of the image, i = 1 and j = 1; i increases by 1 for each pixel pitch downward, and j increases by 1 for each pixel pitch to the right.
Fig. 2 shows a structure example of the picture delay unit 4. The illustrated picture delay unit 4 has a frame memory 11 and a frame memory control unit 12. The frame memory 11 has a capacity sufficient to store at least 2 frames of the input video signal D0 and 2 frames of the corrected video signal E. Alternatively, it may be configured to store only 1 frame each of the video signals D0 and E instead of 2 frames each.
The frame memory control unit 12 writes the input video signal and reads the accumulated video signal D0 according to memory addresses generated from the synchronizing signal contained in the input video signal D0, and generates the video signals D1 and D2 of 2 consecutive frames.
The video signal D1 is not delayed with respect to the input video signal D0 and is also called the current frame video signal.
The video signal D2 is obtained by delaying the video signal D1 by 1 frame; it is the signal 1 frame period earlier in time, and is also called the 1-frame-delayed video signal.
In the following explanation, the video signal D2 is treated as the object of processing, so the video signal D2 is sometimes called the concerned frame video signal and the video signal D1 the following frame video signal. As stated above, the video signals D1 and D2 consist of sequences of signals for the pixels composing the image; the pixel values of the pixel P(i, j) at the position with coordinates (i, j) are written D1(i, j) and D2(i, j).
The frame memory control unit 12 also writes and reads the accumulated corrected video signal E, outputting the video signal E1 in the same frame period as the video signal E and outputting the video signal E2 1 frame period later.
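As a rough sketch (class and method names are hypothetical), the picture delay unit's pairing of the current frame D1 with the 1-frame-delayed frame D2 amounts to a one-frame FIFO:

```python
class PictureDelay:
    """One-frame delay sketch: each incoming frame is returned as D1
    together with the previously stored frame as D2 (None until the
    second frame arrives). Mirrors the D1/D2 pairing described above."""
    def __init__(self):
        self._prev = None

    def push(self, frame):
        d1, d2 = frame, self._prev
        self._prev = frame   # keep the current frame for the next call
        return d1, d2

delay = PictureDelay()
# Frame sequence F0, F1, F2 yields the pairs (F1, F0), (F2, F1), ...
pairs = [delay.push(f) for f in ("F0", "F1", "F2")]
```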
Fig. 3 shows a structure example of the motion vector detection unit 5. The motion vector detection unit 5 detects the motion vector of a 1st video signal (D2) based on the 1st video signal input from outside and a 2nd video signal (D1) input from outside that is 1 or more frames before or 1 or more frames after the 1st video signal in time. The illustrated motion vector detection unit 5 has a concerned-frame block extraction unit 21, a following-frame block extraction unit 22, a motion vector determination unit 23, a memory 24 and a memory controller 25.
As shown in Fig. 4 (a), the concerned-frame block extraction unit 21 cuts out from the concerned frame video signal D2 output from the picture delay unit 4 a rectangular area (block) D2B(i, j) of height (vertical size) 2*BM+1 and width (horizontal size) 2*BN+1 centered on the neighborhood of the concerned pixel P(i, j), for example on the concerned pixel itself. The motion vector determination unit 23 estimates to which area of the following frame video signal D1 this rectangular area D2B(i, j) has moved, and outputs the relative position of the estimated area with respect to the rectangular area D2B(i, j) as the motion vector V of the concerned pixel P(i, j) (sometimes written V(i, j) to distinguish it from the motion vectors of other pixels).
The following-frame block extraction unit 22 cuts out from the video signal D1 input from the picture delay unit 4, for each position (i+k, j+l) contained in the coordinate set defined for each concerned pixel P(i, j) by
S(i,j) = {(i+k, j+l)}    (1)
(where -SV ≤ k ≤ SV, -SH ≤ l ≤ SH, and SV, SH are set values), a rectangular area D1B(i+k, j+l) of the same size as the rectangular area D2B(i, j), centered on that position (Fig. 4 (b)). The set S(i, j) is also called the search range of the motion vector of the concerned pixel P(i, j). The search range defined above is a rectangular area of 2*SH+1 pixels horizontally and 2*SV+1 pixels vertically.
The motion vector determination unit 23 obtains, between the rectangular area D2B(i, j) input from the concerned-frame block extraction unit 21 and each block D1B(i+k, j+l) input from the following-frame block extraction unit 22, the sum SAD(i+k, j+l) of the absolute values of the differences between the pixels at the (2*BM+1)*(2*BN+1) mutually corresponding positions of the two blocks. This sum of absolute differences SAD(i+k, j+l) is expressed by the following formula (2).
[Numerical expression 1]
SAD(i+k, j+l) = Σ_{r=-BM}^{BM} Σ_{s=-BN}^{BN} |D1(i+k+r, j+l+s) - D2(i+r, j+s)|    (2)
As above, the (2*SV+1)*(2*SH+1) sums of absolute differences SAD(i+k, j+l) are obtained for the (2*SV+1)*(2*SH+1) rectangular areas D1B(i+k, j+l); the rectangular area D1B(i+km, j+lm) yielding the minimum sum of absolute differences is identified, and the relative position (km, lm) of this rectangular area with respect to the rectangular area D2B(i, j) is output to the image correction unit 6 as the motion vector V = (Vy, Vx) = (km, lm).
In addition, since a frame is inserted between 2 consecutive frames, the motion vector V is accumulated in the memory 24 to delay it, and is output through the memory controller 25 to the frame generation unit 7 as the motion vector Vd.
Motion vector detection as described above is performed for all pixels of the video signal D2 output from the picture delay unit 4; the motion vector obtained for each pixel is used both to alleviate motion blur and to interpolate a frame between the 2 consecutive frames.
During motion vector detection by the motion vector detection unit 5, pixels outside the upper, lower, left and right edges of the image may become part of the rectangular areas D2B(i, j) and D1B(i+k, j+l); when these pixel values are needed, the pixels outside the upper, lower, left and right edges can, for example, be treated as having the same values as the pixels at the upper, lower, left and right edges respectively. The same applies to the computations of the filtering unit 34, the mean value calculation unit 37 and so on described later.
The motion vector detection method of the motion vector detection unit 5 is not limited to the above. It is also possible to use, in addition to the video signal of the concerned frame and that of the frame after it, the video signal of the frame before the concerned frame; or to use the video signal of the concerned frame together with that of the frame before it, without using the frame after it; or to use the video signal of the concerned frame and that of the frame after it and obtain the motion vector by means of a phase correlation function. Moreover, the time interval between the concerned frame and the frames before and after it is not limited to 1 frame period and may be 2 or more frame periods.
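The full-search block matching of formula (2) can be sketched as follows (a simplified illustration, not the patent's circuit; the function and parameter names are assumptions, and edge pixels are repeated outside the image as the text specifies):

```python
import numpy as np

def detect_motion_vector(d2, d1, i, j, bm=2, bn=2, sv=4, sh=4):
    """Full-search block matching: for the (2*bm+1) x (2*bn+1) block of the
    concerned frame d2 centered on (i, j), find the displacement (km, lm)
    within the +/-sv, +/-sh search range that minimizes the SAD of
    formula (2) against the following frame d1."""
    h, w = d2.shape

    def block(img, ci, cj):
        # Repeat edge pixels outside the image boundary.
        rows = np.clip(np.arange(ci - bm, ci + bm + 1), 0, h - 1)
        cols = np.clip(np.arange(cj - bn, cj + bn + 1), 0, w - 1)
        return img[np.ix_(rows, cols)].astype(int)

    ref = block(d2, i, j)
    best_sad, best_v = None, (0, 0)
    for k in range(-sv, sv + 1):
        for l in range(-sh, sh + 1):
            sad = np.abs(block(d1, i + k, j + l) - ref).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best_v = sad, (k, l)
    return best_v   # (km, lm), i.e. V = (Vy, Vx)
```

For example, a bright square that moves down by 1 pixel and right by 2 pixels between d2 and d1 yields the motion vector (1, 2).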
Fig. 5 shows a structure example of the image correction unit 6. The illustrated image correction unit 6 has a correction processing unit 30, an operation signal processing unit 31, a motion blur estimation unit 32, a filter coefficient storage unit 33, a filtering unit 34, a mean value calculation unit 37, a correction intensity adjustment unit 38 and a gain calculation unit 39.
The correction processing unit 30 receives the video signal D2, corrects each pixel by the gain GAIN described later, and outputs the corrected video signal E to the picture delay unit 4.
The operation signal processing unit 31 analyzes a signal PR input by the user through an interface (not shown) and outputs the parameters obtained as the analysis result.
The parameters output from the operation signal processing unit 31 include an adjustment parameter ADJ, a correction intensity parameter BST0, and thresholds TH1 and TH2.
The adjustment parameter ADJ is used to compute the amount of motion blur from the motion vector, and is supplied to the motion blur estimation unit 32.
The threshold TH1 is used to adjust the characteristics of the filtering unit 34, and is supplied to the filtering unit 34.
The correction intensity parameter BST0 is used to determine the correction intensity, and the threshold TH2 is used to discriminate a characteristic of the image, for example its flatness; both are supplied to the correction intensity adjustment unit 38.
The motion blur estimation unit 32 receives the motion vector V detected by the motion vector detection unit 5 (vertical component Vy (= km), horizontal component Vx (= lm)) and calculates its components in polar coordinates (size and angle). Specifically, taking the rightward horizontal direction of the motion vector as 0 degrees, the direction A (degrees) and size LM (pixels) of the motion vector are calculated by the following formulas.
[Numerical expression 2]
A = (arctan(Vy/Vx)) * 180/π    (3)
LM = sqrt(Vy^2 + Vx^2)    (4)
The motion blur estimation unit 32 also obtains the angle and size (the blur amplitude in the direction of movement) of the motion blur corresponding to the motion vector. For example, the angle of the motion blur is the same as the angle A of the motion vector, and the size LB of the motion blur is equal to the size LM of the motion vector multiplied by the adjustment parameter ADJ (0 < ADJ ≤ 1); the size LB of the motion blur can be obtained from the following formula (5).
LB = LM * ADJ    (5)
As shown in Fig. 6, the adjustment parameter ADJ has a value corresponding to the ratio (Ts/Tf) of the length Ts of the shooting period, for example the charge accumulation time, to the length Tf of the frame period. It may change according to the actual shooting period length Ts of each frame, or it may be determined as a representative value, mean value or median of the shooting period under the conditions to which the present invention is applied. For example, when the median is used, if the shooting period lies in the range of EXS times to EXL times the frame period (EXS, EXL being less than 1), then the median (EXS + EXL)/2 is determined as ADJ.
The reason for multiplying by the adjustment parameter ADJ in this way is that the motion vector V is detected between frames and therefore reflects the amount of motion per frame period, whereas the motion blur is caused by the motion of the subject during the shooting period.
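Formulas (3) to (5) can be sketched as follows (an illustration only: atan2 is used in place of arctan(Vy/Vx) so that Vx = 0 and the quadrant are handled, and the default ADJ value is an assumed example, not a value from the patent):

```python
import math

def estimate_blur(vy, vx, adj=0.5):
    """Polar decomposition of the motion vector V = (Vy, Vx) and the blur
    size of formula (5). adj plays the role of ADJ ~ Ts/Tf; 0.5 here is
    only an illustrative value."""
    a = math.degrees(math.atan2(vy, vx))   # direction A (degrees), 0 = rightward
    lm = math.hypot(vy, vx)                # size LM (pixels), formula (4)
    lb = lm * adj                          # blur size LB, formula (5)
    return a, lm, lb
```

For example, V = (3, 4) gives LM = 5 pixels, so with ADJ = 0.5 the estimated blur amplitude LB is 2.5 pixels along the direction A.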
Filter factor preservation portion 33 is associated with the direction of a plurality of motion blurs a plurality of LPF coefficients (two-dimentional FIR filter factor) with sheet form in advance and stores with big or small combination.This filter factor is used for reducing in interior vision signal from the motion blur that comprises specific direction and size the composition of motion blur.
To read out from the table the filter coefficients corresponding to the combination of the direction A and size LB of the motion blur calculated as described above, the motion blur estimating section 32 calculates an index IND into the table from the direction A and size LB of the motion blur and inputs it to the filter coefficient storage section 33.
The filter coefficient storage section 33 reads out the filter coefficients CF(p, q) stored in correspondence with the input index IND and outputs them to the filtering section 34.
Through this processing, the motion blur estimating section 32 selects, from among the filter coefficients held in the filter coefficient storage section 33, the filter coefficients CF(p, q) corresponding to the combination of the estimated direction A and size LB of the motion blur.
The filtering section 34 performs filtering using the filter coefficients CF(p, q) selected by the motion blur estimating section 32. Specifically, the filtering section 34 has a nonlinear processing section 35 and a low-pass filter 36; using the filter coefficients CF(p, q) (where -P ≤ p ≤ P, -Q ≤ q ≤ Q) read out from the filter coefficient storage section 33 as described above, it filters the pixel values of the pixels in the peripheral region of each pixel of interest P(i, j) of the video signal D2 and outputs the filtering result FL1(i, j).
The nonlinear processing section 35 performs the nonlinear processing shown in the following formulas (6a)-(6f), according to the threshold TH1 input from the operation signal processing section 31 and the difference between the pixel value D2(i, j) of the pixel of interest and the pixel value D2(i-p, j-q) of each pixel in its peripheral region.
(A) When D2(i-p,j-q)-D2(i,j) > TH1,
D2b(i-p,j-q)-D2(i,j)=TH1 (6a),
that is, D2b(i-p, j-q) is determined by
D2b(i-p,j-q)=D2(i,j)+TH1 (6b);
(B) when D2(i-p,j-q)-D2(i,j) < -TH1,
D2b(i-p,j-q)-D2(i,j)=-TH1 (6c),
that is, D2b(i-p, j-q) is determined by
D2b(i-p,j-q)=D2(i,j)-TH1 (6d);
(C) in cases other than (A) and (B) above,
D2b(i-p,j-q)-D2(i,j)=D2(i-p,j-q)-D2(i,j) (6e),
that is, D2b(i-p, j-q) is determined by
D2b(i-p,j-q)=D2(i-p,j-q) (6f).
The low-pass filter 36, in the peripheral region of each pixel of interest P(i, j), that is, the range consisting of (2P+1)×(2Q+1) pixels, multiplies the values D2b(i-p, j-q) obtained as the result of the above nonlinear processing by the corresponding filter coefficients CF(p, q), and obtains the sum of the products as the filtering result FL1(i, j).
The filter coefficients CF(p, q) used in the low-pass filter 36 are explained below.
The filter coefficients are defined for the pixels in the region -P ≤ p ≤ P, -Q ≤ q ≤ Q centered on the pixel of interest.
As stated above, the filter coefficients CF(p, q) are determined according to the angle A and size LB of the motion blur.
Figs. 7-9 show, for several examples of motion blur, the region within the area over which the filter coefficients are defined in which the filter coefficients take values other than 0. Below, the region in which the filter coefficients take values other than 0 is called the effective filter area EFA. The sum of the filter coefficients at the pixel positions in the effective filter area EFA is 1.
A band-shaped region corresponding to the size LB of the motion blur and its angle A is regarded as the effective filter area EFA. Each pixel wholly or partly contained in the effective filter area EFA is then given a weight coefficient corresponding to the proportion of that pixel contained in the effective filter area EFA. For example, a pixel partly contained in the effective filter area EFA is given a smaller weight coefficient than a pixel wholly contained in it; the weight coefficient of each pixel has a value proportional to the proportion of that pixel contained in the effective filter area EFA.
The band-shaped region extends in the direction of the motion blur; its length is a predetermined multiple of the size LB of the motion blur, for example 2 times, extending forward and backward beyond the start and end of the motion blur by fixed amounts, for example by 0.5 times the size LB of the motion blur at each end. The width of the band-shaped region corresponds to the size of one pixel. The examples shown in Figs. 7-9 assume that the size of one pixel is the same in the horizontal and vertical directions. In Figs. 7-9, the starting point of the motion blur is at the position of coordinates (i, j).
In the example shown in Fig. 7, the size LB of the motion blur is 4 pixels, oriented horizontally to the right. In this case, the motion blur is observed to extend from the center of the starting-point pixel Ps of the motion blur (the pixel at coordinates (i, j)) to the center of the end-point pixel Pe (the pixel at coordinates (i, j+4)), and lengths of 2 pixels (0.5 × 4 pixels) are added before and after it. That is, the effective range extends from the center of the pixel at coordinates (i, j-2), the position 2 pixels behind (to the left in Fig. 7) the center of the starting-point pixel Ps, to the center of the pixel at coordinates (i, j+6), the position 2 pixels ahead of (to the right in Fig. 7) the center of the end-point pixel Pe. These pixels are given weight coefficients corresponding to their proportions contained in the effective filter area EFA. That is, the pixels from coordinates (i, j-1) to (i, j+5) are given coefficients of equal value, and since only half of each of the pixels at coordinates (i, j-2) and (i, j+6) is contained in the effective filter area EFA, they are given coefficients of 1/2 the value of the coefficients of the other pixels (those from (i, j-1) to (i, j+5)).
In the example of Fig. 7, the number of pixels only half contained in the effective filter area EFA is 2 and the number of pixels wholly contained in the effective filter area EFA is 6, so a weight coefficient of 1/7 is given to the pixels wholly contained in the effective filter area EFA and a weight coefficient of 1/14 to the pixels only half contained in it.
In the example shown in Fig. 8, the size LB of the motion blur is 3 pixels, oriented horizontally to the right. In this case, the motion blur is observed to extend from the center of the starting-point pixel Ps of the motion blur (the pixel at coordinates (i, j)) to the center of the end-point pixel Pe (the pixel at coordinates (i, j+3)), and lengths of 1.5 pixels (0.5 × 3 pixels) are added before and after it. That is, the effective range extends from the left end of the pixel at coordinates (i, j-1), the position 1.5 pixels behind (to the left in Fig. 8) the center of the starting-point pixel Ps, to the right end of the pixel at coordinates (i, j+4), the position 1.5 pixels ahead of (to the right in Fig. 8) the center of the end-point pixel Pe. Thus, in the example of Fig. 8, there are no pixels only partly contained in the effective filter area EFA, and the number of pixels wholly contained in the effective filter area EFA is 6, so the coefficients of these pixels are each determined as 1/6.
In the example shown in Fig. 9, the size LB of the motion blur is 3 pixels and, as in the case of Fig. 8, the length and width of the effective filter area EFA are the same as in Fig. 8; however, the angle of the motion blur is 30 degrees, with the result that more pixels are only partly contained in the effective filter area EFA. Specifically, the pixels at coordinates (i-3, j+4), (i-2, j+2), (i-2, j+3), (i-2, j+4), (i-1, j), (i-1, j+1), (i-1, j+2), (i-1, j+3), (i, j-1), (i, j), (i, j+1), (i, j+2), (i+1, j-1), and (i+1, j) are each partly contained in the effective filter area EFA. These 14 pixels are therefore given weight coefficients according to their proportions contained in the effective filter area EFA.
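The coverage-proportional weighting described above can be sketched for the simple case of a purely horizontal band (the function name and the 1-D simplification are mine; the patent's figures also cover the angled case):

```python
import math

def band_weights(start, end):
    """Weight coefficients for pixels overlapping a horizontal band
    [start, end], in pixel-center coordinates: the pixel at integer
    position c spans [c - 0.5, c + 0.5], its raw weight is the
    length of its overlap with the band, and the weights are
    normalized to sum to 1 (as the text requires for the EFA)."""
    weights = {}
    for c in range(math.floor(start - 0.5), math.ceil(end + 0.5) + 1):
        covered = min(end, c + 0.5) - max(start, c - 0.5)
        if covered > 0:
            weights[c] = covered
    total = sum(weights.values())
    return {c: w / total for c, w in weights.items()}

# Fig. 8's case with j = 0: a blur of size 3 extended by 1.5 at each
# end gives the band [-1.5, 4.5]; six pixels are fully covered, 1/6 each.
w = band_weights(-1.5, 4.5)
```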
Weight coefficients are similarly obtained for other values of the size LB and angle A of the motion blur. However, weight coefficients are not obtained for all values that the size LB and angle A can take; rather, weight coefficients are obtained for representative values LR and AR of respective predetermined ranges of size and angle and stored in the filter coefficient storage section 33 as filter coefficients, and for sizes LB and angles A within each range, the filter coefficients obtained and stored for the representative values LR and AR are used. The representative values LR and AR (or values corresponding to them) are used in generating the index IND described later. These points are explained in further detail later.
In the above example, the effective filter area EFA extends forward and backward beyond the start and end of the motion blur by 0.5 times the size LB of the motion blur, but the extension amount may also be a predetermined value unrelated to the size LB of the motion blur; for example, the extension amount may be 0.5 pixels. The extension amount may also be zero.
In addition, while the pixels contained in the effective filter area EFA are weighted according to the proportion contained in the effective filter area EFA, the structure above uses a moving-average filter that applies no weighting according to distance from the pixel of interest; however, the filter may also be configured to apply weighting according to distance from the pixel of interest. A Gaussian filter can be cited as an example of such a filter.
As stated above, the low-pass filter 36 multiplies the values D2b(i-p, j-q), obtained as the result of nonlinear processing of the pixels in the peripheral region of each pixel of interest P(i, j), by the corresponding filter coefficients CF(p, q) read out from the filter coefficient storage section 33, and obtains the sum of the products as the filtering result FL1(i, j). This filtering can be expressed by the following formula.
[Numerical expression 3]
FL1(i, j) = Σ_{q=-Q}^{Q} Σ_{p=-P}^{P} CF(p, q)·D2b(i-p, j-q)    (7)
The filtering result FL1(i, j) of formula (7) is output to the gain calculating section 39.
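A minimal sketch of the combined nonlinear processing (6a)-(6f) and weighted sum (7), with hypothetical array arguments and border handling omitted:

```python
import numpy as np

def filter_with_clipping(d2, cf, i, j, th1):
    """Sketch of the filtering section 34: each neighbor of the
    pixel of interest is clipped so that its difference from
    D2(i, j) stays within +/-TH1 (formulas (6a)-(6f)), and the
    clipped values D2b are combined by the CF-weighted sum of
    formula (7). P and Q are inferred from the shape of cf."""
    P, Q = cf.shape[0] // 2, cf.shape[1] // 2
    center = d2[i, j]
    fl1 = 0.0
    for p in range(-P, P + 1):
        for q in range(-Q, Q + 1):
            diff = d2[i - p, j - q] - center
            d2b = center + np.clip(diff, -th1, th1)  # nonlinear processing
            fl1 += cf[p + P, q + Q] * d2b            # formula (7)
    return fl1

# 1x3 averaging filter, TH1 = 10: the bright neighbor 200 is clipped
# to 110 before averaging, suppressing the would-be overshoot.
d2 = np.array([[100.0, 100.0, 200.0]])
cf = np.array([[1/3, 1/3, 1/3]])
fl1 = filter_with_clipping(d2, cf, 0, 1, th1=10.0)
```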
The mean value calculating section 37 outputs the mean value FL2(i, j) of the pixel values of the pixels in the peripheral region of each pixel of interest P(i, j) of the video signal D2.
The peripheral region here is, for example, the range consisting of (2P+1)×(2Q+1) pixels; the mean value calculating section 37 calculates the mean value FL2(i, j) of the pixel values D2(i-p, j-q) in this range, that is, the value expressed by the following formula (8), and outputs it to the correction strength adjusting section 38.
[Numerical expression 4]
FL2(i, j) = (1/((2P+1)(2Q+1))) Σ_{q=-Q}^{Q} Σ_{p=-P}^{P} D2(i-p, j-q)    (8)
The correction strength adjusting section 38 adjusts the correction strength for the pixel of interest according to the degree and/or magnitude of variation of the pixel values near the pixel of interest, for example the difference between the pixel value D2(i, j) of the pixel of interest and the mean value FL2(i, j) of the pixel values of the pixels in the peripheral region. The correction strength is adjusted through the adjustment of a correction strength parameter BST1(i, j) described below. Specifically, the correction strength adjusting section 38 outputs an adjusted correction strength parameter BST1 based on the correction strength parameter BST0 input from the operation signal processing section 31: when the absolute value of the difference between the pixel value D2(i, j) of the pixel of interest of the video signal D2 input from the image delay section 4 and the mean value FL2(i, j) from the mean value calculating section 37 is less than the threshold TH2 input from the operation signal processing section 31, it generates an adjusted correction strength parameter BST1(i, j) smaller than the correction strength parameter BST0 input from the operation signal processing section 31 and outputs it to the gain calculating section 39. As the adjusted correction strength parameter BST1(i, j), for example, a value given by BST0 × β (β < 1) can be used. The user may be allowed to determine how much smaller the adjusted correction strength parameter BST1(i, j) is made than the correction strength parameter BST0 (for example, the value of β); for example, β = 1/2 or β = 0 may be used.
When the absolute value of the difference between the pixel value D2(i, j) and the mean value FL2(i, j) is not less than the threshold TH2, the correction strength parameter BST0 is output unchanged as the adjusted correction strength parameter BST1(i, j). The relation between the difference (D2(i, j) - FL2(i, j)) and the adjusted correction strength parameter BST1 is therefore as shown in Fig. 10.
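A sketch of this adjustment rule (the names and the β default are illustrative):

```python
def adjust_strength(d2_ij, fl2_ij, bst0, th2, beta=0.5):
    """Sketch of the correction strength adjusting section 38: in a
    flat region, detected by |D2(i,j) - FL2(i,j)| < TH2, the
    strength BST0 is scaled by beta (beta < 1, e.g. 1/2 or 0, per
    the text); otherwise BST0 is passed through unchanged."""
    if abs(d2_ij - fl2_ij) < th2:
        return bst0 * beta  # flat region: weaken correction, limit noise
    return bst0             # strong variation: full strength

flat = adjust_strength(100.0, 102.0, bst0=0.8, th2=8.0)  # flat: 0.8 * 0.5
edge = adjust_strength(100.0, 140.0, bst0=0.8, th2=8.0)  # edge: unchanged
```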
The gain calculating section 39 refers to the filtering result FL1(i, j) obtained from the filtering section 34, the adjusted correction strength parameter BST1(i, j) output from the correction strength adjusting section 38, and the pixel value D2(i, j) of the pixel of interest of the video signal D2 input from the image delay section 4, and calculates the gain GAIN(i, j), the multiplication coefficient used in the correction processing, according to the following formula.
GAIN(i,j)=1+BST1(i,j)-BST1(i,j)*FL1(i,j)/D2(i,j) (9)
Here, when D2(i, j) = 0, the calculation is performed with D2(i, j) = 1 for convenience. In addition, when the result of calculating formula (9) is GAIN < 0, GAIN(i, j) = 0 is used. The gain GAIN(i, j) thus obtained is output to the correction processing section 30.
The correction processing section 30 obtains a pixel value E(i, j) from the pixel value D2(i, j) of the pixel of interest of the video signal D2 input from the image delay section 4 by the calculation based on the following formula, and outputs it to the image delay section 4 as the pixel value of the pixel P(i, j) of the corrected video signal.
E(i,j)=GAIN(i,j)*D2(i,j) (10)
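Formulas (9) and (10), with the two special cases noted above, can be sketched as:

```python
def correct_pixel(d2_ij, fl1_ij, bst1_ij):
    """Sketch of formulas (9) and (10): GAIN = 1 + BST1 - BST1*FL1/D2,
    with D2 treated as 1 when it is 0 and negative gains clipped to 0,
    and the corrected pixel value E = GAIN * D2."""
    d = d2_ij if d2_ij != 0 else 1.0             # D2 = 0 handled as 1
    gain = 1.0 + bst1_ij - bst1_ij * fl1_ij / d  # formula (9)
    gain = max(gain, 0.0)                        # clip GAIN < 0 to 0
    return gain * d2_ij                          # formula (10)

# Local blurred value below the pixel value: gain 1.1, value boosted.
e = correct_pixel(d2_ij=120.0, fl1_ij=100.0, bst1_ij=0.6)
```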
In the present invention, the image delay section 4, the motion vector detecting section 5, and the image correcting section 6 process only the luminance signal (Y), and can thereby correct the motion blur that degrades the video due to the motion of the subject and/or the camera. However, instead of processing the luminance signal (Y), the red (R), green (G), and blue (B) signals may each be processed. Alternatively, the gain GAIN(i, j) of formula (9) may be obtained from a signal representing the sum of R, G, and B, and R, G, and B may each be processed by formula (10) in the image correcting section 6. The luminance signal (Y) and the color difference signals (Cb, Cr) may also each be processed. Alternatively, the gain GAIN(i, j) may be obtained from the luminance signal (Y), and the obtained gain GAIN(i, j) may be used in the calculation of formula (10) to process the luminance signal (Y) and the color difference signals (Cb, Cr). The same processing can also be carried out in other color representations.
In the first embodiment, a technique using a low-pass filter has been described, but solutions to the image restoration problem developed in previous research may also be used.
Next, for the two consecutive corrected images E1 and E2 from the image delay section 4, the motion vector Vd corresponding to the corrected image E2 is input from the motion vector detecting section 5 to the frame generating section 7, which interpolates a frame between E1 and E2. In the first embodiment, frame generation processing that doubles the frame rate of the input video signal is described, but frames can be generated based on the same idea for frame rates of three times or more, or for frame generation with a shifted phase.
E2 is called the corrected image of the frame of interest, and E1 the corrected image of the succeeding frame; when a frame is interpolated between them, the motion vector Vd of the frame of interest is referred to. The motion vector of the interpolated frame (hereafter called the interpolated frame H) can be obtained by referring to the motion vector Vd of the frame of interest. Specifically, when the frame rate is doubled, the destination position in the corrected image of the interpolated frame is obtained using 1/2 of the motion vector of the frame of interest, and 1/2 of the motion vector of the frame of interest is taken as the motion vector at that position. Once the motion vector of the interpolated frame H has been obtained, the correspondence between the corrected image E2 of the frame of interest and the corrected image E1 of the succeeding frame can be obtained, and the interpolated frame H can be generated. The generation of the interpolated frame is described in detail later.
The operation of each component of the image processing apparatus 2 is explained in further detail below.
The video signal D0 input to the image processing apparatus 2 is input to the image delay section 4.
Figs. 11(a)-11(j) show the signal timing of each part of the image processing apparatus 2. As shown in Fig. 11(b), the input video signal D0 of frames F0, F1, F2, F3, F4 is input in succession in synchronization with the input vertical synchronizing signal SYI shown in Fig. 11(a).
The frame memory control section 12 generates frame memory write addresses according to the input vertical synchronizing signal SYI, causes the frame memory 11 to store the input video signal, and, in synchronization with the output vertical synchronizing signal SYO shown in Fig. 11(c) (shown as a signal with no delay with respect to the input vertical synchronizing signal SYI), outputs the video signal D1 (the video signal of frames F0, F1, F2, F3, F4) with no frame delay with respect to the input video signal D0, as shown in Fig. 11(d).
The frame memory control section 12 also generates frame memory read addresses according to the output vertical synchronizing signal SYO, so that the video signal D2 stored in the frame memory 11 is read out and output with a delay of one frame (Fig. 11(e)).
As a result, the video signals D1 and D2 of two consecutive frames are output simultaneously from the image delay section 4. That is, at the timing (frame period) at which the video signal of frame F1 is input as the video signal D0, the video signals of frames F1 and F0 are output as the video signals D1 and D2; at the timing (frame period) at which the video signal of frame F2 is input as the video signal D0, the video signals of frames F2 and F1 are output as the video signals D1 and D2.
The video signals D1 and D2 of the two consecutive frames output from the image delay section 4 are supplied to the motion vector detecting section 5, and the video signal D2 is also supplied to the image correcting section 6.
The motion vector detecting section 5 generates the motion vector V from the video signals D1 and D2. This motion vector V is the motion vector from the video signal D2 of each frame to the video signal D1 of the next frame; therefore, at the timing at which the video signals D2 and D1 of frames F0 and F1 are input to the motion vector detecting section 5, the motion vector from frame F0 to frame F1 (denoted "F0 → F1" in Fig. 11(f)) is output.
The video signal E, generated by multiplying the video signal D2 by the gain GAIN, is output in the same frame period as the video signal D2 (Fig. 11(h)).
The video signal E1 is output in the same frame period as the video signal E (Fig. 11(i)), and the video signal E2 is output one frame period later (Fig. 11(j)).
The motion vector V is delayed by one frame period, so that at the timing at which the video signal E2 of frame F0 and the video signal E1 of frame F1 are output, the motion vector Vd from frame F0 to frame F1 (denoted "F0 → F1" in Fig. 11(g)) is output.
The motion vector detecting section 5 detects the motion vector using the sum of absolute differences SAD often used in video coding. Since the object of the present invention is to reduce the motion blur of pixels in which motion blur occurs, the sum of absolute differences SAD is calculated for each pixel, and the motion vector is obtained from its minimum value.
However, if the computation of the sum of absolute differences SAD is carried out for all pixels, the amount of computation becomes enormous; processing may therefore be carried out, as in video coding, in such a way that adjacent blocks used for motion vector detection do not overlap each other, and for the pixels for which no motion vector is detected, interpolation may be carried out using the motion vectors detected in their periphery.
In the foregoing, the block size used by the motion vector detecting section 5 is a rectangular region centered on the pixel of interest P(i, j) and symmetric vertically and horizontally, whose height and width are expressed as the odd numbers (2*BM+1) and (2*BN+1), respectively. However, the height and width of the rectangular region need not be odd numbers, and the position of the pixel of interest in the rectangular region need not be exactly the center; it may be slightly off-center.
In addition, as shown in formula (1), the search range is defined as -SV ≤ k ≤ SV, -SH ≤ l ≤ SH, and the sum of absolute differences SAD is calculated for all k and l contained in this range. However, for the purpose of reducing the amount of computation, k and l may be appropriately thinned out when calculating the sum of absolute differences SAD. In this case, for the positions (i+k, j+l) removed by the thinning, interpolation can be carried out from the sums of absolute differences SAD of the peripheral positions. The accuracy of the motion vector may also be examined, and if the accuracy presents no problem, the thinned-out sums of absolute differences SAD may be used without interpolation.
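A sketch of the SAD search for a single pixel of interest (formula (1) itself is not reproduced in this excerpt, so the exact summation is an assumption; border handling is omitted):

```python
import numpy as np

def block_sad_motion(d2, d1, i, j, bm, bn, sv, sh):
    """Sketch of SAD-based block matching: the (2*BM+1) x (2*BN+1)
    block around the pixel of interest (i, j) in frame D2 is compared
    with displaced blocks in the next frame D1 over the search range
    -SV..SV, -SH..SH, and the offset (k, l) minimizing the sum of
    absolute differences is returned."""
    block = d2[i - bm:i + bm + 1, j - bn:j + bn + 1]
    best, best_sad = (0, 0), float("inf")
    for k in range(-sv, sv + 1):
        for l in range(-sh, sh + 1):
            cand = d1[i + k - bm:i + k + bm + 1, j + l - bn:j + l + bn + 1]
            sad = np.abs(block - cand).sum()
            if sad < best_sad:
                best, best_sad = (k, l), sad
    return best

# A frame shifted right by 2 pixels is recovered as (k, l) = (0, 2).
d2 = np.arange(100.0).reshape(10, 10)
d1 = np.roll(d2, 2, axis=1)
v = block_sad_motion(d2, d1, i=5, j=4, bm=1, bn=1, sv=2, sh=2)
```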
The motion vector V input to the image correcting section 6 is first input to the motion blur estimating section 32. As shown in Fig. 12, the motion vector V input to the motion blur estimating section 32 can be expressed by its vertical component Vy(i, j) and horizontal component Vx(i, j); the direction A (in degrees) of the motion vector is calculated by formula (3), and the size LM (in pixels) of the motion vector by formula (4).
Consider here the case in which the camera is held stationary and an object in uniform linear motion is imaged. Figs. 13(a) and 13(b) show an example of the motion of an image element represented by the video signals of three consecutive frames captured in this case. In the illustrated example, between the first and second frames (Fig. 13(a)) and between the second and third frames (Fig. 13(b)), the image element ES moves 4 pixels in the horizontal direction and does not move in the vertical direction (Vx = 4, Vy = 0). Therefore, the motion vectors between the first and second frames and between the second and third frames, shown by the arrows in Figs. 13(a) and 13(b), are detected as 4 pixels in the horizontal direction and 0 pixels in the vertical direction.
If the imaging period Ts of the images shown in Figs. 13(a) and 13(b) were equal to one frame period Tf, the size LB of the motion blur would also be 4 pixels in the horizontal direction and 0 pixels in the vertical direction.
In practice, however, the imaging period Ts is shorter than one frame period Tf, as shown in Fig. 6; therefore, as shown in Figs. 14(a) and 14(b), the size LB of the motion blur is smaller than the size LM of the motion vector, the ratio between them corresponding to the ratio (Ts/Tf) of the length of the imaging period Ts to one frame period Tf.
In view of this, the size LM of the motion vector multiplied by an adjustment parameter ADJ smaller than 1 is estimated as the size LB of the motion blur. As stated above, the adjustment parameter ADJ may be determined according to the actual imaging period Ts of each frame, may be determined empirically, or may be set by the user.
The method of calculating the index IND used to read out filter coefficients from the table of the filter coefficient storage section 33 is explained below.
For example, the filter coefficients stored in the filter coefficient storage section 33 are defined for angles from 0 degrees to 165 degrees at intervals of 15 degrees as the representative values of the angle (in degrees), and for the odd numbers from 1 to 21 as the representative values of the size.
In this case, the LB obtained by formula (5) is rounded to the nearest integer; if the rounded result is an even number, 1 is added to make it odd (LB = LB+1); if the result of the above processing is greater than 21, it is clipped to 21; and the result of the above processing is output as the representative value LR of the size of the motion blur. If the value of the size LB of the motion blur is in the predetermined range containing the representative value LR, the above processing converts the size LB of the motion blur into the representative value LR.
As for the angle A, if the A obtained by formula (3) is less than 0, 180 degrees is added (A = A+180); then the angle is rounded in units of 15 degrees by computing A2 = (A+7.5)/15 and discarding the fractional part; if the result is 12 or greater (A2 ≥ 12), A2 = 0 is used. The result of this processing is output as the value AR2 corresponding to the representative value AR of the angle of the motion blur. The following relation holds between AR and AR2.
AR=15×AR2
If the value of the angle A of the motion blur is in the predetermined range containing the representative value AR, the above processing converts the angle A of the motion blur into the value AR2 corresponding to the representative value AR.
Using the representative value LR of the size of the motion blur and the value AR2 corresponding to the representative value AR of the angle, the index IND used for reading from the table can be calculated by the following formula.
IND=12*((LR-1)/2-1)+AR2 (11)
Fig. 15 shows a concrete example of the table of the index IND obtained from AR2 and LR based on formula (11). Although not shown in Fig. 15, the filter coefficients CF(p, q) in the case of LR = 1 are, for example, CF(p, q) = 1 when p = 0 and q = 0, and CF(p, q) = 0 otherwise.
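The quantization and index computation can be sketched as follows (the round-to-nearest reading of the text is assumed; LR = 1, for which the identity coefficients are used, is not covered by formula (11)):

```python
def blur_index(lb, a):
    """Sketch of the quantization and index computation described
    above: LB is rounded to the nearest odd value and clipped to 21
    (representative size LR); A is folded into 0-165 degrees and
    quantized in 15-degree steps (AR2); the index follows formula
    (11)."""
    lr = int(lb + 0.5)         # round off (lb is non-negative)
    if lr % 2 == 0:
        lr += 1                # even -> next odd value
    lr = min(lr, 21)           # clip to 21
    if a < 0:
        a += 180               # fold negative angles into 0-180
    ar2 = int((a + 7.5) / 15)  # round in 15-degree units
    if ar2 >= 12:
        ar2 = 0
    return 12 * ((lr - 1) // 2 - 1) + ar2  # formula (11)

# LB = 3.8 rounds to 4 and becomes 5; A = 30 degrees gives AR2 = 2.
ind = blur_index(3.8, 30.0)
```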
When the index IND is input from the motion blur estimating section 32, the filter coefficient storage section 33 supplies the filter coefficients CF(p, q) corresponding to the input index IND to the low-pass filter 36. The filter coefficients held in the filter coefficient storage section 33 can be freely designed by the user. As long as the filter coefficients realize a low-pass filter, they are comparatively easy to design; this is also a feature of the present invention.
Next, the filtering section 34 with the low-pass filter 36 is described in detail. The object of the present invention is to appropriately reduce the motion blur in regions where motion blur is produced by the motion of the subject and the camera, taking as its basis the technique using the low-pass filter shown in the following formula.
E(i,j)=D2(i,j)+BST1(i,j)*(D2(i,j)-FL1(i,j)) (12)
Transforming formula (12) yields formulas (9) and (10). When processing based on the idea of formula (12), the gain GAIN(i, j) can be calculated by formula (9) using, for example, the green (G) signal, and the correction processing section 30 can then perform the calculation of formula (10) on a plurality of color signals of the same pixel using the same gain GAIN(i, j), which has the advantage of reducing the amount of computation. However, the technique using formula (12) has the following drawback and should therefore be handled as follows.
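The equivalence of formula (12) with formulas (9) and (10) is a one-line rearrangement (valid for D2(i, j) ≠ 0; the D2(i, j) = 0 case is handled by the substitution noted with formula (9)):

```latex
\begin{aligned}
E(i,j) &= D2(i,j) + BST1(i,j)\bigl(D2(i,j) - FL1(i,j)\bigr)\\
       &= D2(i,j)\Bigl(1 + BST1(i,j) - BST1(i,j)\,\tfrac{FL1(i,j)}{D2(i,j)}\Bigr)\\
       &= GAIN(i,j)\,D2(i,j).
\end{aligned}
```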
The technique of formula (12) applies low-pass filtering to the video signal D2 input to the image correcting section 6 using the filter coefficients CF(p, q) output from the filter coefficient storage section 33, and outputs the filtering result FL1(i, j) to the gain calculating section 39. However, motion blur correction processing based on a low-pass filter according to formula (12) has the drawback of easily producing overshoot at strong edges of the corrected image.
For this reason, the nonlinear processing section 35 is inserted before the low-pass filter 36 to perform nonlinear processing that suppresses overshoot at strong edges. For example, nonlinear processing is carried out using the threshold TH1 input from the operation signal processing section 31 to suppress overshoot. Specifically, as shown in Fig. 16, the difference DIF(i-p, j-q) = D2(i, j) - D2(i-p, j-q) between the pixel value D2(i, j) of the pixel of interest and the pixel value D2(i-p, j-q) of each pixel in its peripheral region is clipped by the threshold TH1. That is, the filtering section 34 clips the individual pixel values D2(i-p, j-q) of the pixels in the peripheral region so that the absolute value of the difference DIF(i-p, j-q) between the pixel value D2(i, j) of the pixel of interest and each pixel value D2(i-p, j-q) of the pixels in its peripheral region does not exceed the predetermined threshold TH1, and applies low-pass filtering to the pixels in the peripheral region using the clipped pixel values. This suppresses what would otherwise occur: at image edges where the difference DIF(i-p, j-q) is large, the gain GAIN(i, j) calculated by the gain calculating section 39 would become excessively large.
The processing of the correction strength adjusting section 38 is described in detail below.
The correction strength adjusting section 38 serves to suppress the degradation of the quality of the motion-blur-corrected image caused by the noise amplification effect of the motion blur processing; according to a characteristic of the image, for example its flatness, it reduces the correction strength parameter BST0 input from the operation signal processing section 31, or sets it to zero, and outputs the result to the gain calculating section 39 as the adjusted correction strength parameter BST1.
Specifically, the video signal D2 is input, the variation of the pixel values (for example, brightness values) of the pixels in the peripheral region of the pixel of interest is detected, and the value of the adjusted correction strength parameter BST1 is determined according to the magnitude of this variation. As the index representing the above pixel value variation, the absolute value of the difference between the pixel value D2(i, j) of the pixel of interest and the mean value FL2(i, j) output from the mean value calculating section 37 is used. If, for example, this absolute value is less than the threshold TH2 input from the operation signal processing section 31, the variation of the pixel values in the peripheral region of the pixel of interest is judged to be small, and the adjusted correction strength parameter BST1 is set to, for example, 1/2 of the pre-adjustment correction strength parameter BST0; if the absolute value is greater than the threshold TH2, the pixel value variation is judged to be large, and the pre-adjustment correction strength parameter BST0 is used unchanged as the adjusted correction strength parameter BST1. The adjusted correction strength parameter BST1 thus determined is then output to the gain calculating section 39.
The meaning of above-mentioned processing is carried out in following further explain.
The motion that is used for alleviating the motion of being taken the photograph body and camera produces the noise that the processing of motion blur in the zone of motion blur will inevitably amplification video signal.Produced motion blur even especially change less flat site in pixel value variation, for example brightness, its influence is also very little in visual aspects, and treatment for correcting is more weak to get final product.Suppose directly to use correction intensity parameter value BST0 to proofread and correct, then can amplify noise largely, make the quality of motion blur correcting result reduce in this zone.At this, flat site is detected, the adaptation of using littler value to replace correction intensity B parameter ST0 in this zone is handled.At this moment, in order to take a decision as to whether flat site, as stated to the pixel value D2 of concerned pixel (i, j) with its neighboring area in the mean F L2 of pixel value of pixel get difference, with this difference and threshold ratio size judge.
For the same reason, as described above, the mean value calculation unit 37 calculates the simple mean of the pixel values of all pixels in the region -P ≤ p ≤ P, -Q ≤ q ≤ Q.
The gain calculation unit 39 uses the output FL1(i, j) of the filtering unit 34, the adjusted correction strength parameter BST1(i, j) output from the correction strength adjustment unit 38, and the pixel value D2(i, j) of the pixel of interest of the video signal D2 to calculate the gain GAIN(i, j) according to formula (9) below, and supplies the calculated gain GAIN(i, j) to the correction processing unit 30.
In the computation shown in formula (9), division by the pixel value D2(i, j) of the pixel of interest is required, so when D2(i, j) = 0 the calculation is performed with D2(i, j) = 1. In addition, when GAIN(i, j) < 0, the value is clipped to GAIN(i, j) = 0. The gain GAIN(i, j) obtained by the above calculation is output to the correction processing unit 30.
In the correction processing unit 30, the supplied gain GAIN(i, j) is multiplied by the pixel value D2(i, j) to perform the motion blur correction. The multiplication result is supplied to the image delay unit 4 as the pixel value E(i, j) after motion blur correction.
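Formula (9) itself is not reproduced in this excerpt, so the sketch below treats it as a given callable and implements only the two safeguards described above, the divisor substitution and the negative-gain clipping, followed by the multiplication performed in the correction processing unit. All names are illustrative:

```python
def safe_gain(formula9, d2_ij):
    """Evaluate the gain with the two safeguards described in the text.

    formula9: stands in for the (unreproduced) formula (9); it is called
              with the divisor to use in place of D2(i, j).
    """
    # Division by D2(i, j) is required, so substitute 1 when D2(i, j) = 0.
    divisor = d2_ij if d2_ij != 0 else 1
    gain = formula9(divisor)
    # A negative gain is clipped to GAIN(i, j) = 0.
    return max(gain, 0.0)

def correct_pixel(gain, d2_ij):
    # Correction processing unit 30: E(i, j) = GAIN(i, j) * D2(i, j).
    return gain * d2_ij
```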
Next, the processing of the frame generation unit 7 is described in detail. Since a frame is to be interpolated between the corrected image E2 of the frame of interest and the corrected image E1 of the following frame, these video signals are input from the image delay unit 4 to the frame generation unit 7, and the motion vector Vd of the frame of interest is input from the motion vector detection unit 5 to the frame generation unit 7. If the motion vector Vd at position (i, j) is expressed by its vertical-direction component Vdx(i, j) and horizontal-direction component Vdy(i, j), the motion vector at position (i, j) of the interpolated frame H can be obtained as follows.
Vhx(i+si,j+sj)=Vdx(i,j)/2 (13a)
Vhy(i+si,j+sj)=Vdy(i,j)/2 (13b)
Here si = round[Vdx(i, j)/2] and sj = round[Vdy(i, j)/2], where round[*] denotes rounding * to the nearest integer.
That is, since the frame to be generated lies midway between the corrected image E2 of the frame of interest and the corrected image E1 of the following frame, the motion vector Vd of the frame of interest divided by 2 is stored at the position (i+si, j+sj) calculated by dividing the motion vector Vd of the frame of interest by 2 and rounding. In addition, since the sum of absolute differences is used in the following processing, the sum of absolute differences SADh of the interpolated frame is also calculated.
SADh(i+si,j+sj)=mv_sad(i,j) (13c)
(si and sj are the same as in formulas (13a) and (13b) above.)
In formula (13c), when (si, sj) specifies a position beyond the range defined for the video, no processing is performed. Note that the interpolated-frame motion vectors calculated by formulas (13a) and (13b) cannot be obtained at all positions (i, j). Correction and/or interpolation processing of the motion vector values (hereinafter simply called correction processing) is therefore required. Various algorithms have been proposed for motion vector correction; a representative process is described here.
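The projection of formulas (13a) to (13c), including the out-of-range skip just described, can be sketched as follows. The dict-of-(i, j) representation and all names are illustrative; round[*] is assumed to round halves away from zero:

```python
import math

def round_half_up(x):
    # round[*] in the text: nearest integer, halves rounded away from zero
    # (Python's built-in round() rounds halves to even, so it is not used).
    return int(math.floor(x + 0.5)) if x >= 0 else int(math.ceil(x - 0.5))

def project_to_interpolated_frame(vdx, vdy, mv_sad, height, width):
    """Project the frame-of-interest motion vectors onto the interpolated
    frame per formulas (13a)-(13c)."""
    vhx, vhy, sadh = {}, {}, {}
    for (i, j), sad in mv_sad.items():
        si = round_half_up(vdx[(i, j)] / 2)
        sj = round_half_up(vdy[(i, j)] / 2)
        ti, tj = i + si, j + sj
        # Positions beyond the range defined for the video are not processed.
        if not (0 <= ti < height and 0 <= tj < width):
            continue
        vhx[(ti, tj)] = vdx[(i, j)] / 2   # (13a)
        vhy[(ti, tj)] = vdy[(i, j)] / 2   # (13b)
        sadh[(ti, tj)] = sad              # (13c)
    return vhx, vhy, sadh
```

As the text notes next, such a projection leaves holes at positions no vector maps to, which is why the subsequent correction processing is needed.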
The correction of the motion vector consists of the following two processes: for every pixel of the interpolated frame HF, searching a 3 × 3 range for the motion vector with the minimum SAD; and, if no motion vector exists within the 3 × 3 range, setting a new motion vector.
Within the 3 × 3 range centered at position (i, j), the minimum value of SAD and its position are searched for, and the motion vector (Vhx(ci, cj), Vhy(ci, cj)) at the position (ci, cj) judged by the search to give the minimum value is taken as the corrected value of the motion vector at position (i, j).
Vcx(i,j)=Vhx(ci,cj)
Vcy(i,j)=Vhy(ci,cj) (14)
Here, (ci, cj) can be expressed by the following formulas.
[Formula 5]
(cii, cjj) = argmin_{(cii, cjj)} { SADh(i + cii, j + cjj) | cii = -1, …, 1, cjj = -1, …, 1 }   (15a)
ci = i + cii, cj = j + cjj   (15b)
If no motion vector exists within the 3 × 3 range, Vcx(i, j) = Vcy(i, j) = 0 is set as the corrected value of the motion vector.
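The two correction processes, the 3 × 3 minimum-SAD search of formulas (14) to (15b) and the zero-vector fallback, can be sketched together (dict-of-(i, j) representation and names are illustrative):

```python
def correct_motion_vector(i, j, vhx, vhy, sadh):
    """Corrected vector (Vcx, Vcy) at (i, j): the vector at the position
    with minimum SADh in the 3x3 window, or (0, 0) if no vector exists
    anywhere in the window."""
    best_sad, best_pos = None, None
    for cii in (-1, 0, 1):
        for cjj in (-1, 0, 1):
            pos = (i + cii, j + cjj)          # (ci, cj) candidate, (15b)
            if pos in sadh and (best_sad is None or sadh[pos] < best_sad):
                best_sad, best_pos = sadh[pos], pos
    if best_pos is None:
        return 0.0, 0.0                       # no motion vector in the range
    return vhx[best_pos], vhy[best_pos]       # formula (14)
```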
The interpolated-frame motion vectors Vcx(i, j) and Vcy(i, j) are thus obtained; the interpolated frame HF is then obtained by using these values to reference the corrected image E2 of the frame of interest and the corrected image E1 of the following frame. If the positions in the corrected image E2 of the frame of interest and the corrected image E1 of the following frame corresponding to the pixel of interest (i, j) of the interpolated frame are (bi, bj) and (ai, aj) respectively, the pixel value HF(i, j) of each pixel of the interpolated frame HF can be obtained as in the following formula.
HF(i,j)={E2(bi,bj)+E1(ai,aj)}/2 (16)
where bi = i - round[Vcx(i, j)],
bj = j - round[Vcy(i, j)],
ai = i + fix[Vcx(i, j)],
aj = j + fix[Vcy(i, j)],
and fix[*] denotes truncation of * toward 0.
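Formula (16) with its index definitions can be sketched in Python as follows (a minimal sketch with no bounds handling, assuming round[*] rounds halves away from zero; names are illustrative):

```python
import math

def fix(x):
    # fix[*]: truncation of * toward 0.
    return math.trunc(x)

def round_half_up(x):
    # round[*]: nearest integer, halves rounded away from zero.
    return int(math.floor(x + 0.5)) if x >= 0 else int(math.ceil(x - 0.5))

def interpolate_pixel(i, j, vcx, vcy, e2, e1):
    """Pixel value HF(i, j) per formula (16); e2 and e1 are 2-D lists
    holding the corrected images of the frame of interest and the
    following frame."""
    bi = i - round_half_up(vcx)
    bj = j - round_half_up(vcy)
    ai = i + fix(vcx)
    aj = j + fix(vcy)
    return (e2[bi][bj] + e1[ai][aj]) / 2
```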
Second Embodiment
Figure 17 shows the image correction unit 6 used in the second embodiment.
The illustrated image correction unit 6 is substantially the same as the image correction unit of Fig. 5; the difference is that the parameters output from the operation signal processing unit 31 include a threshold TH3, which is supplied to the correction processing unit 30, and the correction processing unit 30 performs processing according to the threshold TH3. The threshold TH3 is used to suppress the degree of correction so that the correction of the pixel values does not become overcorrection.
In the correction processing unit 30, the motion-blur-corrected image is obtained using the gain GAIN supplied from the gain calculation unit 39. However, even if the filtering unit 34 performs overshoot suppression according to the method of the first embodiment, the result of the motion blur correction processing can still sometimes contain overshoot. This occurs mostly when the correction strength parameter BST0 is set to a high correction value.
Therefore, in this second embodiment, clipping processing is applied to the result of the motion blur correction processing to avoid overshoot. Specifically, a threshold TH3 is input from the operation signal processing unit 31, and processing similar to the nonlinear processing of the filtering unit 34 is performed as follows: when the absolute value of the difference between the pixel value D2(i, j) of the pixel of interest before correction and the value obtained by multiplying this pixel value D2(i, j) by the gain GAIN(i, j) exceeds the threshold TH3, |E(i, j) - D2(i, j)| = TH3 is imposed; if the absolute value of the difference between the pixel value D2(i, j) of the pixel of interest before correction and the value obtained by multiplying this pixel value D2(i, j) by the gain GAIN(i, j) is less than or equal to the threshold TH3, then E(i, j) = GAIN(i, j) × D2(i, j).
That is,
(A) if GAIN(i, j) × D2(i, j) - D2(i, j) > TH3, then
E(i, j) - D2(i, j) = TH3 (17a),
that is,
E(i, j) = D2(i, j) + TH3 (17b)
determines E(i, j).
(B) if GAIN(i, j) × D2(i, j) - D2(i, j) < -TH3, then
E(i, j) - D2(i, j) = -TH3 (17c),
that is,
E(i, j) = D2(i, j) - TH3 (17d)
determines E(i, j).
(C) in cases other than (A) and (B),
E(i, j) - D2(i, j) = GAIN(i, j) × D2(i, j) - D2(i, j) (17e),
that is,
E(i, j) = GAIN(i, j) × D2(i, j) (17f)
determines E(i, j).
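Cases (A) to (C) amount to clamping the corrected value to within ±TH3 of the original pixel value; a minimal Python sketch (names are illustrative):

```python
def clip_correction(d2_ij, gain, th3):
    """Limit the corrected value E(i, j) per formulas (17a)-(17f) so that
    it never departs from D2(i, j) by more than TH3."""
    corrected = gain * d2_ij
    delta = corrected - d2_ij
    if delta > th3:        # case (A), formula (17b)
        return d2_ij + th3
    if delta < -th3:       # case (B), formula (17d)
        return d2_ij - th3
    return corrected       # case (C), formula (17f)
```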
In the first and second embodiments above, the video signal D1 is delayed by one frame interval relative to the video signal D2 (it precedes D2 in time by one frame interval); however, the video signal D1 may instead be delayed by two or more frame intervals relative to the video signal D2 (precede it in time by two or more frame intervals), and the video signal D1 may also follow the video signal D2 in time by one frame interval or by two or more frame intervals.
As described above, in the first and second embodiments, the motion vector between frames of the input image signal is detected for each pixel, the region of the video in which motion blur occurs is detected, and the gain is determined according to the direction and magnitude of the detected motion blur, so that a video signal degraded by motion blur can be corrected. Furthermore, by using two consecutive corrected video signals to interpolate the video signal considered to lie between them, the image quality during moving-image display can be improved compared with interpolating from the original video signals or with only correcting the motion blur within frames.
To obtain the same effect, one could also provide separately a process that corrects motion blur and a process that generates, by interpolation, a frame lying between frames; compared with that case, the present invention obtains the following effects.
(1) The motion vector detection result is used in each process, so motion vector detection can serve as a shared circuit (shared process), reducing the circuit scale (the amount of processing and the frame-memory capacity for storing a motion vector for each pixel).
(2) The motion vector detection and frame interpolation processes each require, given their processing steps, frame memories holding at least two images; by sharing the frame memories, the required memory capacity can be reduced.
The present invention has been described above in terms of an image processing apparatus and an image display apparatus, but the image processing method and the image display method executed by these apparatuses also constitute part of the present invention. The present invention can also be implemented as a program that executes the steps of the above image processing apparatus or image processing method and the processing of each step, and as a computer-readable recording medium on which the program is recorded.

Claims (12)

1. An image processing apparatus, characterized in that the image processing apparatus has:
a motion vector detection unit that detects a motion vector of a 1st video signal input from the outside, based on the 1st video signal and a 2nd video signal input from the outside, the 2nd video signal being one or more frames before or one or more frames after the 1st video signal in time; and
an image correction unit that uses the motion vector detected by the motion vector detection unit to correct motion blur in the 1st video signal,
the image correction unit having:
a motion blur estimation unit that estimates a direction and a magnitude of the motion blur from the motion vector;
a filtering unit that filters the 1st video signal using filter coefficients predetermined in correspondence with the estimated direction and magnitude of the motion blur; and
a correction strength adjustment unit that adjusts a correction strength for a pixel of interest according to a degree of variation of pixel values near the pixel of interest,
the filtering unit performing clipping processing on each pixel value of the pixels in a neighboring area of the pixel of interest so that an absolute value of a difference between the pixel value of the pixel of interest and each pixel value of the pixels in the neighboring area does not exceed a predetermined threshold, and performing low-pass filtering on the pixels in the neighboring area using the pixel values after the clipping processing.
2. The image processing apparatus according to claim 1, characterized in that
the correction strength adjustment unit adjusts the correction strength for the pixel of interest according to a difference between the pixel value of the pixel of interest and a mean value of the pixel values of the pixels in the neighboring area.
3. The image processing apparatus according to claim 1 or 2, characterized in that
the image correction unit has:
a gain calculation unit that obtains a gain according to the filtering result of the filtering unit; and
a correction processing unit that corrects the 1st video signal by multiplying the gain calculated by the gain calculation unit by the 1st video signal.
4. The image processing apparatus according to claim 3, characterized in that
the image correction unit further has a filter coefficient storage unit that stores filter coefficients in association with combinations of directions and magnitudes of a plurality of motion blurs,
the motion blur estimation unit selects, from the filter coefficients stored in the filter coefficient storage unit, the filter coefficients corresponding to the estimated direction and magnitude of the motion blur, and
the filtering unit performs filtering using the selected filter coefficients.
5. The image processing apparatus according to claim 1 or 2, characterized in that
the image processing apparatus further has a frame generation unit that generates, by interpolation, a frame lying between two mutually different corrected images in which the motion blur has been reduced.
6. An image display apparatus, characterized in that the image display apparatus has:
the image processing apparatus according to claim 1 or 2; and
an image display unit that displays an image generated by the image processing apparatus.
7. An image processing method, characterized in that the image processing method comprises:
a motion vector detection step of detecting a motion vector of a 1st video signal input from the outside, based on the 1st video signal and a 2nd video signal input from the outside, the 2nd video signal being one or more frames before or one or more frames after the 1st video signal in time; and
an image correction step of correcting motion blur in the 1st video signal using the motion vector detected in the motion vector detection step,
the image correction step comprising:
a motion blur estimation step of estimating a direction and a magnitude of the motion blur from the motion vector;
a filtering step of filtering the 1st video signal using filter coefficients predetermined in correspondence with the estimated direction and magnitude of the motion blur; and
a correction strength adjustment step of adjusting a correction strength for a pixel of interest according to a degree of variation of pixel values near the pixel of interest,
in the filtering step, clipping processing being performed on each pixel value of the pixels in a neighboring area of the pixel of interest so that an absolute value of a difference between the pixel value of the pixel of interest and each pixel value of the pixels in the neighboring area does not exceed a predetermined threshold, and low-pass filtering being performed on the pixels in the neighboring area using the pixel values after the clipping processing.
8. The image processing method according to claim 7, characterized in that
in the correction strength adjustment step, the correction strength for the pixel of interest is adjusted according to a difference between the pixel value of the pixel of interest and a mean value of the pixel values of the pixels in the neighboring area.
9. The image processing method according to claim 7 or 8, characterized in that
the image correction step comprises:
a gain calculation step of obtaining a gain according to the filtering result of the filtering step; and
a correction processing step of correcting the 1st video signal by multiplying the gain calculated in the gain calculation step by the 1st video signal.
10. The image processing method according to claim 9, characterized in that
filter coefficients are stored in advance in a filter coefficient storage unit in association with combinations of directions and magnitudes of a plurality of motion blurs,
in the motion blur estimation step, the filter coefficients corresponding to the estimated direction and magnitude of the motion blur are selected from the filter coefficients stored in the filter coefficient storage unit, and
in the filtering step, filtering is performed using the selected filter coefficients.
11. The image processing method according to claim 7 or 8, characterized in that
the image processing method further comprises a frame generation step of generating, by interpolation, a frame lying between two mutually different corrected images in which the motion blur has been reduced.
12. An image display method, characterized in that the image display method comprises:
the image processing method according to claim 7 or 8; and
an image display step of displaying an image generated by the image processing method.
CN201110359416.9A 2010-11-15 2011-11-14 Image processing apparatus and method, and image display apparatus and method Expired - Fee Related CN102572222B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2010-254753 2010-11-15
JP2010254753A JP2012109656A (en) 2010-11-15 2010-11-15 Image processing apparatus and method, and image display unit and method

Publications (2)

Publication Number Publication Date
CN102572222A true CN102572222A (en) 2012-07-11
CN102572222B CN102572222B (en) 2014-10-15

Family

ID=46416613

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110359416.9A Expired - Fee Related CN102572222B (en) 2010-11-15 2011-11-14 Image processing apparatus and method, and image display apparatus and method

Country Status (2)

Country Link
JP (1) JP2012109656A (en)
CN (1) CN102572222B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104813648A (en) * 2012-12-11 2015-07-29 富士胶片株式会社 Image processing device, image capture device, image processing method, and image processing program
CN108476319A (en) * 2016-01-11 2018-08-31 三星电子株式会社 Image encoding method and equipment and picture decoding method and equipment
CN110084765A (en) * 2019-05-05 2019-08-02 Oppo广东移动通信有限公司 A kind of image processing method, image processing apparatus and terminal device
CN111698427A (en) * 2020-06-23 2020-09-22 联想(北京)有限公司 Image processing method and device and electronic equipment
CN112789848A (en) * 2018-10-12 2021-05-11 Jvc建伍株式会社 Interpolation frame generation device and method
CN113260941A (en) * 2019-01-09 2021-08-13 三菱电机株式会社 Control device and control method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1913585A (en) * 2005-06-13 2007-02-14 精工爱普生株式会社 Method and system for estimating motion and compensating for perceived motion blur in digital video
CN101272488A (en) * 2007-03-23 2008-09-24 展讯通信(上海)有限公司 Video decoding method and device for reducing LCD display movement fuzz
CN101305396A (en) * 2005-07-12 2008-11-12 Nxp股份有限公司 Method and device for removing motion blur effects
CN101365053A (en) * 2007-08-08 2009-02-11 佳能株式会社 Image processing apparatus and method of controlling the same

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104813648A (en) * 2012-12-11 2015-07-29 富士胶片株式会社 Image processing device, image capture device, image processing method, and image processing program
CN104813648B (en) * 2012-12-11 2018-06-05 富士胶片株式会社 Image processing apparatus, photographic device and image processing method
CN108476319A (en) * 2016-01-11 2018-08-31 三星电子株式会社 Image encoding method and equipment and picture decoding method and equipment
CN112789848A (en) * 2018-10-12 2021-05-11 Jvc建伍株式会社 Interpolation frame generation device and method
CN112789848B (en) * 2018-10-12 2023-09-08 Jvc建伍株式会社 Interpolation frame generation device and method
CN113260941A (en) * 2019-01-09 2021-08-13 三菱电机株式会社 Control device and control method
CN113260941B (en) * 2019-01-09 2023-10-24 三菱电机株式会社 Control device and control method
CN110084765A (en) * 2019-05-05 2019-08-02 Oppo广东移动通信有限公司 A kind of image processing method, image processing apparatus and terminal device
CN110084765B (en) * 2019-05-05 2021-08-06 Oppo广东移动通信有限公司 Image processing method, image processing device and terminal equipment
CN111698427A (en) * 2020-06-23 2020-09-22 联想(北京)有限公司 Image processing method and device and electronic equipment
CN111698427B (en) * 2020-06-23 2021-12-24 联想(北京)有限公司 Image processing method and device and electronic equipment

Also Published As

Publication number Publication date
JP2012109656A (en) 2012-06-07
CN102572222B (en) 2014-10-15

Similar Documents

Publication Publication Date Title
CN102572222B (en) Image processing apparatus and method, and image display apparatus and method
US8155468B2 (en) Image processing method and apparatus
US7936941B2 (en) Apparatus for clearing an image and method thereof
CN103460682B (en) Image processing apparatus and method
US8369644B2 (en) Apparatus and method for reducing motion blur in a video signal
US8781225B2 (en) Automatic tone mapping method and image processing device
US20140147042A1 (en) Device for uniformly enhancing images
CN100548028C (en) Partial image filtering method based on Noise Estimation
WO2009107487A1 (en) Motion blur detecting device and method, and image processor, and image display
EP2989794B1 (en) Image color correction
US7903901B2 (en) Recursive filter system for a video signal
US8345163B2 (en) Image processing device and method and image display device
CN103310446B (en) A kind of solid matching method that instructs filtering based on iteration
CN108632501B (en) Video anti-shake method and device and mobile terminal
CN105229998A (en) Image processing apparatus, image processing method and program
JP2013106151A (en) Image processing apparatus and image processing method
CN105304031A (en) Method based on still image scene judgment to avoid noise amplification
JP2009194721A (en) Image signal processing device, image signal processing method, and imaging device
EP1836678B1 (en) Image processing
US20120038646A1 (en) Image processing apparatus and image processing method
CN102118546B (en) Method for quickly implementing video image noise estimation algorithm
JP5550794B2 (en) Image processing apparatus and method, and image display apparatus and method
CN1984241A (en) Method for transient enhancing video image brightness
JP2008259097A (en) Video signal processing circuit and video display device
JP5790933B2 (en) Noise removal equipment, electronic equipment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20141015

Termination date: 20161114