WO2009107487A1 - Motion blur detection device and method, image processing device, and image display device - Google Patents
- Publication number
- WO2009107487A1 (PCT/JP2009/052285, JP2009052285W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- video signal
- difference
- gradation
- unit
- blur
- Prior art date
Classifications
- G09G3/20 — Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes, for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix, no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
- G06T5/73 — Deblurring; Sharpening (under G06T5/00 — Image enhancement or restoration)
- G09G5/00 — Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- H04N5/208 — Circuitry for controlling amplitude response for correcting amplitude versus frequency characteristic, for compensating for attenuation of high frequency components, e.g. crispening, aperture distortion correction
- G06T2207/10016 — Video; image sequence (indexing scheme for image analysis or image enhancement: image acquisition modality)
- G06T2207/20201 — Motion blur correction (indexing scheme for image analysis or image enhancement: image enhancement details)
- G09G2320/02 — Improving the quality of display appearance
- G09G2320/0261 — Improving the quality of display appearance in the context of movement of objects on the screen or movement of the observer relative to the screen
- G09G2320/10 — Special adaptations of display systems for operation with variable images
- G09G2320/103 — Detection of image changes, e.g. determination of an index representative of the image change
- G09G2320/106 — Determination of movement vectors or equivalent parameters within the image
- G09G2360/18 — Use of a frame buffer in a display terminal, inclusive of the display panel
Definitions
- the present invention relates to a motion blur detection device and method, an image processing device, and an image display device.
- Displays for video are rapidly shifting from the conventional CRT to thin displays such as liquid crystal displays and plasma displays.
- Early thin displays were greatly inferior to the CRT in moving-image display performance; in particular, the effect of motion blur caused by hold-type display was large.
- The video signal received by the display device is obtained by quantizing, at the light-receiving unit of the camera, the total amount of light received from the subject during the frame accumulation time (for example, 1/60 second), and arranging the results in the pixel order determined by the standard.
- Such a signal contains blurring (hereinafter referred to as motion blur) when the camera or the subject moves during the accumulation time.
- As a method for reducing blur contained in an image, Patent Document 1, for example, discloses a method using an enlargement/reduction circuit. This method sharpens the rise and fall of image contours using filtering technology without adding overshoot or undershoot, and is therefore expected to be effective against isotropic blur of narrow width, such as blur due to defocusing during imaging.
- Motion blur, however, differs from image blur caused by defocusing: its width varies greatly with the relative speed of the camera and the subject, and its direction is anisotropic, depending on the direction of relative motion between camera and subject. It is therefore difficult to apply the conventional technique to motion blur.
- Patent Document 2 discloses a blur-function deconvolution method using motion vector detection. This method, however, requires an enormous amount of calculation and is difficult to implement in an actual circuit.
- Patent Document 1: JP 2002-16820 A; Patent Document 2: Japanese Patent No. 3251127
- As noted above, the motion blur contained in a video signal differs from the isotropic blur caused by defocusing: its width varies between wide and narrow, and its direction is anisotropic. A method that applies a uniform frequency conversion to the whole screen using filtering may therefore fail to obtain an appropriate effect.
- Moreover, filtering is also applied to contours with a low rate of luminance change, such as those seen in a ramp image, with the risk of erroneously converting them into an image different from the one that should be displayed.
- If the motion of the subject is identified using motion vector detection and the filtering process is applied adaptively, the scale of the detection circuit increases, making this approach difficult to use in terms of cost.
- The present invention has been made in view of the above problems, and its object is to detect motion blur contained in a video signal without increasing the circuit scale, and thereby to reduce the detected motion blur.
- The motion blur detection device comprises: delay means for generating a first video signal that is not frame-delayed with respect to the input video signal, a second video signal delayed by a first predetermined number of frames with respect to the input video signal, and a third video signal delayed by the first predetermined number of frames with respect to the second video signal; first difference detection means for detecting a gradation difference between the first video signal and the second video signal; second difference detection means for detecting a gradation difference between the second video signal and the third video signal; third difference detection means for detecting a gradation difference between the first video signal and the third video signal; differentiation means for detecting the signal change between adjacent pixels of the gradation difference detected by the third difference detection means; and a transition period detection unit that detects the gradation change between adjacent pixels in the second video signal and, based on that change, detects a gradation transition period contained in the video signal. The gradation difference detected by the first difference detection means, the gradation difference
- According to the present invention, a motion blur portion contained in the input video signal can be detected; accordingly, by adaptively correcting only the motion blur portion, for example, the width of the motion blur contained in the input video signal can be reduced and the quality of the moving image improved.
- FIG. 1 is a block diagram showing the configuration of an image display apparatus according to the present invention.
- FIG. 2 is a block diagram showing details of the delay unit 1.
- FIG. 3 is a block diagram showing details of the blur detection unit 2.
- FIG. 4 is a diagram showing details of the blur correction unit 3.
- FIGS. 5(a)-(f) are diagrams explaining the operation of the delay unit 1.
- FIG. 6 is a diagram showing an example of the video signal.
- FIG. 7 is a diagram showing an example of the video signal with the high-frequency components removed.
- FIG. 8 is a diagram showing the differentiation result of the video signal with the high-frequency components removed.
- FIG. 9 is a diagram showing the detailed configuration of the transition period detection unit 13.
- FIGS. 10(a)-(c) are diagrams explaining the operation of the transition period detection unit 13.
- FIGS. 11(a) and (b) are diagrams showing examples of the video signals of three consecutive frames.
- FIG. 12 is a diagram showing the detailed configuration of the difference detection unit 11.
- FIGS. 13(a)-(c) are diagrams explaining the operation of the difference detection unit 11.
- FIGS. 14(a)-(d) are diagrams explaining the operation of the difference detection unit 27 and the differentiation unit 28.
- FIG. 15 is a diagram showing the configuration of the blur period determination unit 14.
- FIGS. 16(a) and (b) are diagrams explaining the operation of the binarization unit 22.
- FIG. 17 is a diagram showing the internal configuration of the state determination unit 25.
- FIGS. 18(a)-(c) are diagrams explaining the operation of the state comparison unit 21.
- FIGS. 19(a)-(e) are diagrams explaining the operation of the state correction unit 29.
- FIGS. 20(a)-(c) are diagrams explaining the operation of the blur determination unit 26.
- FIG. 21 is a diagram showing the detailed configuration of the contour shape calculation unit 15.
- FIG. 1 is a block diagram showing a configuration of an image display apparatus according to the present invention.
- the illustrated image display device 81 includes a delay unit 1, a blur detection unit 2, a blur correction unit 3, and an image display unit 4.
- the delay unit 1 and the blur detection unit 2 constitute a motion blur detection device.
- the video signal input to the image display device 81 is supplied to the delay unit 1.
- the delay unit 1 performs frame delay of the input signal using a frame memory, and outputs a plurality of frame-delayed signals d1 to d3 to the blur detection unit 2 and the blur correction unit 3.
- The blur detection unit 2 detects a motion blur region contained in the video from the video signals d1 to d3 of a plurality of different frames output from the delay unit 1, and outputs a motion blur detection flag bf.
- The blur correction unit 3 converts the video signal d2 output from the delay unit 1 based on the motion blur detection flag bf detected by the blur detection unit 2, and outputs the converted video signal to the image display unit 4.
- FIG. 2 is a block diagram showing details of the delay unit 1.
- the delay unit 1 includes a frame memory control unit 5 and a frame memory 6.
- the frame memory 6 has a capacity capable of storing at least two frames of the input video signal.
- The frame memory control unit 5 writes the input video signal and reads the stored video signals according to memory addresses generated based on the synchronization signal contained in the input video signal d0, thereby generating the video signals d1, d2, and d3 of three consecutive frames.
- the video signal d1 has no delay with respect to the input video signal d0, and is also called a current frame video signal.
- the video signal d2 is delayed by one frame with respect to the video signal d1, and is also called a one-frame delayed video signal.
- the video signal d3 is delayed by one frame with respect to the video signal d2, that is, delayed by two frames with respect to the video signal d1, and is also called a two-frame delayed video signal.
- With d2 as the frame of interest, the video signal d2 is also referred to as the frame-of-interest video signal, the video signal d1 (one frame later) as the following-frame video signal, and the video signal d3 (one frame earlier) as the preceding-frame video signal.
- FIG. 3 is a block diagram showing details of the blur detection unit 2.
- The blur detection unit 2 includes low-pass filters (hereinafter referred to as LPFs) 7, 8, and 9, a differentiation unit 10, difference detection units 11 and 12, a transition period detection unit 13, a blur period determination unit 14, a difference detection unit 27, and a differentiation unit 28.
- the video signals d1, d2, and d3 output from the delay unit 1 are supplied to LPFs 7, 8, and 9, respectively.
- the LPF 7 removes the high frequency component of the current frame video signal d1 output from the delay unit 1 to generate the video signal e1, and outputs the video signal e1 to the difference detection units 11 and 27.
- the LPF 8 removes the high-frequency component of the 1-frame delayed video signal d2 output from the delay unit 1, generates a video signal e2, and outputs the video signal e2 to the differentiation unit 10 and the difference detection units 11 and 12.
- the LPF 9 removes the high-frequency component of the 2-frame delayed video signal d3 output from the delay unit 1, generates a video signal e3, and outputs the video signal e3 to the difference detection units 12 and 27.
- the differentiating unit 10 detects the amount of change between adjacent pixels of the input video signal e2, that is, the difference between adjacent pixels, and outputs the detection result f to the transition period detecting unit 13.
- The transition period detection unit 13 determines contours predicted to be motion blur based on the detection result f of the differentiation unit 10, and outputs the determination result h to the blur period determination unit 14.
- the difference detection unit 11 calculates the difference between the input video signal e1 and the video signal e2, that is, the difference between one frame for each pixel of the video signal, and outputs the difference correction signal g1 to the blur period determination unit 14.
- the difference detection unit 12 calculates the difference between the input video signal e3 and the video signal e2, that is, the difference between one frame for each pixel of the video signal, and outputs the difference correction signal g2 to the blur period determination unit 14.
- the difference detection unit 27 calculates a difference between the input video signal e1 and the video signal e3, that is, a difference between two frames for each pixel of the video signal, and outputs a difference correction signal g3 to the differentiation unit 28.
- the differentiating unit 28 detects the amount of change between adjacent pixels of the difference correction signal g3, that is, the difference between adjacent pixels, and outputs the difference differential result f3 between two frames to the blur period determination unit 14.
- the blur period determination unit 14 includes the determination result h output from the transition period determination unit 13, the difference correction signals g 1 and g 2 obtained by the difference detection units 11 and 12, and the two-frame difference obtained by the differentiation unit 28. Based on the differential result f3, it is determined whether or not motion blur has occurred, that is, whether or not it is a blur period, and a determination result bf is output.
- FIG. 4 is a diagram showing details of the blur correction unit 3.
- the blur correction unit 3 includes a contour shape calculation unit 15 and a pixel conversion unit 16.
- the contour shape calculation unit 15 calculates a contour shape based on the input motion blur determination result bf, and outputs a conversion control signal j to the pixel conversion unit 16.
- the pixel conversion unit 16 converts the video signal d2 based on the conversion control signal j and the input video signals d1 and d3, and outputs the converted video signal k.
- FIGS. 5(a) to 5(f) are diagrams explaining the relationship between the video signal d0 input to the delay unit 1 and the output video signals d1, d2, and d3.
- the input video signals d0 of the frames F0, F1, F2, and F3 are sequentially input.
- The frame memory control unit 5 generates a frame memory write address based on the input vertical synchronization signal SYI and stores the input video signal d0 in the frame memory 6. In synchronization with the output vertical synchronization signal SYO (identical to the input vertical synchronization signal SYI), it outputs the video signal d1 (the video signals of frames F0, F1, F2, and F3), as shown in FIG. 5(d).
- The frame memory control unit 5 also generates frame memory read addresses based on the input vertical synchronization signal, and reads and outputs the 1-frame delayed video signal d2 (FIG. 5(e)) and the 2-frame delayed video signal d3 (FIG. 5(f)) stored in the frame memory 6.
- The delay unit 1 thus outputs the video signals d1, d2, and d3 of three consecutive frames simultaneously. That is, at the timing (frame period) at which the video signal of frame F2 is input as d0, the video signals of frames F2, F1, and F0 are output as d1, d2, and d3; at the timing at which the video signal of frame F3 is input as d0, the video signals of frames F3, F2, and F1 are output as d1, d2, and d3.
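The timing relationship above can be sketched in a few lines of Python. This is a minimal software stand-in for delay unit 1's two-frame memory, not the hardware design; repeating the first frame at start-up is an assumption, since the patent does not describe initial conditions.

```python
from collections import deque

def frame_delayer(frames):
    """Yield (d1, d2, d3): the undelayed, 1-frame delayed, and 2-frame
    delayed video signals, mimicking delay unit 1's two-frame memory."""
    buf = deque(maxlen=2)  # frame memory 6: the two most recent past frames
    for d0 in frames:
        if not buf:
            buf.extend([d0, d0])  # start-up padding (assumption)
        # d1: current frame, d2: one frame earlier, d3: two frames earlier
        yield d0, buf[-1], buf[-2]
        buf.append(d0)

outputs = list(frame_delayer(["F0", "F1", "F2", "F3"]))
```

When frame F3 is input, the three outputs are F3, F2, and F1, matching the frame periods described above.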
- the continuous three frames of video signals d1, d2, and d3 output from the delay unit 1 are output to the blur detection unit 2 and the blur correction unit 3.
- the video signals d1, d2, and d3 input to the blur detection unit 2 are input to the LPFs 7, 8, and 9, respectively.
- FIG. 6 shows an example of the video signals d1, d2, and d3.
- The horizontal direction indicates the pixel position and the vertical direction the gradation; the figure shows a contour portion of an image in which the gradation changes gently.
- The LPFs 7, 8, and 9 remove the high-frequency components of the video signals shown in FIG. 6. Blurring due to the frame accumulation time has a relatively wide transition width; in other words, it is a problem of images with relatively fast motion. The high-frequency components of the input signal are therefore unnecessary for detecting blur due to the frame accumulation time.
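As an illustration of this filtering step, a simple moving-average low-pass filter can stand in for LPFs 7 to 9. The patent does not specify the filter kernel, so the 3-tap average and edge replication below are assumptions.

```python
def lowpass(signal, taps=3):
    """Moving-average low-pass filter over a 1-D line of pixel values,
    an illustrative stand-in for LPFs 7-9 (kernel is an assumption)."""
    half = taps // 2
    # replicate edge pixels so the output keeps the input length
    padded = [signal[0]] * half + list(signal) + [signal[-1]] * half
    return [sum(padded[i:i + taps]) / taps for i in range(len(signal))]
```

A single-pixel spike (a high-frequency component) is spread and attenuated, while flat regions pass through unchanged.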
- FIG. 7 shows an example of a video signal e obtained by removing high frequency components from the video signal shown in FIG. 6 using LPFs 7, 8, and 9.
- the video signal e2 generated by removing the high-frequency component of the one-frame delayed video signal d2 is supplied to the differentiation unit 10 and the difference detection units 11 and 12.
- the differentiating unit 10 calculates a differential value of the input video signal e2. In calculating the differential value, an absolute value of a difference between adjacent pixels is obtained.
- FIG. 8 shows the differentiation result f of the video signal e2 shown in FIG.
- the differentiation unit 10 outputs a differentiation result (differentiation result signal) f to the transition period detection unit 13.
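The adjacent-pixel differentiation performed by the differentiation unit 10 can be sketched as follows; padding the output with a leading zero to keep it the same length as the input is an assumption, since the patent does not specify boundary handling.

```python
def differentiate(signal):
    """Differentiation unit 10: absolute value of the difference
    between adjacent pixels of a 1-D line of pixel values."""
    return [0] + [abs(b - a) for a, b in zip(signal, signal[1:])]
```

A gently rising contour yields small nonzero values over the transition and zero in the flat regions, as in FIG. 8.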
- FIG. 9 shows a detailed configuration of the transition period detection unit 13.
- the illustrated transition period detection unit 13 includes a ternarization unit 17 and a determination flag generation unit 18.
- the ternarization unit 17 ternarizes the input signal f using two predetermined threshold values S1 and S2 (S1 ⁇ S2), and outputs a ternary signal fk.
- the determination flag generator 18 outputs a transition period determination result h based on the ternary signal fk.
- FIGS. 10(a) to 10(c) show examples of the signals in the transition period detection unit 13; the horizontal axis indicates the pixel position.
- FIG. 10(a) shows the input signal f.
- FIG. 10(b) shows the ternary signal fk obtained by ternarizing the signal f shown in FIG. 10(a).
- FIG. 10(c) shows the transition period determination result h determined from the ternary signal fk shown in FIG. 10(b).
- The threshold value S1 is set, and when the signal f is smaller than S1, the pixel is determined not to be within a transition period (the signal h indicating a transition period is set to Lo), so that it is excluded from the correction target.
- A contour with a steep slope occurs when the subject moves slowly. Therefore, the threshold value S2 is set, and when the signal f is larger than S2, the pixel is likewise determined not to be within a transition period (h is set to Lo) and excluded from the correction target.
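The two-threshold classification above can be sketched as follows. This is a simplified per-pixel model: the ternarization against S1 < S2 follows the description of unit 17, but deriving the flag h pixel-by-pixel is an assumption about the internals of the determination flag generator 18.

```python
def ternarize(f, s1, s2):
    """Ternarization unit 17: classify each differential value against
    thresholds S1 < S2 (0: below S1, 1: between, 2: above S2)."""
    return [0 if v < s1 else (2 if v > s2 else 1) for v in f]

def transition_flag(fk):
    """Determination flag generator 18, simplified: Hi (True) only for
    the middle state, i.e. slopes neither too gentle nor too steep to
    be motion blur (per-pixel derivation is an assumption)."""
    return [v == 1 for v in fk]

fk = ternarize([0.5, 3.0, 12.0, 4.0, 0.2], s1=1.0, s2=10.0)
h = transition_flag(fk)
```

Pixels with a very small or very large slope are flagged Lo and thus excluded from correction.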
- FIGS. 11(a) and 11(b) are diagrams showing the relationship between the signals e1, e2, and e3 of three consecutive frames. Since images transmitted at 60 Hz are usually captured with a frame accumulation time of 1/60 second, when the subject is moving, the contours Ce3, Ce2, and Ce1 are observed continuously across frames, as shown in FIGS. 11(a) and 11(b).
- Specifically, when the subject is moving, the gradation change of the 1-frame delayed signal e2 starts from the pixel position where the gradation change of the 2-frame delayed signal e3 ends.
- Similarly, the gradation of the current frame signal e1 starts to change from the pixel position where the gradation change of the 1-frame delayed signal e2 ends.
- FIG. 12 is a diagram showing a detailed configuration of the difference detection unit 11.
- the illustrated difference detection unit 11 includes a difference calculation unit 19 and a difference correction unit 20.
- the difference calculation unit 19 calculates a difference between the input signals e1 and e2 and outputs a calculation result de.
- the difference calculation result de is supplied to the difference correction unit 20.
- the difference correction unit 20 corrects the difference calculation result de using a threshold value S3 given in advance, and generates and outputs a difference correction signal g1.
- FIGS. 13(a) to 13(c) are diagrams explaining the operation of the difference detection unit 11; the horizontal axis of each figure indicates the pixel position.
- FIG. 13(a) shows the two input frame signals e1 and e2.
- FIG. 13(c) shows the generated difference correction signal g1.
- When the input signals e2 and e1 of two consecutive frames (FIG. 13(a)) contain a portion where the subject is moving, the interframe difference obtained by the difference calculation unit 19 is a difference calculation result de with a peak at the pixel position Ee where the contours continue between frames, as shown in FIG. 13(b).
- The difference calculation result de output from the difference calculation unit 19 is input to the difference correction unit 20.
- The difference correction unit 20 reduces the magnitude of the difference calculation result de by the threshold value S3; that is, S3 is subtracted from the absolute value of de.
- In this way, the difference correction signal g1 (FIG. 13(c)), in which erroneous detection due to noise is reduced, is generated and output.
- The configuration and operation of the difference detection unit 12 are the same as those of the difference detection unit 11, except that the signals e2 and e3 are input instead of e1 and e2, and the signal g2 is output instead of g1.
- The configuration and operation of the difference detection unit 27 are the same as those of the difference detection unit 11, except that the signals e1 and e3 are input instead of e1 and e2, and the signal g3 is output instead of g1.
- FIGS. 14(a) to 14(d) show the process by which the difference detection unit 27 obtains the difference de (FIG. 14(b)) between the signals e1 and e3 (FIG. 14(a)) and further generates the difference correction signal g3 (FIG. 14(c)).
- The signals g1, g2, and g3 are given by the following formulas, where de1, de2, and de3 denote the respective difference calculation results:
  if |de1| > S3, then g1 = |de1| - S3; otherwise g1 = 0
  if |de2| > S3, then g2 = |de2| - S3; otherwise g2 = 0
  if |de3| > S3, then g3 = |de3| - S3; otherwise g3 = 0
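The thresholded difference above is a coring operation, and a minimal sketch of one difference detection unit (difference calculation unit 19 followed by difference correction unit 20) is:

```python
def difference_correct(e_a, e_b, s3):
    """Per-pixel interframe difference |de| cored by threshold S3:
    g = |de| - S3 where |de| > S3, else 0 (noise suppression)."""
    return [max(abs(a - b) - s3, 0) for a, b in zip(e_a, e_b)]

# small differences (|de| <= S3) are treated as noise and zeroed
g = difference_correct([10, 10, 10, 10], [10, 6, 9, 2], 2)
```

Units 11, 12, and 27 apply the same operation to the pairs (e1, e2), (e2, e3), and (e1, e3), respectively.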
- The difference correction signal g3 (FIG. 14(c)) output from the difference detection unit 27 is input to the differentiation unit 28. Like the differentiation unit 10, the differentiation unit 28 calculates the differential value f3 (FIG. 14(d)) of the signal g3, again as the absolute value of the difference between adjacent pixels.
- In the vicinity of the contour portion Ce2 of the frame-of-interest signal (the 1-frame delayed signal), the difference between the preceding and following frames (the difference between the 2-frame delayed signal e3 and the current frame signal e1) is flat, so its differential f3 (FIG. 14(d)) is almost zero there.
- The difference correction signals g1 and g2 output from the difference detection units 11 and 12, the differential value f3 output from the differentiation unit 28, and the transition period determination result h output from the transition period detection unit 13 are input to the blur period determination unit 14.
- FIG. 15 is a diagram illustrating a configuration of the blur period determination unit 14.
- the differential value f3 is input to the binarization unit 22.
- the binarization unit 22 binarizes the differential value f3 using a threshold value S4 given in advance, and outputs a binarized difference correction signal dg3 to the state determination unit 25.
- the difference correction signal g1 is input to the binarization unit 23.
- The binarization unit 23 binarizes the input difference correction signal g1 using a predetermined threshold value S5, and outputs the binarized difference correction signal dg1 to the state determination unit 25.
- the difference correction signal g2 is input to the binarization unit 24.
- The binarization unit 24 binarizes the input difference correction signal g2 using a predetermined threshold value S6, and outputs the binarized difference correction signal dg2 to the state determination unit 25.
- the state determination unit 25 generates a contour state flag gs based on the binarized difference correction signals dg1, dg2, and dg3, and outputs the contour state flag gs to the blur determination unit 26.
- the blur determination unit 26 outputs a blur detection flag bf based on the contour state flag gs and the transition period determination result h.
- FIGS. 16(a) and 16(b) are diagrams explaining the operation of the binarization unit 22, showing the relationship between the input signal f3 and the output signal dg3.
- The input signal f3 (FIG. 16(a)) is binarized to 1 where its absolute value is larger than the predetermined threshold value S4 and to 0 where it is smaller, and the result is output to the state determination unit 25.
- The signal dg3 obtained as a result of the binarization is shown in FIG. 16(b).
- the operations of the binarizing units 23 and 24 are the same as those of the binarizing unit 22, but signals g1 and g2 are input instead of the signal f3, and signals dg1 and dg2 are output instead of the signal dg3.
- FIG. 17 is a diagram illustrating an internal configuration of the state determination unit 25.
- the illustrated state determination unit 25 includes a state comparison unit 21 and a state correction unit 29.
- the binarized difference correction signals dg1 and dg2 input to the state determination unit 25 are input to the state comparison unit 21.
- the state comparison unit 21 outputs the state comparison signal gss to the state correction unit 29 as a result of comparing the two input difference correction signals dg1 and dg2.
- The state correction unit 29 corrects the state comparison signal gss based on the binarized difference correction signal dg3 and outputs a state correction signal gs.
- FIGS. 18(a) to 18(c) are diagrams explaining the operation of the state comparison unit 21. It is assumed that the binarized difference correction signals dg1 and dg2 input to the state comparison unit 21 are as shown in FIGS. 18(a) and 18(b), respectively.
- the state comparison unit 21 outputs a state comparison signal gss (FIG. 18C) having three states (A, B, C) according to the states of the two binarized difference correction signals.
- When the binarized difference correction signals dg1 and dg2 are both Lo, the state comparison signal gss is in the state A.
- the state comparison signal gss is in the state B when the states of the binary difference correction signals dg1 and dg2 are different from each other (when dg1 ⁇ dg2).
- the state comparison signal gss is in the state C when the binary difference correction signals dg1 and dg2 are both Hi.
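The three-way classification performed by the state comparison unit 21 can be sketched directly from the rules above (representing Lo/Hi as 0/1 and the states as the letters 'A', 'B', 'C' is an illustrative choice):

```python
def state_compare(dg1, dg2):
    """State comparison unit 21: per pixel, 'A' when both binarized
    difference flags are Lo, 'B' when they differ, 'C' when both Hi."""
    out = []
    for a, b in zip(dg1, dg2):
        if a != b:
            out.append("B")  # dg1 != dg2
        elif a:
            out.append("C")  # both Hi
        else:
            out.append("A")  # both Lo
    return out

gss = state_compare([0, 1, 1, 0], [0, 0, 1, 1])
```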
- FIGS. 19A to 19E are diagrams for explaining the operation of the state correction unit 29.
- the state comparison signal gss input from the state comparison unit 21 is a signal having three states A, B, and C as shown in FIG.
- The state correction unit 29 performs correction according to the state of the binarized difference correction signal dg3 (FIGS. 19(b) and 19(d)) and generates the state correction signal (flag) gs (FIGS. 19(c) and 19(e)).
- the state correction signal gs output from the state correction unit 29 is a signal having four states.
- FIG. 20A to 20C are diagrams for explaining the operation of the blur determination unit 26.
- The state correction signal gs (FIG. 20(a)) output from the state determination unit 25 is a signal having four states A, B, C, and D, and the transition period determination result h is a binary signal, as shown in FIG. 20(b).
- The blur determination unit 26 acts on the states B, C, and D of the state correction signal gs: when the transition period determination signal h is Lo, it converts each of the states B, C, and D into state A and outputs the result. State A is a state that is not subject to blur correction.
- When the transition period determination signal h is Hi, the blur determination unit 26 outputs the states B, C, and D as B, C, and D unchanged. A signal in state A is output as-is regardless of whether the transition period determination signal h is Hi or Lo. In this way, the blur determination unit 26 generates and outputs a motion blur detection flag bf (FIG. 20(c)) having four states A, B, C, and D.
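The gating of the state correction signal by the transition period determination signal can be sketched as follows (Lo/Hi encoded as 0/1 and states as letters; an illustrative sketch, not the patent's own implementation):

```python
def blur_determination(gs: str, h: int) -> str:
    """Sketch of the blur determination unit 26 (FIG. 20): states
    B, C, and D of the state correction signal gs survive only while
    the transition period determination signal h is Hi (1); otherwise
    they are converted to state A, which is exempt from blur
    correction. State A always passes through unchanged."""
    if gs in ("B", "C", "D") and h == 0:
        return "A"
    return gs
```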
- The motion blur detection flag bf having four states A, B, C, and D output from the blur detection unit 2 and the video signals d1, d2, and d3 output from the delay unit 1 are input to the blur correction unit 3.
- The configuration of the blur correction unit 3 is as shown in FIG. 4; here, its operation will be described.
- FIG. 21 is a diagram showing a detailed configuration of the contour shape calculation unit 15.
- The contour shape calculation unit 15 includes a pixel counter unit 30, a state D counter unit 31, a center detection unit 32, a core position determination unit 33, and a conversion signal generation unit 34.
- the motion blur detection flag bf is input to the pixel counter unit 30, the state D counter unit 31, and the conversion signal generation unit 34.
- the pixel counter unit 30 outputs a count value c1 obtained by counting the pixel clock from the start of processing of each line on the screen as data indicating the pixel position on the line.
- The state D counter unit 31 outputs the count value c2, obtained by counting the pixel clock from the start to the end of state D included in the motion blur detection flag bf, as data indicating the width (duration) of state D.
- the pixel clock is a clock generated in order to synchronize processing in each unit in the image processing apparatus, and a signal of one pixel is processed each time the pixel clock is generated.
- the count results c1 and c2 output from the pixel counter unit 30 and the state D counter unit 31 are input to the center detection unit 32.
- The center detection unit 32 detects, from the count results c1 and c2, (data indicating) the center position c3 of the state D period.
- the detected center position c3 of the state D period is output to the core position determination unit 33.
- the core position determination unit 33 calculates a core region at the time of blur correction using a threshold value S7 given in advance, and outputs a core region determination flag c4.
- the core region determination flag c4 is sent to the converted signal generator 34.
- the conversion signal generation unit 34 outputs a conversion control signal j based on the input motion blur detection flag bf and the core area determination flag c4.
- FIGS. 22A to 22D show an operation of generating the core area determination flag c4 from the motion blur detection flag bf, the pixel counter output c1, and the state D counter output c2.
- When the motion blur detection flag bf shown in FIG. 22(a) is input to the state D counter unit 31, the count value c2 of state D (FIG. 22(c)) is counted up to 9, so the maximum count value c2max is 9.
- Based on the maximum count value c2max of state D and the output result c1 of the pixel counter unit 30 at the time the maximum count value c2max is reached (FIG. 22(b)), the center detection unit 32 calculates the center position c3 = c1 - (c2max - 1)/2 = 12 - (9 - 1)/2 = 8 of the state D period (rounding any fraction up or down).
- The core position determination unit 33 calculates the core region at the time of blur correction based on the center position c3 of state D output from the center detection unit 32 and a predetermined threshold value S7, and outputs the core region determination flag c4 (FIG. 22(d)).
- The pixel positions from c3 - S7 to c3 + S7 are set as the core region, and the core region determination flag c4 is set to Hi during that period.
- In the illustrated example, the threshold value S7 is "2".
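Putting the counter, center detection, and core position determination together, the generation of the core region determination flag c4 could be sketched as below. The run-scanning loop and the 1-based pixel counter are assumptions of the sketch; the rounding follows the c3 = c1 - (c2max - 1)/2 calculation described for FIG. 22.

```python
def core_region_flag(bf, S7):
    """Sketch of the contour shape calculation (FIGS. 21-22): find the
    longest run of state "D" in the motion blur detection flag bf,
    take its center c3 = c1 - (c2max - 1) // 2, where c1 is the pixel
    counter value at the end of the run and c2max the run length, and
    mark pixel positions c3 - S7 .. c3 + S7 as the core region
    (flag value 1 = Hi, 0 = Lo)."""
    c2max, c1_at_max = 0, 0
    run = 0
    for c1, state in enumerate(bf, start=1):  # pixel counter c1, 1-based
        run = run + 1 if state == "D" else 0
        if run > c2max:
            c2max, c1_at_max = run, c1
    c3 = c1_at_max - (c2max - 1) // 2
    return [1 if c3 - S7 <= c1 <= c3 + S7 else 0
            for c1 in range(1, len(bf) + 1)]
```

With the figure's values (a state-D run of 9 pixels ending at pixel 12 and S7 = 2), c3 comes out as 8 and the core region spans pixels 6 through 10.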
- FIGS. 23A to 23C are diagrams for explaining the operation of the converted signal generation unit 34.
- They show the relationship between the input motion blur detection flag bf (FIG. 23(a)), the core region determination flag c4 (FIG. 23(b)), and the output conversion control signal j (FIG. 23(c)).
- In the conversion signal generation unit 34, the state of the input motion blur detection flag bf is converted into state E and output during the period in which the core region determination flag c4 is Hi; during other periods, bf is output as-is as the conversion control signal j. That is, the conversion control signal j output from the conversion signal generation unit 34 is a signal having five states A, B, C, D, and E.
- the conversion control signal j output from the contour shape calculation unit 15 is input to the pixel conversion unit 16 together with the video signals d1, d2, and d3.
- the pixel conversion unit 16 generates a video signal k output to the image display unit 4 from the input video signals d1, d2, and d3 based on the state of the conversion control signal j.
- FIGS. 24A to 24C show an example of the process of generating the video signal k.
- The video signals d1, d2, and d3 input to the pixel conversion unit 16 are as shown in FIG. 24(a), and the conversion control signal j is as shown in FIG. 24(b).
- When the conversion control signal j is in a state other than D, the video signal d2 is output as the output video signal k. In particular, d2 is output as k in the state E (the core region located in the central part of the blur period).
- In the state D (the area of the blur period other than the core region), whichever of d3 and d1 has the smaller difference from d2 is output as the output video signal k.
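A minimal sketch of this selection rule of the pixel conversion unit 16; treating every state other than D as passing d2 through is an assumption of the sketch for states A, B, and C:

```python
def convert_pixel(j: str, d1: int, d2: int, d3: int) -> int:
    """Sketch of the pixel conversion unit 16 (FIG. 24): in state D
    (blur period outside the core region) output whichever of d3 and
    d1 is closer to d2; in state E (core region) and, by assumption,
    in states A, B, C, output d2 unchanged."""
    if j == "D":
        return d3 if abs(d2 - d1) > abs(d2 - d3) else d1
    return d2
```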
- As described above, the motion-blur region included in the video is detected, and the blur correction width is matched to the size of the detected motion blur, so that the image quality when displaying a moving image can be improved.
- In addition, since the circuit scale is relatively small, an energy saving effect can also be obtained.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Hardware Design (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Television Systems (AREA)
- Picture Signal Circuits (AREA)
- Control Of Indicators Other Than Cathode Ray Tubes (AREA)
- Liquid Crystal Display Device Control (AREA)
- Image Analysis (AREA)
Abstract
Description
Furthermore, when the movement of a subject is identified using a motion vector detection technique and filtering is performed adaptively, the scale of the detection circuit increases, so there has been a problem that such an approach is difficult to use in terms of cost.
The motion blur detecting apparatus according to the present invention comprises:
delay means for generating a first video signal that is not frame-delayed with respect to an input video signal, a second video signal that is delayed by a first predetermined number of frames with respect to the input video signal, and a third video signal that is delayed by the first predetermined number of frames with respect to the second video signal;
first difference detection means for detecting a gradation difference between the first video signal and the second video signal;
second difference detection means for detecting a gradation difference between the second video signal and the third video signal;
third difference detection means for detecting a gradation difference between the first video signal and the third video signal;
differentiation means for detecting a change, between adjacent pixels, in the signal of the gradation difference detected by the third difference detection means; and
transition period detection means for detecting a gradation change between adjacent pixels in the second video signal and detecting, based on the gradation change, a gradation transition period included in the video signal,
and is characterized in that a motion blur period is detected based on the gradation difference detected by the first difference detection means, the gradation difference detected by the second difference detection means, the differentiation result detected by the differentiation means, and the gradation transition period of the second video signal detected by the transition period detection means.
FIG. 1 is a block diagram showing the configuration of an image display apparatus according to the present invention. The illustrated image display apparatus 81 includes a delay unit 1, a blur detection unit 2, a blur correction unit 3, and an image display unit 4. Of these, the delay unit 1 and the blur detection unit 2 constitute a motion blur detecting apparatus.
The video signal input to the image display apparatus 81 is supplied to the delay unit 1. Using a frame memory, the delay unit 1 frame-delays the input signal and outputs a plurality of frame-delayed signals d1 to d3 to the blur detection unit 2 and the blur correction unit 3.
The LPF 7 removes the high-frequency components of the current-frame video signal d1 output from the delay unit 1 to generate a video signal e1, and outputs it to the difference detection units 11 and 27.
The LPF 8 removes the high-frequency components of the 1-frame-delayed video signal d2 output from the delay unit 1 to generate a video signal e2, and outputs it to the differentiation unit 10 and the difference detection units 11 and 12.
The LPF 9 removes the high-frequency components of the 2-frame-delayed video signal d3 output from the delay unit 1 to generate a video signal e3, and outputs it to the difference detection units 12 and 27.
Based on the detection result f of the differentiation unit 10, the transition period determination unit 13 determines contours that are expected to be motion blur, and outputs a determination result h to the blur period determination unit 14.
The difference detection unit 12 takes the difference between the input video signals e3 and e2, i.e., the difference over one frame for each pixel of the video signal, and outputs a difference correction signal g2 to the blur period determination unit 14.
The differentiation unit 28 detects the amount of change between adjacent pixels of the difference correction signal g3, i.e., the difference between adjacent pixels, and outputs a 2-frame difference differentiation result f3 to the blur period determination unit 14.
Based on the input motion blur determination result bf, the contour shape calculation unit 15 calculates the contour shape and outputs a conversion control signal j to the pixel conversion unit 16.
The pixel conversion unit 16 converts the video signal d2 based on the conversion control signal j and the input video signals d1 and d3, and outputs a converted video signal k.
The video signal d0 input to the image display apparatus 81 is input to the delay unit 1.
FIGS. 5(a) to 5(f) are diagrams for explaining the relationship between the video signal d0 input to the delay unit 1 and the output video signals d1, d2, and d3. In synchronization with the input vertical synchronization signal SYI shown in FIG. 5(a), the input video signal d0 of frames F0, F1, F2, and F3 is input sequentially, as shown in FIG. 5(b). The frame memory control unit 5 generates a frame memory write address based on the input vertical synchronization signal SYI and stores the input video signal d0 in the frame memory 6; in synchronization with the output vertical synchronization signal SYO shown in FIG. 5(c) (illustrated as having no delay with respect to the input vertical synchronization signal SYI), it outputs a video signal d1 that has no frame delay with respect to the input video signal d0 (the video signals of frames F0, F1, F2, and F3), as shown in FIG. 5(d). The frame memory control unit 5 also generates frame memory read addresses based on the input vertical synchronization signal, and reads out and outputs the 1-frame-delayed video signal d2 (FIG. 5(e)) and the 2-frame-delayed video signal d3 (FIG. 5(f)) stored in the frame memory 6. As a result, the delay unit 1 simultaneously outputs the video signals d1, d2, and d3 of three consecutive frames. That is, at the timing (frame period) at which the video signal of frame F2 is input as the video signal d0, the video signals of frames F2, F1, and F0 are output as the video signals d1, d2, and d3, and at the timing (frame period) at which the video signal of frame F3 is input as the video signal d0, the video signals of frames F3, F2, and F1 are output as the video signals d1, d2, and d3.
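The behavior of the delay unit can be sketched with a two-frame buffer as follows. The start-up behavior (what d2 and d3 are before two frames have arrived) is an assumption of the sketch, since the text does not specify it:

```python
from collections import deque

def delay_unit(frames):
    """Sketch of the delay unit 1: for each incoming frame d0, emit
    (d1, d2, d3) = (current frame, 1-frame-delayed frame,
    2-frame-delayed frame), using a frame memory that holds the last
    two frames. Before two frames have accumulated, the oldest
    available frame is reused (an assumption of the sketch)."""
    memory = deque(maxlen=2)  # frame memory 6: holds two frames
    for d0 in frames:
        d2 = memory[-1] if memory else d0  # 1 frame ago
        d3 = memory[0] if memory else d0   # 2 frames ago
        yield d0, d2, d3
        memory.append(d0)
```

For the frame sequence F0, F1, F2, F3 this reproduces the timing in FIG. 5: while F2 is being input, (d1, d2, d3) = (F2, F1, F0).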
FIG. 7 shows an example of the video signal e obtained by removing the high-frequency components of the video signal shown in FIG. 6 using the LPFs 7, 8, and 9.
The differentiation unit 10 outputs a differentiation result (differentiation result signal) f to the transition period detection unit 13.
The ternarization unit 17 ternarizes the input signal f using two predetermined threshold values S1 and S2 (S1 < S2), and outputs a ternarized signal fk.
The determination flag generation unit 18 outputs the transition period determination result h based on the ternarized signal fk.
As described above, the signal h is:
h = Hi when S1 < f < S2, and
h = Lo otherwise, i.e., when f ≤ S1 or f ≥ S2.
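A minimal sketch of this flag generation (Hi/Lo encoded as 1/0, which is an assumption of the sketch):

```python
def transition_flag(f: float, S1: float, S2: float) -> int:
    """Sketch of the ternarization unit 17 and determination flag
    generation unit 18: the transition period determination result h
    is Hi (1) only when the differentiation result f lies strictly
    between the thresholds S1 and S2 (S1 < S2); otherwise Lo (0)."""
    return 1 if S1 < f < S2 else 0
```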
FIGS. 11(a) and 11(b) are diagrams showing the relationship among the signals e1, e2, and e3 of three consecutive frames.
Normally, video transmitted at 60 Hz is captured with a frame accumulation time of 1/60 second, so when the subject is moving, its contour portions Ce3, Ce2, and Ce1 are observed continuously across frames, as shown in FIGS. 11(a) and 11(b). Specifically, when the subject is moving, the gradation change of the 1-frame-delayed signal e2 starts from the pixel position at which the gradation change of the 2-frame-delayed signal e3 ends, as shown in FIG. 11(a). Similarly, as shown in FIG. 11(b), the gradation of the current-frame signal e1 starts to change from the pixel position at which the gradation change of the 1-frame-delayed signal e2 ends.
The difference calculation unit 19 calculates the difference between the input signals e1 and e2, and outputs a calculation result de. The difference calculation result de is supplied to the difference correction unit 20.
The difference correction unit 20 corrects the difference calculation result de using a threshold value S3 given in advance, and generates and outputs a difference correction signal g1.
FIG. 13(a) shows the input signals e1 and e2 for two frames. FIG. 13(b) shows the difference calculation result de (= e2 - e1). FIG. 13(c) shows the generated difference correction signal g1. In each figure, the horizontal axis indicates the pixel position.
When the input signals e2 and e1 of two consecutive frames (FIG. 13(a)) contain a portion in which the subject is moving, taking the inter-frame difference in the difference calculation unit 19 yields a difference calculation result de that peaks at the pixel position Ee at which the contours are continuous between frames, as shown in FIG. 13(b).
As will be understood from the following description, only the absolute value of the difference matters in the processing within the blur detection unit 2, so when the difference detection unit 27 obtains a difference, it does not matter which of the two signals is subtracted from which. The same applies to the difference detection units 11 and 12.
FIGS. 14(a) to 14(c) show the process in which the difference detection unit 27 obtains the difference de between e1 and e3 (FIG. 14(b)) from the signals e1 and e3 (FIG. 14(a)), and further generates the difference correction signal g3 (FIG. 14(c)).
When |e2 - e1| > S3: g1 = |e2 - e1| - S3;
otherwise: g1 = 0.
When |e3 - e2| > S3: g2 = |e3 - e2| - S3;
otherwise: g2 = 0.
When |e3 - e1| > S3: g3 = |e3 - e1| - S3;
otherwise: g3 = 0.
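All three corrections apply the same coring rule, which can be sketched per pixel as g = max(|difference| - S3, 0):

```python
def difference_correction(ea, eb, S3):
    """Sketch of the difference detection/correction (units 19-20):
    for each pixel, take the absolute inter-frame difference, subtract
    the threshold S3 when the difference exceeds it, and output 0
    otherwise."""
    return [max(abs(a - b) - S3, 0) for a, b in zip(ea, eb)]
```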
The difference correction signal g1 is input to the binarization unit 23. Like the binarization unit 22, the binarization unit 23 binarizes the input difference correction signal g1 using a threshold value S5 given in advance, and outputs a binarized difference correction signal dg1 to the state determination unit 25.
Similarly, the difference correction signal g2 is input to the binarization unit 24. Like the binarization units 22 and 23, the binarization unit 24 binarizes the input difference correction signal g2 using a threshold value S6 given in advance, and outputs a binarized difference correction signal dg2 to the state determination unit 25.
Based on the binarized difference correction signals dg1, dg2, and dg3, the state determination unit 25 generates a contour state flag gs and outputs it to the blur determination unit 26. The blur determination unit 26 outputs a blur detection flag bf based on the contour state flag gs and the transition period determination result h.
The input signal f3 (FIG. 16(a)) is binarized to 1 when its absolute value is larger than a threshold value S4 given in advance, and to 0 when it is smaller, and the result is output to the state determination unit 25. The signal dg3 obtained as a result of the binarization is shown in FIG. 16(b).
The operations of the binarization units 23 and 24 are the same as that of the binarization unit 22, except that the signals g1 and g2 are input instead of the signal f3, and the signals dg1 and dg2 are output instead of the signal dg3.
The binarized difference correction signals dg1 and dg2 input to the state determination unit 25 are input to the state comparison unit 21. The state comparison unit 21 compares the two input difference correction signals dg1 and dg2 and outputs a state comparison signal gss to the state correction unit 29. The state correction unit 29 corrects the state comparison signal gss based on the binarized difference correction signal dg3, and outputs a state correction signal gs.
Specifically, when the signal gss is in state A, gs is set to state A (the same as gss); when the signal gss is in state B, gs is set to state B (the same as gss).
On the other hand, when the signal gss is in state C and dg3 = Lo, gs is set to state D.
As shown in FIG. 19(d), when the signal gss is in state C and dg3 = Hi, gs is set to state C (the same as gss), as shown in FIG. 19(e).
As described above, the state correction signal gs output from the state correction unit 29 is a signal having four states.
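A minimal sketch of this state correction rule (states as letters, dg3 as 0/1; these encodings are assumptions of the sketch):

```python
def correct_state(gss: str, dg3: int) -> str:
    """Sketch of the state correction unit 29 (FIG. 19): states A and
    B pass through unchanged; state C becomes state D when dg3 is
    Lo (0) and remains state C when dg3 is Hi (1)."""
    if gss == "C" and dg3 == 0:
        return "D"
    return gss
```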
The state correction signal gs shown in FIG. 20(a), output from the state determination unit 25, is a signal having four states A, B, C, and D, and the transition period determination result h is a binary signal, as shown in FIG. 20(b).
The blur determination unit 26 acts on the states B, C, and D of the state correction signal gs: when the transition period determination signal h is Lo, it converts each of the states B, C, and D into state A and outputs the result. State A is a state that is not subject to blur correction. On the other hand, when the transition period determination signal h is Hi, the states B, C, and D are output as B, C, and D unchanged. A signal in state A is output as-is regardless of whether the transition period determination signal h is Hi or Lo. In this way, the blur determination unit 26 generates and outputs a motion blur detection flag bf (FIG. 20(c)) having four states A, B, C, and D.
The motion blur detection flag bf is input to the pixel counter unit 30, the state D counter unit 31, and the conversion signal generation unit 34.
The state D counter unit 31 outputs the count value c2, obtained by counting the pixel clock from the start to the end of state D included in the motion blur detection flag bf, as data indicating the width (duration) of state D.
The pixel clock is a clock generated to synchronize the processing in each unit of the image processing apparatus; each time the pixel clock is generated, the signal of one pixel is processed.
The center detection unit 32 detects, from the count results c1 and c2, (data indicating) the center position c3 of the state D period. The detected center position c3 of the state D period is output to the core position determination unit 33. Using a threshold value S7 given in advance, the core position determination unit 33 calculates the core region at the time of blur correction, and outputs a core region determination flag c4.
The core region determination flag c4 is sent to the conversion signal generation unit 34. Based on the input motion blur detection flag bf and the core region determination flag c4, the conversion signal generation unit 34 outputs a conversion control signal j.
When the motion blur detection flag bf shown in FIG. 22(a) is input to the state D counter unit 31, the count value c2 of state D (FIG. 22(c)) is counted up to 9, so the maximum count value c2max is 9. Based on the maximum count value c2max of state D and the output result c1 of the pixel counter unit 30 at the time the maximum count value c2max is reached (FIG. 22(b)), the center detection unit 32 performs the calculation
c3 = c1 - (c2max - 1)/2
   = 12 - (9 - 1)/2 = 8
(rounding any fraction up or down) to obtain the center position c3 of state D. In the illustrated example, c3 is "8". The center position (the data indicating it) c3 is output to the core position determination unit 33.
In other words, the conversion control signal j output from the conversion signal generation unit 34 is a signal having five states A, B, C, D, and E.
If |d2 - d1| > |d2 - d3|, d3 is output as the output video signal k;
if |d2 - d1| ≤ |d2 - d3|, d1 is output as the output video signal k (FIG. 24(c)). That is, whichever of d3 and d1 has the smaller difference from d2 is output as the output video signal k.
With this processing, the transition width of the contour that causes blur can be narrowed compared with the case where the video signal d2 is used for display without it.
In addition, since the circuit scale is relatively small, an energy saving effect can also be obtained.
Claims (14)
- A motion blur detecting apparatus comprising:
delay means for generating a first video signal that is not frame-delayed with respect to an input video signal, a second video signal that is delayed by a first predetermined number of frames with respect to the input video signal, and a third video signal that is delayed by the first predetermined number of frames with respect to the second video signal;
first difference detection means for detecting a gradation difference between the first video signal and the second video signal;
second difference detection means for detecting a gradation difference between the second video signal and the third video signal;
third difference detection means for detecting a gradation difference between the first video signal and the third video signal;
differentiation means for detecting a change, between adjacent pixels, in the signal of the gradation difference detected by the third difference detection means; and
transition period detection means for detecting a gradation change between adjacent pixels in the second video signal and detecting, based on the gradation change, a gradation transition period included in the video signal,
wherein a motion blur period is detected based on the gradation difference detected by the first difference detection means, the gradation difference detected by the second difference detection means, the differentiation result detected by the differentiation means, and the gradation transition period of the second video signal detected by the transition period detection means. - The motion blur detecting apparatus according to claim 1, wherein the delay means has a frame memory capable of accumulating a plurality of frames of the input video signal, generates the first video signal without frame-delaying the input video signal, generates the second video signal by delaying the first video signal by the first predetermined number of frames using the frame memory, and generates the third video signal by delaying the second video signal by the first predetermined number of frames using the frame memory.
- The motion blur detecting apparatus according to claim 2, wherein the delay means has a frame memory capable of accumulating two frames, and generates the first video signal, the second video signal obtained by delaying the first video signal by one frame, and the third video signal obtained by delaying the second video signal by one frame. - The motion blur detecting apparatus according to claim 1, wherein the transition period detection means determines that a pixel is within a transition period when the absolute value of the gradation change between adjacent pixels of the second video signal is larger than a first predetermined threshold value and smaller than a second predetermined threshold value.
- The motion blur detecting apparatus according to any one of claims 1 to 4, wherein the blur determination means determines the motion blur period based on
the gradation transition period of the second video signal detected by the transition period detection means,
the gradation difference between the first video signal and the second video signal detected by the first difference detection means, and
the gradation difference between the second video signal and the third video signal detected by the second difference detection means.
- The motion blur detecting apparatus according to claim 5, wherein the blur determination means determines that a blur period exists when
the absolute value of the gradation difference between the first video signal and the second video signal is larger than a predetermined value,
the absolute value of the gradation difference between the second video signal and the third video signal is larger than a predetermined value,
the absolute value of the differentiation result detected by the differentiation means is smaller than a predetermined value, and
the gradation of the second video signal is within a transition period.
- The motion blur detecting apparatus according to any one of claims 1 to 4, wherein the blur determination means determines the motion blur period based on the gradation transition period included in the second video signal detected by the transition period detection means and on the result of differentiating, by the differentiation means, the gradation difference between the first video signal and the third video signal detected by the difference detection means.
- The motion blur detecting apparatus according to claim 7, wherein the blur determination means determines that a period is a motion blur period only when the differentiation result of the differentiation means is smaller than a predetermined third threshold value.
- The motion blur detecting apparatus according to claim 1, 5, or 7, wherein the blur determination means determines that a period is a motion blur period only when at least one of the gradation difference between the first video signal and the second video signal detected by the first difference detection means and the gradation difference between the second video signal and the third video signal detected by the second difference detection means is larger than a predetermined fourth threshold value.
- An image processing apparatus comprising blur correction means for correcting the second video signal based on the detection result of a motion blur period by the blur detecting apparatus according to any one of claims 1 to 9,
wherein the blur correction means corrects the second video signal only during the motion blur period detected by the blur determination apparatus.
- The image processing apparatus according to claim 10, wherein the blur correction means selects and outputs one of the first video signal, the second video signal, and the third video signal based on the result determined by the blur determination means.
- The image processing apparatus according to claim 11, wherein the blur correction means has
a contour shape calculation unit and a pixel conversion unit,
the contour shape calculation unit
detects a core region located in the central portion of the blur period detected by the blur determination means, and
the pixel conversion unit
selects and outputs the second video signal in the core region, and
outputs, in the region of the blur period other than the core region, whichever of the first video signal and the third video signal has the smaller difference from the second video signal.
- An image display apparatus comprising the image processing apparatus according to any one of claims 10 to 12, and
display means for displaying an image based on the image data output from the image processing apparatus. - A motion blur detection method comprising:
a delay step of generating a first video signal that is not frame-delayed with respect to an input video signal, a second video signal that is delayed by a first predetermined number of frames with respect to the input video signal, and a third video signal that is delayed by the first predetermined number of frames with respect to the second video signal;
a first difference detecting step of detecting a gradation difference between the first video signal and the second video signal;
a second difference detecting step of detecting a gradation difference between the second video signal and the third video signal;
a third difference detecting step of detecting a gradation difference between the first video signal and the third video signal;
a differentiation step of detecting a change, between adjacent pixels, in the signal of the gradation difference detected in the third difference detecting step; and
a transition period detecting step of detecting a gradation change between adjacent pixels in the second video signal and detecting, based on the gradation change, a gradation transition period included in the video signal,
wherein a motion blur period is determined based on the gradation difference detected in the first difference detecting step, the gradation difference detected in the second difference detecting step, the differentiation result detected in the differentiation step, and the gradation transition period of the second video signal detected in the transition period detecting step.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2009506835A JPWO2009107487A1 (ja) | 2008-02-25 | 2009-02-12 | 動きぼやけ検出装置及び方法、画像処理装置、並びに画像表示装置 |
US12/517,434 US8218888B2 (en) | 2008-02-25 | 2009-02-12 | Motion blur detecting apparatus and method, image processing apparatus, and image display apparatus |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2008042546 | 2008-02-25 | ||
JP2008-042546 | 2008-02-25 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2009107487A1 true WO2009107487A1 (ja) | 2009-09-03 |
Family
ID=41015888
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2009/052285 WO2009107487A1 (ja) | 2008-02-25 | 2009-02-12 | 動きぼやけ検出装置及び方法、画像処理装置、並びに画像表示装置 |
Country Status (3)
Country | Link |
---|---|
US (1) | US8218888B2 (ja) |
JP (1) | JPWO2009107487A1 (ja) |
WO (1) | WO2009107487A1 (ja) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012070295A (ja) * | 2010-09-27 | 2012-04-05 | Hitachi Consumer Electronics Co Ltd | 映像処理装置及び映像処理方法 |
CN112749613A (zh) * | 2020-08-27 | 2021-05-04 | 腾讯科技(深圳)有限公司 | 视频数据处理方法、装置、计算机设备及存储介质 |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8224033B2 (en) * | 2008-06-24 | 2012-07-17 | Mediatek Inc. | Movement detector and movement detection method |
JP5267239B2 (ja) * | 2008-12-26 | 2013-08-21 | 株式会社リコー | 画像読み取り装置、画像読み取り方法、画像読み取りプログラム及び記録媒体 |
JP5649927B2 (ja) * | 2010-11-22 | 2015-01-07 | オリンパス株式会社 | 画像処理装置、画像処理方法、および、画像処理プログラム |
US9071818B2 (en) * | 2011-08-30 | 2015-06-30 | Organizational Strategies International Pte. Ltd. | Video compression system and method using differencing and clustering |
MY167521A (en) * | 2011-10-21 | 2018-09-04 | Organizational Strategies Int Pte Ltd | An interface for use with a video compression system and method using differencing and clustering |
US9152858B2 (en) | 2013-06-30 | 2015-10-06 | Google Inc. | Extracting card data from multiple cards |
US8837833B1 (en) | 2013-06-30 | 2014-09-16 | Google Inc. | Payment card OCR with relaxed alignment |
US10283031B2 (en) * | 2015-04-02 | 2019-05-07 | Apple Inc. | Electronic device with image processor to reduce color motion blur |
CN112164011B (zh) * | 2020-10-12 | 2023-02-28 | 桂林电子科技大学 | 基于自适应残差与递归交叉注意力的运动图像去模糊方法 |
US11770584B1 (en) * | 2021-05-23 | 2023-09-26 | Damaka, Inc. | System and method for optimizing video communications based on device capabilities |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH09312788A (ja) * | 1996-05-23 | 1997-12-02 | Nippon Hoso Kyokai <Nhk> | 雑音低減回路 |
JPH10262160A (ja) * | 1997-03-18 | 1998-09-29 | Nippon Hoso Kyokai <Nhk> | Sn比検出装置および雑音低減装置 |
JP2005318251A (ja) * | 2004-04-28 | 2005-11-10 | Sharp Corp | 3次元ノイズリダクション回路及び映像信号処理装置 |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2280812B (en) | 1993-08-05 | 1997-07-30 | Sony Uk Ltd | Image enhancement |
JP2002016820A (ja) | 2000-06-29 | 2002-01-18 | Victor Co Of Japan Ltd | 画質改善回路 |
JP2004080252A (ja) | 2002-08-14 | 2004-03-11 | Toshiba Corp | 映像表示装置及びその方法 |
US7561186B2 (en) * | 2004-04-19 | 2009-07-14 | Seiko Epson Corporation | Motion blur correction |
JP4764065B2 (ja) | 2005-05-12 | 2011-08-31 | 日本放送協会 | 画像表示制御装置、ディスプレイ装置及び画像表示方法 |
US7548659B2 (en) * | 2005-05-13 | 2009-06-16 | Microsoft Corporation | Video enhancement |
US7728909B2 (en) * | 2005-06-13 | 2010-06-01 | Seiko Epson Corporation | Method and system for estimating motion and compensating for perceived motion blur in digital video |
JP5036410B2 (ja) * | 2007-05-31 | 2012-09-26 | キヤノン株式会社 | 撮像装置及びその制御方法 |
US8098333B2 (en) * | 2007-06-29 | 2012-01-17 | Seiko Epson Corporation | Phase shift insertion method for reducing motion artifacts on hold-type displays |
-
2009
- 2009-02-12 WO PCT/JP2009/052285 patent/WO2009107487A1/ja active Application Filing
- 2009-02-12 US US12/517,434 patent/US8218888B2/en not_active Expired - Fee Related
- 2009-02-12 JP JP2009506835A patent/JPWO2009107487A1/ja active Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH09312788A (ja) * | 1996-05-23 | 1997-12-02 | Nippon Hoso Kyokai <Nhk> | 雑音低減回路 |
JPH10262160A (ja) * | 1997-03-18 | 1998-09-29 | Nippon Hoso Kyokai <Nhk> | Sn比検出装置および雑音低減装置 |
JP2005318251A (ja) * | 2004-04-28 | 2005-11-10 | Sharp Corp | 3次元ノイズリダクション回路及び映像信号処理装置 |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012070295A (ja) * | 2010-09-27 | 2012-04-05 | Hitachi Consumer Electronics Co Ltd | 映像処理装置及び映像処理方法 |
CN112749613A (zh) * | 2020-08-27 | 2021-05-04 | 腾讯科技(深圳)有限公司 | 视频数据处理方法、装置、计算机设备及存储介质 |
CN112749613B (zh) * | 2020-08-27 | 2024-03-26 | 腾讯科技(深圳)有限公司 | 视频数据处理方法、装置、计算机设备及存储介质 |
Also Published As
Publication number | Publication date |
---|---|
US20100158402A1 (en) | 2010-06-24 |
US8218888B2 (en) | 2012-07-10 |
JPWO2009107487A1 (ja) | 2011-06-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2009107487A1 (ja) | 動きぼやけ検出装置及び方法、画像処理装置、並びに画像表示装置 | |
KR101200231B1 (ko) | 화상 처리 장치 및 방법, 및 기록 매체 | |
JP5062968B2 (ja) | 画像処理装置および方法、記録媒体、並びにプログラム | |
KR20090102610A (ko) | 영상 스케일링 검출 방법 및 장치 | |
CN102629970B (zh) | 视频图像的噪点去除方法和系统 | |
US7705918B2 (en) | Noise reduction apparatus and noise reduction method | |
US20100328530A1 (en) | Video display apparatus | |
US8462267B2 (en) | Frame rate conversion apparatus and frame rate conversion method | |
JP2007288595A (ja) | フレーム巡回型ノイズ低減装置 | |
JP4829802B2 (ja) | 画質改善装置および画質改善方法 | |
US20120008689A1 (en) | Frame interpolation device and method | |
US9215353B2 (en) | Image processing device, image processing method, image display device, and image display method | |
US20140333801A1 (en) | Method and apparatus for processing image according to image conditions | |
JP5005260B2 (ja) | 画像表示装置 | |
CN113344820B (zh) | 图像处理方法及装置、计算机可读介质、电子设备 | |
US8345163B2 (en) | Image processing device and method and image display device | |
US20200065949A1 (en) | Image processing method and device | |
JP5147655B2 (ja) | 映像信号処理装置および映像表示装置 | |
JP2010057001A (ja) | 画像処理装置及び方法、並びに画像表示装置 | |
JP2008171059A (ja) | 画像処理回路、半導体装置、画像処理装置 | |
JP4674528B2 (ja) | 画像処理装置および方法、記録媒体、並びにプログラム | |
KR101577703B1 (ko) | 흐림과 이중 윤곽의 효과를 줄이는 비디오 화상 디스플레이방법과 이러한 방법을 구현하는 디바이스 | |
TWI392336B (zh) | 具有cue移除器的動態適應性去交錯裝置與方法 | |
JP5230538B2 (ja) | 画像処理装置、画像処理方法 | |
JPH0646298A (ja) | 映像信号の雑音除去装置およびこれに用いる動き検出回路 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 2009506835 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 12517434 Country of ref document: US |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 09715165 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 09715165 Country of ref document: EP Kind code of ref document: A1 |