WO2010064316A1 - Image processing device, image processing method, and image processing program - Google Patents

Image processing device, image processing method, and image processing program

Info

Publication number
WO2010064316A1
WO2010064316A1 (PCT/JP2008/072135; JP2008072135W)
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
unit
image
pixels
noise reduction
Prior art date
Application number
PCT/JP2008/072135
Other languages
English (en)
Japanese (ja)
Inventor
佐々木 寛
Original Assignee
オリンパス株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by オリンパス株式会社
Priority to PCT/JP2008/072135 priority Critical patent/WO2010064316A1/fr
Publication of WO2010064316A1 publication Critical patent/WO2010064316A1/fr
Priority to US13/118,736 priority patent/US8411205B2/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/14 Picture signal circuitry for video frequency region
    • H04N 5/21 Circuitry for suppressing or minimising disturbance, e.g. moiré or halo
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20172 Image enhancement details
    • G06T 2207/20182 Noise reduction or smoothing in the temporal domain; Spatio-temporal filtering

Definitions

  • the present invention relates to an image processing apparatus, an image processing method, and an image processing program that perform noise reduction processing on a moving image signal.
  • A moving image signal is a signal composed of a plurality of frames (fields) captured at specified time intervals.
  • Each frame (field) has a correlation with its local spatial neighborhood within the frame (within the field).
  • Adjacent frames (fields) can likewise be assumed to have a local temporal correlation with each other (inter-frame, or inter-field).
  • Such a moving image signal is obtained mainly by imaging an arbitrary subject with a video camera equipped with an image sensor such as a CCD or CMOS sensor. Specifically, the subject image formed on the image sensor is output in a predetermined order as a signal photoelectrically converted in units of pixels; the output analog image signal is amplified by a predetermined gain amount, converted into a digital image signal by A/D conversion, and then subjected to predetermined image processing.
  • noise is superimposed on the captured image due to the characteristics of the image sensor.
  • This noise is mainly shot noise due to the nature of photoelectric conversion.
  • the shot noise has an average amplitude proportional to the square root of the image signal value, and generally becomes random noise in the time direction and the spatial direction. This noise appears more conspicuously when the gain amount is increased when the amount of imaged light on the image sensor is insufficient.
  • As noise reduction processing that uses the temporal correlation described above, cyclic (recursive) noise reduction processing, which uses an already noise-reduced frame as the past frame, is known to be able to obtain a large amount of noise reduction.
  • However, since this method is premised on a correlation with the past frame, if the same processing is applied at a scene change or to a scene with large motion relative to the past frame, there is a problem that an afterimage of the previous frame is superimposed on the current frame. On the other hand, if control is performed to reduce the afterimage, there is a problem that a sufficient noise reduction effect cannot be expected.
  • Patent Document 1 discloses a noise reduction process that uses both intra-frame correlation and inter-frame correlation to improve noise reduction capability even for a scene with large motion.
  • This noise reduction system is provided with an image memory that delays the signal by one frame or one field. Noise-reduced pixel data is output by applying non-linear filter processing to the newly input center pixel data, to the pixel data in the vicinity of this center pixel data, and to the pixel data, already recorded in the image memory, in the vicinity of the center pixel position in the image data one frame or one field before.
  • In this non-linear filter processing, a large weighting factor is assigned to neighboring pixels whose data have a high correlation with the center pixel data value and, conversely, a small weighting factor is assigned to pixels with a low correlation, and a weighted average is then taken.
  • With the above method, it is possible to perform noise reduction using both intra-frame correlation and inter-frame correlation.
  • When the temporal correlation is high, the number of pixels that contribute to the averaging increases, because pixels with large weighting coefficients are found not only in the current field but also in the previous frame or previous field used for the weighted averaging.
  • When the temporal correlation is low, a larger weighting factor is given to the current field pixels than to the pixels one frame or one field before, so the weighted average is computed mainly from the current field pixels.
  • However, noise reduction processing that uses temporal correlation as in the above prior art substantially switches to processing that mainly uses the inside of the field for scenes with large motion and for scene changes. Therefore, the noise reduction amount decreases immediately after a sudden change from a still region to a moving region or immediately after a scene change. Furthermore, a stable noise reduction amount cannot be obtained immediately after returning from the moving region to the still region, and a time delay of several frame periods is required before a stable noise reduction amount is obtained. That is, when a large noise reduction amount is sought using cyclic noise reduction processing based on temporal correlation, there arises the problem that the noise amount fluctuates greatly over time depending on the input moving image.
  • The present invention has been made in view of the above problems, and it is an object of the present invention to provide an image processing apparatus, an image processing method, and an image processing program that can effectively reduce noise while keeping resolution degradation to a minimum and maintaining a stable noise level.
  • One aspect of the present invention is an image processing apparatus for reducing noise of a frame image or field image input in time series.
  • The apparatus includes a recording unit that records the processing target frame image or field image together with frame images or field images that are in the past and in the future with respect to the processing target frame image or field image, and a first pixel extraction unit that extracts a plurality of pixels in a predetermined region of the processing target frame image or field image recorded by the recording unit;
  • a second pixel extraction unit that extracts a plurality of pixels in a region corresponding to the predetermined region in the past frame image or field image and the future frame image or field image recorded by the recording unit;
  • and a first distance calculation unit that calculates a spatiotemporal distance between the pixel of interest in the predetermined region extracted by the first pixel extraction unit and the plurality of pixels in the predetermined region as well as the plurality of pixels in the region corresponding to the predetermined region extracted by the second pixel extraction unit.
  • Noise reduction processing of the moving image signal is thus performed using the spatial correlation within a frame and the temporal correlations between the past frame and the current frame and between the current frame and the future frame.
  • As a result, it is possible to prevent the noise reduction amount from reacting sensitively to fluctuations in the temporal correlation, and to reduce noise stably while suppressing a loss of resolution.
  • The present invention also provides an image processing program for causing a computer to execute noise reduction processing of a frame image or field image input in time series.
  • The program causes the computer to execute: recording processing that records the processing target frame image or field image together with frame images or field images that are in the past and in the future with respect to the processing target frame image or field image;
  • first pixel extraction processing that extracts a plurality of pixels in a predetermined region of the processing target frame image or field image recorded by the recording processing;
  • second pixel extraction processing that extracts a plurality of pixels in a region corresponding to the predetermined region in the past frame image or field image and the future frame image or field image recorded by the recording processing;
  • first distance calculation processing that calculates a spatiotemporal distance between the pixel of interest in the predetermined region extracted by the first pixel extraction processing and the plurality of pixels in the predetermined region as well as the plurality of pixels in the corresponding region extracted by the second pixel extraction processing;
  • second distance calculation processing that calculates inter-pixel-value distances between the pixel value of the pixel of interest in the predetermined region extracted by the first pixel extraction processing and the pixel values of the plurality of pixels in the predetermined region extracted by the first pixel extraction processing as well as the pixel values of the plurality of pixels in the corresponding region extracted by the second pixel extraction processing;
  • and noise reduction processing that reduces noise of the processing target frame image or field image based on the spatiotemporal distance calculated by the first distance calculation processing and the inter-pixel-value distances calculated by the second distance calculation processing.
  • The present invention further provides an image processing method for reducing noise of a frame image or field image input in time series, in which the processing target frame image or field image and frame images or field images that are in the past and in the future with respect to it are recorded.
  • The method includes a first distance calculating step of calculating a spatiotemporal distance between the pixel of interest in the predetermined region extracted in the pixel extraction step and the plurality of pixels in the predetermined region as well as the plurality of pixels in the region corresponding to the predetermined region extracted in the second pixel extraction step.
  • According to the present invention, it is possible to prevent the noise reduction amount from reacting sensitively to the fluctuations of the temporal correlation that occur when reducing the noise of a moving image signal, and stable noise reduction with little temporal fluctuation is possible while suppressing a loss of resolution.
  • FIG. 1 is a functional block diagram illustrating the functions provided in the image processing apparatus according to the first embodiment of the present invention.
  • FIG. 2 is a functional block diagram of the noise reduction unit of the image processing apparatus shown in FIG. 1.
  • FIG. 3 is a diagram showing the three-dimensional block consisting of the current, past, and future blocks processed by the noise reduction unit.
  • FIG. 4 is a diagram showing an example of the relationship between the pixel value and the reference correction coefficient correction value.
  • FIG. 5 is a diagram showing examples of the rational function used for converting the distance into a weighting factor.
  • FIG. 6 is a functional block diagram of the correction coefficient calculation unit shown in FIG. 2.
  • FIG. 7 is a functional block diagram of the filter coefficient calculation unit shown in FIG. 2.
  • FIG. 8 is a functional block diagram of the filter processing unit shown in FIG. 2.
  • FIG. 9 is a functional block diagram showing the basic configuration of the second embodiment of the present invention.
  • FIG. 10 is a functional block diagram of the motion compensation unit shown in FIG. 9.
  • FIGS. 11A and 11B are flowcharts showing the processing procedure of the first embodiment.
  • FIG. 1 is a functional block diagram showing the basic configuration of the first embodiment of the present invention, and details will be described below.
  • The image processing apparatus according to the present embodiment includes a switch 101; a memory group (recording unit) consisting of a frame memory 102, a frame memory 103, a frame memory 109, a frame memory 110, and an N-line memory 106; and switches 104 and 105, together with the other components described below.
  • a signal flow in the image processing apparatus having the above configuration will be described below.
  • a moving image signal captured and digitized by an image sensor of an imaging unit (not shown) is input to the switch 101.
  • the switch 101 is alternately connected to the frame memory 102 or the frame memory 103 every frame period according to a control signal of the control unit 112. Therefore, the moving image signal input to the switch 101 is stored in the frame memory 102 or the frame memory 103 for each frame period.
  • The moving image signal may be a monochrome signal or a color signal.
  • In the case of a color signal, it may be a synchronized (demosaiced) signal composed of a plurality of colors per pixel (generally three colors), or a signal captured with a single-plate image sensor consisting of one color per pixel before synchronization.
  • In the following description, the moving image signal is assumed to be a single color signal; in the case of a color signal, the processing described below needs to be performed for each color signal.
  • the frame memory 102 is connected to the switch 104 and the switch 105.
  • the frame memory 103 is also connected to the switch 104 and the switch 105.
  • the switches 104 and 105 are connected to the N line memory 106 and the block extraction unit 114, respectively.
  • the control unit 112 controls the switch 104 and the switch 105 so that the output signals of the switch 104 and the switch 105 are switched every frame period.
  • In a certain frame period, the frame memory 102 is connected to the N-line memory 106 via the switch 104, and the frame memory 103 is connected to the block extraction unit 114 via the switch 105.
  • In the next frame period, conversely, the frame memory 103 is connected to the N-line memory 106 via the switch 104, and the frame memory 102 is connected to the block extraction unit 114 via the switch 105.
  • In the N-line memory 106, pixels corresponding to a predetermined number of lines above and below the noise reduction processing target pixel are temporarily stored from the frame memory 102 or the frame memory 103.
  • the N line memory 106 is connected to the input of the noise reduction unit 107 via the block extraction unit 113.
  • the input of the noise reduction unit 107 is connected to a block extraction unit 113, a block extraction unit 114, and a control unit 112.
  • The noise reduction unit 107 performs noise reduction processing on the processing target pixel based on the pixels of a predetermined region of the processing target frame output from the block extraction unit 113, the pixels of the corresponding regions of the temporally future frame image stored in the frame memory 102 or the frame memory 103 and of the past frame image, both output from the block extraction unit 114, and the control signal from the control unit 112.
  • The noise reduction pixel calculated in this way is output as an output signal.
  • the predetermined areas output from the block extraction unit 113 and the block extraction unit 114 are spatially the same extraction position in each frame.
  • the output of the noise reduction unit 107 is connected to the switch 108 and the N line memory 106.
  • The output signal to the N-line memory 106 is used to overwrite, with the noise reduction pixel, the processing target pixel value before noise reduction stored in the N-line memory 106. This makes it possible to form a recursive filter that uses pixels after noise reduction also for the current frame, so that noise can be reduced more effectively.
  • the connection of the switch 108 is switched to the frame memory 109 or the frame memory 110 for each frame period by a control signal of the control unit 112.
  • the noise reduction pixels calculated by the noise reduction unit 107 are recorded in the corresponding frame memory 109 or the frame memory 110.
  • the frame memory in which the noise reduction pixels are recorded is a frame memory different from the frame memory in which the past frame extracted by the block extraction unit 114 is stored.
  • the switch 111 is switched and connected to the frame memory 109 and the frame memory 110 for each frame period by a control signal of the control unit 112.
  • the control unit 112 associates the switch 108 and the switch 111 and controls switching of each switch. Specifically, when the switch 108 is connected to the frame memory 109, the frame memory 110 is connected to the switch 111. On the other hand, when the switch 108 is connected to the frame memory 110, the frame memory 109 is connected to the switch 111.
  • the output signal from the switch 111 is input to the block extraction unit 114.
  • the block extraction unit 114 extracts a plurality of pixels in the peripheral area corresponding to the processing target pixel.
  • the control unit 112 outputs a preset target noise reduction amount to the noise reduction unit 107 and also outputs a control signal for interlocking control of the switches 101, 104, 105, 108, and 111 as described above.
  • FIG. 3 is a schematic diagram illustrating the structure of a three-dimensional block (Nx × Ny × Nt pixels) consisting of a processing target pixel processed by the noise reduction unit 107 and its peripheral pixels.
  • The three-dimensional block processed by the noise reduction unit 107 consists of a current block 302 of Nx × Ny pixels composed of the processing target pixel P(r0, t0) and the surrounding region pixels P(r, t0) of the current frame (time t0), a past block 301 of Nx × Ny pixels composed of the pixel P(r0, t0−1) and the peripheral region pixels P(r, t0−1) of the past frame (time t0−1), and a future block 303 of Nx × Ny pixels composed of the pixel P(r0, t0+1) of the future frame (time t0+1) and its peripheral region pixels P(r, t0+1).
  • N t may be an integer of 2 or more.
  • In the case of a field image, the past block and the future block do not have pixels at exactly the same spatial positions as the current block.
  • In this case, an Nx × Ny pixel block composed of the processing target pixel P(r0, t0) and its peripheral region pixels P(r, t0) is defined as the current block, an Nx × Ny pixel block composed of the pixel P(r0′, t0−1) of the past field (time t0−1) and its peripheral region pixels P(r′, t0−1) is defined as the past block, and an Nx × Ny pixel block composed of the pixel P(r0′, t0+1) of the future field (time t0+1) and its peripheral region pixels P(r′, t0+1) is defined as the future block.
  • Strictly speaking, the current block and the past and future blocks are then shifted from each other by one line.
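  • To make the block structure concrete, the following sketch (our own illustration; NumPy, the function name, and the clamped border handling are assumptions, not part of this publication) extracts the past, current, and future Nx × Ny blocks around a processing target pixel in the frame-image case:

```python
import numpy as np

def extract_3d_block(past, cur, future, x0, y0, nx=3, ny=3):
    """Extract the past, current, and future Nx x Ny blocks around (x0, y0).

    past, cur, future : 2D arrays holding the three frames
    (the past frame is assumed to contain already noise-reduced pixels).
    Returns three (ny, nx) blocks; borders are handled by clamping.
    """
    h, w = cur.shape
    ys = np.clip(np.arange(y0 - ny // 2, y0 + ny // 2 + 1), 0, h - 1)
    xs = np.clip(np.arange(x0 - nx // 2, x0 + nx // 2 + 1), 0, w - 1)

    def block(frame):
        return frame[np.ix_(ys, xs)]

    return block(past), block(cur), block(future)
```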
  • FIG. 2 is a detailed block diagram of the noise reduction unit 107, and details thereof will be described based on the definition of the three-dimensional block.
  • The noise reduction unit 107 includes a past block memory 201, a future block memory 202, a current block memory 203, a distance calculation unit (first distance calculation unit and second distance calculation unit) 204, an inter-frame correlation calculation unit 205, a correction coefficient calculation unit 206, a filter coefficient calculation unit 207, a filter processing unit 208, and a noise reduction amount calculation unit 209.
  • From the block extraction unit 113, the current block 302 consisting of the processing target pixel P(r0, t0) and its Nx × Ny peripheral region is output and stored in the current block memory 203.
  • The block extraction unit 114 likewise outputs the past block 301, consisting of P(r0, t0−1), located at spatially the same position as the processing target pixel P(r0, t0), and its Nx × Ny peripheral region, which is stored in the past block memory 201; the future block 303 extracted from the future frame is stored in the future block memory 202 in the same way.
  • the pixel data of the three-dimensional block stored in the three block memories 201, 202, and 203 is output to the distance calculation unit 204, the inter-frame correlation calculation unit 205, and the filter processing unit 208. Further, the pixel data in the current block memory 203 is further output to the correction coefficient calculation unit 206.
  • In the distance calculation unit 204, the following spatiotemporal distance Ds is calculated between each pixel position (r, t) stored in the three input block memories 201, 202, and 203 and the position (r0, t0) of the processing target pixel recorded in the current block memory 203:
  • Ds ← α1 · |t − t0| + β · √{(x − x0)² + (y − y0)²}  (for t ≥ t0)
  • Ds ← α2 · |t − t0| + β · √{(x − x0)² + (y − y0)²}  (for t < t0)
  • Here, α1, α2, and β are predetermined coefficients of 0 or more, and (x, y) and (x0, y0) are the spatial coordinates of r and r0.
  • When the block size (Nx × Ny) is determined in advance, this calculation can be performed by storing the calculation results in a ROM table (not shown) beforehand.
  • α1 is a coefficient for converting the time of the future block into a distance, and α2 is a coefficient for converting the time of the past block into a distance.
  • The past block consists of pixels whose noise has already been reduced, whereas the future block consists of pixels whose noise has not yet been reduced.
  • For this reason, the noise reduction effect can be improved by setting the weighting factors of the pixels determined by the filter coefficient calculation unit 207 to be larger for the past block than for the future block.
  • The distance calculation unit 204 also calculates the inter-pixel-value distance Dv between the pixel value P(r, t) at each position (r, t) stored in the three input block memories 201, 202, and 203 and the pixel value P(r0, t0) of the processing target pixel recorded in the current block memory 203, and the distance D used for the filter coefficient calculation is obtained by multiplying Dv by the spatiotemporal distance Ds.
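  • The distance calculation just described can be summarized as in the sketch below; it follows the formulas for Ds and Dv given here, while the concrete values of α1, α2, and β are arbitrary placeholders (the publication leaves them as design parameters):

```python
import numpy as np

def distance_d(block, t, p0, r0, t0, alpha1=1.0, alpha2=0.5, beta=1.0):
    """Distance D = Ds * Dv for every pixel of one Nx x Ny block at time t.

    block : (ny, nx) array of pixel values P(r, t)
    p0    : pixel value of the processing target pixel P(r0, t0)
    r0    : (y0, x0) position of the target pixel inside the block
    Future pixels (t >= t0) use alpha1, past pixels (t < t0) use alpha2.
    """
    ny, nx = block.shape
    y, x = np.mgrid[0:ny, 0:nx]
    spatial = beta * np.sqrt((x - r0[1]) ** 2 + (y - r0[0]) ** 2)
    alpha = alpha1 if t >= t0 else alpha2
    ds = alpha * abs(t - t0) + spatial      # spatiotemporal distance Ds
    dv = np.abs(block - p0)                 # inter-pixel-value distance Dv
    return ds * dv                          # distance D used for the filter coefficients
```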
  • The inter-frame correlation calculation unit 205 uses the pixel values P(r, t) stored in the three input block memories 201, 202, and 203 to calculate the inter-frame correlation value Sp between the current block 302 and the past block 301 and the inter-frame correlation value Sf between the current block 302 and the future block 303, and outputs them to the correction coefficient calculation unit 206.
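  • The formula for Sp and Sf is not reproduced at this point of the text; the threshold tests used later (Sp ≤ THp and Sf ≤ THf meaning high correlation) suggest a difference-type measure in which a small value means high correlation, so the sketch below uses a mean absolute difference purely as an assumed example:

```python
import numpy as np

def interframe_correlation(cur_block, other_block):
    """One plausible inter-frame correlation value (assumed, not from the
    publication): the mean absolute difference between the current block and
    the past or future block. Smaller values then mean higher temporal
    correlation, matching the tests S_p <= TH_p and S_f <= TH_f below."""
    return float(np.mean(np.abs(cur_block.astype(float) - other_block.astype(float))))

# S_p : current block vs. past block, S_f : current block vs. future block
# s_p = interframe_correlation(cur_block, past_block)
# s_f = interframe_correlation(cur_block, future_block)
```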
  • a noise reduction amount calculation unit 209, an interframe correlation calculation unit 205, a control unit 112, a current block memory 203, and a filter coefficient calculation unit 207 are connected to the correction coefficient calculation unit 206.
  • The correction coefficient calculation unit 206 receives as inputs the frame average noise reduction amount NRave calculated by the noise reduction amount calculation unit 209, the target noise reduction amount NRtarget output from the control unit 112, the inter-frame correlation values output from the inter-frame correlation calculation unit 205, and the pixel data of the current block from the current block memory 203.
  • The correction coefficient Ti consists of one coefficient for each block: Tp for the past block 301, Tc for the current block 302, and Tf for the future block 303, and these correction coefficients Ti are output to the filter coefficient calculation unit 207.
  • a three-dimensional block composed of three frames is taken as an example, but a three-dimensional block composed of arbitrary N frames may be used. In this case, N coefficients T i are calculated and output to the filter coefficient calculation unit 207.
  • the filter coefficient calculation unit 207 receives the distance D output from the distance calculation unit 204 and the correction coefficients T p , T c , and T f output from the correction coefficient calculation unit 206.
  • Using these data, the filter coefficient calculation unit 207 calculates the filter coefficient C(r, t) corresponding to each pixel P(r, t) stored in the current block memory 203, the past block memory 201, and the future block memory 202, and outputs it to the filter processing unit 208.
  • The filter processing unit 208 reads out the pixels P(r, t) stored in the three block memories 201, 202, and 203 in a predetermined order, performs a product-sum operation on the read pixels P(r, t) and the filter coefficients C(r, t) output from the filter coefficient calculation unit 207, and calculates and outputs the noise reduction pixel Pn(r0, t0).
  • To the noise reduction amount calculation unit 209, the noise reduction pixel Pn(r0, t0) calculated and output by the filter processing unit 208 and the processing target pixel P(r0, t0) from the current block memory 203 are input.
  • The noise reduction amount calculation unit 209 calculates the absolute difference |Pn(r0, t0) − P(r0, t0)| for each pixel and obtains the frame average noise reduction amount as
  • NRave ← (1 / M) · Σr0 |Pn(r0, t0) − P(r0, t0)|, where M is the number of pixels in the frame.
  • NRave is an average value in the above example, but in an apparatus in which the total number of pixels in a frame does not change, the sum of absolute differences may be used instead.
  • V1, V2, and V3 are coefficients determined in advance such that V1 < V2 < V3.
  • A reference correction coefficient is selected according to the error Er between the frame average noise reduction amount NRave and the target noise reduction amount NRtarget: when the absolute value of Er is less than or equal to the threshold TH, the standard reference correction coefficient V2 is selected; when Er exceeds the threshold TH, the reference correction coefficient V1, smaller than the standard, is selected; and when Er falls below −TH, the reference correction coefficient V3, larger than the standard, is selected. The selected value is output to the multiplier 605 as the reference correction coefficient Tbase.
  • the current block 302 output from the current block memory 203 is input to the average value calculation unit 603.
  • the average value calculation unit 603 calculates the average pixel value of the current block 302 and outputs it to the LUT_V 604.
  • The LUT_V 604 converts the average pixel value into the correction value Rv for the reference correction coefficient and outputs it to the multiplier 605.
  • An example of the relationship between the average pixel value and the reference correction coefficient correction value Rv is shown in FIG. 4.
  • the multiplier 605 multiplies the reference correction coefficient T base by the correction value Rv and outputs the result to the multiplier 607.
  • The inter-frame correlation value Sp between the past block 301 and the current block 302 and the inter-frame correlation value Sf between the future block 303 and the current block 302, which are output from the inter-frame correlation calculation unit 205, are input to the correction coefficient ratio calculation unit 606.
  • the correction coefficient ratio calculation unit 606 assigns the ratios R p , R c , and R f to the past block 301, the current block 302, and the future block 303 as follows.
  • The condition Sp ≤ THp and Sf ≤ THf corresponds to the case where the temporal correlation with the current block 302 is high for both the past block 301 and the future block 303, that is, to a region where the motion within the temporally related block area is small.
  • In this case, the correction coefficient ratios are set to be the same in both the temporal and spatial directions.
  • The condition Sp ≤ THp and Sf > THf is the case where the past block 301 has a high temporal correlation and the future block 303 has a low temporal correlation with respect to the current block 302. This indicates a region where a sudden movement or a scene change occurs in the time toward the future. In this case, the correction coefficient ratios of the past block 301 and the current block 302 are increased, and that of the future block 303 is decreased.
  • The condition Sp > THp and Sf ≤ THf is the case where the future block 303 has a high temporal correlation and the past block 301 has a low temporal correlation with respect to the current block 302. This indicates a region where a sudden movement or a scene change has occurred in the time from the past up to the present. In this case, the correction coefficient ratios of the future block 303 and the current block 302 are increased, and that of the past block 301 is decreased.
  • The condition Sp > THp and Sf > THf is the case where the temporal correlation with the current block 302 is low for both the past block 301 and the future block 303. This indicates either that a large amount of noise is mixed into the image or that the movement is large within the time span from the past block 301 to the future block 303. In this case, the correction coefficient ratios are set to be the same in both the temporal and spatial directions.
  • The multiplier 607 multiplies the output value Tbase · Rv of the multiplier 605 by the output values Rp, Rc, and Rf from the correction coefficient ratio calculation unit 606.
  • The correction coefficients calculated in this way, Tp ← Tbase · Rv · Rp, Tc ← Tbase · Rv · Rc, and Tf ← Tbase · Rv · Rf, are output to the filter coefficient calculation unit 207.
  • In the above description, an example in which the reference correction coefficient, the correction value, and the block allocation ratios are all variably controlled has been shown; however, only one or two of them may be variably controlled, with the others set to predetermined constants.
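  • The data flow of the correction coefficient calculation unit described above can be sketched as follows. The structure (Tbase selection from V1 < V2 < V3, the LUT_V correction value Rv, and the block allocation ratios) follows the description, but every numeric constant and the shape of the Rv curve are placeholders of our own choosing:

```python
def correction_coefficients(nr_ave, nr_target, avg_pixel, s_p, s_f,
                            v=(0.5, 1.0, 2.0), th=0.1, th_p=5.0, th_f=5.0):
    """Compute (T_p, T_c, T_f) following the structure of FIG. 6.

    All numeric constants are illustrative placeholders, not values from
    the publication; the sign convention for the error is an assumption."""
    # Reference correction coefficient T_base selected from V1 < V2 < V3
    # according to the error between achieved and target noise reduction.
    err = nr_ave - nr_target
    if abs(err) <= th:
        t_base = v[1]            # standard V2
    elif err > th:
        t_base = v[0]            # reduction above target -> smaller V1
    else:
        t_base = v[2]            # reduction below target -> larger V3

    # LUT_V: correction value Rv as a function of the average pixel value
    # (a monotone placeholder curve; the real relation is shown only in FIG. 4).
    rv = 1.0 + 0.5 * (avg_pixel / 255.0)

    # Block allocation ratios R_p, R_c, R_f from the temporal correlations.
    if s_p <= th_p and s_f <= th_f:
        r_p = r_c = r_f = 1.0                 # both correlations high
    elif s_p <= th_p and s_f > th_f:
        r_p, r_c, r_f = 1.5, 1.5, 0.5         # future correlation low
    elif s_p > th_p and s_f <= th_f:
        r_p, r_c, r_f = 0.5, 1.5, 1.5         # past correlation low
    else:
        r_p = r_c = r_f = 1.0                 # both correlations low

    return (t_base * rv * r_p, t_base * rv * r_c, t_base * rv * r_f)
```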
  • LUT_R1 701, LUT_R2 702, ..., LUT_RN 703 are tables corresponding to the graphs 501, 502, and 503, shown in FIG. 5, of the rational function {T / (x + T)} for different values of T, where T is a value greater than zero.
  • T1, T2, ..., TN correspond to the correction coefficients output from the correction coefficient calculation unit 206 and satisfy the relationship T1 < T2 < ... < TN. That is, the above rational function decreases rapidly as the variable x increases when the correction coefficient is small, and decreases gently as the variable x increases when the correction coefficient is large.
  • For each lookup table, the upper predetermined number of bits of the distance D output from the distance calculation unit 204 is extracted and input as an address.
  • Each lookup table outputs the value stored at that address and the corresponding slope to the selection unit 704.
  • In this way, the addresses used to reference LUT_R1 701, LUT_R2 702, ..., LUT_RN 703 are determined, and the start point Rj(a) stored at the determined address a and the slope Δj(a) are output to the selection unit 704.
  • Based on the correction coefficient Ti, the selection unit 704 selects a single start point R(a) and slope Δ(a) from these N start points Rj(a) and slopes Δj(a), and outputs them to the interpolation processing unit 705.
  • The interpolation processing unit 705 receives the start point R(a) and the slope Δ(a) output from the selection unit 704 together with the predetermined lower-order bits of the distance D, calculates the weighting factor R′(D, Ti) by interpolation, and stores it sequentially in the memory 706.
  • LUT_D 708 is a conversion table prepared in advance that outputs the reciprocal of the normalization coefficient, so that the normalization, which would otherwise require a division, can be performed by a multiplication.
  • The reciprocal of the normalization coefficient obtained in this way is multiplied by each weighting factor stored in the memory 706 by the multiplier 709, and the result is output to the filter processing unit 208 as the final filter coefficient C(r, t).
  • The filter coefficient C(r, t) calculated in this way for each pixel P(r, t) of the three-dimensional block Nx × Ny × Nt has the following characteristics with respect to the processing target pixel P(r0, t0): the weighting coefficient (filter coefficient) decreases as the distance calculated from the inter-pixel-value distance and the spatiotemporal distance increases, and this weighting coefficient can be variably controlled in accordance with the correction coefficient, that is, in accordance with the temporal correlation variation of the moving image signal.
  • the filter processing result in the filter processing unit 208 described below can obtain an effect equivalent to collecting only pixels having high correlation and taking an average. Therefore, even if a spatial edge and a temporal edge (when a scene change or a rapid movement occurs) are included in the block, it is possible to minimize the dullness of the edge due to averaging.
  • In addition, since the allocation of the weighting coefficients can be variably controlled by the correction coefficient, the noise reduction result of one frame before is reflected: when the target noise reduction amount is not satisfied, a weighting coefficient that cancels the correlation between the pixel value and the distance within the three-dimensional block Nx × Ny × Nt can be given so that the specified noise reduction amount is obtained. This is equivalent to selecting more pixels in the three-dimensional block Nx × Ny × Nt for the averaging process, and the noise reduction effect can be increased.
  • Conversely, a weighting factor that more strongly reflects the correlation between the pixel value and the distance within the three-dimensional block Nx × Ny × Nt can be given so that the target noise reduction amount is obtained.
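  • Functionally, the weighting by the rational function T/(x + T) and the normalization performed via LUT_D can be written as in the sketch below; the LUT and interpolation hardware of FIG. 7 is replaced by direct floating-point arithmetic, so this is an assumed functional model rather than a bit-accurate one:

```python
def filter_coefficients(d_past, d_cur, d_future, t_p, t_c, t_f):
    """Filter coefficients C(r, t) from the distances D and correction
    coefficients T_i, using the rational function T / (D + T) and a
    normalization so that all coefficients sum to one.

    d_* : (ny, nx) arrays of distances D for the past, current, and future blocks
    """
    w_past   = t_p / (d_past + t_p)
    w_cur    = t_c / (d_cur + t_c)
    w_future = t_f / (d_future + t_f)
    norm = w_past.sum() + w_cur.sum() + w_future.sum()
    # LUT_D in FIG. 7 supplies the reciprocal of this normalization
    # coefficient so that the division becomes a multiplication.
    inv = 1.0 / norm
    return w_past * inv, w_cur * inv, w_future * inv
```

  • A larger correction coefficient T flattens T/(D + T), so more pixels contribute to the average, which is the behaviour described for T1 < T2 < ... < TN above.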
  • To the filter processing unit 208, the filter coefficients C(r, t) output from the filter coefficient calculation unit 207, the pixel values P(r, t) stored in the block memories 201, 202, and 203, and the noise model table output from the control unit 112 are input.
  • The filter coefficients C(r, t) and the pixel values P(r, t) are subjected to the following processing in the product-sum operation unit 801, and the smoothed pixel value P′(r0, t0) at the processing target pixel P(r0, t0) is output to the noise amount estimation unit 802 and the coring processing unit 803:
  • P′(r0, t0) ← Σr Σt C(r, t) · P(r, t)
  • The noise amount estimation unit 802 stores the noise model table output from the control unit 112, references it using the smoothed pixel value P′(r0, t0) output from the product-sum operation unit 801 as an address, and outputs the corresponding noise amount Namp to the coring processing unit 803.
  • the noise model table is a table in which the amount of noise (an amount corresponding to the average amplitude of noise) is stored at the address position corresponding to the pixel value.
  • The control unit 112 can also generate an arbitrarily modified noise model table by multiplying the predetermined noise model table by an arbitrary function that takes the pixel value as a variable.
  • To the coring processing unit 803, the processing target pixel value P(r0, t0), the pixel value P′(r0, t0) after the smoothing process, and the noise amount output from the noise amount estimation unit 802 are input.
  • The coring processing unit 803 performs a coring determination based on these inputs, calculates the final noise reduction pixel value Pn(r0, t0) for the processing target pixel, and outputs it as an output signal.
  • In the present embodiment, the filter processing unit 208 is configured to include the noise amount estimation unit 802 and the coring processing unit 803; however, the filter processing unit 208 may instead include only the simpler product-sum operation unit 801, in which case the final noise reduction pixel value Pn(r0, t0) is P′(r0, t0).
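  • The product-sum operation, noise amount estimation, and coring can be sketched as follows. The exact coring determination and the noise model table are not reproduced in this text, so the clipping rule and the square-root noise model used here are common substitutes stated as assumptions:

```python
import numpy as np

def filter_process(blocks, coeffs, p_target, noise_model=lambda v: 0.5 * np.sqrt(max(v, 0.0))):
    """Noise reduction of one processing target pixel.

    blocks      : (past, current, future) pixel blocks P(r, t)
    coeffs      : matching filter coefficients C(r, t), already normalized
    p_target    : original value P(r0, t0) of the processing target pixel
    noise_model : placeholder for the noise model table (amplitude vs. value)
    """
    # Product-sum operation: smoothed value P'(r0, t0)
    p_smooth = sum((c * b).sum() for c, b in zip(coeffs, blocks))

    # Noise amount estimated from the smoothed value (noise model table lookup)
    n_amp = noise_model(p_smooth)

    # Coring (assumed rule): if the original deviates from the smoothed value
    # by no more than the noise amplitude, keep the smoothed value; otherwise
    # keep the part of the deviation that exceeds the noise level.
    diff = p_target - p_smooth
    if abs(diff) <= n_amp:
        return p_smooth
    return p_target - n_amp if diff > 0 else p_target + n_amp
```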
  • As described above, in the present embodiment, noise reduction processing of the moving image signal is performed utilizing the spatial correlation within the frame and the temporal correlations between the past frame and the current frame and between the current frame and the future frame. Thereby, for example, even when a scene change occurs, the noise reduction amount can be prevented from reacting sensitively to the change in temporal correlation, and noise can be reduced stably while suppressing a loss of resolution.
  • In addition, since the pixels of the past frame and of the current frame used for the cyclic noise reduction processing are pixels after noise reduction, the amount of noise reduction can be improved.
  • Furthermore, since the weighting factors are calculated based on the spatiotemporal distance and the inter-pixel-value distance, and the weighted average gives larger weights to pixels that are highly correlated with the noise processing target pixel, noise can be reduced effectively while minimizing dullness with respect to spatial and temporal changes.
  • Moreover, the noise amount of the pixel of interest is estimated based on the product-sum value of the pixel values and weighting factors of a plurality of pixels in the predetermined region, and noise reduction processing adapted to the noise characteristics of the input signal is performed based on that noise amount.
  • As a result, the noise reduction amount is prevented from reacting sensitively to the temporal correlation variations that occur when reducing noise in a moving image signal, and stable noise reduction with little temporal variation is possible.
  • The image processing apparatus described above includes a CPU, a main storage device such as a RAM, and a computer-readable recording medium on which a program for realizing all or part of the above processing is recorded. The CPU reads out the program recorded on the recording medium and executes information processing and calculation, thereby realizing processing similar to that of the above-described image processing apparatus.
  • the computer-readable recording medium means a magnetic disk, a magneto-optical disk, a CD-ROM, a DVD-ROM, a semiconductor memory, or the like.
  • FIG. 11A and FIG. 11B show the processing procedure of the first embodiment.
  • the input moving image signal is stored in the frame memory 102 or 103 as a future frame via the switch 101. (S1101)
  • It is then determined whether the input frame is the first frame; if it is, the process proceeds to S1113. If it is not the first frame, the three-dimensional block P(r, t) is extracted from the past, future, and current frames stored in the frame memory 102 or 103, the frame memory 109 or 110, and the N-line memory 106, and the past block 301, the future block 303, and the current block 302 are stored in the block memories 201, 202, and 203 (S1103).
  • From the three-dimensional block stored in the block memories 201, 202, and 203, the inter-frame correlation amounts Sp and Sf are calculated (S1104); then the spatiotemporal distance Ds and the inter-pixel-value distance Dv between the noise processing target pixel P(r0, t0) and each of its surrounding pixels are calculated, and the distance D is obtained by multiplying the two distances (S1105).
  • a noise reduction pixel P n (r 0 , t 0 ) is calculated based on the calculated filter coefficient C (r, t) and the three-dimensional block pixel P (r, t) (S1108), and the noise reduction pixel P n (r 0 , t 0 ) is stored in an output buffer (not shown), the frame memory 109 or 110, and the N line memory 106 (S1109).
  • The noise reduction amount is calculated from the absolute difference between the noise reduction pixel Pn(r0, t0) and the processing target pixel P(r0, t0) (S1110), and it is determined whether the noise reduction pixel calculation has been completed for the number of pixels in the frame (S1111). If the processing for the number of frame pixels has not yet been completed, the processing returns to S1103 and the noise reduction processing for the next processing target pixel is repeated. When the processing for the number of frame pixels is completed, the average value of the calculated noise reduction amounts is obtained and the frame average noise reduction amount NRave is calculated (S1112). Then, the switches 101, 104, 105, 108, and 111, which control the input and output of the frame memories storing the current, past, and future frames, are switched (S1113).
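  • The per-frame procedure of FIGS. 11A and 11B can be summarized as the loop below, which reuses the helper sketches given earlier in this description; the frame-memory and switch management of S1101 and S1113 is abstracted into the function arguments, so this is an illustrative outline rather than the patented implementation:

```python
import numpy as np

def process_frame(past_nr, cur, future, nr_target, nr_ave_prev):
    """One frame of the first-embodiment processing (roughly S1103-S1112),
    assuming the sketches extract_3d_block, interframe_correlation,
    distance_d, correction_coefficients, filter_coefficients and
    filter_process defined above. past_nr is the already noise-reduced
    past frame; nr_ave_prev is the previous frame's NR_ave."""
    out = cur.astype(float).copy()
    diffs = []
    h, w = cur.shape
    for y in range(h):
        for x in range(w):
            pb, cb, fb = extract_3d_block(past_nr, out, future, x, y)      # S1103
            s_p = interframe_correlation(cb, pb)                           # S1104
            s_f = interframe_correlation(cb, fb)
            cy, cx = cb.shape[0] // 2, cb.shape[1] // 2
            p0 = cb[cy, cx]
            d_p = distance_d(pb, -1, p0, (cy, cx), 0)                      # S1105
            d_c = distance_d(cb,  0, p0, (cy, cx), 0)
            d_f = distance_d(fb, +1, p0, (cy, cx), 0)
            t_p, t_c, t_f = correction_coefficients(nr_ave_prev, nr_target,
                                                    cb.mean(), s_p, s_f)
            c_p, c_c, c_f = filter_coefficients(d_p, d_c, d_f, t_p, t_c, t_f)
            p_n = filter_process((pb, cb, fb), (c_p, c_c, c_f), p0)        # S1108
            diffs.append(abs(p_n - p0))                                    # S1110
            out[y, x] = p_n              # S1109: in-frame recursive update
    nr_ave = float(np.mean(diffs))                                         # S1112
    return out, nr_ave
```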
  • In the present embodiment, the distance D is the product of the inter-pixel-value distance Dv and the spatiotemporal distance Ds, and the weighting factor is calculated by giving this single variable to one rational function.
  • Alternatively, the weighting factor may be calculated as the product of the function values R(Dv) and R(Ds) obtained by giving the inter-pixel-value distance Dv and the spatiotemporal distance Ds to the function separately.
  • The function for converting the distance D into a weighting factor is not limited to a rational function; for example, a Gaussian function Exp(−x² / 2σ²) may be used.
  • The same effect can be obtained by setting the distance D as x and the correction coefficient T as σ.
  • When σ is increased, the width of the Gaussian function becomes wider, which acts to cancel the distance correlation; when σ is reduced, the width becomes narrower, and the distance correlation is emphasized more strongly.
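  • As a small illustration of this alternative, the rational function in the earlier sketch can simply be replaced by a Gaussian with the correction coefficient T acting as σ (sketch, same caveats as above):

```python
import numpy as np

def gaussian_weight(d, t):
    """Alternative weighting: Exp(-x^2 / (2*sigma^2)) with x = D and sigma = T.
    A larger T widens the Gaussian (weakening the distance correlation),
    a smaller T narrows it (emphasizing the distance correlation)."""
    return np.exp(-(d ** 2) / (2.0 * t ** 2))
```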
  • Next, a second embodiment of the present invention will be described with reference to FIG. 9 and FIG. 10.
  • the image processing apparatus according to the present embodiment is different from the first embodiment in that a motion compensation unit 901 is provided instead of the block extraction unit 114.
  • the image processing apparatus according to the present embodiment will not be described with respect to the points common to the first embodiment, and different points will be mainly described.
  • FIG. 9 is a functional block diagram showing the basic configuration of the second embodiment of the present invention.
  • a moving image signal captured by an image sensor of an imaging unit (not shown) and converted into digital data is input to the switch 101 and is alternately connected to the frame memory 102 or the frame memory 103 every frame period by a control signal of the control unit 112. Then, it is stored in the frame memory 102 or the frame memory 103 for each frame period.
  • the outputs of the frame memory 102 and the frame memory 103 are connected to the inputs of the switch 104 and the switch 105, and are connected to the N line memory 106 and the motion compensation unit 901 by the control signal of the control unit 112.
  • the frame memory 102 is connected to the N-line memory 106 via the switch 104
  • the frame memory 103 is connected to the motion compensation unit 901 via the switch 105.
  • In the next frame period, the control unit 112 performs control so that the frame memory 103 is connected to the N-line memory 106 via the switch 104.
  • In the N-line memory 106, pixels corresponding to a predetermined number of lines above and below the noise reduction processing target pixel are temporarily stored from the frame memory 102 or the frame memory 103.
  • the output of the N line memory 106 is connected to the inputs of the noise reduction unit 107 and the motion compensation unit 901 via the block extraction unit 113.
  • the noise reduction unit 107 is connected so that an output from the block extraction unit 113, an output from the motion compensation unit 901, and a control signal from the control unit 112 are input.
  • The noise reduction unit 107 performs noise reduction processing on the processing target pixel based on the pixels of a predetermined region of the processing target frame output from the block extraction unit 113, the pixels of predetermined regions of the temporally preceding and succeeding frame images motion-compensated by the motion compensation unit 901, and the control signal from the control unit 112.
  • The noise reduction pixel calculated in this way is output as an output signal to a buffer memory of an image processing unit (not shown) arranged in a subsequent stage.
  • the output of the noise reduction unit 107 is connected to the input of the switch 108 and the N line memory 106.
  • The output signal of the noise reduction unit 107 is used to overwrite, with the noise reduction pixel, the processing target pixel value before noise reduction stored in the N-line memory 106.
  • The output of the switch 108 is switched to the frame memory 109 or the frame memory 110 for each frame period according to the control signal of the control unit 112, and the noise reduction pixels from the noise reduction unit 107 are recorded in the corresponding frame memory 109 or frame memory 110.
  • the frame memory 109 or the frame memory 110 is switched and connected to the motion compensation unit 901 via the switch 111 for each frame period by a control signal of the control unit 112.
  • When the switch 108 is connected to the frame memory 109, the frame memory 110 is connected to the switch 111.
  • Conversely, when the switch 108 is connected to the frame memory 110, the control unit 112 performs control so that the frame memory 109 is connected to the switch 111.
  • To the motion compensation unit 901, the current block 302 from the connected block extraction unit 113 and the frames that are temporally in the future and in the past with respect to the processing target frame are input via the switch 104, the switch 105, and the switch 111.
  • the motion compensation unit 901 calculates a position where the correlation value is maximum between the current block 302 and the future and past frames. Then, the region having the highest correlation is extracted from the future and past frames, and is output to the noise reduction unit 107 as the future block 303 and the past block 301.
  • the control unit 112 outputs a preset target noise reduction amount to the noise reduction unit 107 and also outputs a control signal for interlocking control of the switches 101, 104, 105, 108, and 111 as described above.
  • An extracted image of a predetermined search range of the past frame input to the motion compensation unit 901 is stored in the search range storage memory 1001.
  • an extracted image of a predetermined search range of the future frame is stored in the search range storage memory 1002.
  • the current block is input to a motion vector determination unit (motion vector detection unit) 1003 and a motion vector determination unit (motion vector detection unit) 1004.
  • an extracted image of the motion vector search range of the past frame is input from the search range storage memory 1001 to the motion vector determination unit 1003.
  • an extracted image of the motion vector search range of the future frame is input from the search range storage memory 1002 to the motion vector determination unit 1004.
  • The motion vector determination units 1003 and 1004 use the current block as the pattern matching reference image, perform pattern matching processing while shifting it at the pixel pitch within the search range extraction image, and determine the position within the search range where the correlation is maximum as the motion vector.
  • a well-known method can be used in which the correlation is maximized at a position where the sum of absolute values of differences or the sum of squares is minimized.
  • the motion vectors determined by the motion vector determination units 1003 and 1004 are input to the block extraction units 1005 and 1006, respectively.
  • the block extraction unit 1005 receives the extracted image of the past frame stored in the search range storage memory 1001, extracts the past block based on the determined motion vector to the past frame, and outputs the extracted block to the noise reduction unit 107.
  • the block extraction unit 1006 receives the extracted image of the future frame stored in the search range storage memory 1002, extracts the future block based on the determined motion vector to the future frame, and outputs it to the noise reduction unit 107. .
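  • The motion vector determination by pattern matching at pixel pitch within the search range can be sketched as a full search minimizing the sum of absolute differences, as described above; the search range size below is an arbitrary illustrative choice:

```python
import numpy as np

def find_motion_vector(ref_block, frame, x0, y0, search=4):
    """Full-search block matching at pixel pitch.

    ref_block : current block used as the pattern-matching reference
    frame     : past or future frame held in the search range memory
    (x0, y0)  : position of the block in the current frame
    Returns the (dx, dy) displacement with the minimum sum of absolute
    differences, i.e. the position of maximum correlation."""
    ny, nx = ref_block.shape
    h, w = frame.shape
    best, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ys, xs = y0 + dy, x0 + dx
            if ys < 0 or xs < 0 or ys + ny > h or xs + nx > w:
                continue
            cand = frame[ys:ys + ny, xs:xs + nx]
            sad = np.abs(cand.astype(float) - ref_block.astype(float)).sum()
            if best is None or sad < best:
                best, best_mv = sad, (dx, dy)
    return best_mv

# The past block 301 and the future block 303 are then extracted at the
# motion-compensated positions before being passed to the noise reduction unit.
```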
  • the future block and the past block are selected by motion compensation for the current block, so that a three-dimensional block having a higher correlation than that of the first embodiment can be configured.
  • the weighting coefficient for the future block and the past block calculated by the filter coefficient calculation unit 207 becomes larger, and the contribution to the weighted average increases.
  • the amount of noise reduction is suppressed from reacting sensitively to fluctuations in time correlation that occur when moving image signals are reduced by cyclic noise reduction processing, and more stable noise reduction is performed. It becomes possible.
  • In the first and second embodiments, the cyclic type, in which all the pixels of the past block and some pixels of the current block are replaced with pixel values after noise reduction processing, has been described as an example.
  • However, an FIR type, in which this replacement with noise-reduced pixel values is not performed, may be used instead of the cyclic type.
  • With the FIR type, the amount of noise reduction in still regions is lower than with the cyclic type, but the convergence performance for suppressing the afterimage generated in motion regions is improved.
  • Although a progressive signal is assumed as the input signal in the first and second embodiments, a field interlace signal may also be used; in that case, each of the above embodiments can be configured by reading the frame image as a field image.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Picture Signal Circuits (AREA)

Abstract

According to the invention, noise is reduced effectively and degradation of resolution is minimized. An image processing device comprises a recording section for recording the current frame image to be processed and the past and future frame images; a first pixel extraction section for extracting pixels in a predetermined area of the current frame image; a second pixel extraction section for extracting the pixels in the areas of the past and future frame images corresponding to the predetermined area; a first distance calculation section for calculating the spatiotemporal distances between a pixel of interest in the predetermined area, the pixels in the predetermined area, and the pixels of the corresponding areas; a second distance calculation section for calculating the inter-pixel value distances between the pixel value of the pixel of interest in the predetermined area, the pixel values of the pixels in the predetermined area, and the pixel values of the pixels in the corresponding areas; and a noise reduction section for reducing the noise of the current frame image on the basis of the spatiotemporal distances and the inter-pixel value distances.
PCT/JP2008/072135 2007-07-11 2008-12-05 Image processing device, image processing method, and image processing program WO2010064316A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/JP2008/072135 WO2010064316A1 (fr) 2008-12-05 2008-12-05 Image processing device, image processing method, and image processing program
US13/118,736 US8411205B2 (en) 2007-07-11 2011-05-31 Noise reducing image processing apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2008/072135 WO2010064316A1 (fr) 2008-12-05 2008-12-05 Image processing device, image processing method, and image processing program

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US13/118,736 Continuation US8411205B2 (en) 2007-07-11 2011-05-31 Noise reducing image processing apparatus

Publications (1)

Publication Number Publication Date
WO2010064316A1 true WO2010064316A1 (fr) 2010-06-10

Family

ID=42232981

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2008/072135 WO2010064316A1 (fr) 2007-07-11 2008-12-05 Image processing device, image processing method, and image processing program

Country Status (1)

Country Link
WO (1) WO2010064316A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001097510A1 (fr) * 2000-06-15 2001-12-20 Sony Corporation Image processing system and method, program, and recording medium
JP2004503960A (ja) * 2000-06-15 2004-02-05 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ Noise filtering of image sequences
JP2005160071A (ja) * 2003-11-22 2005-06-16 Samsung Electronics Co Ltd Noise attenuation device and progressive scan conversion device
JP2008205737A (ja) * 2007-02-19 2008-09-04 Olympus Corp Imaging system, image processing program, and image processing method

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012017514A1 (fr) * 2010-08-02 2012-02-09 富士通株式会社 Image processing device, image processing program, and image processing method
US8693794B2 (en) 2010-08-02 2014-04-08 Fujitsu Limited Image processing apparatus and image processing method
JP5761195B2 (ja) * 2010-08-02 2015-08-12 富士通株式会社 Image processing device, image processing program, and image processing method
JP2013062660A (ja) * 2011-09-13 2013-04-04 Kddi Corp Moving image quality restoration device, moving image quality restoration method, and program
CN113170029A (zh) * 2018-12-07 2021-07-23 索尼半导体解决方案公司 Image processing device and image processing method
CN113170029B (zh) * 2018-12-07 2024-04-12 索尼半导体解决方案公司 Image processing device and image processing method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08878576

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 08878576

Country of ref document: EP

Kind code of ref document: A1