WO2015040731A1 - Image processing device and image processing method - Google Patents

Image processing device and image processing method

Info

Publication number
WO2015040731A1
WO2015040731A1 (PCT/JP2013/075436)
Authority
WO
WIPO (PCT)
Prior art keywords
image
pixel
group
correction
input
Prior art date
Application number
PCT/JP2013/075436
Other languages
French (fr)
Japanese (ja)
Inventor
山口 宗明
佑一郎 小宮
Original Assignee
株式会社日立国際電気
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社日立国際電気
Priority to PCT/JP2013/075436
Priority to JP2015537516A
Publication of WO2015040731A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/68 Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/682 Vibration or motion blur correction
    • H04N23/683 Vibration or motion blur correction performed by a processor, e.g. controlling the readout of an image memory
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20172 Image enhancement details
    • G06T2207/20182 Noise reduction or smoothing in the temporal domain; Spatio-temporal filtering
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/66 Remote control of cameras or camera parts, e.g. by remote control devices
    • H04N23/661 Transmitting camera control signals through networks, e.g. control via the Internet

Definitions

  • The present invention relates to an image processing apparatus, and more particularly to an image processing apparatus and an image processing method capable of reducing image degradation caused by image fluctuations such as heat haze.
  • Fluctuation correction is one of the image processing techniques used in monitoring systems and the like. Its purpose is to reduce the influence of fluctuations, typified by heat haze, on a surveillance video.
  • In fluctuation correction techniques, the influence of fluctuation is reduced by, for example, temporal smoothing.
  • FIG. 1 is a diagram illustrating an example of the effect of temporal smoothing of pixel values on an image.
  • As shown in the upper part of FIG. 1, the input images to be corrected are fed to the image processing apparatus at predetermined time intervals, in a time series running from left to right. Only the input images 101, 102, and 103 are shown.
  • The input images 101, 102, and 103 all look like the same image. That is, the subject image 402 in the input image 102 and the subject image 403 in the input image 103 are in fact moving objects such as automobiles, but they did not move during the period in which the images 101 to 103 were captured, and so they lie at the same pixel positions as the subject image 401 in the input image 101. Likewise, the subject images 201 and 301 in the input image 101, the subject images 202 and 302 in the input image 102, and the subject images 203 and 303 in the input image 103 are stationary objects lying at approximately the same pixel positions. However, the stationary moving-object images 401 to 403 and the still-object images 201 to 203 and 301 to 303 in the input images 101, 102, and 103 all appear to waver under the influence of heat haze.
  • The smoothed image 104 in the lower part of FIG. 1 is obtained by smoothing n input images including the input images 101, 102, and 103 (n is a natural number). As the smoothed image 104 shows, the still-object images 204 and 304 and the moving-object image 404 have been corrected for the fluctuation caused by heat haze. However, FIG. 1 assumes ideal processing conditions; in practice, temporal smoothing alone does not completely remove the influence of fluctuation (see Patent Document 1).
  • FIG. 2 is a diagram explaining how the contour of a subject image becomes blurred when only temporal smoothing is applied.
  • The image 111 contains a subject image 211 that is a stationary object and does not fluctuate. For ease of explanation, the subject image 211 is monotone with the minimum luminance value (black), and the other portions of the image 111 have the maximum luminance value (white). Similarly, in the images 121 to 123, the subject images 221 to 223 are monotone with the minimum luminance (black) and the other portions have the maximum luminance (white).
  • Affected by fluctuation such as heat haze, the subject image 211 appears as the wavering subject images 221 to 223 in the n images 121 to 123.
  • When temporal smoothing is applied to these n images 121 to 123, the subject image 211 of the image 111 becomes like the subject image 224 shown in the image 124: only the central region a2 of the subject image keeps the minimum luminance (black), while the end regions a1 and a3 (in this case, left and right) take a value midway between the maximum and the minimum (gray) and blur. In addition, the pixel size becomes larger than that of the original subject image 211. That is, the subject image 224 appears blurred.
  • Patent Document 1 discloses a technique for a multifunction device having a printer function, a scanner function, and a color copy function in which (1) a plurality of types of model images are input, their constituent pixels are classified into six regions according to hue value, and a first feature amount representing the overall features of the model images is specified, with the average value of the constituent pixels belonging to each region taken as a representative value; (2) the constituent pixels of the correction target image are likewise classified into six regions according to hue value, and the average value of the constituent pixels belonging to each region is specified as a second feature amount representing the features of the correction target image; and (3) color conversion processing is performed so that each representative value of the second feature amount approaches the corresponding representative value of the first feature amount.
  • Patent Document 2 discloses a technique in which dark regions of an output image are extracted; pixels that are outside the color reproduction range and lie in a dark region are classified into one or more clusters; whether each pixel of the output image is noise is determined; features are extracted for the pixels judged to be noise in each cluster to determine whether noise is present; the color conversion method for clusters judged to contain noise is made different from the method used when no noise is judged to be present; and color conversion processing is then performed by the determined method. By analyzing the noise contained in the pixels of a cluster, a mapping direction that reduces the noise of that cluster is determined, making the noise less conspicuous. However, neither Patent Document 1 nor Patent Document 2 treats a moving image as the correction target, nor do they disclose correcting the subjects in an image by distinguishing stationary objects from moving objects.
  • Non-Patent Document 1 discloses a technique for reducing noise by fusing images from a plurality of different sensors using a gradient pyramid. It states that fusion by the gradient pyramid is more effective than fusion by the Laplacian pyramid and can also be applied to video sequences. With this technique, however, although noise due to a moving object can be removed from the background, the moving object itself is also removed, so it cannot be applied to image processing for monitoring moving objects.
  • As described above, the conventional fluctuation correction technique has the problem that, although a fluctuation correction effect is obtained, the image becomes blurred.
  • In view of this problem, an object of the present invention is to provide an image processing apparatus and an image processing method capable of correcting fluctuation caused by heat haze and the like while reducing image blur.
  • To achieve the above object, an image processing apparatus of the present invention includes: an image memory that stores input images input along a time series and outputs, from the stored images, a correction target image and a plurality of correction images stored along the time series; a pixel grouping unit that extracts the pixels at the same position from the input image or the correction target image and from the plurality of correction images, groups them by luminance, calculates the pixel information at the position subject to correction processing and the pixel frequency of each group, and outputs the calculated pixel frequencies and pixel information indicating the position of each group; a pixel group selection unit that selects the group with the maximum frequency using the input pixel frequencies, compares the pixel frequency with a predetermined first threshold to decide whether temporal smoothing should instead be performed with all pixels, thereby selects the pixel group to be used for the temporal smoothing process, and outputs the pixel information of the selected group; a fluctuation correction unit that corrects fluctuation by temporal smoothing using the pixel information of the selected group; and a control unit that controls the elements constituting the apparatus. Correcting the input image or the correction target image with this configuration is a first feature of the present invention.
  • In the image processing apparatus of the first feature, the pixel grouping unit classifies the pixels into a plurality of groups based on the luminance and the first threshold, and further classifies them into a plurality of groups based on the color difference and a second threshold; this is a second feature of the present invention.
  • In the image processing apparatus of the first or second feature, the pixel grouping unit outputs the luminance value information, or the luminance value information and color difference information, of each group together with the pixel frequency as group information; this is a third feature of the present invention.
  • To achieve the above object, an image processing method of the present invention includes: an image input step of storing input images input along a time series and outputting, from the stored images, a correction target image and a plurality of correction images stored along the time series; an image grouping step of extracting the pixels at the same position from the input image or the correction target image and from the plurality of correction images, grouping them by luminance, calculating the pixel information at the position subject to correction processing and the pixel frequency of each group, and outputting the calculated pixel frequencies and pixel information indicating the position of each group; a group selection step of selecting the group with the maximum frequency using the input pixel frequencies, comparing the pixel frequency with a predetermined threshold to decide whether temporal smoothing should instead be performed with all pixels, thereby selecting the pixel group to be used for the temporal smoothing process, and outputting the pixel information of the selected group; and a fluctuation correction step of correcting fluctuation by temporal smoothing using the pixel information of the selected group, whereby a corrected image is generated from the input image or the correction target image.
  • The applicant filed a fluctuation correction technique using temporal smoothing as Japanese Patent Application No. 2012-107470 (hereinafter referred to as the “prior application”). The prior application suppresses the influence of fluctuation by temporally smoothing a plurality of frames (hereinafter referred to as “correction images”) that are close in time to the correction target image. In doing so, a per-pixel similarity is obtained between the correction target image and the correction images, or between images obtained by mixing the correction target image and the correction images, and the strength of the temporal smoothing is adjusted according to the result.
  • The outline of conventional fluctuation correction is explained with reference to FIGS. 3 and 4, which illustrate the effect of temporal smoothing of pixel values on an image.
  • For simplicity, as in FIG. 1, the explanation uses monotone images in which the luminance value is either the maximum (white) or the minimum (black).
  • As in FIG. 1, the input images to be corrected are fed to the image processing apparatus at predetermined time intervals, in a time series running from left to right.
  • FIG. 3 is a diagram explaining the effect of conventional temporal smoothing on an image. The subject in the correction target image is corrected for the influence of fluctuation by temporal smoothing; however, as FIG. 4 shows, processing by temporal smoothing alone leaves the contour of the subject blurred because of the fluctuation.
  • The upper input image 101 in FIG. 3 is the same as the image described with FIG. 1, except that the subject image 401, a moving object, moves as shown by 402' and 403' in the input images 102' and 103'.
  • In the region 800 where the subject of the correction target image is stationary, as with the subject images 204' and 304' of the smoothed image 104' in the lower part of FIG. 3, the distortion caused by air fluctuation is corrected by temporal smoothing. However, when temporal smoothing is applied to the region 801 where the subject is moving, as with the subject image 404', not only the pixel displacement due to the distortion but also the pixel displacement due to the subject's movement is smoothed, so smearing appears around the moving subject. In other words, temporal smoothing of pixel values degrades the video when a moving subject is present in it.
  • FIG. 4 explains the conventional fluctuation correction further. For simplicity it is again monotone, as in FIG. 1, but the luminance values are expressed as 256 gradations from 0 to 255.
  • In FIG. 4, relative to the fluctuation-free images 141 to 146, heat haze and the like make the image 141 waver as in the image 151. Likewise, the image 152 is the fluctuating counterpart of the image 142, the image 153 of the image 143, the image 154 of the image 144, the image 155 of the image 145, and the image 156 of the image 146. In this way, the pixel values of the fluctuation-free images 141 to 146 are assumed to change over time, as in the fluctuating images 151 to 156.
  • Each cell in the images 140 to 146, 151 to 156, and 257 represents one pixel. In the images 141 to 146 and 151 to 156, the white cells have a luminance value of 255 and the black cells have a luminance value of 0.
  • When temporal smoothing is applied to the n temporally consecutive fluctuating images 151 to 156, pixels are corrected to gray values such as the 213, 128, and 43 shown in the image 257. That is, because the conventional correction uses all pixels for temporal smoothing, the originally white and black portions also have black or white pixels mixed in by the fluctuation and turn gray.
  • Counting, for each pixel of the six images 151 to 156, how many frames are white and how many are black: the region B1 (the pixels in the first and second columns from the left) is white in all six images, and the region B5 (the pixels in the sixth and seventh columns from the left) is black in all six. In the region B2 (the pixels in the third column from the left), black appears once and white five times, so the pixel value after temporal smoothing is 213 (rounded to the nearest integer); in the region B4 (the pixels in the fifth column from the left), white appears once and black five times, giving 43; and in the region B3 (the pixels in the fourth column from the left), white and black each appear three times, giving 128.
  • Thus a pixel with a luminance value of 213 is one that should really be white but has a little black mixed in, a pixel with a value of 43 should really be black but has a little white mixed in, and a pixel with a value of 128 has white and black mixed in equal parts. As a result, the gray portion widens and the contour of the subject image becomes blurred. In this case, as shown by b1 to b5 in the image 140, a gradation appears that darkens from white to black going from left to right: b1 is the region with luminance value 255, b2 the region with 213, b3 the region with 128, b4 the region with 43, and b5 the region with 0. The image therefore looks blurred in the regions b2 to b4.
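  • The smoothed luminance values above can be checked with a few lines of code. The sketch below is a minimal illustration using the counts given in the text; it is not code from the patent.

```python
# A minimal check of the plain temporal averaging described for FIG. 4,
# assuming six binary frames (255 = white, 0 = black) per column.
# Per-column counts follow the text: B1 all white, B2 five white / one black,
# B3 three / three, B4 one white / five black, B5 all black.
import numpy as np

columns = {
    "B1": [255, 255, 255, 255, 255, 255],
    "B2": [255, 255, 255, 255, 255, 0],
    "B3": [255, 255, 255, 0, 0, 0],
    "B4": [255, 0, 0, 0, 0, 0],
    "B5": [0, 0, 0, 0, 0, 0],
}

for name, values in columns.items():
    smoothed = int(np.floor(np.mean(values) + 0.5))  # round half up, as in the text
    print(name, smoothed)  # B1 255, B2 213, B3 128, B4 43, B5 0
```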
  • FIGS. 5 and 6 are diagrams for explaining an embodiment of the image processing method of the present invention.
  • As before, the input images are fed to the image processing apparatus at predetermined time intervals, in a time series running from left to right.
  • In the present invention, before temporal smoothing, the pixels at the same position are divided into a plurality of groups using the luminance and color difference of the pixels, and the frequency (number of pixels) of each group is calculated. If the frequency of the most frequent group exceeds a predetermined threshold, temporal smoothing is performed using only the information of that group; otherwise, temporal smoothing is performed using all pixels.
  • In the example below, each pixel has a luminance value from 0 (black) to 255 (white), the pixels are divided into two groups with 128 as the threshold, and the frequency threshold is 4; a minimal code sketch of this procedure follows.
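  • The sketch below (a minimal Python illustration with assumed function and variable names, not code from the patent) applies the rule just described to the samples of one pixel position: group by the luminance threshold 128, pick the most frequent group, and fall back to all pixels when that group's frequency is below the frequency threshold 4.

```python
import numpy as np

def smooth_pixel(samples, split=128, freq_threshold=4):
    """samples: luminance values of one pixel position across the 1+n frames."""
    samples = np.asarray(samples, dtype=np.float64)
    groups = [samples[samples >= split],   # bright ("white") group
              samples[samples < split]]    # dark ("black") group
    best = max(groups, key=len)            # group with the highest pixel frequency
    # Use only the dominant group if its frequency reaches the threshold;
    # otherwise fall back to temporal smoothing over all pixels.
    selected = best if len(best) >= freq_threshold else samples
    return int(np.floor(selected.mean() + 0.5))
```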
  • The fluctuation-free images 141 to 146 and the fluctuating images 251 to 256 in FIG. 5 are the same as the images shown in FIG. 4. Of the two groups, group A is the white group and group B is the black group.
  • The group to which the pixel at each cell position belongs is shown in the grouping diagrams 511 to 516. The pixels in the first and second columns from the left are white in all of the images 501 to 506 and therefore belong to group A.
  • The pixels in the third column from the left also belong to group A, because the frequency of white across the images 501 to 506 is 5.
  • The pixels in the regions 521 to 526, in the fourth column from the left, have a frequency of 3 for both white and black and therefore belong to neither group.
  • The pixels in the fifth column from the left belong to group B because the frequency of black is 5, and the remaining two columns (the sixth and seventh from the left) are black in every image and also belong to group B.
  • In FIG. 6, the images 141 to 146 and 151 to 156 are the same as those shown in FIGS. 4 and 5, and the group selection images 551 to 556 are the same as those in FIG. 5. FIG. 6 explains the image 647 obtained by smoothing the fluctuating images 151 to 156.
  • The pixels in the first and second columns from the left are white in all of the images 501 to 506, so their pixel value is 255. In the third column from the left the frequency of white pixels is high, so temporal smoothing is performed using only the white pixels and the pixel value also becomes 255.
  • The pixels in the region G3, in the sixth and seventh columns from the left, are black in all of the images 501 to 506, so their pixel value is 0. In the fifth column from the left the frequency of black pixels is high, so temporal smoothing is performed using only the black pixels and the pixel value becomes 0.
  • In the fourth column from the left, the frequency of black pixels equals the frequency of white pixels, so all pixels are used and the pixel value becomes 128. Consequently, in the final corrected image 640, obtained after the input image (or an image close in time to the input image) is taken as the correction target image and subjected to the correction processing, only the fourth column from the left is gray (region g2) between the white pixel region g1 and the black pixel region g3, and the blurring is reduced.
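  • Applying the smooth_pixel sketch shown after the method overview above to the seven columns of this toy example reproduces the same result; the column patterns below are assumptions made only for illustration.

```python
toy_columns = [
    [255] * 6,                       # columns 1-2: white in all frames
    [255] * 6,
    [255, 255, 255, 255, 255, 0],    # column 3: white frequency 5
    [255, 255, 255, 0, 0, 0],        # column 4: 3 white, 3 black
    [255, 0, 0, 0, 0, 0],            # column 5: black frequency 5
    [0] * 6,                         # columns 6-7: black in all frames
    [0] * 6,
]
print([smooth_pixel(c) for c in toy_columns])
# -> [255, 255, 255, 128, 0, 0, 0]; only the fourth column ends up gray,
#    matching the reduced blur of the corrected image 640 in FIG. 6.
```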
  • FIG. 7 is a block diagram showing the configuration of an embodiment of the image processing apparatus of the present invention.
  • The input image 10 is fed to the image grouping unit 1 and the image memory 2 via the input unit 4i.
  • The image memory 2 stores n + 1 input images 10 (n is a natural number of 2 or more). From the stored images, the image memory 2 extracts a desired image as the correction target image 12 together with n correction images 11, and supplies the extracted correction target image 12 and n correction images 11 to the image grouping unit 1.
  • The correction target image 12 is the input image 10 or an image temporally close to the input image 10; how many frames before the current input image 10 the correction target image 12 is taken from is set in advance by the operator or selected as appropriate.
  • The input image 10 is, for example, video captured at predetermined time intervals along a time series by one of the imaging devices used in the monitoring system.
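  • A minimal sketch of this buffering behaviour is shown below; the class name and the delay parameter are illustrative assumptions, not taken from the patent.

```python
from collections import deque

class ImageMemory:
    """Keeps the latest n+1 frames and hands back a correction target image
    plus n correction images, as described for the image memory 2."""

    def __init__(self, n, delay=0):
        self.frames = deque(maxlen=n + 1)  # stores n + 1 input images
        self.delay = delay                 # how many frames behind the newest
                                           # the correction target image is

    def push(self, frame):
        self.frames.append(frame)

    def fetch(self):
        """Return (correction_target_image, correction_images) or None."""
        if len(self.frames) < self.frames.maxlen:
            return None                    # not enough frames stored yet
        frames = list(self.frames)
        target_index = len(frames) - 1 - self.delay
        target = frames[target_index]
        others = [f for i, f in enumerate(frames) if i != target_index]
        return target, others
```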
  • The control unit 5 is connected to the elements constituting the image processing apparatus 70 (the image grouping unit 1, the image memory 2, the image group selection unit 3, the fluctuation correction unit 4, the input unit 4i, the output unit 4o, and the operation input unit 5io) and controls each element according to a processing program stored in a built-in storage unit or an external storage unit (not shown) connected to the control unit 5. From the operation input unit 5io, the operator performs operations such as selecting the correction target image and changing the threshold values.
  • The pixel grouping unit 1 extracts the pixels at the same position from a plurality of images and groups them by luminance and color difference. That is, the pixel grouping unit 1 takes out the pixel information 13 at the position subject to correction processing from the input image 10 supplied from the input unit 4i (this input is not essential) and from the correction target image 12 and the correction images 11 supplied from the image memory 2.
  • The pixel information 13 is basically the pixel value of one pixel in a predetermined color space representation (RGB, YCbCr, HSV, or the like), the same representation as that of the input image 10.
  • The pixel grouping unit 1 performs grouping using the 1 + n pieces of pixel information 13 extracted for one pixel position and thresholds on luminance value and color difference value that are set either fixedly or adaptively, and calculates the number of pixels in each group (the frequency with which pixels appear in the time domain).
  • The pixel grouping unit 1 then sequentially outputs, to the image group selection unit 3, group information 14 comprising the calculated pixel frequencies and the 1 + n pieces of pixel information 13 at the position for which they were obtained.
  • The group information 14 does not have to hold a pixel frequency in the strict sense; it may instead hold information indicating which pixel information 13 has been classified into which group, i.e., the result immediately after grouping, without counting.
  • The pixel group selection unit 3 selects the group with the highest frequency, using the pixel frequency of each group contained in the group information 14 that is input sequentially for each pixel position. When several groups share the maximum frequency, all of them are selected. The pixel frequency is then compared with the threshold Tf: if it is equal to or greater than the threshold, the pixel information 15 of the selected pixel group is output to the fluctuation correction unit 4; if it is less than the threshold, all 1 + n pieces of pixel information 15 are output to the fluctuation correction unit 4.
  • The fluctuation correction unit 4 has a function of correcting fluctuation using only the pixel information supplied from the pixel group selection unit 3.
  • Any well-known algorithm can be used for the fluctuation correction. In this example, however, the number of pieces of pixel information changes with the pixel position, so gaps left by pixels that have not undergone fluctuation correction must be handled appropriately. A method that obtains the time-smoothed image by simply averaging the pixel values at different times copes easily with such changes in the number of pixels.
  • The fluctuation correction unit 4 performs the fluctuation correction processing sequentially for each pixel position to generate a corrected image, and outputs the generated image via the output unit 4o as the output image 20.
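  • A sketch of this selection rule and the averaging fallback is shown below; it assumes that the groups are given as lists of pixel values covering all 1 + n samples and that tf corresponds to the frequency threshold Tf. Ties at the maximum frequency select all tied groups, as described above.

```python
import numpy as np

def select_and_correct(groups, tf):
    """groups: list of 1-D arrays, one per group, covering all 1+n samples."""
    groups = [np.asarray(g, dtype=np.float64) for g in groups if len(g) > 0]
    max_freq = max(len(g) for g in groups)
    if max_freq >= tf:
        # Use only the group(s) with the maximum pixel frequency.
        selected = np.concatenate([g for g in groups if len(g) == max_freq])
    else:
        # Below the threshold: fall back to time smoothing over all pixels.
        selected = np.concatenate(groups)
    return selected.mean()   # simple averaging copes with a varying pixel count
```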
  • Pixel grouping classifies pixels based on the information of each pixel, namely its brightness and color.
  • In this embodiment, the classification uses luminance and color difference: the pixels are first classified into a plurality of groups by luminance, and each group is further subdivided by color difference. For example, if four thresholds are used to form five groups by luminance value and one threshold is used for each of the two color difference components, the pixels are classified into 20 groups.
  • Since the luminance value is generally the most important, the classification may also be performed using the luminance alone.
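  • A sketch of such a 5 x 2 x 2 = 20-group classification is shown below; the threshold values are placeholders chosen for illustration, not values given in the patent.

```python
import numpy as np

Y_THRESHOLDS = (51, 102, 153, 204)   # 4 thresholds -> 5 luminance bins
CB_THRESHOLD = 128                   # 1 threshold per colour-difference component
CR_THRESHOLD = 128

def group_index(y, cb, cr):
    y_bin = int(np.searchsorted(Y_THRESHOLDS, y, side="right"))  # 0..4
    cb_bin = int(cb >= CB_THRESHOLD)                             # 0 or 1
    cr_bin = int(cr >= CR_THRESHOLD)                             # 0 or 1
    return y_bin * 4 + cb_bin * 2 + cr_bin                       # 0..19
```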
  • These thresholds need not be fixed values; they are preferably set adaptively for each pixel.
  • For example, the average and variance of the pixel values over the 1 + n frames are obtained (the output image of the fluctuation correction unit 4 can be substituted for the average), and a threshold is set above and below the average for each of the Y, Cb, and Cr components.
  • The variance may be computed once for the whole screen rather than for each pixel, or a value obtained by multiplying the average by a predetermined coefficient may be used instead.
  • Alternatively, grouping may be performed in just two ways, according to whether or not all components are close to the average value.
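  • A sketch of this adaptive threshold setting is shown below; the scale factor k is an assumed parameter, and the function names are illustrative only.

```python
import numpy as np

def adaptive_thresholds(samples_ycbcr, k=1.0):
    """samples_ycbcr: array of shape (1+n, 3) holding Y, Cb, Cr per frame."""
    samples = np.asarray(samples_ycbcr, dtype=np.float64)
    mean = samples.mean(axis=0)
    std = samples.std(axis=0)              # dispersion per component
    return mean - k * std, mean + k * std  # lower and upper thresholds

def near_mean_group(sample, lower, upper):
    # Two-way grouping: 1 if every component lies close to the average, else 0.
    sample = np.asarray(sample, dtype=np.float64)
    return int(np.all((sample >= lower) & (sample <= upper)))
```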
  • The pixel information 15 used for fluctuation correction is input from the image group selection unit 3 to the fluctuation correction unit 4.
  • The pixel information 15 is the pixel information selected for temporal smoothing from the pixel information at the same position in images of different times. Smoothing is therefore performed using the input pixel information 15; the smoothed result is the pixel information at the corresponding position of the image to be output, and the fluctuation correction unit 4 outputs it as the corrected image via the output unit 4o.
  • In the above, the pixels used for grouping and for temporal smoothing are selected for each pixel, but this may also be done for each block of pixels. Moreover, not only the frequency in the time domain but also a spatio-temporal frequency can be used: for example, for the pixel at a given correction position, a histogram (frequency) may be computed that also includes the pixels spatially adjacent to that pixel.
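  • A sketch of this spatio-temporal variant is shown below; the 3 x 3 spatial window is an assumed choice made only for illustration.

```python
import numpy as np

def spatiotemporal_frequencies(frames, y, x, split=128, radius=1):
    """frames: array (1+n, H, W) of luminance; returns (bright_count, dark_count)
    over the time axis and a (2*radius+1)^2 spatial neighbourhood of (y, x)."""
    frames = np.asarray(frames)
    _, h, w = frames.shape
    y0, y1 = max(0, y - radius), min(h, y + radius + 1)
    x0, x1 = max(0, x - radius), min(w, x + radius + 1)
    patch = frames[:, y0:y1, x0:x1].ravel()
    bright = int(np.count_nonzero(patch >= split))
    return bright, patch.size - bright
```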
  • FIG. 8 is a functional block diagram of an imaging apparatus 1500 according to the second embodiment of the present invention.
  • The imaging apparatus 1500 includes an imaging unit 1501, the image processing apparatus 70, and an image display unit 1502.
  • The imaging unit 1501 is an imaging device that receives light coming from a subject and converts the received optical image into image data.
  • The image processing apparatus 70 is the image processing apparatus of the first embodiment; it receives the image data captured by the imaging unit 1501 and corrects the distortion caused by air fluctuation due to heat haze and the like.
  • The image display unit 1502 is a device that displays the corrected image output from the image processing apparatus 70. The image display unit 1502 switches the displayed image according to the operation mode: for example, when the mode is 1, the corrected image with reduced air fluctuation due to heat haze is displayed, and when the mode is 0, the uncorrected input image is displayed.
  • In this way, an imaging apparatus can be provided that shows the photographer a corrected image in which the air fluctuation caused by heat haze and the like is reduced over the entire image, including both the stationary regions and the moving objects.
  • FIG. 9 is a functional block diagram of a monitoring system 1600 according to the third embodiment of the present invention.
  • The monitoring system 1600 includes an imaging device 1601, the image processing apparatus 70, a server 1602, and a display device 1603.
  • The imaging device 1601 is one or more imaging devices, such as surveillance cameras, that capture image data.
  • The image processing apparatus 70 is the image processing apparatus of the first embodiment; it receives the image data captured by the imaging device 1601 and corrects the air fluctuation caused by heat haze and the like.
  • The server 1602 is a computer equipped with the image processing apparatus 70, and the display device 1603 is a device that displays the corrected image output from the image processing apparatus 70.
  • The imaging device 1601 and the server 1602, and the server 1602 and the display device 1603, can be connected via a network such as the Internet, depending on the physical arrangement of the monitored site and the monitoring operator.
  • The monitoring system 1600 can thus provide a monitoring system that shows the monitoring operator a corrected image in which the air fluctuation caused by heat haze is reduced over the entire image, including both the stationary regions and the moving objects.
  • FIG. 10 is a functional block diagram of a code decoding system 1700 according to the fourth embodiment of the present invention.
  • The code decoding system 1700 includes an encoding device 1710, a decoder 1721, and a display device 1722. The encoding device 1710 includes an imaging device 1711, the image processing apparatus 70, and an encoder 1712.
  • The imaging device 1711 is an imaging device, such as a surveillance camera, that captures image data.
  • The image processing apparatus 1704 is the image processing apparatus of the first embodiment; it receives the image data captured by the imaging device 1711 and corrects the heat haze.
  • The encoder 1712 encodes the corrected image data output from the image processing apparatus 70 and transmits the encoded image data to the decoder 1721 via a network. The decoder 1721 decodes the transmitted corrected image data, and the display device 1722 displays the image decoded by the decoder 1721.
  • With the code decoding system 1700, a code decoding system can be provided that displays a decoded image in which the air fluctuation caused by heat haze and the like is reduced over the entire image, including both the stationary regions and the moving objects. Furthermore, reducing the air fluctuation caused by heat haze in the image reduces the differences between the images to be transmitted by the encoder 1712, so the encoding efficiency is improved.
  • FIG. 11 is a flowchart for explaining an embodiment of the image processing method of the present invention. In the following, the control unit 5 of FIG. 7 controls each element in the apparatus. The control unit 5 has a processing program for performing the image processing of the present invention and performs the image processing based on that program; the thresholds of the processing program are assumed to be set to predetermined values in advance.
  • In the image input step S01, input images supplied from outside in time series are stored in the image memory, and the correction target image and a plurality of correction images stored along the time series are output from the stored images.
  • In the image grouping step S02, the pixels at the same position are extracted from the input image or the correction target image and from the plurality of correction images, grouped by luminance, and the pixel information at the position subject to correction processing and the pixel frequency of each group are calculated; pixel information indicating the calculated pixel frequencies and the positions of the groups is then output. That is, in the image grouping step S02, the pixel information at the position corresponding to the correction processing is extracted from the input image, or from the stored correction target image and the stored correction images. Using the extracted pixel information, the pixel frequency of each of the groups defined in advance by, for example, the luminance value of the pixel is calculated, and group information comprising the calculated pixel frequencies and pixel information indicating the processing position (the position of the group) is output. Besides the pixel frequency, the group information contains, for example, the luminance value information, or the luminance values and color differences, of the groups.
  • In the following group selection step, the group with the maximum frequency is selected using the input pixel frequencies, and the pixel frequency is compared with a predetermined threshold to decide whether temporal smoothing should be performed with all pixels. That is, the group with the maximum frequency is selected using the pixel frequency of each group contained in the input group information, the pixel frequency is compared with the threshold, whether to perform temporal smoothing with all pixels is decided, the pixel group to be used for the temporal smoothing process is selected, and the pixel information of the selected pixel group is output.
  • In the fluctuation correction step S04, fluctuation correction is performed by temporal smoothing using the pixel information of the selected group. That is, fluctuation correction by temporal smoothing is performed using the input pixel information of the group, the fluctuation correction processing is applied to all pixels in the processing target image, and a corrected image is generated and output.
  • Wide-ranging corrections are possible, for example for water surface fluctuation, swaying of trees, atmospheric haze, and image distortion during underwater photography; the invention can be used broadly to improve visibility by stilling objects that shake irregularly.
  • Each of the above configurations, functions, processing units, processing means, and the like may be realized in hardware by designing part or all of them as, for example, an integrated circuit.
  • Each of the above configurations, functions, and the like may also be realized in software by having a processor interpret and execute a program that implements each function.
  • Information such as the programs, tables, and files that implement the functions can be stored in a recording device such as a memory, a hard disk, or an SSD (Solid State Drive), or on a recording medium such as an IC card, an SD card, or a DVD.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

A problem with the conventional art is that, although a fluctuation correction effect is obtained, blur arises in the image. With the present invention, before temporal smoothing, the pixels at the same location are grouped into a plurality of groups using the brightness and color difference of the pixels, and the frequency of the number of pixels in each group is computed. If the frequency of the group having the greatest frequency exceeds a threshold, the temporal smoothing is carried out using only the information of that group.

Description

Image processing apparatus and image processing method
The present invention relates to an image processing apparatus, and more particularly to an image processing apparatus and an image processing method capable of reducing image degradation caused by image fluctuations such as heat haze.
Fluctuation correction is one of the image processing techniques used in monitoring systems and the like. Its purpose is to reduce the influence of fluctuations, typified by heat haze, on a surveillance video. In fluctuation correction techniques, the influence of fluctuation is reduced by, for example, temporal smoothing.
FIG. 1 is a diagram illustrating an example of the effect of temporal smoothing of pixel values on an image.
As shown in the upper part of FIG. 1, the input images to be corrected are fed to the image processing apparatus at predetermined time intervals, in a time series running from left to right; only the input images 101, 102, and 103 are shown. The input images 101, 102, and 103 all look like the same image. That is, the subject image 402 in the input image 102 and the subject image 403 in the input image 103 are in fact moving objects such as automobiles, but they did not move during the period in which the images 101 to 103 were captured, and so they lie at the same pixel positions as the subject image 401 in the input image 101. Likewise, the subject images 201 and 301 in the input image 101, the subject images 202 and 302 in the input image 102, and the subject images 203 and 303 in the input image 103 are stationary objects lying at approximately the same pixel positions. However, the stationary moving-object images 401 to 403 and the still-object images 201 to 203 and 301 to 303 in the input images 101, 102, and 103 all appear to waver under the influence of heat haze.
The smoothed image 104 in the lower part of FIG. 1 is obtained by smoothing n input images including the input images 101, 102, and 103 (n is a natural number). As the smoothed image 104 shows, the still-object images 204 and 304 and the moving-object image 404 have been corrected for the fluctuation caused by heat haze. However, FIG. 1 assumes ideal processing conditions; in practice, temporal smoothing alone does not completely remove the influence of fluctuation (see Patent Document 1).
FIG. 2 is a diagram explaining how the contour of a subject image becomes blurred when only temporal smoothing is applied. The image 111 contains a subject image 211 that is a stationary object and does not fluctuate. For ease of explanation, in the image 111 the subject image 211 is monotone with the minimum luminance value (black), and the other portions have the maximum luminance value (white). Similarly, in the images 121 to 123, the subject images 221 to 223 are monotone with the minimum luminance (black) and the other portions have the maximum luminance (white).
Affected by fluctuation such as heat haze, the subject image 211 appears as the wavering subject images 221 to 223 in the n images 121 to 123. The images 121 to 123 in FIG. 2 are also fed to the image processing apparatus at predetermined time intervals, in a time series running from left to right, like the input images 101 to 103 in FIG. 1.
When temporal smoothing is performed on these n images 121 to 123, the subject image 211 in the image 111 becomes like the subject image 224 shown in the image 124: only the central region a2 of the subject image keeps the minimum luminance (black), while the end regions a1 and a3 (in this case, left and right) take a value midway between the maximum and the minimum (gray) and blur. In addition, the pixel size becomes larger than that of the original subject image 211. That is, the subject image 224 appears blurred.
JP 2009-059202 A; JP 2010-263552 A
Patent Document 1 discloses a technique for a multifunction device having a printer function, a scanner function, and a color copy function in which (1) a plurality of types of model images are input, their constituent pixels are classified into six regions according to hue value, and a first feature amount representing the overall features of the model images is specified, with the average value of the constituent pixels belonging to each region taken as a representative value; (2) the constituent pixels of the correction target image are likewise classified into six regions according to hue value, and the average value of the constituent pixels belonging to each region is specified as a second feature amount representing the features of the correction target image; and (3) color conversion processing is performed so that each representative value of the second feature amount approaches the corresponding representative value of the first feature amount.
Patent Document 2 discloses a technique in which dark regions of an output image are extracted; pixels that are outside the color reproduction range and lie in a dark region are classified into one or more clusters; whether each pixel of the output image is noise is determined; features are extracted for the pixels judged to be noise in each cluster to determine whether noise is present; the color conversion method for clusters judged to contain noise is made different from the method used when no noise is judged to be present; and color conversion processing is then performed by the determined method. By analyzing the noise contained in the pixels of a cluster, a mapping direction that reduces the noise of that cluster is determined, making the noise less conspicuous.
However, neither Patent Document 1 nor Patent Document 2 treats a moving image as the correction target, nor do they disclose correcting the subjects in an image by distinguishing stationary objects from moving objects.
Non-Patent Document 1 discloses a technique for reducing noise by fusing images from a plurality of different sensors using a gradient pyramid. It states that fusion by the gradient pyramid is more effective than fusion by the Laplacian pyramid and can also be applied to video sequences. With this technique, however, although noise due to a moving object can be removed from the background, the moving object itself is also removed, so it cannot be applied to image processing for monitoring moving objects.
As described above, the conventional fluctuation correction technique has the problem that, although a fluctuation correction effect is obtained, the image becomes blurred.
In view of this problem, an object of the present invention is to provide an image processing apparatus and an image processing method capable of correcting fluctuation caused by heat haze and the like while reducing image blur.
To achieve the above object, an image processing apparatus of the present invention includes: an image memory that stores input images input along a time series and outputs, from the stored images, a correction target image and a plurality of correction images stored along the time series; a pixel grouping unit that extracts the pixels at the same position from the input image or the correction target image and from the plurality of correction images, groups them by luminance, calculates the pixel information at the position subject to correction processing and the pixel frequency of each group, and outputs the calculated pixel frequencies and pixel information indicating the position of each group; a pixel group selection unit that selects the group with the maximum frequency using the input pixel frequencies, compares the pixel frequency with a predetermined first threshold to decide whether temporal smoothing should instead be performed with all pixels, thereby selects the pixel group to be used for the temporal smoothing process, and outputs the pixel information of the selected group; a fluctuation correction unit that corrects fluctuation by temporal smoothing using the pixel information of the selected group; and a control unit that controls the elements constituting the apparatus. Correcting the input image or the correction target image with this configuration is a first feature of the present invention.
In the image processing apparatus of the first feature, the pixel grouping unit classifies the pixels into a plurality of groups based on the luminance and the first threshold, and further classifies them into a plurality of groups based on the color difference and a second threshold; this is a second feature of the present invention.
In the image processing apparatus of the first or second feature, the pixel grouping unit outputs the luminance value information, or the luminance value information and color difference information, of each group together with the pixel frequency as group information; this is a third feature of the present invention.
To achieve the above object, an image processing method of the present invention includes: an image input step of storing input images input along a time series and outputting, from the stored images, a correction target image and a plurality of correction images stored along the time series; an image grouping step of extracting the pixels at the same position from the input image or the correction target image and from the plurality of correction images, grouping them by luminance, calculating the pixel information at the position subject to correction processing and the pixel frequency of each group, and outputting the calculated pixel frequencies and pixel information indicating the position of each group; a group selection step of selecting the group with the maximum frequency using the input pixel frequencies, comparing the pixel frequency with a predetermined threshold to decide whether temporal smoothing should instead be performed with all pixels, thereby selecting the pixel group to be used for the temporal smoothing process, and outputting the pixel information of the selected group; and a fluctuation correction step of correcting fluctuation by temporal smoothing using the pixel information of the selected group, whereby a corrected image is generated from the input image or the correction target image.
According to the present invention, it is possible to reduce the blurring that occurs at the contour of a subject while correcting the fluctuation.
FIG. 1 shows an example of the effect of temporal smoothing on an image. FIG. 2 illustrates how the contour of a subject in an image becomes blurred when only temporal smoothing is applied. FIGS. 3 and 4 show examples of the effect of temporal smoothing of pixel values on a video. FIGS. 5 and 6 are diagrams for explaining an embodiment of the image processing method of the present invention. FIG. 7 is a block diagram showing the configuration of an embodiment of the image processing apparatus of the present invention. FIG. 8 is a functional block diagram of an imaging apparatus 1500 according to the second embodiment of the present invention. FIG. 9 is a functional block diagram of a monitoring system 1600 according to the third embodiment of the present invention. FIG. 10 is a functional block diagram of a code decoding system 1700 according to the fourth embodiment of the present invention. FIG. 11 is a flowchart for explaining an embodiment of the image processing method of the present invention.
Hereinafter, an embodiment of the present invention will be described with reference to the drawings. In the drawings, common components are given the same reference numerals.
The present invention is not limited to the embodiments described below, and various modifications are included. The following embodiments are described in detail in order to explain the present invention in an easy-to-understand manner, and the invention is not necessarily limited to one having all of the described configurations. Part of the configuration of one embodiment can be replaced with the configuration of another embodiment, the configuration of another embodiment can be added to the configuration of a given embodiment, and other configurations can be added to, deleted from, or substituted for part of the configuration of each embodiment.
The applicant filed a fluctuation correction technique using temporal smoothing as Japanese Patent Application No. 2012-107470 (hereinafter referred to as the “prior application”). The prior application suppresses the influence of fluctuation by temporally smoothing a plurality of frames (hereinafter referred to as “correction images”) that are close in time to the correction target image. In doing so, a per-pixel similarity is obtained between the correction target image and the correction images, or between images obtained by mixing the correction target image and the correction images, and the strength of the temporal smoothing is adjusted according to the result.
The outline of conventional fluctuation correction is explained with reference to FIGS. 3 and 4, which illustrate the effect of temporal smoothing of pixel values on an image. For simplicity, as in FIG. 1, the explanation uses monotone images in which the luminance value is either the maximum (white) or the minimum (black). As in FIG. 1, the input images to be corrected are fed to the image processing apparatus at predetermined time intervals, in a time series running from left to right.
FIG. 3 is a diagram explaining the effect of conventional temporal smoothing on an image. The subject in the correction target image is corrected for the influence of fluctuation by temporal smoothing; however, as shown in FIG. 4, processing by temporal smoothing alone leaves the contour of the subject blurred because of the fluctuation.
The upper input image 101 in FIG. 3 is the same as the image described with FIG. 1, except that the subject image 401, a moving object, moves as shown by 402' and 403' in the input images 102' and 103'.
In the region 800 where the subject of the correction target image is stationary, as with the subject images 204' and 304' of the smoothed image 104' in the lower part of FIG. 3, the distortion caused by air fluctuation is corrected by temporal smoothing. However, when temporal smoothing is applied to the region 801 where the subject is moving, as with the subject image 404', not only the pixel displacement due to the distortion but also the pixel displacement due to the subject's movement is smoothed, so smearing appears around the moving subject. In other words, temporal smoothing of pixel values degrades the video when a moving subject is present in it.
 FIG. 4 further illustrates the outline of conventional fluctuation correction. For simplicity, the description again uses monotone images as in FIG. 1, but here the luminance of each pixel is given as a numerical value on a 256-level scale from 0 to 255.
 In FIG. 4, the fluctuation-free images 141 to 146 are distorted by a heat haze or the like: image 141 fluctuates into image 151, image 142 into image 152, image 143 into image 153, image 144 into image 154, image 145 into image 155, and image 146 into image 156. In this way, the pixel values of the fluctuation-free images 141 to 146 change over time as shown by the fluctuating images 151 to 156. Each cell in the images 140 to 146, 151 to 156, and 257 represents one pixel; the white cells in the images 141 to 146 and 151 to 156 have a luminance value of 255, and the black cells have a luminance value of 0.
 When temporal smoothing is applied to n temporally consecutive fluctuating images such as the images 151 to 156 in FIG. 4, the affected pixels are corrected to gray values such as the luminance values 213, 128, and 43 shown inside the pixels of the image 257. In other words, because the conventional correction uses all pixels for temporal smoothing, originally white or black parts also have fluctuation-induced black or white pixels averaged in and turn gray, as shown in FIG. 4.
 For each pixel position of the six images 151 to 156, count how many frames are white and how many are black. The region B1 (the first and second columns from the left) is white in all six images, and the region B5 (the sixth and seventh columns from the left) is black in all six. In the region B2 (the third column from the left) there is one black frame and five white frames, so the pixel value after temporal smoothing is 213 (rounded to the nearest integer); in the region B4 (the fifth column from the left) there is one white frame and five black frames, giving 43; and in the region B3 (the fourth column from the left) there are three white frames and three black frames, giving 128.
 A pixel with luminance 213 is therefore a pixel that is really white but has a little black mixed in, a pixel with luminance 43 is really black with a little white mixed in, and a pixel with luminance 128 is an even mixture of white and black. As a result, the gray area spreads and the contour of the subject image becomes blurred.
 In this case, as shown by b1 to b5 in the image 140, a gradation appears that darkens from white to black going from left to right: b1 is the region with luminance 255, b2 the region with luminance 213, b3 the region with luminance 128, b4 the region with luminance 43, and b5 the region with luminance 0. The image thus looks blurred in the regions b2 to b4.
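 The gray values quoted above follow directly from simple six-frame averaging, rounding to the nearest integer:

\[
\frac{5\cdot 255+1\cdot 0}{6}\approx 213,\qquad
\frac{3\cdot 255+3\cdot 0}{6}\approx 128,\qquad
\frac{1\cdot 255+5\cdot 0}{6}\approx 43 .
\]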
 An outline of the fluctuation correction according to the present invention will now be described with reference to FIGS. 5 and 6, which illustrate one embodiment of the image processing method of the present invention. For simplicity, the description uses images divided into white and black. In FIGS. 5 and 6 as well, as in FIGS. 3 and 4, the input images are fed to the image processing apparatus at predetermined time intervals, in a time series flowing from left to right.
 In one embodiment of the image processing method of the present invention, as shown in FIGS. 5 and 6, the pixels at the same position are first divided into a plurality of groups using their luminance and color difference before temporal smoothing is performed, and the number of pixels (frequency) falling into each group is calculated. If the frequency of the largest group exceeds a predetermined threshold, temporal smoothing is performed using only the pixels of that group; otherwise, temporal smoothing is performed using all of the pixels.
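 The following is a minimal per-pixel sketch of this selection rule, under simplifying assumptions: the function name and parameters are hypothetical, the split uses luminance only (the method described above can also use color difference), and when the largest group is below the frequency threshold all frames are averaged, as in the conventional smoothing.

```python
import numpy as np

def smooth_pixel(values, group_threshold=128, freq_threshold=4):
    """Temporally smooth one pixel position across a stack of frames."""
    values = np.asarray(values, dtype=np.float64)
    bright = values[values >= group_threshold]
    dark = values[values < group_threshold]
    # Pick the larger group; if it is frequent enough, average only its members.
    largest = bright if bright.size >= dark.size else dark
    if largest.size >= freq_threshold:
        return largest.mean()
    # Otherwise fall back to averaging every frame (conventional smoothing).
    return values.mean()

# The third column of FIG. 5: five white frames and one black frame -> stays white.
print(round(smooth_pixel([255, 255, 0, 255, 255, 255])))   # 255
# The fourth column: three white, three black -> falls back to a full average.
print(round(smooth_pixel([255, 0, 255, 0, 255, 0])))        # 128
```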
 In the embodiment of FIG. 5, each pixel has a luminance value from 0 (black) to 255 (white), the pixels are divided into two groups using 128 as the threshold, and the frequency threshold is set to 4. The fluctuation-free images 141 to 146 and the fluctuating images 251 to 256 in FIG. 5 are the same as the images 141 to 146 and 251 to 256 in FIG. 3.
 Of the two groups, the white pixels form group A and the black pixels form group B. The group to which the pixel at each cell position belongs is as shown in the diagram labeled "grouping."
 Among the pixels in the regions 511 to 516 covering the first to third columns from the left, the pixels in the first and second columns are white in all of the images 501 to 506 and therefore belong to group A. The pixels in the third column from the left also belong to group A because their white frequency is 5 across the images 501 to 506.
 The pixels in the regions 521 to 526 in the fourth column from the left have a black frequency of 3 and a white frequency of 3, and therefore belong to neither group.
 Among the pixels in the regions 531 to 536 covering the fifth to seventh columns from the left, the pixels in the fifth column belong to group B because their black frequency is 5, and the pixels in the remaining two columns (the sixth and seventh columns) are black in every image and also belong to group B.
 As a result, the pixels to be used for temporal averaging can be selected as shown in the pixel selection images 551 to 556, where 560 denotes the pixels excluded from the smoothing process.
 FIG. 6 shows the result of the temporal averaging process based on the embodiment of FIG. 5 of the present invention. The images 141 to 146 and 151 to 156 in FIG. 6 are the same as those in FIGS. 4 and 5, and the pixel selection images 551 to 556 in FIG. 6 are the same as those in FIG. 5.
 In FIG. 6, the image 647 obtained by smoothing the fluctuating images 151 to 156 is described as follows.
 As explained with FIG. 5, among the pixels of the region G1 covering the first to third columns from the left, the pixels in the first and second columns are white in all of the images 501 to 506, so their pixel value is 255. The pixels in the third column from the left have a high frequency of white pixels, so temporal smoothing is performed using only the white pixels and the pixel value is likewise 255.
 Similarly, as explained with FIG. 5, the pixels of the region G3 covering the sixth and seventh columns from the left are black in all of the images 501 to 506, so their pixel value is 0. The pixels in the fifth column from the left have a high frequency of black pixels, so temporal smoothing is performed using only the black pixels and the pixel value is likewise 0.
 Further, for the pixels of the region G2 in the fourth column from the left, the black and white frequencies are equal, so temporal smoothing is performed using all of the pixels and the pixel value becomes 128.
 Therefore, when the input image or an image temporally close to it is taken as the correction target image, in the final correction result image 640 obtained after the correction process the gray region g2, i.e. the pixels other than the white pixel region g1 and the black pixel region g3, is confined to the fourth column from the left, and the blurring is reduced.
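 Continuing the hypothetical smooth_pixel sketch above, the per-column outputs line up with the values in FIG. 6. The individual frame patterns below are illustrative; only their white/black counts are taken from the text.

```python
# One list of six frame values per pixel column of FIG. 5 / FIG. 6.
columns = [
    [255] * 6,                     # columns 1-2: always white
    [255, 255, 0, 255, 255, 255],  # column 3: white frequency 5
    [255, 0, 255, 0, 255, 0],      # column 4: tie -> full average
    [0, 0, 255, 0, 0, 0],          # column 5: black frequency 5
    [0] * 6,                       # columns 6-7: always black
]
print([round(smooth_pixel(c)) for c in columns])
# -> [255, 255, 128, 0, 0]; plain six-frame averaging would instead give
#    [255, 213, 128, 43, 0], the blurred gradation of FIG. 4.
```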
 Next, one embodiment of the fluctuation correction of the present invention will be described with reference to the image processing apparatus of FIG. 7, which is a block diagram showing the configuration of one embodiment of the image processing apparatus of the present invention.
 In the image processing apparatus 70 of FIG. 7, the input image 10 is supplied via the input unit 4i to the image grouping unit 1 and the image memory 2. The image memory 2 stores n+1 input images 10 (where n is a natural number of 2 or more). From the stored images, the image memory 2 takes out the desired images as the correction target image 12 and the n correction images 11, and outputs them to the image grouping unit 1. Here, the correction target image 12 is the input image 10 itself or an image temporally close to it; how many frames before the current input image 10 the correction target image 12 should be is either set in advance by the operator or selected as appropriate.
 The input image 10 is, for example, video captured at predetermined time intervals along a time series by one of the imaging devices used in a monitoring system.
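 A minimal sketch of this buffering behaviour is shown below; the class name and the delay parameter are assumptions introduced for illustration, not elements of the apparatus itself.

```python
from collections import deque

class FrameBuffer:
    """Keeps the most recent n+1 frames and hands out a correction target
    plus the n surrounding frames used for correction."""

    def __init__(self, n, delay=0):
        self.frames = deque(maxlen=n + 1)
        self.delay = delay  # how many frames behind the newest input the target sits

    def push(self, frame):
        self.frames.append(frame)

    def get(self):
        if len(self.frames) < self.frames.maxlen:
            return None  # not enough history stored yet
        frames = list(self.frames)
        idx = len(frames) - 1 - self.delay
        target = frames[idx]
        correction = frames[:idx] + frames[idx + 1:]
        return target, correction
```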
 The control unit 5 accesses each of the elements constituting the image processing apparatus 70 (the image grouping unit 1, image memory 2, image group selection unit 3, fluctuation correction unit 4, input unit 4i, output unit 4o, and operation input unit 5io) and controls them according to a processing program stored in a built-in or external storage unit (not shown) connected to the control unit 5.
 The operator uses the operation input unit 5io to perform operations such as selecting the correction target image and changing the thresholds.
 The pixel grouping unit 1 extracts the pixels at the same position from a plurality of images and groups them by luminance and color difference. That is, from the input image 10 supplied from the input unit 4i (not essential) and from the correction target image 12 and the correction images 11 supplied from the image memory 2, the pixel grouping unit 1 takes out the pixel information 13 at the position to be corrected. The pixel information 13 is basically the pixel value of one pixel in a predetermined color space representation (RGB, YCbCr, HSV, etc.), the same as in the input image 10.
 The pixel grouping unit 1 groups the 1+n pieces of pixel information 13 taken out for one pixel position using thresholds, such as luminance and color difference values, that are set either fixedly or adaptively, and calculates the number of pixels classified into each group (the frequency of occurrence of the pixel values in the time domain). The pixel grouping unit 1 then sequentially outputs to the image group selection unit 3 the group information 14 containing the calculated pixel frequencies, together with the 1+n pieces of pixel information 13 at the position for which the group information was obtained.
 The group information 14 desirably holds not only the pixel frequencies in the strict sense but also information indicating which piece of pixel information 13 was classified into which group; alternatively, it may simply be the grouping result itself, without the counts.
 The pixel group selection unit 3 uses the pixel frequencies of the groups contained in the group information 14, which is input sequentially for each pixel position, to select the group with the highest frequency. When two or more groups share the same maximum frequency, all of them are selected. The pixel frequency is also compared with a threshold Tf: if it is equal to or greater than the threshold, the pixel information 15 of the selected pixel group is output to the fluctuation correction unit 4; if it is below the threshold, all 1+n pieces of pixel information 15 are output to the fluctuation correction unit 4.
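 A sketch of this selection logic, including tie handling, is shown below; the data layout is hypothetical (`groups` maps a group label to the list of values classified into it) and `t_f` stands for the frequency threshold Tf.

```python
def select_pixels(groups, all_values, t_f):
    """Return the pixel values to pass on to the fluctuation corrector.

    groups: mapping from a group label to the values classified into it.
    all_values: the full set of 1+n values for this pixel position.
    t_f: the frequency threshold compared against the largest group.
    """
    max_freq = max(len(v) for v in groups.values())
    if max_freq < t_f:
        return list(all_values)        # below threshold: use every frame
    selected = []
    for values in groups.values():
        if len(values) == max_freq:    # keep every group tied at the maximum
            selected.extend(values)
    return selected
```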
 The fluctuation correction unit 4 has a function of correcting fluctuation using only the pixel information supplied from the pixel group selection unit 3.
 Any well-known fluctuation correction algorithm may be used. In this example, however, the number of pieces of pixel information varies with the pixel position, so the gaps left by pixels excluded from the fluctuation correction must be handled appropriately. A method that obtains a temporally smoothed image by simply averaging the pixel values from different times can easily cope with a varying number of pixels.
 The fluctuation correction unit 4 performs the fluctuation correction process sequentially for each pixel position to generate a corrected image, and outputs the generated corrected image as the output image 20 via the output unit 4o.
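 Putting the per-pixel pieces together, simple averaging over a variable-length selection might look like the sketch below. It reuses the hypothetical select_pixels helper above, handles grayscale only, and uses a deliberately plain double loop rather than an optimized implementation.

```python
import numpy as np

def correct_frame(stack, t_f=4, split=128):
    """stack: (1+n, H, W) array of grayscale frames; returns the corrected frame."""
    out = np.empty(stack.shape[1:], dtype=np.float64)
    for y in range(stack.shape[1]):
        for x in range(stack.shape[2]):
            values = stack[:, y, x]
            groups = {
                "dark": values[values < split],
                "bright": values[values >= split],
            }
            selected = select_pixels(groups, values, t_f)
            out[y, x] = np.mean(selected)  # a plain average copes with any count
    return out
```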
 The details of the operation of the image grouping unit 1 are as follows.
 Pixels are grouped, i.e. classified, on the basis of the information of each pixel, namely its brightness and color.
 When pixels are represented by luminance and color difference, the classification uses both: the pixels are first divided into a plurality of groups by luminance, and each group is further subdivided by color difference. For example, if the luminance is split into five groups using four thresholds and each of the two color-difference components is split using one threshold, the pixels are classified into 20 groups.
 Since the luminance value is generally what matters most for perceived detail, the classification may also be performed using luminance alone.
 These thresholds are preferably not fixed but set adaptively for each pixel. For example, for each pixel, the mean of the pixel values over 1+n frames (for which the output image of the fluctuation correction unit 4 can be substituted) and their variance are obtained, and thresholds are set above and below the mean for each of the YCbCr components. The variance need not be computed per pixel and may be shared across the whole screen, or a value obtained by multiplying the mean by a predetermined coefficient may be used in its place. When such adaptive thresholds are used, the grouping can simply be binary: whether or not every component is close to its mean.
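 As an illustration of the fixed-threshold variant above (four luminance thresholds plus one threshold per chroma component, giving 5 × 2 × 2 = 20 groups), a sketch follows; the specific threshold values are hypothetical.

```python
import bisect

Y_THRESHOLDS = [51, 102, 153, 204]  # hypothetical: four thresholds give five luminance bins
CB_THRESHOLD = 128                  # hypothetical: one threshold per chroma component
CR_THRESHOLD = 128

def group_id(y, cb, cr):
    """Map a YCbCr pixel to one of 5 * 2 * 2 = 20 group labels."""
    y_bin = bisect.bisect_right(Y_THRESHOLDS, y)  # 0..4
    cb_bin = int(cb >= CB_THRESHOLD)              # 0 or 1
    cr_bin = int(cr >= CR_THRESHOLD)              # 0 or 1
    return (y_bin, cb_bin, cr_bin)
```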
 The details of the operation of the fluctuation correction unit 4 are as follows.
 The fluctuation correction unit 4 receives from the image group selection unit 3 the pixel information 15 to be used for the fluctuation correction. This pixel information 15 is the subset selected for temporal smoothing from among the pieces of pixel information that share the same image position but belong to different times; smoothing is therefore performed using the input pixel information 15.
 The smoothed result constitutes the pixel information at the corresponding position of the image to be output, so the fluctuation correction unit 4 outputs it as the corrected image via the output unit 4o.
 According to the above embodiment, the blurring that appears on the contour of the subject can be reduced while the fluctuation is corrected.
 Although FIGS. 1 to 6 were described using grayscale images for ease of explanation, it is evident from the description of FIG. 7 that the same applies to color images.
 Furthermore, in the above embodiment the grouping of images and the selection of pixels for temporal smoothing are performed per pixel, but they may instead be performed per block of pixels. The frequency is also not limited to the time domain: a spatio-temporal frequency may be used, for example by computing the histogram (counts) for a pixel at a given correction position while also including the pixels spatially adjacent to it.
 FIG. 8 is a functional block diagram of an imaging apparatus 1500 according to an embodiment of the present invention. The imaging apparatus 1500 includes an imaging unit 1501, the image processing apparatus 70, and an image display unit 1502. The imaging unit 1501 is an imaging device that receives light emitted from a subject and converts the received optical image into image data. The image processing apparatus 70 is the image processing apparatus according to the first embodiment; it receives the image data captured by the imaging unit 1501 and corrects the distortion caused by air fluctuation such as a heat haze. The image display unit 1502 is a device that displays the corrected image output by the image processing apparatus 70.
 The image display unit 1502 switches the displayed image according to the operation mode. For example, when the mode is 1, it displays the corrected image in which the air fluctuation caused by a heat haze or the like has been reduced, and when the mode is 0, it displays the uncorrected input image.
 According to the imaging apparatus 1500 of the second embodiment, it is possible to provide an imaging apparatus that shows the photographer a corrected image in which the air fluctuation caused by a heat haze or the like has been reduced over the entire image, including both stationary regions and moving objects.
 FIG. 9 is a functional block diagram of a monitoring system 1600 according to a third embodiment of the present invention. The monitoring system 1600 includes an imaging device 1601, the image processing apparatus 70, a server 1602, and a display device 1603. The imaging device 1601 is one or more imaging devices, such as surveillance cameras, that capture image data. The image processing apparatus 70 is the image processing apparatus according to the first embodiment; it receives the image data captured by the imaging device 1601 and corrects the air fluctuation caused by a heat haze or the like. The server 1602 is a computer on which the image processing apparatus 70 is mounted. The display device 1603 is a device that displays the corrected image output by the image processing apparatus 70.
 The imaging device 1601 and the server 1602, and likewise the server 1602 and the display device 1603, can be connected via a network such as the Internet, depending on the physical arrangement of the monitored site and the monitoring operator.
 According to the monitoring system 1600 of the third embodiment, it is possible to provide a monitoring system that shows the monitoring operator a corrected image in which the air fluctuation caused by a heat haze or the like has been reduced over the entire image, including both stationary regions and moving objects.
 FIG. 10 is a functional block diagram of an encoding/decoding system 1700 according to a fourth embodiment of the present invention. The encoding/decoding system 1700 includes an encoding apparatus 1710, a decoder 1721, and a display device 1722. The encoding apparatus 1710 includes an imaging device 1711, the image processing apparatus 70, and an encoder 1712.
 The imaging device 1711 is an imaging device, such as a surveillance camera, that captures image data. The image processing apparatus 70 is the image processing apparatus according to the first embodiment; it receives the image data captured by the imaging device 1711 and corrects the heat haze. The encoder 1712 encodes the corrected image data output by the image processing apparatus 70 and transmits it to the decoder 1721 via a network. The decoder 1721 decodes the transmitted corrected image data, and the display device 1722 displays the image decoded by the decoder 1721.
 According to the encoding/decoding system 1700 of the fourth embodiment, it is possible to provide an encoding/decoding system that displays a decoded image in which the air fluctuation caused by a heat haze or the like has been reduced over the entire image, including both stationary regions and moving objects. Moreover, reducing the air fluctuation caused by heat haze and the like in the image makes the differences between the images to be transmitted by the encoder 1712 smaller, which improves the coding efficiency.
 A fifth embodiment of the present invention will be described with reference to FIG. 11, which is a flowchart illustrating one embodiment of the image processing method of the present invention.
 In the process of FIG. 11, the control unit 5 of FIG. 7, for example, controls each element in the apparatus.
 The control unit 5 holds a processing program that carries out the image processing of the present invention and executes the image processing according to that program. It is assumed that the thresholds and other parameters of the processing program are set to predetermined values in advance.
 In FIG. 11, first, in the image input step S01, input images supplied from outside along a time series are received and stored in the image memory, and the correction target image and a plurality of correction images stored along the time series are output from the stored images.
 Next, in the image grouping step S02, pixels at the same position are extracted from the input image or the correction target image and from the plurality of correction images and are grouped by luminance; the pixel information at the position to be corrected and the pixel frequency of each group are calculated, and the calculated pixel frequencies and pixel information indicating the positions of the groups are output.
 That is, in the image grouping step S02, the pixel information at the position to be corrected is taken from the received input image or the stored correction target image and from the stored correction images. Using the extracted pixel information, the pixel frequency of each of the groups, which are defined in advance using the pixel luminance values and the like, is calculated. The group information including the calculated pixel frequencies, together with the pixel information indicating the processing position (the position of the group), is then output.
 In addition to the pixel frequency, the group information includes, for example, the luminance value information, or the luminance values and color differences, of the groups.
 Next, in the group selection step S03, the group with the maximum frequency is selected using the input pixel frequencies, and the pixel frequency is compared with a predetermined threshold to decide whether temporal smoothing should be performed using all pixels; the pixel group to be used for the temporal smoothing process is thereby selected, and the pixel information of the selected group is output.
 That is, in the group selection step S03, the group with the maximum frequency is selected using the pixel frequency of each group contained in the input group information. The pixel frequency is also compared with the threshold to decide whether temporal smoothing should be performed with all pixels, the pixel group to be used for the temporal smoothing process is selected accordingly, and the pixel information of the selected pixel group is output.
 Next, in the fluctuation correction step S04, fluctuation is corrected by temporal smoothing using the pixel information of the selected group.
 That is, in the fluctuation correction step S04, fluctuation correction by temporal smoothing is carried out using the input pixel information of the group. The fluctuation correction process is performed for every pixel of the image to be processed, and a corrected image is generated and output.
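 As a capstone, the sketch below strings steps S01 to S04 together, reusing the hypothetical FrameBuffer and correct_frame helpers sketched earlier; it is an illustrative grayscale pipeline under those same assumptions, not a definitive implementation of the flowchart.

```python
import numpy as np

def process_stream(frames, n=5, delay=0, t_f=4):
    """Steps S01-S04 in sequence, reusing the hypothetical helpers above."""
    buf = FrameBuffer(n, delay)                    # S01: store n+1 frames
    for frame in frames:
        buf.push(frame)
        buffered = buf.get()
        if buffered is None:
            continue                               # still filling the buffer
        target, correction = buffered
        stack = np.stack([target] + correction)    # 1+n frames per pixel position
        yield correct_frame(stack, t_f=t_f)        # S02-S04 applied per pixel
```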
 As described above, by generating a corrected image from the input image or the correction target image, the blurring that appears on the contour of the subject can be reduced while the fluctuation is corrected.
 According to the image correction apparatus and image correction method of the above embodiments, corrections can be made not only for heat haze but also, for example, for fluctuations of a water surface, swaying of trees, atmospheric haze, and image distortion during underwater photography, and the method can be widely used for applications that stabilize irregularly wavering subjects to improve visibility.
 Each of the configurations, functions, processing units, processing means, and the like described above may be realized partly or entirely in hardware, for example by designing it as an integrated circuit. Each of the configurations, functions, and the like may also be realized in software by a processor interpreting and executing a program that implements the respective function. The programs, tables, files, and other information for implementing each function can be stored in a recording device such as a memory, a hard disk, or an SSD (Solid State Drive), or on a recording medium such as an IC card, an SD card, or a DVD.
 1: image grouping unit, 2: image memory, 3: image group selection unit, 4: fluctuation correction unit, 4i: input unit, 4o: output unit, 5: control unit, 5io: operation input unit, 10: input image, 11: correction image, 12: correction target image, 13: pixel information, 14: group information, 15: pixel information, 20: output image, 70: image processing apparatus, 101 to 103, 102' to 104', 111, 121 to 123, 140 to 146: input images, 104, 124, 147, 304: smoothed images, 201 to 204, 211, 221 to 224, 241 to 246, 251 to 257, 301 to 304, 401 to 404, 402' to 404': subject images, 501 to 506: grouped images, 551 to 556: pixel selection images, 560: pixels not subjected to smoothing, 640: correction result image, 511 to 516, 800, 801: regions, 1500: imaging apparatus, 1502: image display unit, 1600: monitoring system, 1601: imaging device, 1602: server, 1603: display device, 1700: encoding/decoding system, 1710: encoding apparatus, 1711: imaging device, 1712: encoder, 1721: decoder, 1722: display device, a1 to a3, B1 to B5, b1 to b5, G1 to G3, g1 to g3: regions.

Claims (4)

  1.  An image processing apparatus comprising:
     an image memory that stores input images input along a time series and outputs, from the stored images, a correction target image and a plurality of correction images stored along the time series;
     a pixel grouping unit that extracts pixels at the same position from the input image or the correction target image and from the plurality of correction images, divides the pixels into groups according to luminance, calculates pixel information at the position to be corrected and the pixel frequency of each of the groups, and outputs the calculated pixel frequencies and pixel information indicating the positions of the groups;
     a pixel group selection unit that selects the group with the maximum frequency using the input pixel frequencies, compares the pixel frequency with a predetermined first threshold to decide whether temporal smoothing is to be performed using all pixels, thereby selects the pixel group to be used for the temporal smoothing process, and outputs the pixel information of the selected group;
     a fluctuation correction unit that corrects fluctuation by temporal smoothing using the pixel information of the selected group; and
     a control unit that controls each element constituting the apparatus,
     wherein the input image or the correction target image is corrected.
  2.  The image processing apparatus according to claim 1, wherein the pixel grouping unit classifies the pixels into a plurality of groups according to the luminance and the first threshold, and further classifies them into a plurality of groups according to a color difference and a second threshold.
  3.  The image processing apparatus according to claim 1 or claim 2, wherein the pixel grouping unit outputs the luminance value information, or the luminance value information and color difference information, of the divided groups together with the pixel frequency as group information.
  4.  An image processing method comprising:
     an image input step of storing input images input along a time series and outputting, from the stored images, a correction target image and a plurality of correction images stored along the time series;
     an image grouping step of extracting pixels at the same position from the input image or the correction target image and from the plurality of correction images, dividing the pixels into groups according to luminance, calculating pixel information at the position to be corrected and the pixel frequency of each of the groups, and outputting the calculated pixel frequencies and pixel information indicating the positions of the groups;
     a group selection step of selecting the group with the maximum frequency using the input pixel frequencies, comparing the pixel frequency with a predetermined threshold to decide whether temporal smoothing is to be performed using all pixels, thereby selecting the pixel group to be used for the temporal smoothing process, and outputting the pixel information of the selected group; and
     a fluctuation correction step of correcting fluctuation by temporal smoothing using the pixel information of the selected group,
     wherein a corrected image is generated from the input image or the correction target image.
PCT/JP2013/075436 2013-09-20 2013-09-20 Image processing device and image processing method WO2015040731A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/JP2013/075436 WO2015040731A1 (en) 2013-09-20 2013-09-20 Image processing device and image processing method
JP2015537516A JPWO2015040731A1 (en) 2013-09-20 2013-09-20 Image processing apparatus and image processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2013/075436 WO2015040731A1 (en) 2013-09-20 2013-09-20 Image processing device and image processing method

Publications (1)

Publication Number Publication Date
WO2015040731A1 true WO2015040731A1 (en) 2015-03-26

Family

ID=52688413

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2013/075436 WO2015040731A1 (en) 2013-09-20 2013-09-20 Image processing device and image processing method

Country Status (2)

Country Link
JP (1) JPWO2015040731A1 (en)
WO (1) WO2015040731A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002049920A (en) * 2000-06-01 2002-02-15 Univ Of Washington Method and device for estimating desired contour of image object
JP2008160733A (en) * 2006-12-26 2008-07-10 Sony Corp Imaging apparatus, imaging signal processing method, and program
JP2010097430A (en) * 2008-10-16 2010-04-30 Tokyo Univ Of Agriculture & Technology Smoke detection device and smoke detection method

Also Published As

Publication number Publication date
JPWO2015040731A1 (en) 2017-03-02

Similar Documents

Publication Publication Date Title
JP5908174B2 (en) Image processing apparatus and image processing method
KR101059403B1 (en) Adaptive spatial image filter for filtering image information
KR101478669B1 (en) Automatic parameter estimation for adaptive pixel-based filtering
JP5230669B2 (en) How to filter depth images
JP5419744B2 (en) How to synthesize a virtual image
US8131109B2 (en) Image processing method and apparatus for contrast enhancement using intensity mapping
JP5144202B2 (en) Image processing apparatus and program
WO2018136373A1 (en) Image fusion and hdr imaging
CN104335565A (en) Image processing method with detail-enhancing filter with adaptive filter core
JP6104680B2 (en) Image processing apparatus, imaging apparatus, monitoring system, encoding apparatus, and image processing method
US20130169834A1 (en) Photo extraction from video
JP4967921B2 (en) Apparatus, method, and program for image processing
KR20140116960A (en) Adaptive multi-dimensional data decomposition
US20200007734A1 (en) Image processing apparatus, control method, and non-transitory computer-readable storage medium
JP2009049709A (en) Device, method and program for image processing
US10123021B2 (en) Image encoding apparatus for determining quantization parameter, image encoding method, and program
JP2006211247A (en) Image processing apparatus and method
US20140270568A1 (en) Method and system for noise reduction in video systems
CN113039576A (en) Image enhancement system and method
KR101437626B1 (en) System and method for region-of-interest-based artifact reduction in image sequences
US9355435B2 (en) Method and system for adaptive pixel replacement
CN112819699A (en) Video processing method and device and electronic equipment
JP2014010776A (en) Image processing apparatus, image processing method, and program
US20210125318A1 (en) Image processing method and apparatus
WO2016051716A1 (en) Image processing method, image processing device, and recording medium for storing image processing program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13893979

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2015537516

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13893979

Country of ref document: EP

Kind code of ref document: A1