WO2015132826A1 - Image Processing Apparatus, Monitoring Camera, and Image Processing Method - Google Patents
Image Processing Apparatus, Monitoring Camera, and Image Processing Method
- Publication number
- WO2015132826A1 (PCT/JP2014/004802)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- fluctuation
- image
- input image
- determination unit
- difference
- Prior art date
Classifications
- G06T5/92 — Dynamic range modification of images or parts thereof based on global image properties
- G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T5/70 — Denoising; Smoothing
- G06T5/73 — Deblurring; Sharpening
- H04N25/60 — Noise processing in solid-state image sensors, e.g. detecting, correcting, reducing or removing noise
- H04N5/21 — Circuitry for suppressing or minimising disturbance, e.g. moiré or halo
- G06T2207/10016 — Video; Image sequence
- G06T2207/10024 — Color image
- G06T2207/20012 — Locally adaptive image processing
- G06T2207/20216 — Image averaging
- G06T2207/20224 — Image subtraction
- G06T2207/30232 — Surveillance
- G06T2207/30236 — Traffic on road, railway or crossing
- G06T2207/30248 — Vehicle exterior or interior
- G06T2207/30252 — Vehicle exterior; Vicinity of vehicle
- G06T2207/30261 — Obstacle
Definitions
- the present disclosure relates to an image processing apparatus, a monitoring camera, and an image processing method that correct image fluctuation.
- a monitoring system that captures a predetermined space using a camera device such as a monitoring camera and monitors that space is known. Fluctuations may occur in the captured image. Fluctuation is a phenomenon caused by changes in the characteristics of the light-transmitting medium; specifically, it occurs when the refractive index of the medium (such as air or water) that transmits light from the subject changes (the schlieren phenomenon).
- the fluctuation is, for example, the so-called heat haze that arises outdoors in hot weather, when temperature differences in the atmosphere cause the air density to vary. Fluctuation also occurs when shooting underwater.
- Patent Documents 1 and 2 disclose image processing apparatuses that can correct image fluctuations.
- the present disclosure provides an image processing apparatus, a monitoring camera, and an image processing method that can appropriately correct fluctuation even when the strength of fluctuation changes.
- an image processing apparatus according to the present disclosure corrects fluctuation of a first input image included in a moving image, and includes a determination unit that determines a fluctuation intensity indicating the intensity of the fluctuation,
- and a correction unit that corrects the fluctuation of the first input image according to the fluctuation intensity determined by the determination unit. The determination unit determines the fluctuation intensity to be larger as the ratio of the number of pixels whose pixel-value difference between the first input image and a temporally preceding frame is equal to or greater than a predetermined threshold, to the number of edge pixels included in the first input image or the preceding frame, becomes larger.
- FIG. 1A is a block diagram illustrating a configuration of the image processing apparatus according to the first embodiment.
- FIG. 1B is a block diagram illustrating another configuration of the image processing apparatus according to the first embodiment.
- FIG. 1C is a block diagram illustrating another configuration of the correction unit of the image processing apparatus according to the first embodiment.
- FIG. 2 is a diagram illustrating a difference image when fluctuations with different intensities occur in the image processing apparatus according to the first embodiment.
- FIG. 3 is a diagram illustrating a difference image between the case where there are a plurality of objects and the case where there is a single object in the input image in the image processing apparatus according to the first embodiment.
- FIG. 4A is a flowchart illustrating an example of the operation of the image processing apparatus according to the first embodiment.
- FIG. 4B is a flowchart illustrating an example of processing for determining the fluctuation strength according to Embodiment 1.
- FIG. 5A is a flowchart illustrating another example of the operation of the image processing apparatus according to the first embodiment.
- FIG. 5B is a flowchart illustrating another example of the process for determining the fluctuation strength according to Embodiment 1.
- FIG. 6A is a block diagram illustrating a configuration of the image processing apparatus according to the second embodiment.
- FIG. 6B is a block diagram illustrating another configuration of the image processing apparatus according to the second embodiment.
- FIG. 7A is a diagram showing an example of an image (second input image) immediately before the input image according to Embodiment 2.
- FIG. 7B is a diagram showing an example of an input image (first input image) according to Embodiment 2.
- FIG. 7C is a diagram illustrating an example of a difference image according to Embodiment 2.
- FIG. 7D is a diagram illustrating an example of a difference image subjected to the opening process according to the second embodiment.
- FIG. 7E is a diagram illustrating an example of a difference image subjected to the opening process according to the second embodiment.
- FIG. 7F is a diagram illustrating an example of a difference image subjected to the opening process according to the second embodiment.
- FIG. 8A is a flowchart illustrating an example of the operation of the image processing apparatus according to the second embodiment.
- FIG. 8B is a flowchart illustrating an example of a process for identifying a moving body region according to the second embodiment.
- FIG. 8C is a flowchart illustrating an example of a process for determining the fluctuation strength according to the second embodiment.
- FIG. 9A is a flowchart illustrating another example of the operation of the image processing apparatus according to the second embodiment.
- FIG. 9B is a flowchart illustrating another example of the process of specifying the moving object region according to the second embodiment.
- FIG. 9C is a flowchart illustrating another example of the process for determining the fluctuation strength according to the second embodiment.
- FIG. 10A is a block diagram illustrating a configuration of an image processing device according to the third embodiment.
- FIG. 10B is a block diagram illustrating another configuration of the image processing apparatus according to the third embodiment.
- FIG. 11A is a diagram illustrating an example of a background image according to Embodiment 3.
- FIG. 11B is a diagram illustrating an example of a difference image according to Embodiment 3.
- FIG. 11C is a diagram illustrating an example of a difference image obtained by performing the opening process according to the third embodiment.
- FIG. 12A is a flowchart illustrating an example of the operation of the image processing apparatus according to the third embodiment.
- FIG. 12B is a flowchart illustrating an example of a process for specifying a moving object region according to the third embodiment.
- FIG. 13 is a flowchart illustrating another example of the operation of the image processing apparatus according to the third embodiment.
- FIG. 14 is a diagram illustrating a product example of a monitoring camera including the image processing apparatus according to the embodiment.
- the present inventor has found that the following problems occur with the conventional image processing apparatuses described in the “Background Art” section.
- the present disclosure provides an image processing apparatus, a monitoring camera, and an image processing method that can appropriately correct fluctuation even when the intensity of fluctuation changes.
- Embodiment 1 will be described with reference to FIGS. 1A to 5B.
- FIGS. 1A and 1B are block diagrams illustrating a configuration example of an image processing apparatus according to the present embodiment.
- the image processing apparatus is an apparatus that corrects fluctuation in an input image using images of a plurality of frames. The processing differs depending on whether a corrected image is used as one of those frames.
- the configuration of the image processing apparatus 100 when the corrected image is not used will be described with reference to FIG. 1A. Further, the configuration of the image processing apparatus 100a when using the corrected image will be described with reference to FIG. 1B.
- Image processing apparatuses 100 and 100a generate and output a corrected image by correcting fluctuations in an input image included in a moving image.
- the image processing apparatus 100 includes a determination unit 110 and a correction unit 120.
- the image processing apparatus 100a includes a determination unit 110a and a correction unit 120a.
- the determination unit 110 illustrated in FIG. 1A determines the fluctuation intensity, which indicates how strongly the input image fluctuates.
- the determination unit 110 determines and outputs the fluctuation intensity of the input image using at least two input images.
- the determination unit 110 acquires a first input image and a frame temporally prior to the first input image.
- the frame temporally prior to the first input image is the second input image input before the first input image.
- the first input image is an image for which fluctuation is to be corrected, and is, for example, the latest input image.
- the second input image is, for example, an input image of a frame adjacent to the first input image, that is, a frame immediately before the first input image.
- the second input image may be an input image of a frame two frames or more before the first input image.
- fluctuation is, as described above, a phenomenon caused by changes in the characteristics of the light-transmitting medium.
- fluctuation is a phenomenon such as heat haze, and occurs when the refractive index of the medium (such as air or water) that transmits light from the subject changes (the schlieren phenomenon).
- fluctuation is a phenomenon in which a fixed and non-moving subject appears to move. For this reason, “fluctuation” occurs in an image taken by a fixed camera, unlike camera shake. In particular, the influence of fluctuation appears remarkably in moving images taken with telephoto.
- “Input image fluctuation” is a phenomenon in which the shape of the subject is distorted in the input image. For example, in a simple example, an edge that is straight when there is no “fluctuation” in the input image is a curve when there is “fluctuation”.
- in the case of camera shake, too, an edge appears at a position deviated from its original position, but the direction and amount of the shift are substantially constant: the entire image shifts by nearly the same amount in a common direction. By contrast, the direction and amount of the edge distortion caused by “fluctuation” are irregular from pixel to pixel.
- correcting fluctuation means reducing, or eliminating, the pixel shift caused in the input image by “fluctuation”.
- “Fluctuation strength” indicates the magnitude of distortion of the subject in the input image. That is, the greater the subject distortion, the greater the fluctuation strength. In other words, the fluctuation strength corresponds to the amount of deviation from the correct position of the edge (the position displayed when there is no fluctuation).
- the determination unit 110 determines the fluctuation strength using (Equation 1): fluctuation strength = (difference amount between adjacent images) / (edge amount).
- the edge amount is the number of pixels of the edge included in the first input image.
- the difference amount between adjacent images is the number of pixels in which the difference in pixel values between the first input image and the second input image is equal to or greater than a predetermined threshold.
- the determination unit 110 determines, as the fluctuation strength, the ratio of the number of pixels whose pixel-value difference between the first input image and the second input image is equal to or greater than a predetermined threshold to the number of edge pixels included in the first input image. In other words, the determination unit 110 calculates the fluctuation strength by normalizing the difference amount between adjacent images by the edge amount.
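A minimal Python sketch of this determination follows; the thresholds and the gradient-magnitude edge test (standing in for the Sobel filter described later) are illustrative assumptions, not values from the patent.

```python
import numpy as np

def fluctuation_strength(curr, prev, diff_thresh=10, edge_thresh=30):
    """(Equation 1): the number of pixels whose frame-to-frame difference
    is at least diff_thresh, normalized by the number of edge pixels in
    the current frame. Thresholds are illustrative."""
    # Edge amount: pixels whose local gradient magnitude reaches edge_thresh.
    gy, gx = np.gradient(curr.astype(float))
    edge_count = np.count_nonzero(np.hypot(gx, gy) >= edge_thresh)
    # Difference amount between adjacent images.
    diff_count = np.count_nonzero(
        np.abs(curr.astype(int) - prev.astype(int)) >= diff_thresh)
    return diff_count / max(edge_count, 1)  # guard against a flat image
```

A stronger fluctuation displaces more edge pixels between frames, so the returned ratio grows with the fluctuation intensity.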
- the determination unit 110a illustrated in FIG. 1B differs from the determination unit 110 in that it uses, instead of the second input image, the corrected image generated by correcting the fluctuation of the second input image. That is, the frame temporally preceding the first input image is the corrected image generated by the correction unit 120a. The determination unit 110a further differs in that it calculates the edge amount from the corrected image instead of the first input image.
- the determination unit 110a acquires the first input image and the corrected image, and determines, as the fluctuation strength, the ratio of the number of pixels whose pixel-value difference between the first input image and the corrected image is equal to or greater than a predetermined threshold to the number of edge pixels included in the corrected image.
- the edge amount is the number of edge pixels included in the corrected image.
- the difference amount between adjacent images is the number of pixels in which the difference in pixel values between the first input image and the corrected image is equal to or greater than a predetermined threshold.
- the correction units 120 and 120a correct the fluctuation of the first input image according to the fluctuation intensity determined by the determination units 110 and 110a. Specifically, the correction units 120 and 120a correct fluctuation of the first input image by combining a plurality of frames including the first input image.
- the correction unit 120 illustrated in FIG. 1A corrects fluctuations in the first input image by averaging a plurality of frames.
- the correction unit 120a illustrated in FIG. 1B corrects fluctuations in the first input image by weighted addition of a plurality of frames.
- the correction unit 120 includes a fluctuation correction unit 121 and a parameter determination unit 122.
- the fluctuation correction unit 121 corrects fluctuation of the first input image by combining a plurality of frames including the first input image. For example, the fluctuation correction unit 121 averages a plurality of frames.
- the fluctuation correction unit 121 generates an image after correction by averaging n frames of input images for each pixel.
- n is an integer greater than or equal to 2, and is an example of a parameter determined by the parameter determination unit 122.
- the fluctuation correction unit 121 averages n temporally consecutive input images that include the first input image. Specifically, as shown in (Equation 2), it averages the n input images from input[t] back to input[t−n+1]: output[t] = (input[t] + input[t−1] + … + input[t−n+1]) / n. As a result, a corrected image is generated.
- output [t] is a corrected image corresponding to the input image at time t.
- input [t] is an input image (that is, a first input image) at time t. Note that the n input images to be averaged may not be n consecutive images in time.
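The averaging above can be sketched with a ring buffer that holds the n most recent frames; the class name and the behavior before n frames have arrived are illustrative choices, not taken from the patent.

```python
from collections import deque
import numpy as np

class AveragingCorrector:
    """Per (Equation 2): output[t] is the per-pixel mean of the n most
    recent input images. Until n frames have arrived, it averages
    whatever is available (an illustrative startup choice)."""
    def __init__(self, n):
        self.buf = deque(maxlen=n)  # keeps only the last n frames

    def correct(self, frame):
        self.buf.append(np.asarray(frame, dtype=float))
        return sum(self.buf) / len(self.buf)
```

Each call consumes one input frame and returns the corresponding corrected frame, so the corrector can sit directly in a per-frame processing loop.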
- the fluctuation correction unit 121 may weight and add n input images instead of averaging n input images.
- the weight used for the weighted addition may be increased as the corresponding input image is closer in time to time t.
- alternatively, only the input image at time t may be given a large weight, with the weights of the remaining images kept uniform.
- note that averaging may blur the image further, in addition to the blur already caused by the fluctuation.
- FIG. 1C is a block diagram illustrating a configuration of the correction unit 120b according to the present embodiment.
- the image processing apparatus 100 according to the present embodiment may include a correction unit 120b illustrated in FIG. 1C instead of the correction unit 120 illustrated in FIG. 1A.
- the correction unit 120b includes a fluctuation correction unit 121, a parameter determination unit 122b, and an image sharpening unit 123.
- the parameter determination unit 122b will be described later.
- the image sharpening unit 123 sharpens the input image whose fluctuation has been corrected by the fluctuation correction unit 121.
- sharpening of the image after fluctuation correction is performed using the filter size determined by the parameter determination unit 122b.
- the image sharpening unit 123 performs filter processing for image sharpening such as an unsharp mask on the image after fluctuation correction. Accordingly, it is possible to reduce blur caused by image fluctuations and blur caused by averaging of images.
- the parameter determination unit 122 determines parameters to be used for combining a plurality of frames according to the fluctuation strength determined by the determination unit 110. For example, the parameter determination unit 122 determines the number of frames used for averaging as a parameter according to the fluctuation strength determined by the determination unit 110. Specifically, the parameter determination unit 122 determines the value of n in (Equation 2) as a parameter.
- the fluctuation of the input image can be regarded as oscillating with a roughly constant amplitude around the position that would be observed without fluctuation. For this reason, averaging a plurality of images produces an image in which the degree of fluctuation is reduced.
- the parameter determination unit 122 determines the number of frames used for averaging in accordance with the fluctuation strength. Specifically, the parameter determination unit 122 increases the value of n when the fluctuation strength is large, and decreases the value of n when the fluctuation strength is small.
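The patent does not give the exact mapping from fluctuation strength to n; a hypothetical monotone mapping illustrating the rule "larger strength, larger n" might look like:

```python
def frames_for_averaging(strength, n_min=2, n_max=16):
    """Hypothetical mapping: stronger fluctuation -> more frames averaged.
    The scale factor and the bounds are illustrative, not from the patent."""
    n = n_min + int(strength * 8)
    return max(n_min, min(n, n_max))
```

Clamping to [n_min, n_max] keeps the latency and memory cost of the frame buffer bounded even when the measured strength spikes.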
- the parameter determination unit 122b determines the filter size for sharpening as a parameter according to the fluctuation strength determined by the determination unit 110. Specifically, the parameter determination unit 122b determines the filter size so that the degree of sharpening by the image sharpening unit 123 increases as the fluctuation strength increases.
- the parameter determination unit 122b increases the filter size of the unsharp mask as the fluctuation strength increases.
- thus, the greater the fluctuation strength, the greater the degree of sharpening, which reduces both the blur caused by the fluctuation of the image and the blur caused by averaging.
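As a sketch of this behavior, the following unsharp mask grows its kernel with the fluctuation strength; the box blur and the strength-to-size mapping are stand-ins (assumptions), since the patent only specifies "an unsharp mask or similar filter" with a size chosen by the parameter determination unit.

```python
import numpy as np

def box_blur(img, size):
    """Separable box filter with an odd kernel width (zero-padded borders)."""
    k = np.ones(size) / size
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, tmp)

def unsharp_mask(img, strength):
    """Sharpen more aggressively as the fluctuation strength grows.
    The strength-to-kernel-size mapping below is hypothetical."""
    size = 3 + 2 * min(int(strength * 2), 3)   # kernel width 3, 5, 7 or 9
    blurred = box_blur(np.asarray(img, dtype=float), size)
    return img + (img - blurred)               # sharpening amount fixed at 1.0
```

A larger kernel subtracts a lower-frequency estimate of the image, so the `img - blurred` detail term, and hence the sharpening, becomes stronger.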
- the correction unit 120a includes a fluctuation correction unit 121a and a parameter determination unit 122a.
- the fluctuation correction unit 121a performs weighted addition between the first input image and the corrected image.
- the fluctuation correcting unit 121a combines the first input image and the corrected image at a combining ratio α.
- the combination ratio ⁇ is a weight for weighted addition, and is an example of a parameter determined by the parameter determination unit 122a.
- the fluctuation correction unit 121a performs weighted addition of pixel values between the first input image and the corrected image generated by correcting the input image immediately before the first input image. Specifically, as shown in (Equation 3), the fluctuation correcting unit 121a generates the corrected image output[t] from the weight α, the input image input[t], and the immediately preceding corrected image output[t−1]: output[t] = α · input[t] + (1 − α) · output[t−1].
- the weight α (0 ≤ α ≤ 1) is the composition ratio of the input image input[t]. That is, as the weight α approaches 1, the proportion of the input image in the corrected image increases, and as α approaches 0, the proportion of the previous corrected image increases.
- the parameter determination unit 122a determines the weight of the weighted addition as a parameter according to the fluctuation strength determined by the determination unit 110a. Specifically, the parameter determination unit 122a determines the weight ⁇ in (Equation 3).
- the parameter determination unit 122a determines the weight ⁇ to be a smaller value, specifically, a value closer to 0 as the fluctuation strength is larger. Further, the parameter determination unit 122a determines the weight ⁇ to be a larger value, specifically, a value closer to 1 as the fluctuation strength is smaller.
- the fluctuation correction unit 121a may combine three or more frames.
- the parameter determination unit 122a may determine the weight corresponding to each frame so that the sum of the synthesis ratios of the three or more frames is 1.
- the edge appears as a difference in luminance value in the image. That is, an edge appears in a portion where the contrast is large.
- an edge pixel is a pixel whose luminance value difference with surrounding pixels is equal to or greater than a predetermined threshold.
- the determining unit 110 determines, for each pixel included in the first input image, whether the pixel is an edge pixel. For example, the determination unit 110 detects the presence / absence of an edge for each pixel by using vertical and horizontal Sobel filters. Then, the determination unit 110 counts the number of pixels determined as edge pixels.
- the determination unit 110 determines whether the value calculated by applying the Sobel filter to the pixel of interest is equal to or greater than a predetermined threshold. If it is, the determination unit 110 determines that the pixel of interest is an edge pixel and increments a counter.
- the counter value after performing edge determination for all the pixels included in the first input image is the number of pixels of the edge in the first input image.
- although the determination unit 110 uses the first input image as the target image for edge detection, the target image is not limited to this.
- the determination unit 110a illustrated in FIG. 1B uses the corrected image as the target image for edge detection.
- the determination unit 110a uses the previous corrected image as the target image for edge detection. Specifically, the determination unit 110a performs edge detection using the corrected image generated by correcting the fluctuation of the input image immediately before the first input image.
- in this way, the edge can be detected with higher accuracy, since the corrected image contains less fluctuation.
- the determination units 110 and 110a may use a Prewitt filter or a Laplacian filter instead of the Sobel filter.
- the edge detection process is not limited to the above description.
- the determination unit 110 determines, for each pixel included in the first input image, whether the pixel has a large difference from the previous frame (hereinafter referred to as a “difference pixel”).
- difference pixel is, for example, a pixel whose pixel value difference between the first input image and a frame temporally prior to the first input image is greater than or equal to a predetermined threshold value.
- the determination unit 110 calculates, for each pixel, the difference between the pixel value of the first input image and that of the second input image, and determines whether the calculated difference is equal to or greater than a predetermined threshold. If it is, the determination unit 110 determines that the pixel of interest is a difference pixel and increments a counter.
- the counter value after performing the difference determination for all the pixels included in the first input image is the number of difference pixels in the first input image.
- pixels whose values differ between two frames captured by a fixed camera belong either to a moving object (a moving subject) or to fluctuation. That is, each difference pixel is caused by either a moving object or fluctuation.
- when any moving object occupies only a few pixels, the difference pixels can be regarded as fluctuation rather than as a moving object, so the number of difference pixels can be regarded as the number of fluctuation pixels.
- the case where a moving object occupies many pixels, that is, where the moving object is large, is described in Embodiment 2.
- the determination unit 110 uses the first input image and the second input image for calculating the difference, but is not limited thereto.
- the determination unit 110a illustrated in FIG. 1B uses the first input image and the corrected image for the difference calculation.
- the determination unit 110a calculates the number of difference pixels by calculating the difference between the first input image and the corrected image generated by correcting the fluctuation of the second input image. In other words, the determination unit 110a calculates a difference between an image with a large fluctuation (first input image) and an image with a small fluctuation (corrected image).
- since the determination unit 110a calculates the difference between the first input image and the corrected image, it can obtain a difference amount close to the true fluctuation amount of the first input image.
- the value of the fluctuation intensity may vary from frame to frame due to noise, threshold settings, and the like. If fluctuation removal is performed using a fluctuation strength that varies from frame to frame, the removal effect is not stable. To avoid this, the fluctuation strength may be generated using only specific frames, or the average of the fluctuation intensities computed over a plurality of frames may be used.
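The frame-to-frame averaging of the fluctuation intensity mentioned above can be sketched as a sliding window; the window size is a hypothetical choice.

```python
from collections import deque

class StrengthSmoother:
    """Averages the fluctuation strengths of the last k frames so that a
    noisy per-frame value does not destabilize the correction."""
    def __init__(self, k=8):
        self.hist = deque(maxlen=k)  # sliding window of recent strengths

    def update(self, strength):
        self.hist.append(float(strength))
        return sum(self.hist) / len(self.hist)
```

Feeding each frame's raw strength through `update` yields a stabilized value to hand to the parameter determination unit.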
- FIG. 2 is a diagram showing a difference image when fluctuations of different intensities occur in the image processing apparatus according to the present embodiment.
- FIG. 3 is a diagram illustrating a difference image between the case where there are a plurality of objects in the input image and the case where there is a single object in the image processing apparatus according to the present embodiment.
- the input image includes a rectangular object 200.
- the edge (contour) 210 of the object 200 is a straight line.
- the luminance values within the object 200 are all the same, the luminance values of the background 201 are all the same, and the luminance value of the object 200 differs from that of the background 201.
- the degree of deformation of the object 200 increases as the fluctuation strength increases. Specifically, the edge (contour) 210 of the object 200 becomes a curve. As the fluctuation strength increases, the edge 210 is greatly displaced from the original position.
- Unlike camera shake or the like, the fluctuation is not a shift in a fixed direction; therefore, as shown in FIG. 2, the shift amount of the edge 210 varies randomly depending on the pixel position on the edge 210.
- the difference image shown in FIG. 2 is an image showing a difference between an image without fluctuation and an image including fluctuation.
- a difference image with a fluctuation intensity of “weak” indicates a difference between an input image with a fluctuation intensity of “weak” and an image without fluctuation.
- The same applies to the difference images whose fluctuation strength is “medium” and “strong”.
- the difference area 220 includes pixels (difference pixels) in which the difference in pixel value between an image without fluctuation and an image including fluctuation is equal to or greater than a predetermined threshold. That is, the number of difference pixels constituting the difference region 220 corresponds to, for example, the difference amount between adjacent images in (Expression 1).
- the degree of deformation of the object 200 increases as the fluctuation strength increases, and the difference area 220 also increases.
- the difference area 220 appears in an area corresponding to the vicinity of the edge 210 of the object 200.
- the input image includes a single object 200.
- Even when another object is present, as long as it is sufficiently smaller than the object 200, it can be said that the greater the difference amount between adjacent images, the stronger the fluctuation.
- However, even at different fluctuation strengths, the difference regions may be of substantially the same size depending on the number of objects included in the input image, that is, on the edge amount.
- For example, the difference area 240 obtained when the fluctuation intensity is “medium” and the input image includes a plurality of objects 230 is substantially the same size as the difference area 220 obtained when the fluctuation intensity is “strong” and the input image includes only the single object 200.
- the fluctuation strength cannot be determined only by the size of the difference area. That is, the size of the difference area depends on both the edge amount and the fluctuation strength.
- Therefore, the determination units 110 and 110a determine the fluctuation intensity by normalizing the size of the difference area, that is, the difference amount between adjacent images, by the edge amount. This makes it possible to determine an appropriate fluctuation strength regardless of the edge amount included in the input image.
- FIG. 4A is a flowchart showing the operation of the image processing apparatus 100 according to the present embodiment.
- FIG. 4B is a flowchart showing processing for determining the fluctuation strength according to the present embodiment.
- the determination unit 110 acquires a plurality of input images (S100). Specifically, the determination unit 110 acquires a first input image that is a fluctuation correction target and a second input image that is input before the first input image.
- the determination unit 110 determines the fluctuation strength (S120). Details of the method for determining the fluctuation strength will be described later with reference to FIG. 4B.
- the parameter determination unit 122 determines a parameter according to the fluctuation strength determined by the determination unit 110 (S140). Specifically, the parameter determination unit 122 determines, as the parameter, the number of frames n, which is set to a larger value as the fluctuation strength increases.
- the fluctuation correcting unit 121 corrects the fluctuation of the first input image using the parameter determined by the parameter determining unit 122 (S160). Specifically, the fluctuation correction unit 121 corrects fluctuations in the first input image by averaging n input images determined by the parameter determination unit 122, and outputs a corrected image.
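As a concrete illustration of the averaging-based correction (S140, S160), the following sketch averages the n most recent frames per pixel; the mapping from fluctuation strength to n (`frames_for_strength`) is an assumed placeholder rule, not one specified in this description:

```python
def frames_for_strength(strength):
    """Assumed parameter rule (S140): stronger fluctuation -> more frames.

    The scale factor and the bounds [2, 16] are illustrative only.
    """
    return max(2, min(16, int(strength * 10)))

def average_frames(frames):
    """Per-pixel average of equally sized grayscale frames (S160)."""
    n = len(frames)
    h, w = len(frames[0]), len(frames[0][0])
    return [[sum(f[y][x] for f in frames) / n for x in range(w)]
            for y in range(h)]
```

Because the edge of an object fluctuates randomly around its true position, averaging enough frames cancels the displacement and the corrected image approaches the fluctuation-free image.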
- the determination unit 110 calculates the number of edge pixels using the first input image (S121). For example, the determination unit 110 determines, for each pixel, whether or not the target pixel is an edge pixel, and counts the pixels determined to be edge pixels, thereby calculating the number of edge pixels in the first input image.
- the determination unit 110 calculates a difference amount between the first input image and the second input image (S122).
- the difference amount is the number of pixels (difference pixels) in which the difference value between adjacent images is equal to or greater than a predetermined threshold. For example, for each pixel, the determination unit 110 determines whether or not the target pixel is a difference pixel, and counts the pixels that are determined to be difference pixels, thereby calculating the difference amount.
- the determination unit 110 calculates the fluctuation strength based on (Equation 1) (S123).
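The edge counting (S121), difference counting (S122), and normalization (S123) can be sketched as follows; the gradient-based edge test and both threshold values are illustrative assumptions, since the description does not fix a particular edge detector:

```python
DIFF_THRESHOLD = 10   # assumed pixel-difference threshold (S122)
EDGE_THRESHOLD = 20   # assumed gradient threshold for the edge test (S121)

def is_edge(img, y, x):
    """Crude edge test: horizontal or vertical gradient above a threshold."""
    h, w = len(img), len(img[0])
    gx = abs(img[y][min(x + 1, w - 1)] - img[y][x])
    gy = abs(img[min(y + 1, h - 1)][x] - img[y][x])
    return max(gx, gy) >= EDGE_THRESHOLD

def fluctuation_strength(first, second):
    """Difference-pixel count normalized by edge-pixel count (Equation 1)."""
    edges = diffs = 0
    for y in range(len(first)):
        for x in range(len(first[0])):
            if is_edge(first, y, x):
                edges += 1   # S121: count edge pixels
            if abs(first[y][x] - second[y][x]) >= DIFF_THRESHOLD:
                diffs += 1   # S122: count difference pixels
    return diffs / edges if edges else 0.0   # S123: normalize by edge amount
```

The normalization is what makes the result comparable between scenes with few edges and scenes with many edges, as discussed with FIGS. 2 and 3.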
- FIG. 5A is a flowchart showing the operation of the image processing apparatus 100a according to the present embodiment.
- FIG. 5B is a flowchart showing processing for determining the fluctuation strength according to the present embodiment.
- the determination unit 110a acquires the first input image and the previous corrected image (S100a). Specifically, the determination unit 110a acquires the first input image, which is the fluctuation correction target, and the corrected image generated by correcting the fluctuation of the second input image input before the first input image.
- the determination unit 110a determines the fluctuation strength (S120a). Details of the method of determining the fluctuation strength will be described later with reference to FIG. 5B.
- the parameter determination unit 122a determines a parameter according to the fluctuation strength determined by the determination unit 110a (S140a). Specifically, the larger the fluctuation strength, the smaller the value the parameter determination unit 122a determines for the weight α corresponding to the first input image, and the larger the value it determines for the weight 1−α corresponding to the corrected image.
- the fluctuation correction unit 121a corrects the fluctuation of the first input image using the parameter determined by the parameter determination unit 122a (S160a). Specifically, the fluctuation correction unit 121a corrects the fluctuation of the first input image by performing weighted addition of the first input image and the corrected image using the weight α determined by the parameter determination unit 122a.
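The weighted addition can be sketched as follows; the schedule mapping fluctuation strength to the weight α (`weight_for_strength`) is an assumed example of the rule "larger strength, smaller α", not a rule fixed by this description:

```python
def weight_for_strength(strength, alpha_min=0.1, alpha_max=0.9):
    """Assumed rule (S140a): larger strength -> smaller weight on the input."""
    return max(alpha_min, alpha_max - 0.2 * strength)

def weighted_addition(input_img, corrected_img, alpha):
    """out = alpha * input + (1 - alpha) * previous corrected image (S160a)."""
    return [[alpha * a + (1 - alpha) * b
             for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(input_img, corrected_img)]
```

Feeding each output back in as the next `corrected_img` makes this an exponential moving average, so a small α strongly suppresses large fluctuation at the cost of slower response to genuine scene changes.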
- the determination unit 110a calculates the number of edge pixels using the previous corrected image (S121a). For example, the determination unit 110a determines, for each pixel, whether or not the target pixel is an edge pixel, and counts the pixels determined to be edge pixels, thereby calculating the number of edge pixels in the previous corrected image.
- the determination unit 110a calculates the difference amount between the first input image and the corrected image (S122a). For example, the determination unit 110a determines, for each pixel, whether or not the target pixel is a difference pixel, and counts the pixels that are determined to be difference pixels, thereby calculating the difference amount.
- the determination unit 110a calculates the fluctuation strength based on (Equation 1) (S123).
- As described above, the image processing apparatus 100 according to the present embodiment corrects the fluctuation of the first input image included in a moving image, and includes the determination unit 110 that determines the fluctuation intensity indicating the intensity of the fluctuation, and the correction unit 120 that corrects the fluctuation of the first input image according to the fluctuation intensity determined by the determination unit 110.
- The determination unit 110 determines the fluctuation strength to be a larger value as the ratio of the number of pixels whose pixel-value difference between the first input image and the preceding frame is equal to or greater than a predetermined threshold, to the number of edge pixels included in the first input image or the preceding frame, becomes larger.
- the fluctuation of the first input image is corrected according to the determined fluctuation intensity, the fluctuation can be appropriately corrected even when the fluctuation intensity changes.
- Here, the number of pixels whose difference is equal to or greater than the threshold depends on both the strength of the fluctuation and the number of edge pixels. Therefore, by normalizing the difference amount by the number of edge pixels, an appropriate fluctuation strength can be determined. For example, even when the input image contains a plurality of objects and therefore many edges, the strength of the fluctuation can be determined appropriately.
- For example, the previous frame is the second input image input before the first input image, or the corrected image generated by the correction unit 120a correcting the fluctuation of the second input image.
- the determination of the fluctuation intensity and the correction of the fluctuation can be performed more appropriately.
- For example, the correction unit 120 includes the fluctuation correction unit 121, which corrects the fluctuation of the first input image by combining a plurality of frames including the first input image, and the parameter determination unit 122, which determines a parameter used for the synthesis in accordance with the fluctuation strength determined by the determination unit 110.
- the composition ratio can be changed according to the fluctuation strength. Therefore, it is possible to appropriately correct the fluctuation of the first input image in accordance with the fluctuation strength of the first input image.
- For example, the fluctuation correction unit 121 performs averaging of a plurality of frames as the synthesis, and the parameter determination unit 122 determines, as the parameter, the number of frames used for the averaging in accordance with the fluctuation intensity determined by the determination unit 110.
- an image with a reduced degree of fluctuation can be generated by averaging images of a plurality of frames.
- the fluctuation can be appropriately corrected according to the fluctuation intensity. For example, when the fluctuation strength is large, the large fluctuation can be appropriately corrected by increasing the number of frames.
- Alternatively, the fluctuation correction unit 121a may perform weighted addition of the first input image and the corrected image as the synthesis, and the parameter determination unit 122a may determine the weight of the weighted addition as the parameter in accordance with the fluctuation intensity determined by the determination unit 110a.
- Since weighted addition of the input image and the corrected image, which contains little fluctuation, is performed, the fluctuation of the input image can be corrected.
- In addition, since the weight is determined according to the fluctuation strength, the fluctuation can be corrected appropriately. For example, by increasing the weight of the corrected image as the fluctuation strength increases, the proportion of the low-fluctuation corrected image in the weighted sum can be increased, so the fluctuation can be corrected appropriately.
- FIGS. 6A and 6B are block diagrams illustrating a configuration example of the image processing apparatus according to the present embodiment.
- the image processing apparatuses 300 and 300a according to the present embodiment are apparatuses that can appropriately correct fluctuations when an input image includes a moving body that is a moving subject.
- the image processing apparatuses 100 and 100a according to Embodiment 1 are effective when there is no moving body between two frames and when it can be assumed that the number of pixels occupied by the moving body is sufficiently small. This is because the influence of the moving body is small and the fluctuation strength can be calculated without taking the moving body into consideration.
- On the other hand, when the number of pixels occupied by the moving body is large, the influence of the moving body on the fluctuation strength becomes large. That is, the difference due to the movement of the moving body is included in the difference amount between adjacent images in (Expression 1). In other words, the difference amount between adjacent images reflects not only the magnitude of the fluctuation and the edge amount but also the amount of movement of the moving body.
- the configuration of the image processing apparatus 300 when the corrected image is not used will be described with reference to FIG. 6A. Further, the configuration of the image processing apparatus 300a when using the corrected image will be described with reference to FIG. 6B.
- As illustrated in FIG. 6A, the image processing apparatus 300 differs from the image processing apparatus 100 illustrated in FIG. 1A in that it includes a determination unit 310 instead of the determination unit 110 and newly includes a specifying unit 330.
- Similarly, as illustrated in FIG. 6B, the image processing apparatus 300a differs from the image processing apparatus 100a illustrated in FIG. 1B in that it includes a determination unit 310a instead of the determination unit 110a and newly includes a specifying unit 330a. The following description focuses on the differences, and descriptions of common points may be omitted.
- the specifying unit 330 illustrated in FIG. 6A specifies a moving body region including a moving body that moves between the input image and the previous frame.
- the specifying unit 330 acquires an input image of a plurality of frames and the previous fluctuation intensity, specifies a moving body region, and outputs the specified moving body region.
- Specifically, the specifying unit 330 identifies, as the moving body region, a closed region equal to or larger than a predetermined area among the difference regions composed of pixels whose difference value between the first input image and the second input image is equal to or greater than a threshold. In other words, the specifying unit 330 specifies an area where many difference pixels are gathered as the moving body region. At this time, the specifying unit 330 sets a parameter corresponding to the predetermined area according to the fluctuation strength determined by the determination unit 310.
- Similarly, the specifying unit 330a specifies, as the moving body region, a closed region having a predetermined area or more among the difference regions composed of pixels whose difference value between the first input image and the corrected image is equal to or greater than a threshold.
- the determination unit 310 illustrated in FIG. 6A determines the fluctuation strength using an area other than the moving body area specified by the specifying unit 330. For example, the determination unit 310 uses an area other than the moving body area as the area used for calculating the edge amount and the difference amount. That is, the determination unit 310 does not target the pixels of the entire input image but calculates the edge amount and the difference amount for pixels in a limited region.
- the specific operation of the determination unit 310 is the same as the operation of the determination unit 110 illustrated in FIG. 1A except that the calculation area is limited.
- The determination unit 310a shown in FIG. 6B differs from the determination unit 310 in that the corrected image is used instead of the second input image. Specifically, the determination unit 310a calculates the difference amount by calculating the difference between the first input image and the corrected image in an area other than the moving body area.
- the determination unit 310a calculates the edge amount using the corrected image instead of the first input image, similarly to the determination unit 110a according to the first embodiment. Specifically, the determination unit 310a calculates the edge amount using an area other than the moving body area of the corrected image.
- FIG. 7A is a diagram showing an example of an image (second input image) immediately before the input image according to the present embodiment.
- FIG. 7B is a diagram showing an example of an input image (first input image) according to the present embodiment.
- FIG. 7C is a diagram illustrating an example of a difference image according to the present embodiment.
- FIGS. 7D and 7E are diagrams showing examples of difference images that have undergone the opening process according to the present embodiment.
- the first input image and the second input image include an object 400 (for example, a building) that does not move and a moving body 401 (for example, a car).
- the difference image that is the difference between the first input image and the second input image includes difference areas 420 and 421.
- In FIG. 7C, pixels with no difference are represented in binary form by black (“0”), and pixels with a difference by white (“255”).
- the difference area 420 is an area corresponding to the vicinity of the edge of the object 400 and is an area that appears due to the influence of fluctuation.
- the difference area 421 is an area that appears mainly due to the movement of the moving body 401.
- Here, the edge of the object 400 moves by an irregular amount in an irregular direction due to the influence of the fluctuation, whereas the pixels of the moving body 401, which occupies a certain pixel area, move together by the same amount in the same direction. In general, the amplitude of the fluctuation (that is, the amount of edge shift due to the fluctuation) is often smaller than the amount of movement of the moving body 401, so the difference area 421 is larger than the difference area 420.
- the specifying unit 330 specifies the difference area 421 using the difference in pixel area between the difference area 420 and the difference area 421.
- the specifying unit 330 performs an opening process, which is a type of morphological process, as an example of a method for specifying a region using a difference in pixel area.
- The opening process is a process of performing a contraction process on a given image a predetermined number of times (hereinafter sometimes referred to as the “specified number of times”) and then performing an expansion process the same number of times.
- The contraction process is a process of shrinking a white pixel area by replacing a white pixel of interest with a black pixel when there is even one black pixel among the pixels around it (for example, the 8 pixels adjacent to the target pixel).
- Conversely, the expansion process is a process of expanding a white pixel region by replacing the pixels around a white pixel of interest (for example, the 8 pixels adjacent to the target pixel) with white pixels.
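The contraction, expansion, and opening processes described above can be sketched as follows, a minimal pure-Python illustration on a binary (0/255) image using the 8-pixel neighbourhood mentioned in the text:

```python
def _neighbourhood(img, y, x):
    """The pixel itself plus its up-to-8 adjacent pixels, clipped at borders."""
    h, w = len(img), len(img[0])
    return [img[j][i]
            for j in range(max(0, y - 1), min(h, y + 2))
            for i in range(max(0, x - 1), min(w, x + 2))]

def erode(img):
    """Contraction: a white pixel becomes black if any neighbour is black."""
    return [[255 if min(_neighbourhood(img, y, x)) == 255 else 0
             for x in range(len(img[0]))] for y in range(len(img))]

def dilate(img):
    """Expansion: a black pixel becomes white if any neighbour is white."""
    return [[255 if max(_neighbourhood(img, y, x)) == 255 else 0
             for x in range(len(img[0]))] for y in range(len(img))]

def opening(img, times):
    """Erode `times` times, then dilate the same number of times."""
    for _ in range(times):
        img = erode(img)
    for _ in range(times):
        img = dilate(img)
    return img
```

A thin difference region (such as the fluctuation-induced area 420) vanishes during erosion and never comes back, while a large blob (such as the moving-body area 421) survives and is restored to roughly its original extent by the dilations.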
- If the specified number of times is not appropriate, the specifying unit 330 cannot appropriately specify the moving body region.
- the appropriate number of times depends on the fluctuation strength.
- the appropriate specified number of times is the number of times the difference area 420 just disappears. That is, when the appropriate number of times is m, the difference area 420 remains in the m ⁇ 1th contraction process, but the difference area 420 disappears in the mth contraction process.
- the number of times the difference area 420 is deleted depends on the size of the difference area 420, that is, the amount of edge shift due to fluctuation.
- the specifying unit 330 sets the specified number of times to an appropriate number according to the fluctuation strength.
- Specifically, the specifying unit 330 sets the specified number of times to a larger value as the fluctuation strength increases. As a result, the specifying unit 330 deletes the large difference area 420 corresponding to the large fluctuation and determines the remaining area (difference area 421) as the moving body region.
- the specifying unit 330 sets the specified number of times to a smaller value as the fluctuation strength is smaller. As a result, the specifying unit 330 deletes the small difference area 420 corresponding to the small fluctuation, and determines the remaining area (difference area 421) as the moving body area.
- the specifying unit 330 determines a closed region that is equal to or larger than a predetermined area among the difference regions as a moving body region.
- Here, an example of the parameter corresponding to the predetermined area, which serves as the reference for determining whether or not a region is the moving body region, is the specified number of times. That is, by setting the specified number of times to a larger value as the fluctuation strength increases, the specifying unit 330 determines a closed difference region having an area larger than a first area as the moving body region. Further, by setting the specified number of times to a smaller value as the fluctuation strength decreases, it determines a closed difference region having an area larger than a second area (< first area) as the moving body region.
- the specifying unit 330a illustrated in FIG. 6B performs the same processing as the processing of the specifying unit 330 described above except that the first input image and the corrected image are used. For example, the specifying unit 330a calculates a difference between the first input image and the corrected image.
- the appropriate specified number of times does not have to be the number of times the difference area 420 disappears.
- For example, the appropriate specified number of times may be m−1 times or m+1 times. That is, the specified number of times may be set as appropriate so that the difference area 420 caused by the fluctuation is removed while the difference area 421 caused by the moving body remains.
- FIG. 8A is a flowchart showing the operation of the image processing apparatus 300 according to this embodiment.
- FIG. 8B is a flowchart showing a process for specifying a moving body region according to the present embodiment.
- FIG. 8C is a flowchart showing processing for determining the fluctuation strength according to the present embodiment.
- First, the determination unit 310 and the specifying unit 330 acquire a plurality of input images (S100). Then, the specifying unit 330 specifies the moving body region (S210). Details of the method of specifying the moving body region will be described later with reference to FIG. 8B.
- the determination unit 310 determines the fluctuation strength (S220). At this time, the determination unit 310 determines the fluctuation strength using an area other than the moving body area specified by the specifying unit 330. Details of the method of determining the fluctuation strength will be described later with reference to FIG. 8C.
- the parameter determination unit 122 determines the parameter (S140), and the fluctuation correction unit 121 corrects the fluctuation of the first input image using the determined parameter (S160).
- the specifying unit 330 calculates differences between input images of a plurality of frames (S211). Specifically, the specifying unit 330 generates a difference image that is a difference between the first input image and the second input image.
- Next, the specifying unit 330 binarizes the difference image (S212). Specifically, the specifying unit 330 binarizes the difference image by changing each pixel whose absolute difference value is equal to or smaller than a predetermined threshold to 0 and each pixel whose absolute difference value is larger than the threshold to 255. Thereby, for example, a binarized difference image as shown in FIG. 7C is generated.
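The binarization step (S212) is straightforward; the threshold value used below is an illustrative assumption:

```python
def binarize_difference(img_a, img_b, threshold=10):
    """S212: absolute differences at or below the threshold become 0 (black),
    larger differences become 255 (white). Threshold of 10 is an example."""
    return [[255 if abs(pa - pb) > threshold else 0
             for pa, pb in zip(row_a, row_b)]
            for row_a, row_b in zip(img_a, img_b)]
```

The resulting 0/255 image is exactly the form expected by the opening process in the next step (S214).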
- Next, the specifying unit 330 sets a parameter based on the fluctuation intensity used immediately before (S213). Specifically, the specifying unit 330 sets the specified number of times to a larger value as the fluctuation strength increases, and to a smaller value as the fluctuation strength decreases.
- Next, the specifying unit 330 performs the opening process on the binarized difference image (S214). Thereby, as shown in FIG. 7D, the moving body region is specified.
- the determination unit 310 determines whether or not the target pixel, which is one pixel included in the first input image, is included in the moving body region (S221).
- When the target pixel is included in the moving body region (Yes in S221), the region determination is performed again with another pixel as the target pixel.
- the determination unit 310 determines whether or not the target pixel of the first input image is an edge pixel (S222). If the target pixel is an edge pixel, the counter value indicating the number of edge pixels is incremented. If the target pixel is not an edge pixel, the counter value remains unchanged.
- the determination unit 310 determines whether or not the target pixel of the first input image is a difference pixel (S223). Specifically, the determination unit 310 calculates the difference between the target pixel of the first input image and the target pixel of the second input image, and determines whether the calculated difference is equal to or greater than a predetermined threshold. When the difference is equal to or larger than the predetermined threshold, the counter value indicating the difference amount between the adjacent images is incremented. When the difference is smaller than the predetermined threshold, the counter value remains as it is.
- the region determination (S221), the edge determination (S222), and the difference determination (S223) are repeated until the processing is completed for all the pixels of the first input image with another pixel as the target pixel.
- the determination unit 310 calculates the fluctuation strength based on (Equation 1) (S224).
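The region-limited counting of S221 to S224 can be sketched as follows; the mask convention (True marks a moving-body pixel), the gradient-based edge test, and both thresholds are illustrative assumptions:

```python
def fluctuation_strength_masked(first, second, moving_mask,
                                diff_threshold=10, edge_threshold=20):
    """Fluctuation strength (Equation 1) computed only outside the
    moving-body region, as in S221-S224."""
    edges = diffs = 0
    h, w = len(first), len(first[0])
    for y in range(h):
        for x in range(w):
            if moving_mask[y][x]:
                continue  # S221: skip pixels inside the moving-body region
            gx = abs(first[y][min(x + 1, w - 1)] - first[y][x])
            gy = abs(first[min(y + 1, h - 1)][x] - first[y][x])
            if max(gx, gy) >= edge_threshold:
                edges += 1   # S222: edge-pixel count
            if abs(first[y][x] - second[y][x]) >= diff_threshold:
                diffs += 1   # S223: difference-pixel count
    return diffs / edges if edges else 0.0   # S224: Equation 1
```

Masking out the moving-body pixels keeps the motion of the moving body 401 from inflating the difference count, which is exactly the point of this embodiment.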
- the influence of the moving body 401 can be reduced by excluding the pixels included in the moving body region as the fluctuation intensity calculation target. Therefore, a more appropriate fluctuation strength can be calculated.
- FIG. 9A is a flowchart showing the operation of the image processing apparatus 300a according to this embodiment.
- FIG. 9B is a flowchart showing a process of specifying a moving body region according to the present embodiment.
- FIG. 9C is a flowchart showing processing for determining the fluctuation strength according to the present embodiment.
- the determination unit 310a and the specifying unit 330a acquire the first input image and the previous corrected image (S100a).
- the specifying unit 330a specifies the moving body region (S210a). The details of the method for specifying the moving object region are as shown in FIG. 9B.
- the identifying unit 330a calculates a difference between the first input image and the previous corrected image (S211a).
- the subsequent processing is the same as the processing of the specifying unit 330 illustrated in FIG. 8B.
- the determination unit 310a determines the fluctuation strength (S220a). Details of the method of determining the fluctuation strength will be described later with reference to FIG. 9C.
- the parameter determination unit 122a determines parameters (S140a), and the fluctuation correction unit 121a uses the determined parameters to perform fluctuations in the first input image. Correction is performed (S160a).
- the determination unit 310a determines whether or not the target pixel, which is one pixel included in the first input image, is included in the moving body region (S221).
- When the target pixel is included in the moving body region (Yes in S221), the region determination is performed again with another pixel as the target pixel.
- the determination unit 310a determines whether the target pixel of the corrected image is an edge pixel (S222a). If the target pixel is an edge pixel, the counter value indicating the number of edge pixels is incremented. If the target pixel is not an edge pixel, the counter value remains unchanged.
- the determination unit 310a determines whether or not the target pixel of the first input image is a difference pixel (S223a). Specifically, the determination unit 310a calculates a difference between the target pixel of the first input image and the target pixel of the corrected image, and determines whether the calculated difference is equal to or greater than a predetermined threshold. When the difference is equal to or larger than the predetermined threshold, the counter value indicating the difference amount between the adjacent images is incremented. When the difference is smaller than the predetermined threshold, the counter value remains as it is.
- the region determination (S221), the edge determination (S222a), and the difference determination (S223a) are repeated until the processing is completed for all the pixels of the first input image with another pixel as the target pixel.
- Finally, the determination unit 310a calculates the fluctuation strength based on (Equation 1) (S224).
- the influence of the moving body 401 can be reduced by excluding the pixels included in the moving body region as the fluctuation intensity calculation target. Therefore, a more appropriate fluctuation strength can be calculated.
- the image processing apparatus 300 further includes the specifying unit 330 that specifies the moving object region including the moving object that moves between the input image and the previous frame.
- the fluctuation strength is determined using an area other than the moving body area.
- Since the fluctuation strength is determined using an area other than the moving body area, the fluctuation strength can be determined appropriately even when a moving body is included in the input image, and the fluctuation can be corrected appropriately.
- For example, the specifying unit 330 specifies, as the moving body region, a closed area of a predetermined area or more out of the difference regions composed of pixels whose difference value between the first input image and the previous frame is equal to or greater than a threshold.
- In general, the amplitude of the fluctuation (that is, the amount of edge shift due to the fluctuation) is often smaller than the amount of movement of a moving body, so a large closed difference region can be considered to correspond to a moving body. For this reason, the moving body region can be specified appropriately.
- the specifying unit 330 sets a parameter corresponding to a predetermined area according to the fluctuation strength determined by the determining unit 310.
- a parameter serving as a threshold for specifying the moving body area from the difference area is determined according to the fluctuation strength, and thus the moving body area can be specified appropriately.
- the fluctuation strength can be determined with high accuracy, and the fluctuation can be corrected more appropriately.
- Embodiment 3 will be described with reference to FIGS. 10A to 13.
- FIGS. 10A and 10B are block diagrams illustrating a configuration example of the image processing apparatus according to the present embodiment.
- the image processing apparatuses 500 and 500a according to the present embodiment are apparatuses that can correct fluctuation more appropriately when a moving object is included in an input image.
- However, there are cases where the difference area 421 cannot sufficiently represent the movement of the moving body 401. Therefore, it is required to specify the moving body region with higher accuracy.
- Also in the present embodiment, the processing differs between the case where the corrected image is not used and the case where the corrected image is used as one of the images of the plurality of frames.
- the configuration of the image processing apparatus 500 when the corrected image is not used will be described with reference to FIG. 10A. Further, the configuration of the image processing apparatus 500a when using the corrected image will be described with reference to FIG. 10B.
- As illustrated in FIG. 10A, the image processing apparatus 500 differs from the image processing apparatus 300 illustrated in FIG. 6A in that it includes a specifying unit 530 instead of the specifying unit 330 and newly includes a generation unit 540. Also, as illustrated in FIG. 10B, the image processing apparatus 500a differs from the image processing apparatus 300a illustrated in FIG. 6B in that it includes the specifying unit 530 instead of the specifying unit 330a and newly includes the generation unit 540. The following description focuses on the differences, and descriptions of common points may be omitted.
- the generation unit 540 illustrated in FIGS. 10A and 10B generates a background image using the input image.
- the background image is an image that does not include a moving object.
- the background image is an image in which a moving object is not captured when a space is photographed with a fixed camera. In the background image, it is preferable that the fluctuation is sufficiently suppressed or has not occurred.
- the generation unit 540 may generate the background image by, for example, removing moving bodies from a moving image captured by a fixed camera. Specifically, by averaging a sufficiently large number of frames, such as several hundred frames, the moving bodies can be removed and a background image can be generated. In this case, even if fluctuation occurs during the shooting period, it is removed by the averaging, so the fluctuation is sufficiently suppressed in the generated background image.
- the generation unit 540 may generate the background image using any other method.
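The temporal-averaging approach described above can be sketched as follows. This is a minimal pure-Python illustration (grayscale frames as nested lists); the frame count and data layout are assumptions for the sketch, not the patent's implementation.

```python
def generate_background(frames):
    """Average a list of equally sized grayscale frames pixel by pixel.

    With enough frames (e.g. several hundred), moving bodies and
    fluctuation are averaged away, leaving the static background.
    """
    n = len(frames)
    h, w = len(frames[0]), len(frames[0][0])
    return [[sum(f[y][x] for f in frames) / n for x in range(w)]
            for y in range(h)]

# Usage: 99 static frames plus one frame where a "moving body" passes through;
# the averaged background stays close to the static scene.
frames = [[[10, 10], [10, 10]] for _ in range(99)]
frames.append([[10, 255], [10, 10]])
bg = generate_background(frames)
```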
- the specifying unit 530 illustrated in FIGS. 10A and 10B specifies the moving body region using the input image and the background image.
- the specifying unit 530 operates in the same manner as the specifying unit 330 according to Embodiment 2, except that it uses the background image instead of the second input image.
- FIG. 11A is a diagram showing an example of the background image according to the present embodiment. As illustrated in FIG. 11A, the background image includes a stationary object 400 (a building) and does not include a moving body.
- FIG. 11B is a diagram showing an example of the difference image according to the present embodiment. Specifically, FIG. 11B shows a difference image that is a difference between the background image shown in FIG. 11A and the first input image shown in FIG. 7B.
- in the difference image, an object that is not in the background image appears as a difference area.
- a difference area 620 caused by edge shifts due to fluctuation and a difference area 621 caused by the moving body 401 appear. That is, since the moving body 401 is not included in the background image, the moving body 401 itself appears as the difference area 621.
- FIG. 11C is a diagram illustrating an example of a difference image subjected to the opening process according to the present embodiment.
- the moving body region (difference region 621) can be specified more accurately by performing the opening process.
- by using the background image, it is possible to specify the moving object region with high accuracy.
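The opening process (erosion followed by dilation) applied to the binary difference mask can be sketched as follows. The 3x3 square structuring element is an assumption; the patent does not specify the kernel.

```python
def erode(mask):
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # A pixel survives erosion only if its whole 3x3 neighbourhood is set.
            out[y][x] = int(all(mask[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
    return out

def dilate(mask):
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                # Set the whole 3x3 neighbourhood of each surviving pixel.
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            out[ny][nx] = 1
    return out

def opening(mask):
    # Thin edge-shift residue (like difference area 620) is removed by the
    # erosion, while a solid moving-body blob (like difference area 621) survives.
    return dilate(erode(mask))
```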
- FIG. 12A is a flowchart showing the operation of the image processing apparatus 500 according to this embodiment.
- FIG. 12B is a flowchart showing a process of specifying a moving body region according to the present embodiment.
- the determination unit 310 and the specifying unit 530 obtain a plurality of input images (S100).
- the generation unit 540 generates a background image (S305).
- the generation unit 540 may read and acquire the background image from the memory or the like.
- the specifying unit 530 specifies the moving body region (S310). Details of the method for specifying the moving object region will be described later with reference to FIG. 12B.
- the subsequent processing is the same as the operation of the image processing apparatus 300 shown in FIG.
- the specifying unit 530 calculates a difference between the first input image and the background image (S311).
- the subsequent processing is the same as the processing of the specifying unit 330 illustrated in FIG. 8B.
- the determination unit 310a and the specifying unit 530 obtain the first input image and the previous corrected image (S100a). Subsequent processing is the same as in FIG. 9A and FIG.
- the specifying unit 530 specifies the moving object region using the first input image and the background image not including the moving object.
- the background image does not include the moving object, so that the moving object region can be specified with high accuracy. Therefore, the fluctuation intensity can be determined appropriately, and the fluctuation of the first input image can be corrected more appropriately.
- the fluctuation strength only needs to take a larger value as the ratio of the number of pixels whose pixel value difference between the first input image and the previous frame is equal to or greater than a predetermined threshold to the number of edge pixels included in the first input image or the previous frame becomes larger.
- the fluctuation strength is a calculation result of (Equation 1), and thus has a continuously changing value, but is not limited thereto.
- the fluctuation intensity may be a discrete value such as “weak”, “medium”, “strong” as shown in FIG.
- for example, when the calculation result of (Equation 1) falls within a first range, the determination unit may determine the fluctuation strength to be "weak", and when the calculation result falls within a second range (a range of values larger than those of the first range), the determination unit may determine the fluctuation strength to be "medium".
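The mapping from the continuous (Equation 1) result to the discrete levels can be sketched as below. The range boundaries 0.1 and 0.3 are made-up thresholds for illustration, not values from the patent.

```python
def discretize_strength(ratio, weak_max=0.1, medium_max=0.3):
    """Map a continuous fluctuation-strength ratio to a discrete level."""
    if ratio < weak_max:        # first range
        return "weak"
    if ratio < medium_max:      # second range, larger values than the first
        return "medium"
    return "strong"
```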
- the number of pixels at the edge means a value indicating the amount of the edge, and may be, for example, the length of the edge.
- the number of difference pixels may be, for example, the sum of absolute differences.
- FIG. 14 is a diagram illustrating an application example of a surveillance camera according to a modification of the embodiments.
- the surveillance camera according to the present disclosure is, for example, a camera installed to capture outdoor images, and can be used, as an example, for traffic monitoring.
- the surveillance camera according to the present disclosure can also be realized as an underwater camera for photographing underwater.
- the underwater camera can be used for monitoring aquatic organisms or inspecting articles immersed in water in a factory or the like.
- the present disclosure can also be realized as an image processing method.
- the image processing method according to the present disclosure is an image processing method for correcting fluctuation of an input image included in a moving image, and includes determining a fluctuation strength indicating the strength of the fluctuation, and correcting the fluctuation of the input image according to the determined fluctuation strength.
- in the determination of the fluctuation strength, the fluctuation strength is determined to be a larger value as the ratio of the number of pixels whose pixel value difference between the input image and the previous frame is equal to or greater than a predetermined threshold to the number of edge pixels included in the input image or a frame temporally preceding the input image becomes larger.
- each component constituting the image processing apparatus 100 and the like according to the present disclosure (the determination units 110, 110a, 310, and 310a, the correction units 120 and 120a, the fluctuation correction units 121 and 121a, the parameter determination units 122 and 122a, the specifying units 330, 330a, and 530, and the generation unit 540) may be realized as a program executed on a computer including a CPU (Central Processing Unit), a RAM (Random Access Memory), a ROM (Read Only Memory), a communication interface, an I/O port, a hard disk, a display, and the like, or may be realized by hardware such as an electronic circuit.
- the image processing apparatus, the monitoring camera, and the image processing method according to the present disclosure can be used for, for example, a video recorder, a television, a camera, and the like.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Studio Devices (AREA)
- Closed-Circuit Television Systems (AREA)
- Image Analysis (AREA)
Abstract
Description
Hereinafter, Embodiment 1 will be described with reference to FIGS. 1A to 5B.
First, the configuration of the image processing apparatus according to the present embodiment will be described with reference to FIGS. 1A and 1B. FIGS. 1A and 1B are block diagrams illustrating configuration examples of the image processing apparatus according to the present embodiment.
The image processing apparatuses 100 and 100a according to the present embodiment generate and output a corrected image by correcting the fluctuation of an input image included in a moving image. As illustrated in FIG. 1A, the image processing apparatus 100 includes a determination unit 110 and a correction unit 120. As illustrated in FIG. 1B, the image processing apparatus 100a includes a determination unit 110a and a correction unit 120a.
The determination unit 110 illustrated in FIG. 1A determines a fluctuation strength indicating the strength of the fluctuation of the input image. The determination unit 110 determines and outputs the fluctuation strength using at least two input images.
The correction units 120 and 120a correct the fluctuation of the first input image according to the fluctuation strength determined by the determination units 110 and 110a, respectively. Specifically, the correction units 120 and 120a correct the fluctuation of the first input image by combining a plurality of frames including the first input image.
First, the configuration of the correction unit 120, which performs averaging as an example of combining a plurality of frames, will be described with reference to FIG. 1A. As illustrated in FIG. 1A, the correction unit 120 includes a fluctuation correction unit 121 and a parameter determination unit 122.
The fluctuation correction unit 121 corrects the fluctuation of the first input image by combining a plurality of frames including the first input image. For example, the fluctuation correction unit 121 averages the plurality of frames.
The parameter determination unit 122 determines a parameter used for combining the plurality of frames according to the fluctuation strength determined by the determination unit 110. For example, the parameter determination unit 122 determines, as the parameter, the number of frames used for the averaging according to the fluctuation strength determined by the determination unit 110. Specifically, the parameter determination unit 122 determines the value of n in (Equation 2) as the parameter.
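The selection of the frame count n and the subsequent averaging can be sketched as follows. The linear mapping from fluctuation strength to n, its bounds, and the pure-Python grayscale frames are illustrative assumptions, not the patent's (Equation 2) itself.

```python
def frames_for_strength(strength, n_min=2, n_max=16):
    """Stronger fluctuation -> more frames averaged (more smoothing)."""
    n = n_min + round(strength * (n_max - n_min))
    return max(n_min, min(n_max, n))

def average_frames(frames, n):
    """Average the most recent n grayscale frames pixel by pixel."""
    recent = frames[-n:]
    h, w = len(recent[0]), len(recent[0][0])
    return [[sum(f[y][x] for f in recent) / n for x in range(w)]
            for y in range(h)]
```

A larger n suppresses stronger fluctuation, at the cost of more motion blur; that trade-off is why n is tied to the measured fluctuation strength rather than fixed.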
The parameter determination unit 122b illustrated in FIG. 1C determines, as the parameter, a filter size for sharpening according to the fluctuation strength determined by the determination unit 110. Specifically, the parameter determination unit 122b determines the filter size such that the larger the fluctuation strength, the greater the degree of sharpening performed by the image sharpening unit 123. For example, when the image sharpening unit 123 applies an unsharp mask, the parameter determination unit 122b increases the filter size of the unsharp mask as the fluctuation strength increases. As a result, the degree of sharpening can be increased as the fluctuation strength increases, reducing both the blur caused by the fluctuation of the image and the blur caused by the averaging of the images.
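A minimal one-dimensional unsharp-mask sketch of this idea is shown below; a real implementation works on 2-D images. The box blur and the fixed gain of 1.0 are simplifying assumptions, not the patent's filter.

```python
def box_blur(signal, size):
    """Moving-average blur; `size` is the odd filter size (e.g. 3, 5, 7)."""
    r = size // 2
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - r), min(n, i + r + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def unsharp(signal, size):
    """Unsharp mask: add back the high-frequency residue (original - blurred).

    A larger filter size removes more low frequencies from the blurred copy,
    so more residue is added back and the sharpening effect is stronger.
    """
    blurred = box_blur(signal, size)
    return [s + (s - b) for s, b in zip(signal, blurred)]
```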
Next, the configuration of the correction unit 120a, which performs weighted addition as an example of combining a plurality of frames, will be described with reference to FIG. 1B. As illustrated in FIG. 1B, the correction unit 120a includes a fluctuation correction unit 121a and a parameter determination unit 122a.
The fluctuation correction unit 121a performs weighted addition of the first input image and the corrected image. In other words, the fluctuation correction unit 121a combines the first input image and the corrected image at a fixed combining ratio α. The combining ratio α is the weight of the weighted addition, and is an example of a parameter determined by the parameter determination unit 122a.
The parameter determination unit 122a determines the weight of the weighted addition as the parameter according to the fluctuation strength determined by the determination unit 110a. Specifically, the parameter determination unit 122a determines the weight α in (Equation 3).
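The recursive weighted addition can be sketched as below. Treating α as the weight of the current input image (with 1 - α on the previous corrected image) is an assumption about the exact form of (Equation 3).

```python
def weighted_addition(input_px, prev_corrected_px, alpha):
    """Per-pixel blend: a smaller alpha gives heavier temporal smoothing."""
    return alpha * input_px + (1.0 - alpha) * prev_corrected_px

def correct_frame(input_img, prev_corrected, alpha):
    """Blend the first input image with the previous corrected image."""
    return [[weighted_addition(i, p, alpha)
             for i, p in zip(irow, prow)]
            for irow, prow in zip(input_img, prev_corrected)]
```

Because each corrected frame feeds back into the next blend, this behaves like an exponentially decaying average over past frames, which is why only one stored image is needed instead of n frames.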
Next, a method of calculating the edge amount and the difference amount between adjacent images, both used for calculating the fluctuation strength, will be described.
An edge appears in an image as a difference in luminance values. That is, an edge appears in a portion with high contrast. For example, an edge pixel is a pixel whose luminance difference from surrounding pixels is equal to or greater than a predetermined threshold.
The determination unit 110 determines, for each pixel included in the first input image, whether the pixel is a pixel having a large difference from the previous frame (hereinafter sometimes referred to as a "difference pixel"). A difference pixel is, for example, a pixel whose pixel value difference between the first input image and a frame temporally preceding the first input image is equal to or greater than a predetermined threshold.
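The edge-pixel test, difference-pixel test, and the resulting ratio can be sketched as follows. The 4-neighbour edge test and both threshold values are illustrative assumptions about the exact form of (Equation 1).

```python
def fluctuation_strength(curr, prev, edge_th=30, diff_th=20):
    """Ratio of difference pixels to edge pixels between two grayscale frames."""
    h, w = len(curr), len(curr[0])
    edges = diffs = 0
    for y in range(h):
        for x in range(w):
            # Edge pixel: luminance differs from a 4-neighbour by >= edge_th.
            nb = [curr[y + dy][x + dx]
                  for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                  if 0 <= y + dy < h and 0 <= x + dx < w]
            if any(abs(curr[y][x] - v) >= edge_th for v in nb):
                edges += 1
            # Difference pixel: value changed by >= diff_th between frames.
            if abs(curr[y][x] - prev[y][x]) >= diff_th:
                diffs += 1
    return diffs / edges if edges else 0.0
```

Normalizing by the edge count keeps the measure comparable between scenes with many objects and scenes with a single object, since fluctuation-induced differences appear mostly along edges.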
Next, the relationship between the fluctuation strength according to the present embodiment and the edge amount and the difference amount will be described with reference to FIGS. 2 and 3. FIG. 2 is a diagram showing difference images obtained when fluctuations of different strengths occur in the image processing apparatus according to the present embodiment. FIG. 3 is a diagram showing difference images obtained when the input image contains a plurality of objects and when it contains a single object.
[5-1. When the Corrected Image Is Not Used]
Next, the operations of the image processing apparatuses 100 and 100a according to the present embodiment will be described with reference to FIGS. 4A to 5B. First, the operation of the image processing apparatus 100, specifically, the process of correcting the fluctuation of the input image without using the corrected image, will be described with reference to FIGS. 4A and 4B. FIG. 4A is a flowchart showing the operation of the image processing apparatus 100 according to the present embodiment. FIG. 4B is a flowchart showing the process of determining the fluctuation strength according to the present embodiment.
Next, the operation of the image processing apparatus 100a, specifically, the process of correcting the fluctuation of the input image using the corrected image, will be described with reference to FIGS. 5A and 5B. FIG. 5A is a flowchart showing the operation of the image processing apparatus 100a according to the present embodiment. FIG. 5B is a flowchart showing the process of determining the fluctuation strength according to the present embodiment.
As described above, the image processing apparatus 100 according to the present embodiment is an image processing apparatus 100 that corrects the fluctuation of a first input image included in a moving image, and includes a determination unit 110 that determines a fluctuation strength indicating the strength of the fluctuation, and a correction unit 120 that corrects the fluctuation of the first input image according to the fluctuation strength determined by the determination unit 110. The determination unit 110 determines the fluctuation strength to be a larger value as the ratio of the number of pixels whose pixel value difference between the first input image and the previous frame is equal to or greater than a predetermined threshold to the number of edge pixels included in the first input image or a frame temporally preceding the first input image becomes larger.
Hereinafter, Embodiment 2 will be described with reference to FIGS. 6A to 9C.
First, the configuration of the image processing apparatus according to the present embodiment will be described with reference to FIGS. 6A and 6B. FIGS. 6A and 6B are block diagrams illustrating configuration examples of the image processing apparatus according to the present embodiment.
As illustrated in FIG. 6A, the image processing apparatus 300 differs from the image processing apparatus 100 illustrated in FIG. 1A in that it includes a determination unit 310 instead of the determination unit 110 and additionally includes a specifying unit 330. As illustrated in FIG. 6B, the image processing apparatus 300a differs from the image processing apparatus 100a illustrated in FIG. 1B in that it includes a determination unit 310a instead of the determination unit 110a and additionally includes a specifying unit 330a. The following description focuses on these differences, and description of the common points may be omitted.
The specifying unit 330 illustrated in FIG. 6A specifies a moving body region including a moving body that moves between the input image and the previous frame. For example, the specifying unit 330 acquires input images of a plurality of frames and the previous fluctuation strength, and specifies and outputs the moving body region.
The determination unit 310 illustrated in FIG. 6A determines the fluctuation strength using a region other than the moving body region specified by the specifying unit 330. For example, the determination unit 310 uses the region other than the moving body region as the region for calculating the edge amount and the difference amount. That is, the determination unit 310 calculates the edge amount and the difference amount not for the pixels of the entire input image but for the pixels within this limited region. Except that the calculation region is limited, the specific operation of the determination unit 310 is the same as that of the determination unit 110 illustrated in FIG. 1A.
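Restricting the counting to pixels outside the moving body region can be sketched as below; the mask format (1 marks a moving-body pixel) is an assumption for the sketch.

```python
def count_outside_mask(values, mask, threshold):
    """Count pixels whose value is >= threshold, skipping moving-body pixels.

    `values` holds per-pixel edge or inter-frame difference magnitudes;
    `mask` holds 1 where the specifying unit found a moving body.
    """
    return sum(1
               for vrow, mrow in zip(values, mask)
               for v, m in zip(vrow, mrow)
               if not m and v >= threshold)
```

Excluding the moving-body pixels keeps genuine object motion from inflating the difference count, so the ratio reflects fluctuation only.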
Next, the details of the process of specifying the moving body region will be described with reference to FIGS. 7A to 7F.
[4-1. When the Corrected Image Is Not Used]
Next, the operations of the image processing apparatuses 300 and 300a according to the present embodiment will be described with reference to FIGS. 8A to 9B. First, the operation of the image processing apparatus 300, specifically, the process of correcting the fluctuation of the input image without using the corrected image, will be described with reference to FIGS. 8A to 8C.
Next, the operation of the image processing apparatus 300a, specifically, the process of correcting the fluctuation of the input image using the corrected image, will be described with reference to FIGS. 9A to 9C.
As described above, the image processing apparatus 300 according to the present embodiment further includes the specifying unit 330 that specifies a moving body region including a moving body that moves between the input image and the previous frame, and the determination unit 310 determines the fluctuation strength using a region other than the moving body region.
Hereinafter, Embodiment 3 will be described with reference to FIGS. 10A to 13.
First, the configuration of the image processing apparatus according to the present embodiment will be described with reference to FIGS. 10A and 10B. FIGS. 10A and 10B are block diagrams illustrating configuration examples of the image processing apparatus according to the present embodiment.
As illustrated in FIG. 10A, the image processing apparatus 500 differs from the image processing apparatus 300 illustrated in FIG. 6A in that it includes a specifying unit 530 instead of the specifying unit 330 and additionally includes a generation unit 540. As illustrated in FIG. 10B, the image processing apparatus 500a differs from the image processing apparatus 300a illustrated in FIG. 6B in that it includes a specifying unit 530 instead of the specifying unit 330a and additionally includes a generation unit 540. The following description focuses on these differences, and description of the common points may be omitted.
The generation unit 540 illustrated in FIGS. 10A and 10B generates a background image using the input image. The background image is an image that does not include a moving body. Specifically, the background image is an image in which no moving body appears when a space is photographed with a fixed camera. In the background image, it is preferable that the fluctuation is sufficiently suppressed or has not occurred.
The specifying unit 530 illustrated in FIGS. 10A and 10B specifies the moving body region using the input image and the background image. Specifically, the detailed operation of the specifying unit 530 is the same as that of the specifying unit 330 according to Embodiment 2, except that it uses the background image instead of the second input image.
Next, the details of the process of specifying the moving body region will be described with reference to FIGS. 11A to 11C.
[4-1. When the Corrected Image Is Not Used]
Next, the operations of the image processing apparatuses 500 and 500a according to the present embodiment will be described with reference to FIGS. 12A to 13. First, the operation of the image processing apparatus 500, specifically, the process of correcting the fluctuation of the input image without using the corrected image, will be described with reference to FIGS. 12A and 12B.
Next, the operation of the image processing apparatus 500a, specifically, the process of correcting the fluctuation of the input image using the corrected image, will be described with reference to FIG. 13.
As described above, in the image processing apparatus 500 according to the present embodiment, the specifying unit 530 specifies the moving body region using the first input image and a background image that does not include the moving body.
As described above, the embodiments have been described as examples of the technology disclosed in the present application. However, the technology in the present disclosure is not limited thereto, and is also applicable to embodiments in which modifications, replacements, additions, omissions, and the like are made as appropriate. It is also possible to combine the components described in the above embodiments to form a new embodiment.
110, 110a, 310, 310a Determination unit
120, 120a, 120b Correction unit
121, 121a Fluctuation correction unit
122, 122a, 122b Parameter determination unit
123 Image sharpening unit
200, 230, 400 Object
201 Background
210 Edge
220, 240, 420, 421, 620, 621 Difference region
330, 330a, 530 Specifying unit
401 Moving body
540 Generation unit
Claims (12)
- An image processing apparatus that corrects fluctuation of a first input image included in a moving image, comprising:
a determination unit configured to determine a fluctuation strength indicating the strength of the fluctuation; and
a correction unit configured to correct the fluctuation of the first input image according to the fluctuation strength determined by the determination unit,
wherein the determination unit determines the fluctuation strength to be a larger value as the ratio of the number of pixels whose pixel value difference between the first input image and the previous frame is equal to or greater than a predetermined threshold to the number of edge pixels included in the first input image or a frame temporally preceding the first input image becomes larger.
- The image processing apparatus according to claim 1, wherein the previous frame is a second input image input before the first input image, or a corrected image generated by the correction unit correcting the fluctuation of the second input image.
- The image processing apparatus according to claim 2, wherein the correction unit includes:
a fluctuation correction unit configured to correct the fluctuation of the first input image by performing synthesis of a plurality of frames including the first input image; and
a parameter determination unit configured to determine a parameter used for the synthesis according to the fluctuation strength determined by the determination unit.
- The image processing apparatus according to claim 3, wherein the fluctuation correction unit performs averaging of the plurality of frames as the synthesis, and
the parameter determination unit determines, as the parameter, the number of frames used for the averaging according to the fluctuation strength determined by the determination unit.
- The image processing apparatus according to claim 3, wherein the fluctuation correction unit performs weighted addition of the first input image and the corrected image as the synthesis, and
the parameter determination unit determines, as the parameter, the weight of the weighted addition according to the fluctuation strength determined by the determination unit.
- The image processing apparatus according to claim 3, wherein the correction unit further includes an image sharpening unit configured to sharpen an image,
the parameter determination unit determines, as the parameter, a filter size for the sharpening according to the fluctuation strength determined by the determination unit, and
the image sharpening unit sharpens the fluctuation-corrected first input image using the filter size determined by the parameter determination unit.
- The image processing apparatus according to any one of claims 1 to 6, further comprising a specifying unit configured to specify a moving body region including a moving body that moves between the input image and the previous frame,
wherein the determination unit determines the fluctuation strength using a region other than the moving body region.
- The image processing apparatus according to claim 7, wherein the specifying unit specifies, as the moving body region, a closed region having at least a predetermined area among a difference region composed of pixels whose difference value between the first input image and the previous frame is equal to or greater than a threshold.
- The image processing apparatus according to claim 8, wherein the specifying unit sets a parameter corresponding to the predetermined area according to the fluctuation strength determined by the determination unit.
- The image processing apparatus according to any one of claims 7 to 9, wherein the specifying unit specifies the moving body region using the first input image and a background image not including the moving body.
- A surveillance camera comprising the image processing apparatus according to any one of claims 1 to 10.
- An image processing method for correcting fluctuation of an input image included in a moving image, the method comprising:
determining a fluctuation strength indicating the strength of the fluctuation; and
correcting the fluctuation of the input image according to the determined fluctuation strength,
wherein in the determining of the fluctuation strength, the fluctuation strength is determined to be a larger value as the ratio of the number of pixels whose pixel value difference between the input image and the previous frame is equal to or greater than a predetermined threshold to the number of edge pixels included in the input image or a frame temporally preceding the input image becomes larger.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2015521167A JP6041113B2 (ja) | 2014-03-05 | 2014-09-18 | 画像処理装置、監視カメラ及び画像処理方法 |
EP14878397.0A EP3116213A4 (en) | 2014-03-05 | 2014-09-18 | Image processing apparatus, monitor camera, and image processing method |
US14/808,658 US9569825B2 (en) | 2014-03-05 | 2015-07-24 | Image processing device, monitoring camera, and image processing method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2014-043215 | 2014-03-05 | ||
JP2014043215 | 2014-03-05 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/808,658 Continuation US9569825B2 (en) | 2014-03-05 | 2015-07-24 | Image processing device, monitoring camera, and image processing method |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015132826A1 true WO2015132826A1 (ja) | 2015-09-11 |
Family
ID=54054671
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2014/004802 WO2015132826A1 (ja) | 2014-03-05 | 2014-09-18 | 画像処理装置、監視カメラ及び画像処理方法 |
Country Status (4)
Country | Link |
---|---|
US (1) | US9569825B2 (ja) |
EP (1) | EP3116213A4 (ja) |
JP (1) | JP6041113B2 (ja) |
WO (1) | WO2015132826A1 (ja) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2020144835A (ja) * | 2019-03-04 | 2020-09-10 | キヤノン株式会社 | 画像内の乱流の影響を減少させるシステム及び方法 |
WO2021149484A1 (ja) * | 2020-01-20 | 2021-07-29 | ソニーグループ株式会社 | 画像生成装置、画像生成方法、および、プログラム |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6772000B2 (ja) * | 2016-08-26 | 2020-10-21 | キヤノン株式会社 | 画像処理装置、画像処理方法およびプログラム |
AU2017202910A1 (en) * | 2017-05-02 | 2018-11-22 | Canon Kabushiki Kaisha | Image processing for turbulence compensation |
KR20190028103A (ko) * | 2017-09-08 | 2019-03-18 | 삼성에스디에스 주식회사 | 비관심 객체에 대한 마스크 처리 방법 및 그 장치 |
JP7246900B2 (ja) * | 2018-11-26 | 2023-03-28 | キヤノン株式会社 | 画像処理装置、画像処理システム、撮像装置、画像処理方法、プログラム、および、記憶媒体 |
JP7263149B2 (ja) * | 2019-06-26 | 2023-04-24 | キヤノン株式会社 | 画像処理装置、画像処理方法、およびプログラム |
US11301967B2 (en) | 2019-08-27 | 2022-04-12 | Samsung Electronics Company, Ltd. | Intelligence-based editing and curating of images |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2011229030A (ja) | 2010-04-21 | 2011-11-10 | Sony Corp | 画像処理装置および方法、記録媒体、並びにプログラム |
JP2013122639A (ja) * | 2011-12-09 | 2013-06-20 | Hitachi Kokusai Electric Inc | 画像処理装置 |
JP2013236249A (ja) | 2012-05-09 | 2013-11-21 | Hitachi Kokusai Electric Inc | 画像処理装置及び画像処理方法 |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8339583B2 (en) * | 2009-07-17 | 2012-12-25 | The Boeing Company | Visual detection of clear air turbulence |
US8611691B2 (en) * | 2009-07-31 | 2013-12-17 | The United States Of America As Represented By The Secretary Of The Army | Automated video data fusion method |
JP2012090152A (ja) | 2010-10-21 | 2012-05-10 | Sanyo Electric Co Ltd | 撮像装置および撮像方法 |
US8625005B2 (en) * | 2010-11-05 | 2014-01-07 | Raytheon Company | First-in-first-out (FIFO) buffered median scene non-uniformity correction method |
JP2012182625A (ja) | 2011-03-01 | 2012-09-20 | Nikon Corp | 撮像装置 |
JP5810628B2 (ja) * | 2011-05-25 | 2015-11-11 | 富士ゼロックス株式会社 | 画像処理装置及び画像処理プログラム |
KR101306242B1 (ko) * | 2012-03-26 | 2013-09-09 | 엠텍비젼 주식회사 | 영상의 시간적 잡음 제거 방법 및 장치 |
KR101279374B1 (ko) * | 2012-11-27 | 2013-07-04 | 주식회사 카이넥스엠 | 카메라의 영상 개선을 위한 안개제거 영상 보정 장치 및 방법 |
- 2014-09-18: WO PCT/JP2014/004802 patent/WO2015132826A1 (active, application filing)
- 2014-09-18: JP 2015521167 patent/JP6041113B2 (not active, expired due to fees)
- 2014-09-18: EP 14878397.0 patent/EP3116213A4 (not active, withdrawn)
- 2015-07-24: US 14/808,658 patent/US9569825B2 (not active, expired due to fees)
Non-Patent Citations (1)
Title |
---|
See also references of EP3116213A4 |
Also Published As
Publication number | Publication date |
---|---|
US20150332443A1 (en) | 2015-11-19 |
US9569825B2 (en) | 2017-02-14 |
EP3116213A1 (en) | 2017-01-11 |
JPWO2015132826A1 (ja) | 2017-03-30 |
JP6041113B2 (ja) | 2016-12-07 |
EP3116213A4 (en) | 2017-06-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6041113B2 (ja) | 画像処理装置、監視カメラ及び画像処理方法 | |
CN107533753B (zh) | 图像处理装置 | |
JP6697684B2 (ja) | 画像処理装置、画像処理方法、及び画像処理回路 | |
KR101633377B1 (ko) | 다중 노출에 의한 프레임 처리 방법 및 장치 | |
JP5389903B2 (ja) | 最適映像選択 | |
US10726539B2 (en) | Image processing apparatus, image processing method and storage medium | |
JP6505237B2 (ja) | 画像処理装置 | |
US20200311981A1 (en) | Image processing method, image processing apparatus, image processing system, and learnt model manufacturing method | |
JP2006129236A (ja) | リンギング除去装置およびリンギング除去プログラムを記録したコンピュータ読み取り可能な記録媒体 | |
EP3016383B1 (en) | Method, device, and system for pre-processing a video stream for subsequent motion detection processing | |
US7551795B2 (en) | Method and system for quantization artifact removal using super precision | |
WO2014069103A1 (ja) | 画像処理装置 | |
EP2575348A2 (en) | Image processing device and method for processing image | |
JP6127282B2 (ja) | 画像処理装置、画像処理方法及びそれを実行させるためのプログラム | |
US20080025628A1 (en) | Enhancement of Blurred Image Portions | |
JP2008160733A (ja) | 撮像装置、撮像信号処理方法及びプログラム | |
JP2001331806A (ja) | 画像処理方式 | |
JP2015207090A (ja) | 画像処理装置、及びその制御方法 | |
JP2018033080A (ja) | 画像処理装置、画像処理方法およびプログラム | |
US9589324B1 (en) | Overshoot protection of upscaled images | |
CN115035311A (zh) | 基于可见光与热红外融合的托辊检测方法 | |
US10733708B2 (en) | Method for estimating turbulence using turbulence parameter as a focus parameter | |
CN103841312B (zh) | 物体侦测装置及方法 | |
US11403736B2 (en) | Image processing apparatus to reduce noise in an image | |
Goto et al. | Image restoration method for non-uniform blurred images |
Legal Events
Code | Title | Description
---|---|---
ENP | Entry into the national phase | Ref document number: 2015521167; Country of ref document: JP; Kind code of ref document: A
REEP | Request for entry into the european phase | Ref document number: 2014878397; Country of ref document: EP
WWE | Wipo information: entry into national phase | Ref document number: 2014878397; Country of ref document: EP
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 14878397; Country of ref document: EP; Kind code of ref document: A1
NENP | Non-entry into the national phase | Ref country code: DE