WO2016167014A1 - Image processing apparatus and method, program, and recording medium - Google Patents
Image processing apparatus and method, program, and recording medium
- Publication number: WO2016167014A1 (application PCT/JP2016/054356)
- Authority: WO — WIPO (PCT)
- Prior art keywords: pixel, image, unit, value, gradation
- Prior art date
Classifications
- G06T 5/70 — Denoising; Smoothing
- G06T 7/337 — Image registration using feature-based methods involving reference images or patches
- G06T 3/14 — Transformations for image registration, e.g. adjusting or mapping for alignment of images
- G06T 3/4007 — Scaling of whole images or parts thereof based on interpolation, e.g. bilinear interpolation
- G06T 3/4053 — Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
- G06T 5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T 5/94 — Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
- G06T 7/30 — Determination of transform parameters for the alignment of images, i.e. image registration
- H04N 23/741 — Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
- G06T 2207/20182 — Noise reduction or smoothing in the temporal domain; spatio-temporal filtering
- G06T 2207/20208 — High dynamic range [HDR] image processing
- G06T 2207/20212 — Image combination
- G06T 2207/20216 — Image averaging
Definitions
- the present invention relates to an image processing apparatus and an image processing method for obtaining a single high-quality image by combining a plurality of images.
- the present invention also relates to a program for causing a computer to execute processing in the above-described image processing apparatus or image processing method, and a recording medium on which the program is recorded.
- In Patent Document 1, positional deviation among a plurality of images to be combined is compensated according to motion vectors obtained between the images, and the image combining ratio is changed based on the reliability of the motion vectors.
- JP 2012-19337 A paragraphs 0026 to 0030
- When the viewpoints of the images to be combined differ, an image must be deformed into an image of the other viewpoint before it is synthesized.
- The image deformation can be performed either by obtaining, for each pixel of the deformed image, the position of the corresponding portion in the image before deformation, or by obtaining, for each pixel of the image before deformation, the position of the corresponding portion in the deformed image.
- In the former method, the value of each pixel of the deformed image is determined by resampling from the gradation values of the pixels around the position of the corresponding portion in the image before deformation. In this case, the high-frequency components deteriorate owing to the frequency characteristic of the kernel used for resampling, so the deformed image becomes blurred.
- Patent Document 1 does not provide a solution to the problems associated with the deformation process.
- the present invention has been made to solve the above-described problems, and an object of the present invention is to enable generation of a high-quality composite image even when the viewpoint of the input image is different.
- The image processing apparatus of the present invention uses one of a plurality of input images as a standard image, and combines one or more of the other input images, as reference images, with it to output a composite image. The apparatus comprises:
- one or more positional deviation amount detection units, each receiving one or more reference images and detecting, for each pixel of each input reference image, the amount of positional deviation with respect to the standard image;
- one or more alignment image generation units, each receiving one or more reference images and generating, from each input reference image, an alignment image in which gradation values are defined for at least some of the pixels; and
- a synthesizing unit that synthesizes the standard image and the alignment images generated by the alignment image generation units and outputs a composite image.
- Each alignment image generation unit moves each pixel of the input reference image according to the positional deviation amount for that pixel and assigns the pixel's gradation value to the pixel position closest to the destination; pixel positions of the alignment image to which no pixel of the reference image is assigned are left without a defined gradation value.
- The synthesizing unit takes each pixel of the standard image as a target pixel and performs weighted addition of the gradation value of the target pixel and the gradation values of the pixels at the same position in the alignment images, either weighting each pixel according to whether its gradation value is defined, or interpolating the gradation values of the undefined pixels before the weighted addition; the result of the weighted addition gives the gradation value of the pixel at the same position in the composite image.
- a high-quality composite image can be generated even if the viewpoint of the input image is different.
- FIG. 1 is a block diagram illustrating an image processing apparatus according to the first embodiment of the present invention.
- FIG. 2 is a block diagram showing an example of an imaging apparatus provided with the image processing apparatus of FIG. 1.
- FIGS. 3(a) to 3(c) are conceptual diagrams of positional deviation and the alignment image.
- FIGS. 4 to 6 are block diagrams each showing a configuration example of the synthesizing unit.
- Two further figures, each with parts (a) and (b), show examples of gradation value distributions and the gradation conversion characteristics applied to them.
- FIG. 1 shows an image processing apparatus according to Embodiment 1 of the present invention.
- the image processing apparatus of FIG. 1 receives N (N is an integer of 2 or more) input images and combines these images to generate one composite image Dg.
- one input image is set as a standard image Da0
- the other input images are set as reference images Da1 to DaM.
- M equals N − 1 and is therefore an integer of 1 or more.
- FIG. 2 shows an example of an imaging apparatus provided with the image processing apparatus according to the first embodiment of the present invention.
- the imaging apparatus of FIG. 2 includes an imaging unit 2, an image memory 4, an image processing device 6, and a control unit 8.
- the image processing apparatus of FIG. 1 can be used as the image processing apparatus 6 of FIG.
- The imaging unit 2 performs imaging under exposure conditions, such as the exposure time, controlled by the control unit 8.
- the control unit 8 determines the exposure condition based on the brightness of the image output from the image memory 4 and controls the imaging unit 2. For example, the exposure time is shortened as the image becomes brighter.
- the exposure condition may be determined based on information regarding image processing conditions such as a dynamic range. Captured images obtained by photographing at different times by the imaging unit 2 are sequentially stored in the image memory 4, and N images (N is an integer of 2 or more) are simultaneously read from the image memory 4, and the reference image Da0 and M reference images Da1 to DaM are supplied to the image processing apparatus 6.
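The exposure-control rule above (the exposure time is shortened as the image becomes brighter) could be sketched as follows. This is only an illustrative sketch: the constant, the function name, and the inverse-proportional rule are assumptions, not taken from the patent.

```python
BASE_EXPOSURE_MS = 33.0  # assumed exposure time at mid-grey brightness (illustrative)

def choose_exposure_ms(mean_brightness, target=128.0, lo=1.0, hi=100.0):
    """Scale the exposure time inversely with the measured mean
    brightness (0-255 scale), clamped to a supported range."""
    if mean_brightness <= 0:
        return hi  # completely dark image: use the longest exposure
    t = BASE_EXPOSURE_MS * target / mean_brightness
    return max(lo, min(hi, t))
```

A brighter measurement thus yields a shorter next exposure, which is the behaviour the control unit 8 is described as implementing.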
- The standard image Da0 and the reference images Da1 to DaM are obtained by photographing one after another with the same imaging unit 2, that is, at different times; accordingly, movement of the imaging unit 2 or of the subject may change the viewpoint between them.
- The image processing apparatus of FIG. 1 includes M misregistration amount detection units 10-1 to 10-M, M alignment image generation units 20-1 to 20-M, and a synthesis unit 30.
- In the illustrated example M is an integer equal to or greater than 2, but M may also be 1.
- The positional deviation amount detection units 10-1 to 10-M receive the M reference images Da1 to DaM, respectively, and detect, for each pixel Dami of each input reference image Dam (m being any of 1 to M), the positional deviation amount Dbmi from the standard image Da0. The suffix "i" in the code Dami, denoting each pixel of the reference image Dam, and in the code Dbmi, denoting the positional deviation amount for that pixel, identifies the pixel and takes any value from 1 up to the total number of pixels making up the image. The same applies to the pixels of the standard image Da0, the pixels of the composite image Dg, and the pixels of the alignment images Dc1 to DcM described later.
- The alignment image generation units 20-1 to 20-M receive the M reference images Da1 to DaM, respectively, and each generates an alignment image Dcm from its input reference image Dam.
- Each alignment image generation unit 20-m moves each pixel Dami of the input reference image Dam according to the positional deviation amount Dbmi for that pixel and assigns the gradation value of the pixel Dami to the pixel position closest to the destination, thereby generating an alignment image Dcm in which the gradation values of at least some of the pixels are defined.
- the gradation value of each pixel is, for example, a value (luminance value) representing the luminance of the pixel.
- the gradation value of each pixel may be a value (color component value) representing the intensity of the color component such as red, green, and blue of the pixel.
- The synthesizer 30 synthesizes the standard image Da0 and the M alignment images Dc1 to DcM generated by the M alignment image generators 20-1 to 20-M.
- The synthesizer 30 performs this synthesis either by weighting each pixel of the M alignment images Dc1 to DcM according to whether its gradation value is defined and then adding, or by interpolating the gradation values of the undefined pixels and then performing the weighted addition.
- each of the misregistration amount detection units 10-1 to 10-M (10-m) detects the misregistration amount Dbmi with respect to the standard image Da0 for each pixel Dami of the reference image Dam.
- the positional deviation amount Dbmi represents the relative value of the position of the corresponding part on the standard image Da0 with respect to the position of each pixel Dami of each reference image Dam, that is, the motion.
- For example, a matching area centered on each pixel Dami of the reference image Dam is set, and a plurality of candidate areas are set at different positions on the standard image Da0.
- The correlation (similarity) between the matching area and each candidate area is calculated, the candidate area with the highest correlation (similarity) is identified, and the relative position of the identified candidate area with respect to the matching area is output as the positional deviation amount Dbmi.
- When no corresponding area can be found, the pixel is treated as one that "cannot be handled", that is, one whose positional deviation amount is unknown.
- Each misregistration amount detection unit 10-m selects (scans) all the pixels of the reference image Dam in order and performs the above processing on each selected pixel, thereby obtaining the positional deviation amount Dbmi from the standard image Da0 for every pixel Dami of the reference image Dam.
- "Selecting in order" does not necessarily mean selecting one pixel at a time; a plurality of pixels may be selected at the same time and the above processing performed on them in parallel. The same applies to the alignment image generation units 20-1 to 20-M and the synthesis unit 30 described later.
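The matching-area search described above could be sketched as follows. This is an illustrative sketch only: Python, the SAD (sum of absolute differences) criterion, the window and search sizes, and the function name are assumptions, since the patent does not fix a particular correlation measure.

```python
import numpy as np

def detect_displacement(ref_img, std_img, x, y, half=3, search=4):
    """For the reference-image pixel at (y, x), find the displacement
    (dy, dx) to the best-matching area in the standard image by exhaustive
    SAD matching. Returns None when the matching area cannot be formed
    near the border, i.e. the positional deviation amount is unknown."""
    h, w = ref_img.shape
    if not (half <= y < h - half and half <= x < w - half):
        return None  # "cannot be handled"
    patch = ref_img[y - half:y + half + 1, x - half:x + half + 1].astype(np.int32)
    best, best_dyx = None, None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if not (half <= yy < h - half and half <= xx < w - half):
                continue
            cand = std_img[yy - half:yy + half + 1,
                           xx - half:xx + half + 1].astype(np.int32)
            sad = np.abs(patch - cand).sum()  # lower SAD = higher similarity
            if best is None or sad < best:
                best, best_dyx = sad, (dy, dx)
    return best_dyx
```

Running this for every pixel of a reference image yields the per-pixel positional deviation amounts Dbmi.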
- Each alignment image generation unit 20-m moves each pixel Dami of the input reference image Dam according to the positional deviation amount Dbmi for that pixel, associates the pixel (movement-source pixel) Dami with the pixel position closest to the destination, and assigns the gradation value of the pixel Dami to that position, thereby generating the alignment image Dcm.
- The movement-source pixel is also referred to as the alignment-source pixel.
- the alignment image Dcm generated in this way is an image at the same viewpoint as the reference image Da0.
- the image of the same viewpoint is an image obtained when the imaging unit 2 and the subject are not relatively moved.
- Some pixels of the alignment image Dcm are associated with two or more pixels of the reference image Dam.
- In that case, the average of the gradation values of the associated pixels is determined as the gradation value of that pixel of the alignment image.
- Conversely, for some pixels of the alignment image Dcm there may be no alignment-source pixel, that is, no pixel of the reference image Dam assigned to them, so that no gradation value is determined. Pixels for which no gradation value is determined are called undefined pixels; pixels for which a gradation value is determined are called defined pixels.
- FIG. 3(a) shows the reference image Dam, FIG. 3(b) the positional deviation amounts Dbmi between the reference image Dam and the standard image Da0, and FIG. 3(c) the alignment image Dcm.
- In FIG. 3(a), each pixel of the reference image Dam is indicated by a white circle.
- In FIG. 3(b), the position of each pixel Dami of the reference image Dam is indicated by a black dot, the position of the portion of the standard image Da0 corresponding to the pixel Dami is indicated by a white circle, and the positional deviation amount Dbmi is indicated by an arrow.
- That is, the positional deviation amount Dbmi is represented as a vector from the position (black dot) of each pixel Dami of the reference image Dam to the position (white circle) of the corresponding portion in the standard image Da0.
- FIG. 3C shows a registration image Dcm.
- the pixel position of the alignment image Dcm is the same as the pixel position of the reference image Da0.
- the alignment image Dcm is generated by virtually moving each pixel Dami of the reference image Dam according to the positional deviation amount Dbmi and assigning the gradation value of the pixel Dami to the pixel position closest to the destination.
- the movement destination of the pixel Dami of the reference image Dam is indicated by a white circle.
- Each pixel position of the alignment image Dcm is located at the center of the lattice (rectangular region). Therefore, the pixel position closest to an arbitrary position in the grid centered on each pixel position is the pixel position located at the center of the grid.
- In FIG. 3(c), a pixel of the alignment image Dcm whose lattice (the rectangular region centered on the pixel) contains the center of one or more white circles (indicated by hatching) is a defined pixel.
- A pixel whose lattice contains no white-circle center (no hatching) is an undefined pixel.
- Each alignment image generation unit 20-m sequentially selects all the pixels of the input reference image Dam, and performs the above-described process of assigning the gradation values to the selected pixels. By performing this process for all the pixels, an alignment image Dcm in which gradation values are defined for at least some of the pixels is generated.
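The generation of the alignment image Dcm, including the averaging of multiply-assigned positions and the marking of undefined pixels, could be sketched as follows. This is an illustrative sketch under assumed data layouts: each entry of `disp` holds a per-pixel displacement `(dy, dx)`, or `None` where the positional deviation amount is unknown.

```python
import numpy as np

def make_alignment_image(ref_img, disp):
    """Forward-warp the reference image Dam into an alignment image Dcm:
    each pixel is moved by its displacement, its gradation value is assigned
    to the nearest destination pixel position, multiple assignments to one
    position are averaged, and positions receiving no pixel stay undefined."""
    h, w = ref_img.shape
    acc = np.zeros((h, w), dtype=np.float64)   # summed gradation values
    cnt = np.zeros((h, w), dtype=np.int32)     # assignments per position
    for y in range(h):
        for x in range(w):
            d = disp[y][x]
            if d is None:
                continue  # unknown displacement: pixel is not assigned
            ty = int(round(y + d[0]))          # nearest destination row
            tx = int(round(x + d[1]))          # nearest destination column
            if 0 <= ty < h and 0 <= tx < w:
                acc[ty, tx] += ref_img[y, x]
                cnt[ty, tx] += 1
    defined = cnt > 0                          # mask of defined pixels
    out = np.zeros((h, w), dtype=np.float64)
    out[defined] = acc[defined] / cnt[defined] # average multiple assignments
    return out, defined
```

The returned `defined` mask distinguishes the defined pixels from the undefined pixels used by the synthesizing units below.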
- The synthesizing unit 30 generates a synthesized image Dg by synthesizing the standard image Da0 and the M alignment images Dc1 to DcM.
- The synthesizing unit 30 takes each pixel of the standard image Da0 as the target pixel Da0i and performs processing, including weighted addition, on the gradation value of the target pixel Da0i and the gradation values of the pixels Dc1i to DcMi at the same position as the target pixel in the alignment images Dc1 to DcM, thereby calculating the gradation value of the pixel Dgi at the same position as the target pixel Da0i in the composite image Dg. This weighted addition may be performed excluding the undefined pixels, or after interpolating the undefined pixels.
- The synthesizing unit 30 selects the pixels constituting the standard image Da0 in order as the target pixel Da0i and performs the above weighted addition on each, thereby generating a composite image Dg in which the gradation values of all the pixels are defined.
- As the synthesizing unit 30, a combining unit 30a (first configuration example) shown in FIG. 4, a combining unit 30b (second configuration example) shown in FIG. 5, or a combining unit 30c (third configuration example) shown in FIG. 6 can be used.
- The synthesizing unit 30a shown in FIG. 4 takes each pixel of the standard image Da0 as the target pixel Da0i and calculates the time direction addition value Ddi for the target pixel by weighted addition of the gradation value of the target pixel Da0i and the gradation values of the defined pixels among the pixels Dc1i to DcMi at the same position as the target pixel in the alignment images Dc1 to DcM.
- The synthesizing unit 30a includes M undefined pixel mask units 31-1 to 31-M that mask the undefined pixels of the alignment images Dc1 to DcM, and a time direction weighted addition unit 32 that receives the target pixel Da0i and the alignment images Dc1 to DcM and performs weighted addition of the gradation value of the target pixel and the gradation values of the pixels Dc1i to DcMi at the same position as the target pixel.
- the undefined pixel mask units 31-1 to 31-M mask undefined pixels of the alignment images (Dc1 to DcM).
- The masking of undefined pixels is a process that prevents the subsequent time direction weighted addition unit 32 from adding the undefined pixels; for example, it is performed by prohibiting output of the gradation values of the undefined pixels of the alignment images Dc1 to DcM. Alternatively, information identifying whether each pixel is an undefined pixel (undefined pixel mask information) may be output for each pixel of each alignment image Dcm.
- The time direction weighted addition unit 32 performs weighted addition (time direction weighted addition) of the gradation value of the target pixel Da0i and the gradation values of the defined pixels among the pixels Dc1i to DcMi at the same position as the target pixel in the masked alignment images Dc1 to DcM, and calculates the time direction addition value Ddi for the target pixel Da0i.
- Owing to the action of the undefined pixel mask units 31-1 to 31-M, the undefined pixels are not added. Alternatively, the undefined pixels may be identified from the undefined pixel mask information and given a weight of zero in the addition; adding with a weight of zero is equivalent to not adding at all.
- In the weighted addition, the same weight may be used for the target pixel Da0i and the pixels of the alignment images to be added, or the weight may be made largest for the target pixel Da0i and smaller for each pixel of the alignment images. The weight may also be changed for each pixel, as in processing using a bilateral filter, according to the difference or similarity between the gradation values of the pixel of the standard image Da0 and the pixel of each alignment image Dcm.
- the time direction addition value Ddi for the target pixel Da0i is output from the combining unit 30a as the gradation value of the pixel Dgi at the same position as the target pixel Da0i in the composite image Dg.
- a composite image Dg is composed of the gradation values of all the pixels Dg1 to DgI.
- the synthesizer 30a generates the synthesized image Dg as described above.
- Weighted addition in the time direction weighted addition unit 32 can increase the S/N.
- When the difference between the added pixels is zero, simply adding the gradation values of L pixels multiplies the signal component by L, while the uncorrelated noise component grows only by a factor of √L, so the S/N improves by a factor of √L.
- In general, an S/N improvement effect corresponding to the sum of the weighting coefficients is obtained.
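The masked time direction weighted addition of the synthesizing unit 30a could be sketched as follows. This is an illustrative sketch: uniform weights `w0`/`wm` are just one of the options the text allows (it also permits bilateral-style weights), and the function and parameter names are assumptions.

```python
import numpy as np

def temporal_weighted_add(base, aligned, defined_masks, w0=1.0, wm=1.0):
    """Time-direction weighted addition: for each target pixel, the
    standard-image value and the values of *defined* pixels at the same
    position in the alignment images are weighted and added; undefined
    pixels get weight zero (the masking), and the sum is normalized by
    the total weight actually used at each position."""
    num = w0 * base.astype(np.float64)
    den = np.full(base.shape, w0, dtype=np.float64)
    for img, mask in zip(aligned, defined_masks):
        w = np.where(mask, wm, 0.0)  # weight zero = pixel is not added
        num += w * img
        den += w
    return num / den
```

Normalizing by the accumulated weight keeps the output in the input gradation range while still delivering the S/N averaging effect described above.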
- The combining unit 30b shown in FIG. 5 takes each pixel of the standard image Da0 as the target pixel Da0i and performs weighted addition of the gradation value of the target pixel Da0i and the gradation values of the defined pixels among the pixels Dc1i to DcMi at the same position as the target pixel in the alignment images Dc1 to DcM, and uses the result of this weighted addition for calculating the gradation value of the pixel Dgi at the same position as the target pixel Da0i in the composite image Dg.
- The combining unit 30b differs from the combining unit 30a of FIG. 4 in that it further performs weighted addition of the gradation value of the target pixel Da0i and the gradation values of the pixels around the target pixel in the standard image Da0, and also uses the result of this weighted addition for calculating the gradation value of the pixel Dgi.
- The combining unit 30b includes the M undefined pixel mask units 31-1 to 31-M and the time direction weighted addition unit 32, and further includes a spatial direction weighted addition unit 33, an added pixel number evaluation unit 34, and a spatiotemporal integration unit 35.
- The undefined pixel mask units 31-1 to 31-M and the time direction weighted addition unit 32 are the same as in the combining unit 30a of FIG. 4, and their description is not repeated. The spatial direction weighted addition unit 33, the added pixel number evaluation unit 34, and the spatiotemporal integration unit 35 are described below.
- The spatial direction weighted addition unit 33 performs weighted addition (spatial direction weighted addition) of the gradation value of the target pixel Da0i and the gradation values of the pixels around the target pixel in the standard image Da0, and calculates the spatial direction addition value Dei for the target pixel Da0i.
- The weighted addition may use the same weight for the target pixel Da0i and the surrounding pixels, or a weight that is largest at the target pixel Da0i and decreases for the surrounding pixels with increasing distance from it. The weight may also be changed for each pixel, as in processing using a bilateral filter, according to the difference or similarity between the gradation values of the target pixel Da0i and the surrounding pixels.
- The added pixel number evaluation unit 34 calculates an added pixel number evaluation value Ndi for the target pixel Da0i based on whether each of the pixels Dc1i to DcMi at the same position as the target pixel in the alignment images Dc1 to DcM is an undefined pixel.
- The added pixel number evaluation value Ndi represents the number of pixels subjected to weighted addition in the time direction weighted addition unit 32, or the total of the weights used in that weighted addition.
- When the time direction weighted addition unit 32 simply adds the target pixel Da0i and the defined pixels among the pixels Dc1i to DcMi at the same position as the target pixel in the alignment images Dc1 to DcM, the value obtained by adding 1 to the number of defined pixels among those pixels may be used as the added pixel number evaluation value Ndi. The 1 is added because the target pixel Da0i is also subject to the weighted addition.
- When weighting is used, for example with the weight for the target pixel Da0i set to 1 and the weight for each pixel of the alignment images Dc1 to DcM set to 1 when its difference from the target pixel Da0i is zero, the total (integrated value) of the weights may be used as the added pixel number evaluation value Ndi.
- The spatiotemporal integration unit 35 performs weighted addition of the time direction addition value Ddi and the spatial direction addition value Dei for the target pixel Da0i, with weights Wdi and Wei determined according to the added pixel number evaluation value Ndi calculated for that pixel, and calculates a three-dimensional addition value Dfi for the target pixel Da0i. In this weighting, when the added pixel number evaluation value Ndi is large, the weight Wdi for the time direction addition value Ddi is increased; when Ndi is small, the weight Wei for the spatial direction addition value Dei is increased.
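The spatiotemporal integration could be sketched as follows. The linear ramp used for Wdi is an illustrative assumption; the patent only requires that Wdi grow with the added pixel number evaluation value Ndi (and Wei correspondingly shrink).

```python
def spatiotemporal_integrate(Dd, De, Nd, n_max):
    """Blend the time direction addition value Dd and the spatial
    direction addition value De with weights driven by the added pixel
    number evaluation value Nd: the more temporally added pixels, the
    more the temporal result is trusted."""
    Wd = min(1.0, max(0.0, Nd / float(n_max)))  # illustrative linear ramp
    We = 1.0 - Wd
    return Wd * Dd + We * De  # three-dimensional addition value Df
```

When few alignment-image pixels are defined at a position (small Nd), the output thus falls back to the spatial average of the standard image; when many are defined, it relies on the temporal average.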
- the three-dimensional addition value Dfi for the target pixel Da0i is output from the combining unit 30b as the gradation value of the pixel Dgi at the same position as the target pixel Da0i in the composite image Dg.
- a composite image Dg is composed of the gradation values of all the pixels Dg1 to DgI.
- the synthesizer 30b generates the synthesized image Dg as described above.
- Weighted addition in the time direction weighted addition unit 32, in the spatial direction weighted addition unit 33, and in the spatiotemporal integration unit 35 can increase the S/N.
- For example, when the time direction weighted addition unit 32 simply adds the gradation values of L1 pixels, the spatial direction weighted addition unit 33 simply adds the gradation values of L2 pixels, and the spatiotemporal integration unit 35 simply adds the time direction addition value Ddi and the spatial direction addition value Dei, the signal component of the three-dimensional addition value Dfi becomes (L1 + L2) times as large.
- In general, an S/N improvement effect corresponding to the sum of the weighting coefficients is obtained.
- the compositing unit 30c shown in FIG. 6 interpolates the gradation values of the undefined pixels in each of the alignment images Dc1 to DcM using the gradation values of the surrounding pixels in the same alignment image Dcm, sets each pixel of the reference image Da0 as the target pixel Da0i, and performs weighted addition of the gradation value of the target pixel Da0i and the gradation values of the pixels Dc1i to DcMi at the same position as the target pixel Da0i in all the alignment images Dc1 to DcM, thereby calculating the time direction addition value Ddi for the pixel of interest Da0i.
- the synthesizing unit 30c includes M undefined pixel interpolation units 36-1 to 36-M that interpolate the undefined pixels of the alignment images Dc1 to DcM, and a time direction weighted addition unit 32 that weights and adds the gradation value of the pixel of interest Da0i of the reference image Da0 and the gradation values of the pixels Dc1i to DcMi at the same position as Da0i in the alignment images Dc1 to DcM.
- the undefined pixel interpolation units 36-1 to 36-M interpolate the undefined pixels of the alignment images Dc1 to DcM.
- Interpolation of gradation values of undefined pixels is performed by prediction from gradation values of pixels around the interpolation target pixel.
- the result of weighted addition of gradation values of 8 pixels in the vicinity of the interpolation target pixel can be used as the value of the interpolation target pixel.
- interpolation may also be performed by adaptively changing the weights in the weighted addition according to the edge direction, or according to whether the surrounding pixels are undefined pixels.
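As a minimal sketch (not taken from the disclosure), interpolating an undefined pixel from the defined pixels among its 8 neighbours could look like the following, where `None` marks an undefined pixel and equal weights are assumed:

```python
def interpolate_undefined(img, i, j):
    """Estimate the gradation value of undefined pixel (i, j) as the
    weighted average of its defined 8-neighbours (weight 1 each here;
    edge-adaptive weights could be substituted)."""
    num = den = 0.0
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            ni, nj = i + di, j + dj
            if 0 <= ni < len(img) and 0 <= nj < len(img[0]):
                v = img[ni][nj]
                if v is not None:        # skip undefined neighbours
                    num += v
                    den += 1.0
    return num / den if den else None    # None: no defined neighbour exists

img = [[10, None, 30],
       [20, None, 40],
       [30, 50, None]]
print(interpolate_undefined(img, 1, 1))  # 30.0
```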
- the time direction weighted addition unit 32 performs weighted addition of the gradation value of the pixel of interest Da0i and the gradation values of the pixels Dc1i to DcMi at the same position as the pixel of interest Da0i in all the alignment images Dc1 to DcM, and calculates the time direction addition value Ddi for the pixel of interest. Since the gradation values of the undefined pixels of the alignment images Dc1 to DcM are interpolated by the undefined pixel interpolation units 36-1 to 36-M, the pixels of all the alignment images Dc1 to DcM are subject to the weighted addition in the time direction weighted addition unit 32.
- in the weighted addition, equal weights may be assigned to the pixel of interest Da0i and the pixels of the alignment images Dc1 to DcM; alternatively, the largest weight may be given to the pixel of interest Da0i while each pixel of the alignment images Dc1 to DcM is given a weight that decreases as the difference in shooting time between the corresponding reference image and the base image Da0 increases. Further, the weight may be changed for each pixel in accordance with the difference or similarity between the gradation values of the base image Da0 and each alignment image Dcm, as in processing using a bilateral filter.
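The bilateral-filter-like weighting mentioned above can be sketched as follows; this is an illustrative assumption, and `sigma` is a hypothetical tuning parameter not given in the source:

```python
import math

def temporal_weights(base_val, aligned_vals, sigma=10.0):
    """Per-pixel weights for time-direction weighted addition: the pixel of
    interest gets weight 1, and each alignment-image pixel gets a weight that
    falls off with its gradation difference from the base pixel (a range
    kernel, as in a bilateral filter)."""
    w = [1.0]
    for v in aligned_vals:
        w.append(math.exp(-((v - base_val) ** 2) / (2.0 * sigma ** 2)))
    return w

def weighted_mean(vals, w):
    return sum(v * wi for v, wi in zip(vals, w)) / sum(w)

base = 100.0
aligned = [101.0, 99.0, 160.0]        # the last pixel is a mismatched outlier
w = temporal_weights(base, aligned)
Ddi = weighted_mean([base] + aligned, w)
```

Because the outlier's weight is driven toward zero, a misaligned or occluded pixel contributes almost nothing to Ddi, matching the effect described in the text.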
- the reliability of the interpolation result may be evaluated for each undefined pixel in each of the undefined pixel interpolation units 36-1 to 36-M (36-m), and the weight for the alignment image Dcm may be changed according to the reliability of the interpolation result. For example, when the surrounding pixels are mostly definition pixels, or when the image pattern is not complicated, it can be determined that the reliability of the interpolation result is high; in that case, the weight for the alignment image Dcm is increased. Conversely, when the surrounding pixels are mostly undefined pixels, or when the image pattern is complicated, it can be determined that the reliability is low; in that case, the weight for the alignment image Dcm is reduced.
- the time direction addition value Ddi for the target pixel Da0i is output from the combining unit 30c as the gradation value of the pixel Dgi at the same position as the target pixel Da0i in the composite image Dg.
- a composite image Dg is composed of the gradation values of all the pixels Dg1 to DgI.
- the compositing unit 30c generates the composite image Dg as described above.
- S / N can be increased by the weighted addition in the time direction weighted addition unit 32. For example, if the gradation values of L pixels are simply added, the signal component of the time direction addition value becomes L times, and in the case of weighted addition an S / N improvement effect corresponding to the sum of the weighting coefficients is obtained.
- one of the plurality of input images is set as the standard image Da0, and the other images are set as one or more reference images Da1 to DaM.
- one or two or more misregistration amount detection units 10-1 to 10-M detect the misregistration amount Dbmi with respect to the standard image Da0 for each pixel of the input reference image Dam.
- one or more alignment image generation units 20-1 to 20-M generate alignment images from the input reference images (Dam).
- the synthesizing unit 30 synthesizes the standard image Da0 and the one or more alignment images Dc1 to DcM generated by the one or more alignment image generation units 20-1 to 20-M.
- each of the alignment image generation units 20-1 to 20-M (20-m) moves each pixel Dami of the input reference image according to the positional deviation amount Dbmi for the pixel Dami and assigns its gradation value to the pixel at the destination, thereby generating the alignment image Dcm in which the gradation values of at least some of the pixels are defined.
- the combining unit 30 calculates the gradation value of the pixel Dgi at the same position as the target pixel Da0i in the composite image Dg, either by performing weighted addition in which each pixel of the one or more alignment images Dc1 to DcM is weighted according to whether or not it is a defined pixel, or by performing weighted addition after interpolating the undefined pixels.
- since the gradation values of the pixels of the reference images can be used as they are, without being subjected to resampling processing, when the alignment images Dc1 to DcM are generated, the loss of high frequency components caused by the frequency characteristics of the kernel used at the time of resampling can be avoided, and a composite image Dg free of aliasing distortion and high in sharpness can be generated.
- the weighted addition is performed using the pixels Da0i and Dc1i to DcMi at the same position in the reference image Da0 and the alignment images Dc1 to DcM. Since the gradation value is always defined for the pixel of the reference image Da0, the pixel of the composite image Dg is not lost. Further, since weighted addition in the spatial direction is not performed, the S / N can be improved by the number of pixels defined in the alignment images Dc1 to DcM without reducing the sharpness of the image.
- when the time direction weighted addition unit 32 is configured to change the weight for each pixel in accordance with the difference or similarity in gradation value between the reference image Da0 and each alignment image Dcm, the same effect is obtained as when pixels having a high correlation with the target pixel Da0i of the reference image Da0 are preferentially selected from the pixels of the respective alignment images Dc1 to DcM and used for the synthesis. Even when the amount of misalignment is not accurate, or when there is a concealment area or the like, it is therefore possible to avoid generating an image in which mutually shifted images are superimposed, and to obtain a clear image.
- when both the weighted addition in the time direction and the weighted addition in the spatial direction are performed, the added pixel number evaluation unit 34 obtains an added pixel number evaluation value Ndi representing the number of pixels subject to weighted addition in the time direction weighted addition unit 32, or the total of the weights used in that weighted addition. When the added pixel number evaluation value Ndi is large, the weight for the time direction addition value Ddi is increased, and when the added pixel number evaluation value Ndi is small, the weight for the spatial direction addition value Dei is increased.
- when the time direction weighted addition unit 32 is configured to change the weight for each pixel according to the difference or similarity of the gradation values between pixels at the same position in the reference image Da0 and each alignment image Dcm, the added pixel number evaluation unit 34 uses the sum of the weights as the added pixel number evaluation value Ndi. In this case, even if there is no undefined pixel, the total weight becomes small when the amount of positional deviation is not accurate, or when there is a concealment area or the like and sufficient addition cannot be performed in the time direction. In such a case, by increasing the weight for the result of the addition in the spatial direction, the number of pixels to be added (the effective number of added pixels, taking the weights into consideration) is made constant between pixels, and the S / N improvement effect can be made uniform.
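The compensation described above can be sketched as follows; this is an illustrative assumption, with `N_target` a hypothetical target for the effective number of added pixels:

```python
def blend(Dd, De, Nd, N_target):
    """Spatiotemporal integration sketch: when the effective number of pixels
    added in the time direction (Nd) is small, shift weight to the spatial
    addition value De so the total effective number of added pixels stays
    roughly constant."""
    Wd = min(1.0, Nd / float(N_target))
    We = 1.0 - Wd
    return Wd * Dd + We * De

# Full temporal support: the spatial result is ignored.
print(blend(50.0, 80.0, Nd=8, N_target=8))   # 50.0
# Occlusion etc. halves the temporal support: a 50/50 blend results.
print(blend(50.0, 80.0, Nd=4, N_target=8))   # 65.0
```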
- when the gradation values of the undefined pixels in the alignment images Dc1 to DcM are interpolated using the gradation values of the surrounding pixels, the pixels of the reference image Da0 and the pixels of all the alignment images Dc1 to DcM can be added; therefore, the number of pixels to be added can be made constant, and the S / N improvement effect can be made uniform.
- according to the image processing apparatus of the first embodiment of the present invention, even if the viewpoints of the input images differ, the problems caused by image deformation processing are avoided, and a high-quality composite image can be generated.
- FIG. 7 shows an image processing apparatus according to the second embodiment of the present invention. Similar to the first embodiment, the image processing apparatus according to the second embodiment synthesizes N input images to generate one synthesized image, and can be used as, for example, the image processing apparatus 6 in FIG.
- the image processing apparatus includes M misregistration amount detection units 10-1 to 10-M, M alignment image generation units 20-1 to 20-M, a synthesis unit 60, a gain setting unit 40, and a gradation conversion unit 50.
- the positional deviation amount detection units 10-1 to 10-M and the alignment image generation units 20-1 to 20-M are the same as those in the first embodiment.
- based on the gradation characteristics or gradation value distribution of the reference image Da0, the gain setting unit 40 obtains a gain Gj for each gradation value j (j is one of 0 to J, where J is the maximum gradation value). When each pixel of the reference image Da0 is processed as the target pixel Da0i by the synthesizing unit 60 and the gradation conversion unit 50, the gain Gj set for the gradation value of the target pixel Da0i is determined as the gain Gi for the target pixel Da0i and output.
- when processing each pixel as the target pixel Da0i, the synthesizing unit 60 uses the gain Gi for the target pixel Da0i output from the gain setting unit 40 to combine the reference image Da0 and the alignment images Dc1 to DcM, thereby generating a composite image Dg.
- when processing each pixel as the target pixel Da0i, the gradation conversion unit 50 converts the gradation of the composite image Dg using the gain Gi for the target pixel Da0i output from the gain setting unit 40, thereby generating a gradation-converted image (output image) Dh.
- the gain setting unit 40 sets the gain Gj for each gradation value based on the gradation characteristics of the reference image Da0.
- the gain Gj for each gradation value is set so that the gradation conversion characteristic determined by the gain Gj compresses the dynamic range of the reference image Da0.
- the gain Gj for the dark portion is set large, and the gain Gj for the bright portion is set close to "1"; the darker the portion, the larger the gain Gj is set.
- the gain may instead be set based on the gradation value distribution (luminance distribution) of the reference image Da0.
- the control of the exposure time is performed by the control unit 8.
- the gain Gj is set so that the overall gradation conversion characteristic becomes as shown in FIG. 8B: the gain Gj is largest in the darkest part, gradually decreases as the gradation value increases, and approaches "1" as the gradation value approaches the maximum gradation value.
- the gain Gj is largest near the lower end (the left end in the figure) of the middle gradation value portion (the range where the gradation value distribution is concentrated); the point where the gain Gj is maximum is indicated by the symbol Pgm, and the gain Gj approaches "1" as the gradation value approaches the maximum value.
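As an illustrative sketch (not from the disclosure), a gain curve with the shape described for FIG. 8B — largest in the dark part and decaying to 1 at the maximum gradation value — could be generated as follows; `g_dark` and the linear decay are assumed for illustration:

```python
def gain_curve(J=255, g_dark=4.0):
    """Per-gradation gain Gj for dynamic-range compression: largest in the
    darkest part (g_dark at j=0) and approaching 1 at the maximum gradation
    value J. A linear decay is assumed here purely for illustration."""
    return [1.0 + (g_dark - 1.0) * (1.0 - j / float(J)) for j in range(J + 1)]

G = gain_curve()
print(G[0], G[255])  # 4.0 in the darkest part, 1.0 at the maximum value
```

A real implementation would shape the curve from the measured gradation value distribution, e.g. placing the maximum near the point Pgm, rather than decaying linearly.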
- the gain Gj is preferably adjusted according to the gradation characteristics or gradation value distribution.
- the synthesizing unit 60 generates a synthesized image Dg by synthesizing the reference image Da0 and the M alignment images Dc1 to DcM.
- the synthesizing unit 60 sets each pixel of the reference image Da0 as the target pixel Da0i, and performs weighted addition of the gradation value of the target pixel Da0i and the gradation values of the pixels Dc1i to DcMi at the same position as the target pixel Da0i in the alignment images Dc1 to DcM, thereby calculating the gradation value of the pixel Dgi at the same position as the target pixel Da0i in the composite image Dg. This weighted addition may be performed excluding undefined pixels, or may be performed after interpolating undefined pixels.
- the synthesizing unit 60 is the same as the synthesizing unit 30 of the first embodiment in the above points.
- however, the synthesis unit 60 differs from the synthesis unit 30 of the first embodiment in that it controls, for each pixel, the number of pixels subject to weighted addition, or the weights in the weighted addition, based on the gain Gi for the target pixel Da0i output from the gain setting unit 40. That is, when the synthesizing unit 60 performs the synthesizing process on each pixel Da0i of the reference image Da0 (i.e., calculates the gradation value of the pixel Dgi of the composite image Dg corresponding to the pixel Da0i), it determines the number of pixels subject to weighted addition, or the weights in the weighted addition, using the gain Gi output from the gain setting unit 40 for the pixel Da0i according to the gradation value of the pixel of interest Da0i.
- as the synthesis unit 60, for example, a synthesis unit 60a (first configuration example) illustrated in FIG. 10, a synthesis unit 60b (second configuration example) illustrated in FIG. 11, or a synthesis unit 60c (third configuration example) illustrated in FIG. 12 can be used.
- the combining unit 60a illustrated in FIG. 10 calculates the time direction addition value Ddi for the pixel of interest Da0i by performing weighted addition of the gradation value of the target pixel Da0i and the gradation values of the definition pixels among the pixels Dc1i to DcMi at the same position as the target pixel Da0i in the alignment images Dc1 to DcM.
- the synthesizing unit 60a includes M undefined pixel mask units 31-1 to 31-M for masking the undefined pixels of the alignment images Dc1 to DcM, and a time direction weighted addition unit 62 that weights and adds the gradation value of the target pixel Da0i and the gradation values of the definition pixels among the pixels Dc1i to DcMi at the same position as the target pixel Da0i in the alignment images Dc1 to DcM.
- the undefined pixel mask portions 31-1 to 31-M are the same as those described with respect to the combining portion 30a in FIG.
- the time direction weighted addition unit 62 calculates the time direction addition value Ddi for the pixel of interest Da0i by performing weighted addition (time direction weighted addition) of the gradation value of the target pixel Da0i and the gradation values of the definition pixels among Dc1i to DcMi.
- the number of definition pixels used for the weighted addition is determined based on the gain Gi for the target pixel Da0i output from the gain setting unit 40. That is, if the gain Gi for the pixel of interest Da0i is large, the number of pixels to be added is increased, and if the gain Gi is small, the number of pixels to be added is decreased.
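A minimal sketch of this gain-dependent pixel count (an illustrative assumption; the mapping and `G_max` are hypothetical, not from the source):

```python
def pixels_to_add(Gi, M, G_max=4.0):
    """Choose how many of the M alignment-image pixels to include in the
    time-direction weighted addition, in proportion to the gain Gi set for
    the pixel of interest: a larger gain amplifies noise more, so more
    pixels are averaged; gain 1 (bright part) needs no extra averaging."""
    frac = min(1.0, max(0.0, (Gi - 1.0) / (G_max - 1.0)))
    return round(frac * M)

print(pixels_to_add(4.0, M=8))  # dark part at the gain ceiling: all 8 pixels
print(pixels_to_add(1.0, M=8))  # bright part, gain 1: 0 extra pixels
```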
- when selecting the definition pixels to be added, a priority order may be determined based on the difference in photographing time, the difference in gradation values, or the degree of similarity, and pixels may be selected in descending order of priority.
- the priority order is determined based on the difference in shooting time, the priority is set higher as the difference in shooting time is smaller.
- the priority order is increased as the difference is smaller or the similarity is higher.
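The priority-based selection above can be sketched as follows (an illustrative assumption; the candidate tuple layout is hypothetical):

```python
def select_by_priority(candidates, k):
    """Pick the k addition candidates with the highest priority: smaller
    shooting-time difference first, then smaller gradation difference.
    Each candidate is (time_diff, grad_diff, value)."""
    ranked = sorted(candidates, key=lambda c: (c[0], c[1]))
    return [c[2] for c in ranked[:k]]

cands = [(2, 5, 'c'), (1, 9, 'b'), (1, 3, 'a'), (3, 0, 'd')]
print(select_by_priority(cands, 2))  # ['a', 'b']
```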
- the time direction addition value Ddi for the target pixel Da0i is output from the combining unit 60a as the gradation value of the pixel Dgi at the same position as the target pixel Da0i in the composite image Dg.
- a composite image Dg is composed of the gradation values of all the pixels Dg1 to DgI.
- the synthesizing unit 60a generates the synthesized image Dg as described above.
- when the synthesizing unit 60a is used, the same effect as when the synthesizing unit 30a of FIG. 4 is used can be obtained. Furthermore, by changing, for each pixel, the number of pixels used for calculating the time direction addition value Ddi on the basis of the gain Gi determined according to the gradation value of the pixel, the S / N is improved in the dark part of the image, while in the bright part of the image unnecessary weighted addition is avoided and blur is reduced.
- the combining unit 60b illustrated in FIG. 11 sets each pixel of the reference image Da0 as the target pixel Da0i, performs weighted addition of the tone value of the target pixel Da0i and the gradation values of the definition pixels among the pixels Dc1i to DcMi at the same position as the target pixel Da0i in the alignment images Dc1 to DcM, and uses the result of this weighted addition as the gradation value of the pixel Dgi at the same position as the target pixel Da0i in the synthesized image Dg.
- the synthesizing unit 60b further performs weighted addition of the gradation value of the pixel of interest Da0i and the gradation values of the pixels around the pixel of interest Da0i in the reference image Da0, and differs from the combining unit 60a of FIG. 10 in that the result of this weighted addition is also used for calculating the gradation value of the pixel Dgi at the same position as the target pixel Da0i in the composite image Dg.
- the combining unit 60b includes M undefined pixel mask units 31-1 to 31-M and a time direction weighted addition unit 62, and further includes a spatial direction weighted addition unit 63, An added pixel number evaluation unit 34 and a spatiotemporal integration unit 65 are provided.
- since the undefined pixel mask units 31-1 to 31-M, the time direction weighted addition unit 62, and the added pixel number evaluation unit 34 are the same as the corresponding units of the synthesis unit 60a in FIG. 10, their description is omitted; in the following, the spatial direction weighted addition unit 63 and the spatiotemporal integration unit 65 are described in more detail.
- the spatial direction weighted addition unit 63, like the spatial direction weighted addition unit 33 of the combining unit 30b in FIG. 5, calculates the spatial direction addition value Dei for the pixel of interest Da0i by performing weighted addition (spatial direction weighted addition) of the gradation value of the pixel of interest Da0i and the gradation values of the pixels around the pixel of interest Da0i in the reference image Da0.
- the number of pixels used for the weighted addition is determined based on the gain Gi for the target pixel Da0i output from the gain setting unit 40. That is, if the gain Gi for the pixel of interest Da0i is large, the number of pixels to be added is increased, and if the gain Gi is small, the number of pixels to be added is decreased.
- when selecting the pixels to be added, a priority order may be determined based on the distance from the target pixel Da0i, or on the difference or similarity in gradation value with respect to the target pixel Da0i, and pixels may be selected in descending order of priority. When the priority order is determined based on the distance from the target pixel Da0i, the priority is set higher as the distance from the target pixel Da0i is shorter.
- the priority order is increased as the difference is smaller or the similarity is higher.
- it is preferable that the number of pixels added in the time direction weighted addition unit 62 and the number of pixels added in the spatial direction weighted addition unit 63 are changed in cooperation with each other. For example, when it is necessary to increase the total number of pixels to be added, the number of pixels to be added is preferentially increased in the time direction weighted addition unit 62, and when this is still insufficient, the number of pixels added in the spatial direction weighted addition unit 63 is increased. Conversely, when it is necessary to reduce the total number of pixels to be added, the number of pixels to be added is preferentially reduced in the spatial direction weighted addition unit 63, and when the number is still too large, the number of pixels added in the time direction weighted addition unit 62 is reduced.
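The cooperative, time-direction-first allocation described above can be sketched as follows (an illustrative assumption; the function and its parameters are hypothetical):

```python
def allocate(total_needed, max_time, max_space):
    """Cooperative choice of added-pixel counts: fill the required total from
    the time direction first (temporal addition does not blur the image),
    then take the remainder from the spatial direction."""
    n_time = min(total_needed, max_time)
    n_space = min(total_needed - n_time, max_space)
    return n_time, n_space

print(allocate(12, max_time=8, max_space=10))  # (8, 4): overflow goes spatial
print(allocate(6,  max_time=8, max_space=10))  # (6, 0): time direction suffices
```

Reducing counts when the total must shrink simply runs the same preference in reverse: spatial pixels are dropped first.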
- the spatiotemporal integration unit 65, like the spatiotemporal integration unit 35 of the combining unit 30b in FIG. 5, performs weighted addition of the time direction addition value Ddi and the spatial direction addition value Dei for the pixel of interest Da0i, with weights Wdi and Wei corresponding to the added pixel number evaluation value Ndi calculated for the pixel of interest Da0i, to calculate a three-dimensional addition value Dfi for the pixel of interest Da0i.
- the weights Wdi and Wei used in the weighted addition in the spatiotemporal integration unit 65 for calculating the three-dimensional addition value Dfi for each pixel are determined based on the added pixel number evaluation value Ndi and the gain Gi for the pixel. That is, when the gain Gi determined for each pixel is large and the number of pixels added by the spatial direction weighted addition unit 63 is increased, the weight Wei for the spatial direction addition value Dei is increased accordingly. Conversely, when the gain Gi is small and the number of pixels added by the spatial direction weighted addition unit 63 is reduced, the weight Wdi for the time direction addition value Ddi is increased accordingly.
- the three-dimensional addition value Dfi for the target pixel Da0i is output from the combining unit 60b as the gradation value of the pixel Dgi at the same position as the target pixel Da0i in the composite image Dg.
- a composite image Dg is composed of the gradation values of all the pixels Dg1 to DgI.
- the composition unit 60b generates the composite image Dg as described above.
- when the synthesis unit 60b is used, the same effect as when the synthesis unit 30b of FIG. 5 is used can be obtained. Further, by changing, for each pixel, on the basis of the gain Gi determined according to the gradation value of the pixel, both the number of pixels added to calculate the time direction addition value Ddi and the number of pixels added to calculate the spatial direction addition value Dei, the S / N is improved in the dark part of the image, while in the bright part of the image unnecessary weighted addition is avoided and the reduction in sharpness or blurring is reduced. In addition, priority is given to the weighted addition in the time direction, and the weighted addition in the spatial direction is performed only when this is insufficient, so that a reduction in sharpness can be suppressed.
- the combining unit 60c shown in FIG. 12 interpolates the gradation values of the undefined pixels in each of the alignment images Dc1 to DcM using the gradation values of the surrounding pixels in the alignment image Dcm, sets each pixel of the reference image Da0 as the target pixel Da0i, and calculates the time direction addition value Ddi for the pixel of interest Da0i by weighted addition of the gradation value of the target pixel Da0i and all or some of the gradation values of the pixels Dc1i to DcMi at the same position as the target pixel Da0i in the alignment images Dc1 to DcM.
- the synthesizing unit 60c includes M undefined pixel interpolation units 36-1 to 36-M that interpolate the undefined pixels of the alignment images Dc1 to DcM, and a time direction weighted addition unit 62 that weights and adds the gradation value of the pixel of interest Da0i and all or some of the gradation values of the pixels Dc1i to DcMi at the same position as the pixel of interest Da0i in the alignment images Dc1 to DcM.
- the undefined pixel interpolation units 36-1 to 36-M are the same as the undefined pixel interpolation units 36-1 to 36-M of the synthesis unit 30c in FIG.
- the time direction weighted addition unit 62 is similar to the time direction weighted addition unit 32 of the combining unit 30c in FIG. 6, and calculates the time direction addition value Ddi for the pixel of interest Da0i by performing weighted addition of the gradation value of the target pixel Da0i and all or some of the gradation values of the pixels Dc1i to DcMi at the same position as the target pixel Da0i in the alignment images Dc1 to DcM.
- the number of pixels to be added in the weighted addition is determined based on the gain Gi for the target pixel Da0i output from the gain setting unit 40. That is, if the gain Gi for the pixel of interest Da0i is large, the number of pixels to be added is increased, and if the gain Gi is small, the number of pixels to be added is decreased.
- when selecting the pixels to be added, a priority order may be determined based on whether each pixel is an interpolated pixel, or based on the difference in photographing time, the difference in gradation value, or the similarity, and pixels may be selected in descending order of priority.
- the priority order is determined based on the difference in shooting time, the priority is set higher as the difference in shooting time is smaller.
- the priority order is increased as the difference is smaller or the similarity is higher.
- the time direction addition value Ddi for the target pixel Da0i is output from the synthesis unit 60c as the gradation value of the pixel Dgi at the same position as the target pixel Da0i in the composite image Dg.
- a composite image Dg is composed of the gradation values of all the pixels Dg1 to DgI.
- the synthesizer 60c generates the synthesized image Dg as described above.
- when the synthesizing unit 60c is used, the same effect as when the synthesizing unit 30c of FIG. 6 is used can be obtained. Furthermore, by changing, for each pixel, the number of pixels used for calculating the time direction addition value Ddi on the basis of the gain Gi determined according to the gradation value of the pixel, the S / N is improved in the dark part of the image, while in the bright part of the image unnecessary weighted addition is avoided and blur is reduced.
- when the combining unit 60a illustrated in FIG. 10 or the combining unit 60c illustrated in FIG. 12 is used, the number of pixels used in the time direction weighted addition for calculating the time direction addition value Ddi is controlled, for each pixel, based on the gain Gi determined according to the gradation value of the pixel. Specifically, if the gain Gi is large, the number of pixels added in the time direction is increased, and if the gain Gi is small, the number of pixels added in the time direction is decreased.
- in the combining unit 60b, when a sufficient number of pixels cannot be added in the time direction, the spatiotemporal integration unit 65 performs the weighted addition with an increased weight for the spatial direction addition value Dei.
- when the combining unit 60b of FIG. 11 is used, if the gain Gi for the target pixel Da0i is large, the number of pixels to be added is increased in the spatial direction as well as in the time direction; therefore, even when a sufficient number of pixels cannot be added in the time direction, the S / N improvement effect can be supplemented by the addition in the spatial direction.
- when processing each pixel as the target pixel Da0i, the gradation conversion unit 50 converts the gradation value of the composite image Dg based on the gain Gi for the target pixel Da0i output from the gain setting unit 40, thereby calculating the gradation value of the pixel Dhi at the same position as the pixel of interest Da0i in the gradation-converted image (output image) Dh.
- a gain Gi may be simply applied to the gradation value of each pixel Dgi of the composite image Dg, or a dynamic range compression process based on the Retinex theory may be applied.
- in the latter case, the gradation value of the composite image Dg is separated into a base component (illumination light component) and a detail component (reflected light component), gradation conversion according to the gain Gi is performed only on the base component, and the components are re-synthesized after the intensity of the detail component is adjusted.
- the following method can be used.
- a high frequency component, or a change (second derivative) of the gradation value, is converted using the conversion characteristics shown in FIG. As a result, a detail component having a minute amplitude can be emphasized.
- alternatively, the gradation value may be divided into a low frequency component (base component) and a high frequency component (detail component), a gain for each pixel of the detail component may be determined according to the base component, and the intensity adjustment may be performed by multiplying the detail component by the gain determined for each pixel. Thereby, emphasis of the detail component can be suppressed in a dark part with low S / N, and noise amplification can be suppressed.
- as a solution to this problem, there is a technique, described in Patent Document 1, that expands the dynamic range by synthesizing an image obtained by shooting with a longer exposure time and an image obtained by shooting with a shorter exposure time. However, when it is difficult to obtain a plurality of images of the same viewpoint due to shooting restrictions, such as when the imaging unit is moving at high speed, this method cannot be applied appropriately.
- in the present embodiment, a plurality of images taken with an exposure time short enough not to saturate the gradation values in bright subject portions are used as input images, alignment is performed between the different images, and, as in the first embodiment, the random noise in the dark part is averaged so that the S / N of the dark part can be improved.
- since the misregistration amount detection units 10-1 to 10-M, the alignment image generation units 20-1 to 20-M, and the synthesis unit 60 operate as in the first embodiment, the same effects can be obtained.
- the gain setting unit 40 sets the gain Gj for each gradation value based on the gradation conversion characteristic for compressing the dynamic range and, when each pixel is processed as the target pixel Da0i, outputs the gain Gi for the target pixel Da0i according to its gradation value; the combining unit 60 controls the number of pixels to be weighted and added, or the weights in the weighted addition, according to the gain Gi; and the gradation conversion unit 50 converts, according to the gain Gi, the gradation value of the pixel Dgi at the same position as the target pixel Da0i in the composite image Dg.
- as the gain Gi increases, the number of pixels to be weighted and added in the combining unit 60 is increased, or the weights for the pixels other than the target pixel Da0i in the weighted addition are increased, so that the S / N improvement effect can be enhanced.
- as a result, the dynamic range can be expanded by improving the S / N of the dark part while keeping the gradation values of bright subject portions unsaturated.
- while the present invention has been described above as an image processing apparatus, the image processing method implemented by the above image processing apparatus also forms part of the present invention.
- The computer in FIG. 14 includes a processor 101, a program memory 102, a data memory 103, an input interface 104, and an output interface 105, which are connected by a data bus 106.
- The input interface 104 is supplied with a plurality of images from the image memory 4, which stores the images captured by the imaging unit 2. These images constitute the standard image and the reference images.
- The processor 101 operates in accordance with a program stored in the program memory 102, performs the processing of each unit of the image processing apparatus of the first or second embodiment on the plurality of images input via the input interface 104, and outputs the resulting composite image Dg (in the first embodiment) or gradation-converted image Dh (in the second embodiment) from the output interface 105.
- The contents of the processing by the processor 101 are as described for the first or second embodiment.
- Data generated in the course of the processing is held in the data memory 103.
- A computer may thus perform the processing of each unit of the image processing apparatus; the same applies when causing a computer to execute part or all of the processing of the image processing method.
- The data memory 103 may also serve as the image memory 4, in which case the captured images output from the imaging unit 2 are supplied to the data memory 103 via the input interface 104.
- 10-1 to 10-M misregistration detection units, 20-1 to 20-M alignment image generation units, 30, 30a, 30b, 30c synthesis units, 31-1 to 31-M undefined-pixel mask units, 32 temporal-direction weighted addition unit, 33 spatial-direction weighted addition unit, 34 added-pixel-count evaluation unit, 35 spatio-temporal integration unit, 36-1 to 36-M undefined-pixel interpolation units, 40 gain setting unit, 50 gradation conversion unit, 60, 60a, 60b, 60c synthesis units, 62 temporal-direction weighted addition unit, 63 spatial-direction weighted addition unit, 65 spatio-temporal integration unit.
Description
An image processing device that takes one input image among a plurality of input images as a standard image and one or more input images other than the standard image as reference images, and combines and outputs them, the device comprising:
one or more misregistration detection units, each receiving one of the one or more reference images as input and detecting, for each pixel of the input reference image, a displacement amount relative to the standard image;
one or more alignment image generation units, each receiving one of the one or more reference images as input and generating, from the input reference image, an aligned image in which gradation values are defined for at least some of the pixels; and
a synthesis unit that combines the standard image with the aligned images generated by the one or more alignment image generation units and outputs a composite image.
The device is characterized in that each of the alignment image generation units generates its aligned image by moving each pixel of the input reference image according to the displacement amount for that pixel, defining the gradation value at the pixel position nearest the destination by assigning the pixel's gradation value to it, and leaving the gradation value undefined at those pixel positions of the aligned image to which no pixel in the reference image is assigned; and
the synthesis unit takes each pixel of the standard image as a target pixel and computes the gradation value of the pixel of the composite image at the same position as the target pixel by weighted addition of the gradation value of the target pixel and the gradation values of the pixels of the aligned images at the same position as the target pixel, weighted according to whether each pixel of each aligned image has a defined gradation value, or by interpolating the gradation values of undefined pixels before the weighted addition.
FIG. 1 shows an image processing apparatus according to a first embodiment of the present invention.
The image processing apparatus of FIG. 1 receives N input images (N being an integer of 2 or more) and combines them to generate a single composite image Dg. Of the N input images, one is taken as the standard image Da0 and the others as the reference images Da1 to DaM, where M equals N-1 and is therefore an integer of 1 or more.
The imaging apparatus of FIG. 2 includes an imaging unit 2, an image memory 4, an image processing apparatus 6, and a control unit 8. The image processing apparatus of FIG. 1 can be used as the image processing apparatus 6 of FIG. 2.
The imaging unit 2 performs shooting with exposure conditions such as the exposure time controlled by the control unit 8. The control unit 8 determines the exposure conditions based on, for example, the brightness of the image output from the image memory 4, and controls the imaging unit 2 accordingly; for example, the brighter the image, the shorter the exposure time. Instead of determining the exposure conditions from the brightness of a captured image, they may be determined from information acquired in advance, for example information on the lighting conditions, information on the reflectance of the subject, or information on the image processing conditions such as the signal conversion characteristics and dynamic range of the imaging apparatus.
Images captured by the imaging unit 2 at different times are stored sequentially in the image memory 4; N images (N being an integer of 2 or more) are read out of the image memory 4 simultaneously and supplied to the image processing apparatus 6 as the standard image Da0 and the M reference images Da1 to DaM.
The 'i' in the symbol 'Dami' denoting each pixel of the reference image Dam, and in the symbol 'Dbmi' denoting the displacement amount for each pixel, identifies the pixel and takes a value from 1 to I, where I is the total number of pixels in the image. The same applies to the pixels of the standard image Da0, the composite image Dg, and the aligned images Dc1 to DcM described later.
Each of the alignment image generation units 20-1 to 20-M (20-m) moves each pixel Dami of the input reference image Dam according to the displacement amount Dbmi for that pixel and assigns the gradation value of the pixel Dami to the pixel position nearest the destination, thereby generating an aligned image Dcm in which gradation values are defined for at least some pixels.
Here the gradation value of each pixel is, for example, a value representing the luminance of the pixel (a luminance value). Alternatively, it may be a value representing the intensity of a color component of the pixel, such as red, green, or blue (a color component value).
The synthesis unit 30 performs the above combination by weighted addition in which each pixel of each of the M aligned images Dc1 to DcM is weighted according to whether it is a pixel with a defined gradation value, or by interpolating the gradation values of undefined pixels before the weighted addition.
'Selecting in order' does not necessarily mean selecting one pixel at a time; a plurality of pixels may be selected simultaneously and the above processing performed on them in parallel. The same applies to the alignment image generation units 20-1 to 20-M and the synthesis unit 30 described later.
The aligned image Dcm generated in this way is an image from the same viewpoint as the standard image Da0. Here, an image from the same viewpoint means the image that would be obtained if the imaging unit 2 and the subject had not moved relative to each other.
Some pixels of the aligned image Dcm have no source pixel in the reference image Dam, that is, no pixel of the reference image Dam is assigned to them; no gradation value is assigned to such a pixel, so its gradation value is left undetermined.
Pixels whose gradation value is not determined are called undefined pixels; pixels whose gradation value is determined are called defined pixels.
In FIG. 3(c), pixels of the aligned image Dcm whose grid cell (the rectangular region centered on the pixel) contains the center of one or more white circles (shown hatched) are defined pixels, while pixels whose cell contains no white circle (unhatched) are undefined pixels. In other words, among the pixel positions of the aligned image Dcm, those for which no source pixel exists in the reference image Dam, and hence to which no pixel is assigned, become undefined pixels.
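The forward mapping described above (move each reference pixel by its displacement, snap it to the nearest destination position, and leave unreached positions undefined) can be sketched as follows. The function name, array layout, and displacement convention are assumptions for illustration, not taken from the patent:

```python
import numpy as np

def generate_aligned_image(ref, disp):
    """Forward-warp a reference image Dam by per-pixel displacements Dbmi,
    assigning each gradation value to the pixel position nearest its
    destination. Positions that receive no value stay undefined.
    ref:  (H, W) gradation values of the reference image
    disp: (H, W, 2) per-pixel displacement (dy, dx) toward the base image
    Returns the aligned image Dcm and a boolean mask of defined pixels."""
    h, w = ref.shape
    aligned = np.zeros((h, w), dtype=ref.dtype)
    defined = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            # destination = source position moved by its displacement
            ty = int(round(y + disp[y, x, 0]))
            tx = int(round(x + disp[y, x, 1]))
            if 0 <= ty < h and 0 <= tx < w:
                aligned[ty, tx] = ref[y, x]   # nearest pixel position
                defined[ty, tx] = True        # this becomes a defined pixel
    return aligned, defined
```

With zero displacement every pixel maps onto itself and all pixels are defined; a uniform shift leaves a column of undefined pixels at the image edge, exactly the situation the undefined-pixel mask and interpolation units handle.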
The weighted addition may use equal weights for the target pixel Da0i and the surrounding pixels, or weights that are largest for the target pixel Da0i and decrease for surrounding pixels as their distance from it increases. Alternatively, as in processing with a bilateral filter, the weight may be varied per pixel according to the difference or similarity between the gradation values of the target pixel Da0i and each surrounding pixel.
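A per-pixel weight combining the two criteria just mentioned (distance from the target pixel, and gradation-value similarity, bilateral-filter style) might look like this minimal sketch; the Gaussian form and the sigma parameters are assumptions, since the text does not fix the weighting function:

```python
import numpy as np

def spatial_weight(dist2, value_diff, sigma_d=1.5, sigma_r=10.0):
    """Weight for one surrounding pixel in the weighted addition:
    largest at the target pixel (dist2 == 0, value_diff == 0) and
    decreasing both with spatial distance and with the difference
    in gradation value, as in a bilateral filter.
    dist2:      squared spatial distance from the target pixel
    value_diff: gradation-value difference from the target pixel"""
    return (np.exp(-dist2 / (2.0 * sigma_d**2))
            * np.exp(-value_diff**2 / (2.0 * sigma_r**2)))
```

Setting `sigma_r` very large recovers the purely distance-based weighting, and making both sigmas large approaches the equal-weight case.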
The added-pixel-count evaluation value Ndi represents the number of pixels included in the weighted addition in the temporal-direction weighted addition unit 32, or the sum of the weights used in that weighted addition.
In the weighting, when the evaluation value Ndi is large, the weight Wdi for the temporal-direction sum Ddi is increased; when Ndi is small, the weight Wei for the spatial-direction sum Dei is increased.
Since the synthesis unit 30c interpolates all the aligned images before the weighted addition, the L above equals N.
When the weights in the weighted addition differ, an S/N improvement corresponding to the sum of the weighting coefficients is obtained.
In the image processing apparatus of the first embodiment, one of the plurality of input images is taken as the standard image Da0 and the others as one or more reference images Da1 to DaM. The one or more misregistration detection units 10-1 to 10-M detect, for each pixel of the respectively input reference image Dam, the displacement amount Dbmi relative to the standard image Da0. The one or more alignment image generation units 20-1 to 20-M each generate an aligned image from the respectively input reference image Dam. The synthesis unit 30 then combines the standard image Da0 with the one or more aligned images Dc1 to DcM generated by the alignment image generation units 20-1 to 20-M. Each alignment image generation unit (20-m) moves each pixel Dami of its input reference image according to the displacement amount Dbmi for that pixel and assigns the gradation value of the pixel Dami to the pixel position nearest the destination, thereby generating an aligned image Dcm in which gradation values are defined for at least some pixels. The synthesis unit 30 performs the combination by weighted addition in which each pixel of each of the aligned images Dc1 to DcM is weighted according to whether it is a defined pixel, or by interpolating the undefined pixels before the weighted addition, thereby computing the gradation value of the pixel Dgi of the composite image Dg at the same position as the target pixel Da0i.
The added-pixel-count evaluation unit 34 then computes the evaluation value Ndi, representing the number of pixels included in the weighted addition in the temporal-direction weighted addition unit 32 or the sum of the weights used there; the weight for the temporal-direction sum Ddi is increased when Ndi is large, and the weight for the spatial-direction sum Dei is increased when Ndi is small.
As a result, when undefined pixels are numerous and not enough defined pixels can be added in the temporal direction, the composite image Dg is generated with a larger weight on the result of the spatial-direction addition, so that the number of added pixels (the effective number taking the weights into account) is kept constant across pixels and the S/N improvement is uniform.
FIG. 7 shows an image processing apparatus according to a second embodiment of the present invention. Like the first embodiment, it combines N input images into a single composite image and can be used, for example, as the image processing apparatus 6 of FIG. 2.
The misregistration detection units 10-1 to 10-M and the alignment image generation units 20-1 to 20-M are the same as in the first embodiment.
Thus, the gain Gj is preferably adjusted according to the gradation characteristics or the gradation value distribution.
However, the number of defined pixels used in the weighted addition is determined based on the gain Gi for the target pixel Da0i output from the gain setting unit 40: if the gain Gi for the target pixel Da0i is large, the number of added pixels is increased, and if the gain Gi is small, the number is decreased.
However, the number of pixels used in the weighted addition is determined based on the gain Gi for the target pixel Da0i output from the gain setting unit 40: if the gain Gi for the target pixel Da0i is large, the number of added pixels is increased, and if the gain Gi is small, the number is decreased.
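One plausible way to turn the gain Gi into a pixel count follows from the usual noise argument: averaging K samples improves S/N by roughly sqrt(K), while applying a gain of Gi amplifies noise by Gi, so about Gi squared averaged samples are needed to restore the S/N. The quadratic rule and the clamping bounds below are assumptions; the patent only states that the count grows with the gain:

```python
def taps_for_gain(gain, base_taps=1, max_taps=25):
    """Number of pixels to include in the weighted addition for a target
    pixel, derived from its gain Gi. A gain of G amplifies noise by G,
    and averaging K samples improves S/N by sqrt(K), so K ~ G**2
    restores the original S/N. base_taps and max_taps bound the count
    (both bounds are illustrative assumptions)."""
    return int(min(max(round(gain**2), base_taps), max_taps))
```

A pixel left at unit gain needs no averaging, while a pixel brightened 2x would draw on about four defined pixels, matching the text's rule that a larger Gi means more added pixels.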
For example, when the total number of pixels to be added needs to be increased, it is preferable to increase the number of pixels added in the temporal-direction weighted addition unit 62 first, and to increase the number added in the spatial-direction weighted addition unit 63 only when that is still insufficient. Conversely, when the total number of pixels to be added needs to be decreased, it is preferable to decrease the number of pixels added in the spatial-direction weighted addition unit 63 first, and to decrease the number added in the temporal-direction weighted addition unit 62 only when the total is still too large.
Wdi : Wei = Ndi : Nei
In the above expression, Ndi is the added-pixel-count evaluation value for the temporal direction, and Nei is the corresponding evaluation value for the spatial direction.
Giving priority to the temporal-direction weighted addition, and performing the spatial-direction weighted addition only when it is insufficient, also suppresses loss of sharpness.
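The blend of the temporal-direction sum Ddi and the spatial-direction sum Dei with weights in the ratio Wdi : Wei = Ndi : Nei can be sketched as follows; normalizing the two weights to sum to 1, and the fallback when both counts are zero, are assumptions beyond what the text states:

```python
def integrate(time_sum, space_sum, n_time, n_space):
    """Spatio-temporal integration for one target pixel: blend the
    temporal-direction sum Ddi and the spatial-direction sum Dei with
    weights in the ratio Wdi : Wei = Ndi : Nei, where n_time (Ndi) and
    n_space (Nei) are the added-pixel-count evaluation values of the
    two additions. A larger Ndi gives the temporal sum more weight."""
    total = n_time + n_space
    if total == 0:
        return space_sum            # nothing accumulated: fall back
    wd = n_time / total             # weight Wdi for the temporal sum
    we = n_space / total            # weight Wei for the spatial sum
    return wd * time_sum + we * space_sum
```

When many aligned pixels are defined, n_time dominates and the output follows the temporal average; where undefined pixels make n_time small, the spatial average takes over, which is exactly the behavior of the spatio-temporal integration unit.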
The number of pixels added in the weighted addition is determined based on the gain Gi for the target pixel Da0i output from the gain setting unit 40: if the gain Gi for the target pixel Da0i is large, the number of added pixels is increased, and if the gain Gi is small, the number is decreased.
The gradation conversion may simply multiply the gradation value of each pixel Dgi of the composite image Dg by the gain Gi, or it may apply dynamic range compression based on Retinex theory. In the latter case, the gradation values of the composite image Dg are separated into a base component (illumination component) and a detail component (reflectance component); only the base component undergoes gradation conversion according to the gain Gi, and the detail component is recombined after an intensity adjustment.
In the intensity adjustment, for example, the high-frequency component or the change (second derivative) of the gradation values is transformed using the conversion characteristic shown in FIG. 13, which emphasizes detail components of small amplitude.
As another method, the gradation values may be divided into a low-frequency component (base component) and a high-frequency component (detail component); a gain for each pixel of the detail component is determined according to the base component, and the intensity adjustment is performed by multiplying the detail component by the per-pixel gain. This suppresses emphasis of the detail component, and hence noise amplification, in dark areas where the S/N is low.
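A minimal sketch of this base/detail separation follows. A box blur stands in for the low-pass filter and `gain_of` for the per-gradation gain lookup; both are assumptions, since the text leaves the filters and the gain curve unspecified:

```python
import numpy as np

def compress_dynamic_range(img, gain_of, detail_gain=1.2, k=5):
    """Retinex-style compression sketch: split the gradation values into
    a low-frequency base (illumination) component and a high-frequency
    detail (reflectance) component, apply the gain only to the base,
    scale the detail, and recombine.
    img:         (H, W) gradation values of the composite image Dg
    gain_of:     maps base gradation values to their gain Gi (assumed)
    detail_gain: intensity adjustment applied to the detail component
    k:           box-blur kernel size standing in for the base extractor"""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    base = np.zeros(img.shape, dtype=float)
    for dy in range(k):                      # k x k box blur = base component
        for dx in range(k):
            base += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    base /= k * k
    detail = img - base                      # high-frequency detail component
    return base * gain_of(base) + detail_gain * detail
```

On a flat image the detail component vanishes and the output is just the gained base, while edges and texture pass through scaled by `detail_gain` instead of by the (possibly large) dark-area gain, which is the point of converting only the base component.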
To obtain an image in which the S/N of dark areas is well preserved, a long exposure time is needed. In a scene with a wide dynamic range, however, shooting with a long exposure time saturates the gradation values of bright subject areas and loses their signal. Conversely, shooting with an exposure time short enough that bright subject areas do not saturate fails to preserve the S/N of dark areas, and the signal of dark subjects is buried in noise.
The input interface 104 is supplied with a plurality of images, for example from the image memory 4 that stores the images captured by the imaging unit 2. These images constitute the standard image and the reference images.
Claims (14)
- An image processing device that takes one input image among a plurality of input images as a standard image and one or more input images other than the standard image as reference images, and combines and outputs them, the device comprising:
one or more misregistration detection units, each receiving one of the one or more reference images as input and detecting, for each pixel of the input reference image, a displacement amount relative to the standard image;
one or more alignment image generation units, each receiving one of the one or more reference images as input and generating, from the input reference image, an aligned image in which gradation values are defined for at least some of the pixels; and
a synthesis unit that combines the standard image with the aligned images generated by the one or more alignment image generation units and outputs a composite image; wherein
each of the alignment image generation units generates its aligned image by moving each pixel of the input reference image according to the displacement amount for that pixel, defining the gradation value at the pixel position nearest the destination by assigning the pixel's gradation value to it, and leaving the gradation value undefined at those pixel positions of the aligned image to which no pixel in the reference image is assigned; and
the synthesis unit takes each pixel of the standard image as a target pixel and computes the gradation value of the pixel of the composite image at the same position as the target pixel by weighted addition of the gradation value of the target pixel and the gradation values of the pixels of the aligned images at the same position as the target pixel, weighted according to whether each pixel of each aligned image has a defined gradation value, or by interpolating the gradation values of undefined pixels before the weighted addition.
- The image processing device of claim 1, wherein the synthesis unit comprises:
an undefined-pixel mask unit that masks those pixels of the aligned images whose gradation value is not defined; and
a temporal-direction weighted addition unit that computes a temporal-direction sum for the target pixel by weighted addition of the gradation value of the target pixel and the gradation values of the pixels, at the same position as the target pixel, of the one or more aligned images masked by the undefined-pixel mask unit.
- The image processing device of claim 2, wherein the temporal-direction weighted addition unit determines the weight used in its weighted addition for the pixel, at the same position as the target pixel, of each of the one or more aligned images according to the difference or similarity of its gradation value with that of the target pixel.
- The image processing device of claim 2 or 3, wherein the synthesis unit further comprises:
a spatial-direction weighted addition unit that computes a spatial-direction sum for the target pixel by weighted addition of the gradation value of the target pixel and the gradation values of pixels surrounding the target pixel;
an added-pixel-count evaluation unit that computes, as an added-pixel-count evaluation value for the target pixel, the number of pixels included in the weighted addition in the temporal-direction weighted addition unit for computing the temporal-direction sum for the target pixel, or the sum of the weights used in that weighted addition; and
a spatio-temporal integration unit that computes a three-dimensional sum for the target pixel by weighted addition of the temporal-direction sum and the spatial-direction sum for the target pixel according to the added-pixel-count evaluation value for the target pixel, and outputs it as the gradation value of the pixel of the composite image at the same position as the target pixel.
- The image processing device of claim 4, wherein the spatio-temporal integration unit performs the weighted addition with a larger weight for the temporal-direction sum and a smaller weight for the spatial-direction sum as the added-pixel-count evaluation value increases.
- The image processing device of claim 1, wherein the synthesis unit comprises:
an undefined-pixel interpolation unit that interpolates the gradation values of those pixels of the aligned images whose gradation value is not defined; and
a temporal-direction weighted addition unit that computes a temporal-direction sum for the target pixel by weighted addition of the gradation value of the target pixel and the gradation values of the pixels, at the same position as the target pixel, of the one or more aligned images whose gradation values have been interpolated by the undefined-pixel interpolation unit.
- The image processing device of claim 1, further comprising:
a gain setting unit that sets a gain for each gradation value based on the gradation characteristics of the standard image and outputs a gain for the target pixel according to the gradation value of the target pixel; and
a gradation conversion unit that computes the gradation value of the pixel, at the same position as the target pixel, of a gradation-converted image by converting, based on the gain for the target pixel, the gradation value of the pixel of the composite image at the same position as the target pixel.
- The image processing device of claim 7, wherein the synthesis unit determines, based on the gain for the target pixel, the number of pixels added in the weighted addition for the target pixel.
- The image processing device of claim 4, further comprising:
a gain setting unit that sets a gain for each gradation value based on the gradation characteristics of the standard image and outputs a gain for the target pixel according to the gradation value of the target pixel; and
a gradation conversion unit that computes the gradation value of the pixel, at the same position as the target pixel, of a gradation-converted image by converting, based on the gain for the target pixel, the gradation value of the pixel of the composite image at the same position as the target pixel; wherein
the temporal-direction weighted addition unit determines, based on the gain for the target pixel, the number of pixels added in its weighted addition for computing the temporal-direction sum for the target pixel;
the spatial-direction weighted addition unit determines, based on the gain for the target pixel, the number of pixels added in its weighted addition for computing the spatial-direction sum for the target pixel; and
the spatio-temporal integration unit determines, based on the added-pixel-count evaluation value for the target pixel and the gain for the target pixel, the weight for the temporal-direction sum and the weight for the spatial-direction sum used in its weighted addition for the target pixel.
- The image processing device of claim 7, 8, or 9, wherein the input images are images obtained by shooting with an exposure time set so that gradation values do not saturate in bright subject areas.
- The image processing device of any one of claims 1 to 10, wherein the input images are images obtained by shooting at different times with a single imaging unit.
- An image processing method that takes one input image among a plurality of input images as a standard image and one or more input images other than the standard image as reference images, and combines and outputs them, the method comprising:
one or more misregistration detection steps, each receiving one of the one or more reference images as input and detecting, for each pixel of the input reference image, a displacement amount relative to the standard image;
one or more alignment image generation steps, each receiving one of the one or more reference images as input and generating, from the input reference image, an aligned image in which gradation values are defined for at least some of the pixels; and
a synthesis step of combining the standard image with the aligned images generated in the one or more alignment image generation steps and outputting a composite image; wherein
each of the alignment image generation steps generates the aligned image by moving each pixel of the input reference image according to the displacement amount for that pixel, defining the gradation value at the pixel position nearest the destination by assigning the pixel's gradation value to it, and leaving the gradation value undefined at those pixel positions of the aligned image to which no pixel in the reference image is assigned; and
the synthesis step takes each pixel of the standard image as a target pixel and computes the gradation value of the pixel of the composite image at the same position as the target pixel by weighted addition of the gradation value of the target pixel and the gradation values of the pixels of the aligned images at the same position as the target pixel, weighted according to whether each pixel of each aligned image has a defined gradation value, or by interpolating the gradation values of undefined pixels before the weighted addition.
- A program for causing a computer to execute the processing in the image processing method of claim 12.
- A computer-readable recording medium on which the program of claim 13 is recorded.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2017512212A JP6377258B2 (ja) | 2015-04-16 | 2016-02-16 | 画像処理装置及び方法、並びにプログラム及び記録媒体 |
US15/559,722 US10269128B2 (en) | 2015-04-16 | 2016-02-16 | Image processing device and method, and recording medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2015083872 | 2015-04-16 | ||
JP2015-083872 | 2015-04-16 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016167014A1 true WO2016167014A1 (ja) | 2016-10-20 |
Family
ID=57126776
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2016/054356 WO2016167014A1 (ja) | 2015-04-16 | 2016-02-16 | 画像処理装置及び方法、並びにプログラム及び記録媒体 |
Country Status (3)
Country | Link |
---|---|
US (1) | US10269128B2 (ja) |
JP (1) | JP6377258B2 (ja) |
WO (1) | WO2016167014A1 (ja) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20220138901A1 (en) * | 2017-08-09 | 2022-05-05 | Beijing Boe Optoelectronics Technology Co., Ltd. | Image display method, image processing method, image processing device, display system and computer-readable storage medium |
US10694112B2 (en) * | 2018-01-03 | 2020-06-23 | Getac Technology Corporation | Vehicular image pickup device and image capturing method |
US10516831B1 (en) * | 2018-07-12 | 2019-12-24 | Getac Technology Corporation | Vehicular image pickup device and image capturing method |
US11995800B2 (en) * | 2018-08-07 | 2024-05-28 | Meta Platforms, Inc. | Artificial intelligence techniques for image enhancement |
US10664960B1 (en) * | 2019-04-15 | 2020-05-26 | Hanwha Techwin Co., Ltd. | Image processing device and method to perform local contrast enhancement |
JP6562492B1 (ja) * | 2019-05-16 | 2019-08-21 | 株式会社モルフォ | 画像処理装置、画像処理方法及びプログラム |
US12094123B2 (en) * | 2022-04-08 | 2024-09-17 | Canon Medical Systems Corporation | Image data processing apparatus and method |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008293185A (ja) * | 2007-05-23 | 2008-12-04 | Olympus Corp | 画像処理装置又は画像処理プログラム |
JP2012019337A (ja) * | 2010-07-07 | 2012-01-26 | Olympus Corp | 画像処理装置及び方法並びにプログラム |
JP2012147087A (ja) * | 2011-01-07 | 2012-08-02 | Nikon Corp | 画像生成装置及び方法 |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6205259B1 (en) * | 1992-04-09 | 2001-03-20 | Olympus Optical Co., Ltd. | Image processing apparatus |
US20060245640A1 (en) * | 2005-04-28 | 2006-11-02 | Szczuka Steven J | Methods and apparatus of image processing using drizzle filtering |
JP4802333B2 (ja) | 2007-02-21 | 2011-10-26 | 国立大学法人静岡大学 | 画像合成における露光比決定方法 |
JP4942563B2 (ja) | 2007-06-22 | 2012-05-30 | 三洋電機株式会社 | 画像処理方法、画像処理装置、及びこの画像処理装置を備えた電子機器 |
US8068700B2 (en) | 2007-05-28 | 2011-11-29 | Sanyo Electric Co., Ltd. | Image processing apparatus, image processing method, and electronic appliance |
JP4606486B2 (ja) | 2007-12-28 | 2011-01-05 | 三洋電機株式会社 | 画像処理装置および撮影装置 |
US8325268B2 (en) | 2007-12-28 | 2012-12-04 | Sanyo Electric Co., Ltd. | Image processing apparatus and photographing apparatus |
JP5146335B2 (ja) * | 2009-01-22 | 2013-02-20 | ソニー株式会社 | 画像処理装置および方法、並びにプログラム |
JP2010219807A (ja) | 2009-03-16 | 2010-09-30 | Panasonic Corp | 画像処理装置および画像処理方法 |
JP2011044846A (ja) | 2009-08-20 | 2011-03-03 | Sanyo Electric Co Ltd | 画像処理装置及び撮像装置 |
JP5400655B2 (ja) * | 2010-02-17 | 2014-01-29 | オリンパス株式会社 | 画像処理装置、画像処理方法、画像処理プログラム、及び、電子機器 |
JP5195973B2 (ja) | 2011-07-19 | 2013-05-15 | 株式会社ニコン | 画像処理装置、電子カメラ、および画像処理プログラム |
JP6097588B2 (ja) | 2013-02-13 | 2017-03-15 | キヤノン株式会社 | 画像処理装置及び画像処理方法 |
JP6455016B2 (ja) * | 2013-08-27 | 2019-01-23 | 株式会社リコー | 画像検査装置、画像形成システム及び画像検査方法 |
JP2016224172A (ja) * | 2015-05-28 | 2016-12-28 | 株式会社リコー | 投影システム、画像処理装置、校正方法およびプログラム |
2016
- 2016-02-16 JP JP2017512212A: patent JP6377258B2 granted (active)
- 2016-02-16 WO PCT/JP2016/054356: application filing
- 2016-02-16 US US15/559,722: patent US10269128B2 granted (active)
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2019139313A (ja) * | 2018-02-06 | 2019-08-22 | 日立オートモティブシステムズ株式会社 | 隊列走行制御装置、隊列走行制御システム、隊列走行制御方法 |
WO2023033396A1 (ko) * | 2021-09-06 | 2023-03-09 | 삼성전자 주식회사 | 연속된 촬영 입력을 처리하는 전자 장치 및 그의 동작 방법 |
CN117934474A (zh) * | 2024-03-22 | 2024-04-26 | 自贡市第一人民医院 | 一种肠胃镜检影像增强处理方法 |
CN117934474B (zh) * | 2024-03-22 | 2024-06-11 | 自贡市第一人民医院 | 一种肠胃镜检影像增强处理方法 |
Also Published As
Publication number | Publication date |
---|---|
US20180047176A1 (en) | 2018-02-15 |
JP6377258B2 (ja) | 2018-08-22 |
US10269128B2 (en) | 2019-04-23 |
JPWO2016167014A1 (ja) | 2017-11-09 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 16779806; Country of ref document: EP; Kind code of ref document: A1 |
| | ENP | Entry into the national phase | Ref document number: 2017512212; Country of ref document: JP; Kind code of ref document: A |
| | WWE | Wipo information: entry into national phase | Ref document number: 15559722; Country of ref document: US |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 16779806; Country of ref document: EP; Kind code of ref document: A1 |