WO2013054446A1 - Image composition device, image composition method, image composition program, and recording medium - Google Patents
Image composition device, image composition method, image composition program, and recording medium
- Publication number
- WO2013054446A1 (PCT/JP2011/073713)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- exposure
- likelihood
- pixel
- unit
- Prior art date
Classifications
- H—ELECTRICITY; H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/743—Bracketing, i.e. taking a series of images with varying exposure conditions
- H04N23/815—Camera processing pipelines; Components thereof for controlling the resolution by using a single image
- H04N23/68—Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
- G—PHYSICS; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T5/73—Deblurring; Sharpening
- G06T5/90—Dynamic range modification of images or parts thereof
- G06T2207/10144—Varying exposure
- G06T2207/20208—High dynamic range [HDR] image processing
Definitions
- the present invention relates to an image composition device, an image composition method, an image composition program, and a recording medium.
- Conventionally, an image synthesizing apparatus that performs high dynamic range (HDR) synthesis is known (see Patent Document 1).
- This apparatus effectively expands the dynamic range of the video signal by synthesizing a plurality of frames captured sequentially under different exposure conditions. This eliminates the "whiteout" and "blackout" regions (parts where the luminance level is extremely high or low) that occur in backlit scenes.
- HDR synthesis is performed after each of the frames is coordinate-converted to compensate for the positional shift that camera shake causes between frames during capture. Specifically, HDR synthesis is applied to the common area of two frames using the motion information of the image, which eliminates the positional deviation (frame blur) of the image sensor relative to the subject.
- an image composition device is a device that generates a composite image using a first image and a second image having different exposure conditions.
- This apparatus includes an input unit, a likelihood calculation unit, an exposure estimation unit, and a synthesis unit.
- the input unit inputs the first image and the second image.
- the likelihood calculating unit calculates the moving subject likelihood in each pixel based on the difference between the first image and the second image.
- the exposure estimation unit estimates an exposure conversion function that matches the exposure conditions of the first image and the second image based on the moving subject likelihood.
- the combining unit combines the first image and the second image using the exposure conversion function.
- in this image composition device, the moving subject likelihood at each pixel is calculated from the difference between the first image and the second image before their exposures are matched, and an exposure conversion function that matches the exposure conditions of the two images is estimated based on that likelihood.
- because the moving subject likelihood is taken into account when adjusting the exposure, the exposure can be adjusted while excluding regions whose color may change due to subject movement. Therefore, an appropriate composite image can be generated.
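- as a rough orientation for the units described above, the following self-contained sketch mimics the flow (input, likelihood calculation, exposure estimation, synthesis); its internals are deliberately simplistic placeholders, not the methods of this disclosure:

```python
# Non-normative sketch of the four-unit flow. Images are float arrays in
# [0, 1]; every numeric choice below is an illustrative assumption.
import numpy as np

def compose_hdr(img_lo, img_hi):
    # Likelihood calculation unit: crude per-pixel difference stands in
    # for the normalized-difference likelihood described in the text.
    likelihood = np.abs(img_lo - img_hi)
    likelihood /= likelihood.max() + 1e-12
    # Exposure estimation unit: a gain-only conversion fitted on pixels
    # with low moving-subject likelihood (the text fits a parametric
    # exposure conversion function instead).
    still = likelihood < 0.2
    if not still.any():
        still[:] = True
    gain = img_hi[still].mean() / (img_lo[still].mean() + 1e-12)
    # Synthesis unit: prefer the exposure-matched low-exposure image
    # where the high-exposure image approaches whiteout.
    w = np.clip((img_hi - 0.85) / 0.1, 0.0, 1.0)
    return (1 - w) * img_hi + w * np.clip(gain * img_lo, 0.0, 1.0)
```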
- the image composition device may further include a normalization unit that normalizes the pixel values of the first image and the second image, and the likelihood calculation unit may calculate the moving subject likelihood at each pixel based on the difference between the normalized first image and second image. With this configuration, the moving subject likelihood at each pixel can be calculated appropriately.
- the likelihood calculating unit may calculate the per-pixel difference at each resolution using a plurality of first processed images obtained by changing the resolution of the first image in stages and a plurality of second processed images obtained by changing the resolution of the second image in stages, and may calculate the moving subject likelihood of each pixel by weighting the differences obtained for each resolution. With this configuration, the moving subject likelihood of each pixel can be calculated with high accuracy.
- the likelihood calculation unit may weight the differences obtained for each resolution based on the reliability of the difference between the first image and the second image and on the image size or resolution of the first processed image or the second processed image.
- the exposure estimation unit may select the sampling points for deriving the exposure conversion function based on the moving subject likelihood of each pixel.
- the exposure estimation unit may determine the weights of the sampling points for deriving the exposure conversion function based on the moving subject likelihood of each pixel.
- with this configuration, weights corresponding to the moving subject likelihood can be set for the sampling points used to derive the exposure conversion function, so the exposure conversion function can be estimated with high accuracy and an appropriate composite image can be generated.
- the exposure estimation unit may set the weight of a sampling point for deriving the exposure conversion function smaller as the moving subject likelihood of the pixel becomes higher.
- the synthesizing unit may calculate a moving subject likelihood at each pixel based on the difference between the first image and the second image, and may combine the first image and the second image using the moving subject likelihood and the exposure conversion function. With this configuration, the synthesis takes the movement of the subject into account, so an appropriate composite image can be generated.
- the synthesizing unit may generate a luminance base mask indicating the synthesis ratio of the pixel values of the first image and the second image based on the magnitude of the original luminance values of the first image or the second image.
- the synthesizing unit may generate a subject blur mask indicating a synthesis ratio of the pixel values of the first image and the second image based on the difference between the first image and the second image.
- the combining unit may combine the luminance base mask and the subject blur mask to generate a combined mask that combines the pixel values of the first image and the second image.
- the synthesizing unit may calculate a moving subject likelihood at each pixel based on the difference between the first image and the second image, and generate a subject blur mask based on the moving subject likelihood.
- a subject blur mask can be generated by specifying a region where subject blur occurs based on the moving subject likelihood.
- the synthesizing unit may calculate the per-pixel difference at each resolution using a plurality of first processed images obtained by changing the resolution of the first image in stages and a plurality of second processed images obtained by changing the resolution of the second image in stages, calculate the moving subject likelihood of each pixel by weighting the differences obtained for each resolution, and generate a subject blur mask based on the moving subject likelihood.
- the synthesizing unit may detect regions in which pixels whose moving subject likelihood is equal to or less than a predetermined threshold are adjacent, assign an identification label to each region, and generate a subject blur mask for each region. With this configuration, even when multiple moving bodies move differently within the image, they can be combined appropriately.
- the synthesis unit may generate, as the subject blur mask, a first mask that forcibly selects the pixel value with the lower luminance from the first image and the second image, or a second mask that forcibly selects the pixel value with the higher luminance from the first image and the second image.
- the synthesis unit may generate the synthesis mask by multiplying the luminance base mask by a mask obtained by inverting the first mask, or by adding the second mask to the luminance base mask.
- the image composition device may further include a motion information acquisition unit that acquires pixel motion information between the first image and the second image.
- the likelihood calculating unit may correct the first image and the second image based on the motion information, and calculate the moving subject likelihood of each pixel using the corrected first image and second image.
- the first image may be an image obtained by combining images having different exposure conditions.
- a final composite image can be generated by sequentially combining a plurality of images having different exposure conditions.
- an image composition method is a method for generating a composite image using a first image and a second image having different exposure conditions.
- a first image and a second image are input.
- the moving subject likelihood at each pixel is calculated based on the difference between the first image and the second image.
- an exposure conversion function that matches the exposure conditions of the first image and the second image is estimated based on the moving subject likelihood.
- the first image and the second image are synthesized using the exposure conversion function.
- an image composition program is a program for operating a computer so as to generate a composite image using a first image and a second image having different exposure conditions.
- This program causes the computer to operate as an input unit, a likelihood calculation unit, an exposure estimation unit, and a synthesis unit.
- the input unit inputs the first image and the second image.
- the likelihood calculating unit calculates the moving subject likelihood in each pixel based on the difference between the first image and the second image.
- the exposure estimation unit estimates an exposure conversion function that matches the exposure conditions of the first image and the second image based on the moving subject likelihood.
- the combining unit combines the first image and the second image using the exposure conversion function.
- a recording medium is a recording medium on which an image composition program for operating a computer to generate a composite image using a first image and a second image having different exposure conditions is recorded.
- This program causes the computer to operate as an input unit, a likelihood calculation unit, an exposure estimation unit, and a synthesis unit.
- the input unit inputs the first image and the second image.
- the likelihood calculating unit calculates the moving subject likelihood in each pixel based on the difference between the first image and the second image.
- the exposure estimation unit estimates an exposure conversion function that matches the exposure conditions of the first image and the second image based on the moving subject likelihood.
- the combining unit combines the first image and the second image using the exposure conversion function.
- according to one aspect of the present invention, an image synthesizing apparatus, an image synthesizing method, an image synthesizing program, and a recording medium storing the program are provided that can generate an appropriate synthesized image even when the subject moves.
- FIG. 1 is a functional block diagram of a portable terminal equipped with the image composition device according to one embodiment.
- FIG. 2 is a hardware configuration diagram of the portable terminal on which the image composition device of FIG. 1 is mounted.
- FIG. 3 is a flowchart showing the preprocessing operation of the image composition device shown in FIG. 1.
- FIG. 4 is a schematic diagram explaining motion detection.
- FIG. 5 is a schematic diagram explaining a difference image.
- FIG. 6 is a schematic diagram explaining an example of deriving the moving subject likelihood using multiple resolutions.
- FIG. 7 is a graph showing an example of fitting an exposure conversion function.
- FIG. 8 is a schematic diagram explaining the exposure conversion function estimation process.
- FIG. 9 is a flowchart illustrating the composition operation of the image composition device shown in FIG. 1.
- FIG. 10 is a schematic diagram explaining the flow of the synthesis process.
- FIG. 11 shows (A) a graph of an example relationship between input luminance and adopted pixel value, and (B) a graph of an example of the weights used when connecting the functions.
- FIG. 12 shows (A) an example of an input image and (B) an example of its luminance base mask.
- FIG. 13 shows (A) an example of a difference image and (B) an example of the labeled difference image.
- FIG. 14 is a schematic diagram explaining the flow of the subject blur mask generation process.
- FIG. 15 is a schematic diagram explaining the flow of the synthesis mask generation process.
- the image composition apparatus is an apparatus that generates a single composite image by combining a plurality of images having different exposure conditions.
- This image synthesizing apparatus is employed, for example, to perform HDR synthesis, in which a plurality of images captured sequentially under different exposure conditions are combined to effectively expand the dynamic range of the video signal.
- the image composition device according to the present embodiment is suitably mounted on a mobile terminal with limited resources, such as a mobile phone, a digital camera, and a PDA (Personal Digital Assistant), but is not limited thereto. For example, it may be mounted on a normal computer system.
- a mobile terminal having a camera function will be described as an example of an image composition device according to the present invention.
- FIG. 1 is a functional block diagram of a portable terminal 2 provided with an image composition device 1 according to this embodiment.
- a mobile terminal 2 shown in FIG. 1 is a mobile terminal carried by a user, for example, and has a hardware configuration shown in FIG.
- FIG. 2 is a hardware configuration diagram of the mobile terminal 2.
- the portable terminal 2 physically comprises a CPU (Central Processing Unit) 100, main storage devices such as a ROM (Read Only Memory) 101 and a RAM (Random Access Memory) 102, an input device 103 such as a camera or keyboard, an output device 104 such as a display, and an auxiliary storage device 105 such as a hard disk, and is configured as an ordinary computer system.
- Each function of the portable terminal 2 and of the image composition device 1 described later is realized by loading predetermined computer software onto hardware such as the CPU 100, the ROM 101, and the RAM 102, operating the input device 103 and the output device 104 under the control of the CPU 100, and reading and writing data in the main storage devices and the auxiliary storage device 105.
- the image composition device 1 itself may likewise be configured as an ordinary computer system comprising the CPU 100, main storage such as the ROM 101 and the RAM 102, the input device 103, the output device 104, and the auxiliary storage device 105.
- the mobile terminal 2 may include a communication module or the like.
- the portable terminal 2 includes a camera 20, an image composition device 1, and a display unit 21.
- the camera 20 has a function of capturing an image.
- a camera with a CMOS image sensor or the like is used as the camera 20.
- the camera 20 has a continuous imaging function that repeatedly captures images at a predetermined interval from a timing specified by a user operation or the like, for example. That is, the camera 20 has a function of acquiring not only one still image but also a plurality of still images (continuous frame images).
- the camera 20 has a function of changing the exposure condition of each successive frame image and taking an image. That is, each image continuously captured by the camera 20 is an image with different exposure conditions.
- the camera 20 has a function of outputting a captured frame image to the image composition device 1 every time it is captured.
- the image composition device 1 includes an image input unit 10, a preprocessing unit 11, a motion correction unit 15, and a composition unit 16.
- the image input unit 10 has a function of inputting a frame image captured by the camera 20.
- the image input unit 10 has a function of inputting, for example, a frame image captured by the camera 20 every time it is captured.
- the image input unit 10 has a function of saving an input frame image in a storage device provided in the mobile terminal 2.
- the preprocessing unit 11 performs preprocessing before HDR synthesis.
- the preprocessing unit 11 includes a motion information acquisition unit 12, a likelihood calculation unit 13, and an exposure estimation unit 14.
- the motion information acquisition unit 12 has a function of acquiring pixel motion information between images. For example, when the input frame images are a first image and a second image, it acquires the pixel motion information between the first image and the second image; a motion vector, for example, is used as the motion information. When three or more input images are input by the image input unit 10, the motion information acquisition unit 12 may sort the input images in order of exposure and acquire motion information between input images having similar exposure conditions. Detecting motion by comparing images with similar exposure conditions avoids the loss of motion detection accuracy caused by exposure differences between images. The motion information acquisition unit 12 may then select, from among the plurality of input images, a reference image to which the motion information is aligned.
- as the reference image, the image having the most effective pixels among the plurality of input images is adopted.
- an effective pixel is a pixel that is neither blacked out nor blown out; blackout and whiteout are determined based on the luminance value.
- when the motion information acquisition unit 12 acquires motion information from two input images, it may extract feature points from the higher-exposure input image and obtain the corresponding points for those feature points from the lower-exposure input image.
- the motion information acquisition unit 12 has a function of outputting motion information to the likelihood calculation unit 13.
- the likelihood calculation unit 13 has a function of calculating the likelihood of subject movement at each pixel (the moving subject likelihood). The larger the moving subject likelihood, the higher the possibility that the subject is moving and that the composite image will blur.
- the likelihood calculating unit 13 first corrects the screen motion between the input images using the motion information. Thereafter, the likelihood calculating unit 13 normalizes the pixel values of corresponding pixels in the two input images. For example, the likelihood calculating unit 13 obtains Local Ternary Patterns (LTP) based on the pixel values of neighboring pixels, using the three RGB values as the pixel values and 24 neighboring pixels as the neighborhood. The likelihood calculating unit 13 then calculates the moving subject likelihood from the difference between the normalized images: for example, the sign mismatch rate in the LTP of a target pixel is calculated as that pixel's moving subject likelihood.
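- a compact sketch of this normalization-and-comparison step is shown below (the 24-pixel neighborhood follows the text; the single-channel input and the ternary threshold t are assumptions):

```python
# Sketch of the LTP-based moving-subject likelihood. Assumes a
# single-channel image; in the text this is applied per RGB channel.
import numpy as np

def ltp_codes(img, t=5):
    """Ternary code (-1/0/+1) of each pixel against its 24 neighbors
    in a 5x5 window, returned with shape (H, W, 24)."""
    H, W = img.shape
    padded = np.pad(img.astype(np.int32), 2, mode="edge")
    codes = []
    for dy in range(-2, 3):
        for dx in range(-2, 3):
            if dy == 0 and dx == 0:
                continue
            neigh = padded[2 + dy:2 + dy + H, 2 + dx:2 + dx + W]
            diff = neigh - img
            codes.append(np.sign(diff) * (np.abs(diff) > t))
    return np.stack(codes, axis=-1)

def moving_subject_likelihood(img1, img2, t=5):
    """Sign mismatch rate of the two images' LTP codes at each pixel."""
    c1, c2 = ltp_codes(img1, t), ltp_codes(img2, t)
    return (c1 != c2).mean(axis=-1)
```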
- the likelihood calculating unit 13 may also calculate the moving subject likelihood using multiple resolutions. For example, the likelihood calculating unit 13 creates a plurality of images with different resolutions (first processed images and second processed images) by changing the resolution of each input image (the first image and the second image) in stages. The likelihood calculating unit 13 then creates a difference image between the first processed image and the second processed image at each resolution; this difference image is the difference between the two processed images, specifically the difference of their pixel values. The likelihood calculation unit 13 calculates the moving subject likelihood of each pixel by weighting the difference images obtained for each resolution, using the sign mismatch rate in the LTP of each pixel as the weight (reliability).
- the weight may additionally be scaled according to the image size or resolution of the first processed image or the second processed image; that is, the larger the image size or resolution, the larger the weight.
- the likelihood calculating unit 13 has a function of outputting the moving subject likelihood of each pixel to the exposure estimating unit 14.
- the exposure estimation unit 14 has a function of estimating an exposure conversion function that matches exposure conditions between input images.
- the exposure conversion function is a function for converting the exposure of each input image to an exposure equivalent to the reference image.
- the exposure estimation unit 14 may match the exposure conditions between input images whose exposure conditions are similar. By comparing images with similar exposure conditions, a loss of estimation accuracy due to large exposure differences between images can be avoided.
- the exposure estimation unit 14 first corrects the motion between the input images using the motion information. Then, in the two motion-corrected input images, pairs of luminance values are sampled from identical locations and their relationship is plotted. For example, a Halton sequence is used to generate the sampling coordinates in the input image. Note that the exposure estimation unit 14 may exclude luminance values above or below predetermined limits from the sampling points; for example, only luminance values in the range of 10 to 245 are adopted. The exposure estimation unit 14 then estimates the exposure conversion function by, for example, fitting the plotted points.
- letting K_i be the luminance value at sampling point i of the first image, U_i the original luminance value at sampling point i of the second image, and f the exposure conversion function, the fitting may be performed by the Gauss-Newton method using, for example, the error function of Equation 1: e = Σ_i ( f(K_i) − U_i )².
- here, the exposure estimation unit 14 performs the sampling for deriving the exposure conversion function based on the moving subject likelihood of each pixel. For example, the exposure estimation unit 14 selects the sampling points based on the moving subject likelihood of each pixel, sampling luminance values from pixels with a small moving subject likelihood by applying stepwise thresholds. Alternatively, the exposure estimation unit 14 may weight the sampling points based on the moving subject likelihood, minimizing for the fitting the error function of Equation 2: e = Σ_i w_i ( f(K_i) − U_i )², where the weight w_i is set smaller as the moving subject likelihood becomes higher.
- because the exposure estimation unit 14 derives the exposure conversion function taking the moving subject likelihood of each pixel into account, sampling points with low reliability can be prevented from affecting the derivation of the exposure conversion function. Note that the exposure conversion function may also be adjusted so that the converted input image falls within a representable range.
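- the weighted fit of Equation 2 can be sketched as follows (the functional form f(x) = a * x^b is assumed purely for illustration, since the formula itself is not reproduced here; luminances are normalized to (0, 1]):

```python
# Gauss-Newton fit of an assumed exposure conversion f(x) = a * x**b,
# minimizing e = sum_i w_i * (f(K_i) - U_i)**2 as in Equation 2.
import numpy as np

def fit_exposure_function(K, U, w, iters=20):
    a, b = 1.0, 1.0                              # initial parameters
    for _ in range(iters):
        f = a * K ** b
        r = f - U                                # residuals
        J = np.stack([K ** b, a * K ** b * np.log(K)], axis=1)
        Jw = J * w[:, None]                      # apply sample weights
        step = np.linalg.solve(Jw.T @ J, -Jw.T @ r)
        a, b = a + step[0], b + step[1]
    return a, b

# Weights shrink as the moving-subject likelihood of the sampled pixel
# rises, e.g. w = 1.0 - likelihood[ys, xs] (illustrative choice).
```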
- the motion correction unit 15 has a function of correcting motion between input screens using motion information.
- the synthesizing unit 16 synthesizes the input images or the already synthesized image and the input image using the synthesis mask.
- the synthesis mask is an image of the combination ratios (weights) used when combining (α-blending) the images.
- when there are three or more images, the synthesizing unit 16 first synthesizes two input images according to a synthesis mask, then generates a synthesis mask between the resulting synthesized image and the next input image and synthesizes them, and so on.
- the combining unit 16 combines the luminance base mask and the subject blur mask to generate a combined mask.
- the luminance base mask is a mask for avoiding the use of an overexposure or underexposure region for synthesis by determining a weight for combining images based on a luminance value.
- the subject blur mask is a mask for avoiding a phenomenon (ghost phenomenon) in which a subject is displayed in a double or triple manner when an image in which the subject moves is combined.
- the synthesizer 16 calculates a weight based on the original luminance value of the input image and generates a luminance base mask.
- the weight is obtained by a predetermined calculation formula chosen so that the weight is determined appropriately and luminance discontinuities are reduced. To reduce spatial discontinuity, the synthesis mask may additionally be subjected to a blurring process.
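- since the weight formula itself is not reproduced above, the following is only an assumed illustration of a luminance base mask, using a piecewise-linear ramp that drives the weight of the higher-exposure image to zero near whiteout:

```python
# Illustrative luminance base mask (assumed ramp, not the patent's formula).
import numpy as np

def luminance_base_mask(high_exposure_img, lo=0.85, hi=0.95):
    """Blend weight for the higher-exposure image from its own luminance
    in [0, 1]: 1 below `lo`, 0 above `hi`, linear in between."""
    return np.clip((hi - high_exposure_img) / (hi - lo), 0.0, 1.0)
```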
- the synthesizing unit 16 calculates a weight based on the difference between the input images and generates a subject blur mask.
- the synthesizer 16 calculates the moving subject likelihood from the difference in pixel values between the input images.
- the pixel value difference between the input images and the moving subject likelihood can be obtained by operating in the same manner as the likelihood calculating unit 13 described above.
- the synthesizing unit 16 detects each subject blur region as a set of adjacent pixels whose moving subject likelihood is equal to or less than a predetermined threshold, assigns an identification label to each subject blur region, and generates a subject blur mask for each region.
- the predetermined threshold can be changed as appropriate according to the required specifications; setting a larger threshold makes it easier to extract continuous regions.
- for each subject blur region, pixel values can then be selected from the image with the larger amount of information while avoiding whiteout and blackout areas. That is, the subject blur mask is either a lo_mask (first mask), which forcibly selects the pixel values with the lower luminance among the images to be combined, or a hi_mask (second mask), which forcibly selects the pixel values with the higher luminance among the images to be combined.
- the synthesizing unit 16 basically generates the second mask, which selects pixel values from the higher-exposure image containing the larger amount of information.
- however, the synthesizing unit 16 generates the first mask when the subject blur region is affected by a whiteout area in the higher-exposure image.
- the first mask is generated when either of the following conditions is satisfied.
- the first condition is that, of the two images to be combined, the whiteout area of the higher-exposure image is larger than the blackout area of the lower-exposure image.
- the second condition is that, in the higher-exposure image of the two images to be combined, the whiteout area occupies 10% or more of the subject blur region. A further condition may be set that, in the higher-exposure image, a region adjacent to the subject blur region is a whiteout region.
- the combining unit 16 combines the luminance base mask and the subject blur mask to generate the synthesis mask. For example, the synthesis unit 16 multiplies the luminance base mask by a mask obtained by inverting the first mask, or adds the second mask to the luminance base mask. The synthesizer 16 synthesizes all the input images in this way and outputs the final composite image to the display unit 21.
- the display unit 21 displays a composite image. For example, a display device is used as the display unit 21.
- FIG. 3 is a flowchart for explaining preprocessing for HDR synthesis.
- the control process illustrated in FIG. 3 starts when the HDR synthesis mode is selected by the user and the camera 20 continuously captures a plurality of images, for example.
- in the process of S10, the image input unit 10 inputs image frames. In the following, for ease of understanding, it is assumed that five input images I0 to I4 are input.
- in the process of S12 (exposure order sorting), the motion information acquisition unit 12 sorts the input images I0 to I4 in order of exposure, using, for example, the average of the luminance values. Here, it is assumed that the smaller the image number, the lower the luminance value, so the input images I0 to I4 are sorted in numerical order.
- FIG. 4 is a schematic diagram illustrating the motion information acquisition process. As shown in FIG. 4, the input images I0 to I4 are arranged so that the average luminance value increases from left to right.
- the motion information acquisition unit 12 sets a reference image from among the input images I0 to I4; here, the input image I2 is taken as the reference image.
- motion information is acquired between input images having similar exposure conditions (for example, between input image I0 and input image I1, between input image I1 and input image I2, and so on).
- the motion information acquisition unit 12 extracts feature points from the higher-exposure image of each pair and extracts the corresponding points from the lower-exposure image. Based on this motion information, a conversion matrix can be obtained that transforms input images with similar exposure conditions into the same coordinate system.
- FIG. 4 shows the conversion matrices m10, m21, m32, and m43, each of which aligns the lower-exposure image of a neighboring pair to the higher-exposure image.
- the conversion matrix for transforming the input image I0 to the reference image I2 is m10*m21, and the conversion matrix for the input image I1 is m21. The conversion matrix for the input image I3 is (m32)^-1, and the conversion matrix for the input image I4 is (m32*m43)^-1.
- in the following, the converted input images are denoted I0' to I4'.
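- the chaining of these alignment matrices can be sketched as follows (a sketch only; it assumes 3x3 homographies applied to points as p' = M @ p, so the composition order is written explicitly rather than following the product notation above):

```python
# Chain pairwise alignment homographies onto the reference image I2.
import numpy as np

def chain_to_reference(m10, m21, m32, m43):
    """Return 3x3 matrices mapping each of I0..I4 into I2's coordinates."""
    return {
        0: m21 @ m10,                 # I0 -> I1 -> I2
        1: m21,                       # I1 -> I2
        2: np.eye(3),                 # reference image, identity
        3: np.linalg.inv(m32),        # invert the I2 -> I3 alignment
        4: np.linalg.inv(m43 @ m32),  # invert the I2 -> I3 -> I4 chain
    }
```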
- next, the likelihood calculating unit 13 calculates the moving subject likelihood between the converted input images I0' to I4'.
- FIG. 5 shows an example of calculating the moving subject likelihood between the input image I0' and the input image I1'.
- FIG. 5 shows a case where the R value is used as the pixel value.
- the likelihood calculating unit 13 normalizes the input image I0' and also normalizes the input image I1' in the same manner.
- that is, the pixels of the input image I1' corresponding to the target pixel of the input image I0' are normalized.
- the difference image X is represented as an image whose pixels shade from black to white according to the magnitude of the difference (the degree of sign mismatch).
- this difference image is an image of the moving subject likelihood of each pixel.
- not only the R value but also the G value and the B value may be processed in the same way.
- the likelihood calculating unit 13 may also obtain the moving subject likelihood using multiple resolutions.
- FIG. 6 shows an example of obtaining the moving subject likelihood using multiple resolutions.
- the likelihood calculating unit 13 generates a plurality of images in which the resolutions of the input image I0' and the input image I1' are changed stepwise, and generates a difference image at each resolution. These difference images are obtained by simply subtracting pixel values.
- FIG. 6 shows a case where the input image I0' and the input image I1' are converted into six resolution levels.
- the difference images are X1 to X6; the larger the number, the lower the resolution.
- each difference image is weighted with a reliability, and a final difference image is calculated.
- as the reliability, for example, a value obtained by multiplying the number of neighbor pairs having a significant difference in the LTP described above by the image size (or resolution) is used. For example, in the LTP shown in FIG. 5, the number of pairs having a significant difference is 1.
- the weight images corresponding to the difference images X1 to X6 are calculated by multiplying, for each pixel, the number of such pairs by the image size.
- the final difference image is calculated using the difference images X1 to X6 and the weight images.
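- a rough sketch of this multi-resolution combination follows; the pyramid uses simple 2x2 averaging, image dimensions are assumed divisible by 2^levels, and the per-level weight is simplified to the image size alone (the text additionally multiplies in the count of significant LTP pairs):

```python
# Weighted multi-resolution difference, a simplified sketch.
import numpy as np

def downsample(img):
    """Halve resolution by 2x2 block averaging."""
    return (img[::2, ::2] + img[1::2, ::2] + img[::2, 1::2] + img[1::2, 1::2]) / 4.0

def upsample_to(img, shape):
    """Nearest-neighbor upsample back to full resolution."""
    ry, rx = shape[0] // img.shape[0], shape[1] // img.shape[1]
    return np.repeat(np.repeat(img, ry, axis=0), rx, axis=1)

def multires_difference(img1, img2, levels=6):
    a, b = img1.astype(np.float64), img2.astype(np.float64)
    shape = a.shape
    num = np.zeros(shape)
    den = np.zeros(shape)
    for _ in range(levels):
        diff = np.abs(a - b)                 # difference image X_k
        w = np.full(a.shape, float(a.size))  # reliability weight (simplified)
        num += upsample_to(w * diff, shape)
        den += upsample_to(w, shape)
        a, b = downsample(a), downsample(b)
    return num / np.maximum(den, 1e-12)      # final weighted difference
```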
- the likelihood calculating unit 13 calculates the difference images between the remaining input images I1' to I4' by the same method.
- in the process of S18, the exposure estimation unit 14 estimates the exposure conversion functions.
- the exposure estimation unit 14 expresses the exposure conversion function as a formula in the luminance value x before conversion and the luminance value y after conversion, with exposure conversion parameters (a, b); deriving the exposure conversion parameters (a, b) yields the exposure conversion function.
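- the formula itself is not reproduced in this text; as a purely assumed illustration, a two-parameter conversion could take a form such as the following:

```python
# Assumed illustrative exposure conversion with parameters (a, b);
# not the formula of this disclosure.
def exposure_convert(x, a, b):
    """Map a pre-conversion luminance x in [0, 1] to a post-conversion
    luminance y."""
    return a * x ** b
```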
- the exposure estimation unit 14 samples several pairs of luminance values at corresponding points of the low-exposure input image I0' and the input image I1', and plots the relationship between each pair (x, y).
- the points to be sampled are selected based on the difference image acquired in the process of S16.
- for example, sampling is set so that no samples are taken from regions where the moving subject likelihood is high.
- that is, samples are taken from pixels with a low moving subject likelihood.
- alternatively, a lower weight is assigned to a sampling point as its moving subject likelihood becomes higher, and the exposure conversion function is estimated using Equation 2.
- fitting as shown in FIG. 7 is then performed.
- the exposure estimation unit 14 estimates the exposure conversion functions between the remaining input images I1' to I4' by the same method. Note that data with luminance values close to 0 or close to 255 may be excluded.
- FIG. 8 is a schematic diagram illustrating the exposure conversion function estimation process.
- it shows the exposure conversion parameters (a10, b10), (a21, b21), (a32, b32), and (a43, b43), each of which matches the lower-exposure image of a pair of input images with similar exposure conditions to the higher-exposure image.
- for example, by setting the parameter A0 of the exposure conversion parameters (A0, B0) of the lowest-exposure input image I0' to 1.0, the conversion results can be kept from exceeding 1.0 so that the final composite image remains representable.
- the image after exposure conversion of the input image I0' is denoted I0''.
- alternatively, where the exposure conversion parameters of the reference image I2' relative to the lowest-exposure input image I0' are (A2, B2), instead of setting A0 to 1.0, B2 may be set to 1.0.
- further, the colors may be set to be equal to those of the input image at a gain of 1/A2.
- the above-described processing is performed separately for each of the R, G, and B channels. When the process of S18 ends, the preprocessing shown in FIG. 3 ends.
- FIG. 9 is a flowchart for explaining HDR synthesis.
- the control process shown in FIG. 9 starts when the control process shown in FIG. 3 ends.
- in the process of S20, the motion correction unit 15 performs the actual motion correction.
- the motion correction unit 15 corrects the motion of the exposure-converted input images I0'' to I4'' using the conversion matrices. A sub-pixel interpolation algorithm or the like may be used according to the required accuracy.
- when the process of S20 ends, the process proceeds to the luminance base mask generation process and the subject blur region extraction process (S22 and S24).
- FIG. 10 is a schematic diagram illustrating the flow of the synthesis process.
- the images are synthesized by sequentially overlaying the input images I1'' to I4'' onto the low-exposure input image I0''. That is, first, a luminance base mask is generated that determines to what extent the input image I1'' is combined with the input image I0''.
- this luminance base mask calculates its weights from the original luminance values of the input image I1''. For example, the weight near a whiteout region is set to zero.
- FIG. 11(A) is a graph showing the relationship between the input luminance and the adopted pixel value.
- the functions f0 to f3 show which image's pixel value is adopted based on the luminance value.
- the functions f0 to f3 are applied to images of higher exposure as the suffix increases. For example, when only the lowest-exposure input image I0'' has been input, the function f0 is applied and all of its pixel values are adopted.
- when the input image I1'' is combined, the function f0 and the function f1 are applied.
- the input image I1'' is adopted in the luminance range S0 to S5, and the input image I0'' is adopted in the luminance range of S6 and above.
- in the luminance range S5 to S6, a composite value blended with the weights shown in FIG. 11(B) is adopted. For convenience, gamma correction is omitted here.
- when the input image I2'' is further combined, the functions f0 to f2 are applied.
- the input image I2'' is adopted in the luminance range S0 to S3, the input image I1'' in the range S4 to S5, and the input image I0'' in the range of S6 and above.
- in the luminance ranges S3 to S4 and S5 to S6, composite values blended with the weights shown in FIG. 11(B) are adopted.
- when the input image I3'' is further combined, the functions f0 to f3 are applied.
- the input image I3'' is adopted in the luminance range S0 to S1, the input image I2'' in the range S2 to S3, the input image I1'' in the range S4 to S5, and the input image I0'' in the range of S6 and above.
- in the luminance ranges S1 to S2, S3 to S4, and S5 to S6, composite values blended with the weights shown in FIG. 11(B) are adopted. In this way, images with higher exposure are adopted preferentially, lower-exposure images are adopted for the overexposed portions, and the boundaries are blended smoothly.
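- for the two-image case described above, this selection rule can be sketched as follows (the thresholds S5 and S6 and the linear transition are assumptions for illustration):

```python
# Sketch of the FIG. 11-style adoption rule for two images: adopt the
# higher-exposure image below S5, the lower-exposure image above S6,
# and blend linearly in between.
import numpy as np

def blend_two(lum, img_lo_exp, img_hi_exp, s5, s6):
    """`lum` is the luminance (in [0, 1]) driving the mask."""
    alpha = np.clip((lum - s5) / (s6 - s5), 0.0, 1.0)  # 0 -> hi-exposure
    return (1.0 - alpha) * img_hi_exp + alpha * img_lo_exp
```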
- FIG. 12 shows an example of a luminance base mask corresponding to the graph shown in FIG. 11. FIG. 12(A) shows an input image, and FIG. 12(B) shows the luminance base mask of that input image.
- in the mask, a pixel is shown in white where 100% of the input image's pixel value is used, and in black where none of the input image's pixel value is used.
- in the process of S24, the synthesizer 16 extracts the subject blur regions.
- the synthesizing unit 16 calculates a difference image in the same manner as in the process of S16 of FIG. 3, and extracts regions where the moving subject likelihood is a predetermined value or more as subject blur regions.
- FIG. 13(A) is an example of a difference image including subject blur regions.
- next, the synthesizer 16 labels the subject blur regions.
- the combining unit 16 assigns one label R_n to each continuous subject blur region.
- FIG. 13(B) shows an example in which the continuous regions have been labeled.
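- the labeling step can be sketched with a standard connected-component pass (the use of SciPy's 4-connected labeling and the thresholding direction are assumptions):

```python
# Sketch of labeling continuous subject-blur regions R_1 .. R_n.
import numpy as np
from scipy import ndimage

def label_blur_regions(likelihood, threshold):
    blur = likelihood >= threshold           # subject-blur candidate pixels
    labels, n_regions = ndimage.label(blur)  # connected-component labels
    return labels, n_regions
```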
- the combining unit 16 then sets a reference image for each subject blur region.
- the synthesizing unit 16 basically gives priority to the higher-exposure image as the reference image. For example, when the input image I0'' and the input image I1'' are combined, the input image I1'' is selected as the reference image. However, when the subject blur region is affected by a whiteout area in the input image I1'', the input image I0'' is selected as the reference image.
- in the process of S30, the subject blur masks are generated.
- FIG. 14 is a schematic diagram illustrating the series of processes from S24 to S30. As shown in FIG. 14, when the input image I0'' and the input image I1'' are combined, a difference image X is obtained, and a first mask (lo_mask) or a second mask (hi_mask) is generated for each region of the difference image. That is, for regions where the subject moves, the subject blur mask causes the pixel values to be taken from only one image, which avoids the ghost phenomenon described above.
- in the process of S32, the composition unit 16 generates a synthesis mask based on the luminance base mask and the subject blur masks.
- FIG. 15 is a schematic diagram explaining the synthesis mask generation process. As shown in FIG. 15, the luminance base mask is multiplied by an image obtained by inverting lo_mask, and hi_mask is added to the luminance base mask. Combining the masks in this way yields the synthesis mask.
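- a minimal sketch of this combination, with all masks taken as float arrays in [0, 1] (an assumption; the text does not fix a value range):

```python
# Combine the luminance base mask with the subject blur masks (S32 sketch).
import numpy as np

def build_synthesis_mask(lum_mask, lo_mask=None, hi_mask=None):
    mask = lum_mask.copy()
    if lo_mask is not None:
        mask *= (1.0 - lo_mask)                   # inverted lo_mask, multiplied
    if hi_mask is not None:
        mask = np.clip(mask + hi_mask, 0.0, 1.0)  # hi_mask, added
    return mask
```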
- when the process of S32 ends, the flow proceeds to the composition process (S34).
- in the process of S34, the synthesis unit 16 performs the synthesis according to the synthesis mask created in the process of S32.
- for example, the combined luminance value P2 is obtained by α-blending the luminance values of the two images with the mask value as the blend ratio. The lowest-exposure image is combined over its entire area as it is.
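- the α blend implied by the synthesis mask can be written as follows (the exact formula is not reproduced in this text; a standard per-pixel alpha blend is assumed):

```python
# Standard alpha blend used as a stand-in for the combination formula.
def alpha_blend(base, overlay, mask):
    """P2 = (1 - mask) * base + mask * overlay, per pixel."""
    return (1.0 - mask) * base + mask * overlay
```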
- an HDR composite image in which subject blur is corrected is generated.
- the image composition program includes a main module, an input module, and an arithmetic processing module.
- the main module is a part that comprehensively controls image processing.
- the input module operates the mobile terminal 2 so as to acquire an input image.
- the arithmetic processing module includes a motion information acquisition module, a likelihood calculation module, an exposure estimation module, a motion correction module, and a synthesis module.
- the functions realized by executing the main module, the input module, and the arithmetic processing module are the same as those of the image input unit 10, the motion information acquisition unit 12, the likelihood calculation unit 13, the exposure estimation unit 14, the motion correction unit 15, and the synthesis unit 16 of the image composition device 1 described above.
- the image composition program is provided by a recording medium such as a ROM or a semiconductor memory, for example.
- the image composition program may be provided as a data signal via a network.
- as described above, according to the image synthesizing apparatus 1 of the present embodiment, the likelihood of subject movement at each pixel is calculated based on the difference between the first image and the second image before the exposures of the two images are matched, and an exposure conversion function that matches the exposure conditions of the first image and the second image is estimated based on that likelihood.
- therefore, the exposure can be adjusted while excluding regions whose color may change due to subject movement, and an appropriate composite image can be generated.
- the use of the subject blur mask prevents the occurrence of subject blur (ghost-like display), and a clear image can be obtained.
- the above-described embodiment shows an example of an image composition device according to the present invention.
- the image synthesizing apparatus according to the present invention is not limited to the image synthesizing apparatus 1 according to the embodiment; it may be modified, or applied to other uses, without changing the gist described in each claim.
- for example, although the above description assumes that the camera 20 acquires the frame images, images transmitted from another device via a network may be used instead. When the composite image is only recorded and not displayed, the display unit 21 need not be provided.
- the image composition device 1 according to each embodiment described above may be operated together with the camera shake correction device.
- DESCRIPTION OF SYMBOLS: 1... image composition device, 10... image input unit (input unit), 12... motion information acquisition unit, 13... likelihood calculation unit, 14... exposure estimation unit, 15... motion correction unit, 16... synthesis unit.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Studio Devices (AREA)
- Image Processing (AREA)
Claims (19)
- 1. An image composition device for generating a composite image using a first image and a second image having different exposure conditions, comprising: an input unit that inputs the first image and the second image; a likelihood calculation unit that calculates a moving subject likelihood at each pixel based on a difference between the first image and the second image; an exposure estimation unit that estimates, based on the moving subject likelihood, an exposure conversion function for matching the exposure conditions of the first image and the second image; and a synthesis unit that synthesizes the first image and the second image using the exposure conversion function.
- 2. The image composition device according to claim 1, wherein the likelihood calculation unit normalizes pixel values of the first image and the second image and calculates the moving subject likelihood at each pixel based on a difference between the normalized first image and second image.
- 3. The image composition device according to claim 1 or 2, wherein the likelihood calculation unit calculates a per-pixel difference at each resolution using a plurality of first processed images obtained by changing the resolution of the first image in stages and a plurality of second processed images obtained by changing the resolution of the second image in stages, and calculates the moving subject likelihood of each pixel by weighting the differences obtained for each resolution.
- 4. The image composition device according to claim 3, wherein the likelihood calculation unit weights the differences obtained for each resolution based on a reliability of the difference between the first image and the second image and on an image size or resolution of the first processed image or the second processed image.
- 5. The image composition device according to any one of claims 1 to 4, wherein the exposure estimation unit selects sampling points for deriving the exposure conversion function based on the moving subject likelihood of each pixel.
- 6. The image composition device according to any one of claims 1 to 5, wherein the exposure estimation unit determines weights of sampling points for deriving the exposure conversion function based on the moving subject likelihood of each pixel.
- 7. The image composition device according to claim 6, wherein the exposure estimation unit sets the weight of a sampling point for deriving the exposure conversion function smaller as the moving subject likelihood of the pixel becomes higher.
- 8. The image composition device according to any one of claims 1 to 7, wherein the synthesis unit calculates a moving subject likelihood at each pixel based on the difference between the first image and the second image, and synthesizes the first image and the second image using the moving subject likelihood and the exposure conversion function.
- 9. The image composition device according to any one of claims 1 to 8, wherein the synthesis unit generates a luminance base mask indicating a synthesis ratio of the pixel values of the first image and the second image based on the magnitude of the original luminance values of the first image or the second image, generates a subject blur mask indicating a synthesis ratio of the pixel values of the first image and the second image based on the difference between the first image and the second image, and combines the luminance base mask and the subject blur mask to generate a synthesis mask for combining the pixel values of the first image and the second image.
- 10. The image composition device according to claim 9, wherein the synthesis unit calculates a moving subject likelihood at each pixel based on the difference between the first image and the second image, and generates the subject blur mask based on the moving subject likelihood.
- 11. The image composition device according to claim 10, wherein the synthesis unit calculates a per-pixel difference at each resolution using a plurality of first processed images obtained by changing the resolution of the first image in stages and a plurality of second processed images obtained by changing the resolution of the second image in stages, calculates the moving subject likelihood of each pixel by weighting the differences obtained for each resolution, and generates the subject blur mask based on the moving subject likelihood.
- 12. The image composition device according to claim 11, wherein the synthesis unit detects regions in which pixels whose moving subject likelihood is equal to or less than a predetermined threshold are adjacent, assigns an identification label to each region, and generates the subject blur mask for each region.
- 13. The image composition device according to any one of claims 9 to 12, wherein the synthesis unit generates, as the subject blur mask, a first mask that forcibly selects the pixel value with the lower luminance from the first image and the second image, or a second mask that forcibly selects the pixel value with the higher luminance from the first image and the second image.
- 14. The image composition device according to claim 13, wherein the synthesis unit generates the synthesis mask by multiplying the luminance base mask by a mask obtained by inverting the first mask, or by adding the second mask to the luminance base mask.
- 15. The image composition device according to any one of claims 1 to 14, further comprising a motion information acquisition unit that acquires pixel motion information between the first image and the second image, wherein the likelihood calculation unit corrects the first image and the second image based on the motion information and calculates the moving subject likelihood of each pixel using the corrected first image and second image.
- 16. The image composition device according to any one of claims 1 to 15, wherein the first image is an image obtained by combining images having different exposure conditions.
- 17. An image composition method for generating a composite image using a first image and a second image having different exposure conditions, comprising: inputting the first image and the second image; calculating a moving subject likelihood at each pixel based on a difference between the first image and the second image; estimating, based on the moving subject likelihood, an exposure conversion function for matching the exposure conditions of the first image and the second image; and synthesizing the first image and the second image using the exposure conversion function.
- 18. An image composition program for causing a computer to operate so as to generate a composite image using a first image and a second image having different exposure conditions, the program causing the computer to operate as: an input unit that inputs the first image and the second image; a likelihood calculation unit that calculates a moving subject likelihood at each pixel based on a difference between the first image and the second image; an exposure estimation unit that estimates, based on the moving subject likelihood, an exposure conversion function for matching the exposure conditions of the first image and the second image; and a synthesis unit that synthesizes the first image and the second image using the exposure conversion function.
- 19. A recording medium on which is recorded an image composition program for causing a computer to operate so as to generate a composite image using a first image and a second image having different exposure conditions, the program causing the computer to operate as: an input unit that inputs the first image and the second image; a likelihood calculation unit that calculates a moving subject likelihood at each pixel based on a difference between the first image and the second image; an exposure estimation unit that estimates, based on the moving subject likelihood, an exposure conversion function for matching the exposure conditions of the first image and the second image; and a synthesis unit that synthesizes the first image and the second image using the exposure conversion function.
Priority Applications (12)

Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201180004309.2A CN103168462B (zh) | 2011-10-14 | 2011-10-14 | Image composition device and image composition method |
JP2012558101A JP5365823B2 (ja) | 2011-10-14 | 2011-10-14 | Image composition device, image composition method, image composition program, and recording medium |
EP11874110.7A EP2688288B1 (en) | 2011-10-14 | 2011-10-14 | Image compositing device, image compositing method, image compositing program, and recording medium |
US13/824,889 US9129414B2 (en) | 2011-10-14 | 2011-10-14 | Image compositing apparatus, image compositing method, image compositing program, and recording medium |
PCT/JP2011/073713 WO2013054446A1 (ja) | 2011-10-14 | 2011-10-14 | Image composition device, image composition method, image composition program, and recording medium |
EP12840445.6A EP2693738B1 (en) | 2011-10-14 | 2012-08-31 | Image processing device, image processing method, image processing program, and recording medium |
US14/114,786 US9626750B2 (en) | 2011-10-14 | 2012-08-31 | Image processing method for a composite image and image processing device, image processing program, and recording medium therefor |
CN201280019344.6A CN103493473B (zh) | 2011-10-14 | 2012-08-31 | Image processing device, image processing method, image processing program, and recording medium |
KR1020147012928A KR101609491B1 (ko) | 2011-10-14 | 2012-08-31 | Image composition device, image composition method, and recording medium |
JP2013538473A JP5720034B2 (ja) | 2011-10-14 | 2012-08-31 | Image processing device, image processing method, image processing program, and recording medium |
KR1020167008285A KR101643122B1 (ko) | 2011-10-14 | 2012-08-31 | Image processing device, image processing method, and recording medium |
PCT/JP2012/072242 WO2013054607A1 (ja) | 2011-10-14 | 2012-08-31 | Image processing device, image processing method, image processing program, and recording medium |

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2011/073713 WO2013054446A1 (ja) | 2011-10-14 | 2011-10-14 | Image composition device, image composition method, image composition program, and recording medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013054446A1 (ja) | 2013-04-18 |
Family
ID=48081519
Family Applications (2)

Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2011/073713 WO2013054446A1 (ja) | 2011-10-14 | 2011-10-14 | Image composition device, image composition method, image composition program, and recording medium |
PCT/JP2012/072242 WO2013054607A1 (ja) | 2011-10-14 | 2012-08-31 | Image processing device, image processing method, image processing program, and recording medium |

Family Applications After (1)

Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2012/072242 WO2013054607A1 (ja) | 2011-10-14 | 2012-08-31 | Image processing device, image processing method, image processing program, and recording medium |
Country Status (6)
Country | Link |
---|---|
US (2) | US9129414B2 (ja) |
EP (2) | EP2688288B1 (ja) |
JP (1) | JP5365823B2 (ja) |
KR (2) | KR101609491B1 (ja) |
CN (2) | CN103168462B (ja) |
WO (2) | WO2013054446A1 (ja) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9508173B2 * | 2013-10-30 | 2016-11-29 | Morpho, Inc. | Image processing device having depth map generating unit, image processing method and non-transitory computer readable recording medium |
KR20190101427A * | 2017-06-16 | 2019-08-30 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Image synthesis method and apparatus, and non-volatile computer-readable medium |
Families Citing this family (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101896026B1 (ko) * | 2011-11-08 | 2018-09-07 | Samsung Electronics Co., Ltd. | Apparatus and method for generating motion blur in a portable terminal |
JP5713885B2 (ja) * | 2011-12-26 | 2015-05-07 | Canon Inc. | Image processing apparatus, image processing method, program, and storage medium |
US9576341B2 (en) * | 2013-10-30 | 2017-02-21 | Ricoh Imaging Company, Ltd. | Image-processing system, imaging apparatus and image-processing method |
US9210335B2 (en) * | 2014-03-19 | 2015-12-08 | Konica Minolta Laboratory U.S.A., Inc. | Method for generating HDR images using modified weight |
JP6598479B2 (ja) * | 2014-03-31 | 2019-10-30 | Canon Inc. | Image processing apparatus, control method therefor, and control program |
JP5847228B2 (ja) * | 2014-04-16 | 2016-01-20 | Olympus Corporation | Image processing device, image processing method, and image processing program |
WO2015196122A1 (en) * | 2014-06-19 | 2015-12-23 | Contentguard Holdings, Inc. | Rendering content using obscuration techniques |
CN105472263B (zh) * | 2014-09-12 | 2018-07-13 | Altek Semiconductor Corp. | Image capture method and image capture device using the same |
US20160232672A1 (en) * | 2015-02-06 | 2016-08-11 | Qualcomm Incorporated | Detecting motion regions in a scene using ambient-flash-ambient images |
JP6485204B2 (ja) * | 2015-05-14 | 2019-03-20 | Ricoh Imaging Company, Ltd. | Imaging device, image file generation device, image file processing device, and program |
JP6648914B2 (ja) * | 2015-06-16 | 2020-02-14 | Canon Inc. | Image processing device, image processing method, and program |
US9819873B2 (en) * | 2015-06-25 | 2017-11-14 | Canon Kabushiki Kaisha | Image-processing apparatus and image-processing method |
CN105187730B (zh) * | 2015-07-31 | 2018-08-07 | Shanghai Zhaoxin Integrated Circuit Co., Ltd. | High dynamic range image generation method and device using the same |
US10122936B2 (en) * | 2015-09-02 | 2018-11-06 | Mediatek Inc. | Dynamic noise reduction for high dynamic range in digital imaging |
US10755395B2 (en) * | 2015-11-27 | 2020-08-25 | Canon Medical Systems Corporation | Dynamic image denoising using a sparse representation |
EP3236418B1 (en) * | 2016-04-13 | 2020-10-28 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method, and storage medium |
JP6877109B2 (ja) * | 2016-04-13 | 2021-05-26 | Canon Inc. | Image processing apparatus, image processing method, and program |
CN109417592B (zh) * | 2016-07-01 | 2020-12-29 | 麦克赛尔株式会社 | 拍摄装置、拍摄方法及拍摄程序 |
JP6715463B2 (ja) * | 2016-09-30 | 2020-07-01 | パナソニックIpマネジメント株式会社 | 画像生成装置、画像生成方法、プログラムおよび記録媒体 |
JP2019028537A (ja) * | 2017-07-26 | 2019-02-21 | キヤノン株式会社 | 画像処理装置および画像処理方法 |
KR102138483B1 (ko) | 2017-10-31 | 2020-07-27 | 가부시키가이샤 모르포 | 화상 합성 장치, 화상 합성 방법, 화상 합성 프로그램 및 기억 매체 |
JP7009252B2 (ja) * | 2018-02-20 | 2022-01-25 | キヤノン株式会社 | 画像処理装置、画像処理方法およびプログラム |
JP7098980B2 (ja) * | 2018-03-16 | 2022-07-12 | 株式会社リコー | 撮像装置、画像処理装置および画像処理方法 |
KR102628911B1 (ko) * | 2019-01-07 | 2024-01-24 | 삼성전자주식회사 | 영상 처리 방법 및 이를 수행하는 영상 처리 장치 |
KR102648747B1 (ko) | 2019-01-18 | 2024-03-20 | 삼성전자주식회사 | Hdr 이미지를 생성하기 위한 이미징 시스템 및 그것의 동작 방법 |
KR102649032B1 (ko) * | 2019-04-26 | 2024-03-20 | 엘지전자 주식회사 | 자율주행 차량의 영상 처리 방법 |
US11430134B2 (en) * | 2019-09-03 | 2022-08-30 | Nvidia Corporation | Hardware-based optical flow acceleration |
CN113469200A (zh) | 2020-03-30 | 2021-10-01 | Alibaba Group Holding Ltd. | Data processing method and system, storage medium, and computing device |
CN112233127B (zh) * | 2020-10-15 | 2022-09-16 | Shanghai Guimu Robot Co., Ltd. | Downsampling method for stitched images of curved roads |
CN113409201B (zh) * | 2021-06-01 | 2024-03-19 | Ping An Technology (Shenzhen) Co., Ltd. | Image enhancement processing method, apparatus, device, and medium |
Family Cites Families (33)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE3927888A1 (de) | 1989-08-24 | 1991-02-28 | Philips Patentverwaltung | Inverter arrangement |
JPH10191136A (ja) | 1996-12-27 | 1998-07-21 | Canon Inc | Imaging device and image synthesis device |
US6370270B1 (en) * | 1999-01-06 | 2002-04-09 | National Instruments Corporation | System and method for sampling and/or placing objects using low discrepancy sequences |
KR20010103394A (ko) * | 2000-05-10 | 2001-11-23 | 박정관 | Customer information management system and method using ID card recognition technology |
JP3731729B2 (ja) | 2000-12-20 | 2006-01-05 | Matsushita Electric Industrial Co., Ltd. | Image synthesis circuit |
US7492375B2 (en) | 2003-11-14 | 2009-02-17 | Microsoft Corporation | High dynamic range image viewing on low dynamic range displays |
US20050117799A1 (en) | 2003-12-01 | 2005-06-02 | Chiou-Shann Fuh | Method and apparatus for transforming a high dynamic range image into a low dynamic range image |
US7463296B2 (en) * | 2004-04-01 | 2008-12-09 | Microsoft Corporation | Digital cameras with luminance correction |
KR101086402B1 (ko) * | 2004-08-30 | 2011-11-25 | Samsung Electronics Co., Ltd. | Image segmentation method |
EP1797722B1 (en) * | 2004-10-05 | 2019-05-29 | Vectormax Corporation | Adaptive overlapped block matching for accurate motion compensation |
JP2006148550A (ja) | 2004-11-19 | 2006-06-08 | Konica Minolta Opto Inc | Image processing device and imaging device |
CN101627408B (zh) * | 2007-03-13 | 2012-08-22 | Olympus Corp. | Image signal processing device and image signal processing method |
JP5133085B2 (ja) | 2007-04-18 | 2013-01-30 | Panasonic Corp. | Imaging device and imaging method |
JP4851392B2 (ja) | 2007-05-30 | 2012-01-11 | Fujitsu Ltd. | Image synthesis device, image synthesis method, and image synthesis program |
JP4933354B2 (ja) * | 2007-06-08 | 2012-05-16 | Canon Inc. | Information processing apparatus and information processing method |
KR101442610B1 (ko) * | 2008-02-18 | 2014-09-19 | Samsung Electronics Co., Ltd. | Digital photographing apparatus, control method thereof, and recording medium storing a program for executing the control method |
JP2009276956A (ja) * | 2008-05-14 | 2009-11-26 | Fujifilm Corp | Image processing apparatus, method, and program |
JP2010045510A (ja) | 2008-08-11 | 2010-02-25 | Panasonic Corp | Image processing device and image processing method |
KR101534317B1 (ko) * | 2008-10-10 | 2015-07-06 | Samsung Electronics Co., Ltd. | Method and apparatus for generating a high dynamic range (HDR) image |
US20100091119A1 (en) | 2008-10-10 | 2010-04-15 | Lee Kang-Eui | Method and apparatus for creating high dynamic range image |
US8339475B2 (en) * | 2008-12-19 | 2012-12-25 | Qualcomm Incorporated | High dynamic range image combining |
US8040258B2 (en) * | 2009-04-07 | 2011-10-18 | Honeywell International Inc. | Enhanced situational awareness system and method |
US8390698B2 (en) * | 2009-04-08 | 2013-03-05 | Panasonic Corporation | Image capturing apparatus, reproduction apparatus, image capturing method, and reproduction method |
KR101551690B1 (ko) * | 2009-06-26 | 2015-09-09 | Samsung Electronics Co., Ltd. | Digital photographing apparatus, control method thereof, and recording medium storing a program for executing the method |
KR101614914B1 (ко) * | 2009-07-23 | 2016-04-25 | Samsung Electronics Co., Ltd. | Motion-adaptive high-contrast image acquisition apparatus and method |
US8737755B2 (en) * | 2009-12-22 | 2014-05-27 | Apple Inc. | Method for creating high dynamic range image |
US8606009B2 (en) * | 2010-02-04 | 2013-12-10 | Microsoft Corporation | High dynamic range image generation and rendering |
JP5445235B2 (ja) | 2010-03-09 | 2014-03-19 | Sony Corp. | Image processing device, image processing method, and program |
US8711248B2 (en) * | 2011-02-25 | 2014-04-29 | Microsoft Corporation | Global alignment for high-dynamic range image generation |
JP2013066142A (ja) * | 2011-08-31 | 2013-04-11 | Sony Corp | Image processing device, image processing method, and program |
JP5832855B2 (ja) * | 2011-11-01 | 2015-12-16 | Clarion Co., Ltd. | Image processing device, imaging device, and image processing program |
JP5633550B2 (ja) * | 2012-09-05 | 2014-12-03 | Casio Computer Co., Ltd. | Image processing device, image processing method, and program |
JP5364887B2 (ja) | 2013-03-04 | 2013-12-11 | Morpho, Inc. | Image synthesis device, image synthesis method, image synthesis program, and recording medium |
2011
- 2011-10-14 US US13/824,889 patent/US9129414B2/en active Active
- 2011-10-14 WO PCT/JP2011/073713 patent/WO2013054446A1/ja active Application Filing
- 2011-10-14 EP EP11874110.7A patent/EP2688288B1/en not_active Not-in-force
- 2011-10-14 CN CN201180004309.2A patent/CN103168462B/zh active Active
- 2011-10-14 JP JP2012558101A patent/JP5365823B2/ja active Active
2012
- 2012-08-31 KR KR1020147012928A patent/KR101609491B1/ko active Application Filing
- 2012-08-31 CN CN201280019344.6A patent/CN103493473B/zh active Active
- 2012-08-31 WO PCT/JP2012/072242 patent/WO2013054607A1/ja active Application Filing
- 2012-08-31 KR KR1020167008285A patent/KR101643122B1/ko active IP Right Grant
- 2012-08-31 EP EP12840445.6A patent/EP2693738B1/en active Active
- 2012-08-31 US US14/114,786 patent/US9626750B2/en active Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3110797B2 (ja) | 1991-06-21 | 2000-11-20 | Canon Inc. | Imaging method and imaging screen synthesis device |
JP2005065119A (ja) * | 2003-08-19 | 2005-03-10 | Hitachi Ltd | Imaging apparatus and method |
JP2005130054A (ja) * | 2003-10-21 | 2005-05-19 | Sharp Corp | Imaging device and driving method thereof |
JP2007221423A (ja) * | 2006-02-16 | 2007-08-30 | Matsushita Electric Ind Co Ltd | Imaging device |
JP2010258885A (ja) * | 2009-04-27 | 2010-11-11 | Ricoh Co Ltd | Imaging device and image processing method |
JP2011171842A (ja) * | 2010-02-16 | 2011-09-01 | Olympus Corp | Image processing device and image processing program |
Non-Patent Citations (1)
Title |
---|
See also references of EP2688288A4 |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9508173B2 (en) | 2013-10-30 | 2016-11-29 | Morpho, Inc. | Image processing device having depth map generating unit, image processing method and non-transitory computer readable recording medium |
KR20190101427A (ko) * | 2017-06-16 | 2019-08-30 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Image synthesis method, device, and non-volatile computer-readable medium |
KR102184308B1 (ko) * | 2017-06-16 | 2020-12-01 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Image synthesis method, device, and non-volatile computer-readable medium |
Also Published As
Publication number | Publication date |
---|---|
US9129414B2 (en) | 2015-09-08 |
EP2688288A1 (en) | 2014-01-22 |
WO2013054607A1 (ja) | 2013-04-18 |
CN103493473B (zh) | 2018-01-16 |
CN103168462B (zh) | 2016-09-28 |
US20140212065A1 (en) | 2014-07-31 |
US9626750B2 (en) | 2017-04-18 |
JPWO2013054446A1 (ja) | 2015-03-30 |
CN103168462A (zh) | 2013-06-19 |
CN103493473A (zh) | 2014-01-01 |
KR101643122B1 (ko) | 2016-07-26 |
EP2688288A4 (en) | 2015-04-08 |
EP2693738B1 (en) | 2019-04-17 |
US20140079333A1 (en) | 2014-03-20 |
EP2688288B1 (en) | 2018-05-16 |
KR101609491B1 (ko) | 2016-04-05 |
EP2693738A4 (en) | 2015-10-14 |
EP2693738A1 (en) | 2014-02-05 |
KR20140093237A (ko) | 2014-07-25 |
KR20160039701A (ko) | 2016-04-11 |
JP5365823B2 (ja) | 2013-12-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5365823B2 (ja) | Image synthesis device, image synthesis method, image synthesis program, and recording medium | |
JP5610245B2 (ja) | Image synthesis device, image synthesis method, image synthesis program, and recording medium | |
CN108668093B (zh) | HDR image generation method and device | |
US9344638B2 (en) | Constant bracket high dynamic range (cHDR) operations | |
JP3880553B2 (ja) | Image processing method and apparatus | |
US20120008005A1 (en) | Image processing apparatus, image processing method, and computer-readable recording medium having image processing program recorded thereon | |
JP5896788B2 (ja) | Image synthesis device and image synthesis method | |
US9055217B2 (en) | Image compositing apparatus, image compositing method and program recording device | |
JP5767485B2 (ja) | Image processing device and control method | |
JP6326180B1 (ja) | Image processing device | |
US20110299795A1 (en) | Image processing system, image processing method, and image processing program | |
JP2012208553A (ja) | Image processing device, image processing method, and program | |
JP2010244360A (ja) | Image processing device, image processing method, and computer program | |
JP2018036960A (ja) | Image similarity calculation device, image processing device, image processing method, and recording medium | |
JP7133979B2 (ja) | Image processing device, image processing method, image processing program, and storage medium | |
JP2008072604A (ja) | Image processing system, apparatus, media, and program | |
CN114979500B (zh) | Image processing method, image processing apparatus, electronic device, and readable storage medium | |
JP2012109849A (ja) | Imaging device | |
JP5364887B2 (ja) | Image synthesis device, image synthesis method, image synthesis program, and recording medium | |
JP4544064B2 (ja) | Imaging device, imaging method, and program | |
JP5720034B2 (ja) | Image processing device, image processing method, image processing program, and recording medium | |
JP5645704B2 (ja) | Image processing device and control method therefor | |
JP2018190201A (ja) | Image processing device, image processing method, and program | |
JP2021071906A (ja) | Image processing device, image processing method, and program | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase | Ref document number: 2012558101; Country of ref document: JP; Kind code of ref document: A |
WWE | WIPO information: entry into national phase | Ref document number: 13824889; Country of ref document: US |
121 | Ep: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 11874110; Country of ref document: EP; Kind code of ref document: A1 |
WWE | WIPO information: entry into national phase | Ref document number: 2011874110; Country of ref document: EP |
NENP | Non-entry into the national phase | Ref country code: DE |