WO2012164881A1 - Image processing device and image processing method - Google Patents
Image processing device and image processing method
- Publication number: WO2012164881A1 (PCT/JP2012/003398)
- Authority: WIPO (PCT)
Classifications
- G01C11/06: Photogrammetry or videogrammetry; interpretation of pictures by comparison of two or more pictures of the same area
- G01C3/32: Measuring distances in line of sight; optical rangefinders by focusing the object, e.g. on a ground glass screen
- G02B7/38: Systems for automatic generation of focusing signals using image sharpness techniques, measured at different points on the optical axis, e.g. focusing on two or more planes and comparing image data
- G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T5/73: Deblurring; sharpening
- G06T7/20: Image analysis; analysis of motion
- H04N23/6811: Control of cameras for stable pick-up of the scene; motion detection based on the image signal
- H04N23/683: Vibration or motion blur correction performed by a processor, e.g. controlling the readout of an image memory
- H04N23/741: Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
- G06T2207/20201: Indexing scheme; motion blur correction
Definitions
- The present invention relates to an image processing apparatus and an image processing method for performing image processing, such as subject distance measurement and HDR image generation, using a plurality of captured images captured in a plurality of shooting states.
- There is a demand for obtaining, at the same time as the photographed image of a subject, the subject distance that indicates the distance from the camera to the subject.
- As an application using the subject distance, for example, an image viewed from a different viewpoint can be generated from a single photographed image and its subject distance, based on the principle of triangulation, which makes it possible to generate a stereo image or a three-dimensional image corresponding to multiple viewpoints. Further, if the captured image is segmented based on the subject distance, it is possible to cut out only a subject existing at a specific subject distance, or to adjust the image quality for that subject.
- The main methods of measuring the subject distance without contact can be broadly divided into the following two.
- The first is an active method in which the subject is irradiated with infrared rays, ultrasonic waves, a laser, or the like, and the subject distance is measured from the time until the reflected wave returns and from the angle of the reflected wave.
- Although this method can measure with high accuracy when the subject distance is short, it requires an active irradiation and light-receiving device that a normal camera does not need. Further, when the subject is far away and the output level of the irradiation device is low, the irradiation light reaching the subject becomes weak, and the measurement accuracy of the subject distance decreases.
- The second is a passive method in which the subject distance is measured using only the images captured by the camera. One such passive method is DFD (Depth from Defocus).
- The amount of blur occurring in a captured image is uniquely determined for each camera in accordance with the relationship between the focus state (lens focus state) at the time of shooting and the subject distance.
- DFD uses this characteristic: the relationship between the subject distance and the correlation value of the blur amounts generated in the photographed images is measured in advance by photographing a subject at a known distance in different focus states. Then, when actual shooting is performed in a plurality of focus states, the subject distance can be measured by calculating the correlation value of the blur amount between the images (see, for example, Non-Patent Document 1).
- As a method for capturing images in a plurality of focus states, there is a method, described in Non-Patent Document 2, in which the light incident on the camera is split into a plurality of beams that are then received by image sensors arranged at different distances. With this method, multiple images with different focus states can be captured at the same time, so there is no shift in shooting timing between the images; however, there is a problem that a special configuration is required, with multiple image sensors placed at different distances.
- As a technique for photographing images in a plurality of focus states with a camera configuration using a single-plate image sensor, there is a method in which a focus adjustment mechanism, realized by autofocus or the like, is controlled to capture a plurality of images in sequence. In this case, however, the shooting timings of the captured images differ, so if the subject moves or the shooting direction of the camera changes, the subject is displaced between the images.
- DFD assumes that the subject and the camera do not move, that is, that the subject is not displaced between the captured images, and compares the correlation values of the blur amount for the same subject between the images captured in the plurality of focus states. For this reason, in the measurement of the subject distance by DFD using a plurality of photographed images, there is a problem that if the subject is misaligned, this comparison cannot be performed accurately and the measurement accuracy of the subject distance decreases.
- The present invention has been made in view of the above problems, and an object of the present invention is to provide an image processing apparatus and an image processing method that can perform image processing stably and with high accuracy even when the subject is misaligned between a plurality of photographed images having different shooting states.
- An image processing apparatus according to one aspect of the present invention is an image processing apparatus that measures a subject distance from a plurality of captured images obtained by capturing the same subject in a plurality of focus states, the apparatus comprising: a target motion amount estimation unit that estimates a target motion amount representing the amount of positional deviation of the subject between a first image captured in a first focus state among the plurality of captured images and a second image captured in a second focus state different from the first focus state; a corrected image generation unit that generates a corrected image in which the second image is motion-compensated based on the target motion amount; and a subject distance measurement unit that measures the subject distance in the first image based on a correlation value of the blur amount between the first image and the corrected image.
- According to the present invention, the subject distance can be measured stably and with high accuracy even when the subject is displaced between a plurality of photographed images having different shooting states.
- FIG. 1 is a block diagram showing a configuration example of an image processing apparatus according to Embodiments 1, 2, and 3 of the present invention.
- FIG. 2 is a block diagram illustrating a configuration example of the target motion amount estimation unit according to Embodiment 1 of the present invention.
- FIG. 3 is a flowchart showing an example of the processing flow of the image processing method according to Embodiment 1 of the present invention.
- FIG. 4 is a diagram illustrating a relationship between a plurality of captured images used in Embodiment 1 of the present invention, a target motion amount, and a first motion amount.
- FIG. 5A is an explanatory diagram illustrating an example of a relationship between a search source image and a target block in block matching processing.
- FIG. 5B is an explanatory diagram illustrating an example of a relationship between a search destination image and a search area in the block matching process.
- FIG. 6 is a block diagram illustrating a configuration example of the target motion amount estimation unit according to the second embodiment of the present invention.
- FIG. 7 is a flowchart showing an example of the processing flow of the image processing method according to Embodiment 2 of the present invention.
- FIG. 8 is a diagram illustrating a relationship among a plurality of captured images used in the second embodiment of the present invention, a target motion amount, a first motion amount, and a second motion amount.
- FIG. 9 is a block diagram illustrating a configuration example of the target motion amount estimation unit according to the third embodiment of the present invention.
- FIG. 10 is a flowchart showing an example of the processing flow of the image processing method according to Embodiment 3 of the present invention.
- FIG. 11 is a diagram illustrating a relationship among a plurality of captured images used in Embodiment 3 of the present invention, a target motion amount, a first motion amount, a second motion amount, and a third motion amount.
- FIG. 12 is a vector diagram showing the relationship of the motion amount estimated between the three captured images in Embodiment 3 of the present invention.
- FIG. 13 is a block diagram illustrating a configuration example of the image processing apparatus according to the fourth embodiment of the present invention.
- FIG. 14 is a flowchart showing an example of the processing flow of the image processing method according to Embodiment 4 of the present invention.
- FIG. 15 is a diagram illustrating a relationship between pixel positions used for subject distance compensation processing according to Embodiment 4 of the present invention.
- FIG. 16 is a block diagram illustrating a configuration example of the image processing apparatus according to the fifth embodiment of the present invention.
- FIG. 17 is a flowchart showing the flow of processing according to the fifth embodiment of the present invention.
- FIG. 18 is a diagram illustrating a relationship between a plurality of captured images used in the fifth embodiment of the present invention, a target motion amount, and a first motion amount.
- FIG. 19 is an external view showing an example of a camera equipped with the image processing apparatus of the present invention.
- An image processing apparatus according to one aspect of the present invention is an image processing apparatus that measures a subject distance from a plurality of captured images obtained by photographing the same subject in a plurality of focus states, the apparatus comprising: a target motion amount estimation unit that estimates a target motion amount representing the amount of positional deviation of the subject between a first image photographed in a first focus state among the plurality of captured images and a second image photographed in a second focus state different from the first focus state; a corrected image generation unit that generates a corrected image in which the second image is motion-compensated based on the target motion amount; and a subject distance measurement unit that measures the subject distance in the first image based on a correlation value of the blur amount between the first image and the corrected image.
- According to this configuration, a corrected image in which the positional deviation of the subject relative to the first image is eliminated by motion compensation, or reduced to the extent that the subject distance can be calculated, is generated, and the subject distance is calculated using the first image and the corrected image; the subject distance can therefore be measured with high accuracy even if the subject is misaligned between the plurality of captured images. In other words, even when a positional shift of the subject occurs between captured images having different focus states, the corrected image is generated by motion compensation so that the subject position matches that of the first image. Only the focus state then differs between the first image and the corrected image, or the degree of positional deviation is very small, so the subject distance can be measured well by DFD.
- Note that the subject here refers to the entire photographed image, including not only a person but also the background.
- Further, the image processing apparatus may receive the first image and a third image captured in the first focus state at a timing different from the first image, and the target motion amount estimation unit may include a first motion amount estimation unit that estimates a first motion amount representing the amount of positional deviation of the subject between the first image and the third image, and a target motion amount determination unit that estimates the target motion amount using the first motion amount.
- According to this configuration, the target motion amount is derived from the highly accurate first motion amount obtained between the first image and the third image, which have the same focus state. The target motion amount can therefore be estimated with high accuracy even between captured images having different focus states, and as a result the subject distance can be measured with high accuracy.
- Further, the target motion amount determination unit may estimate the target motion amount by setting its direction to that of the first motion amount and scaling its magnitude according to the ratio of the shooting time interval between the first image and the second image to the shooting time interval between the first image and the third image.
- This configuration makes it possible to obtain the target motion amount even when the shooting intervals of the first image, the second image, and the third image are not equal.
- In practice, the shooting time interval between the first image and the second image often differs from that between the second image and the third image, so this is useful.
- Further, the target motion amount estimation unit may further include a second motion amount estimation unit that estimates a second motion amount representing the amount of positional deviation between the first image and the second image, and the target motion amount determination unit may estimate the target motion amount using the first motion amount and the second motion amount.
- Further, the target motion amount determination unit may determine the accuracy of the second motion amount based on the difference in pixel value between the pixel for which the target motion amount is calculated, among the pixels constituting the second image, and the corresponding pixel on the first image; when the accuracy of the second motion amount is determined to be higher than a threshold, the second motion amount is estimated as the target motion amount, and when it is determined to be lower than the threshold, the target motion amount is estimated using the first motion amount.
- According to this configuration, the second motion amount obtained directly between the first image and the second image is used when its accuracy is sufficient. Since a directly obtained motion amount is considered to be accurate, the target motion amount can be estimated with higher accuracy.
- Further, the target motion amount estimation unit may further include a third motion amount estimation unit that estimates a third motion amount representing the amount of positional deviation between the second image and the third image, and the target motion amount determination unit may estimate the target motion amount using the third motion amount in addition to the first motion amount and the second motion amount.
- Further, the target motion amount determination unit may estimate the second motion amount as the target motion amount when the sum of the second motion amount and the third motion amount is equal to the first motion amount. When the sum is not equal to the first motion amount, it determines the accuracy of the second motion amount based on the difference in pixel value between the pixel of the second image for which the target motion amount is calculated and the corresponding pixel on the first image, and determines the accuracy of the third motion amount based on the difference in pixel value between that pixel of the second image and the corresponding pixel on the third image. When the accuracy of the second motion amount is determined to be higher than a threshold, the second motion amount is estimated as the target motion amount; when it is determined to be lower than the threshold, the motion amount obtained by subtracting the third motion amount from the first motion amount is estimated as the target motion amount.
- According to this configuration, even when the second motion amount cannot be obtained directly with sufficient accuracy, the target motion amount can be estimated using the highly accurate motion amounts obtained with respect to the third image.
- Note that the "subtraction" of the third motion amount from the first motion amount is a subtraction in the vector sense.
- Further, the image processing apparatus may include a blur region determination unit that determines, based on the target motion amount, a region of the first image in which blur occurs as a blur region, and a subject distance compensation unit that, for each of the pixels constituting the blur region, determines the subject distance of the first image using the subject distance of a non-blur region (a region in which blur does not occur) or the subject distance of another captured image for which the subject distance has been obtained in advance.
- In this case, the subject distance measurement unit may obtain the subject distance based on a correlation value of the blur amount between the first image and the corrected image for each of the pixels constituting the non-blur region.
- This configuration makes it possible to measure the subject distance with high accuracy even when blur occurs in the captured image.
- Here, blur refers to motion blur that occurs in a captured image when the movement of the subject or the change in the shooting direction is fast and the position of the subject changes greatly during the exposure time.
- Since conventional image processing methods do not consider these influences, they have the problem that the measurement accuracy of the subject distance decreases.
- With the above configuration, the subject distance is measured in regions that are not affected by blur, so that the subject distance can be measured with high accuracy; a sketch of such a region determination follows below.
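- A minimal sketch of such a blur region determination, assuming a block is classified as blurred when the magnitude of its motion amount exceeds a threshold (the patent's actual criterion may differ; all names are illustrative):

```python
import numpy as np

def classify_blur_blocks(block_motions, threshold=8.0):
    """Split block positions into blur / non-blur regions based on the
    magnitude of each block's estimated motion amount.
    block_motions: dict mapping block position to its (dx, dy) vector."""
    blur, non_blur = [], []
    for pos, (dx, dy) in block_motions.items():
        (blur if np.hypot(dx, dy) > threshold else non_blur).append(pos)
    return blur, non_blur
```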
- An image processing apparatus according to another aspect of the present invention is an image processing apparatus that performs image processing using a plurality of captured images obtained by capturing the same subject in a plurality of shooting states, the apparatus comprising: a target motion amount estimation unit that estimates a target motion amount representing the amount of positional deviation of the subject between a first image captured in a first shooting state among the plurality of captured images and a second image captured in a second shooting state different from the first shooting state; a corrected image generation unit that generates a corrected image in which the second image is motion-compensated based on the target motion amount; and an image processing unit that performs image processing using the first image and the corrected image.
- According to this configuration, a corrected image in which the positional deviation of the subject relative to the first image is eliminated is generated by motion compensation, and image processing is performed using the first image and the corrected image. Image processing can thus be performed using a plurality of captured images that differ only in the shooting state, preventing a decrease in the accuracy of the image processing.
- Further, the image processing apparatus may accept the first image taken in a first exposure state and the second image taken in a second exposure state, and the image processing unit may, as the image processing, combine the first image and the corrected image to generate a combined image having a wide dynamic range.
- According to this configuration, a corrected image in which the positional deviation of the subject relative to the first image is eliminated is generated by motion compensation, and an HDR (High Dynamic Range) image is generated using the first image and the corrected image, so the HDR image can be generated with high accuracy even if the subject is misaligned between the plurality of captured images.
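- A minimal sketch of such a combination, assuming the first image is the shorter exposure and the corrected image is a motion-compensated longer exposure; the gain value, saturation threshold, and weighting are assumptions, not the patent's specified algorithm:

```python
import numpy as np

def fuse_hdr(first_img, corrected_img, gain=4.0, sat=0.95):
    """Blend a short-exposure image with a motion-compensated image
    exposed `gain` times longer; inputs are floats scaled to [0, 1].
    Returns a radiance estimate with a wider dynamic range."""
    rad_short = first_img            # short exposure: valid in highlights
    rad_long = corrected_img / gain  # long exposure: valid in shadows
    # Trust the long exposure except where it is (nearly) saturated.
    w = np.clip((sat - corrected_img) / sat, 0.0, 1.0)
    return w * rad_long + (1.0 - w) * rad_short
```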
- An image processing method according to one aspect of the present invention is an image processing method for measuring a subject distance from a plurality of photographed images obtained by photographing the same subject in a plurality of focus states, the method including: a target motion amount estimation step of estimating a target motion amount that represents the amount of positional deviation of the subject between a first image photographed in a first focus state among the plurality of photographed images and a second image photographed in a second focus state different from the first focus state; a corrected image generation step of generating a corrected image in which the second image is motion-compensated based on the target motion amount; and a subject distance measurement step of measuring the subject distance in the first image based on a correlation value of the blur amount between the first image and the corrected image.
- An image processing method according to another aspect of the present invention is an image processing method for performing image processing using a plurality of captured images obtained by capturing the same subject in a plurality of shooting states, the method including: a target motion amount estimation step of estimating a target motion amount that represents the amount of positional deviation of the subject between a first image captured in a first shooting state among the plurality of captured images and a second image captured in a second shooting state different from the first shooting state; a corrected image generation step of generating a corrected image in which the second image is motion-compensated based on the target motion amount; and an image processing step of performing image processing using the first image and the corrected image.
- Embodiment 1. An image processing apparatus according to Embodiment 1 of the present invention will be described with reference to FIGS. 1 to 5B.
- The image processing apparatus according to the present embodiment measures the subject distance by DFD from a plurality of captured images captured in a plurality of shooting states; a case where it is mounted on an imaging apparatus capable of capturing moving images will be described as an example.
- The shooting state includes the focus state, the exposure state, the ISO sensitivity, and the like. Since this image processing apparatus measures the subject distance using DFD, a case where the shooting state is the focus state will be described as an example.
- FIG. 19 is an external view showing an example of the video camera 200.
- The video camera 200 performs shooting while alternately switching, at certain time intervals during moving image shooting, between a near view focus (corresponding to the first focus state in the present embodiment) and a distant view focus (corresponding to the second focus state in the present embodiment).
- The video camera 200 alternately outputs to the image processing apparatus distant view photographed images captured with the distant view focus and near view photographed images captured with the near view focus.
- Although the image processing apparatus will be described here as being mounted on the video camera 200, it may instead be mounted on a device other than the video camera 200 that can capture moving images (for example, a mobile phone), or provided in another device that can acquire captured images from the imaging apparatus.
- The near view focus here indicates a shooting state in which the camera is focused on the position closest to the camera within the range over which the video camera 200 can focus. The distant view focus indicates a shooting state in which the camera is focused on the position (infinity) farthest from the camera.
- FIG. 1 is a block diagram illustrating a configuration example of the image processing apparatus 100.
- The image processing apparatus 100 is an apparatus that measures the subject distance from a plurality of photographed images taken with the distant view focus and the near view focus. As illustrated in FIG. 1, it includes a target motion amount estimation unit (10A in the present embodiment), a corrected image generation unit 20, and a subject distance measurement unit 30.
- The image processing apparatus 100 is configured to alternately and continuously acquire distant view photographed images, in which the subject is photographed with the distant view focus, and near view photographed images, in which the same subject is photographed with the near view focus at a timing different from that of the distant view photographed images.
- In the following, the photographed image that is the target of subject distance calculation is referred to as the first image, the photographed image captured immediately before the first image as the second image, and the photographed image captured immediately before the second image as the third image. The focus states of the first image and the third image are the same.
- Here, a case where a given near view photographed image is the target of subject distance calculation (the near view focus is set as the first focus state) will be described as an example; even when a distant view photographed image is the target (the distant view focus is set as the first focus state), the subject distance can be calculated in the same way. Focus states other than the near view focus and the distant view focus may also be used.
- Among the images captured by the video camera 200, the target motion amount estimation unit 10A estimates, as the target motion amount, the amount of positional deviation of the subject occurring between the first image captured with the near view focus and the second image captured with the distant view focus, and outputs it to the corrected image generation unit 20.
- The target motion amount is a vector quantity, defined by the direction and the magnitude of the displacement.
- FIG. 2 is a block diagram illustrating a configuration example of the target motion amount estimation unit 10A. As illustrated in FIG. 2, the target motion amount estimation unit 10A includes a first motion amount estimation unit 11A and a motion amount determination unit 12A.
- The first motion amount estimation unit 11A receives the first image and the third image, which are captured in the same focus state, estimates the amount of positional deviation of the subject occurring between them as the first motion amount, and outputs it to the motion amount determination unit 12A.
- Like the target motion amount, the first motion amount is a vector quantity defined by the direction and the magnitude of the displacement.
- The motion amount determination unit 12A estimates, as the target motion amount, the amount of positional deviation of the subject occurring between the first image and the second image, based on the first motion amount.
- The corrected image generation unit 20 performs motion compensation on the second image based on the target motion amount, generates a corrected image with no positional deviation of the subject relative to the first image, and outputs it to the subject distance measurement unit 30.
- The subject distance measurement unit 30 measures the subject distance by DFD based on the correlation value of the blur amount between the first image and the corrected image.
- When a subject is photographed, the frequency information I(u, v) of the captured image is expressed as Expression 1 below.
- The variables u and v represent frequency components in the two-dimensional Fourier space. S(u, v) represents the frequency information of the omnifocal image that would be obtained if shooting were performed with zero blur. OTF(u, v, d) depends on the focus state at the time of shooting and represents the optical transfer function of the optical system when photographing a subject at distance d from the camera.
- That is, the frequency information I(u, v) of the captured image is represented by the product of the frequency information S(u, v) of the omnifocal image and the transfer function OTF(u, v, d) of the optical system at the time of shooting.
- When two images of the same subject are photographed in different focus states, their frequency information I_1(u, v) and I_2(u, v) is expressed by Expressions 2 and 3 below, assuming that the subject is not displaced between the two captured images.
- Since OTF_1(u, v, d) and OTF_2(u, v, d) represent the characteristics of the optical system when the respective images are taken, their characteristics for each distance d can be measured and retained in advance. Therefore, if two images I_1(u, v) and I_2(u, v) with different focus states are captured, the subject distance can be measured by finding the distance d for which the right side of Expression 4 matches the left side.
- In practice, as in Expression 5, the absolute value e(d) of the difference between the left side and the right side of Expression 4 is computed with the distance d as a parameter, and the distance d that minimizes e(d) may be estimated as the subject distance.
- By configuring the measurement in this way, the subject distance can be measured even when the characteristics of the photographed images do not completely match the characteristics of the optical system measured in advance.
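- The expressions referenced above appear only as images in the original publication. The following is a plausible LaTeX reconstruction from the surrounding definitions, using the standard DFD formulation (the exact form of Expressions 4 and 5 in the patent may differ):

```latex
I(u,v)   = S(u,v)\,\mathrm{OTF}(u,v,d)      % Expression 1
I_1(u,v) = S(u,v)\,\mathrm{OTF}_1(u,v,d)    % Expression 2
I_2(u,v) = S(u,v)\,\mathrm{OTF}_2(u,v,d)    % Expression 3
I_1(u,v)\,\mathrm{OTF}_2(u,v,d) = I_2(u,v)\,\mathrm{OTF}_1(u,v,d)  % Expression 4
e(d) = \sum_{u,v} \left| I_1(u,v)\,\mathrm{OTF}_2(u,v,d) - I_2(u,v)\,\mathrm{OTF}_1(u,v,d) \right|  % Expression 5
```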
- FIG. 3 is a flowchart showing the processing procedure of the image processing method in the present embodiment, and FIG. 4 is an explanatory diagram showing the relationship among the captured images, the target motion amount, and the first motion amount in the present embodiment.
- First, the video camera 200 (imaging apparatus) shown in FIG. 19 captures images in a plurality of focus states and outputs them to the image processing apparatus 100 (step S101).
- Specifically, the video camera 200 alternately repeats shooting with the near view focus and shooting with the distant view focus, capturing near view and distant view photographed images.
- Here, the video camera 200 is assumed to shoot in the same direction from the same position, that is, the plurality of photographed images capture the same subject.
- In this example, the subject is a person at a short distance from the camera together with the background.
- However, the present invention is not limited to this; the composition of the subject is not limited to a person and its background.
- Note that this step S101 is not an essential step of the present invention, but is described here as part of a more preferable form; any configuration may be used as long as the image processing apparatus 100 can acquire captured images in a plurality of shooting states. The processing of steps S102 to S105 described below may be executed in parallel with the shooting by the video camera 200, or after shooting.
- If the subject and the camera do not move, the position of the subject does not shift between the images, and the subject distance can be measured by DFD processing based on Expression 5 using the first image and the second image as they are.
- However, if the subject is displaced, the omnifocal image S(u, v) differs between Expression 2 and Expression 3, Expression 4 does not hold, and the subject distance cannot be measured.
- Therefore, in the present embodiment, after a motion amount corresponding to the positional shift between the images is estimated, a corrected image is generated by performing motion compensation on the subject.
- Hereinafter, the motion amount between the first image and the second image is referred to as the target motion amount.
- As shown in FIG. 3, the image processing apparatus 100 first estimates, in the first motion amount estimation unit 11A of the target motion amount estimation unit 10A, the motion amount between the first image and the third image, which is captured in the same near view focus state as the first image, as the first motion amount (step S102).
- Since this motion amount estimation is performed between images captured in the same focus state, in which only the position of the subject differs, a highly accurate estimation result can be obtained.
- Here, a method of estimating the first motion amount between the first image and the third image will be described with reference to FIGS. 5A and 5B, taking as an example the case where the block matching method is used for motion amount estimation.
- The block matching method estimates the amount of motion between images for each block region: the region having the highest correlation with the image of a block region set in one image (hereinafter, the search source image) is identified in the other image (hereinafter, the search destination image), and the amount of motion is estimated from the displacement between the two.
- FIG. 5A is an explanatory diagram illustrating an example of a relationship between a search source image and a target block
- FIG. 5B is an explanatory diagram illustrating an example of a relationship between a search destination image and a search area.
- The first motion amount estimation unit 11A first sets a target block composed of a plurality of pixels in the search source image (that is, the first image). The size of the target block can be set arbitrarily, such as 8 × 8 pixels or 16 × 16 pixels. In general, the first motion amount estimation unit 11A divides the search source image into a plurality of block regions of the same size and sequentially sets each of them as the target block.
- Next, the first motion amount estimation unit 11A sets a search area in the search destination image (that is, the third image). The search area is the area within which the region having the highest correlation with the target block in the search source image is searched for, and is larger than the target block. The search area is preferably set at a position close to the position of the target block in the search source image.
- Next, the first motion amount estimation unit 11A cuts out, from the search area of the search destination image, a block region (search block) of the same size as the target block in the search source image, and calculates an evaluation value r_{x,y} representing the correlation between the two blocks based on Expression 6 below.
- Here, x and y are coordinates indicating the position of the search block in the search destination image, taking the pixel at the upper left corner of the search destination image shown in FIG. 5B as coordinates (0, 0).
- (i, j) is the relative coordinate position, within a block region, of the pixels constituting the block region (the target block and the search block), taking the pixel at the upper left corner of the block region as coordinates (0, 0).
- f(i, j) is the pixel value of a pixel constituting the target block set in the search source image, and g_{x,y}(i, j) is the pixel value of the corresponding pixel of the search block cut out from the search destination image.
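- Expression 6 appears only as an image in the original publication; given these definitions it is plausibly a sum of absolute differences, r_{x,y} = Σ_{i,j} |f(i, j) − g_{x,y}(i, j)|, where a smaller value indicates a higher correlation. A minimal Python sketch of block matching under that assumption (function and parameter names are illustrative, not from the patent):

```python
import numpy as np

def estimate_motion(src, dst, block=16, search=24):
    """Exhaustive block matching with an SAD evaluation value.
    src: search source image (e.g. the first image), dst: search
    destination image (e.g. the third image); 2-D arrays of equal size.
    Returns {block position: (dx, dy, r_min)}."""
    h, w = src.shape
    motions = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            target = src[by:by + block, bx:bx + block].astype(np.int64)
            best_r, best_xy = None, (bx, by)
            # Scan the search area around the target block position.
            for y in range(max(0, by - search), min(h - block, by + search) + 1):
                for x in range(max(0, bx - search), min(w - block, bx + search) + 1):
                    cand = dst[y:y + block, x:x + block].astype(np.int64)
                    r = np.abs(target - cand).sum()  # evaluation value r_{x,y}
                    if best_r is None or r < best_r:
                        best_r, best_xy = r, (x, y)
            # Motion vector of this block: displacement minimizing r.
            motions[(bx, by)] = (best_xy[0] - bx, best_xy[1] - by, best_r)
    return motions
```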
- Next, the motion amount determination unit 12A of the target motion amount estimation unit 10A estimates the target motion amount, that is, the motion amount between the first image and the second image, based on the first motion amount (step S103).
- Specifically, the target motion amount is determined to have the same displacement direction as the first motion amount and half its magnitude. This is based on the characteristic that the movement of the subject and the change in the shooting direction are nearly constant over a short time: since the shooting time interval between the first image and the second image is half the shooting time interval between the first image and the third image, the amount of motion between them is also approximately half.
- If the shooting time intervals are not equal, the target motion amount may be determined by correcting the magnitude of the first motion amount according to the ratio of the shooting time intervals, as follows.
- Magnitude of the target motion amount = magnitude of the first motion amount × ((shooting time interval between the first image and the second image) / (shooting time interval between the first image and the third image))
- In the present embodiment, magnitude of the target motion amount = magnitude of the first motion amount × 0.5
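- A sketch of this scaling step, representing motion amounts as (dx, dy) vectors (an assumed representation):

```python
def target_motion_from_first(first_motion, t12, t13):
    """Estimate the target motion amount (first image -> second image)
    from the first motion amount (first image -> third image) by
    scaling with the ratio of shooting time intervals t12 / t13."""
    ratio = t12 / t13  # equals 0.5 when the shooting intervals are equal
    return (first_motion[0] * ratio, first_motion[1] * ratio)
```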
- Upon receiving the target motion amount from the target motion amount estimation unit 10A, the corrected image generation unit 20 performs motion compensation on the second image based on the received target motion amount and generates a corrected image (step S104).
- If the target motion amount has been estimated correctly, the subject position is the same in the first image and the corrected image, so the omnifocal image is common to both and only the blur amount differs. In other words, the first image and the corrected image show the subject in the same state and differ only in the focus state.
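- A minimal sketch of the motion compensation step, assuming a single global translational motion vector for the whole image (the estimation above is per block; border handling by edge replication is an arbitrary choice, not specified by the patent):

```python
import numpy as np

def motion_compensate(img, motion):
    """Translate a 2-D image by the integer motion vector (dx, dy) so
    that the subject position matches the first image. Vacated border
    pixels are filled by edge replication (an assumption)."""
    dx, dy = int(round(motion[0])), int(round(motion[1]))
    h, w = img.shape
    pad = max(abs(dx), abs(dy), 1)
    padded = np.pad(img, pad, mode="edge")
    # Cropping the padded image at an offset realizes the translation.
    return padded[pad + dy:pad + dy + h, pad + dx:pad + dx + w]
```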
- Upon receiving the corrected image from the corrected image generation unit 20, the subject distance measurement unit 30 measures the subject distance by DFD based on the correlation value of the blur amount between the first image and the corrected image (step S105). Specifically, the subject distance can be measured by finding the distance d that minimizes Expression 5.
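- A sketch of this minimization, following the Expression 4/5 reconstruction given earlier and assuming the per-distance OTFs were measured in advance and are supplied as frequency-domain arrays (all names are illustrative):

```python
import numpy as np

def measure_distance(first_img, corrected_img, otf1_by_d, otf2_by_d):
    """DFD distance search: return the candidate distance d that
    minimizes e(d) = sum over (u, v) of |I1*OTF2(d) - I2*OTF1(d)|.
    otf1_by_d / otf2_by_d map each candidate distance d to the
    pre-measured OTF arrays of the two focus states."""
    i1 = np.fft.fft2(first_img)      # I1(u, v): first image spectrum
    i2 = np.fft.fft2(corrected_img)  # I2(u, v): corrected image spectrum
    best_d, best_e = None, float("inf")
    for d, otf1 in otf1_by_d.items():
        e = np.abs(i1 * otf2_by_d[d] - i2 * otf1).sum()  # e(d)
        if e < best_e:
            best_d, best_e = d, e
    return best_d
```

- Chaining these sketches gives the overall Embodiment 1 flow: estimate the first motion amount with estimate_motion(first, third), scale it with target_motion_from_first, motion-compensate the second image with motion_compensate, and feed the first image and the corrected image to measure_distance.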
- As described above, in the present embodiment, rather than directly estimating the motion amount between the first image and the second image, which are photographed in different focus states, the target motion amount is estimated indirectly via the first motion amount, which allows it to be estimated with high accuracy. By performing motion compensation on the second image using this target motion amount, a corrected image with no positional deviation of the subject relative to the first image can be generated with high accuracy, improving the subject distance measurement by DFD.
- It is also possible to directly calculate the motion amount between the first image and the second image even though their focus states differ; when the difference between the focus state of the first image and that of the second image is small, there is a high possibility that the motion amount can be estimated well.
- However, when the difference in focus state between the images is large, the amount of blur differs greatly between them, so the correlation between the images is low even for the same subject, and many errors occur in the estimated motion amount. On the other hand, for DFD itself, the greater the difference in focus state between the images, the better the measurement accuracy of the subject distance.
- In the present embodiment, the target motion amount is estimated from the first motion amount, which is obtained with high accuracy between the first image and the third image by the block matching method, so the motion amount between the first image and the second image can be estimated with high accuracy.
- By using the target motion amount estimated in this way, the positional deviation between the first image and the corrected image can be eliminated, or reduced to the extent that the subject distance can be measured satisfactorily, so the subject distance measurement can be performed well.
- Embodiment 2. An image processing apparatus according to Embodiment 2 of the present invention will be described with reference to FIGS. 1 and 6 to 8.
- The image processing apparatus according to the present embodiment differs from the image processing apparatus 100 according to Embodiment 1 in that the target motion amount estimation unit 10B includes, in addition to the first motion amount estimation unit that obtains the first motion amount between the first image and the third image, a second motion amount estimation unit 11B that directly obtains a second motion amount between the first image and the second image.
- As in Embodiment 1, a case where the imaging apparatus is the video camera 200 shown in FIG. 19 and the shooting states are the two focus states of the near view focus and the distant view focus will be described as an example.
- FIG. 6 is a block diagram illustrating a configuration example of the target motion amount estimation unit 10B. Note that, in the configuration of the image processing apparatus according to the present embodiment, blocks that are the same as those of the image processing apparatus 100 according to Embodiment 1 are denoted by the same reference numerals, and description thereof is omitted.
- The image processing apparatus according to the present embodiment has the same block configuration as the image processing apparatus 100 according to Embodiment 1 shown in FIG. 1, and includes the target motion amount estimation unit 10B, the corrected image generation unit 20, and the subject distance measurement unit 30. The configurations of the corrected image generation unit 20 and the subject distance measurement unit 30 are the same as those in Embodiment 1.
- Like the image processing apparatus 100 according to Embodiment 1, the image processing apparatus according to the present embodiment is configured to alternately and continuously acquire from the video camera 200 distant view and near view photographed images captured consecutively in time. As before, the photographed image that is the target of subject distance calculation is referred to as the first image, the image captured immediately before the first image as the second image, and the image captured immediately before the second image as the third image.
- The target motion amount estimation unit 10B is configured to estimate, as the target motion amount, the amount of positional deviation of the subject occurring between the first image captured with the near view focus and the second image captured with the distant view focus; as shown in FIG. 6, it includes the first motion amount estimation unit 11A, a second motion amount estimation unit 11B, and a motion amount determination unit 12B.
- The configuration of the first motion amount estimation unit 11A is the same as in Embodiment 1: the first motion amount between the first image and the third image is obtained by the block matching method and output to the motion amount determination unit 12B.
- The second motion amount estimation unit 11B estimates, using the block matching method described in Embodiment 1, the amount of positional deviation of the subject between the first image and the second image as the second motion amount (corresponding to an initial estimate), and outputs it to the motion amount determination unit 12B.
- Like the target motion amount and the first motion amount, the second motion amount is a vector quantity defined by the direction and the magnitude of the displacement.
- Based on the first motion amount and the second motion amount, the motion amount determination unit 12B estimates the target motion amount, which indicates the positional deviation of the subject occurring between the first image and the second image.
- FIG. 7 is a flowchart showing the processing procedure of the image processing method in the present embodiment, and FIG. 8 is an explanatory diagram showing the relationship among the captured images, the target motion amount, the first motion amount, and the second motion amount in the present embodiment.
- The steps common to the processing flow of Embodiment 1 shown in FIG. 3 are given the same reference numerals.
- First, the video camera 200 (imaging apparatus) shown in FIG. 19 captures images in a plurality of focus states and outputs them to the image processing apparatus 100 (step S101).
- As shown in FIG. 7, the image processing apparatus 100 first estimates, in the first motion amount estimation unit 11A of the target motion amount estimation unit 10B, the motion amount between the first image and the third image as the first motion amount (step S102). The steps so far are the same as in Embodiment 1.
- Next, the image processing apparatus 100 estimates, in the second motion amount estimation unit 11B of the target motion amount estimation unit 10B, the second motion amount, which is an initial estimate of the motion amount between the first image and the second image (step S201). The block matching method can be used for this estimation.
- Next, the image processing apparatus 100 estimates the target motion amount using the first motion amount and the second motion amount in the motion amount determination unit 12B of the target motion amount estimation unit 10B (step S202).
- In general, a directly estimated motion amount tends to be more accurate than one estimated indirectly; that is, there are cases where directly estimating the second motion amount between the first image and the second image gives higher accuracy than deriving it from the first motion amount.
- However, particularly in subject distance measurement by DFD, the focus state when the first image is photographed and the focus state when the second image is photographed are likely to differ greatly. When the amount of blur differs greatly due to the change in focus state, the correlation between the images is low even for the same subject, and a large error may occur in the estimated motion amount.
- Therefore, the motion amount determination unit 12B determines the accuracy of the second motion amount: when the accuracy required for subject distance measurement is obtained, the second motion amount is used directly as the target motion amount; when it is not, the target motion amount is estimated from the first motion amount as in Embodiment 1.
- The method of estimating the target motion amount from the first motion amount is the same as in step S103 of Embodiment 1.
- Specifically, the motion amount determination unit 12B determines the accuracy of the second motion amount based on the minimum evaluation value r_{x,y} (the minimum evaluation value r_min) found by the block matching method when estimating the second motion amount. More specifically, when the minimum evaluation value r_min is determined to be smaller than a predetermined threshold (that is, when the accuracy is determined to be high), the motion amount determination unit 12B estimates the second motion amount as the target motion amount; when r_min is determined to be larger than the threshold (when the accuracy is determined to be low), it estimates the target motion amount from the first motion amount.
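- A sketch of this decision rule, reusing the illustrative helpers above (the threshold is an assumed value):

```python
def estimate_target_motion(second_motion, r_min, first_motion,
                           t12, t13, threshold=1000):
    """Adopt the directly estimated second motion amount when its
    block matching residual r_min indicates high accuracy; otherwise
    fall back to scaling the first motion amount as in Embodiment 1."""
    if r_min < threshold:  # accuracy judged to be high
        return second_motion
    return target_motion_from_first(first_motion, t12, t13)
```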
- Steps S104 and S105 are the same as in Embodiment 1.
- As described above, when the estimation accuracy of the second motion amount, estimated between the first image and the second image captured in different focus states, is determined to be high, the second motion amount is estimated as the target motion amount; when it is determined to be low, the target motion amount is estimated based on the first motion amount, estimated between the first image and the third image captured in the same focus state. The target motion amount can therefore be estimated with high accuracy in either case.
- Note that the first motion amount estimation unit 11A may be configured not to calculate the first motion amount when the minimum evaluation value r_min obtained by the second motion amount estimation unit 11B is smaller than the predetermined threshold.
- Alternatively, the target motion amount estimation in step S202 may take as the target motion amount a motion amount obtained by internally dividing between half the first motion amount and the second motion amount, with weights based on the ratio of the minimum evaluation values r_min of the block matching applied when estimating the first motion amount and the second motion amount; see the sketch below.
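- A sketch of this internal-division variant; the text does not specify the exact weighting, so weighting each candidate by the inverse of its block matching residual is an assumption:

```python
def blend_motions(first_motion, r1_min, second_motion, r2_min):
    """Internally divide between half the first motion amount and the
    second motion amount, weighting each candidate by the inverse of
    its minimum block matching evaluation value."""
    cand1 = (first_motion[0] * 0.5, first_motion[1] * 0.5)
    w1, w2 = 1.0 / (r1_min + 1e-9), 1.0 / (r2_min + 1e-9)
    s = w1 + w2
    return ((w1 * cand1[0] + w2 * second_motion[0]) / s,
            (w1 * cand1[1] + w2 * second_motion[1]) / s)
```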
- Embodiment 3. An image processing apparatus according to Embodiment 3 of the present invention will be described with reference to FIGS. 1 and 9 to 12.
- The image processing apparatus according to the present embodiment differs from the image processing apparatus according to Embodiment 2 in that the target motion amount estimation unit 10C includes, in addition to the units that obtain the first motion amount between the first image and the third image and the second motion amount between the first image and the second image, a third motion amount estimation unit that obtains a third motion amount between the second image and the third image.
- As in Embodiments 1 and 2, a case where the imaging apparatus is the video camera 200 shown in FIG. 19 and the shooting states are the two focus states of the near view focus and the distant view focus will be described as an example.
- FIG. 9 is a block diagram illustrating a configuration example of the target motion amount estimation unit 10C. Note that, in the configuration of the image processing apparatus according to the present embodiment, blocks that are the same as those in the image processing apparatus according to the first or second embodiment are denoted by the same reference numerals, and description thereof is omitted.
- The image processing apparatus according to the present embodiment has the same block configuration as the image processing apparatus 100 according to Embodiment 1 shown in FIG. 1, and includes the target motion amount estimation unit 10C, the corrected image generation unit 20, and the subject distance measurement unit 30. The configurations of the corrected image generation unit 20 and the subject distance measurement unit 30 are the same as those in Embodiments 1 and 2.
- Like the image processing apparatuses according to Embodiments 1 and 2, the image processing apparatus according to the present embodiment alternately and continuously acquires from the video camera 200 distant view photographed images taken with the distant view focus and near view photographed images taken with the near view focus. As before, the photographed image that is the target of subject distance calculation is referred to as the first image, the image captured immediately before the first image as the second image, and the image captured immediately before the second image as the third image.
- the target motion amount estimation unit 10 ⁇ / b> C is configured to estimate the amount of displacement of the subject generated between the first image captured with the foreground focus and the second image captured with the distant focus as the target motion amount. As shown in FIG. 9, the first motion amount estimation unit 11A, the second motion amount estimation unit 11B, the third motion amount estimation unit 11C, and the motion amount determination unit 12C are included.
- The configuration of the first motion amount estimation unit 11A is the same as in the first and second embodiments: it obtains the first motion amount between the first image and the third image by the block matching method and outputs it to the motion amount determination unit 12C.
- The configuration of the second motion amount estimation unit 11B is the same as in the second embodiment: it directly obtains the second motion amount (initial estimate) between the first image and the second image by the block matching method and outputs it to the motion amount determination unit 12C.
- The third motion amount estimation unit 11C estimates, as the third motion amount, the amount of subject displacement between the second image and the third image by the block matching method described in the first embodiment, and outputs it to the motion amount determination unit 12C.
- The motion amount determination unit 12C estimates the target motion amount, which indicates the amount of displacement of the subject between the first image and the second image, based on the first motion amount estimated by the first motion amount estimation unit 11A, the second motion amount estimated by the second motion amount estimation unit 11B, and the third motion amount estimated by the third motion amount estimation unit 11C.
- FIG. 10 is a flowchart showing the processing procedure of the image processing method in the present embodiment, and FIG. 11 is an explanatory diagram showing the relationship among the captured images, the target motion amount, the first motion amount, the second motion amount, and the third motion amount in the present embodiment.
- FIG. 12 is a vector diagram showing the relationship among the first motion amount, the second motion amount, the third motion amount, and the target motion amount.
- First, the video camera 200 (imaging apparatus) shown in FIG. 19 captures images in a plurality of focus states and outputs them to the image processing apparatus 100 (step S101).
- Next, in the image processing apparatus 100, as shown in FIG. 10, the first motion amount estimation unit 11A of the target motion amount estimation unit 10C calculates the amount of motion between the first image and the third image as the first motion amount (step S102). The steps up to this point are the same as in the first and second embodiments.
- Next, the image processing apparatus 100 estimates, by the second motion amount estimation unit 11B of the target motion amount estimation unit 10C, the second motion amount, which is the initial estimate of the motion amount between the first image and the second image (step S201). This process is the same as in the second embodiment.
- Next, the image processing apparatus 100 estimates, by the third motion amount estimation unit 11C of the target motion amount estimation unit 10C, the third motion amount, which is the motion amount between the second image and the third image (step S301). For this estimation, the block matching method can be used.
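- For reference, the following is a minimal sketch of the kind of block matching used throughout these steps, assuming grayscale images and a sum-of-absolute-differences criterion in place of the patent's Expression 6, which is not reproduced here; the block size, search range, and function names are illustrative.

```python
import numpy as np

def block_match(src, dst, top, left, block=16, search=8):
    """Find the motion vector (dy, dx) of the block at (top, left) in src
    that best matches a block in dst, and return it together with the
    minimum evaluation value r_min (smaller is a better match)."""
    ref = src[top:top + block, left:left + block].astype(float)
    best, r_min = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > dst.shape[0] or x + block > dst.shape[1]:
                continue  # candidate block would fall outside the image
            cand = dst[y:y + block, x:x + block].astype(float)
            r = np.abs(ref - cand).sum()  # SAD evaluation value
            if r < r_min:
                r_min, best = r, (dy, dx)
    return best, r_min
```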
- Next, the image processing apparatus 100 estimates, by the motion amount determination unit 12C of the target motion amount estimation unit 10C, the target motion amount based on the first motion amount, the second motion amount, and the third motion amount (step S302).
- Here, the method of determining the target motion amount by selecting the combination judged to have high estimation accuracy from the relationship among the first motion amount, the second motion amount, and the third motion amount will be described with reference to FIG. 12.
- The first motion amount obtained in step S102 is denoted V1. The second motion amount obtained in step S201 is denoted V2a, and the evaluation value obtained at that time is denoted r2a. The third motion amount obtained in step S301 is denoted V3a, and the evaluation value obtained at that time is denoted r3a.
- Since V1, V2a, and V3a represent two-dimensional motion amounts on the image, they are treated as vectors, as indicated by the solid arrows in FIG. 12.
- When the relationship of Expression 7 (V1 = V2a + V3a) holds, V1, V2a, and V3a are consistent as motions among the three images, so all of them are judged to have high estimation accuracy, and the second motion amount V2a is used as the target motion amount.
- When the relationship of Expression 7 does not hold, at least one of the first motion amount V1, the second motion amount V2a, and the third motion amount V3a has low estimation accuracy.
- Since the first motion amount V1 is estimated between captured images taken in the same focus state, its estimation accuracy is considered high.
- Since the second motion amount V2a and the third motion amount V3a are estimated between captured images taken in different focus states, either the second motion amount V2a or the third motion amount V3a, or both, are considered to have low estimation accuracy.
- If the estimation accuracy of the second motion amount V2a is high, a correction candidate V3b for the third motion amount can be calculated by Expression 8 (V3b = V1 - V2a). This is represented by a dotted line in FIG. 12.
- Conversely, if the estimation accuracy of the third motion amount V3a is high, the vector sum of the third motion amount V3a and the true second motion amount must match the first motion amount V1, so a correction candidate V2b for the second motion amount can be calculated by Expression 9 (V2b = V1 - V3a). This is represented by a broken line in FIG. 12.
- Next, the target block of the search source image (the second image) and the block region of the search destination image (the third image) corresponding to the correction candidate V3b are cut out, and the evaluation value computed from them by Expression 6 is denoted r3b. Similarly, the target block of the search source image (the first image) and the block region of the search destination image (the second image) corresponding to the correction candidate V2b are cut out, and the evaluation value computed by Expression 6 is denoted r2b.
- The total evaluation value for the combination of the second motion amount V2a and the third motion amount correction candidate V3b is (r2a + r3b), and the total evaluation value for the combination of the second motion amount correction candidate V2b and the third motion amount V3a is (r2b + r3a). The combination with the smaller total evaluation value is adopted: when (r2a + r3b) is smaller, the estimation accuracy of the second motion amount V2a is judged to be high, and V2a is set as the target motion amount; when (r2b + r3a) is smaller, the estimation accuracy of the third motion amount V3a is judged to be high, and the correction candidate V2b is set as the target motion amount.
- When both total evaluation values are large, that is, when neither combination is judged to be sufficiently reliable, a motion amount equal to half of the first motion amount V1 is determined as the target motion amount.
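- Put together, step S302 can be sketched as follows, under the stated reading of Expressions 7 to 9; the consistency tolerance, the reliability threshold, and the callbacks used to recompute evaluation values are all illustrative assumptions.

```python
import numpy as np

def decide_target_motion(v1, v2a, v3a, r2a, r3a, eval23, eval12, tol=0.5, r_th=1e4):
    """Choose the target motion amount from V1, V2a, V3a (step S302).

    eval23(vec) and eval12(vec) recompute the block-matching evaluation
    value of a candidate vector on the second-third and first-second
    image pairs respectively (cf. the block_match sketch above)."""
    v1, v2a, v3a = (np.asarray(v, dtype=float) for v in (v1, v2a, v3a))
    if np.linalg.norm(v1 - (v2a + v3a)) <= tol:  # Expression 7 holds
        return v2a
    v3b = v1 - v2a                 # Expression 8: candidate if V2a is reliable
    v2b = v1 - v3a                 # Expression 9: candidate if V3a is reliable
    r3b, r2b = eval23(v3b), eval12(v2b)
    if min(r2a + r3b, r2b + r3a) > r_th:
        return v1 / 2.0            # fall back to half the first motion amount
    return v2a if (r2a + r3b) <= (r2b + r3a) else v2b
```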
- Upon receiving the target motion amount from the target motion amount estimation unit 10C, the corrected image generation unit 20 performs motion compensation on the second image based on the received target motion amount and generates a corrected image (step S104). Steps S104 and S105 are the same as in the first and second embodiments.
- As described above, according to the present embodiment, the target motion amount can be determined by selecting the combination judged to have high estimation accuracy from the relationship among the three motion amounts (the first, second, and third motion amounts), which enables estimation with even higher accuracy.
- Embodiment 4 An image processing apparatus according to Embodiment 4 of the present invention will be described with reference to FIGS. 13 to 15.
- The image processing apparatus according to the present embodiment differs from the image processing apparatuses 100 according to the first to third embodiments in that it measures the subject distance while taking into account the influence of blur that occurs when the subject moves quickly or the shooting direction changes quickly.
- Also in the present embodiment, a case where the imaging apparatus is the video camera 200 shown in FIG. 19 and the shooting states are two focus states, foreground focus and distant focus, will be described as an example.
- FIG. 13 is a block diagram illustrating a configuration example of the image processing apparatus 100. Note that, in the configuration of the image processing apparatus according to the present embodiment, blocks common to the image processing apparatus 100 according to any of the first to third embodiments are denoted by the same reference numerals, and their description is omitted.
- the image processing apparatus 100 includes a target motion amount estimation unit 10, a corrected image generation unit 20, a subject distance measurement unit 30, a blur region determination unit 40, and a subject distance compensation unit 50, as shown in FIG.
- In the present embodiment, the target motion amount estimation unit 10 is described as having the same configuration as the target motion amount estimation unit 10A of the first embodiment; however, it may instead have the same configuration as the target motion amount estimation unit 10B of the second embodiment or the target motion amount estimation unit 10C of the third embodiment.
- The image processing apparatus according to the present embodiment is configured to alternately and continuously acquire, from the video camera 200, distant-view images captured with the distant focus and foreground images captured with the foreground focus.
- As before, the captured image used for subject distance calculation is referred to as the first image, the image captured immediately before the first image as the second image, and the image captured immediately before the second image as the third image.
- The blur region determination unit 40 determines, based on the target motion amount output from the target motion amount estimation unit 10, whether or not blur has occurred in the first image and the second image, and outputs a blur region determination result, including information indicating the regions determined to be blurred, to the corrected image generation unit 20, the subject distance measurement unit 30, and the subject distance compensation unit 50.
- The corrected image generation unit 20 performs motion compensation on the second image based on the target motion amount output from the target motion amount estimation unit 10, generates a corrected image in which the subject is not displaced relative to the first image, and outputs it to the subject distance measurement unit 30. In the present embodiment, however, the corrected image is generated only for regions determined to have no blur in the blur region determination result output from the blur region determination unit 40 (hereinafter referred to as non-blur regions).
- The subject distance measurement unit 30 measures the subject distance by DFD based on the correlation value of the blur amount between the first image and the corrected image, as in the first to third embodiments. In the present embodiment, however, the subject distance is measured only for regions determined to be non-blur regions in the blur region determination result.
- The subject distance compensation unit 50 estimates the subject distance for regions determined to be blurred in the blur region determination result (hereinafter referred to as blur regions).
- FIG. 14 is a flowchart showing the processing procedure of the image processing method according to the present embodiment, and FIG. 15 is an explanatory diagram showing the blur region determination method. In the processing flow shown in FIG. 14, processes common to the processing flow of the first embodiment shown in FIG. 3, that of the second embodiment shown in FIG. 7, and that of the third embodiment shown in FIG. 10 are given the same reference numerals, and their description is omitted.
- First, the video camera 200 (imaging apparatus) shown in FIG. 19 captures images in a plurality of focus states and outputs them to the image processing apparatus 100 (step S101). This process is the same as in the first to third embodiments.
- Next, as shown in FIG. 14, the image processing apparatus 100 first estimates the target motion amount between the first image and the second image by the target motion amount estimation unit 10 (step S401). This process is the same as the processes shown in steps S102 and S103 (see FIG. 3) in the first embodiment. Alternatively, the target motion amount estimation in step S401 may use the processes shown in steps S102, S201, and S202 (see FIG. 7) in the second embodiment, or the processes shown in steps S102, S201, S301, and S302 (see FIG. 10) in the third embodiment.
- Next, the image processing apparatus 100 uses the blur region determination unit 40 to determine the regions where blur has occurred in at least one of the first image and the second image (step S402).
- When blur occurs in only one of two captured images, the omnifocal image S(u, v) differs between the two expressions, Expression 2 and Expression 3, so Expression 4 does not hold. Accordingly, the subject distance cannot be measured based on Expression 5.
- On the other hand, when the same blur occurs in both captured images, the omnifocal image S(u, v), including the influence of the blur, is common to the two captured images. In theory, therefore, Expression 4 holds, and the subject distance can be measured based on Expression 5.
- In practice, however, of the frequency information of the captured images corresponding to the common omnifocal image S(u, v), only the low-frequency components, which differ little between captured images taken in different focus states, remain. Accordingly, in this case as well, it is considered difficult to measure the subject distance by DFD processing based on Expression 5.
- For regions in which blur has occurred, therefore, the subject distance is obtained by another method in the present embodiment.
- Here, a case where the blur determination uses the target motion amount, which is the amount of motion between the first image and the second image, will be described.
- Strictly speaking, the blur caused by movement of the subject or a change in the shooting direction during the exposure time is not the same thing as the positional displacement of the subject between captured images; however, when the displacement between captured images is large, it is likely that the subject was also moving during the exposure.
- Therefore, the blur region determination unit 40 determines that blur larger than a predetermined size has occurred in at least one of the first image and the second image when the target motion amount is larger than a predetermined threshold. By performing this determination over the entire captured image (all the block regions set in the captured image), a blur region determination result that classifies each block region as a blur region or a non-blur region is obtained.
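- The determination itself reduces to a per-block threshold test, as in the following sketch; the array layout and the threshold are illustrative assumptions.

```python
import numpy as np

def blur_region_map(target_motion, threshold):
    """Classify each block as blur / non-blur from its target motion
    amount. target_motion is an (H_blocks, W_blocks, 2) array of motion
    vectors; blocks whose motion length exceeds threshold are treated as
    blurred."""
    lengths = np.linalg.norm(np.asarray(target_motion, dtype=float), axis=-1)
    return lengths > threshold  # True = blur region, False = non-blur region
```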
- Next, upon receiving the target motion amount from the target motion amount estimation unit 10 and the blur region determination result from the blur region determination unit 40, the corrected image generation unit 20 performs motion compensation on the second image based on the received target motion amount and generates a corrected image (step S104). In the present embodiment, the corrected image generation unit 20 does not correct the block regions determined to be blur regions and performs motion compensation only on the block regions determined to be non-blur regions. The processing for the block regions determined to be non-blur regions is the same as in the first to third embodiments.
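- For a single non-blur block, the motion compensation amounts to copying the block of the second image shifted by the target motion amount, roughly as follows; the integer-pixel shift, the sign convention, and the omission of bounds checking and sub-pixel interpolation are all simplifications.

```python
def compensate_block(second, top, left, motion, block=16):
    """Cut out the block of the second image displaced by the target
    motion amount so that the subject aligns with the block at
    (top, left) in the first image (sketch; no bounds checking)."""
    dy, dx = (int(round(m)) for m in motion)
    return second[top + dy:top + dy + block, left + dx:left + dx + block]
```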
- Next, upon receiving the corrected image from the corrected image generation unit 20 and the blur region determination result from the blur region determination unit 40, the subject distance measurement unit 30 measures the subject distance by DFD based on the correlation value of the blur amount between the first image and the corrected image (step S105). In the present embodiment, the subject distance measurement unit 30 does not measure the subject distance by DFD for the block regions determined to be blur regions; it measures it only for the block regions determined to be non-blur regions. The processing for the non-blur block regions is the same as in the first to third embodiments. Consequently, at this point the subject distance has not yet been obtained for the block regions determined to be blur regions.
- Next, the subject distance compensation unit 50 performs subject distance compensation processing on the block regions determined to be blur regions, so that subject distances are obtained for all the pixels constituting the captured image (step S403).
- Subject distance compensation process 1: Here, the subject distance of each pixel in a blur region is calculated by interpolation using the subject distances of the non-blur regions surrounding (adjacent to) the blur region.
- The outline of this process will be described with reference to FIG. 15.
- In FIG. 15, of the image region representing the entire captured image, the regions determined to be blur regions are shown hatched, and the regions determined to be non-blur regions are shown in white. The pixel of interest in the blur region, whose subject distance is to be generated by interpolation, is represented by a round dot, and the reference pixels in the non-blur regions whose subject distances are referred to in compensation process 1 are represented by diamond-shaped dots.
- In the present embodiment, for each pixel of interest in a blur region whose subject distance is to be generated by interpolation, the subject distance compensation unit 50 refers to the subject distances of the pixels located where straight lines drawn from the pixel of interest in the horizontal and vertical directions first intersect a non-blur region, using those pixels as reference pixels. As can be seen from FIG. 15, the reference pixels are pixels in non-blur regions. The subject distance of the pixel of interest is then estimated by calculating the average of the subject distances of the reference pixels, weighted by the reciprocal of the length of the straight line drawn from the pixel of interest to each reference pixel.
- The subject distance compensation unit 50 performs this processing while sequentially setting the pixel of interest, and thereby generates the subject distance by interpolation for all the pixels in the blur region.
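- A minimal sketch of compensation process 1 follows, assuming a per-pixel depth map and a boolean blur mask and searching only along the four axis directions; border handling is simplified and the names are illustrative.

```python
import numpy as np

def compensate_distance(depth, blur_mask):
    """Fill each blurred pixel with the inverse-distance-weighted average
    of the nearest non-blur pixels found to its left, right, top, and
    bottom (cf. FIG. 15)."""
    h, w = depth.shape
    out = depth.astype(float)  # copy as float so averages are preserved
    for y, x in zip(*np.nonzero(blur_mask)):
        weights, values = [], []
        for dy, dx in ((0, -1), (0, 1), (-1, 0), (1, 0)):
            yy, xx, n = y, x, 0
            while 0 <= yy < h and 0 <= xx < w and blur_mask[yy, xx]:
                yy += dy; xx += dx; n += 1   # walk until leaving the blur region
            if 0 <= yy < h and 0 <= xx < w:  # reached a reference pixel
                weights.append(1.0 / n)       # reciprocal of the line length
                values.append(depth[yy, xx])
        if weights:
            out[y, x] = np.average(values, weights=weights)
    return out
```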
- Subject distance compensation process 2: As another example of the subject distance compensation process by the subject distance compensation unit 50, for each pixel to be compensated, the subject distance of the second image obtained one frame earlier can be used after being corrected with the target motion amount.
- In moving image processing, captured images are processed in the order in which they were captured; that is, the measurement process for the second image is performed before the subject distance is measured for the current first image. In that measurement, the subject distance in the second image is obtained based on the correlation value of the blur amount between the second image and a corrected image obtained by motion-compensating the third image. Therefore, by applying the target motion amount, which is the motion amount between the first image and the second image, to the subject distance measured for the second image (the past frame), the subject distance for the first image can be estimated. In compensation process 2, the subject distance generated by motion-compensating the subject distance measured in the second image in this way is used.
- With this configuration, even in regions where it is difficult to measure the subject distance based on the correlation value of the blur amount because of blur, the subject distance can be generated from the measurement result of the past frame, so a highly accurate subject distance can be obtained.
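- A sketch of compensation process 2 under a strong simplification, namely a single global integer target motion vector, could look as follows; note that np.roll wraps values around the image border, which a real implementation would have to handle differently.

```python
import numpy as np

def compensate_from_past(prev_depth, target_motion):
    """Warp the subject-distance map measured for the second image (past
    frame) by the target motion amount so that it aligns with the first
    image (sketch; sign convention and border handling are illustrative)."""
    dy, dx = (int(round(m)) for m in target_motion)
    return np.roll(prev_depth, shift=(dy, dx), axis=(0, 1))
```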
- Note that the generation of the corrected image and the measurement of the subject distance based on the correlation value of the blur amount may be performed over the entire image rather than only over the non-blur regions. In that case, subject distances that are ultimately not used are also computed, so redundant calculation is included. However, in implementations such as an LSI (Large Scale Integration) circuit, generating the corrected image and measuring the subject distance over the entire image, rather than branching the processing according to specific conditions, allows these processes to start before the blur region determination result is obtained, which may reduce delay. In that case, the subject distances of the regions determined to be blur regions may simply be overwritten with the subject distances generated by the subject distance compensation unit 50; the same result is obtained as the subject distance.
- Embodiment 5 An image processing apparatus according to Embodiment 5 of the present invention will be described with reference to FIGS. 16 to 18.
- The image processing apparatus according to the present embodiment differs from the image processing apparatuses 100 according to the first to fourth embodiments in that it generates an HDR (high dynamic range) image instead of measuring a subject distance.
- The image processing apparatus according to the present embodiment is an apparatus that generates an HDR image from a plurality of captured images captured in a plurality of shooting states, and a case where it is mounted on an imaging apparatus capable of capturing moving images (for example, a surveillance camera) will be described as an example. Accordingly, in the present embodiment, a case where the shooting states are two exposure states, overexposure and underexposure, is described as an example.
- The imaging apparatus of the present embodiment is configured to capture images while alternately switching, at regular time intervals, between overexposure (corresponding to the first shooting state in the present embodiment) and underexposure (corresponding to the second shooting state in the present embodiment). The imaging apparatus thus alternately outputs overexposed images captured with overexposure and underexposed images captured with underexposure to the image processing apparatus according to the present embodiment.
- Although the image processing apparatus according to the present embodiment is described as being mounted on the imaging apparatus, it may instead be mounted on another device capable of capturing moving images, or provided in another device capable of acquiring captured images from the imaging apparatus.
- FIG. 16 is a block diagram illustrating a configuration example of the image processing apparatus 300. Note that, in the configuration of the image processing apparatus according to the present embodiment, blocks that are the same as those of the image processing apparatus 100 according to Embodiment 1 are denoted by the same reference numerals, and description thereof is omitted.
- The image processing apparatus 300 is an image processing apparatus that generates an HDR image by combining captured images captured with overexposure and underexposure. As illustrated in FIG. 16, it includes the target motion amount estimation unit 10, the corrected image generation unit 20, and an HDR image generation unit 60. The configurations of the target motion amount estimation unit 10 and the corrected image generation unit 20 are described as being the same as in the first embodiment, but they may be the same as in the second or third embodiment.
- The image processing apparatus 300 is configured to alternately and continuously acquire, from the imaging apparatus, overexposed images captured with overexposure and underexposed images captured with underexposure.
- In the following, the overexposed image used for HDR image generation is referred to as the first image, the underexposed image captured immediately before the first image as the second image, and the overexposed image captured immediately before the second image as the third image. The exposure state of the first image and the third image is thus the same. Here, a case where a given overexposed image is used as the first image is described as an example, but an HDR image can be generated by the same method even when an underexposed image is used as the first image.
- The target motion amount estimation unit 10 estimates, as the target motion amount, the amount of positional displacement of the subject between the first image (the overexposed image in the present embodiment) and the second image (the underexposed image in the present embodiment), and outputs it to the corrected image generation unit 20.
- The corrected image generation unit 20 performs motion compensation on the second image based on the target motion amount, generates a corrected image in which the subject is not displaced relative to the first image, and outputs it to the HDR image generation unit 60.
- The HDR image generation unit 60 combines the first image and the corrected image to generate an HDR image.
- FIG. 17 is a flowchart showing the processing procedure of the image processing method in the present embodiment, and FIG. 18 is an explanatory diagram showing the relationship among the captured images, the target motion amount, and the first motion amount in the present embodiment. In the processing flow shown in FIG. 17, processes common to the processing flow of the first embodiment shown in FIG. 3, that of the second embodiment shown in FIG. 7, that of the third embodiment shown in FIG. 10, and that of the fourth embodiment shown in FIG. 14 are given the same reference numerals, and their description is omitted.
- First, the imaging apparatus captures images in a plurality of exposure states and outputs them to the image processing apparatus 300 (step S501).
- Specifically, the imaging apparatus alternately and continuously repeats shooting with overexposure and underexposure, and outputs the captured overexposed images and underexposed images to the image processing apparatus 300. Switching between overexposure and underexposure is performed, for example, by turning a neutral density filter on and off, switching the exposure time between long and short exposure, or switching the aperture. In the overexposed image, dark parts are reproduced but bright parts are blown out, whereas in the underexposed image, bright parts are reproduced but dark parts are crushed and not reproduced.
- One of the overexposed images taken with overexposure is used as the first image for generating an HDR image, the underexposed image taken one frame before it as the second image, and the overexposed image taken one frame before that as the third image.
- Note that step S501 is not an essential step of the present invention but is described as constituting a more preferable form. Any configuration may be used as long as the image processing apparatus 300 can acquire captured images in a plurality of shooting states. Further, the processing of steps S102 to S104 and S502 described below may be executed in parallel with the shooting, or after the shooting.
- Next, as shown in FIG. 17, the image processing apparatus 300 first obtains the first motion amount between the first image and the third image by the target motion amount estimation unit 10 (step S102), and then estimates the target motion amount between the first image and the second image using the first motion amount (step S103). These processes are the same as in the first embodiment (see FIG. 3). Instead of steps S102 and S103, steps S102, S201, and S202 of the second embodiment (see FIG. 7), or steps S102, S201, S301, and S302 of the third embodiment (see FIG. 10), may be used.
- Next, upon receiving the target motion amount, the corrected image generation unit 20 performs motion compensation on the second image based on the received target motion amount and generates a corrected image (step S104). This process is the same as in the first embodiment (see FIG. 3).
- Next, upon receiving the corrected image from the corrected image generation unit 20, the HDR image generation unit 60 combines the first image and the corrected image to generate a combined image having a wide dynamic range (step S502).
- A conventional method can be used to generate the HDR image. For example, an HDR image is generated by combining, from the first image and the corrected image, the regions that are closer to appropriate exposure.
- Alternatively, the pixel value of the combination target pixel of the first image multiplied by a first weighting coefficient and the pixel value of the corresponding pixel of the corrected image multiplied by a second weighting coefficient may be added together and used as the pixel value of the combined image.
- In this case, for each image, a low weighting coefficient is set for regions whose luminance is at or above a certain level and for regions of low luminance, and a high weighting coefficient is set for regions whose luminance is close to the median value.
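- The following is a minimal sketch of such a weighted combination for 8-bit grayscale inputs; the triangular weight function, the saturation thresholds, and the function names are illustrative choices, not taken from the patent.

```python
import numpy as np

def hdr_blend(first, corrected, low=16, high=240):
    """Blend the first image (overexposed) and the motion-compensated
    corrected image (underexposed) per pixel, weighting each pixel by
    how close its luminance is to the median and distrusting crushed
    shadows and blown highlights."""
    def weight(img):
        w = 1.0 - np.abs(img.astype(float) - 127.5) / 127.5  # peak near the median
        w[(img < low) | (img > high)] = 0.0                  # near-saturated regions
        return w + 1e-6                                      # avoid division by zero

    w1, w2 = weight(first), weight(corrected)
    out = (w1 * first.astype(float) + w2 * corrected.astype(float)) / (w1 + w2)
    return out.astype(np.uint8)
```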
- Note that blur may occur when the exposure time is switched between long and short exposure during the shooting of the captured images (here, a moving image) in step S501. Therefore, as in the fourth embodiment, a blur region determination unit 40 may be provided, and for regions determined to be blur regions, the image of the region may be generated from an image without blur.
- As described above, according to the present embodiment, the motion amount between the first image and the second image can be estimated with high accuracy by using the first motion amount between the first image and the third image, which is obtained with high accuracy by the block matching method.
- In general, the brightness (S/N ratio) differs between the first image and the second image, and when the exposure time is switched, the amount of blur in regions where the subject moves also differs between them. Furthermore, when the aperture is switched, not only the brightness (S/N ratio) but also the depth of field differs between the first image and the second image. In any of these cases, since the brightness (S/N ratio), the blur amount, and the depth of field are substantially the same between the first image and the third image, the first motion amount can be obtained with high accuracy.
- As a result, the positional displacement between the first image and the corrected image can be eliminated, or made small enough for HDR image generation, so the HDR image can be generated better.
- In Embodiments 1 to 5 described above, the case where image processing is performed using two captured images, the first image and the second image, captured in two shooting states has been described; however, the image processing may be performed using three or more captured images captured in three or more shooting states.
- For example, in the case of measuring the subject distance, captured images are acquired while the focus state is varied in a plurality of stages from the foreground focus (for example, the nearest position) to the distant focus (for example, infinity).
- In this case, two captured images having the same focus state may be used as the first image and the third image, and any captured image captured between them may be used as the second image.
- Alternatively, one captured image taken at the foreground focus may be used as the first image, the captured image taken at the foreground focus one cycle before it as the third image, and the plurality of captured images taken between the first image and the third image as second images. In this case, the target motion amount is obtained for each of the second images.
- The target motion amount used for each second image can be obtained by correcting the magnitude of the first motion amount according to the ratio of the shooting time intervals, as in the case where the shooting time intervals are not equal (see step S103 in the first embodiment).
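- This interval-ratio correction is a simple scaling, roughly as follows, assuming approximately uniform motion between the first and third images; the parameter names are illustrative.

```python
import numpy as np

def scale_motion(v1, t_first_to_second, t_first_to_third):
    """Scale the first motion amount V1 (first image -> third image) by
    the ratio of shooting time intervals to approximate the target
    motion amount toward an intermediate second image (cf. step S103)."""
    return np.asarray(v1, dtype=float) * (t_first_to_second / t_first_to_third)
```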
- Then, using the corresponding target motion amount, a corrected image without positional displacement of the subject relative to the first image is generated from each second image. In this way, a plurality of second images (corrected images) without positional displacement relative to the first image can be acquired.
- Then, the subject distance is measured using the first image and the plurality of corrected images.
- This configuration makes it possible to measure the subject distance with higher accuracy.
- Similarly, in the case of generating an HDR image as in the fifth embodiment, captured images are acquired while the exposure state is varied in a plurality of stages from overexposure to underexposure. In this case, two captured images having the same exposure state may be used as the first image and the third image, and any captured image captured between them may be used as the second image.
- Alternatively, one overexposed image may be used as the first image, the overexposed image taken one cycle before it as the third image, and the plurality of captured images taken between the first image and the third image as second images.
- In this case, the target motion amount is obtained for each of the second images. It can be obtained by correcting the magnitude of the first motion amount according to the ratio of the shooting time intervals, as in the case where the shooting time intervals are not equal (see step S103 in the first embodiment).
- Then, using the corresponding target motion amount, a corrected image without positional displacement of the subject relative to the first image is generated from each second image, so that a plurality of corrected images without positional displacement relative to the first image can be acquired.
- Then, an HDR image is generated using the first image and the plurality of corrected images.
- This configuration makes it possible to generate an HDR image with higher image quality.
- Each functional block in the block diagrams described above is typically realized as an LSI, which is an integrated circuit. These blocks may be implemented as individual chips, or a single chip may include some or all of them. Although the term LSI is used here, the circuit may also be called an IC, a system LSI, a super LSI, or an ultra LSI depending on the degree of integration.
- The method of circuit integration is not limited to LSI; implementation with a dedicated circuit or a general-purpose processor is also possible. An FPGA (Field Programmable Gate Array) that can be programmed after LSI manufacturing, or a reconfigurable processor in which the connections and settings of the circuit cells inside the LSI can be reconfigured, may also be used.
- Furthermore, each component may be implemented by dedicated hardware, or components that can be realized by software may be realized by executing a program.
- As described above, the image processing apparatus and the image processing method according to the present invention make it possible to perform image processing more stably and with high accuracy even when the subject is displaced between a plurality of captured images captured in a plurality of shooting states.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Signal Processing (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Electromagnetism (AREA)
- Optics & Photonics (AREA)
- Studio Devices (AREA)
- Measurement Of Optical Distance (AREA)
- Image Processing (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Image Analysis (AREA)
Abstract
Description
(1) Although Embodiments 1 to 5 above have been described for the case of moving images, they can also be applied to still images by capturing three images: the first image, the second image, and the third image. In Embodiment 4, the interpolation of the subject distance for each pixel of a blur region can be made applicable to still images by using subject distance compensation process 1.
11A First motion amount estimation unit
11B Second motion amount estimation unit
11C Third motion amount estimation unit
12A, 12B, 12C Motion amount determination unit
20 Corrected image generation unit
30 Subject distance measurement unit
40 Blur region determination unit
50 Subject distance compensation unit
60 HDR image generation unit
100, 300 Image processing apparatus
200 Video camera
Claims (12)
- 1. An image processing apparatus that measures a subject distance from a plurality of captured images obtained by capturing the same subject in a plurality of focus states, the image processing apparatus comprising: a target motion amount estimation unit that estimates a target motion amount representing an amount of positional displacement of the subject between a first image captured in a first focus state and a second image captured in a second focus state different from the first focus state, among the plurality of captured images; a corrected image generation unit that generates a corrected image by motion-compensating the second image based on the target motion amount; and a subject distance measurement unit that measures the subject distance in the first image based on a correlation value of a blur amount between the first image and the corrected image.
- 2. The image processing apparatus according to claim 1, wherein the image processing apparatus receives the first image, a third image captured in the first focus state at a timing different from that of the first image, and the second image captured between the first image and the third image, and the target motion amount estimation unit includes: a first motion amount estimation unit that estimates a first motion amount representing an amount of positional displacement of the subject between the first image and the third image; and a target motion amount determination unit that estimates the target motion amount using the first motion amount.
- 3. The image processing apparatus according to claim 2, wherein the target motion amount determination unit estimates the target motion amount by obtaining its magnitude by multiplying the magnitude of the first motion amount by the ratio of the shooting time interval between the first image and the second image to the shooting time interval between the first image and the third image.
- 4. The image processing apparatus according to claim 2, wherein the target motion amount estimation unit further includes a second motion amount estimation unit that estimates a second motion amount representing an amount of positional displacement between the first image and the second image, and the target motion amount determination unit estimates the target motion amount using the first motion amount and the second motion amount.
- 5. The image processing apparatus according to claim 4, wherein the target motion amount determination unit determines the accuracy of the second motion amount based on the difference between the pixel value of a calculation target pixel of the target motion amount among the pixels constituting the second image and the pixel value of the corresponding pixel in the first image; estimates the second motion amount as the target motion amount when the accuracy of the second motion amount is determined to be higher than a threshold; and estimates the target motion amount using the first motion amount when the accuracy of the second motion amount is determined to be lower than the threshold.
- 6. The image processing apparatus according to claim 4, wherein the target motion amount estimation unit further includes a third motion amount estimation unit that estimates a third motion amount representing an amount of positional displacement between the second image and the third image, and the target motion amount determination unit estimates the target motion amount using the third motion amount in addition to the first motion amount and the second motion amount.
- 7. The image processing apparatus according to claim 6, wherein the target motion amount determination unit estimates the second motion amount as the target motion amount when the sum of the second motion amount and the third motion amount is equal to the first motion amount; and, when the sum of the second motion amount and the third motion amount is not equal to the first motion amount, determines the accuracy of the second motion amount based on the difference between the pixel value of the pixel of the second image that is the calculation target of the target motion amount and the pixel value of the corresponding pixel in the first image, determines the accuracy of the third motion amount based on the difference between the pixel value of the pixel of the second image that is the calculation target of the target motion amount and the pixel value of the corresponding pixel in the third image, estimates the second motion amount as the target motion amount when the accuracy of the second motion amount is determined to be higher than a threshold, and estimates, as the target motion amount, a motion amount obtained by subtracting the third motion amount from the first motion amount when the accuracy of the second motion amount is determined to be lower than the threshold.
- 8. The image processing apparatus according to any one of claims 1 to 7, further comprising: a blur region determination unit that determines, based on the target motion amount, a region where blur has occurred as a blur region; and a subject distance compensation unit that measures, for each of the pixels constituting the blur region, the subject distance of the first image using the subject distance of a non-blur region, which is a region of the first image where the blur has not occurred, or the subject distance of another captured image for which the subject distance has been obtained in advance, wherein the subject distance measurement unit obtains the subject distance for each of the pixels constituting the non-blur region based on the correlation value of the blur amount between the first image and the corrected image.
- 9. An image processing apparatus that performs image processing using a plurality of captured images obtained by capturing the same subject in a plurality of shooting states, the image processing apparatus comprising: a target motion amount estimation unit that estimates a target motion amount representing an amount of positional displacement of the subject between a first image captured in a first shooting state and a second image captured in a second shooting state different from the first shooting state, among the plurality of captured images; a corrected image generation unit that generates a corrected image by motion-compensating the second image based on the target motion amount; and an image processing unit that performs image processing using the first image and the corrected image.
- 10. The image processing apparatus according to claim 9, wherein the image processing apparatus receives the first image captured in a first exposure state and the second image captured in a second exposure state, and the image processing unit performs, as the image processing, processing of combining the first image and the corrected image to generate a combined image having a wide dynamic range.
- 11. An image processing method for measuring a subject distance from a plurality of captured images obtained by capturing the same subject in a plurality of focus states, the image processing method comprising: a target motion amount estimation step of estimating a target motion amount representing an amount of positional displacement of the subject between a first image captured in a first focus state and a second image captured in a second focus state different from the first focus state, among the plurality of captured images; a corrected image generation step of generating a corrected image by motion-compensating the second image based on the target motion amount; and a subject distance measurement step of measuring the subject distance in the first image based on a correlation value of a blur amount between the first image and the corrected image.
- 12. An image processing method for performing image processing using a plurality of captured images obtained by capturing the same subject in a plurality of shooting states, the image processing method comprising: a target motion amount estimation step of estimating a target motion amount representing an amount of positional displacement of the subject between a first image captured in a first shooting state and a second image captured in a second shooting state different from the first shooting state, among the plurality of captured images; a corrected image generation step of generating a corrected image by motion-compensating the second image based on the target motion amount; and an image processing step of performing image processing using the first image and the corrected image.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/812,038 US9068831B2 (en) | 2011-05-27 | 2012-05-24 | Image processing apparatus and image processing method |
EP16176575.5A EP3101387B1 (en) | 2011-05-27 | 2012-05-24 | Image processing apparatus and image processing method |
CN201280002102.6A CN103026171B (zh) | 2011-05-27 | 2012-05-24 | Image processing apparatus and image processing method |
EP12792516.2A EP2717012B1 (en) | 2011-05-27 | 2012-05-24 | Image processing apparatus and image processing method |
JP2012540192A JP5934929B2 (ja) | 2011-05-27 | 2012-05-24 | Image processing apparatus and image processing method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2011-119049 | 2011-05-27 | ||
JP2011119049 | 2011-05-27 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2012164881A1 (ja) | 2012-12-06 |
Family
ID=47258758
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2012/003398 WO2012164881A1 (ja) | 2011-05-27 | 2012-05-24 | 画像処理装置および画像処理方法 |
Country Status (5)
Country | Link |
---|---|
US (1) | US9068831B2 (ja) |
EP (2) | EP2717012B1 (ja) |
JP (1) | JP5934929B2 (ja) |
CN (1) | CN103026171B (ja) |
WO (1) | WO2012164881A1 (ja) |
Families Citing this family (33)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012060434A (ja) * | 2010-09-09 | 2012-03-22 | Fuji Xerox Co Ltd | Pixel interpolation device, image reading device, and pixel interpolation program |
JP5587930B2 (ja) * | 2012-03-09 | 2014-09-10 | Hitachi Automotive Systems, Ltd. | Distance calculation device and distance calculation method |
KR102126355B1 (ko) * | 2013-06-04 | 2020-06-25 | Samsung Electronics Co., Ltd. | Radiation imaging apparatus and radiation image generation method |
US20150009355A1 (en) * | 2013-07-05 | 2015-01-08 | Himax Imaging Limited | Motion adaptive cmos imaging system |
US10136063B2 (en) * | 2013-07-12 | 2018-11-20 | Hanwha Aerospace Co., Ltd | Image stabilizing method and apparatus |
JP6270413B2 (ja) * | 2013-10-29 | 2018-01-31 | Canon Inc. | Image processing apparatus, imaging apparatus, and image processing method |
US20150116525A1 (en) * | 2013-10-31 | 2015-04-30 | Himax Imaging Limited | Method for generating high dynamic range images |
AU2013273843A1 (en) * | 2013-12-23 | 2015-07-09 | Canon Kabushiki Kaisha | Motion blur compensation for depth from defocus |
JP6136019B2 (ja) * | 2014-02-03 | 2017-05-31 | Panasonic Intellectual Property Management Co., Ltd. | Moving image capturing apparatus and focusing method for the same |
DE102014204360A1 (de) * | 2014-03-10 | 2015-09-10 | Ford Global Technologies, Llc | Method and device for estimating the distance of a moving vehicle from an object |
JP6432038B2 (ja) * | 2014-03-19 | 2018-12-05 | Panasonic Intellectual Property Management Co., Ltd. | Imaging apparatus |
JP6395429B2 (ja) * | 2014-04-18 | 2018-09-26 | Canon Inc. | Image processing apparatus, control method thereof, and storage medium |
JP6317635B2 (ja) * | 2014-06-30 | 2018-04-25 | Toshiba Corp. | Image processing apparatus, image processing method, and image processing program |
FR3028611A1 (fr) * | 2014-11-13 | 2016-05-20 | Valeo Schalter & Sensoren Gmbh | Device and method for determining the positions of points in a three-dimensional environment, obstacle detection device, and vehicle equipped with such a device |
US20160232672A1 (en) * | 2015-02-06 | 2016-08-11 | Qualcomm Incorporated | Detecting motion regions in a scene using ambient-flash-ambient images |
US9684970B2 (en) * | 2015-02-27 | 2017-06-20 | Qualcomm Incorporated | Fast adaptive estimation of motion blur for coherent rendering |
CN107408302B (zh) * | 2015-03-02 | 2020-09-01 | Mitsubishi Electric Corp. | Image processing apparatus and image processing method |
JP2016170522A (ja) * | 2015-03-11 | 2016-09-23 | Toshiba Corp. | Moving object detection apparatus |
AU2015202286A1 (en) | 2015-05-01 | 2016-11-17 | Canon Kabushiki Kaisha | Method, system and apparatus for determining distance to an object in a scene |
US10091436B2 (en) * | 2015-05-13 | 2018-10-02 | Samsung Electronics Co., Ltd. | Electronic device for processing image and method for controlling the same |
CN106289157B (zh) * | 2015-06-12 | 2019-10-29 | Lenovo (Beijing) Co., Ltd. | Information processing method and electronic device |
JP6608763B2 (ja) * | 2015-08-20 | 2019-11-20 | Toshiba Corp. | Image processing apparatus and imaging apparatus |
CN105258673B (zh) * | 2015-11-02 | 2017-05-31 | Nanjing University of Aeronautics and Astronautics | Target ranging method and device based on binocular synthetic-aperture focused images |
WO2018209603A1 (zh) * | 2017-05-17 | 2018-11-22 | Shenzhen A&E Intelligent Technology Institute Co., Ltd. | Image processing method, image processing device, and storage medium |
US10834309B2 (en) * | 2017-11-16 | 2020-11-10 | Canon Kabushiki Kaisha | Lens control apparatus and control method for tracking moving object |
CN107945112B (zh) * | 2017-11-17 | 2020-12-08 | Zhejiang Dahua Technology Co., Ltd. | Panoramic image stitching method and device |
US10594940B1 (en) * | 2018-01-12 | 2020-03-17 | Vulcan Inc. | Reduction of temporal and spatial jitter in high-precision motion quantification systems |
CN109688322B (zh) * | 2018-11-26 | 2021-04-02 | Vivo Mobile Communication (Hangzhou) Co., Ltd. | Method, device, and mobile terminal for generating a high dynamic range image |
US11044404B1 (en) | 2018-11-28 | 2021-06-22 | Vulcan Inc. | High-precision detection of homogeneous object activity in a sequence of images |
US10872400B1 (en) | 2018-11-28 | 2020-12-22 | Vulcan Inc. | Spectral selection and transformation of image frames |
EP3820138A1 (en) * | 2019-11-06 | 2021-05-12 | Koninklijke Philips N.V. | A system for performing image motion compensation |
US20230034727A1 (en) * | 2021-07-29 | 2023-02-02 | Rakuten Group, Inc. | Blur-robust image segmentation |
DE102022117726A1 (de) | 2022-07-15 | 2024-01-18 | Carl Zeiss Industrielle Messtechnik Gmbh | Coordinate measuring machine, method, and computer program product |
Family Cites Families (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4065778A (en) * | 1976-06-17 | 1977-12-27 | Eastman Kodak Company | Automatic rangefinder and focusing apparatus |
JPH07109625B2 (ja) * | 1985-04-17 | 1995-11-22 | Hitachi, Ltd. | Three-dimensional stereoscopic viewing method |
DE3905619C2 (de) * | 1988-02-23 | 2000-04-13 | Olympus Optical Co | Image input/output device |
JP3218730B2 (ja) * | 1992-10-19 | 2001-10-15 | Nikon Corp. | Focus adjustment device having a prediction function |
US5539493A (en) * | 1992-12-15 | 1996-07-23 | Nikon Corporation | Autofocus camera |
JP3450449B2 (ja) * | 1994-07-18 | 2003-09-22 | Canon Inc. | Imaging apparatus and imaging method thereof |
JP3697745B2 (ja) * | 1995-06-30 | 2005-09-21 | Sony Corp. | Autofocus control apparatus and method |
JPH10311945A (ja) * | 1997-05-12 | 1998-11-24 | Canon Inc | Focus detection device |
JP2003066321A (ja) * | 2001-08-29 | 2003-03-05 | Mega Chips Corp | AF control apparatus and AF control method |
JP2004240054A (ja) * | 2003-02-04 | 2004-08-26 | Olympus Corp | Camera |
WO2006121088A1 (ja) * | 2005-05-10 | 2006-11-16 | Olympus Corporation | Image processing apparatus, image processing method, and image processing program |
US7538813B2 (en) * | 2005-05-11 | 2009-05-26 | Sony Ericsson Mobile Communications Ab | Digital cameras with triangulation autofocus systems and related methods |
EP1887382A1 (en) * | 2005-05-19 | 2008-02-13 | Olympus Corporation | Distance measuring apparatus, distance measuring method and distance measuring program |
US7912337B2 (en) * | 2005-11-02 | 2011-03-22 | Apple Inc. | Spatial and temporal alignment of video sequences |
US7903168B2 (en) * | 2006-04-06 | 2011-03-08 | Eastman Kodak Company | Camera and method with additional evaluation image capture based on scene brightness changes |
JP4806329B2 (ja) * | 2006-10-23 | 2011-11-02 | Sanyo Electric Co., Ltd. | Imaging apparatus and imaging method |
DE102006055908A1 (de) * | 2006-11-27 | 2008-05-29 | Adc Automotive Distance Control Systems Gmbh | Method for automatic high-beam control |
US8390733B2 (en) * | 2007-02-15 | 2013-03-05 | Panasonic Corporation | Imaging device with interchangeable lens and camera body for imaging device |
JP2009159092A (ja) | 2007-12-25 | 2009-07-16 | Canon Inc | Imaging apparatus and control method thereof |
US8098957B2 (en) * | 2008-02-13 | 2012-01-17 | Qualcomm Incorporated | Shared block comparison architechture for image registration and video coding |
KR20100013171A (ko) * | 2008-07-30 | 2010-02-09 | Samsung Digital Imaging Co., Ltd. | Method and apparatus for motion compensation of an autofocus area, and autofocus method and apparatus using the same |
JP5294805B2 (ja) * | 2008-11-05 | 2013-09-18 | Canon Inc. | Imaging system and lens apparatus |
US8339475B2 (en) * | 2008-12-19 | 2012-12-25 | Qualcomm Incorporated | High dynamic range image combining |
JP5206494B2 (ja) * | 2009-02-27 | 2013-06-12 | Ricoh Co., Ltd. | Imaging apparatus, image display apparatus, imaging method, image display method, and method of correcting the position of a focus area frame |
US8390698B2 (en) * | 2009-04-08 | 2013-03-05 | Panasonic Corporation | Image capturing apparatus, reproduction apparatus, image capturing method, and reproduction method |
US8482622B2 (en) * | 2009-08-27 | 2013-07-09 | Sony Corporation | Method, system and computer program product for reducing motion blur |
JP5829018B2 (ja) * | 2010-11-22 | 2015-12-09 | Olympus Corp. | Imaging apparatus |
JP5867996B2 (ja) * | 2010-12-27 | 2016-02-24 | Canon Inc. | Focus detection apparatus and imaging apparatus having the same |
- 2012-05-24 WO PCT/JP2012/003398 patent/WO2012164881A1/ja active Application Filing
- 2012-05-24 CN CN201280002102.6A patent/CN103026171B/zh active Active
- 2012-05-24 JP JP2012540192A patent/JP5934929B2/ja active Active
- 2012-05-24 EP EP12792516.2A patent/EP2717012B1/en not_active Not-in-force
- 2012-05-24 EP EP16176575.5A patent/EP3101387B1/en not_active Not-in-force
- 2012-05-24 US US13/812,038 patent/US9068831B2/en not_active Expired - Fee Related
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000050173A (ja) * | 1998-07-24 | 2000-02-18 | Olympus Optical Co Ltd | Imaging apparatus and recording medium storing an imaging program |
JP2002259987A (ja) * | 2001-03-06 | 2002-09-13 | Sony Corp | Image processing apparatus and method, storage medium, and computer program |
JP2002296494A (ja) * | 2001-03-30 | 2002-10-09 | Minolta Co Ltd | Imaging position detection program and camera |
JP2003134385A (ja) * | 2001-10-23 | 2003-05-09 | Olympus Optical Co Ltd | Image combining apparatus |
JP2007036743A (ja) * | 2005-07-27 | 2007-02-08 | Matsushita Electric Works Ltd | Multiple-image combining method and imaging apparatus |
JP2010183207A (ja) * | 2009-02-03 | 2010-08-19 | Panasonic Electric Works Co Ltd | Image processing method and image processing apparatus |
JP2010249794A (ja) | 2009-03-26 | 2010-11-04 | Kyocera Corp | Subject distance measuring apparatus |
Non-Patent Citations (4)
Title |
---|
M. SUBBARAO; G. SURYA: "Depth from Defocus: A Spatial Domain Approach", INTERNATIONAL JOURNAL OF COMPUTER VISION, vol. 13, no. 3, 1994, pages 271 - 294 |
S. HIURA; T. MATSUYAMA: "Multi-Focus Range Finder with Coded Aperture", TRANSACTIONS OF INSTITUTE OF ELECTRONICS, INFORMATION AND COMMUNICATION ENGINEERS OF JAPAN, vol. J82-D-II, no. 11, November 1999 (1999-11-01), pages 1912 - 1920
See also references of EP2717012A4 |
T. MATSUYAMA; T. TAKEMURA: "Real-Time Depth Sensing from Multi-Focus Images", TRANSACTIONS OF INFORMATION PROCESSING SOCIETY OF JAPAN, vol. 39, no. 7, July 1998 (1998-07-01), pages 2149 - 2158 |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2014171051A1 (ja) * | 2013-04-15 | 2014-10-23 | Panasonic Corporation | Distance measurement apparatus and distance measurement method |
US9467616B2 (en) | 2013-04-15 | 2016-10-11 | Panasonic Intellectual Property Management Co., Ltd. | Distance measurement apparatus and distance measurement method |
JPWO2014171051A1 (ja) * | 2013-04-15 | 2017-02-16 | Panasonic Intellectual Property Management Co., Ltd. | Distance measurement apparatus and distance measurement method |
JP2015036632A (ja) * | 2013-08-12 | 2015-02-23 | Canon Inc. | Distance measuring device, imaging device, and distance measuring method |
WO2015053113A1 (ja) * | 2013-10-08 | 2015-04-16 | Olympus Corporation | Imaging device and electronic device |
JPWO2015053113A1 (ja) * | 2013-10-08 | 2017-03-09 | Olympus Corporation | Imaging device and electronic device |
US9888177B2 (en) | 2013-10-08 | 2018-02-06 | Olympus Corporation | Imaging device and electronic device |
JP2015152484A (ja) * | 2014-02-17 | 2015-08-24 | Canon Inc. | Distance measuring device, imaging device, distance measuring method, and program |
JP2016114528A (ja) * | 2014-12-16 | 2016-06-23 | Toyo Tire & Rubber Co., Ltd. | Fluid measurement method, fluid measurement device, fluid measurement data generation method, fluid measurement data generation device, and computer program |
Also Published As
Publication number | Publication date |
---|---|
EP2717012B1 (en) | 2016-07-27 |
CN103026171B (zh) | 2016-03-16 |
EP3101387A1 (en) | 2016-12-07 |
US9068831B2 (en) | 2015-06-30 |
EP2717012A4 (en) | 2015-06-24 |
EP2717012A1 (en) | 2014-04-09 |
CN103026171A (zh) | 2013-04-03 |
JPWO2012164881A1 (ja) | 2015-02-23 |
EP3101387B1 (en) | 2018-09-12 |
JP5934929B2 (ja) | 2016-06-15 |
US20130121537A1 (en) | 2013-05-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5934929B2 (ja) | 画像処理装置および画像処理方法 | |
US9092875B2 (en) | Motion estimation apparatus, depth estimation apparatus, and motion estimation method | |
JP6140935B2 (ja) | 画像処理装置、画像処理方法、画像処理プログラム、および撮像装置 | |
KR101233013B1 (ko) | 화상 촬영 장치 및 그 거리 연산 방법과 합초 화상 취득 방법 | |
JP6245885B2 (ja) | 撮像装置およびその制御方法 | |
JP4782899B2 (ja) | 視差検出装置、測距装置及び視差検出方法 | |
JP6436783B2 (ja) | 画像処理装置、撮像装置、画像処理方法、プログラム、および、記憶媒体 | |
US20070189750A1 (en) | Method of and apparatus for simultaneously capturing and generating multiple blurred images | |
WO2012056686A1 (ja) | 3次元画像補間装置、3次元撮像装置および3次元画像補間方法 | |
JP2019510234A (ja) | 奥行き情報取得方法および装置、ならびに画像取得デバイス | |
WO2011158508A1 (ja) | 画像処理装置および画像処理方法 | |
JP6786225B2 (ja) | 画像処理装置、撮像装置および画像処理プログラム | |
US9619886B2 (en) | Image processing apparatus, imaging apparatus, image processing method and program | |
JP2014044408A (ja) | 撮像装置、距離情報取得方法およびプログラム | |
JP6071257B2 (ja) | 画像処理装置及びその制御方法、並びにプログラム | |
JP7378219B2 (ja) | 撮像装置、画像処理装置、制御方法、及びプログラム | |
JP2013044844A (ja) | 画像処理装置および画像処理方法 | |
JP2017092983A (ja) | 画像処理装置、画像処理方法、画像処理プログラム、および撮像装置 | |
US20140192163A1 (en) | Image pickup apparatus and integrated circuit therefor, image pickup method, image pickup program, and image pickup system | |
US11348271B2 (en) | Image processing device and three-dimensional measuring system | |
JP2014022806A (ja) | 撮像装置および撮像装置制御方法 | |
CN107845108B (zh) | 一种光流值计算方法、装置及电子设备 | |
JP2022173069A (ja) | 画像処理装置及び方法、及び、撮像装置及びその制御方法、プログラム、記憶媒体 | |
WO2016194576A1 (ja) | 情報処理装置および方法 | |
JP7479840B2 (ja) | 画像処理装置、画像処理方法およびプログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 201280002102.6 Country of ref document: CN |
|
ENP | Entry into the national phase |
Ref document number: 2012540192 Country of ref document: JP Kind code of ref document: A |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 12792516 Country of ref document: EP Kind code of ref document: A1 |
|
REEP | Request for entry into the european phase |
Ref document number: 2012792516 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 13812038 Country of ref document: US Ref document number: 2012792516 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |