WO2014064875A1 - 画像処理装置および画像処理方法 - Google Patents
- Publication number
- WO2014064875A1 (PCT/JP2013/004965)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- phase difference
- image processing
- image
- parallax
- distribution
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C3/00—Measuring distances in line of sight; Optical rangefinders
- G01C3/02—Details
- G01C3/06—Use of electric means to obtain final indication
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C3/00—Measuring distances in line of sight; Optical rangefinders
- G01C3/10—Measuring distances in line of sight; Optical rangefinders using a parallactic triangle with variable angles and a base of fixed length in the observation station, e.g. in the instrument
- G01C3/14—Measuring distances in line of sight; Optical rangefinders using a parallactic triangle with variable angles and a base of fixed length in the observation station, e.g. in the instrument with binocular observation at a single point, e.g. stereoscopic type
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S11/00—Systems for determining distance or velocity not using reflection or reradiation
- G01S11/12—Systems for determining distance or velocity not using reflection or reradiation using electromagnetic waves other than radio waves
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/593—Depth or shape recovery from multiple images from stereo images
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G06T2207/10012—Stereo images
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20016—Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N2013/0074—Stereoscopic image analysis
- H04N2013/0081—Depth or disparity estimation from stereoscopic image signals
Definitions
- the present technology relates to an image processing apparatus and an image processing method capable of acquiring distance information of a measurement target.
- Patent Document 1 describes a distance calculation method using a technique called stereo matching.
- a phase difference between the parallax images is obtained.
- the process for obtaining the phase difference sequentially moves the local region (unit region) to be compared in the horizontal direction, and obtains, as the phase difference, the positional shift (pixel shift, disparity) between the parallax images at which the correlation between the unit regions is strongest within the comparison range.
- the distance can also be calculated from a plurality of parallax images along an arbitrary angle direction in the image.
- the measurement accuracy of the phase difference between parallax images is determined by the distance between the object and the camera.
- the measurement accuracy of the phase difference decreases due to the influence of camera noise, disturbances, and the like, and the distance calculation accuracy decreases accordingly.
- an object of the present technology is to provide an image processing apparatus and an image processing method capable of improving distance calculation accuracy.
- an image processing apparatus includes a parallax information generation unit.
- the parallax information generation unit is configured to generate parallax information based on a first phase difference distribution generated in units of pixels for the first parallax image and the second parallax image, and a second phase difference distribution generated in units of subpixels based on the first phase difference distribution.
- the parallax information is generated based on the first phase difference distribution generated in units of pixels and the second phase difference distribution generated in units of subpixels using the first phase difference distribution. This makes it possible to acquire parallax information that is more robust against disturbances such as camera noise and more accurate.
- the term "sub-pixel" is sometimes used to mean a color pixel, but in this specification it means a pixel unit smaller than one pixel. Accordingly, when each color pixel is one pixel, a sub-pixel means a smaller pixel unit.
- the image processing apparatus may further include a phase difference calculation unit that generates the second phase difference distribution based on the first phase difference distribution.
- the phase difference calculation unit may be configured to generate the second phase difference distribution by correcting the first phase difference distribution with edge information detected from the first phase difference distribution. Since the luminance difference is large near an edge portion, the phase difference in units of subpixels is easy to estimate there.
- for example, a second phase difference distribution may be generated by matching the local luminance distributions of the edge portions calculated for the respective parallax images. In this way, the phase difference in units of subpixels can be detected with high accuracy.
- the phase difference calculation unit may generate the second phase difference distribution by calculating a correlation value between the luminance distribution in a first pixel group constituting the first parallax image and the luminance distribution in a second pixel group constituting the second parallax image and corresponding to the first pixel group. The phase difference in units of subpixels can also be detected by such a method.
- for the calculation of the correlation value, an evaluation function related to the luminance difference between a plurality of adjacent pixels when the first pixel group and the second pixel group are overlaid on each other in units of pixels may be used.
- as the evaluation function, the length of the broken line obtained when the luminance values of the plurality of pixels are connected can be used.
- alternatively, the surface area of a predetermined three-dimensional surface obtained when the luminance values of the plurality of pixels are connected can be used.
- the parallax information is typically distance information of a measurement target. Thereby, various information processing based on the distance information can be executed.
- the image processing apparatus may further include a control unit that generates a refocus image using the distance information generated by the parallax information generation unit. Thereby, a desired refocus image can be generated with high accuracy.
- the image processing apparatus may further include an imaging unit that acquires a plurality of parallax images having different viewpoints.
- the parallax information generation unit can generate parallax information using a plurality of parallax images acquired by the imaging unit.
- An image processing method includes obtaining a first parallax image and a second parallax image. For the first parallax image and the second parallax image, parallax information is generated based on a first phase difference distribution generated in units of pixels and a second phase difference distribution generated in units of subpixels based on the first phase difference distribution.
- the first and second parallax images may be images prepared in advance or parallax images captured by an imaging device or the like.
- FIG. 1 is a schematic configuration diagram of an imaging apparatus according to an embodiment of the present technology. FIG. 2 is a functional block diagram showing a schematic configuration of the control unit shown in FIG. 1. FIG. 3 is a flowchart explaining the operation
- FIG. 10 is a diagram for describing the second phase difference distribution generation method according to the second embodiment of the present technology, where (A) shows the luminance distribution of a main part of the left viewpoint image and (B) shows the luminance distribution of a main part of the right viewpoint image. It is a schematic diagram explaining the processing procedure for generating
- FIG. 1 is a schematic diagram illustrating an overall configuration of an imaging apparatus according to an embodiment of the present technology.
- the imaging device 1 generates and outputs image data (imaging data) Dout by imaging the imaging object (subject) 2 and performing predetermined image processing.
- the imaging device 1 includes an imaging unit 10 and a control unit 20.
- the imaging unit 10 includes two cameras 11 and 12, and the cameras 11 and 12 are configured to be able to acquire a plurality of parallax images having different viewpoints.
- the number of cameras is not limited to two and may be three or more.
- the plurality of cameras is not limited to being arranged linearly, and may be arranged in a matrix.
- each of the cameras 11 and 12 is typically configured as a two-dimensional solid-state imaging device, such as a CCD (Charge Coupled Device) or CMOS (Complementary Metal-Oxide-Semiconductor) sensor, in which a plurality of pixels are arranged in the horizontal and vertical directions.
- the cameras 11 and 12 generate imaging data D0 of the subject 2 and output it to the control unit 20.
- the control unit 20 has a function as an image processing unit.
- the control unit 20 generates image data Dout including parallax information by performing predetermined image processing on the imaging data D0 acquired by the imaging unit 10.
- the image processing method of the present embodiment is embodied in the control unit 20 and will be described below.
- the image processing program of this embodiment corresponds to a software implementation of each image processing function in the control unit 20.
- the software is composed of a program group for causing each computer to execute each image processing function.
- Each program may be used by being incorporated in advance in dedicated hardware, for example, or installed in a general-purpose personal computer or the like from a network or a recording medium.
- the control unit 20 generates a first phase difference distribution in units of pixels between the plurality of parallax images, and, based on the first phase difference distribution, generates a second phase difference distribution in units of subpixels between the plurality of parallax images.
- the control unit 20 generates parallax information based on the first phase difference distribution and the second phase difference distribution.
- the control unit 20 includes a defect correction unit 21, a clamp processing unit 22, a distance information acquisition unit 23, and a storage unit 24.
- the defect correction unit 21 corrects a defect such as a blackout included in the imaging data D0 (a defect caused by an abnormality in the cameras 11 and 12 itself).
- the clamp processing unit 22 performs black level setting processing (clamp processing) for each pixel data in the image data after defect correction by the defect correction unit 21. Note that color interpolation processing such as demosaic processing may be further performed on the captured image data after the clamp processing.
- the distance information acquisition unit 23 acquires predetermined distance information based on the imaging data D1 supplied from the clamp processing unit 22.
- the distance information acquisition unit 23 includes a phase difference calculation unit 231 and a parallax information generation unit 232.
- the phase difference calculation unit 231 generates (calculates) the first phase difference distribution DM1 and the second phase difference distribution DM2 based on the imaging data D1. That is, the phase difference calculation unit 231 generates a first phase difference distribution DM1 in units of pixels between the first parallax image and the second parallax image having different viewpoints, and based on the first phase difference distribution A second phase difference distribution DM2 is generated in units of subpixels between the first parallax image and the second parallax image.
- the parallax information generation unit 232 calculates the parallax information d including distance information based on the first and second phase difference distributions DM1 and DM2.
- the storage unit 24 is composed of, for example, a ROM (Read Only Memory) or a RAM (Random Access Memory), and stores programs, calculation data, setting parameters, and the like necessary for the operations of the above-described units constituting the control unit 20.
- the storage unit 24 may be provided outside the control unit 20. In this case, the storage unit 24 is controlled by the control unit 20.
- the storage unit 24 may be configured by an external storage device such as a hard disk drive.
- the control unit 20 may generate parallax information based on a first phase difference distribution and a second phase difference distribution prepared in advance. In this case, the first phase difference distribution and the second phase difference distribution are stored in the storage unit 24.
- [Operation of imaging apparatus] Next, the details of the control unit 20 will be described together with the operation of the imaging apparatus 1 of the present embodiment.
- the image processing method in the present embodiment includes obtaining a first parallax image and a second parallax image, and generating, for the first parallax image and the second parallax image, parallax information based on a first phase difference distribution generated in units of pixels and a second phase difference distribution generated in units of subpixels based on the first phase difference distribution.
- FIG. 3 is a flowchart showing a processing procedure of the control unit 20 (phase difference calculation unit 231).
- the processing of the control unit 20 includes a parallax image input step (step 1), an edge recognition step (step 2), an identical-edge estimation step (step 3), an edge angle estimation step (step 4), a matching processing step (step 5), and a disparity correction step (step 6).
- the camera 11 acquires the left image PL as the first parallax image, and the camera 12 acquires the right image PR as the second parallax image.
- the left image PL and the right image PR are obtained by simultaneously capturing the same subject 2.
- Control unit 20 receives input of imaging data D0 of both parallax images of left image PL and right image PR from imaging unit 10 (step 1).
- the clamp processing unit 22 performs clamping processing on the imaging data after the defect correction.
- the imaged data D1 after the clamping process is input to the distance information acquisition unit 23, where first and second phase difference distributions DM1 and DM2 are generated, respectively.
- the phase difference calculation unit 231 performs edge recognition on these two parallax images by edge processing, color recognition, distance measurement in units of one pixel, and the like (step 2).
- FIG. 4A illustrates a partial region of the left image PL acquired by the imaging unit 10
- FIG. 4B illustrates a partial region of the right image PR acquired by the imaging unit 10.
- Both images are images of the same subject (house in this example) 2 and the light and dark boundaries appearing in each image are recognized as edge portions EL and ER.
- the phase difference calculation unit 231 generates a first phase difference distribution in units of one pixel by calculating the phase difference between the left image PL and the right image PR. Since the cameras 11 and 12 have horizontally different viewpoints, the edge portion EL of the left image PL and the edge portion ER of the right image PR have different pixel positions in the horizontal direction; this positional difference corresponds to the parallax.
- for the generation of the first phase difference distribution, the following stereo matching technique is used. This is a method of obtaining the amount of movement of an object (the phase difference between the parallax images) by sequentially comparing local regions of the two parallax images, that is, by obtaining a correlation value (pixel correlation value) indicating the similarity between the images.
- the phase difference distribution DM is generated as follows. First, a unit region in one parallax image DC (partial image C1: center coordinates (x1, y1) in FIG. 5) is extracted and its position is fixed. Next, a unit region to be compared in the other parallax image DH (partial image H1: center coordinates (x1, y1) in FIG. 5) is extracted, and the position of this partial image H1 is sequentially moved in the horizontal direction within the comparison range H10 while the correlation value is calculated.
- the first phase difference distribution DM1 is obtained by repeatedly performing such calculation processing on the entire surface of the parallax images DC and DH while changing the position of the partial image C1.
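- The pixel-unit search described above can be sketched as follows. This is an illustrative reimplementation, not the patent's code: the function name, block size, and disparity range are assumptions, and SAD is used as the correlation value:

```python
import numpy as np

def pixel_disparity_map(left, right, block=2, max_disp=4):
    """Integer (one-pixel-unit) disparity map by horizontal block matching.

    For each unit region of `left`, a same-sized region of `right` is slid
    horizontally over the comparison range, and the shift whose SAD
    correlation value is smallest (strongest correlation) is kept.
    """
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    for y in range(block, h - block):
        for x in range(block, w - block):
            ref = left[y - block:y + block + 1, x - block:x + block + 1]
            best, best_d = None, 0
            for d in range(0, max_disp + 1):
                xs = x - d  # right-image region shifted left by d pixels
                if xs - block < 0:
                    break
                cand = right[y - block:y + block + 1, xs - block:xs + block + 1]
                cost = np.abs(ref - cand).sum()  # SAD correlation value
                if best is None or cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

With `block=1` the unit region is a 3 × 3 patch; larger blocks trade localization for robustness against noise.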
- the phase difference in the vertical direction may also be obtained.
- the correlation value may be calculated while fixing the unit region of one parallax image and sequentially moving the unit region to be compared in the other parallax image within the comparison range in the vertical direction.
- Various formulas can be used for calculating the correlation value; representative examples that can be used include SAD (Sum of Absolute Differences), SSD (Sum of Squared Differences), and NCC (Normalized Cross-Correlation).
- SAD and SSD indicate stronger correlation as the value becomes smaller (closer to 0) and weaker correlation as the value becomes larger (closer to ∞).
- NCC indicates that the correlation is stronger as the value is closer to 1, and the correlation is weaker as the value is closer to 0.
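- The three correlation values can be sketched minimally as follows (illustrative helper names, not from the patent text):

```python
import numpy as np

def sad(a, b):
    # Sum of Absolute Differences: 0 means identical patches.
    return float(np.abs(a - b).sum())

def ssd(a, b):
    # Sum of Squared Differences: 0 means identical patches.
    return float(((a - b) ** 2).sum())

def ncc(a, b):
    # Normalized Cross-Correlation: 1 means perfectly correlated patches.
    # (Assumes neither patch has zero variance.)
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Because NCC mean-centers and normalizes the patches, it is insensitive to per-camera gain and offset differences, at a slightly higher computational cost than SAD or SSD.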
- the first phase difference distribution DM1 in units of one pixel between the left image PL and the right image PR is generated.
- the edge processing is performed on the left image PL and the right image PR when the first phase difference distribution DM1 is generated, the phase difference can be easily calculated.
- the phase difference calculation unit 231 generates a second phase difference distribution DM2.
- edge information is detected from the first phase difference distribution DM1, and the first phase difference distribution DM1 is corrected with the edge information, thereby generating the second phase difference distribution DM2 (steps 3 to 6).
- the phase difference calculation unit 231 estimates that the edge portion EL and the edge portion ER are the same target from the first phase difference distribution DM1, and determines an area for performing sub-pixel detection (step 3).
- a 5 ⁇ 5 pixel region including the edge portions EL and ER is determined as a detection area.
- the phase difference calculation unit 231 generates the second phase difference distribution by matching the local luminance distribution of the edge portion EL calculated for the first parallax image (left image PL) with the local luminance distribution of the edge portion ER calculated for the second parallax image (right image PR) (steps 4 and 5).
- the phase difference calculation unit 231 measures the luminance of each pixel in the same detection area of the left image PL and the right image PR.
- FIG. 6 shows the luminance distribution in the detection area of the left image PL.
- in FIG. 6, the horizontal coordinate is the X axis, the vertical coordinate is the Y axis, and the magnitude of the luminance is expressed along the Z axis orthogonal to them.
- the five curves L1, L2, L3, L4, and L5 shown in FIG. 6 are approximate curves obtained by connecting the luminance values of the pixels in the detection area in units of rows (Y1, Y2, Y3, Y4, Y5).
- Each of the curves L1 to L5 has a luminance gradient at the edge portion EL. Since the edge portion EL straddles between pixels in each row, the phase difference of the edge portion EL cannot be detected with high accuracy in units of one pixel. Therefore, in the present embodiment, the sub-pixel level phase difference is detected by the following processing.
- the phase difference calculation unit 231 estimates the edge angle of the edge portion EL from the luminance distribution in the detection area (step 4).
- first, a sampling axis S orthogonal to the luminance axis (parallel to the XY plane) is set. Then, the angle θ formed by the sampling axis S and a reference axis parallel to the Y axis is changed, and the angle θans at which the luminance values of the curves L1 to L5, virtually sampled along the axial direction of the sampling axis S, match (or most nearly match) is estimated as the edge angle.
- here, the sampling axis S is set to an axis passing through the pixel located at the (X3, Y3) coordinates, but it is not limited to this.
- in the resulting distribution, the vertical axis represents luminance, and the horizontal axis corresponds to a composite axis (X − Y·tanθ) obtained by combining the vertical coordinate with the horizontal coordinate. FIG. 9B shows the distribution obtained at the estimated edge angle (θans).
- hereinafter, the composite luminance distribution obtained at the estimated edge angle (θans) is also referred to as the "evaluation luminance distribution".
- the estimation of the edge angle ( ⁇ ans) as described above is performed for each of the left image PL and the right image PR.
- next, the luminance at each point of the evaluation luminance distribution acquired for each of the left image PL and the right image PR is normalized by an appropriate arithmetic expression. This makes it possible to exclude the influence of individual differences between the cameras 11 and 12.
- the normalization arithmetic expression is not particularly limited, and the following arithmetic expression is used in the present embodiment.
- Left image luminance (Left Value) = (Left luminance − Left minimum value) / (Left maximum value − Left minimum value)
- Right image luminance (Right Value) = (Right luminance − Right minimum value) / (Right maximum value − Right minimum value)
- "Left maximum value" and "Left minimum value" correspond to the maximum value and the minimum value of the luminance in the evaluation luminance distribution of the left image, respectively, as shown in FIG.
- "Right maximum value" and "Right minimum value" correspond to the maximum value and the minimum value of the luminance in the evaluation luminance distribution of the right image, respectively, as shown in FIG.
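- As an illustration, one common normalization that removes per-camera gain and offset differences is min-max normalization; this sketch is an assumption and not necessarily the exact expression used in the embodiment:

```python
import numpy as np

def normalize_evaluation_profile(values):
    """Min-max normalize an evaluation luminance distribution to [0, 1].

    Normalizing the left and right profiles separately removes the effect
    of per-camera gain/offset differences before the matching step.
    """
    v = np.asarray(values, dtype=float)
    vmin, vmax = v.min(), v.max()
    if vmax == vmin:
        return np.zeros_like(v)  # flat profile: no edge information
    return (v - vmin) / (vmax - vmin)
```

After this step, the left and right evaluation luminance distributions share the same [0, 1] range and can be compared directly.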
- the phase difference calculation unit 231 executes matching processing of both images using the evaluation luminance distribution of the left image PL and the evaluation luminance distribution of the right image PR (step 5).
- FIG. 11 is an explanatory diagram showing an example of the matching process.
- the (X ⁇ Ytan ⁇ ) coordinates of the right image having the same luminance as the pixel d of the left image are not d, but are coordinates that are separated by “d ⁇ sub ”from d.
- the “d sub” corresponds to a phase difference in units of subpixels.
- note that fitting processing such as linear interpolation may be applied to the evaluation luminance distribution (the highly sampled edge) of each image in order to suppress the influence of noise contained in the camera images.
- the phase difference calculation unit 231 reconverts the values of "d_sub" acquired over the entire image into coordinate values of the original XY coordinate system, and corrects the phase difference distribution (step 6).
- the second phase difference distribution DM2 in units of subpixels is generated.
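- The estimation of d_sub between the two evaluation luminance distributions can be sketched as a one-dimensional search with linear interpolation; this is an illustrative simplification (names, the squared-error criterion, and the grid step are assumptions):

```python
import numpy as np

def subpixel_shift(left_profile, right_profile, search=0.5, step=0.01):
    """Estimate the sub-pixel offset d_sub between two luminance profiles.

    The right profile is resampled at fractional offsets within +/-`search`
    pixels (linear interpolation), and the offset minimizing the squared
    difference to the left profile is returned.
    """
    x = np.arange(len(left_profile), dtype=float)
    best_d, best_err = 0.0, None
    for d in np.arange(-search, search + step / 2, step):
        resampled = np.interp(x + d, x, right_profile)
        # ignore samples that fall outside the profile (clamped by np.interp)
        valid = (x + d >= 0) & (x + d <= x[-1])
        err = ((resampled - left_profile)[valid] ** 2).sum()
        if best_err is None or err < best_err:
            best_err, best_d = err, float(d)
    return best_d
```

Because the whole profile contributes to the error, a single noisy sample perturbs the estimate far less than a per-pixel comparison would.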
- the first and second phase difference distributions DM1 and DM2 generated by the phase difference calculation unit 231 are input to the parallax information generation unit 232.
- the phase difference calculation unit 231 may input a phase difference distribution obtained by combining the first and second phase difference distributions DM1 and DM2 to the parallax information generation unit 232.
- the parallax information generation unit 232 generates parallax information d based on the first phase difference distribution DM1 and the second phase difference distribution DM2.
- the disparity information d includes distance information.
- the distance information means information on the distance from the cameras 11 and 12 (photographing lenses) to an arbitrary reference position in the captured image corresponding to the captured data D0.
- the information on the distance includes information on the distance from the cameras 11 and 12 to the reference position or the focal distance on the object side of the imaging lens.
- the control unit 20 is configured to be able to output image data Dout including the parallax information d to the information processing apparatus.
- the information processing apparatus acquires data related to the distance of the subject 2 based on the image data Dout, and can thereby generate, for example, an image (refocus image) focused on an arbitrary position of the subject 2 with high accuracy.
- the subject is not limited to a stationary object, and may be a moving body that moves relative to the imaging apparatus 1. In this case, since information on the position, distance, moving speed, and moving direction of the moving body can be detected, the imaging apparatus 1 can be configured as an input device that causes the information processing apparatus to execute predetermined processing according to the movement of the moving body.
- as described above, in the present embodiment, the parallax information d is generated based on the first phase difference distribution DM1 generated in units of pixels and the second phase difference distribution DM2 generated in units of subpixels using the first phase difference distribution DM1. For this reason, parallax information d that is more robust against disturbances such as camera noise and more accurate can be acquired.
- in a conventional comparative method, the disparity (d_sub) having the highest correlation is estimated from a correlation value distribution acquired in units of pixels, and that distribution is greatly affected by camera noise. In contrast, in the present embodiment, since the phase difference in units of subpixels is calculated using a plurality of pixels in the detection area, it is less susceptible to noise, and distance information can thereby be obtained with high accuracy.
- in the present embodiment as well, the phase difference in units of one pixel is detected using existing techniques such as stereo matching.
- the phase difference calculation unit 231 generates the second phase difference distribution by calculating a correlation value between the luminance distribution in the first pixel group constituting the first parallax image and the luminance distribution in the second pixel group constituting the second parallax image and corresponding to the first pixel group.
- FIG. 13 is a diagram in which the pixel luminance values of the local region are expressed in a three-dimensional structure, where (A) shows the luminance distribution of the pixel group of the left image, and (B) shows the luminance distribution of the pixel group of the right image.
- the pixel luminance value based on the edge information of the parallax image described with reference to FIG. 6 can be used.
- as shown in FIG. 13, for the pixels R32 and L32 that are matched at the one-pixel level, neighboring pixels are used as matching evaluation target data.
- FIG. 14 shows data when the pixel luminance value of the pixel group of the left image and the pixel luminance value of the pixel group of the right image are superimposed on each other in pixel units.
- when the value of n is set large, the accuracy of parallax detection can be improved, but errors are likely to occur at portions where the difference between the two parallax images is large (such as occlusions). For this reason, an appropriate value is set in consideration of the calculation cost.
- when an evaluation function for evaluating the degree of matching at the sub-pixel level is defined as f(Δx, Δy) for arbitrary Δx and Δy, it suffices to obtain the Δx and Δy at which f(Δx, Δy) gives the best evaluation.
- the search range of ⁇ x and ⁇ y may be within a range of ⁇ 0.5 pixels when the accuracy of 1-pixel parallax detection is sufficiently high. If the performance is low, the search is performed in the range of ⁇ several pixels.
- the phase difference calculation unit 231 calculates the correlation value using an evaluation function related to the luminance difference between a plurality of adjacent pixels when the first pixel group and the second pixel group are overlaid on each other in units of pixels. In this embodiment, the length of the broken line obtained when the luminance values of a plurality of pixels are connected is used as the evaluation function.
- The matching evaluation data (R21, R22, R23, R31, R32, R33, R41, R42, R43, L11, L12, L13, L14, L21, L22, L23, L24, L31, L32, L33, L34, L41, L42, L43, and L44) are converted into a projected image viewed from the observation angle θ, and adjacent luminance values are connected by straight lines to generate a polyline shape as shown in the figure.
- the observation angle ⁇ is an angle formed by a reference axis parallel to the X axis (horizontal axis) and an observation axis parallel to the XY plane, but the reference axis can be arbitrarily set, and is not limited to the example illustrated.
- FIGS. 17 and 18 show examples in which matching is achieved at the sub-pixel level and the polyline length is minimized.
- If the luminance values of L11 and R21 are B 1 and B 2 , respectively, the length D1 of the line segment connecting L11 and R21 can be expressed by a formula in which the coefficient a appears.
- The coefficient a is a parameter for converting luminance into spatial distance information and affects the performance of the evaluation function; its value must therefore be chosen carefully so that the evaluation function performs well. The lengths of all the line segments (D1, D2, …, D24) are obtained in this way, and the polyline length computed over the range of width (2n + 1) centered on R32 becomes the evaluation function f(Δx, Δy).
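The polyline-length evaluation can be sketched as follows. This is a simplified version under stated assumptions: the projection onto the observation angle θ is omitted, and the samples are assumed already ordered along the observation axis; each segment combines spatial distance with the luminance difference scaled by the coefficient a, as in the text.

```python
import math

def polyline_length(samples, a=1.0):
    """Length of the polyline through consecutive (position, luminance) samples.

    Each segment contributes sqrt(dpos^2 + (a * dB)^2); the coefficient `a`
    converts a luminance difference into a spatial distance.  A sub-pixel
    shift that overlays the two pixel groups well yields a short polyline.
    """
    total = 0.0
    for (p0, b0), (p1, b1) in zip(samples, samples[1:]):
        total += math.sqrt((p1 - p0) ** 2 + (a * (b1 - b0)) ** 2)
    return total
```

Minimizing this length over candidate shifts is the "minimum polyline length" criterion described here.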
- Evaluation by the minimum-polyline-length method presupposes that the local region of the captured image is an aggregate of edges facing a certain direction. It is therefore difficult to evaluate regions without edges, but such regions may be excluded from evaluation because they carry little information for obtaining a sub-pixel result.
- the second phase difference distribution in units of subpixels is generated.
- the edge angle is separately estimated for the left image and the right image, and then the second phase difference distribution is generated by performing matching processing of both images.
- In contrast, here the second phase difference distribution based on the edge information is generated in a state where the left and right images are superimposed on each other. This embodiment also provides the same effect as the first embodiment.
- The phase difference calculation unit 231 generates the second phase difference distribution by calculating a correlation value between the luminance distribution in the first pixel group constituting the first parallax image and the luminance distribution in the second pixel group that constitutes the second parallax image and corresponds to the first pixel group.
- The phase difference calculation unit 231 calculates the correlation value using an evaluation function related to the luminance differences between a plurality of adjacent pixels when the first pixel group and the second pixel group are superimposed on each other in pixel units.
- This embodiment is different from the second embodiment in that the surface area of a predetermined three-dimensional surface obtained when the luminance values of a plurality of pixels are connected is used as an evaluation function.
- The matching evaluation data (R21, R22, R23, R31, R32, R33, R41, R42, R43, L11, L12, L13, L14, L21, L22, L23, L24, L31, L32, L33, L34, L41, L42, L43, and L44) are connected in a three-dimensional space to generate a three-dimensional surface shape as shown in the figure.
- FIG. 21 and FIG. 22 are examples when matching is achieved at the sub-pixel level and the area is minimized.
- In FIGS. 19 to 22, an example of a method for determining the surface area by joining the matching evaluation data in a three-dimensional space is shown.
- the area of the triangle composed of matching evaluation data L11, R21, L21 is S11
- the area of the triangle composed of matching evaluation data L11, R21, and L12 is S12.
- Triangles are constructed in the same manner, the areas of the 36 triangles in total from S11 to S94 are determined, and their sum is obtained.
- If the luminance values of L11, R21, and L21 are B 1 , B 2 , and B 3 , respectively, and the lengths of the line segments connecting L11 and R21, R21 and L21, and L21 and L11 are D 1 , D 2 , and D 3 , then D 1 , D 2 , and D 3 can be expressed by the following equations.
- D1 = √( Δx² + (1 − Δy)² + (a(B1 − B2))² )
- D2 = √( Δx² + Δy² + (a(B2 − B3))² )
- D3 = √( 1 + (a(B3 − B1))² )
- The coefficient a is a parameter for converting luminance into spatial distance information and affects the performance of the evaluation function; its value must therefore be chosen carefully so that the evaluation function performs well. From the lengths D1, D2, and D3 of the line segments thus obtained, the area S11 of the triangle is obtained using Heron's formula.
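The step above can be sketched directly from the three segment-length equations and Heron's formula. The function names are illustrative, not from the patent.

```python
import math

def segment_lengths(dx, dy, b1, b2, b3, a=1.0):
    """D1, D2, D3 from the equations above (luminances B1, B2, B3)."""
    d1 = math.sqrt(dx ** 2 + (1 - dy) ** 2 + (a * (b1 - b2)) ** 2)
    d2 = math.sqrt(dx ** 2 + dy ** 2 + (a * (b2 - b3)) ** 2)
    d3 = math.sqrt(1 + (a * (b3 - b1)) ** 2)
    return d1, d2, d3

def heron_area(d1, d2, d3):
    """Area of a triangle from its three side lengths (Heron's formula)."""
    s = (d1 + d2 + d3) / 2.0
    # clamp against tiny negative values from floating-point error
    return math.sqrt(max(s * (s - d1) * (s - d2) * (s - d3), 0.0))
```

Summing such areas over all 36 triangles (S11 through S94) yields the surface-area evaluation function f(Δx, Δy).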
- the second phase difference distribution in units of subpixels is generated. Also in this embodiment, the same effect as the first and second embodiments can be obtained.
- In particular, the correlation value can be obtained with higher accuracy than in the second embodiment, which uses the length of a predetermined polyline as the evaluation function.
- The control unit 20 develops the imaging data D0 acquired by the imaging unit 10 and generates the second phase difference distribution in sub-pixel units based on the developed parallax images.
- the second phase difference distribution may be generated by performing matching processing with the image before the development of the imaging data D0, that is, the RAW image.
- When performing matching evaluation on the RAW data as-is, each color component is separated, the surface area of the solid obtained when the luminance values of a plurality of pixels are connected is determined for each color, and the total of the surface areas of the colors may be used as the evaluation function.
- 23 to 26 are diagrams showing an example of RAW image matching evaluation.
- Here, an example in which four color pixels (RGGB) are arranged in one pixel will be described.
- FIG. 23 shows data when the pixel luminance value of each color of the pixel group of the left image and the pixel luminance value of each color of the pixel group of the right image are superimposed on each other in pixel units. For each color, the correlation value of the luminance distribution between the images is calculated by the same method as described above.
- FIG. 24 shows an example of a three-dimensional surface composed of R pixels
- FIG. 25 shows an example of a three-dimensional surface composed of G pixels
- FIG. 26 shows an example of a three-dimensional surface composed of B pixels.
- Δx and Δy are calculated that minimize the sum of the evaluation function fR(Δx, Δy) for the surface area of the three-dimensional surface composed of R pixels, the evaluation function fG(Δx, Δy) for the surface composed of G pixels, and the evaluation function fB(Δx, Δy) for the surface composed of B pixels.
- The Δx and Δy calculated as described above give the second phase difference distribution in sub-pixel units.
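The per-color combination can be sketched as below. The mapping and helper names are hypothetical; the text only specifies that the three per-color surface-area evaluations are summed and the sum is minimized.

```python
def raw_color_score(dx, dy, surface_area_fns):
    """Sum f_R + f_G + f_B of the per-color surface-area evaluations.

    `surface_area_fns` maps each color plane to its own evaluation
    function of the candidate shift; the (dx, dy) minimizing this sum is
    taken as the sub-pixel phase difference.
    """
    return sum(f(dx, dy) for f in surface_area_fns.values())
```

Any grid or iterative minimizer can then be run over `raw_color_score` instead of a single-plane evaluation.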
- In the above embodiments, the imaging unit 10 having the two cameras 11 and 12 has been described as an example, but an imaging unit with a different configuration may be used.
- an optical element capable of distributing light such as a liquid crystal lens array, a liquid lens array, and a diffractive lens array may be used.
- the image processing unit is incorporated in the imaging device.
- the present invention is not limited to this, and the image processing unit may be configured by an information processing device such as a PC (Personal Computer) terminal.
- the distance information of the measurement target is calculated by the method described above.
- In the above embodiments, the area of a predetermined triangle is used as the evaluation function, but the evaluation function is not limited to this; for example, the total area of a plurality of adjacent triangles, or a volume defined from the matching evaluation data, may be used as the evaluation function.
- The present technology may also adopt the following configurations.
- (1) An image processing apparatus including a parallax information generation unit that generates, for a first parallax image and a second parallax image, parallax information based on a first phase difference distribution generated in pixel units and a second phase difference distribution generated in sub-pixel units based on the first phase difference distribution.
- (2) The image processing apparatus according to (1), further including a phase difference calculation unit that generates the second phase difference distribution based on the first phase difference distribution.
- (3) The image processing apparatus according to (2), in which the phase difference calculation unit generates the second phase difference distribution by correcting the first phase difference distribution with edge information detected from the first phase difference distribution.
- (4) The image processing apparatus according to (3), in which the phase difference calculation unit generates the second phase difference distribution by matching the local luminance distribution of the edge portion calculated for the first parallax image against the local luminance distribution of the edge portion calculated for the second parallax image.
- (5) The image processing apparatus according to (2), in which the phase difference calculation unit generates the second phase difference distribution by calculating a correlation value between the luminance distribution in a first pixel group constituting the first parallax image and the luminance distribution in a second pixel group constituting the second parallax image and corresponding to the first pixel group.
- (6) The image processing apparatus according to (5), in which the phase difference calculation unit calculates the correlation value using an evaluation function related to the luminance differences between a plurality of adjacent pixels when the first pixel group and the second pixel group are superimposed on each other in pixel units.
- (7) The image processing apparatus according to (6), in which the phase difference calculation unit uses, as the evaluation function, the length of a polyline obtained when the luminance values of the plurality of pixels are connected.
- (8) The image processing apparatus according to (6), in which the phase difference calculation unit uses, as the evaluation function, the surface area of a predetermined three-dimensional surface obtained when the luminance values of the plurality of pixels are connected.
- (9) The image processing apparatus according to any one of (1) to (8), in which the parallax information generation unit generates distance information of a measurement target as the parallax information.
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Electromagnetism (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Geometry (AREA)
- Image Processing (AREA)
- Measurement Of Optical Distance (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
- Image Analysis (AREA)
- Studio Devices (AREA)
Abstract
Description
The parallax information generation unit generates parallax information, for a first parallax image and a second parallax image, based on a first phase difference distribution generated in pixel units and a second phase difference distribution generated in sub-pixel units based on the first phase difference distribution.
Since the luminance difference is large near edge portions, the phase difference in sub-pixel units is easy to estimate there.
This makes it possible to detect the phase difference in sub-pixel units with high accuracy.
The phase difference in sub-pixel units can also be detected by such a method.
This makes it possible to generate a desired refocused image with high accuracy.
In this case, the parallax information generation unit can generate the parallax information using a plurality of parallax images acquired by the imaging unit.
For the first parallax image and the second parallax image, parallax information is generated based on a first phase difference distribution generated in pixel units and a second phase difference distribution generated in sub-pixel units based on the first phase difference distribution.
The first and second parallax images may be images prepared in advance, or parallax images captured by an imaging device or the like.
FIG. 1 is a schematic diagram showing the overall configuration of an imaging apparatus according to an embodiment of the present technology. The imaging apparatus 1 images an imaging target (subject) 2 and applies predetermined image processing to generate and output image data (imaging data) Dout.
The imaging apparatus 1 includes an imaging unit 10 and a control unit 20.
Next, the detailed configuration of the control unit 20 will be described with reference to FIG. 2. FIG. 2 shows the functional block configuration of the control unit 20. The control unit 20 includes a defect correction unit 21, a clamp processing unit 22, a distance information acquisition unit 23, and a storage unit 24.
Next, the operation of the imaging apparatus 1 of this embodiment and the details of the control unit 20 will be described.
The method includes a step of generating parallax information, for the first parallax image and the second parallax image, based on a first phase difference distribution generated in pixel units and a second phase difference distribution generated in sub-pixel units based on the first phase difference distribution.
The phase difference calculation unit 231 performs edge processing on these two parallax images through edge detection, color recognition, distance measurement in 1-pixel units, and the like (step 2).
Next, the phase difference calculation unit 231 generates the second phase difference distribution DM2. In this embodiment, edge information is detected from the first phase difference distribution DM1, and the first phase difference distribution DM1 is corrected with that edge information to generate the second phase difference distribution DM2 (steps 3 to 6).
Left-image luminance (Left Value) = Left luminance × (Left maximum value / Left minimum value)
Right-image luminance (Right Value) = Right luminance × (Right maximum value / Right minimum value)
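The scaling above can be sketched as follows. The helper name is illustrative; it reproduces the normalization exactly as written (a gain equal to the patch's maximum divided by its minimum, applied independently to the left-image and right-image pixel groups), and assumes positive luminance values.

```python
def normalize_luminance(values):
    """Scale a patch's luminances by (maximum / minimum), per the equations above.

    `values` must be positive; the same gain is applied to every sample
    in the patch.
    """
    gain = max(values) / min(values)
    return [v * gain for v in values]
```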
The first and second phase difference distributions DM1 and DM2 generated by the phase difference calculation unit 231 are input to the parallax information generation unit 232. The phase difference calculation unit 231 may instead input a phase difference distribution obtained by combining the first and second phase difference distributions DM1 and DM2 to the parallax information generation unit 232. The parallax information generation unit 232 generates parallax information d based on the first phase difference distribution DM1 and the second phase difference distribution DM2.
Next, a second embodiment of the present technology will be described. In this embodiment, the method by which the control unit 20 (phase difference calculation unit 231) generates the second phase difference distribution (DM2) differs from that of the first embodiment described above. In the following, configurations that differ from the first embodiment are mainly described; configurations similar to those of the above embodiment are given the same reference numerals, and their description is omitted or simplified.
yr = (x+Δx) sin(-θ)+(y+Δy) cos(-θ)
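The equation above gives the y-coordinate of the shifted point (x + Δx, y + Δy) after rotation by −θ. A minimal sketch follows; note that the x-coordinate formula is an assumption taken from the standard rotation matrix, since only y_r appears in the text.

```python
import math

def rotate_shifted_point(x, y, dx, dy, theta):
    """Coordinates of (x + dx, y + dy) after rotation by -theta about the origin.

    y_r matches the equation in the text; x_r is the companion component
    from the standard 2-D rotation matrix (an assumption).
    """
    xs, ys = x + dx, y + dy
    xr = xs * math.cos(-theta) - ys * math.sin(-theta)
    yr = xs * math.sin(-theta) + ys * math.cos(-theta)
    return xr, yr
```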
Next, a third embodiment of the present technology will be described. In this embodiment, the method by which the control unit 20 (phase difference calculation unit 231) generates the second phase difference distribution (DM2) differs from those of the first and second embodiments described above. In the following, configurations that differ from the first and second embodiments are mainly described; configurations similar to those of the above embodiments are given the same reference numerals, and their description is omitted or simplified.
D2 = √( Δx² + Δy² + (a(B2 − B3))² )
D3 = √( 1 + (a(B3 − B1))² )
From the above, the evaluation function f(Δx, Δy) is:
f(Δx, Δy) = S11 + S12 + S13 + S14 + S21 + …… + S84 + S91 + S92 + S93 + S94
Next, a modification of the third embodiment will be described.
(1) An image processing apparatus including:
a parallax information generation unit that generates, for a first parallax image and a second parallax image, parallax information based on a first phase difference distribution generated in pixel units and a second phase difference distribution generated in sub-pixel units based on the first phase difference distribution.
(2) The image processing apparatus according to (1), further including
a phase difference calculation unit that generates the second phase difference distribution based on the first phase difference distribution.
(3) The image processing apparatus according to (2), in which
the phase difference calculation unit generates the second phase difference distribution by correcting the first phase difference distribution with edge information detected from the first phase difference distribution.
(4) The image processing apparatus according to (3), in which
the phase difference calculation unit generates the second phase difference distribution by matching the local luminance distribution of the edge portion calculated for the first parallax image against the local luminance distribution of the edge portion calculated for the second parallax image.
(5) The image processing apparatus according to (2), in which
the phase difference calculation unit generates the second phase difference distribution by calculating a correlation value between the luminance distribution in a first pixel group constituting the first parallax image and the luminance distribution in a second pixel group constituting the second parallax image and corresponding to the first pixel group.
(6) The image processing apparatus according to (5), in which
the phase difference calculation unit calculates the correlation value using an evaluation function related to the luminance differences between a plurality of adjacent pixels when the first pixel group and the second pixel group are superimposed on each other in pixel units.
(7) The image processing apparatus according to (6), in which
the phase difference calculation unit uses, as the evaluation function, the length of a polyline obtained when the luminance values of the plurality of pixels are connected.
(8) The image processing apparatus according to (6), in which
the phase difference calculation unit uses, as the evaluation function, the surface area of a predetermined three-dimensional surface obtained when the luminance values of the plurality of pixels are connected.
(9) The image processing apparatus according to any one of (1) to (8), in which
the parallax information generation unit generates distance information of a measurement target as the parallax information.
(10) The image processing apparatus according to (9), further including
a control unit that generates a refocused image using the distance information generated by the parallax information generation unit.
(11) The image processing apparatus according to any one of (1) to (10), further including
an imaging unit that acquires a plurality of parallax images with mutually different viewpoints.
(12) An image processing method including:
acquiring a first parallax image and a second parallax image; and
generating, for the first parallax image and the second parallax image, parallax information based on a first phase difference distribution generated in pixel units and a second phase difference distribution generated in sub-pixel units based on the first phase difference distribution.
2 … subject (imaging target)
10 … imaging unit
11, 12 … cameras
20 … control unit
231 … phase difference calculation unit
232 … parallax information generation unit
EL, ER … edge portions
PL … left image
PR … right image
Claims (12)
- 1. An image processing apparatus comprising:
a parallax information generation unit that generates, for a first parallax image and a second parallax image, parallax information based on a first phase difference distribution generated in pixel units and a second phase difference distribution generated in sub-pixel units based on the first phase difference distribution.
- 2. The image processing apparatus according to claim 1, further comprising
a phase difference calculation unit that generates the second phase difference distribution based on the first phase difference distribution.
- 3. The image processing apparatus according to claim 2, wherein
the phase difference calculation unit generates the second phase difference distribution by correcting the first phase difference distribution with edge information detected from the first phase difference distribution.
- 4. The image processing apparatus according to claim 3, wherein
the phase difference calculation unit generates the second phase difference distribution by matching the local luminance distribution of the edge portion calculated for the first parallax image against the local luminance distribution of the edge portion calculated for the second parallax image.
- 5. The image processing apparatus according to claim 2, wherein
the phase difference calculation unit generates the second phase difference distribution by calculating a correlation value between the luminance distribution in a first pixel group constituting the first parallax image and the luminance distribution in a second pixel group constituting the second parallax image and corresponding to the first pixel group.
- 6. The image processing apparatus according to claim 5, wherein
the phase difference calculation unit calculates the correlation value using an evaluation function related to the luminance differences between a plurality of adjacent pixels when the first pixel group and the second pixel group are superimposed on each other in pixel units.
- 7. The image processing apparatus according to claim 6, wherein
the phase difference calculation unit uses, as the evaluation function, the length of a polyline obtained when the luminance values of the plurality of pixels are connected.
- 8. The image processing apparatus according to claim 6, wherein
the phase difference calculation unit uses, as the evaluation function, the surface area of a predetermined three-dimensional surface obtained when the luminance values of the plurality of pixels are connected.
- 9. The image processing apparatus according to claim 1, wherein
the parallax information generation unit generates distance information of a measurement target as the parallax information.
- 10. The image processing apparatus according to claim 9, further comprising
a control unit that generates a refocused image using the distance information generated by the parallax information generation unit.
- 11. The image processing apparatus according to claim 1, further comprising
an imaging unit that acquires a plurality of parallax images with mutually different viewpoints.
- 12. An image processing method comprising:
acquiring a first parallax image and a second parallax image; and
generating, for the first parallax image and the second parallax image, parallax information based on a first phase difference distribution generated in pixel units and a second phase difference distribution generated in sub-pixel units based on the first phase difference distribution.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2014543128A JP6365303B2 (ja) | 2012-10-24 | 2013-08-22 | 画像処理装置および画像処理方法 |
CN201380048338.8A CN104641395B (zh) | 2012-10-24 | 2013-08-22 | 图像处理设备及图像处理方法 |
EP13848331.8A EP2913793B1 (en) | 2012-10-24 | 2013-08-22 | Image processing device and image processing method |
US14/427,403 US10134136B2 (en) | 2012-10-24 | 2013-08-22 | Image processing apparatus and image processing method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2012235121 | 2012-10-24 | ||
JP2012-235121 | 2012-10-24 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014064875A1 true WO2014064875A1 (ja) | 2014-05-01 |
Family
ID=50544262
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2013/004965 WO2014064875A1 (ja) | 2012-10-24 | 2013-08-22 | 画像処理装置および画像処理方法 |
Country Status (5)
Country | Link |
---|---|
US (1) | US10134136B2 (ja) |
EP (1) | EP2913793B1 (ja) |
JP (1) | JP6365303B2 (ja) |
CN (1) | CN104641395B (ja) |
WO (1) | WO2014064875A1 (ja) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10748264B2 (en) | 2015-09-09 | 2020-08-18 | Sony Corporation | Image processing apparatus and image processing method |
WO2021200190A1 (ja) | 2020-03-31 | 2021-10-07 | ソニーグループ株式会社 | 情報処理装置および方法、並びにプログラム |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI676920B (zh) * | 2018-11-22 | 2019-11-11 | 國家中山科學研究院 | 光斑影像精密比對定位方法 |
CN112866544B (zh) * | 2019-11-12 | 2022-08-12 | Oppo广东移动通信有限公司 | 相位差的获取方法、装置、设备及存储介质 |
CN112866548B (zh) * | 2019-11-12 | 2022-06-14 | Oppo广东移动通信有限公司 | 相位差的获取方法和装置、电子设备 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002221405A (ja) * | 2001-01-26 | 2002-08-09 | Sumitomo Electric Ind Ltd | 三次元位置計測方法及び三次元位置計測装置 |
WO2008050904A1 (fr) * | 2006-10-25 | 2008-05-02 | Tokyo Institute Of Technology | Procédé de génération d'image dans un plan de focalisation virtuel haute résolution |
JP2009224982A (ja) * | 2008-03-14 | 2009-10-01 | Sony Corp | 画像処理装置、画像処理プログラムおよび表示装置 |
JP2011171858A (ja) | 2010-02-16 | 2011-09-01 | Sony Corp | 画像処理装置、画像処理方法、画像処理プログラムおよび撮像装置 |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5334606B2 (ja) * | 2009-01-28 | 2013-11-06 | 三菱電機株式会社 | レーダ画像信号処理装置 |
CN101866497A (zh) * | 2010-06-18 | 2010-10-20 | 北京交通大学 | 基于双目立体视觉的智能三维人脸重建方法及系统 |
JP2012123296A (ja) * | 2010-12-10 | 2012-06-28 | Sanyo Electric Co Ltd | 電子機器 |
-
2013
- 2013-08-22 WO PCT/JP2013/004965 patent/WO2014064875A1/ja active Application Filing
- 2013-08-22 EP EP13848331.8A patent/EP2913793B1/en active Active
- 2013-08-22 CN CN201380048338.8A patent/CN104641395B/zh not_active Expired - Fee Related
- 2013-08-22 JP JP2014543128A patent/JP6365303B2/ja not_active Expired - Fee Related
- 2013-08-22 US US14/427,403 patent/US10134136B2/en not_active Expired - Fee Related
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002221405A (ja) * | 2001-01-26 | 2002-08-09 | Sumitomo Electric Ind Ltd | 三次元位置計測方法及び三次元位置計測装置 |
WO2008050904A1 (fr) * | 2006-10-25 | 2008-05-02 | Tokyo Institute Of Technology | Procédé de génération d'image dans un plan de focalisation virtuel haute résolution |
JP2009224982A (ja) * | 2008-03-14 | 2009-10-01 | Sony Corp | 画像処理装置、画像処理プログラムおよび表示装置 |
JP2011171858A (ja) | 2010-02-16 | 2011-09-01 | Sony Corp | 画像処理装置、画像処理方法、画像処理プログラムおよび撮像装置 |
Non-Patent Citations (1)
Title |
---|
See also references of EP2913793A4 |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10748264B2 (en) | 2015-09-09 | 2020-08-18 | Sony Corporation | Image processing apparatus and image processing method |
WO2021200190A1 (ja) | 2020-03-31 | 2021-10-07 | ソニーグループ株式会社 | 情報処理装置および方法、並びにプログラム |
Also Published As
Publication number | Publication date |
---|---|
US10134136B2 (en) | 2018-11-20 |
JPWO2014064875A1 (ja) | 2016-09-08 |
CN104641395A (zh) | 2015-05-20 |
EP2913793A1 (en) | 2015-09-02 |
CN104641395B (zh) | 2018-08-14 |
JP6365303B2 (ja) | 2018-08-01 |
EP2913793B1 (en) | 2020-06-03 |
US20150248766A1 (en) | 2015-09-03 |
EP2913793A4 (en) | 2016-08-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10958892B2 (en) | System and methods for calibration of an array camera | |
US10043290B2 (en) | Image processing to enhance distance calculation accuracy | |
CN107077743B (zh) | 用于阵列相机的动态校准的系统和方法 | |
US9686479B2 (en) | Method for combining multiple image fields | |
JP6365303B2 (ja) | 画像処理装置および画像処理方法 | |
CN104685513A (zh) | 根据使用阵列源捕捉的低分辨率图像的基于特征的高分辨率运动估计 | |
JP2014038151A (ja) | 撮像装置及び位相差検出方法 | |
US20220270210A1 (en) | Method and device for restoring image obtained from array camera | |
WO2015159791A1 (ja) | 測距装置および測距方法 | |
JP2019527495A (ja) | 立体画像キャプチャ | |
CN109089100A (zh) | 一种双目立体视频的合成方法 | |
WO2022107530A1 (ja) | 信号処理装置と信号処理方法およびプログラム | |
CN112866550B (zh) | 相位差获取方法和装置、电子设备、计算机可读存储介质 | |
Darvatkar et al. | Implementation of Barrel Distortion Correction Algorithm for Wide Angle Camera Based Systems | |
JP2023094234A (ja) | 複数画像超解像システム | |
JP2019158776A (ja) | 撮像装置、車両、及び撮像方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 13848331 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2014543128 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14427403 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2013848331 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |