WO2015182771A1 - Imaging device, image processing device, image processing method, and computer program - Google Patents
Imaging device, image processing device, image processing method, and computer program
- Publication number
- WO2015182771A1 (PCT application PCT/JP2015/065660)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- common feature
- feature amount
- captured image
- imaging
- Prior art date
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C3/00—Measuring distances in line of sight; Optical rangefinders
- G01C3/02—Details
- G01C3/06—Use of electric means to obtain final indication
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
Definitions
- the present invention relates to an imaging device, an image processing device, an image processing method, and a computer program.
- In one known configuration, a first imaging unit captures an image of an object containing both a visible light component and a near-infrared component, while a second imaging unit captures an image containing a visible light component but no near-infrared component.
- In the daytime, the distance to the subject is measured by a stereo matching method based on the visible light components; at night, an infrared pattern is projected onto the subject by a near-infrared auxiliary light source, and the distance to the subject is measured by a pattern projection method based on the near-infrared components.
- However, a near-infrared auxiliary light source for projecting the infrared pattern is required for the pattern projection method, so the apparatus configuration becomes complicated.
- The present invention has been made in view of such circumstances, and an object of the present invention is to provide an imaging device, an image processing device, an image processing method, and a computer program capable of simply obtaining parallax information by a stereo matching method.
- One aspect is an imaging method comprising: a step in which a first imaging unit receives light in a first wavelength range to acquire a first captured image; a step in which a second imaging unit receives light in a second wavelength range different from the first wavelength range to acquire a second captured image; an image correction step of acquiring a first corrected image and a second corrected image by correcting a difference between the first captured image and the second captured image; a common feature amount acquisition step of acquiring a first common feature amount and a second common feature amount based on a feature amount common to the first corrected image and the second corrected image; and a parallax information acquisition step of acquiring parallax information by stereo matching using the first common feature amount and the second common feature amount.
- In one imaging method, the first common feature amount and the second common feature amount are absolute values of differential values of luminance values.
- In another imaging method, the first common feature amount or the second common feature amount is acquired based on the absolute values of the differential values of all of, or at least one of, the R, G, and B luminance values.
- In another imaging method, the first wavelength range and the second wavelength range partially overlap, and information indicating the amount of received light at the wavelengths of the overlapping portion is used as the first common feature amount and the second common feature amount.
- In another imaging method, the imaging surface of the first imaging unit and the imaging surface of the second imaging unit are on different planes, and the parallax information acquisition step includes: a sub-step of determining a search pixel p1 in a first common feature amount image made up of the first common feature amount; a sub-step of obtaining, according to an epipolar constraint, a search pixel p2 in a second common feature amount image made up of the second common feature amount, corresponding to the search pixel p1; and a sub-step of calculating a degree of coincidence between the first common feature amount image and the second common feature amount image centered on the search pixels p1 and p2, and acquiring the parallax based on the degree of coincidence.
- In another imaging method, the parallax information acquisition step includes clustering using the pixel values of the first corrected image or the pixel values of the second corrected image together with the parallax values of the parallax image acquired by the stereo matching, and treating the pixel values of the first corrected image or the second corrected image surrounded by a set of pixel points resulting from the clustering as a region representing one object.
- Another aspect is an imaging device comprising: a first imaging unit that receives light in a first wavelength range and acquires a first captured image; a second imaging unit that receives light in a second wavelength range different from the first wavelength range and acquires a second captured image; an image correction unit that corrects a difference between the first captured image and the second captured image to obtain a first corrected image from the first captured image and a second corrected image from the second captured image; a common feature amount acquisition unit that acquires a first common feature amount and a second common feature amount from the first corrected image and the second corrected image, respectively; and a parallax information acquisition unit that acquires parallax information by stereo matching using the first common feature amount and the second common feature amount.
- Another aspect is an image processing device comprising: an image correction unit that holds a first captured image captured by receiving light in a first wavelength range and a second captured image captured by receiving light in a second wavelength range different from the first wavelength range, and corrects a difference between the captured images to obtain a first corrected image from the first captured image and a second corrected image from the second captured image; a common feature amount acquisition unit that acquires a first common feature amount and a second common feature amount based on a feature amount common to the first corrected image and the second corrected image; and a parallax information acquisition unit that acquires parallax information by stereo matching using the first common feature amount and the second common feature amount.
- Another aspect is a computer program, recorded on a non-volatile storage medium and executed by a computer, comprising: a step of receiving light in a first wavelength range to acquire a first captured image; a step of receiving light in a second wavelength range different from the first wavelength range to acquire a second captured image; an image correction step of correcting a difference between the first captured image and the second captured image to obtain a first corrected image and a second corrected image; a common feature amount acquisition step of acquiring a first common feature amount and a second common feature amount from the first corrected image and the second corrected image; and a parallax information acquisition step of acquiring parallax information by stereo matching using the first common feature amount and the second common feature amount.
- In one such program, recorded on a computer-readable storage medium, the first common feature amount and the second common feature amount are absolute values of differential values of luminance values.
- In another such program, recorded on a computer-readable storage medium, the first common feature amount or the second common feature amount is acquired based on the absolute values of the differential values of all of, or at least one of, the R, G, and B luminance values.
- FIG. 1 is a block diagram showing the configuration of an imaging apparatus 1 according to an embodiment of the present invention.
- FIG. 2 is a configuration diagram showing the configuration of the imaging apparatus 1 according to a first embodiment of the present invention. FIG. 3 shows the fields of view of the imaging units according to the first embodiment. FIG. 4 shows the vehicle coordinate system, which is the world coordinate system according to the first embodiment. FIG. 5 is a flowchart showing the procedure of the parameter acquisition process according to the first embodiment. FIG. 6 shows the coordinate systems according to the first embodiment.
- FIG. 1 is a block diagram showing a configuration of an imaging apparatus 1 according to an embodiment of the present invention.
- the imaging device 1 includes a first imaging unit 11, a second imaging unit 12, and an image processing device 20.
- the image processing apparatus 20 includes an image correction unit 21, a parameter acquisition unit 22, a common feature amount acquisition unit 23, and a parallax information acquisition unit 24.
- the first imaging unit 11 receives and captures light in the first wavelength range.
- the second imaging unit 12 receives and captures light in a second wavelength range different from the first wavelength range of the first imaging unit 11. Examples of combinations of light in the first wavelength range and light in the second wavelength range include the following light combination examples 1 and 2.
- Light combination example 1: “visible light” as the light in the first wavelength range and “far-infrared light” as the light in the second wavelength range.
- Light combination example 2: “visible light” as the light in the first wavelength range and “near-infrared light” as the light in the second wavelength range.
- light combinations other than the light combination examples 1 and 2 described above may be used.
- the first captured image captured by the first imaging unit 11 and the second captured image captured by the second imaging unit 12 are input to the image processing device 20.
- the image correction unit 21 corrects the difference between the first captured image and the second captured image.
- the parameter acquisition unit 22 acquires parameters for stereo matching.
- the common feature amount acquisition unit 23 acquires a common feature amount for each of the first corrected image obtained from the first captured image and the second corrected image obtained from the second captured image by the image correction unit 21.
- the parallax information acquisition unit 24 acquires parallax information by stereo matching using the common feature amount acquired for the first corrected image and the common feature amount acquired for the second corrected image by the common feature amount acquisition unit 23.
- the image processing apparatus 20 may be realized by dedicated hardware, or it may comprise a memory and a CPU (central processing unit), with the functions of the image processing apparatus 20 realized by the CPU executing a computer program.
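- To make the data flow concrete, the following is a minimal sketch of this pipeline in Python with OpenCV. The function bodies are placeholders invented for illustration (plain resizing and block matching stand in for the correction and epipolar search described later in the text); this is not the patent's implementation.

```python
import cv2
import numpy as np

def image_correction(ref_img, cmp_img):
    # Image correction unit 21 (placeholder): resample the comparison image
    # so its angle of view / pixel count match the reference image.
    h, w = ref_img.shape[:2]
    return ref_img, cv2.resize(cmp_img, (w, h))

def common_feature(img):
    # Common feature amount acquisition unit 23: |second derivative| of
    # luminance (first embodiment), shared by visible and far-infrared images.
    lap = cv2.Laplacian(img.astype(np.float32), cv2.CV_32F, ksize=5)
    return np.clip(np.abs(lap), 0, 255).astype(np.uint8)

def parallax_info(feat_ref, feat_cmp):
    # Parallax information acquisition unit 24 (placeholder): plain block
    # matching stands in for the epipolar search detailed later.
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    return matcher.compute(feat_ref, feat_cmp)
```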
- FIG. 2 is a configuration diagram showing the configuration of the imaging apparatus 1 according to the first embodiment of the present invention.
- the vehicle 30 is provided with a first imaging unit 11 and a second imaging unit 12.
- the first imaging unit 11 receives visible light to capture images, and the second imaging unit 12 receives far-infrared light to capture images.
- the first imaging unit 11 is installed at the center upper end of a windshield (Front shield) 31 of the vehicle 30.
- the second imaging unit 12 is installed at a position offset to the left from the center of the front bumper 32 of the vehicle 30.
- the first imaging unit 11 is a reference camera
- the second imaging unit 12 is a comparison camera.
- the image processing apparatus 20 is provided in a driving support unit 41 provided in the vehicle 30.
- the first captured image captured by the first imaging unit 11 and the second captured image captured by the second imaging unit 12 are input to the driving support unit 41.
- the first captured image and the second captured image input to the driving support unit 41 are input to the image processing device 20 provided in the driving support unit 41.
- the vehicle 30 is provided with a CAN (Controller Area Network) 42.
- the driving support unit 41 and the other control units 43 a to 43 f of the vehicle 30 are connected to the CAN 42.
- the driving support unit 41 transmits control signals through the CAN 42, for example to the brake control unit 43a and the electric power steering control unit 43b.
- Other examples of the control unit include an engine control unit and an inter-vehicle distance control unit.
- Alternatively, the first imaging unit 11 and the second imaging unit 12 may be connected to the CAN 42, with the first imaging unit 11 transmitting the first captured image and the second imaging unit 12 transmitting the second captured image to the driving support unit 41 via the CAN 42.
- Either the first imaging unit 11 or the second imaging unit 12 may be configured together with the driving support unit 41 as a single apparatus; for example, the first imaging unit 11 and the driving support unit 41 may be configured as an integrated device.
- FIG. 3 is a diagram illustrating a field of view (FOV) of the imaging unit according to the first embodiment.
- the field of view FOV11 of the first imaging unit 11 and the field of view FOV12 of the second imaging unit 12 are different, so the first captured image captured by the first imaging unit 11 and the second captured image captured by the second imaging unit 12 differ in angle of view and number of pixels. Therefore, in the first embodiment, in addition to the calibration that corrects distortion caused by the optical system of each of the imaging units 11 and 12, a correction process is performed for the difference in angle of view and number of pixels between the first captured image and the second captured image.
- the image correction unit 21 performs distortion correction. This distortion correction will be described.
- a distortion correction method in a general stereo camera will be described.
- in a general stereo camera, a target such as a chessboard pattern is shown to the left and right cameras, and the optical-system parameters “κ1, κ2, κ3, ..., s, f, kx, ky, cx, cy” are obtained by a calibration technique such as Tsai's or Zhang's method.
- “κ1, κ2, κ3, ...” are distortion-rate parameters for the terms proportional to “r², r⁴, r⁶, ...”.
- r is the distance from the optical axis coordinates (cx, cy) to the image coordinates (u, v), and is calculated by [Equation 1].
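- The body of [Equation 1] is not reproduced in this text; given the definitions above, it is presumably the Euclidean distance:

```latex
r = \sqrt{(u - c_x)^2 + (v - c_y)^2}
```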
- the optical axis coordinates (c x , c y ) are expressed in an image coordinate system with the upper left corner of the image as the origin.
- f is a focal length. The focal length f may be expressed as a ratio with respect to the pixel size k x .
- Next, the pan angle, pitch angle, and roll angle of each camera with respect to the world coordinate system are obtained.
- Then, parameters, or a dedicated LUT (look-up table), to be set in dedicated logic (an arithmetic circuit or arithmetic program) are obtained for an affine transformation that makes the focal length f and the pixel sizes kx, ky of the captured images of the left and right cameras common, and makes the imaging surfaces and epipolar lines parallel.
- the parameters or the dedicated LUT are set in the dedicated logic.
- the affine transformation performs a linear mapping such as rotation, enlargement or reduction, and shear, accompanied by a translation.
- the above is the distortion correction method in a general stereo camera.
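- As a concrete illustration of this calibration step, the sketch below uses OpenCV's implementation of Zhang's method. The chessboard size and image file names are assumptions; the recovered matrix K packs f/kx, f/ky, cx, cy, and dist packs radial distortion coefficients corresponding to κ1, κ2, κ3.

```python
import cv2
import numpy as np

pattern = (9, 6)  # inner-corner count of the chessboard (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts = [], []
for path in ["board0.png", "board1.png", "board2.png"]:  # hypothetical images
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

# rms: reprojection error; rvecs/tvecs: per-view extrinsics.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
```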
- FIG. 4 is a diagram illustrating a vehicle coordinate system that is a world coordinate system according to the first embodiment.
- a world coordinate system ⁇ [X, Y, Z], origin O 0 ⁇ is defined for the vehicle 30.
- Unlike the left and right cameras of a general stereo camera, the first imaging unit 11 (reference camera) and the second imaging unit 12 (comparison camera) differ in focal length f and pixel sizes kx, ky.
- Therefore, parameters, or a dedicated LUT, are set in the dedicated logic for an affine transformation that makes the focal length f and pixel sizes kx, ky of the comparison image (the second captured image captured by the second imaging unit 12) the same as those of the reference image (the first captured image captured by the first imaging unit 11).
- As a result, the matching operation in stereo matching can use a template having the same number of pixels for the reference image and the comparison image, and the parallax calculation can be performed simply by counting pixels.
- In a general stereo camera, the imaging surfaces of the left and right cameras are on the same plane, so the epipolar lines can be made parallel. With the first imaging unit 11 (reference camera) and the second imaging unit 12 (comparison camera), however, the imaging surfaces cannot be made coplanar; the captured images lie in a front-and-rear stepped arrangement. Therefore, in the first embodiment, a method described later based on epipolar geometry is used as the stereo matching method for the state in which the imaging surfaces are not on the same plane.
- the image correction unit 21 performs distortion correction (WARP) by applying the affine transformation to the comparison image (the second captured image from the second imaging unit 12) using the parameters or dedicated LUT obtained above.
- After the distortion correction, the focal length f and pixel sizes kx, ky of the comparison image (second corrected image) are the same as those of the reference image (the first corrected image obtained from the first captured image of the first imaging unit 11). The differences in angle of view and number of pixels between the first captured image and the second captured image are thereby corrected.
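- A minimal sketch of this WARP step, assuming the dedicated LUT is realized as per-pixel source coordinates derived from a 2×3 affine matrix; the matrix values, image size, and file name below are hypothetical.

```python
import cv2
import numpy as np

def build_affine_lut(h, w, A):
    # Per-pixel look-up table: entry (v, u) names the source coordinate in
    # the comparison image for destination pixel (u, v) on the reference grid.
    u, v = np.meshgrid(np.arange(w, dtype=np.float32),
                       np.arange(h, dtype=np.float32))
    map_x = A[0, 0] * u + A[0, 1] * v + A[0, 2]
    map_y = A[1, 0] * u + A[1, 1] * v + A[1, 2]
    return map_x, map_y

# Hypothetical affine matrix matching the comparison camera's focal
# length / pixel size to the reference camera's.
A = np.float32([[1.25, 0.0, -12.0],
                [0.0, 1.25,  -8.0]])
cmp_img = cv2.imread("far_infrared.png", cv2.IMREAD_GRAYSCALE)  # hypothetical
map_x, map_y = build_affine_lut(480, 640, A)
corrected2 = cv2.remap(cmp_img, map_x, map_y, cv2.INTER_LINEAR)
```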
- FIG. 5 is a flowchart showing a procedure of parameter acquisition processing according to the first embodiment.
- Step S11: The parameter acquisition unit 22 calculates the parameters (E1, K1, E2, K2). The calculation of these parameters is described below.
- FIG. 6 shows the coordinate systems according to the first embodiment: the world coordinate system (vehicle coordinate system) {[X, Y, Z], origin O0}; the reference camera (first imaging unit 11) coordinate system {[U1, V1, W1], origin O1 (focal point)}; the comparison camera (second imaging unit 12) coordinate system {[U2, V2, W2], origin O2 (focal point)}; the reference image (captured image of the first imaging unit 11) coordinate system {[u1, v1, 1], origin o1 (upper left of the captured image)}; and the comparison image (captured image of the second imaging unit 12) coordinate system {[u2, v2, 1], origin o2 (upper left of the captured image)}.
- S1 is the screen (imaging surface) of the reference camera (first imaging unit 11), [cx1, cy1, 1] is the image center of the screen S1, and 51 is the optical axis of the reference camera.
- S2 is the screen (imaging surface) of the comparison camera (second imaging unit 12), [cx2, cy2, 1] is the image center of the screen S2, and 52 is the optical axis of the comparison camera.
- 53 is an epipolar plane.
- e 1 and e 2 are epipolar points (sometimes called epipoles).
- (e1–e2) is an epipolar line.
- Let the position of the object 50 in the world coordinate system be P0, and the position of the object 50 in each camera coordinate system be Pc (c is 1 for the reference camera coordinate system or 2 for the comparison camera coordinate system). Then the relationship between the positions P0 and Pc of the object 50 is expressed by [Equation 2].
- Rc is a rotation matrix composed of the pan angle, pitch angle, and roll angle produced by attaching each camera (the first imaging unit 11 and the second imaging unit 12) to the vehicle 30.
- Tc represents the mounting position of each camera's principal point with respect to the origin O0 of the world coordinate system.
- Ec, when the transformation is written as in [Equation 3], is referred to as the external camera parameter.
- h is the third element w c of P c .
- f is the same focal length after adjustment on the screens S 1 and S 2 .
- k x and k y are the same pixel sizes after adjustment on the screens S 1 and S 2 .
- c x, c y is the image center on the screen S c (optical axis position).
- K c in [Expression 4] is referred to as an internal camera parameter.
- [Equation 10] is the equation for coordinate transformation from a position P1 in the reference camera coordinate system to the position P2 in the comparison camera coordinate system.
- R 0 and T 0 are coordinate transformation matrices, R 0 has 3 rows and 3 columns, and T 0 has 3 rows and 1 column.
- By [Equation 14], the matrix E0 of the epipolar equation (the essential matrix) is obtained.
- E 0 has 3 rows and 3 columns.
- Step S13: The parameter acquisition unit 22 calculates the fundamental matrix (F0).
- The calculation of the fundamental matrix (F0) is as follows. In FIG. 6, let p1 be the image coordinates of the object 50 on the screen S1 of the reference camera coordinate system and p2 the image coordinates of the object 50 on the screen S2 of the comparison camera coordinate system; then [Equation 15] holds.
- From this, the epipolar constraint line I2 (see FIG. 6) used when searching for the position p2 on the comparison image corresponding to the position p1 on the reference image can be obtained.
- the epipolar constraint I2 is obtained as a 3×1 matrix by [Equation 19].
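- The bodies of [Equation 2] through [Equation 19] are not reproduced in this text. Consistent with the definitions above, they presumably take the standard epipolar-geometry forms below; this is a reconstruction, not a verbatim quotation of the patent's equations.

```latex
P_c = R_c P_0 + T_c \qquad \text{([Eq. 2])}

E_c = [\, R_c \mid T_c \,] \qquad \text{(external parameters, [Eq. 3])}

p_c = \frac{1}{h} K_c P_c, \qquad
K_c = \begin{bmatrix} f/k_x & 0 & c_x \\ 0 & f/k_y & c_y \\ 0 & 0 & 1 \end{bmatrix}
\qquad \text{(internal parameters, [Eq. 4])}

P_2 = R_0 P_1 + T_0 \qquad \text{([Eq. 10])}

E_0 = [T_0]_{\times} R_0 \qquad \text{(essential matrix, [Eq. 14])}

p_2^{\top} F_0 \, p_1 = 0, \qquad F_0 = K_2^{-\top} E_0 K_1^{-1} \qquad \text{([Eq. 15])}

I_2 = F_0 \, p_1 \qquad \text{(epipolar line coefficients, [Eq. 19])}
```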
- the common feature amount acquisition unit 23 acquires a common feature amount from each of the first correction image and the second correction image in which the difference in the angle of view and the number of pixels is corrected by the image correction unit 21.
- The second corrected image (comparison image) has been corrected so that its focal length f and pixel sizes kx, ky are the same as those of the first corrected image (reference image).
- In the first embodiment, the first imaging unit 11 receives visible light and captures an image (visible light camera), and the second imaging unit 12 receives far-infrared light and captures an image (far-infrared camera).
- the luminance value of the captured image of the visible light camera is a value corresponding to the amount of visible light, and means the color and brightness of the object surface.
- the luminance value of the far-infrared camera is a value corresponding to the amount of far-infrared emitted from the object surface by black body radiation, and means the temperature of the object surface.
- In the first embodiment, information on the contour of an object is used as the common feature amount that exists in common in both captured images. Specifically, since the luminance value of each captured image varies greatly at the contour of an object, the absolute value of the differential value of the luminance value is used as the common feature amount.
- the common feature amount acquisition unit 23 performs a Laplacian filter process that is a second order differential in the u and v directions on each of the first correction image and the second correction image.
- Each Laplacian filter coefficient is multiplied one-to-one with the corresponding pixel of the corrected image, and the sum of the products is calculated (convolution).
- FIG. 7 is a chart showing an example of Laplacian filter coefficients according to the first embodiment.
- the example of FIG. 7 is a 5 ⁇ 5 Laplacian filter coefficient.
- the common feature amount acquisition unit 23 calculates the absolute value of the differential value of the first corrected image after the Laplacian filter processing. This absolute value is the common feature amount of the first corrected image.
- the image composed of the common feature amount of the first corrected image (first common feature amount image) is used for stereo matching in the parallax information acquisition unit 24.
- the common feature amount acquisition unit 23 calculates the absolute value of the differential value of the second corrected image after the Laplacian filter processing. This absolute value is the common feature amount of the second corrected image.
- the image composed of the common feature amount of the second corrected image (second common feature amount image) is used for stereo matching in the parallax information acquisition unit 24.
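- A minimal sketch of this step; since FIG. 7's exact coefficients are not reproduced here, a common 5×5 second-derivative kernel is assumed in their place.

```python
import cv2
import numpy as np

# 5x5 second-derivative kernel (assumed; FIG. 7's coefficients may differ).
KERNEL = np.array([[ 0,  0, -1,  0,  0],
                   [ 0, -1, -2, -1,  0],
                   [-1, -2, 16, -2, -1],
                   [ 0, -1, -2, -1,  0],
                   [ 0,  0, -1,  0,  0]], dtype=np.float32)

def common_feature_image(corrected):
    # Multiply coefficients one-to-one with pixels and sum (convolution),
    # then take the absolute value of the second derivative.
    lap = cv2.filter2D(corrected.astype(np.float32), cv2.CV_32F, KERNEL)
    return np.abs(lap)
```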
- the first captured image received by visible light and captured can be acquired as an R, G, B color image.
- The first corrected image corrected by the image correction unit 21 can therefore also be acquired as an R, G, B color image. As another embodiment, the common feature amount acquisition unit 23 first performs Laplacian filter processing on each of the R, G, and B planes of the first corrected image. As a result, data composed of the absolute values of the differential values of R, G, and B is calculated as the image composed of the first common feature amount. Based on the absolute-value data of the differential values of R, G, and B, the first common feature amount that is most effective for stereo matching can be obtained. All of the absolute-value data of the differential values of R, G, and B may be used, or any one or two of R, G, and B may be selected and used. As a result, more accurate stereo matching can be performed.
- the disparity information acquisition unit 24 performs stereo matching using the first common feature amount image and the second common feature amount image obtained by the common feature amount acquisition unit 23, and acquires disparity information.
- 8 and 9 are flowcharts of the disparity information acquisition method according to the first embodiment.
- the parallax information acquisition method according to the first embodiment will be described with reference to FIGS. 8 and 9.
- Step S21: The image processing apparatus 20 receives the first captured image captured by the first imaging unit 11 and the second captured image captured by the second imaging unit 12.
- Step S22: The image correction unit 21 performs distortion correction (WARP) on the first captured image and the second captured image input in step S21.
- the image correction unit 21 applies an affine transformation to the second captured image using the dedicated LUT.
- the dedicated LUT has been prepared in advance so that the affine transformation makes the focal length f and pixel sizes kx, ky of the second captured image the same as those of the first captured image.
- Step S23 The common feature amount acquisition unit 23 performs Laplacian filter processing on each of the first correction image and the second correction image obtained by the distortion correction in step S22.
- Step S24: The common feature amount acquisition unit 23 calculates the absolute value of the differential value of the first corrected image after the Laplacian filter processing in step S23, obtaining the first common feature amount image composed of these absolute values (the common feature amounts of the first corrected image). Similarly, it calculates the absolute value of the differential value of the second corrected image after the Laplacian filter processing in step S23, obtaining the second common feature amount image composed of these absolute values (the common feature amounts of the second corrected image).
- Step S25: The upper-left pixel of the first common feature amount image is set as the search start reference pixel.
- the search start reference pixel is the first search pixel of the first common feature amount image.
- Step S26: The parallax information acquisition unit 24 determines the epipolar constraint I2 used when searching for the position p2 on the second common feature amount image (comparison image) corresponding to the search pixel p1 of the first common feature amount image.
- the epipolar constraint I2 is obtained by [Equation 19] above.
- the epipolar constraint I2 gives the coefficients of the search line on the comparison image. Note that the parallax information acquisition unit 24 holds the fundamental matrix F0 calculated in advance.
- Step S27: The parallax information acquisition unit 24 sets the search start lateral position u2 in the second common feature amount image. This search start lateral position is the first search lateral position of the second common feature amount image.
- Step S28: The parallax information acquisition unit 24 determines the search vertical position v2 corresponding to the search lateral position u2 in the second common feature amount image according to the epipolar constraint I2, as shown in FIG. 10.
- FIG. 10 is a conceptual diagram of the parallax search method according to the first embodiment.
- the search vertical position v2 is calculated as “(c − (a × u2)) / b” using the elements (epipolar constraint line coefficients) a, b, and c of the epipolar constraint I2.
- Step S29: The parallax information acquisition unit 24 calculates the degree of coincidence between the first common feature amount image and the second common feature amount image centered on the pixels of interest p1 and p2.
- Methods of calculating the degree of coincidence include, for example, SSD (Sum of Squared Differences), SAD (Sum of Absolute Differences), NCC (Normalized Cross-Correlation), ZNCC (Zero-mean Normalized Cross-Correlation), and SGM (Semi-Global Matching).
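- As an illustration, here are sketches of two of these coincidence measures over equally sized patches; window extraction and normalization details vary by implementation.

```python
import numpy as np

def sad(p1, p2):
    # Sum of Absolute Differences: smaller means higher coincidence.
    return float(np.abs(p1 - p2).sum())

def zncc(p1, p2, eps=1e-9):
    # Zero-mean Normalized Cross-Correlation: larger means higher coincidence;
    # subtracting the means tolerates the gain/offset gap between
    # visible-light and far-infrared feature images.
    a, b = p1 - p1.mean(), p2 - p2.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + eps))
```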
- Step S30 The parallax information acquisition unit 24 increments the search lateral position u2 of the second common feature amount image.
- Step S31: The parallax information acquisition unit 24 determines whether the search lateral position u2 has reached the end of the second common feature amount image. If it has, the process proceeds to step S32 in FIG. 9; otherwise, the process returns to step S28.
- Step S32: The parallax information acquisition unit 24 obtains the pixel of interest p2 of the second common feature amount image that has the highest degree of coincidence among the degrees of coincidence calculated for the search pixel p1 of the first common feature amount image (the maximum coincidence comparison coordinate position).
- Step S33: The parallax information acquisition unit 24 obtains the distance from the search start position (initial pixel position) to the maximum coincidence comparison coordinate position in the second common feature amount image, divides the obtained distance by the pixel size kx, and sets the quotient as the parallax value for the search pixel p1 of the first common feature amount image.
- Step S34 The parallax information acquisition unit 24 stores the parallax value obtained in step S33 in a parallax image.
- the storage position of the parallax value is the same position as the search pixel p1 of the first common feature amount image.
- A parallax image corresponding to the first common feature amount image is obtained by the processing of FIGS. 8 and 9. From this parallax image, three-dimensional information about the subject (for example, the three-dimensional distance to the subject) can be acquired.
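- Putting steps S26 through S33 together, the following heavily simplified sketch searches one reference pixel along the epipolar line. It assumes single-channel float feature images, a precomputed fundamental matrix F0, SAD as the coincidence measure, a reference pixel at least win//2 pixels from the border, and reduces the search start and pixel-size division to their simplest forms.

```python
import numpy as np

def disparity_for_pixel(feat1, feat2, F0, u1, v1, win=5, k_x=1.0):
    # Simplified search for one reference pixel p1 = (u1, v1), steps S26-S33.
    r = win // 2
    h, w = feat2.shape
    a, b, c = F0 @ np.array([u1, v1, 1.0])      # S26: epipolar line I2
    if abs(b) < 1e-9:
        return None                             # degenerate (vertical) line
    tmpl = feat1[v1 - r:v1 + r + 1, u1 - r:u1 + r + 1]
    best_u2, best_cost = None, np.inf
    for u2 in range(r, w - r):                  # S27/S28/S30/S31: scan the line
        v2 = int(round(-(a * u2 + c) / b))      # point on a*u + b*v + c = 0
        if v2 < r or v2 >= h - r:
            continue
        cand = feat2[v2 - r:v2 + r + 1, u2 - r:u2 + r + 1]
        cost = np.abs(tmpl - cand).sum()        # S29: coincidence (SAD here)
        if cost < best_cost:
            best_cost, best_u2 = cost, u2       # S32: best-match position
    if best_u2 is None:
        return None
    return abs(best_u2 - r) / k_x               # S33: distance / pixel size
```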
- The parallax information acquisition unit 24 may further perform a process of supplementing the information of the parallax image. Specifically, it uses the pixel values of the first corrected image (reference image) from which the first common feature amount image was obtained (the luminance values or hues of the image captured by the visible light camera), or the pixel values of the second corrected image (comparison image) from which the second common feature amount image was obtained (the luminance values of the image captured by the far-infrared camera, representing the surface temperature of the subject), together with the parallax values (that is, distance information) of the parallax image obtained in the first embodiment.
- pixels that are at approximately the same distance are obtained by clustering as a pixel point set, that is, a point sequence equivalent to a contour.
- the parallax information acquisition unit 24 then clusters the pixel values of the first corrected image, or the luminance values of the second corrected image, surrounded by this pixel point set as a region representing one object.
- the parallax image after clustering has a parallax value for pixels corresponding to the contour of the subject, and further has identification information indicating that the same object is present in the pixel areas clustered as the same cluster. Thereby, the three-dimensional distance information to the subject is obtained from the parallax value of the pixel corresponding to the contour. Furthermore, it can be determined that pixel areas having the same identification information are the same object.
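- A rough sketch of this supplementing step under strong assumptions not stated in the text: contour pixels are taken to be those holding a parallax value (encoded as NaN elsewhere), and the enclosed areas are labelled by connected components.

```python
import numpy as np
from scipy import ndimage

def label_object_regions(parallax, corrected):
    # Contour pixels: where stereo matching stored a parallax value
    # (assumption of this sketch: NaN everywhere else).
    contour = ~np.isnan(parallax)
    # Each connected area enclosed by a contour point sequence becomes one
    # candidate "region representing one object".
    labels, n = ndimage.label(~contour)
    # Mean luminance per region, so regions can later be merged or split
    # by pixel-value similarity.
    means = ndimage.mean(corrected, labels=labels, index=np.arange(1, n + 1))
    return labels, means
```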
- (Example of a method for acquiring three-dimensional distance information) An example of a method for acquiring three-dimensional distance information using the parallax image according to the first embodiment is described below.
- a road surface and an object can be identified using methods such as “v-disparity” and “virtual disparity”.
- the parallax image according to the first embodiment has a parallax value only in the contour of the subject, and the parallax value is not obtained in a planar shape.
- For pixels in which a parallax value is stored, the parallax value is compared with the parallax values of connected pixels, and the subject is identified based on the comparison result (for example, a road surface is distinguished from an object). For example, three-dimensional distance information is obtained from the pixels corresponding to the contour in which parallax values are stored. From the distribution of this three-dimensional distance information, the subject is identified by determining whether a given distribution is planar, like a road surface, or standing vertically. The three-dimensional distance information of the contour of the identified subject is then taken as the three-dimensional distance to the subject.
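- For reference, in the idealized case of coplanar imaging surfaces, the conversion from a contour pixel's parallax d (in pixels) to distance Z is the standard triangulation relation below, with baseline B, focal length f, and pixel size kx. This is background knowledge rather than a formula quoted from the patent; the non-coplanar arrangement of the first embodiment instead requires the full geometry of [Equation 10].

```latex
Z = \frac{f \, B}{d \, k_x}
```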
- As described above, according to the first embodiment, parallax information usable for calculating three-dimensional information, such as the distance to the subject, can be obtained simply by the stereo matching method from the captured images of the cameras, day and night.
- In the first embodiment, a visible light camera is used as the reference camera and a far-infrared camera as the comparison camera, but the configuration may be reversed, with the far-infrared camera as the reference camera and the visible light camera as the comparison camera.
- In the second embodiment, another example of the common feature amount is described.
- the first wavelength range of light received by the first imaging unit 11 and the second wavelength range of light received by the second imaging unit 12 partially overlap.
- information indicating the amount of received light at the wavelengths of the overlapping portion is used as the common feature amount.
- the light in the first wavelength range received by the first imaging unit 11 is visible light.
- the light in the second wavelength range received by the second imaging unit 12 is near-infrared light plus the adjacent red end (long-wavelength end) of visible light.
- the first wavelength range and the second wavelength range therefore overlap at the red end portion of the visible light included in the second wavelength range. Consequently, the red pixels in the captured image of the first imaging unit 11 and the pixels of the captured image of the second imaging unit 12 are highly correlated. These highly correlated pixels are used for stereo matching as the common feature amount.
- Since the second imaging unit 12 of this example receives near-infrared light and part of the visible light, it can be arranged so that the imaging planes of both cameras are on the same plane, making it possible to obtain a parallax image by the conventional stereo matching method.
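- Since the overlapping band makes the red channel of the visible image directly comparable with the near-infrared image, a conventional matcher can be applied; a minimal sketch with assumed file names and matcher parameters follows.

```python
import cv2

# Hypothetical file names; both inputs to StereoBM must be 8-bit
# single-channel images of the same size.
visible = cv2.imread("visible.png")           # BGR color image
nir = cv2.imread("near_infrared.png", cv2.IMREAD_GRAYSCALE)
red = visible[:, :, 2]                        # OpenCV stores BGR; index 2 = R

matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(red, nir)         # fixed-point result, scaled by 16
```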
- the driving support unit 41 shown in FIG. 2 may perform stereo-camera recognition processing using the parallax information acquired by the image processing device 20 and, based on the result, transmit control signals to the control units 43a to 43f via the CAN 42.
- Specifically, the driving support unit 41 acquires the three-dimensional information of the subject from the parallax information acquired by the image processing device 20. From the acquired three-dimensional information, it recognizes the travelable range such as the road surface, road obstacles such as guardrails, and obstacles distinct from the road surface such as preceding or oncoming vehicles. It then obtains the relative distance and relative speed to each recognized object and, based on these, supports travel set as safe travel, such as acceleration, following, deceleration, stopping, or avoidance.
- the driving support unit 41 may also have a function of traveling along the lane on the road (Lane Keep) and a function of automatically accelerating and decelerating the vehicle 30 in accordance with road conditions (congestion status, position of the preceding vehicle, presence of a cutting-in vehicle, etc.) (Auto Cruise).
- the driving support unit 41 may also have a function of linking data with a navigation device, a function of calculating the travel course on the road surface from the course information set in the navigation device and the lane information recognized by the driving support unit 41, and a function of supporting automatic driving along the calculated course.
- visible light cameras 110 and 120 having different fields of view FOV110 and FOV120 are arranged on the front windshield of the vehicle 30.
- the field of view FOV110 of the visible light camera 110 is wider than the field of view FOV120 of the visible light camera 120.
- distortion correction is performed so that the angle of view of the image captured by the wide-angle visible light camera 110 becomes the same as the angle of view of the image captured by the narrow-angle visible light camera 120.
- a computer program for realizing the functions of the above-described image processing apparatus 20 may be recorded on a computer-readable recording medium, and the program recorded on the recording medium may be read into a computer system and executed.
- the “computer system” may include an OS and hardware such as peripheral devices.
- “Computer-readable recording medium” means a portable medium such as a flexible disk, a magneto-optical disk, a ROM, a writable non-volatile memory such as a flash memory, or a DVD (Digital Versatile Disc), or a storage device such as a hard disk built into a computer system.
- the “computer-readable recording medium” also includes media that hold the program for a certain period of time, such as volatile memory (for example, DRAM (Dynamic Random Access Memory)) inside a computer system serving as a server or client when the program is transmitted via a network such as the Internet or a communication line such as a telephone line.
- the program may be transmitted from a computer system storing the program in a storage device or the like to another computer system via a transmission medium or by a transmission wave in the transmission medium.
- the “transmission medium” for transmitting the program refers to a medium having a function of transmitting information, such as a network (communication network) such as the Internet or a communication line (communication line) such as a telephone line.
- the program may realize only a part of the functions described above. Furthermore, the program may realize the functions described above in combination with a program already recorded in the computer system, that is, as a so-called difference file (difference program).
- DESCRIPTION OF SYMBOLS: 1 ... imaging device; 11 ... first imaging unit; 12 ... second imaging unit; 20 ... image processing device; 21 ... image correction unit; 22 ... parameter acquisition unit; 23 ... common feature amount acquisition unit; 24 ... parallax information acquisition unit; 30 ... vehicle; 31 ... windshield; 32 ... front bumper; 41 ... driving support unit; 42 ... CAN; 43a–43f ... control units
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Electromagnetism (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Theoretical Computer Science (AREA)
- Measurement Of Optical Distance (AREA)
- Length Measuring Devices By Optical Means (AREA)
Abstract
[Problem] To enable parallax information to be acquired simply by stereo matching. [Solution] The present invention comprises: a first imaging unit (11) for receiving light in a first wavelength band and capturing an image; a second imaging unit (12) for receiving light in a second wavelength band different from the first wavelength band and capturing an image; an image correction unit (21) for correcting differences between a first captured image captured by the first imaging unit (11) and a second captured image captured by the second imaging unit (12); a common feature amount acquisition unit (23) for acquiring a common feature amount for each of a first corrected image obtained from the first captured image and a second corrected image obtained from the second captured image, the corrected images having been produced by the image correction unit (21); and a parallax information acquisition unit (24) for obtaining parallax information by stereo matching using the common feature amount acquired for the first corrected image and the common feature amount acquired for the second corrected image by the common feature amount acquisition unit (23).
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2014113173 | 2014-05-30 | ||
JP2014-113173 | 2014-05-30 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015182771A1 true WO2015182771A1 (fr) | 2015-12-03 |
Family
ID=54699088
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2015/065660 WO2015182771A1 (fr) | 2014-05-30 | 2015-05-29 | Dispositif de capture d'image, dispositif de traitement d'image, procédé de traitement d'image et programme informatique |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2015182771A1 (fr) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH07225127A (ja) * | 1994-02-14 | 1995-08-22 | Mitsubishi Motors Corp | 車両用路上物体認識装置 |
JPH11153406A (ja) * | 1997-11-20 | 1999-06-08 | Nissan Motor Co Ltd | 車両用障害物検出装置 |
WO2007129563A1 (fr) * | 2006-05-09 | 2007-11-15 | Panasonic Corporation | télémètre avec une fonction de sélection d'image pour TELEMETRIE |
WO2012073722A1 (fr) * | 2010-12-01 | 2012-06-07 | コニカミノルタホールディングス株式会社 | Dispositif de synthèse d'image |
JP2013257244A (ja) * | 2012-06-13 | 2013-12-26 | Sharp Corp | 距離測定装置、距離測定方法、及び距離測定プログラム |
WO2014054752A1 (fr) * | 2012-10-04 | 2014-04-10 | アルプス電気株式会社 | Dispositif de traitement d'images et dispositif de surveillance d'une zone devant un véhicule |
- 2015-05-29: WO — PCT/JP2015/065660 — patent WO2015182771A1 (fr), active, Application Filing
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3389009A4 (fr) * | 2015-12-10 | 2018-12-19 | Ricoh Company, Ltd. | Dispositif de traitement d'image, dispositif de reconnaissance d'objet, système de commande d'appareil, procédé de traitement d'image et programme |
US10546383B2 (en) | 2015-12-10 | 2020-01-28 | Ricoh Company, Ltd. | Image processing device, object recognizing device, device control system, image processing method, and computer-readable medium |
EP3343511A1 (fr) * | 2016-12-27 | 2018-07-04 | Kabushiki Kaisha Toshiba | Appareil et procédé de traitement d'images |
US10726528B2 (en) | 2016-12-27 | 2020-07-28 | Kabushiki Kaisha Toshiba | Image processing apparatus and image processing method for image picked up by two cameras |
CN114108994A (zh) * | 2022-01-25 | 2022-03-01 | 深圳市门罗智能有限公司 | 玻璃幕墙安装助力系统 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 15799032; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: PCT application non-entry in European phase | Ref document number: 15799032; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: JP