WO2015182771A1 - Image capturing device, image processing device, image processing method, and computer program - Google Patents

Image capturing device, image processing device, image processing method, and computer program

Info

Publication number
WO2015182771A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
common feature
feature amount
captured image
imaging
Prior art date
Application number
PCT/JP2015/065660
Other languages
French (fr)
Japanese (ja)
Inventor
和由 大塚
Original Assignee
日本電産エレシス株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日本電産エレシス株式会社
Publication of WO2015182771A1 publication Critical patent/WO2015182771A1/en


Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 3/00: Measuring distances in line of sight; Optical rangefinders
    • G01C 3/02: Details
    • G01C 3/06: Use of electric means to obtain final indication
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00: General purpose image data processing
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00: Television systems
    • H04N 7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • the present invention relates to an imaging device, an image processing device, an image processing method, and a computer program.
  • in the conventional technique, a first imaging unit captures a subject including a visible light component and a near-infrared component, and a second imaging unit captures the subject including a visible light component but not a near-infrared component.
  • in the daytime, the distance to the subject is measured by a stereo matching method based on the visible light components, while at night an infrared pattern is projected onto the subject by a near-infrared auxiliary light source and the distance to the subject is measured by a pattern light projection method based on the near-infrared components.
  • the apparatus configuration becomes complicated.
  • a near-infrared auxiliary light source for projecting an infrared pattern is required for the pattern light projection method.
  • the present invention has been made in view of such circumstances, and an object of the present invention is to provide an imaging device, an image processing device, an image processing method, and a computer program capable of simply acquiring parallax information by a stereo matching method.
  • in one embodiment, an imaging method includes: a step in which the first imaging unit receives light in a first wavelength range to acquire a first captured image; a step in which the second imaging unit receives light in a second wavelength range different from the first wavelength range to acquire a second captured image; an image correction step of correcting a difference between the first captured image and the second captured image to acquire a first corrected image and a second corrected image; a common feature amount acquisition step of acquiring a first common feature amount and a second common feature amount based on a feature amount common to the first corrected image and the second corrected image; and a parallax information acquisition step of acquiring parallax information by stereo matching using the first common feature amount and the second common feature amount.
  • the first common feature amount and the second common feature amount are absolute values of differential values of luminance values.
  • the first common feature amount or the second common feature amount is acquired based on the absolute values of the differential values of all, or at least one, of the R, G, and B luminance values.
  • the first wavelength range and the second wavelength range partially overlap, and information indicating the amount of received light at the wavelengths of the overlapping portion is used as the first common feature amount and the second common feature amount.
  • the imaging surface of the first imaging unit and the imaging surface of the second imaging unit are on different planes
  • the parallax information acquisition step includes: a sub-step of determining a search pixel p1 of the first common feature amount image composed of the first common feature amount; a sub-step of obtaining, according to an epipolar constraint, the search pixel p2 on the second common feature amount image composed of the second common feature amount corresponding to the search pixel p1; and a sub-step of calculating the degree of coincidence between the first common feature amount image and the second common feature amount image centered on the search pixels p1 and p2 and determining the position of the search pixel p2 based on that degree of coincidence.
  • the parallax information acquisition step clusters the pixel values of the first corrected image or of the second corrected image together with the parallax values of the parallax image acquired by the stereo matching, and clusters the pixel values of the first corrected image or of the second corrected image surrounded by the set of pixel points resulting from that clustering as a region representing one object.
  • an imaging device includes: a first imaging unit that receives light in a first wavelength range to acquire a first captured image; a second imaging unit that receives light in a second wavelength range different from the first wavelength range to acquire a second captured image; an image correction unit that corrects a difference between the first captured image and the second captured image to acquire a first corrected image obtained from the first captured image and a second corrected image obtained from the second captured image; a common feature amount acquisition unit that acquires a first common feature amount and a second common feature amount for the first corrected image and the second corrected image, respectively; and a parallax information acquisition unit that acquires parallax information by stereo matching using the first common feature amount and the second common feature amount.
  • an image processing device includes: an image correction unit that holds a first captured image captured by receiving light in a first wavelength range and a second captured image captured by receiving light in a second wavelength range different from the first wavelength range, and corrects a difference between the captured images to acquire a first corrected image obtained from the first captured image and a second corrected image obtained from the second captured image; a common feature amount acquisition unit that acquires a first common feature amount and a second common feature amount based on a feature amount common to the first corrected image and the second corrected image; and a parallax information acquisition unit that acquires parallax information by stereo matching using the first common feature amount and the second common feature amount.
  • a program recorded on a non-volatile storage medium and executed by a computer includes: a step of receiving light in a first wavelength range to acquire a first captured image; a step of receiving light in a second wavelength range different from the first wavelength range to acquire a second captured image; an image correction step of correcting a difference between the first captured image and the second captured image to acquire a first corrected image and a second corrected image; a common feature amount acquisition step of acquiring a first common feature amount and a second common feature amount based on a feature amount common to the first corrected image and the second corrected image; and a parallax information acquisition step of acquiring parallax information by stereo matching using the first common feature amount and the second common feature amount; the program is recorded on a computer-readable storage medium.
  • in a program recorded on a non-volatile storage medium and executed by a computer, the first common feature amount and the second common feature amount are absolute values of differential values of luminance values; the program is recorded on a computer-readable storage medium.
  • in a program recorded on a non-volatile storage medium and executed by a computer, the first common feature amount or the second common feature amount is acquired based on the absolute values of the differential values of all, or at least one, of the R, G, and B luminance values; the program is recorded on a computer-readable storage medium.
  • FIG. 1 is a block diagram illustrating the configuration of an imaging apparatus 1 according to an embodiment of the present invention.
  • FIG. 2 is a configuration diagram illustrating the configuration of the imaging apparatus 1 according to a first embodiment of the present invention.
  • FIG. 3 is a diagram showing the fields of view of the imaging units according to the first embodiment of the present invention.
  • FIG. 4 is a diagram showing the vehicle coordinate system, which is the world coordinate system, according to the first embodiment of the present invention.
  • FIG. 5 is a flowchart showing the procedure of the parameter acquisition processing according to the first embodiment of the present invention.
  • FIG. 6 is a diagram showing the coordinate systems according to the first embodiment of the present invention.
  • FIG. 1 is a block diagram showing a configuration of an imaging apparatus 1 according to an embodiment of the present invention.
  • the imaging device 1 includes a first imaging unit 11, a second imaging unit 12, and an image processing device 20.
  • the image processing apparatus 20 includes an image correction unit 21, a parameter acquisition unit 22, a common feature amount acquisition unit 23, and a parallax information acquisition unit 24.
  • the first imaging unit 11 receives and captures light in the first wavelength range.
  • the second imaging unit 12 receives and captures light in a second wavelength range different from the first wavelength range of the first imaging unit 11. Examples of combinations of light in the first wavelength range and light in the second wavelength range include the following light combination examples 1 and 2.
  • Light combination example 1 “visible light” as light in the first wavelength region and “far infrared light” as light in the second wavelength region.
  • Light combination example 2 “visible light” as light in the first wavelength range and “near infrared” as light in the second wavelength range.
  • light combinations other than the light combination examples 1 and 2 described above may be used.
  • the first captured image captured by the first imaging unit 11 and the second captured image captured by the second imaging unit 12 are input to the image processing device 20.
  • the image correction unit 21 corrects the difference between the first captured image and the second captured image.
  • the parameter acquisition unit 22 acquires parameters for stereo matching.
  • the common feature amount acquisition unit 23 acquires a common feature amount for each of the first correction image obtained from the first captured image and the second correction image obtained from the second captured image by the image correction unit 21. To do.
  • the disparity information acquisition unit 24 acquires disparity information by stereo matching using the common feature amount acquired for the first correction image by the common feature amount acquisition unit 23 and the common feature amount acquired for the second correction image. To do.
  • the image processing apparatus 20 may be realized by dedicated hardware, or it may be configured by a memory and a CPU (central processing unit), with its functions realized by the CPU executing a computer program that implements the functions of the image processing apparatus 20.
  • FIG. 2 is a configuration diagram showing the configuration of the imaging apparatus 1 according to the first embodiment of the present invention.
  • the vehicle 30 is provided with a first imaging unit 11 and a second imaging unit 12.
  • the first imaging unit 11 receives visible light to capture images, and the second imaging unit 12 receives far-infrared light to capture images.
  • the first imaging unit 11 is installed at the center upper end of a windshield (Front shield) 31 of the vehicle 30.
  • the second imaging unit 12 is installed at a position offset to the left from the center of the front bumper 32 of the vehicle 30.
  • the first imaging unit 11 is a reference camera
  • the second imaging unit 12 is a comparison camera.
  • the image processing apparatus 20 is provided in a driving support unit 41 provided in the vehicle 30.
  • the first captured image captured by the first imaging unit 11 and the second captured image captured by the second imaging unit 12 are input to the driving support unit 41.
  • the first captured image and the second captured image input to the driving support unit 41 are input to the image processing device 20 provided in the driving support unit 41.
  • the vehicle 30 is provided with a CAN (Controller Area Network) 42.
  • the driving support unit 41 and the other control units 43 a to 43 f of the vehicle 30 are connected to the CAN 42.
  • the driving support unit 41 transmits a control signal to, for example, the brake control unit 43a and the electric power steering control unit 43b through the CAN 42.
  • Other examples of the control unit include an engine control unit and an inter-vehicle distance control unit.
  • alternatively, the first imaging unit 11 and the second imaging unit 12 may be connected to the CAN 42, with the first imaging unit 11 transmitting the first captured image and the second imaging unit 12 transmitting the second captured image to the driving support unit 41 via the CAN 42.
  • either the first imaging unit 11 or the second imaging unit 12 may be configured as the same device as the driving support unit 41.
  • the first imaging unit 11 and the driving support unit 41 may be configured as an integrated device.
  • FIG. 3 is a diagram illustrating a field of view (FOV) of the imaging unit according to the first embodiment.
  • the field of view FOV11 of the first imaging unit 11 and the field of view FOV12 of the second imaging unit 12 are different. Therefore, the first captured image captured by the first imaging unit 11 and the second captured image captured by the second imaging unit 12 differ in angle of view and number of pixels. For this reason, in the first embodiment, in addition to the calibration that corrects the distortion caused by the optical system of each of the imaging units 11 and 12, a correction process is performed to correct the difference in angle of view and number of pixels between the first captured image and the second captured image.
  • the image correction unit 21 performs distortion correction. This distortion correction will be described.
  • a distortion correction method in a general stereo camera will be described.
  • in a general stereo camera, a target such as a chessboard pattern is shown to the left and right cameras, and the optical-system distortion parameters "κ1, κ2, κ3, ..., s, f, kx, ky, cx, cy" are obtained by a technique such as that of Tsai or Zhang.
  • "κ1, κ2, κ3, ..." are distortion rate parameters proportional to "r^2, r^4, r^6, ...".
  • r is the distance from the optical axis coordinates (cx, cy) to the image coordinates (u, v), and is calculated by [Equation 1].
  • the optical axis coordinates (cx, cy) are expressed in an image coordinate system with the upper left corner of the image as the origin.
  • f is the focal length. The focal length f may be expressed as a ratio with respect to the pixel size kx.
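  • [Equation 1] itself is not reproduced on this page. Given the definition of r above, a plausible reconstruction (an assumption, not the patent's exact notation) is the Euclidean distance:
      r = \sqrt{(u - c_x)^2 + (v - c_y)^2}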
  • a pan angle, a pitch angle, and a roll angle with respect to the world coordinate system are obtained.
  • then, the focal length f and the pixel sizes kx, ky of the captured images of the left and right cameras are made common, and parameters, or a dedicated LUT (look-up table), for an affine transformation that makes the imaging surfaces and the epipolar lines parallel are obtained.
  • the parameters or the dedicated LUT are set in dedicated logic (an arithmetic circuit or an arithmetic program).
  • the affine transformation performs a linear mapping such as rotation, enlargement or reduction, and shear, combined with translation.
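  • In other words, an affine transformation of image coordinates can be written as below, where the 2x2 matrix A combines rotation, scaling, and shear and the vector t is the translation (this generic form is included only as an illustration, not as the patent's notation):
      \begin{pmatrix} u' \\ v' \end{pmatrix} = A \begin{pmatrix} u \\ v \end{pmatrix} + t, \quad A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}, \quad t = \begin{pmatrix} t_u \\ t_v \end{pmatrix}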
  • the above is the distortion correction method in a general stereo camera.
  • FIG. 4 is a diagram illustrating a vehicle coordinate system that is a world coordinate system according to the first embodiment.
  • a world coordinate system ⁇ [X, Y, Z], origin O 0 ⁇ is defined for the vehicle 30.
  • in the first embodiment, the first imaging unit 11 (reference camera) and the second imaging unit 12 (comparison camera) correspond to the left and right cameras.
  • however, the focal length f and the pixel sizes kx, ky differ between the first imaging unit 11 and the second imaging unit 12.
  • therefore, the parameters or the dedicated LUT for the affine transformation are set in the dedicated logic so that the focal length f and the pixel sizes kx, ky of the comparison image (the second captured image captured by the second imaging unit 12) become the same as the focal length f and the pixel sizes kx, ky of the reference image (the first captured image captured by the first imaging unit 11).
  • the matching operation in stereo matching can use a template having the same number of pixels for the reference image and the comparison image, and the parallax calculation can be performed simply by counting the number of pixels.
  • in a general stereo camera, the imaging surfaces of the left and right cameras are on the same plane, so the epipolar lines can be made parallel.
  • with the first imaging unit 11 (reference camera) and the second imaging unit 12 (comparison camera) of the first embodiment, however, the imaging surfaces cannot be made coplanar.
  • the captured images therefore exist in a state shifted forward and backward relative to each other. For this reason, in the first embodiment, a method described later based on epipolar geometry is used as the stereo matching method in a state where the imaging surfaces are not on the same plane.
  • the image correction unit 21 performs distortion correction (WARP) by applying the affine transformation to the comparison image (the second captured image captured by the second imaging unit 12), using the parameters or the dedicated LUT obtained above.
  • the focal length f and the pixel sizes kx, ky of the comparison image after distortion correction (the second corrected image) thereby become the same as the focal length f and the pixel sizes kx, ky of the reference image (the first corrected image obtained from the first captured image of the first imaging unit 11). In this way, the difference in angle of view and number of pixels between the first captured image captured by the first imaging unit 11 and the second captured image captured by the second imaging unit 12 is corrected.
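  • A minimal sketch of the LUT-based WARP step, assuming the dedicated LUT stores, for every pixel of the corrected comparison image, the source coordinates in the original second captured image (nearest-neighbor sampling; the function and array names are illustrative, not from the patent):

      import numpy as np

      def warp_with_lut(src_image: np.ndarray, lut_u: np.ndarray, lut_v: np.ndarray) -> np.ndarray:
          """Distortion correction (WARP): for each output pixel, the dedicated LUT
          gives the source coordinates (lut_u, lut_v) in the original comparison image."""
          # Clip to stay inside the source image; nearest-neighbor sampling for simplicity.
          uu = np.clip(np.rint(lut_u).astype(int), 0, src_image.shape[1] - 1)
          vv = np.clip(np.rint(lut_v).astype(int), 0, src_image.shape[0] - 1)
          return src_image[vv, uu]

      # second_corrected = warp_with_lut(second_captured, lut_u, lut_v)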
  • FIG. 5 is a flowchart showing a procedure of parameter acquisition processing according to the first embodiment.
  • Step S11: The parameter acquisition unit 22 calculates the parameters (E1, K1, E2, K2). The calculation of these parameters (E1, K1, E2, K2) will be described.
  • FIG. 6 is a diagram illustrating each coordinate system according to the first embodiment.
  • FIG. 6 shows the world coordinate system (vehicle coordinate system) {[X, Y, Z], origin O0}, the reference camera (first imaging unit 11) coordinate system {[U1, V1, W1], origin O1 (focal point)}, the comparison camera (second imaging unit 12) coordinate system {[U2, V2, W2], origin O2 (focal point)}, the reference image (captured image of the first imaging unit 11) coordinate system {[u1, v1, 1], origin o1 (upper left of the captured image)}, and the comparison image (captured image of the second imaging unit 12) coordinate system {[u2, v2, 1], origin o2 (upper left of the captured image)}.
  • S1 is the screen (imaging surface) of the reference camera (first imaging unit 11), [cx1, cy1, 1] is the image center of the screen S1, and 51 is the optical axis of the reference camera.
  • S2 is the screen (imaging surface) of the comparison camera (second imaging unit 12), [cx2, cy2, 1] is the image center of the screen S2, and 52 is the optical axis of the comparison camera.
  • 53 is an epipolar plane.
  • e 1 and e 2 are epipolar points (sometimes called epipoles).
  • (e1 - e2) is an epipolar line.
  • in FIG. 6, the position of the object 50 in the world coordinate system is P0, and the position of the object 50 in each camera coordinate system is Pc (c is 1 (reference camera coordinate system) or 2 (comparison camera coordinate system)). The relationship between the positions P0 and Pc of the object 50 is expressed by [Equation 2].
  • Rc is a rotation matrix composed of the pan angle, pitch angle, and roll angle arising from the attachment of each camera (the first imaging unit 11 and the second imaging unit 12) to the vehicle 30.
  • Tc represents the attachment position of the principal point of each camera with respect to the origin O0 of the world coordinate system.
  • Ec used in the transformation of [Equation 3] is referred to as an external camera parameter.
  • h is the third element wc of Pc.
  • f is the common focal length after adjustment on the screens S1 and S2.
  • kx and ky are the common pixel sizes after adjustment on the screens S1 and S2.
  • (cx, cy) is the image center (optical axis position) on the screen Sc.
  • Kc in [Equation 4] is referred to as an internal camera parameter.
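  • [Equation 2] through [Equation 4] are not reproduced on this page. Under the standard pinhole camera model that the surrounding bullets describe, plausible reconstructions (an assumption, not the patent's exact notation) are:
      P_c = R_c (P_0 - T_c)  ([Equation 2], the world point P_0 expressed in camera coordinates)
      \tilde{P}_c = E_c \tilde{P}_0, \quad E_c = \begin{pmatrix} R_c & -R_c T_c \\ 0 & 1 \end{pmatrix}  ([Equation 3], external camera parameter in homogeneous coordinates)
      p_c = \frac{1}{h} K_c P_c, \quad K_c = \begin{pmatrix} f/k_x & 0 & c_x \\ 0 & f/k_y & c_y \\ 0 & 0 & 1 \end{pmatrix}, \quad h = w_c  ([Equation 4], internal camera parameter and projection onto the screen S_c)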
  • [Equation 10] is an equation for the coordinate transformation from the position P1 in the reference camera coordinate system to the position P2 in the comparison camera coordinate system.
  • R0 and T0 are coordinate transformation matrices; R0 has 3 rows and 3 columns, and T0 has 3 rows and 1 column.
  • by [Equation 14], the essential matrix E0 of the epipolar equation is obtained.
  • E0 has 3 rows and 3 columns.
  • Step S13: The parameter acquisition unit 22 calculates the fundamental matrix (F0).
  • the calculation of the fundamental matrix (F0) will be described. In FIG. 6, when the image coordinates of the object 50 on the screen S1 of the reference camera coordinate system are p1 and the image coordinates of the object 50 on the screen S2 of the comparison camera coordinate system are p2, [Equation 15] holds.
  • from this, the epipolar constraint I2 (see FIG. 6) for searching for the position p2 on the comparison image corresponding to the position p1 on the reference image can be obtained.
  • the epipolar constraint I2 is obtained as a 3 x 1 matrix by [Equation 19].
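  • [Equation 10], [Equation 14], [Equation 15], and [Equation 19] are likewise not reproduced here. Given the quantities defined above, their standard forms (quoted only as an illustration, and therefore an assumption) are:
      P_2 = R_0 P_1 + T_0  ([Equation 10])
      E_0 = [T_0]_\times R_0  ([Equation 14], where [T_0]_\times is the 3 x 3 skew-symmetric matrix of T_0)
      p_2^\top F_0 p_1 = 0, \quad F_0 = K_2^{-\top} E_0 K_1^{-1}  ([Equation 15], epipolar constraint between the image coordinates p_1 and p_2)
      I_2 = F_0 p_1 = (a, b, c)^\top  ([Equation 19], coefficients of the epipolar line in the comparison image)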
  • the common feature amount acquisition unit 23 acquires a common feature amount from each of the first correction image and the second correction image in which the difference in the angle of view and the number of pixels is corrected by the image correction unit 21.
  • the second corrected image (comparison image) has been corrected so that its focal length f and pixel sizes kx, ky are the same as the focal length f and pixel sizes kx, ky of the first corrected image (reference image).
  • the first imaging unit 11 receives visible light to capture an image (visible light camera), and the second imaging unit 12 receives far-infrared light to capture an image (far-infrared camera).
  • the luminance value of the captured image of the visible light camera is a value corresponding to the amount of visible light, and means the color and brightness of the object surface.
  • the luminance value of the far-infrared camera is a value corresponding to the amount of far-infrared emitted from the object surface by black body radiation, and means the temperature of the object surface.
  • in the first embodiment, information on the contour of an object is used as the common feature amount that exists in common in each captured image. Specifically, since the luminance value of each captured image changes greatly at the contour of an object, the absolute value of the differential value of the luminance value is used as the common feature amount.
  • the common feature amount acquisition unit 23 performs a Laplacian filter process that is a second order differential in the u and v directions on each of the first correction image and the second correction image.
  • each Laplacian filter coefficient is multiplied one-to-one with the corresponding pixel of the corrected image, and the sum of the products is calculated (convolution).
  • FIG. 7 is a chart showing an example of Laplacian filter coefficients according to the first embodiment.
  • the example of FIG. 7 is a 5 ⁇ 5 Laplacian filter coefficient.
  • the common feature amount acquisition unit 23 calculates the absolute value of the differential value of the first corrected image after the Laplacian filter processing. This absolute value is the common feature amount of the first corrected image.
  • the image composed of the common feature amount of the first corrected image (first common feature amount image) is used for stereo matching in the parallax information acquisition unit 24.
  • the common feature amount acquisition unit 23 calculates the absolute value of the differential value of the second corrected image after the Laplacian filter processing. This absolute value is the common feature amount of the second corrected image.
  • the image composed of the common feature amount of the second corrected image (second common feature amount image) is used for stereo matching in the parallax information acquisition unit 24.
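  • A minimal sketch of this common feature amount computation, assuming NumPy and SciPy; the 5 x 5 kernel below is a generic Laplacian-style kernel used only as a placeholder, since the actual coefficients of FIG. 7 are not reproduced on this page:

      import numpy as np
      from scipy.ndimage import convolve

      # Placeholder 5x5 Laplacian-like kernel (coefficients sum to zero).
      LAPLACIAN_5X5 = np.array([
          [ 0,  0, -1,  0,  0],
          [ 0, -1, -2, -1,  0],
          [-1, -2, 16, -2, -1],
          [ 0, -1, -2, -1,  0],
          [ 0,  0, -1,  0,  0],
      ], dtype=np.float64)

      def common_feature_image(corrected_image: np.ndarray) -> np.ndarray:
          """Common feature amount image: absolute value of the second-order
          derivative (Laplacian filter response) of the luminance values."""
          response = convolve(corrected_image.astype(np.float64), LAPLACIAN_5X5, mode="nearest")
          return np.abs(response)

      # first_common_feature_image  = common_feature_image(first_corrected_image)   # visible-light image
      # second_common_feature_image = common_feature_image(second_corrected_image)  # far-infrared image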
  • the first captured image received by visible light and captured can be acquired as an R, G, B color image.
  • the first corrected image corrected by the image correction unit 21 can also be acquired as an R, G, B color image. Therefore, as another embodiment, the common feature amount acquisition unit 23 first performs the Laplacian filter processing on each of the R, G, and B planes of the first corrected image acquired as a color image. As a result, data composed of the absolute values of the differential values corresponding to R, G, and B is calculated as the image composed of the first common feature amount. Based on the absolute-value data of the differential values of R, G, and B, the first common feature amount that is most effective for stereo matching can be obtained. At this time, all of the absolute-value data of the differential values of R, G, and B may be used, or any one or two of R, G, and B may be selected and used. As a result, a more accurate stereo matching process can be performed.
  • the disparity information acquisition unit 24 performs stereo matching using the first common feature amount image and the second common feature amount image obtained by the common feature amount acquisition unit 23, and acquires disparity information.
  • FIGS. 8 and 9 are flowcharts of the parallax information acquisition method according to the first embodiment.
  • the parallax information acquisition method according to the first embodiment will be described with reference to FIGS. 8 and 9.
  • Step S21: The image processing apparatus 20 receives as input the first captured image captured by the first imaging unit 11 and the second captured image captured by the second imaging unit 12.
  • the image correction unit 21 performs distortion correction (WARP) on the first captured image and the second captured image input in Step S21.
  • the image correction unit 21 performs affine transformation using a dedicated LUT on the second captured image.
  • the dedicated LUT has been prepared in advance so that, by the affine transformation, the focal length f and the pixel sizes kx, ky of the second captured image become the same as the focal length f and the pixel sizes kx, ky of the first captured image.
  • Step S23 The common feature amount acquisition unit 23 performs Laplacian filter processing on each of the first correction image and the second correction image obtained by the distortion correction in step S22.
  • Step S24: The common feature amount acquisition unit 23 calculates the absolute values of the differential values of the first corrected image after the Laplacian filter processing in step S23, and obtains a first common feature amount image composed of these absolute values (the common feature amount of the first corrected image). Similarly, the common feature amount acquisition unit 23 calculates the absolute values of the differential values of the second corrected image after the Laplacian filter processing in step S23, and obtains a second common feature amount image composed of these absolute values (the common feature amount of the second corrected image).
  • the upper left pixel of the first common feature quantity image is set as a search start reference pixel.
  • the search start reference pixel is the first search pixel of the first common feature amount image.
  • Step S26: The parallax information acquisition unit 24 determines the epipolar constraint I2 used to search for the position p2 on the second common feature amount image (comparison image) corresponding to the search pixel p1 of the first common feature amount image.
  • the epipolar constraint I 2 is obtained by the above [Equation 19].
  • the epipolar constraint I2 gives the coefficients of the search line on the comparison image. Note that the parallax information acquisition unit 24 holds the fundamental matrix F0 calculated in advance.
  • This search start lateral position is the first search lateral position of the second common feature amount image.
  • Step S28: The parallax information acquisition unit 24 determines the search vertical position v2 corresponding to the search lateral position u2 in the second common feature amount image according to the epipolar constraint I2, as shown in FIG. 10.
  • FIG. 10 is a conceptual diagram of the parallax search method according to the first embodiment.
  • the search vertical position v2 is calculated by "(c − (a × u1)) / b" using the elements (epipolar constraint line coefficients) a, b, and c of the epipolar constraint I2.
  • Step S29 The parallax information acquisition unit 24 calculates the degree of coincidence between the first common feature amount image and the second common feature amount image with the pixels of interest p 1 and p 2 as the center.
  • examples of methods for calculating the degree of coincidence include SSD (Sum of Squared Differences), SAD (Sum of Absolute Differences), NCC (Normalized Cross-Correlation), ZNCC (Zero-mean Normalized Cross-Correlation), and SGM (Semi-Global Matching).
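  • For reference, ZNCC between two windows A and B can be written in its usual form (this formula is not given in the patent text and is included only as a standard definition):
      \mathrm{ZNCC}(A, B) = \frac{\sum_i (A_i - \bar{A})(B_i - \bar{B})}{\sqrt{\sum_i (A_i - \bar{A})^2}\,\sqrt{\sum_i (B_i - \bar{B})^2}}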
  • Step S30 The parallax information acquisition unit 24 increments the search lateral position u2 of the second common feature amount image.
  • Step S31: The parallax information acquisition unit 24 determines whether the search lateral position u2 has reached the end of the second common feature amount image. If it has, the process proceeds to step S32 in FIG. 9; otherwise, the process returns to step S28.
  • Step S32: The parallax information acquisition unit 24 obtains the pixel of interest p2 of the second common feature amount image that has the highest degree of coincidence among the degrees of coincidence calculated for the search pixel p1 of the first common feature amount image (the maximum coincidence comparison coordinate position).
  • Step S33: The parallax information acquisition unit 24 obtains the distance from the search start position (initial pixel position) to the maximum coincidence comparison coordinate position in the second common feature amount image. Then, the parallax information acquisition unit 24 divides the obtained distance by the pixel size kx, and sets the resulting quotient as the parallax value for the search pixel p1 of the first common feature amount image.
  • Step S34 The parallax information acquisition unit 24 stores the parallax value obtained in step S33 in a parallax image.
  • the storage position of the parallax value is the same position as the search pixel p1 of the first common feature amount image.
  • a parallax image corresponding to the first common feature amount image is obtained by the processing of FIGS. 8 and 9. From this parallax image, three-dimensional information of the subject (for example, three-dimensional distance information to the subject) can be acquired.
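  • A minimal sketch of the search in steps S25 through S34, written in Python under stated assumptions: the epipolar line coefficients (a, b, c) are taken to satisfy a*u + b*v + c = 0 on the comparison image (so v2 is evaluated from u2 in the standard way), SAD is used as the degree of coincidence, and all function and variable names are illustrative rather than taken from the patent:

      import numpy as np

      def epipolar_line(F0: np.ndarray, u1: int, v1: int) -> np.ndarray:
          """Epipolar constraint I2 = F0 * p1 (homogeneous p1), i.e. line coefficients (a, b, c)."""
          return F0 @ np.array([u1, v1, 1.0])

      def sad(patch_a: np.ndarray, patch_b: np.ndarray) -> float:
          """Sum of Absolute Differences; one possible degree-of-coincidence measure."""
          return float(np.abs(patch_a - patch_b).sum())

      def search_parallax(feat1, feat2, F0, u1, v1, win=5, k_x=1.0):
          """Search along the epipolar line in the second common feature amount image (feat2)
          for the point best matching the neighborhood of (u1, v1) in the first image (feat1)."""
          half = win // 2
          a, b, c = epipolar_line(F0, u1, v1)                   # step S26
          if abs(b) < 1e-9:
              return None
          template = feat1[v1 - half:v1 + half + 1, u1 - half:u1 + half + 1]
          h, w = feat2.shape
          best_u2, best_cost = None, np.inf
          for u2 in range(half, w - half):                      # steps S27, S30, S31: scan lateral positions
              v2 = int(round(-(a * u2 + c) / b))                # step S28 (assumed standard line evaluation)
              if v2 < half or v2 >= h - half:
                  continue
              candidate = feat2[v2 - half:v2 + half + 1, u2 - half:u2 + half + 1]
              cost = sad(template, candidate)                   # step S29: degree of coincidence
              if cost < best_cost:                              # step S32: maximum coincidence position
                  best_cost, best_u2 = cost, u2
          if best_u2 is None:
              return None
          return abs(best_u2 - half) / k_x                      # step S33: distance from search start / pixel size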
  • the parallax information acquisition unit 24 may perform a process of supplementing the information of the parallax image.
  • specifically, the parallax information acquisition unit 24 clusters the pixel values of the first corrected image (reference image) from which the first common feature amount image was obtained (the luminance values or hues of the image captured by the visible light camera), or the pixel values of the second corrected image (comparison image) from which the second common feature amount image was obtained (the luminance values of the image captured by the far-infrared camera, which represent the surface temperature of the subject), together with the parallax values of the parallax image obtained in the first embodiment, that is, distance information.
  • pixels that are at roughly equal distances are obtained by this clustering as a pixel point set, that is, a point sequence equivalent to a contour.
  • the parallax information acquisition unit 24 then clusters the pixel values of the first corrected image or the luminance values of the second corrected image surrounded by this pixel point set as a region representing one object.
  • the parallax image after clustering has a parallax value for pixels corresponding to the contour of the subject, and further has identification information indicating that the same object is present in the pixel areas clustered as the same cluster. Thereby, the three-dimensional distance information to the subject is obtained from the parallax value of the pixel corresponding to the contour. Furthermore, it can be determined that pixel areas having the same identification information are the same object.
  • (Example of a method for acquiring three-dimensional distance information) An example of a method for acquiring three-dimensional distance information using the parallax image according to the first embodiment will be described.
  • a road surface and an object can be identified using methods such as “v-disparity” and “virtual disparity”.
  • the parallax image according to the first embodiment has a parallax value only in the contour of the subject, and the parallax value is not obtained in a planar shape.
  • therefore, the parallax value of each pixel is compared with the parallax values of the pixels connected to it, and the subject is identified based on the comparison result (for example, a road surface is distinguished from an object). For example, three-dimensional distance information is obtained from the pixels corresponding to the contour in which parallax values are stored. From the distribution of this three-dimensional distance information, the subject is identified by determining whether a given distribution is planar, such as a road surface, or stands vertically. The three-dimensional distance information of the contour of the identified subject is then taken as the three-dimensional distance information to the subject.
  • as described above, according to the first embodiment, parallax information that can be used for calculating three-dimensional information, such as the distance to the subject, by the stereo matching method can be obtained simply from the captured images of the cameras, day and night.
  • in the first embodiment, a visible light camera is used as the reference camera and a far-infrared camera is used as the comparison camera.
  • the configuration may instead be reversed, so that a far-infrared camera is used as the reference camera and a visible light camera is used as the comparison camera.
  • in the second embodiment, another example of the common feature amount will be described.
  • the first wavelength range of light received by the first imaging unit 11 and the second wavelength range of light received by the second imaging unit 12 are partially overlapped.
  • information indicating the received light amount of light having the wavelength of the overlapping portion is used as a common feature amount.
  • for example, the light in the first wavelength range received by the first imaging unit 11 is visible light.
  • the light in the second wavelength range received by the second imaging unit 12 is near-infrared light plus the red end (the end on the long-wavelength side) of visible light adjacent to the near-infrared region.
  • in this case, the first wavelength range and the second wavelength range overlap at the red end portion of visible light included in the second wavelength range. As a result, the red pixels in the captured image of the first imaging unit 11 and the pixels of the captured image of the second imaging unit 12 are highly correlated, and these highly correlated pixels are used for stereo matching as the common feature amount.
  • since the second imaging unit 12 of this example receives near-infrared light and part of the visible light, it can be arranged alongside the first imaging unit 11.
  • in that case, the imaging planes of both cameras can be placed on the same plane, so a parallax image can be obtained by the conventional stereo matching method.
  • the driving support unit 41 shown in FIG. 2 may perform a stereo camera recognition process using the parallax information acquired by the image processing device 20, and transmit a control signal to each of the control units 43a to 43f via the CAN 42 based on the result of the stereo camera recognition process.
  • for example, the driving support unit 41 first acquires the three-dimensional information of the subject from the parallax information acquired by the image processing device 20. Next, the driving support unit 41 recognizes, from the acquired three-dimensional information, a travelable range such as a road surface, road obstacles such as guard rails, and obstacles distinct from the road surface such as a preceding vehicle or an oncoming vehicle. Next, the driving support unit 41 obtains the relative distance and relative speed with respect to each of the recognized objects. Next, based on the obtained relative distances and relative speeds, the driving support unit 41 supports traveling determined to be safe, such as acceleration, following, deceleration, stopping, or avoidance.
  • the driving support unit 41 may also have a function of traveling along the lane on the road (Lane Keep) and a function of accelerating, decelerating, and stopping the vehicle 30 in accordance with the road conditions (congestion status, position of the preceding vehicle, presence of a cutting-in vehicle, etc.) (Auto ...).
  • the driving support unit 41 may also have a function of data linkage with a navigation device, a function of calculating a traveling course on the road surface from the course information set in the navigation device and the lane information recognized by the driving support unit 41, and a function of supporting automatic driving along the calculated traveling course.
  • the visible light cameras 110 and 120 having different visual fields FOV 110 and FOV 120 are arranged on the front windshield of the vehicle 30.
  • the visual field FOV 110 of the visible light camera 110 is wider than the visual field FOV 120 of the visible light camera 120.
  • distortion correction is performed so that the angle of view of the image captured by the wide-angle visible light camera 110 becomes the same as the angle of view of the image captured by the narrow-angle visible light camera 120.
  • a computer program for realizing the functions of the above-described image processing apparatus 20 is recorded on a computer-readable recording medium, and the program recorded on the recording medium is read into the computer system and executed.
  • the “computer system” may include an OS and hardware such as peripheral devices.
  • "computer-readable recording medium" means a portable medium such as a flexible disk, a magneto-optical disk, a ROM, a writable non-volatile memory such as a flash memory, or a DVD (Digital Versatile Disc), or a storage device such as a hard disk built into the computer system.
  • furthermore, "computer-readable recording medium" also includes media that hold the program for a certain period of time, such as a volatile memory (for example, DRAM (Dynamic Random Access Memory)) inside a computer system serving as a server or a client when the program is transmitted via a network such as the Internet or a communication line such as a telephone line.
  • the program may be transmitted from a computer system storing the program in a storage device or the like to another computer system via a transmission medium or by a transmission wave in the transmission medium.
  • the “transmission medium” for transmitting the program refers to a medium having a function of transmitting information, such as a network (communication network) such as the Internet or a communication line (communication line) such as a telephone line.
  • the program may realize a part of the functions described above. Furthermore, the program may realize the functions described above in combination with a program already recorded in the computer system.
  • DESCRIPTION OF SYMBOLS: 1 ... imaging device, 11 ... first imaging unit, 12 ... second imaging unit, 20 ... image processing device, 21 ... image correction unit, 22 ... parameter acquisition unit, 23 ... common feature amount acquisition unit, 24 ... parallax information acquisition unit, 30 ... vehicle, 31 ... windshield, 32 ... front bumper, 41 ... driving support unit, 42 ... CAN, 43a to 43f ... control units

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Measurement Of Optical Distance (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

[Problem] To acquire parallax information straightforwardly by stereo matching. [Solution] The present invention is provided with: a first image-capturing unit (11) for receiving light of a first wavelength band and capturing an image; a second image pickup unit (12) for receiving light of a second wavelength band different from the first wavelength band and capturing an image; an image correction unit (21) for correcting differences between a first captured image captured by the first image-capturing unit (11) and a second captured image captured by the second image-capturing unit (12); a common feature quantity acquisition unit (23) for acquiring a common feature quantity for each of a first corrected image obtained from the first captured image and a second corrected image obtained from the second captured image, the corrected images having been obtained by the image correction unit (21); and a parallax information acquisition unit (24) for obtaining parallax information by stereo matching using the common feature quantity that was acquired for the first corrected image by the common feature quantity acquisition unit (23) and the common feature quantity that was acquired for the second corrected image by the common feature quantity acquisition unit (23).

Description

撮像装置、画像処理装置、画像処理方法およびコンピュータプログラムImaging apparatus, image processing apparatus, image processing method, and computer program
 本発明は、撮像装置、画像処理装置、画像処理方法およびコンピュータプログラムに関する。 The present invention relates to an imaging device, an image processing device, an image processing method, and a computer program.
 例えば特許文献1に記載される従来技術では、可視光成分および近赤外線成分を含む被写体の撮像を行う第1の撮像部と、可視光成分を含み且つ近赤外線成分を含まない被写体の撮像を行う第2の撮像部と、を備え、昼間は可視光成分に基づいたステレオマッチング法により被写体までの距離を測定し、一方、夜間は近赤外線補助光源で赤外パターンを被写体に投射し、近赤外線成分に基づいたパターン光投影法により被写体までの距離を測定している。 For example, in the conventional technique described in Patent Document 1, a first imaging unit that captures an object including a visible light component and a near-infrared component and an object including a visible light component and not including a near-infrared component are captured. A second imaging unit, and measures the distance to the subject by a stereo matching method based on visible light components in the daytime, while projecting an infrared pattern on the subject with a near-infrared auxiliary light source at night, The distance to the subject is measured by a pattern light projection method based on the components.
特開2013-156109号公報JP 2013-156109 A
 しかし、上述した従来技術では、2種類の方法(ステレオマッチング法とパターン光投影法)を使用するので、装置構成が複雑になる。また、パターン光投影法のために、赤外パターンを投射する近赤外線補助光源が必要となる。 However, since the above-described conventional technique uses two types of methods (stereo matching method and pattern light projection method), the apparatus configuration becomes complicated. In addition, a near-infrared auxiliary light source for projecting an infrared pattern is required for the pattern light projection method.
 本発明は、このような事情を考慮してなされたものであり、ステレオマッチング法により視差情報を簡素に取得できる撮像装置、画像処理装置、画像処理方法およびコンピュータプログラムを提供することを課題とする。 The present invention has been made in view of such circumstances, and it is an object of the present invention to provide an imaging device, an image processing device, an image processing method, and a computer program capable of simply obtaining parallax information by a stereo matching method. .
 本発明の一実施形態によれば、第1の撮像部と第2の撮像部と画像処理装置とを有する撮像装置において、前記第1の撮像部により、第1の波長域の光を受光して、第1の撮像画像を取得するステップと、前記第2の撮像部により、前記第1の波長域とは異なる第2の波長域の光を受光して、第2の撮像画像を取得するステップと、前記第1の撮像画像と前記第2の撮像画像との違いを補正することにより、各々について第1の補正画像と第2の補正画像とを取得する画像補正ステップと、前記第1の補正画像と前記第2の補正画像との共通特徴量に基づき、各々について第1の共通特徴量と第2の共通特徴量とを取得する共通特徴量取得ステップと、前記第1の共通特徴量と前記第2の共通特徴量とを使用して、ステレオマッチングにより視差情報を取得する視差情報取得ステップと、を有する撮像方法である。 According to one embodiment of the present invention, in an imaging device having a first imaging unit, a second imaging unit, and an image processing device, the first imaging unit receives light in a first wavelength range. The second captured image is received by the step of acquiring the first captured image and the second imaging unit receives light in a second wavelength range different from the first wavelength range. An image correction step for acquiring a first corrected image and a second corrected image for each by correcting a difference between the first captured image and the second captured image, and the first A common feature amount acquisition step for acquiring a first common feature amount and a second common feature amount for each based on a common feature amount of the corrected image and the second corrected image; and the first common feature Stereo matching using the second common feature quantity and the second common feature quantity. A parallax information acquiring step of acquiring the parallax information is an imaging method with.
 本発明の一実施形態によれば、前記第1の共通特徴量および前記第2の共通特徴量は、輝度値の微分値の絶対値である撮像方法である。 According to an embodiment of the present invention, in the imaging method, the first common feature amount and the second common feature amount are absolute values of differential values of luminance values.
 本発明の一実施形態によれば、前記第1の共通特徴量または前記第2の共通特徴量は、R、G、Bの全て、あるいは少なくともいずれかの輝度値の微分値の絶対値に基づき取得された撮像方法である。 According to one embodiment of the present invention, the first common feature amount or the second common feature amount is based on an absolute value of a differential value of all of R, G, and / or at least one of the luminance values. It is the acquired imaging method.
 本発明の一実施形態によれば、前記第1の波長域と前記第2の波長域とは一部分が重複し、該重複部分の波長の光の受光量を示す情報を前記第1の共通特徴量および前記第2の共通特徴量として使用する撮像方法である。 According to an embodiment of the present invention, the first wavelength region and the second wavelength region partially overlap, and information indicating the amount of received light having the wavelength of the overlapping portion is the first common feature. And an imaging method used as the second common feature amount.
 本発明の一実施形態によれば、前記第1の撮像部の撮像面と前記第2の撮像部の撮像面とは異なる平面上にあり、前記視差情報取得ステップは、前記第1の共通特徴量から成る第1の共通特徴量画像の探索画素p1を決定するサブステップと、エピポーラ制約に従い、前記探索画素p1に対応する、前記第2の共通特徴量から成る第2の共通特徴量画像上での探索画素p2を求めるサブステップと、前記探索画素p1およびp2を中心として、第1の共通特徴量画像および第2の共通特徴量画像の一致度を計算し、当該一致度に基づき第2の共通特徴量画像の探索画素p2の位置を求めるサブステップと、を少なくとも有し、前記ステレオマッチングにより視差情報を取得する撮像方法である。 According to an embodiment of the present invention, the imaging surface of the first imaging unit and the imaging surface of the second imaging unit are on different planes, and the parallax information acquisition step includes the first common feature. A sub-step for determining a search pixel p1 of the first common feature quantity image made up of a quantity, and a second common feature quantity image made up of the second common feature quantity corresponding to the search pixel p1 according to epipolar constraints A sub-step for obtaining the search pixel p2 at, and a degree of coincidence between the first common feature amount image and the second common feature amount image with the search pixels p1 and p2 as the center, and a second step based on the degree of coincidence. A sub-step of obtaining the position of the search pixel p2 of the common feature amount image, and acquiring parallax information by the stereo matching.
 本発明の一実施形態によれば、前記視差情報取得ステップは、前記第1の補正画像の画素値または前記第2の補正画像の画素値と、前記ステレオマッチングにより取得された視差画像の視差値とをクラスタリングし、該クラスタリングの結果である画素点集合に囲まれた、前記第1の補正画像の画素値または前記第2の補正画像の画素値を1つの物体を表す領域としてクラスタリングする撮像方法である。 According to an embodiment of the present invention, the parallax information acquisition step includes the pixel value of the first corrected image or the pixel value of the second corrected image and the parallax value of the parallax image acquired by the stereo matching. And clustering the pixel values of the first corrected image or the pixel values of the second corrected image surrounded by a set of pixel points as a result of the clustering as a region representing one object It is.
 本発明の一実施形態によれば、第1の波長域の光を受光して、第1の撮像画像を取得する第1の撮像部と、前記第1の波長域とは異なる第2の波長域の光を受光して、第2の撮像画像を取得する第2の撮像部と、前記第1の撮像画像と、前記第2の撮像画像との違いを補正することにより、前記第1の撮像画像から得られた第1の補正画像と前記第2の撮像画像から得られた第2の補正画像とを取得する画像補正部と、前記第1の補正画像と前記第2の補正画像の各々について第1の共通特徴量と第2の共通特徴量とを取得する共通特徴量取得部と、前記第1の共通特徴量と前記第2の共通特徴量とを使用して、ステレオマッチングにより視差情報を取得する視差情報取得部とを備えた撮像装置である。 According to one embodiment of the present invention, a first imaging unit that receives light in a first wavelength range and acquires a first captured image, and a second wavelength that is different from the first wavelength range. By correcting the difference between the second imaging unit that receives the light of the region and acquires the second captured image, the first captured image, and the second captured image, the first captured image An image correction unit for obtaining a first correction image obtained from the captured image and a second correction image obtained from the second captured image; and the first correction image and the second correction image. Stereo matching is performed by using a common feature amount acquisition unit that acquires a first common feature amount and a second common feature amount for each, and the first common feature amount and the second common feature amount. It is an imaging device provided with the parallax information acquisition part which acquires parallax information.
 本発明の一実施形態によれば、第1の波長域の光を受光して撮像された第1の撮像画像と、前記第1の波長域とは異なる第2の波長域の光を受光して撮像された第2の撮像画像とを保持し、各々の撮像画像の違いを補正することにより、前記第1の撮像画像から得られた第1の補正画像と前記第2の撮像画像から得られた第2の補正画像とを取得する画像補正部と、前記第1の補正画像と前記第2の補正画像との共通特徴量に基づき、各々について第1の共通特徴量と第2の共通特徴量とを取得する共通特徴量取得部と、 According to an embodiment of the present invention, a first captured image captured by receiving light in the first wavelength range and light in a second wavelength range different from the first wavelength range are received. The second captured image captured in this manner is retained, and the difference between the captured images is corrected to obtain from the first corrected image obtained from the first captured image and the second captured image. A first common feature amount and a second common feature amount based on a common feature amount of the image correction unit that acquires the second corrected image and the first corrected image and the second corrected image, respectively. A common feature amount acquisition unit for acquiring feature amounts;
 前記第1の共通特徴量と前記第2の共通特徴量とを使用して、ステレオマッチングにより視差情報を取得する視差情報取得部と、を備えた画像処理装置である。 A parallax information acquisition unit that acquires parallax information by stereo matching using the first common feature quantity and the second common feature quantity.
 本発明の一実施形態によれば、不揮発性の記憶媒体に記録されコンピュータに実行させるプログラムであって、第1の波長域の光を受光して、第1の撮像画像を取得するステップと、前記第1の波長域とは異なる第2の波長域の光を受光して、第2の撮像画像を取得するステップと、前記第1の撮像画像と前記第2の撮像画像との違いを補正することにより、各々について第1の補正画像と第2の補正画像とを取得する画像補正ステップと、前記第1の補正画像と前記第2の補正画像との共通特徴量に基づき、各々について第1の共通特徴量と第2の共通特徴量とを取得する共通特徴量取得ステップと、前記第1の共通特徴量と前記第2の共通特徴量とを使用して、ステレオマッチングにより視差情報を取得する視差情報取得ステップと、を有するコンピュータで読み出し可能な記憶媒体に記録されるプログラムである。 According to one embodiment of the present invention, a program recorded in a non-volatile storage medium and executed by a computer, receiving light in a first wavelength range and acquiring a first captured image; Receiving light in a second wavelength range different from the first wavelength range to obtain a second captured image, and correcting a difference between the first captured image and the second captured image Thus, based on the image correction step for obtaining the first correction image and the second correction image for each, and the common feature amount of the first correction image and the second correction image, the first correction image and the second correction image are obtained. Using the common feature amount acquisition step of acquiring one common feature amount and the second common feature amount, and using the first common feature amount and the second common feature amount, the parallax information is obtained by stereo matching. A parallax information acquisition step to be acquired; A program recorded in a computer readable storage medium having.
 本発明の一実施形態によれば、不揮発性の記憶媒体に記録されコンピュータに実行させるプログラムであって、前記第1の共通特徴量および前記第2の共通特徴量は、輝度値の微分値の絶対値である、コンピュータで読み出し可能な記憶媒体に記録されるプログラムである。 According to an embodiment of the present invention, there is provided a program recorded in a non-volatile storage medium and executed by a computer, wherein the first common feature amount and the second common feature amount are differential values of luminance values. It is a program recorded on a computer-readable storage medium that is an absolute value.
 本発明の一実施形態によれば、不揮発性の記憶媒体に記録されコンピュータに実行させるプログラムであって、前記第1の共通特徴量または前記第2の共通特徴量は、R、G、Bの全て、あるいは少なくともいずれかの輝度値の微分値の絶対値に基づき取得された、コンピュータで読み出し可能な記憶媒体に記録されるプログラムである。 According to an embodiment of the present invention, there is provided a program recorded in a non-volatile storage medium and executed by a computer, wherein the first common feature amount or the second common feature amount is R, G, or B. It is a program recorded on a computer-readable storage medium that is acquired based on the absolute value of the derivative value of all or at least one of the luminance values.
 本発明によれば、ステレオマッチング法により視差情報を簡素に取得できるという効果が得られる。 According to the present invention, an effect that the parallax information can be simply obtained by the stereo matching method is obtained.
本発明の一実施形態に係る撮像装置1の構成を示すブロック図である。1 is a block diagram illustrating a configuration of an imaging apparatus 1 according to an embodiment of the present invention. 本発明の第1実施形態に係る撮像装置1の構成を示す構成図である。1 is a configuration diagram illustrating a configuration of an imaging apparatus 1 according to a first embodiment of the present invention. 本発明の第1実施形態に係る撮像部の視野を示す図である。It is a figure which shows the visual field of the imaging part which concerns on 1st Embodiment of this invention. 本発明の第1実施形態に係る世界座標系である車両座標系を示す図である。It is a figure which shows the vehicle coordinate system which is the world coordinate system which concerns on 1st Embodiment of this invention. 本発明の第1実施形態に係るパラメータ取得処理の手順を示すフローチャートである。It is a flowchart which shows the procedure of the parameter acquisition process which concerns on 1st Embodiment of this invention. 本発明の第1実施形態に係る各座標系を示す図である。It is a figure which shows each coordinate system which concerns on 1st Embodiment of this invention. 本発明の第1実施形態に係るラプラシアンフィルタ係数の一例を示す図表である。It is a chart which shows an example of a Laplacian filter coefficient concerning a 1st embodiment of the present invention. 本発明の第1実施形態に係る視差情報取得方法のフローチャートである。It is a flowchart of the parallax information acquisition method which concerns on 1st Embodiment of this invention. 本発明の第1実施形態に係る視差情報取得方法のフローチャートである。It is a flowchart of the parallax information acquisition method which concerns on 1st Embodiment of this invention. 本発明の第1実施形態に係る視差探索方法の概念図である。It is a conceptual diagram of the parallax search method which concerns on 1st Embodiment of this invention. 可視光カメラ110,120の視野FOV110,FOV120を示す図である。It is a figure which shows the visual field FOV110 of the visible light cameras 110 and 120, FOV120.
 以下、図面を参照し、本発明の実施形態について説明する。 Hereinafter, embodiments of the present invention will be described with reference to the drawings.
 図1は、本発明の一実施形態に係る撮像装置1の構成を示すブロック図である。図1において、撮像装置1は、第1の撮像部11と第2の撮像部12と画像処理装置20を備える。画像処理装置20は、画像補正部21とパラメータ取得部22と共通特徴量取得部23と視差情報取得部24を備える。 FIG. 1 is a block diagram showing a configuration of an imaging apparatus 1 according to an embodiment of the present invention. In FIG. 1, the imaging device 1 includes a first imaging unit 11, a second imaging unit 12, and an image processing device 20. The image processing apparatus 20 includes an image correction unit 21, a parameter acquisition unit 22, a common feature amount acquisition unit 23, and a parallax information acquisition unit 24.
 第1の撮像部11は、第1の波長域の光を受光して撮像する。第2の撮像部12は、第1の撮像部11の第1の波長域とは異なる第2の波長域の光を受光して撮像する。第1の波長域の光と第2の波長域の光の組合せとして、例えば、以下の光組合せ例1,2が挙げられる。
光組合せ例1:第1の波長域の光として「可視光」と第2の波長域の光として「遠赤外線」。
光組合せ例2:第1の波長域の光として「可視光」と第2の波長域の光として「近赤外線」。
The first imaging unit 11 receives and captures light in the first wavelength range. The second imaging unit 12 receives and captures light in a second wavelength range different from the first wavelength range of the first imaging unit 11. Examples of combinations of light in the first wavelength range and light in the second wavelength range include the following light combination examples 1 and 2.
Light combination example 1: “visible light” as light in the first wavelength region and “far infrared light” as light in the second wavelength region.
Light combination example 2: “visible light” as light in the first wavelength range and “near infrared” as light in the second wavelength range.
 なお、上記した光組合せ例1,2以外の光の組合せを用いてもよい。例えば、可視光、近赤外線、短波長赤外線、中波長赤外線、長波長赤外線、遠赤外線、及び紫外線の中から、任意の2つの光を組み合わせてもよい。 It should be noted that light combinations other than the light combination examples 1 and 2 described above may be used. For example, you may combine arbitrary two light from visible light, near infrared rays, short wavelength infrared rays, medium wavelength infrared rays, long wavelength infrared rays, far infrared rays, and ultraviolet rays.
 画像処理装置20には、第1の撮像部11で撮像された第1の撮像画像と第2の撮像部12で撮像された第2の撮像画像とが入力される。画像補正部21は、第1の撮像画像と第2の撮像画像との違いを補正する。パラメータ取得部22は、ステレオマッチングのためのパラメータを取得する。共通特徴量取得部23は、画像補正部21で第1の撮像画像から得られた第1の補正画像と第2の撮像画像から得られた第2の補正画像の各々について共通特徴量を取得する。視差情報取得部24は、共通特徴量取得部23で第1の補正画像について取得された共通特徴量と第2の補正画像について取得された共通特徴量を使用してステレオマッチングにより視差情報を取得する。 The first captured image captured by the first imaging unit 11 and the second captured image captured by the second imaging unit 12 are input to the image processing device 20. The image correction unit 21 corrects the difference between the first captured image and the second captured image. The parameter acquisition unit 22 acquires parameters for stereo matching. The common feature amount acquisition unit 23 acquires a common feature amount for each of the first corrected image obtained by the image correction unit 21 from the first captured image and the second corrected image obtained from the second captured image. The parallax information acquisition unit 24 acquires parallax information by stereo matching, using the common feature amount acquired by the common feature amount acquisition unit 23 for the first corrected image and the common feature amount acquired for the second corrected image.
 本実施形態に係る画像処理装置20は、専用のハードウェアにより実現されるものであってもよく、又は、メモリおよびCPU(中央処理装置)により構成され、画像処理装置20の機能を実現するためのコンピュータプログラムをCPUが実行することによりその機能を実現させるものであってもよい。 The image processing apparatus 20 according to the present embodiment may be realized by dedicated hardware, or configured by a memory and a CPU (central processing unit) to realize the functions of the image processing apparatus 20. The function may be realized by the CPU executing the computer program.
 以下、本発明に係る各実施形態を説明する。 Hereinafter, each embodiment according to the present invention will be described.
[第1実施形態]
 第1実施形態は、図1に示される撮像装置1を車両に適用した実施例である。図2は、本発明の第1実施形態に係る撮像装置1の構成を示す構成図である。図2において、車両30には、第1の撮像部11及び第2の撮像部12が設けられる。第1実施形態では、第1の撮像部11は可視光を受光して撮像するものであり、第2の撮像部12は遠赤外線を受光して撮像するものである。第1の撮像部11は、車両30のフロントガラス(Front shield)31の中央上端部に設置されている。第2の撮像部12は、車両30のフロントバンパー32の中央から左にオフセットした位置に設置されている。第1実施形態では、第1の撮像部11を基準カメラとし、第2の撮像部12を比較カメラとする。
[First Embodiment]
1st Embodiment is an Example which applied the imaging device 1 shown by FIG. 1 to the vehicle. FIG. 2 is a configuration diagram showing the configuration of the imaging apparatus 1 according to the first embodiment of the present invention. In FIG. 2, the vehicle 30 is provided with a first imaging unit 11 and a second imaging unit 12. In the first embodiment, the first imaging unit 11 receives visible light and images, and the second imaging unit 12 receives far infrared light and images. The first imaging unit 11 is installed at the center upper end of a windshield (Front shield) 31 of the vehicle 30. The second imaging unit 12 is installed at a position offset to the left from the center of the front bumper 32 of the vehicle 30. In the first embodiment, the first imaging unit 11 is a reference camera, and the second imaging unit 12 is a comparison camera.
 画像処理装置20は、車両30に設けられた運転支援ユニット41に具備される。第1の撮像部11で撮像された第1の撮像画像と第2の撮像部12で撮像された第2の撮像画像とは、運転支援ユニット41に入力される。運転支援ユニット41に入力された第1の撮像画像及び第2の撮像画像は、運転支援ユニット41に備わる画像処理装置20に入力される。 The image processing apparatus 20 is provided in a driving support unit 41 provided in the vehicle 30. The first captured image captured by the first imaging unit 11 and the second captured image captured by the second imaging unit 12 are input to the driving support unit 41. The first captured image and the second captured image input to the driving support unit 41 are input to the image processing device 20 provided in the driving support unit 41.
 車両30にはCAN(Controller Area Network)42が設けられている。運転支援ユニット41や車両30の他の制御ユニット43a~43fはCAN42に接続されている。運転支援ユニット41は、例えば、CAN42を介して、ブレーキ制御ユニット43aや電動パワーステアリング制御ユニット43bへ制御信号を送信する。制御ユニットとして、他にエンジン制御ユニット、車間距離制御ユニットなどが挙げられる。 The vehicle 30 is provided with a CAN (Controller Area Network) 42. The driving support unit 41 and the other control units 43a to 43f of the vehicle 30 are connected to the CAN 42. The driving support unit 41 transmits control signals, for example, to the brake control unit 43a and the electric power steering control unit 43b via the CAN 42. Other examples of the control units include an engine control unit and an inter-vehicle distance control unit.
 なお、第1の撮像部11及び第2の撮像部12がCAN42に接続され、第1の撮像部11がCAN42を介して第1の撮像画像を運転支援ユニット41へ送信し、第2の撮像部12がCAN42を介して第2の撮像画像を運転支援ユニット41へ送信するようにしてもよい。また、第1の撮像部11又は第2の撮像部12のいずれかと運転支援ユニット41とを同一装置として構成してもよい。例えば、第1の撮像部11と運転支援ユニット41を一体化した装置として構成することが挙げられる。 Alternatively, the first imaging unit 11 and the second imaging unit 12 may be connected to the CAN 42, with the first imaging unit 11 transmitting the first captured image to the driving support unit 41 via the CAN 42 and the second imaging unit 12 transmitting the second captured image to the driving support unit 41 via the CAN 42. Either the first imaging unit 11 or the second imaging unit 12 may also be configured as a single device together with the driving support unit 41; for example, the first imaging unit 11 and the driving support unit 41 may be configured as an integrated device.
 図3は、第1実施形態に係る撮像部の視野(field of view:FOV)を示す図である。図3において、第1の撮像部11の視野FOV11と第2の撮像部12の視野FOV12とは異なっている。このため第1の撮像部11で撮像された第1の撮像画像と第2の撮像部12で撮像された第2の撮像画像とは、画角および画素数が異なる。そこで、第1実施形態では、各撮像部11,12での光学系による歪みを補正するキャリブレーションに加えて、第1の撮像画像と第2の撮像画像との画角および画素数の違いを補正する処理を行う。 FIG. 3 is a diagram illustrating the field of view (FOV) of the imaging units according to the first embodiment. In FIG. 3, the field of view FOV11 of the first imaging unit 11 and the field of view FOV12 of the second imaging unit 12 are different. Therefore, the first captured image captured by the first imaging unit 11 and the second captured image captured by the second imaging unit 12 differ in angle of view and number of pixels. In the first embodiment, therefore, in addition to calibration that corrects the distortion caused by the optical system of each of the imaging units 11 and 12, a process of correcting the difference in angle of view and number of pixels between the first captured image and the second captured image is performed.
 次に、画像補正部21の動作を説明する。画像補正部21は歪み補正を行う。この歪み補正を説明する。まず一般的なステレオカメラにおける歪み補正方法を説明する。一般にステレオカメラでは、左右のカメラに対してチェスボードパターンなどのターゲットを見せて「Tsai, Zhang」などの手法により光学系歪み率「κ1,κ2,κ3,・・・,s,f,kx,ky,cx,cy」を求める。但し、「κ1,κ2,κ3,・・・」は、「r2,r4,r6,・・・」に比例した歪み率パラメータである。rは、画像座標(u,v)への光軸座標(cx,cy)からの距離であり、[数1]で算出される。光軸座標(cx,cy)は、画像左上を原点とする画像座標系で表される。 Next, the operation of the image correction unit 21 will be described. The image correction unit 21 performs distortion correction, which is explained here. First, a distortion correction method for a general stereo camera is described. In a general stereo camera, a target such as a chessboard pattern is shown to the left and right cameras, and the optical system distortion parameters "κ1, κ2, κ3, ..., s, f, kx, ky, cx, cy" are obtained by a technique such as those of Tsai or Zhang. Here, "κ1, κ2, κ3, ..." are distortion rate parameters proportional to "r2, r4, r6, ...". r is the distance from the optical axis coordinates (cx, cy) to the image coordinates (u, v), and is calculated by [Equation 1]. The optical axis coordinates (cx, cy) are expressed in an image coordinate system whose origin is the upper left of the image.
[数1] (Equation 1, image): r = √((u − cx)² + (v − cy)²)
 sは、スキュー(skew)歪みであり、光軸の回転角θによる歪み率「s=tanθ」で表される。kx,kyは画素サイズであり、通常、「kx=ky」とする場合が多い。fは、焦点距離である。焦点距離fは、画素サイズkxに対する比率で表される場合がある。 s is a skew distortion, expressed as the distortion rate "s = tanθ" due to the rotation angle θ of the optical axis. kx and ky are the pixel sizes, and usually "kx = ky" is assumed in many cases. f is the focal length. The focal length f is sometimes expressed as a ratio to the pixel size kx.
 さらに、各カメラ座標系について、世界座標系に対するパン(Pan)角、ピッチ(Pitch)角、ロール(Roll)角を求める。そして、左右のカメラの撮像画像の焦点距離f及び画素サイズkx,kyが共通になり、且つ、撮像面とエピポーラ線(epipolar line)が平行になるようにアフィン変換(affine transformation)するための専用ロジック(演算回路又は演算プログラム)に設定するパラメータ又は専用LUT(look up table:参照テーブル)を求める。そして、該パラメータ又は専用LUTを、該専用ロジックに設定する。これは、ステレオマッチングに適した画像を得るための準備である。なお、アフィン変換は、平行移動を伴う回転、拡大又は縮小、剪断などの線形写像変換を行うものである。
 以上が一般的なステレオカメラにおける歪み補正方法である。
Further, for each camera coordinate system, the pan angle, pitch angle, and roll angle with respect to the world coordinate system are obtained. Then, parameters or a dedicated LUT (look-up table) to be set in dedicated logic (an arithmetic circuit or an arithmetic program) for affine transformation are obtained so that the focal length f and the pixel sizes kx, ky of the images captured by the left and right cameras become common and the imaging surfaces become parallel to the epipolar line. The parameters or the dedicated LUT are then set in the dedicated logic. This is a preparation for obtaining images suitable for stereo matching. The affine transformation performs linear mapping transformations such as rotation with translation, enlargement or reduction, and shear.
The above is the distortion correction method in a general stereo camera.
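As a concrete illustration of the general procedure described above, the following is a minimal sketch assuming OpenCV is available; the function and variable names (build_rectify_maps, rectify_pair, K1, d1, K2, d2, R, T, image_size) are assumptions for illustration and do not appear in this specification.

```python
import cv2

def build_rectify_maps(K1, d1, K2, d2, R, T, image_size):
    """Compute rectification maps (the 'dedicated LUT') so that both cameras share
    a common focal length / pixel size and the epipolar lines become parallel."""
    R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K1, d1, K2, d2, image_size, R, T)
    maps1 = cv2.initUndistortRectifyMap(K1, d1, R1, P1, image_size, cv2.CV_32FC1)
    maps2 = cv2.initUndistortRectifyMap(K2, d2, R2, P2, image_size, cv2.CV_32FC1)
    return maps1, maps2

def rectify_pair(img1, img2, maps1, maps2):
    # Resample each captured image through its map pair (x-map, y-map).
    rect1 = cv2.remap(img1, maps1[0], maps1[1], cv2.INTER_LINEAR)
    rect2 = cv2.remap(img2, maps2[0], maps2[1], cv2.INTER_LINEAR)
    return rect1, rect2
```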
 一方、本第1実施形態では、図4に示される世界座標系となる。図4は、第1実施形態に係る世界座標系である車両座標系を示す図である。図4において、車両30には世界座標系{[X,Y,Z],原点O0}が定義されている。第1実施形態では、第1の撮像部11(基準カメラ)が右のカメラであり、第2の撮像部12(比較カメラ)が左のカメラであり、この左右のカメラである第1の撮像部11と第2の撮像部12とで、通常は、焦点距離f及び画素サイズkx,kyが異なっている。この第1の撮像部11、第2の撮像部12で各々撮像された撮像画像のままでは、ステレオマッチングにおけるマッチング演算や視差計算において不都合である。そこで、比較画像(第2の撮像部12による第2の撮像画像)の焦点距離f及び画素サイズkx,kyが基準画像(第1の撮像部11による第1の撮像画像)の焦点距離f及び画素サイズkx,kyと同じになるように、アフィン変換するための専用ロジックに設定するパラメータ又は専用LUTを設定する。これにより、ステレオマッチングでのマッチング演算が基準画像と比較画像で同じ画素数のテンプレートを使用でき、又、視差計算が画素数を数えるだけでよく、簡単になる。 On the other hand, the first embodiment uses the world coordinate system shown in FIG. 4. FIG. 4 is a diagram illustrating the vehicle coordinate system, which is the world coordinate system according to the first embodiment. In FIG. 4, a world coordinate system {[X, Y, Z], origin O0} is defined for the vehicle 30. In the first embodiment, the first imaging unit 11 (reference camera) is the right camera and the second imaging unit 12 (comparison camera) is the left camera, and the focal length f and the pixel sizes kx, ky usually differ between these left and right cameras, the first imaging unit 11 and the second imaging unit 12. Using the captured images of the first imaging unit 11 and the second imaging unit 12 as they are is inconvenient for the matching computation and the parallax calculation in stereo matching. Therefore, parameters or a dedicated LUT to be set in the dedicated logic for affine transformation are set so that the focal length f and the pixel sizes kx, ky of the comparison image (the second captured image by the second imaging unit 12) become the same as the focal length f and the pixel sizes kx, ky of the reference image (the first captured image by the first imaging unit 11). As a result, the matching computation in stereo matching can use templates with the same number of pixels for the reference image and the comparison image, and the parallax calculation only needs to count pixels, which simplifies the processing.
 一般に、焦点距離fが同一であり、且つ、撮像面とエピポーラ線が平行になることにより、左右のカメラの撮像面は同一平面上になる。しかし、第1実施形態では、第1の撮像部11(基準カメラ)と第2の撮像部12(比較カメラ)とでエピポーラ線を平行にすることはできるが、カメラ設置位置の制限によって、撮像面を同一平面にすることはできない。具体的には前後に段違いの状態で各撮像画像が存在することになる。そこで、第1実施形態では、撮像面が同一平面でない状態でのステレオマッチング法として、エピポーラ幾何(epipolar geometry)に基づく後述の方法を使用する。 In general, when the focal lengths f are the same and the imaging surfaces are parallel to the epipolar line, the imaging surfaces of the left and right cameras lie on the same plane. In the first embodiment, however, although the epipolar lines of the first imaging unit 11 (reference camera) and the second imaging unit 12 (comparison camera) can be made parallel, the imaging surfaces cannot be made coplanar because of restrictions on the camera installation positions. Specifically, the captured images exist at different levels in the front-rear direction. Therefore, in the first embodiment, a method described later based on epipolar geometry is used as the stereo matching method for the state where the imaging surfaces are not on the same plane.
 画像補正部21は、比較画像(第2の撮像部12による第2の撮像画像)に対して、上記で求めたパラメータ又は専用LUTを用いてアフィン変換することにより、歪み補正(WARP)を行う。この歪み補正後の比較画像(第2の補正画像)の焦点距離f及び画素サイズkx,kyは、基準画像(第1の撮像部11による第1の撮像画像(第1の補正画像として扱う))の焦点距離f及び画素サイズkx,kyと同じになる。これにより、第1の撮像部11で撮像された第1の撮像画像と、第2の撮像部12で撮像された第2の撮像画像との画角および画素数の違いが補正されたことになる。 The image correction unit 21 performs distortion correction (WARP) on the comparison image (the second captured image by the second imaging unit 12) by affine transformation using the parameters or the dedicated LUT obtained above. The focal length f and the pixel sizes kx, ky of the comparison image after this distortion correction (the second corrected image) become the same as the focal length f and the pixel sizes kx, ky of the reference image (the first captured image by the first imaging unit 11, treated as the first corrected image). As a result, the difference in angle of view and number of pixels between the first captured image captured by the first imaging unit 11 and the second captured image captured by the second imaging unit 12 has been corrected.
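The WARP step described above can be sketched as follows, under the assumption that the dedicated LUT is a pair of per-pixel coordinate maps; the names lut_u and lut_v and the image size are illustrative only.

```python
import cv2
import numpy as np

def warp_comparison_image(comparison_img, lut_u, lut_v):
    """Resample the comparison image through the precomputed LUT (float32 source
    coordinates per output pixel) so that its focal length and pixel size match
    those of the reference image."""
    return cv2.remap(comparison_img, lut_u, lut_v, cv2.INTER_LINEAR)

# Example with a dummy identity LUT (no geometric change), for illustration only:
h, w = 480, 640
lut_u, lut_v = np.meshgrid(np.arange(w, dtype=np.float32),
                           np.arange(h, dtype=np.float32))
second_corrected = warp_comparison_image(np.zeros((h, w), np.uint8), lut_u, lut_v)
```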
 以上が画像補正部21の動作の説明である。 The above is the description of the operation of the image correction unit 21.
 次に、図5を参照して第1実施形態に係るパラメータ取得部22の動作を説明する。図5は、第1実施形態に係るパラメータ取得処理の手順を示すフローチャートである。 Next, the operation of the parameter acquisition unit 22 according to the first embodiment will be described with reference to FIG. FIG. 5 is a flowchart showing a procedure of parameter acquisition processing according to the first embodiment.
 (ステップS11)パラメータ取得部22はパラメータ(E1,K1,E2,K2)の算出を行う。このパラメータ(E1,K1,E2,K2)の算出を説明する。図6は、第1実施形態に係る各座標系を示す図である。図6には、世界座標系(車両座標系){[X,Y,Z],原点O0}と、基準カメラ(第1の撮像部11)座標系{[U1,V1,W1],原点O1(焦点)}と、比較カメラ(第2の撮像部12)座標系{[U2,V2,W2],原点O2(焦点)}と、基準画像(第1の撮像部11の撮像画像)座標系{[u1,v1,1],原点o1(撮像画像の左上)}と、比較画像(第2の撮像部12の撮像画像)座標系{[u2,v2,1],原点o2(撮像画像の左上)}と、が示されている。図6において、S1は基準カメラ(第1の撮像部11)のスクリーン(撮像面)、[cx1,cy1,1]はスクリーンS1の画像中心、51は基準カメラ光軸である。S2は比較カメラ(第2の撮像部12)のスクリーン(撮像面)、[cx2,cy2,1]はスクリーンS2の画像中心、52は比較カメラ光軸である。53はエピポーラ面(epipolar plane)である。e1,e2はエピポーラ点(epipolar point)である(エピポール(epipole)と呼ばれることもある)。(e1-e2)はエピポーラ線(epipolar line)である。なお、図6には、比較カメラのスクリーンS2の焦点距離f及び画素サイズkx,kyは、基準カメラのスクリーンS1の焦点距離f及び画素サイズkx,kyと同じに調節後として示している。 (Step S11) The parameter acquisition unit 22 calculates the parameters (E1, K1, E2, K2). The calculation of these parameters (E1, K1, E2, K2) is explained here. FIG. 6 is a diagram illustrating the coordinate systems according to the first embodiment. FIG. 6 shows the world coordinate system (vehicle coordinate system) {[X, Y, Z], origin O0}, the reference camera (first imaging unit 11) coordinate system {[U1, V1, W1], origin O1 (focal point)}, the comparison camera (second imaging unit 12) coordinate system {[U2, V2, W2], origin O2 (focal point)}, the reference image (captured image of the first imaging unit 11) coordinate system {[u1, v1, 1], origin o1 (upper left of the captured image)}, and the comparison image (captured image of the second imaging unit 12) coordinate system {[u2, v2, 1], origin o2 (upper left of the captured image)}. In FIG. 6, S1 is the screen (imaging surface) of the reference camera (first imaging unit 11), [cx1, cy1, 1] is the image center of the screen S1, and 51 is the optical axis of the reference camera. S2 is the screen (imaging surface) of the comparison camera (second imaging unit 12), [cx2, cy2, 1] is the image center of the screen S2, and 52 is the optical axis of the comparison camera. 53 is the epipolar plane. e1 and e2 are epipolar points (sometimes called epipoles). (e1-e2) is the epipolar line. In FIG. 6, the focal length f and the pixel sizes kx, ky of the screen S2 of the comparison camera are shown after being adjusted to be the same as the focal length f and the pixel sizes kx, ky of the screen S1 of the reference camera.
 図6において、世界座標系での物体50の位置をP0とし、各カメラ座標系での物体50の位置をPcとする(cは1(基準カメラ座標系)又は2(比較カメラ座標系))。すると、物体50の位置P0,Pcの関係は[数2]で表される。 In FIG. 6, the position of the object 50 in the world coordinate system is P0, and the position of the object 50 in each camera coordinate system is Pc (c is 1 (reference camera coordinate system) or 2 (comparison camera coordinate system)). Then, the relationship between the positions P0 and Pc of the object 50 is expressed by [Equation 2].
[数2] (Equation 2, image): the relationship between the world-coordinate position P0 and the camera-coordinate position Pc, expressed with the rotation matrix Rc and the mounting position Tc
 [数2]において、Rcは、各カメラ(第1の撮像部11、第2の撮像部12)の車両30への取り付けによって生じるパン角、ピッチ角、ロール角で構成される回転行列である。Tcは、世界座標系の原点O0に対するカメラ前主点の取り付け位置を表す。 In [Equation 2], Rc is a rotation matrix composed of the pan angle, pitch angle, and roll angle that result from mounting each camera (the first imaging unit 11 and the second imaging unit 12) on the vehicle 30. Tc represents the mounting position of the front principal point of each camera with respect to the origin O0 of the world coordinate system.
 また、[数3]の様に変形した場合のEを外部カメラパラメータと称する。 In addition, E c in the case of transformation as in [Equation 3] is referred to as an external camera parameter.
[数3] (Equation 3, image): the same relationship rearranged into the form in which the external camera parameter Ec appears
 また、カメラ座標系での物体50の位置Pcと、スクリーンSc上の画像座標系での物体50の位置pcとの関係は、[数4]で表される。 The relationship between the position Pc of the object 50 in the camera coordinate system and the position pc of the object 50 in the image coordinate system on the screen Sc is expressed by [Equation 4].
[数4] (Equation 4, image): the projection of the camera-coordinate position Pc onto the image coordinates pc on the screen Sc through the internal camera parameter Kc
 但し、hはPcの第3要素wcである。fはスクリーンS1,S2で同じに調整後の焦点距離である。kx,kyはスクリーンS1,S2で同じに調整後の画素サイズである。cx,cyはスクリーンSc上の画像中心(光軸位置)である。
 [数4]におけるKcを内部カメラパラメータと称する。
Here, h is the third element wc of Pc. f is the focal length after being adjusted to be the same for the screens S1 and S2. kx and ky are the pixel sizes after being adjusted to be the same for the screens S1 and S2. cx and cy are the image center (optical axis position) on the screen Sc.
Kc in [Equation 4] is referred to as an internal camera parameter.
 これにより、基準カメラ(第1の撮像部11)に対して[数5]に示されるパラメータ行列E1,K1が得られ、比較カメラ(第2の撮像部12)に対して[数5]に示されるパラメータ行列E2,K2が得られる。 As a result, the parameter matrices E1 and K1 shown in [Equation 5] are obtained for the reference camera (first imaging unit 11), and the parameter matrices E2 and K2 shown in [Equation 5] are obtained for the comparison camera (second imaging unit 12).
[数5] (Equation 5, image): the parameter matrices E1, K1 for the reference camera and E2, K2 for the comparison camera
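Although the images of [数2] to [数5] are not reproduced here, the external and internal camera parameters described above are commonly arranged as follows; this is a hedged reconstruction using one standard convention and is not necessarily the exact layout of the original equations.

```latex
P_c = R_c P_0 + t_c, \qquad
E_c = [\, R_c \mid t_c \,] \quad (3 \times 4), \qquad
h\, p_c = K_c P_c, \qquad
K_c = \begin{pmatrix} f/k_x & s & c_x \\ 0 & f/k_y & c_y \\ 0 & 0 & 1 \end{pmatrix},
```

where t_c is the translation determined by the mounting position T_c, and p_c = [u, v, 1]^T is the homogeneous image coordinate.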
 (ステップS12)パラメータ取得部22は、基本行列(E0)の算出を行う。この基本行列(E0)の算出を説明する。まず、[数3]において「c=1」とした[数6]を変形すると[数7]となる。 (Step S12) The parameter acquisition unit 22 calculates the essential matrix (E0). The calculation of this essential matrix (E0) is explained here. First, transforming [Equation 6], which is [Equation 3] with c = 1, yields [Equation 7].
[数6] (Equation 6, image): [数3] written for c = 1
[数7] (Equation 7, image): [数6] rearranged to express P0
 次いで、[数3]において「c=2」とした式に[数7]を代入すると[数8]となる。 Next, when [Equation 7] is substituted into the equation [c = 2] in [Equation 3], [Equation 8] is obtained.
[数8] (Equation 8, image): the result of substituting [数7] into [数3] with c = 2
 この[数8]において、[数9]とすると[数10]が得られる。 In [Equation 8], defining the quantities as in [Equation 9] yields [Equation 10].
[数9] (Equation 9, image): the definitions of R0 and T0
[数10] (Equation 10, image): the coordinate transformation P2 = R0·P1 + T0 from the reference camera coordinate system to the comparison camera coordinate system
 この[数10]は、基準カメラ座標系の位置P1から比較カメラ座標系の位置P2へ座標変換する式である。R0,T0は座標変換行列であり、R0は3行3列、T0は3行1列、となる。 [Equation 10] is the equation for the coordinate transformation from the position P1 in the reference camera coordinate system to the position P2 in the comparison camera coordinate system. R0 and T0 are coordinate transformation matrices; R0 has 3 rows and 3 columns, and T0 has 3 rows and 1 column.
 また、図6において、基準カメラ座標系の原点O1から位置P1に至るベクトルと、比較カメラ座標系の原点O2から位置P2に至るベクトルと、比較カメラ座標系の原点O2から基準カメラ座標系の原点O1に至るベクトルとは、同じエピポーラ面53上にあるので、外積の内積がゼロになる。このことから、以下に示すように、エピポーラ方程式の基本行列E0を求める。 In FIG. 6, the vector from the origin O1 of the reference camera coordinate system to the position P1, the vector from the origin O2 of the comparison camera coordinate system to the position P2, and the vector from the origin O2 of the comparison camera coordinate system to the origin O1 of the reference camera coordinate system all lie on the same epipolar plane 53, so the inner product with their outer product (the scalar triple product) is zero. From this, the essential matrix E0 of the epipolar equation is obtained as shown below.
 まず、[数11]に示される3つのベクトルの中から、任意の2つのベクトルの外積は[数12]となる。 First, among the three vectors shown in [Equation 11], the outer product of any two vectors becomes [Equation 12].
[数11] (Equation 11, image): the three vectors lying on the epipolar plane 53
[数12] (Equation 12, image): the cross product of two of these vectors, using T0 × T0 = 0
 但し、「T0×T0=0」である。 Here, "T0 × T0 = 0".
 次いで、[数12]に対して内積をとると[数13]となる。 Next, taking the inner product for [Equation 12] yields [Equation 13].
[数13] (Equation 13, image): the inner product taken with [数12], which equals zero
 次いで、[数13]に対して内積をとると[数14]となる。 Next, taking the inner product for [Equation 13] yields [Equation 14].
[数14] (Equation 14, image): the epipolar equation that yields the essential matrix E0
 この[数14]により、エピポーラ方程式の基本行列E0が得られる。E0は3行3列である。 From [Equation 14], the essential matrix E0 of the epipolar equation is obtained. E0 has 3 rows and 3 columns.
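For reference, the chain from [数11] to [数14] corresponds to the following standard relations, written here as a hedged reconstruction (the original equation images are not reproduced), with [T_0]_× denoting the skew-symmetric matrix that implements the cross product with T_0.

```latex
T_0 \times P_2 = T_0 \times (R_0 P_1 + T_0) = T_0 \times R_0 P_1, \qquad
P_2^{\top} \left( T_0 \times R_0 P_1 \right) = 0
\;\Longrightarrow\;
P_2^{\top} E_0 P_1 = 0, \qquad E_0 = [T_0]_{\times} R_0 .
```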
 (ステップS13)パラメータ取得部22は、基礎行列(F0)の算出を行う。この基礎行列(F0)の算出を説明する。図6において、基準カメラ座標系のスクリーンS1上での物体50の画像座標をp1とし、比較カメラ座標系のスクリーンS2上での物体50の画像座標をp2とすると、[数15]である。 (Step S13) The parameter acquisition unit 22 calculates the fundamental matrix (F0). The calculation of this fundamental matrix (F0) is explained here. In FIG. 6, when the image coordinates of the object 50 on the screen S1 of the reference camera coordinate system are p1 and the image coordinates of the object 50 on the screen S2 of the comparison camera coordinate system are p2, [Equation 15] holds.
[数15] (Equation 15, image): the projections of the object 50 onto the screens S1 and S2, giving the image coordinates p1 and p2
 この[数15]を変形すると[数16]となる。 When this [Equation 15] is transformed, it becomes [Equation 16].
[数16] (Equation 16, image): [数15] rearranged to express P1 and P2 in terms of p1 and p2
 この[数16]を[数13]に代入すると[数17]となる。 Substituting [Equation 16] into [Equation 13] yields [Equation 17].
[数17] (Equation 17, image): the result of substituting [数16] into [数13]
 この[数17]により、基礎行列F0が[数18]として得られる。F0は3行3列である。 From [Equation 17], the fundamental matrix F0 is obtained as [Equation 18]. F0 has 3 rows and 3 columns.
[数18] (Equation 18, image): the fundamental matrix F0
 この基礎行列F0により、基準画像上の位置p1に対する、比較画像上での位置p2、を探索するときのエピポーラ制約(epipolar constraint)I2(図6参照)を求めることができる。エピポーラ制約I2は、[数19]により、3行1列の行列として求められる。 Using this fundamental matrix F0, the epipolar constraint I2 (see FIG. 6) used when searching for the position p2 on the comparison image corresponding to the position p1 on the reference image can be obtained. The epipolar constraint I2 is obtained as a 3-row, 1-column matrix by [Equation 19].
[数19] (Equation 19, image): the epipolar constraint I2 computed from F0 and p1
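Similarly, [数15] to [数19] correspond to the following standard relations, given here as a hedged reconstruction rather than the original equation images.

```latex
h_1 p_1 = K_1 P_1, \quad h_2 p_2 = K_2 P_2
\;\Longrightarrow\;
p_2^{\top} \underbrace{K_2^{-\top} E_0 K_1^{-1}}_{F_0} p_1 = 0, \qquad
I_2 = F_0\, p_1 = [\, a,\; b,\; c \,]^{\top},
```

so that candidate points p_2 = [u_2, v_2, 1]^T on the comparison image satisfy p_2^T I_2 = 0.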
 以上がパラメータ取得部22の動作の説明である。 The above is the description of the operation of the parameter acquisition unit 22.
 次に、共通特徴量取得部23の動作を説明する。共通特徴量取得部23は、画像補正部21によって画角および画素数の違いが補正された第1の補正画像と第2の補正画像の各々から、共通特徴量を取得する。第2の補正画像(比較画像)の焦点距離f及び画素サイズkx,kyは、第1の補正画像(基準画像)の焦点距離f及び画素サイズkx,kyと同じに補正されている。 Next, the operation of the common feature amount acquisition unit 23 will be described. The common feature amount acquisition unit 23 acquires a common feature amount from each of the first corrected image and the second corrected image whose differences in angle of view and number of pixels have been corrected by the image correction unit 21. The focal length f and the pixel sizes kx, ky of the second corrected image (comparison image) have been corrected to be the same as the focal length f and the pixel sizes kx, ky of the first corrected image (reference image).
 第1実施形態では、第1の撮像部11は可視光を受光して撮像するもの(可視光カメラ)であり、第2の撮像部12は遠赤外線を受光して撮像するもの(遠赤外線カメラ)である。このように、各々異なる波長帯の光を受光して撮像された各撮像画像の輝度値が意味する物理量は異なる。可視光カメラの撮像画像の輝度値は、可視光線の光量に応じた値であり、物体表面の色や明るさを意味する。一方、遠赤外線カメラの輝度値は、物体表面から黒体放射によって放出される遠赤外線量に応じた値であり、物体表面の温度を意味する。したがって、このままの各撮像画像の輝度値を使用してステレオマッチングすることには意味がない。そこで、第1実施形態では、各撮像画像で共通に存在する共通特徴量として物体の輪郭の情報を使用することを想到した。具体的には、いずれの撮像画像も物体の輪郭で輝度値が大きく変化することから、共通特徴量として、輝度値の微分値の絶対値を使用する。 In the first embodiment, the first image pickup unit 11 receives visible light to pick up an image (visible light camera), and the second image pickup unit 12 receives far infrared light to pick up an image (far infrared camera). ). In this way, the physical quantities that are meant by the luminance values of the captured images captured by receiving light in different wavelength bands are different. The luminance value of the captured image of the visible light camera is a value corresponding to the amount of visible light, and means the color and brightness of the object surface. On the other hand, the luminance value of the far-infrared camera is a value corresponding to the amount of far-infrared emitted from the object surface by black body radiation, and means the temperature of the object surface. Therefore, it is meaningless to perform stereo matching using the luminance value of each captured image as it is. Therefore, in the first embodiment, it has been conceived that information on the contour of an object is used as a common feature amount that exists in common in each captured image. Specifically, since the brightness value of each captured image varies greatly with the contour of the object, the absolute value of the differential value of the brightness value is used as the common feature amount.
 以下、第1実施形態に係る共通特徴量取得方法を説明する。共通特徴量取得部23は、第1の補正画像と第2の補正画像の各々に対して、u,v方向の二次微分となるラプラシアンフィルタ(laplacian filter)処理を行う。このラプラシアンフィルタ処理では、ラプラシアンフィルタ係数を補正画像の画素と一対一で掛け合わせ、その積の総和を算出する(convolution:畳み込み積分)。図7は、第1実施形態に係るラプラシアンフィルタ係数の一例を示す図表である。図7の例は、5×5のラプラシアンフィルタ係数である。 Hereinafter, the common feature amount acquisition method according to the first embodiment will be described. The common feature amount acquisition unit 23 performs a Laplacian filter process that is a second order differential in the u and v directions on each of the first correction image and the second correction image. In this Laplacian filter processing, the Laplacian filter coefficient is multiplied by one-to-one with the pixel of the corrected image, and the sum of the products is calculated (convolution: convolution integration). FIG. 7 is a chart showing an example of Laplacian filter coefficients according to the first embodiment. The example of FIG. 7 is a 5 × 5 Laplacian filter coefficient.
 次いで、共通特徴量取得部23は、ラプラシアンフィルタ処理後の第1の補正画像の微分値の絶対値を算出する。この絶対値が、第1の補正画像の共通特徴量である。この第1の補正画像の共通特徴量から成る画像(第1の共通特徴量画像)は、視差情報取得部24でのステレオマッチングで使用される。また、共通特徴量取得部23は、ラプラシアンフィルタ処理後の第2の補正画像の微分値の絶対値を算出する。この絶対値が、第2の補正画像の共通特徴量である。この第2の補正画像の共通特徴量から成る画像(第2の共通特徴量画像)は、視差情報取得部24でのステレオマッチングで使用される。 Next, the common feature amount acquisition unit 23 calculates the absolute value of the differential value of the first corrected image after the Laplacian filter processing. This absolute value is the common feature amount of the first corrected image. The image composed of the common feature amount of the first corrected image (first common feature amount image) is used for stereo matching in the parallax information acquisition unit 24. Further, the common feature amount acquisition unit 23 calculates the absolute value of the differential value of the second corrected image after the Laplacian filter processing. This absolute value is the common feature amount of the second corrected image. The image composed of the common feature amount of the second corrected image (second common feature amount image) is used for stereo matching in the parallax information acquisition unit 24.
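A minimal sketch of this computation is shown below, assuming Python with NumPy and SciPy; the 5×5 coefficients of FIG. 7 are not reproduced in this text, so a standard 3×3 Laplacian kernel is used as a stand-in.

```python
import numpy as np
from scipy.ndimage import convolve

# Second-derivative (Laplacian-style) kernel used as a stand-in for the 5x5
# coefficients of FIG. 7, which are not reproduced in this text.
LAPLACIAN_KERNEL = np.array([[0,  1, 0],
                             [1, -4, 1],
                             [0,  1, 0]], dtype=np.float32)

def common_feature_image(gray_img):
    """Convolve with the kernel (second derivative in u and v) and take the
    absolute value, which is the common feature amount of the corrected image."""
    lap = convolve(gray_img.astype(np.float32), LAPLACIAN_KERNEL, mode="nearest")
    return np.abs(lap)

# feature1 = common_feature_image(first_corrected_image)    # visible-light side
# feature2 = common_feature_image(second_corrected_image)   # far-infrared side
```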
 なお可視光で受光して撮像した第1の撮像画像は、R,G,Bのカラー画像として取得することができる。この場合は、画像補正部21により補正された第1の補正画像も、R,G,Bのカラー画像として取得することが可能である。そこで他の実施形態として、共通特徴量取得部23では、まず、ラプラシアンフィルタ処理をこれらR,G,Bのカラー画像として取得された第1の補正画像のそれぞれに対して行う。これにより、第1の共通特徴量からなる画像として、R,G,Bのそれぞれに対応する、微分値の絶対値により構成されるデータを算出する。そしてこれらR,G,Bのそれぞれの微分値の絶対値のデータに基づき、ステレオマッチングを行う際に、最も効果的な第1の共通特徴量を求めることも可能である。その際、R,G,Bの微分値の絶対値のデータの全てを用いても良いし、R,G,Bのいずれか1つまたは2つを選択して用いても良い。これにより、一層正確なステレオマッチング処理が可能となる。 Note that the first captured image received by visible light and captured can be acquired as an R, G, B color image. In this case, the first corrected image corrected by the image correction unit 21 can also be acquired as an R, G, B color image. Therefore, as another embodiment, the common feature amount acquisition unit 23 first performs Laplacian filter processing on each of the first correction images acquired as the R, G, and B color images. As a result, data composed of absolute values of differential values corresponding to R, G, and B is calculated as an image composed of the first common feature amount. Based on the absolute value data of the differential values of R, G, and B, the most effective first common feature amount can be obtained when performing stereo matching. At this time, all of the absolute value data of the differential values of R, G, and B may be used, or any one or two of R, G, and B may be selected and used. As a result, a more accurate stereo matching process can be performed.
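One plausible realization of this per-channel variant, under the assumption that the "most effective" feature is taken as the largest per-pixel response among the three channels, is sketched below.

```python
import numpy as np
from scipy.ndimage import convolve

LAPLACIAN_KERNEL = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=np.float32)

def rgb_common_feature_image(rgb_img):
    """Absolute Laplacian response per R, G, B channel, combined by taking the
    per-pixel maximum (one possible reading of 'most effective')."""
    responses = [np.abs(convolve(rgb_img[..., ch].astype(np.float32),
                                 LAPLACIAN_KERNEL, mode="nearest"))
                 for ch in range(3)]
    return np.max(np.stack(responses, axis=0), axis=0)
```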
 以上が共通特徴量取得部23の動作の説明である。 The above is the description of the operation of the common feature amount acquisition unit 23.
 次に、視差情報取得部24の動作を説明する。視差情報取得部24は、共通特徴量取得部23で得られた第1の共通特徴量画像と第2の共通特徴量画像を使用してステレオマッチングを行い、視差情報を取得する。図8、図9は、第1実施形態に係る視差情報取得方法のフローチャートである。以下、図8、図9を参照して、第1実施形態に係る視差情報取得方法を説明する。 Next, the operation of the parallax information acquisition unit 24 will be described. The disparity information acquisition unit 24 performs stereo matching using the first common feature amount image and the second common feature amount image obtained by the common feature amount acquisition unit 23, and acquires disparity information. 8 and 9 are flowcharts of the disparity information acquisition method according to the first embodiment. Hereinafter, the parallax information acquisition method according to the first embodiment will be described with reference to FIGS. 8 and 9.
(ステップS21)画像処理装置20が、第1の撮像部11で撮像された第1の撮像画像と第2の撮像部12で撮像された第2の撮像画像とを入力する。 (Step S <b> 21) The image processing apparatus 20 inputs the first captured image captured by the first imaging unit 11 and the second captured image captured by the second imaging unit 12.
 (ステップS22)画像補正部21が、ステップS21で入力された第1の撮像画像と第2の撮像画像に対する歪み補正(WARP)を行う。例えば、画像補正部21が、第2の撮像画像に対して専用LUTを用いたアフィン変換を行う。該専用LUTは、第2の撮像画像の焦点距離f及び画素サイズkx,kyが、アフィン変換により、第1の撮像画像の焦点距離f及び画素サイズkx,kyと同じになるように、事前に準備されたものである。 (Step S22) The image correction unit 21 performs distortion correction (WARP) on the first captured image and the second captured image input in step S21. For example, the image correction unit 21 performs affine transformation using a dedicated LUT on the second captured image. The dedicated LUT is prepared in advance so that, through the affine transformation, the focal length f and the pixel sizes kx, ky of the second captured image become the same as the focal length f and the pixel sizes kx, ky of the first captured image.
(ステップS23)共通特徴量取得部23が、ステップS22の歪み補正により得られた第1の補正画像と第2の補正画像の各々に対して、ラプラシアンフィルタ処理を行う。 (Step S23) The common feature amount acquisition unit 23 performs Laplacian filter processing on each of the first correction image and the second correction image obtained by the distortion correction in step S22.
 (ステップS24)共通特徴量取得部23が、ステップS23のラプラシアンフィルタ処理後の第1の補正画像の微分値の絶対値を算出し、この絶対値(第1の補正画像の共通特徴量)から成る第1の共通特徴量画像を得る。また、共通特徴量取得部23が、ステップS23のラプラシアンフィルタ処理後の第2の補正画像の微分値の絶対値を算出し、この絶対値(第2の補正画像の共通特徴量)から成る第2の共通特徴量画像を得る。 (Step S24) The common feature amount acquisition unit 23 calculates the absolute values of the differential values of the first corrected image after the Laplacian filter processing in step S23, and obtains a first common feature amount image composed of these absolute values (the common feature amount of the first corrected image). The common feature amount acquisition unit 23 also calculates the absolute values of the differential values of the second corrected image after the Laplacian filter processing in step S23, and obtains a second common feature amount image composed of these absolute values (the common feature amount of the second corrected image).
 (ステップS25)視差情報取得部24が、ステップS24で得られた第1の共通特徴量画像(基準画像)において探索開始基準画素「p1=[u1 v1 1]^T」を決める。例えば、第1の共通特徴量画像の一番左上の画素を探索開始基準画素とする。探索開始基準画素は、第1の共通特徴量画像の最初の探索画素となる。 (Step S25) The parallax information acquisition unit 24 determines a search start reference pixel "p1 = [u1 v1 1]^T" in the first common feature amount image (reference image) obtained in step S24. For example, the upper-left pixel of the first common feature amount image is set as the search start reference pixel. The search start reference pixel becomes the first search pixel of the first common feature amount image.
 (ステップS26)視差情報取得部24が、第1の共通特徴量画像の探索画素p1に対する、第2の共通特徴量画像(比較画像)上での位置p2、を探索するときのエピポーラ制約I2を求める。エピポーラ制約I2は上記[数19]により求められる。エピポーラ制約I2は比較画像探索線の係数となる。なお、視差情報取得部24は、事前に算出された基礎行列F0を保持している。 (Step S26) The parallax information acquisition unit 24 obtains the epipolar constraint I2 used when searching for the position p2 on the second common feature amount image (comparison image) corresponding to the search pixel p1 of the first common feature amount image. The epipolar constraint I2 is obtained by [Equation 19] above and gives the coefficients of the search line on the comparison image. The parallax information acquisition unit 24 holds the fundamental matrix F0 calculated in advance.
(ステップS27)視差情報取得部24が、第2の共通特徴量画像において探索開始横位置uを「u=u」に設定する。この探索開始横位置は第2の共通特徴量画像の最初の探索横位置となる。 (Step S27) The parallax information acquisition unit 24 sets the search start lateral position u 2 to “u 2 = u 1 ” in the second common feature amount image. This search start lateral position is the first search lateral position of the second common feature amount image.
 (ステップS28)視差情報取得部24が、第2の共通特徴量画像において探索横位置u2に対応する探索縦位置v2を、図10に示されるようにエピポーラ制約I2に従って求める。図10は第1実施形態に係る視差探索方法の概念図である。
該探索縦位置v2は、エピポーラ制約I2の要素(エピポーラ制約線係数)a,b,cを使用して「(c-(a×u2))/b」により算出される。
(Step S28) The parallax information acquisition unit 24 obtains the search vertical position v2 corresponding to the search lateral position u2 in the second common feature amount image according to the epipolar constraint I2, as shown in FIG. 10. FIG. 10 is a conceptual diagram of the parallax search method according to the first embodiment.
The search vertical position v2 is calculated by "(c − (a × u2)) / b" using the elements (epipolar constraint line coefficients) a, b, and c of the epipolar constraint I2.
 ここで、ステップS28で算出された「p=[u v 1]」が第2の共通特徴量画像の範囲外である場合には、次のステップS29を飛ばしてステップS30へ進む(図8中には図示せず)。なお、第2の共通特徴量画像において、最初の探索横位置「u=u」と、この探索横位置uに対応する探索縦位置vとを有する探索位置「p=[u v 1]」は、探索開始位置(初期画素位置)である。 Here, when “p 2 = [u 2 v 2 1] T ” calculated in step S28 is outside the range of the second common feature amount image, the next step S29 is skipped and the process proceeds to step S30. (Not shown in FIG. 8). In the second common feature image, a search position “p 2 = [u” having an initial search horizontal position “u 2 = u 1 ” and a search vertical position v 2 corresponding to the search horizontal position u 2. 2 v 2 1] T ”is a search start position (initial pixel position).
(ステップS29)視差情報取得部24が、着目画素p,pを中心にして、第1の共通特徴量画像と第2の共通特徴量画像の一致度を計算する。この一致度の計算(matching)の方法として、例えば、SSD(Sum of Squared Difference:差の単純合計)、SAD(Sum of Absolute Difference:差の絶対値の合計)、NCC(Normalized Cross-Correlation:正規相互相関)、ZNCC(Zero-mean Normalized Cross-Correlation:正規化相互相関)、SGM(Semi-Global Matching)などの方法が挙げられる。 (Step S29) The parallax information acquisition unit 24 calculates the degree of coincidence between the first common feature amount image and the second common feature amount image with the pixels of interest p 1 and p 2 as the center. As a method of calculating the degree of matching, for example, SSD (Sum of Squared Difference), SAD (Sum of Absolute Difference), NCC (Normalized Cross-Correlation: normal) Examples thereof include cross-correlation), ZNCC (Zero-mean Normalized Cross-Correlation), and SGM (Semi-Global Matching).
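For illustration, two of the measures listed above (SAD and ZNCC) can be sketched as follows for small windows cut out of the two common feature amount images; window extraction and boundary handling are simplified assumptions.

```python
import numpy as np

def sad(win1, win2):
    # Sum of Absolute Differences: a smaller value means a better match.
    return float(np.sum(np.abs(win1.astype(np.float32) - win2.astype(np.float32))))

def zncc(win1, win2):
    # Zero-mean Normalized Cross-Correlation: a value closer to 1 means a better match.
    a = win1.astype(np.float32) - win1.mean()
    b = win2.astype(np.float32) - win2.mean()
    return float(np.sum(a * b) / (np.sqrt(np.sum(a * a) * np.sum(b * b)) + 1e-12))
```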
(ステップS30)視差情報取得部24が、第2の共通特徴量画像の探索横位置uをインクリメントする。 (Step S30) The parallax information acquisition unit 24 increments the search lateral position u2 of the second common feature amount image.
(ステップS31)視差情報取得部24は、第2の共通特徴量画像において探索横位置uが第2の共通特徴量画像の端まで達しているかを判断する。この判断の結果、第2の共通特徴量画像の端まで達している場合には図9のステップS32へ進み、そうではない場合にはステップS28へ戻る。 (Step S31) parallax information acquiring unit 24, the search lateral position u 2 at the second common feature value image to determine whether the reached to the end of the second common feature amount image. As a result of this determination, if the end of the second common feature amount image has been reached, the process proceeds to step S32 in FIG. 9, and if not, the process returns to step S28.
(ステップS32)視差情報取得部24は、第1の共通特徴量画像の探索画素pに対して算出された一致度のうち、最大の一致度である第2の共通特徴量画像の着目画素p(最大一致度比較座標位置)を求める。 (Step S32) The parallax information acquisition unit 24 selects the pixel of interest of the second common feature quantity image that has the highest degree of coincidence among the degrees of coincidence calculated for the search pixel p1 of the first common feature quantity image. p 2 (maximum coincidence comparison coordinate position) is obtained.
(ステップS33)視差情報取得部24は、第2の共通特徴量画像において、探索開始位置(初期画素位置)から最大一致度比較座標位置までの距離を求める。そして、視差情報取得部24は、該求めた距離を画素サイズkで除算する。視差情報取得部24は、該除算結果である商を、第1の共通特徴量画像の探索画素pに対する視差値とする。 (Step S33) The parallax information acquisition unit 24 obtains the distance from the search start position (initial pixel position) to the maximum coincidence comparison coordinate position in the second common feature amount image. Then, the parallax information acquisition unit 24 divides the obtained distance by the pixel size k x . The disparity information acquisition unit 24 sets the quotient that is the result of the division as a disparity value for the search pixel p1 of the first common feature amount image.
(ステップS34)視差情報取得部24は、ステップS33で求めた視差値を視差画像に保存する。この視差値の保存位置は、第1の共通特徴量画像の探索画素pと同じ位置とする。 (Step S34) The parallax information acquisition unit 24 stores the parallax value obtained in step S33 in a parallax image. The storage position of the parallax value is the same position as the search pixel p1 of the first common feature amount image.
(ステップS35)視差情報取得部24は、第1の共通特徴量画像の探索画素「p=[u v 1]をインクリメントする。 (Step S35) The parallax information acquisition unit 24 increments the search pixel “p 1 = [u 1 v 1 1] of the first common feature amount image.
(ステップS36)視差情報取得部24は、探索画素「p=[u v 1]が第1の共通特徴量画像の範囲内であるかを判断する。この判断の結果、第1の共通特徴量画像の範囲内である場合には、第1の共通特徴量画像の探索を継続するために図8のステップS26へ戻る。一方、第1の共通特徴量画像の範囲外である場合には、第1の共通特徴量画像の探索を終了(図8,図9の処理を終了)する。 (Step S36) The parallax information acquisition unit 24 determines whether the search pixel “p 1 = [u 1 v 1 1] is within the range of the first common feature amount image. If it is within the range of the common feature quantity image, the process returns to step S26 in Fig. 8 to continue searching for the first common feature quantity image, whereas it is outside the range of the first common feature quantity image. In this case, the search for the first common feature amount image is terminated (the processes in FIGS. 8 and 9 are terminated).
 上述した図8,図9の処理によって、第1の共通特徴量画像に対応する視差画像が得られる。この視差画像から被写体の3次元情報(例えば、被写体までの3次元距離情報)を取得できる。 A parallax image corresponding to the first common feature amount image is obtained by the processing of FIGS. From this parallax image, three-dimensional information of the subject (for example, three-dimensional distance information to the subject) can be acquired.
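A condensed sketch of the search of FIGS. 8 and 9 for a single search pixel is shown below. F0, the window half-size win, the use of SAD as the coincidence measure, and the simplified boundary handling are assumptions for illustration; the disparity is returned in pixel units rather than being divided by a metric pixel size.

```python
import numpy as np

def disparity_for_pixel(feature1, feature2, F0, u1, v1, win=3):
    """Search along the epipolar line in feature2 for the best match of the
    window centered at (u1, v1) in feature1 (steps S26 to S33, simplified)."""
    h, w = feature2.shape
    a, b, c = F0 @ np.array([u1, v1, 1.0])          # epipolar constraint I2 (S26)
    # Epipolar line (assumes b != 0); the text states the corresponding
    # expression as (c - a*u2)/b under its own sign convention for a, b, c.
    line_v = lambda u: -(a * u + c) / b
    tmpl = feature1[v1 - win:v1 + win + 1, u1 - win:u1 + win + 1].astype(np.float32)
    start_v = line_v(u1)                            # search start position (S27, S28)
    best = None                                     # (cost, u2, v2)
    for u2 in range(u1, w - win):                   # scan u2 from u2 = u1 (S30, S31)
        v2 = int(round(line_v(u2)))
        if not (win <= v2 < h - win):
            continue
        cand = feature2[v2 - win:v2 + win + 1, u2 - win:u2 + win + 1].astype(np.float32)
        cost = float(np.sum(np.abs(tmpl - cand)))   # SAD as the degree of coincidence (S29)
        if best is None or cost < best[0]:
            best = (cost, u2, v2)
    if best is None:
        return None
    # Distance from the search start position to the best-match position (S32, S33).
    return float(np.hypot(best[1] - u1, best[2] - start_v))
```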
 なお、上述したように第1実施形態では、輝度値の微分値の絶対値を共通特徴量としてステレオマッチングを行う。このため、輝度値のままステレオマッチングを行う場合に比べて、視差画像に含まれる視差値が少ない。そこで、視差情報取得部24が、視差画像の情報を補う処理を行うようにしてもよい。具体的には、視差情報取得部24は、第1の共通特徴量画像が得られた元である第1の補正画像(基準画像)の画素値(可視光カメラによる撮像画像の輝度値もしくは色相など)、又は、第2の共通特徴量画像が得られた元である第2の補正画像(比較画像)の画素値(遠赤外線カメラによる撮像画像の輝度値であり被写体の表面温度を表す)と、第1実施形態で得られた視差画像の視差値、すなわち距離情報を基に比較的等距離にある画素を画素点集合、すなわち輪郭と等価な点列としてクラスタリングにより求めておく。次いで、視差情報取得部24は、該クラスタリングの結果である画素点集合に囲まれた、第1の補正画像の画素値、又は、第2の補正画像の輝度値を1つの物体を表す領域としてクラスタリングする。 As described above, in the first embodiment, stereo matching is performed using the absolute value of the differential value of the luminance value as a common feature amount. For this reason, there are few parallax values contained in a parallax image compared with the case where stereo matching is performed with a luminance value. Therefore, the parallax information acquisition unit 24 may perform a process of supplementing the information of the parallax image. Specifically, the parallax information acquisition unit 24 uses the pixel value (the luminance value or hue of the image captured by the visible light camera) of the first corrected image (reference image) from which the first common feature amount image is obtained. Or the pixel value of the second corrected image (comparison image) from which the second common feature amount image is obtained (the luminance value of the image captured by the far-infrared camera and representing the surface temperature of the subject). Based on the parallax value of the parallax image obtained in the first embodiment, that is, distance information, pixels that are relatively equidistant are obtained by clustering as a pixel point set, that is, a point sequence equivalent to a contour. Next, the parallax information acquisition unit 24 uses the pixel value of the first corrected image or the luminance value of the second corrected image surrounded by the pixel point set as a result of the clustering as a region representing one object. Clustering.
 このクラスタリング後の視差画像は、被写体の輪郭に該当する画素に視差値を有し、さらに同一クラスタとしてクラスタリングされた画素領域に同一物体であることを示す識別情報を有する。これにより、輪郭に該当する画素の視差値からは、被写体までの3次元距離情報が得られる。さらに、同一識別情報を有する画素領域が同一物体であると判断できる。 The parallax image after clustering has a parallax value for pixels corresponding to the contour of the subject, and further has identification information indicating that the same object is present in the pixel areas clustered as the same cluster. Thereby, the three-dimensional distance information to the subject is obtained from the parallax value of the pixel corresponding to the contour. Furthermore, it can be determined that pixel areas having the same identification information are the same object.
[3次元距離情報の取得方法の例]
 ここで、第1実施形態に係る視差画像を使用して3次元距離情報を取得する方法の例を説明する。一般に、視差画像において面状に視差値が得られている場合は、「v-disparity」や「virtual disparity」などの手法を用いて、路面と物体の識別が可能であることが知られている。しかし、第1実施形態に係る視差画像では、被写体の輪郭のみに視差値を有し、面状には視差値が得られていない。このため、第1実施形態に係る視差画像において、視差値を有する画素に対して、該視差値と、該画素につながりのある画素の視差値とを比較し、この比較結果に基づいて被写体間の識別(例えば、路面と物体の識別)を行う。例えば、視差値が保存されている輪郭に該当する画素からは、3次元距離情報が得られる。その3次元距離情報の分布から、ある分布が路面などの平面的なものか又は垂直に立っているものかを判断することによって被写体の識別を行う。そして、識別された被写体の輪郭の3次元距離情報を、当該被写体までの3次元距離情報とする。
[Example of acquisition method of three-dimensional distance information]
Here, an example of a method for acquiring three-dimensional distance information using the parallax image according to the first embodiment will be described. In general, when parallax values are obtained in a planar shape in a parallax image, it is known that a road surface and an object can be identified using methods such as “v-disparity” and “virtual disparity”. . However, the parallax image according to the first embodiment has a parallax value only in the contour of the subject, and the parallax value is not obtained in a planar shape. For this reason, in the parallax image according to the first embodiment, for a pixel having a parallax value, the parallax value is compared with the parallax value of a pixel connected to the pixel, and based on the comparison result, (For example, identification of a road surface and an object). For example, three-dimensional distance information is obtained from the pixel corresponding to the contour in which the parallax value is stored. From the distribution of the three-dimensional distance information, the subject is identified by determining whether a certain distribution is planar such as a road surface or is standing vertically. Then, the three-dimensional distance information of the contour of the identified subject is set as the three-dimensional distance information to the subject.
 上述した第1実施形態によれば、可視光カメラと遠赤外線カメラを備えることによって、各カメラの撮像画像から昼夜ともにステレオマッチング法により、被写体までの距離測定等の3次元情報算出に利用できる視差情報を簡素に取得できる。 According to the first embodiment described above, by providing a visible light camera and a far-infrared camera, parallax information that can be used for calculating three-dimensional information, such as measuring the distance to a subject, can be obtained simply from the captured images of the cameras by the stereo matching method, both day and night.
 なお、上述した第1実施形態では、基準カメラに可視光カメラを使用し、比較カメラに遠赤外線カメラを使用したが、この逆、つまり、基準カメラに遠赤外線カメラを使用し、比較カメラに可視光カメラを使用するように構成してもよい。 In the first embodiment described above, a visible light camera is used as the reference camera and a far-infrared camera is used as the comparison camera; however, the reverse configuration may also be used, that is, a far-infrared camera as the reference camera and a visible light camera as the comparison camera.
[第2実施形態]
 第2実施形態では、共通特徴量の他の例を説明する。第2実施形態において、第1の撮像部11が受光する光の第1の波長域と第2の撮像部12が受光する光の第2の波長域とは、一部分が重複する。第2実施形態では、その重複部分の波長の光の受光量を示す情報を、共通特徴量として使用する。
[Second Embodiment]
In the second embodiment, another example of the common feature amount will be described. In the second embodiment, the first wavelength range of light received by the first imaging unit 11 and the second wavelength range of light received by the second imaging unit 12 are partially overlapped. In the second embodiment, information indicating the received light amount of light having the wavelength of the overlapping portion is used as a common feature amount.
 例えば、第1の撮像部11に係る第1の波長域の光が可視光であり、第2の撮像部12に係る第2の波長域の光が近赤外線および近赤外線に隣接する可視光の赤色の端(長波長側の端)の部分である。この場合、第1の波長域と第2の波長域とは、第2の波長域が有する可視光の赤色の端の部分が重複する。このことから、第1の撮像部11の撮像画像における赤色の画素と、第2の撮像部12の撮像画像の画素との相関をとる。この相関の高い部分の画素を共通特徴量として、ステレオマッチングに使用する。 For example, the light in the first wavelength range for the first imaging unit 11 is visible light, and the light in the second wavelength range for the second imaging unit 12 is near-infrared light plus the red end of visible light adjacent to the near-infrared (the long-wavelength end). In this case, the first wavelength range and the second wavelength range overlap in the red end portion of visible light included in the second wavelength range. Accordingly, the correlation between the red pixels in the captured image of the first imaging unit 11 and the pixels of the captured image of the second imaging unit 12 is computed. The pixels where this correlation is high are used as the common feature amount for stereo matching.
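A minimal sketch of this idea is given below, under the assumptions that the two images have been corrected to the same geometry, that the epipolar lines are horizontal, and that the overlapping band is exploited by matching the R channel of the visible image against the near-infrared image with a normalized correlation measure; all names, the window size, and the search direction are illustrative only.

```python
import numpy as np

def zncc(a, b):
    # Zero-mean normalized cross-correlation between two equally sized windows.
    a = a.astype(np.float32) - a.mean()
    b = b.astype(np.float32) - b.mean()
    return float((a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12))

def best_match_on_scanline(visible_rgb, nir_img, u1, v1, win=3, max_disp=64):
    """Find the disparity at (u1, v1) by correlating the red channel of the
    visible image with the near-infrared image along the same scan line."""
    red = visible_rgb[..., 0]
    tmpl = red[v1 - win:v1 + win + 1, u1 - win:u1 + win + 1]
    scores = []
    for d in range(max_disp):
        u2 = u1 - d                       # search direction is an assumption
        if u2 - win < 0:
            break
        cand = nir_img[v1 - win:v1 + win + 1, u2 - win:u2 + win + 1]
        scores.append((zncc(tmpl, cand), d))
    return max(scores)[1] if scores else None
```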
 なお、この例の第2の撮像部12は、近赤外線と可視光の一部分を受光するものであるので、可視光カメラ(第1の撮像部11)の隣に配置できる。これにより、両方のカメラの画像平面は同一にすることができるので、従来のステレオマッチング法により視差画像を得ることが可能である。 Since the second imaging unit 12 in this example receives near-infrared light and part of visible light, it can be arranged next to the visible light camera (first imaging unit 11). As a result, the image planes of both cameras can be made the same, so a parallax image can be obtained by a conventional stereo matching method.
 上述した第2実施形態によれば、ステレオマッチング法により視差情報を簡素に取得できるという効果が得られる。 According to the second embodiment described above, an effect that the parallax information can be simply obtained by the stereo matching method is obtained.
 以上、本発明の実施形態について図面を参照して詳述してきたが、具体的な構成はこの実施形態に限られるものではなく、本発明の要旨を逸脱しない範囲の設計変更等も含まれる。 As described above, the embodiment of the present invention has been described in detail with reference to the drawings. However, the specific configuration is not limited to this embodiment, and includes design changes and the like within a scope not departing from the gist of the present invention.
 例えば、図2に示される運転支援ユニット41は、画像処理装置20で取得された視差情報を使用してステレオカメラ認識処理を行い、このステレオカメラ認識処理の結果に基づいて、CAN42を介して、各制御ユニット43a~43fへ制御信号を送信するようにしてもよい。 For example, the driving support unit 41 shown in FIG. 2 performs a stereo camera recognition process using the parallax information acquired by the image processing device 20, and based on the result of the stereo camera recognition process, via the CAN 42, A control signal may be transmitted to each of the control units 43a to 43f.
 例えば、運転支援ユニット41は、画像処理装置20で取得された視差情報から被写体の三次元情報を取得する。次いで、運転支援ユニット41は、該取得した三次元情報から、路面などの走行可能範囲と、ガードレールなどの路上障害物と、先行車または対向車などの路面とは異なる障害物と、を認識する。次いで、運転支援ユニット41は、それら認識した物体のそれぞれとの相対距離および相対速度を求める。次いで、運転支援ユニット41は、該求めた相対距離および相対速度に基づいて、加速、追従、減速、停止または回避などの安全な走行として設定された走行を支援する。 For example, the driving support unit 41 acquires the three-dimensional information of the subject from the parallax information acquired by the image processing device 20. Next, the driving support unit 41 recognizes a travelable range such as a road surface, a road obstacle such as a guard rail, and an obstacle different from the road surface such as a preceding vehicle or an oncoming vehicle from the acquired three-dimensional information. . Next, the driving support unit 41 obtains a relative distance and a relative speed with each of the recognized objects. Next, the driving support unit 41 supports traveling set as safe traveling such as acceleration, following, deceleration, stop, or avoidance based on the obtained relative distance and relative speed.
 さらに、運転支援ユニット41は、道路上の車線に沿って走る機能(Lane Keep)と、道路の状況(渋滞状況、先行車の位置、割り込み車両の有無など)に合わせて、車両30の加速、減速を制御する機能(Auto Cruise Control)と、快適な走行として設定された走行を支援する機能とを有するようにしてもよい。 Furthermore, the driving support unit 41 may have a function of driving along the lane on the road (Lane Keep), a function of controlling the acceleration and deceleration of the vehicle 30 in accordance with road conditions such as congestion, the position of a preceding vehicle, and the presence of a vehicle cutting in (Auto Cruise Control), and a function of supporting driving set as comfortable driving.
 さらに、運転支援ユニット41は、ナビゲーション装置とデータリンクする機能と、ナビゲーション装置に設定されたコース情報および運転支援ユニット41で認識した車線情報から路面上の走行コースを算出する機能と、該算出した走行コースを走行するように自動運転を支援する機能とを有するようにしてもよい。 Furthermore, the driving support unit 41 may have a function of data-linking with a navigation device, a function of calculating a traveling course on the road surface from the course information set in the navigation device and the lane information recognized by the driving support unit 41, and a function of supporting automated driving so that the vehicle travels along the calculated traveling course.
 なお、図11に示されるように、各々異なる視野FOV110,FOV120の可視光カメラ110,120を車両30の車室内フロントガラス上部に配置した場合について説明する。可視光カメラ110の視野FOV110は、可視光カメラ120の視野FOV120よりも広い。この場合、画像補正部21による画像補正において、広角の方の可視光カメラ110による撮像画像の画角が狭角の方の可視光カメラ120による撮像画像の画角と同じになるように、歪み補正で使用されるLUTを作成する。可視光カメラ110,120は同じ車室内フロントガラス上部に並べて配置されているので、両方のカメラの画像平面は同一にすることができる。これにより、従来のステレオマッチング法により視差画像を得ることができ、該視差画像から被写体の三次元情報を取得し、路面と物体の識別を行うことが可能である。 As shown in FIG. 11, a case will be described in which visible light cameras 110 and 120 having different fields of view FOV110 and FOV120 are arranged on the upper part of the windshield inside the cabin of the vehicle 30. The field of view FOV110 of the visible light camera 110 is wider than the field of view FOV120 of the visible light camera 120. In this case, in the image correction by the image correction unit 21, the LUT used for distortion correction is created so that the angle of view of the image captured by the wide-angle visible light camera 110 becomes the same as the angle of view of the image captured by the narrow-angle visible light camera 120. Since the visible light cameras 110 and 120 are arranged side by side on the same upper part of the windshield inside the cabin, the image planes of both cameras can be made the same. As a result, a parallax image can be obtained by a conventional stereo matching method, and the three-dimensional information of the subject can be acquired from the parallax image to distinguish the road surface from objects.
 また、上述した画像処理装置20の機能を実現するためのコンピュータプログラムをコンピュータ読み取り可能な記録媒体に記録して、この記録媒体に記録されたプログラムをコンピュータシステムに読み込ませ、実行するようにしてもよい。なお、ここでいう「コンピュータシステム」とは、OSや周辺機器等のハードウェアを含むものであってもよい。 A computer program for realizing the functions of the image processing device 20 described above may be recorded on a computer-readable recording medium, and the program recorded on the recording medium may be read into a computer system and executed. The "computer system" here may include an OS and hardware such as peripheral devices.
 また、「コンピュータ読み取り可能な記録媒体」とは、フレキシブルディスク、光磁気ディスク、ROM、フラッシュメモリ等の書き込み可能な不揮発性メモリ、DVD(Digital Versatile Disk)等の可搬媒体、コンピュータシステムに内蔵されるハードディスク等の記憶装置のことをいう。 The "computer-readable recording medium" refers to a flexible disk, a magneto-optical disk, a ROM, a writable non-volatile memory such as a flash memory, a portable medium such as a DVD (Digital Versatile Disk), or a storage device such as a hard disk built into a computer system.
 さらに「コンピュータ読み取り可能な記録媒体」とは、インターネット等のネットワークや電話回線等の通信回線を介してプログラムが送信された場合のサーバやクライアントとなるコンピュータシステム内部の揮発性メモリ(例えばDRAM(Dynamic Random Access Memory))のように、一定時間プログラムを保持しているものも含むものとする。 Furthermore, the "computer-readable recording medium" also includes media that hold a program for a certain period of time, such as a volatile memory (for example, DRAM (Dynamic Random Access Memory)) inside a computer system that serves as a server or a client when the program is transmitted via a network such as the Internet or a communication line such as a telephone line.
 また、上記プログラムは、このプログラムを記憶装置等に格納したコンピュータシステムから、伝送媒体を介して、あるいは、伝送媒体中の伝送波により他のコンピュータシステムに伝送されてもよい。ここで、プログラムを伝送する「伝送媒体」は、インターネット等のネットワーク(通信網)や電話回線等の通信回線(通信線)のように情報を伝送する機能を有する媒体のことをいう。 The program may be transmitted from a computer system storing the program in a storage device or the like to another computer system via a transmission medium or by a transmission wave in the transmission medium. Here, the “transmission medium” for transmitting the program refers to a medium having a function of transmitting information, such as a network (communication network) such as the Internet or a communication line (communication line) such as a telephone line.
 また、上記プログラムは、前述した機能の一部を実現するためのものであっても良い。さらに、前述した機能をコンピュータシステムにすでに記録されているプログラムとの組み合わせで実現できるもの、いわゆる差分ファイル(差分プログラム)であっても良い。 The program may also be for realizing only a part of the functions described above. Furthermore, it may be a so-called difference file (difference program) that realizes the functions described above in combination with a program already recorded in the computer system.
1…撮像装置、11…第1の撮像部,12…第2の撮像部、20…画像処理装置、21…画像補正部、22…パラメータ取得部、23…共通特徴量取得部、24…視差情報取得部、30…車両、31…フロントガラス、32…フロントバンパー、41…運転支援ユニット、42…CAN、43a~43f…制御ユニット
 
DESCRIPTION OF SYMBOLS 1 ... Imaging device, 11 ... 1st imaging part, 12 ... 2nd imaging part, 20 ... Image processing apparatus, 21 ... Image correction part, 22 ... Parameter acquisition part, 23 ... Common feature-value acquisition part, 24 ... Parallax Information acquisition unit, 30 ... vehicle, 31 ... windshield, 32 ... front bumper, 41 ... driving support unit, 42 ... CAN, 43a to 43f ... control unit

Claims (11)

  1. 第1の撮像部と第2の撮像部と画像処理装置とを有する撮像装置において、
     前記第1の撮像部により、第1の波長域の光を受光して、第1の撮像画像を取得するステップと、
     前記第2の撮像部により、前記第1の波長域とは異なる第2の波長域の光を受光して、第2の撮像画像を取得するステップと、
     前記第1の撮像画像と前記第2の撮像画像との違いを補正することにより、各々について第1の補正画像と第2の補正画像とを取得する画像補正ステップと、
     前記第1の補正画像と前記第2の補正画像との共通特徴量に基づき、各々について第1の共通特徴量と第2の共通特徴量とを取得する共通特徴量取得ステップと、
     前記第1の共通特徴量と前記第2の共通特徴量とを使用して、ステレオマッチングにより視差情報を取得する視差情報取得ステップと、
     を有する撮像方法。
     
    An imaging method for an imaging device having a first imaging unit, a second imaging unit, and an image processing device, the method comprising:
    acquiring a first captured image by receiving light in a first wavelength range with the first imaging unit;
    acquiring a second captured image by receiving light in a second wavelength range different from the first wavelength range with the second imaging unit;
    an image correction step of acquiring a first corrected image and a second corrected image, respectively, by correcting a difference between the first captured image and the second captured image;
    a common feature amount acquisition step of acquiring a first common feature amount and a second common feature amount, respectively, based on a common feature amount of the first corrected image and the second corrected image; and
    a parallax information acquisition step of acquiring parallax information by stereo matching using the first common feature amount and the second common feature amount.
  2.  前記第1の共通特徴量および前記第2の共通特徴量は、輝度値の微分値の絶対値である請求項1に記載の撮像方法。
     
    The imaging method according to claim 1, wherein the first common feature amount and the second common feature amount are absolute values of a differential value of a luminance value.
  3.  前記第1の共通特徴量または前記第2の共通特徴量は、R、G、Bの全て、あるいは少なくともいずれかの輝度値の微分値の絶対値に基づき取得された請求項1に記載の撮像方法。
     
    The imaging method according to claim 1, wherein the first common feature amount or the second common feature amount is acquired based on the absolute value of the differential value of the luminance values of all of R, G, and B, or of at least one of them.
  4.  前記第1の波長域と前記第2の波長域とは一部分が重複し、該重複部分の波長の光の受光量を示す情報を前記第1の共通特徴量および前記第2の共通特徴量として使用する請求項1に記載の撮像方法。
     
    The imaging method according to claim 1, wherein the first wavelength range and the second wavelength range partially overlap, and information indicating the amount of received light at wavelengths in the overlapping portion is used as the first common feature amount and the second common feature amount.
  5.  前記第1の撮像部の撮像面と前記第2の撮像部の撮像面とは異なる平面上にあり、
     前記視差情報取得ステップは、
     前記第1の共通特徴量から成る第1の共通特徴量画像の探索画素p1を決定するサブステップと、
     エピポーラ制約に従い、前記探索画素p1に対応する、前記第2の共通特徴量から成る第2の共通特徴量画像上での探索画素p2を求めるサブステップと、
     前記探索画素p1およびp2を中心として、第1の共通特徴量画像および第2の共通特徴量画像の一致度を計算し、当該一致度に基づき第2の共通特徴量画像の探索画素p2の位置を求めるサブステップと、
     を少なくとも有し、前記ステレオマッチングにより視差情報を取得する、請求項1から4のいずれか1項に記載の撮像方法。
     
    The imaging method according to any one of claims 1 to 4, wherein an imaging surface of the first imaging unit and an imaging surface of the second imaging unit lie on different planes, and
    the parallax information acquisition step includes at least:
    a sub-step of determining a search pixel p1 in a first common feature amount image composed of the first common feature amounts;
    a sub-step of obtaining, in accordance with an epipolar constraint, a search pixel p2 on a second common feature amount image composed of the second common feature amounts, the search pixel p2 corresponding to the search pixel p1; and
    a sub-step of calculating a degree of coincidence between the first common feature amount image and the second common feature amount image around the search pixels p1 and p2, and determining the position of the search pixel p2 in the second common feature amount image based on the degree of coincidence,
    the parallax information being acquired by the stereo matching.
  6.  前記視差情報取得ステップは、前記第1の補正画像の画素値または前記第2の補正画像の画素値と、前記ステレオマッチングにより取得された視差画像の視差値とをクラスタリングし、該クラスタリングの結果である画素点集合に囲まれた、前記第1の補正画像の画素値または前記第2の補正画像の画素値を1つの物体を表す領域としてクラスタリングする、
     請求項1から5のいずれか1項に記載の撮像方法。
     
    The imaging method according to any one of claims 1 to 5, wherein the parallax information acquisition step clusters the pixel values of the first corrected image or the pixel values of the second corrected image together with the parallax values of the parallax image acquired by the stereo matching, and clusters the pixel values of the first corrected image or of the second corrected image surrounded by the resulting set of pixel points as a region representing one object.
  7.  第1の波長域の光を受光して、第1の撮像画像を取得する第1の撮像部と、
     前記第1の波長域とは異なる第2の波長域の光を受光して、第2の撮像画像を取得する第2の撮像部と、
     前記第1の撮像画像と、前記第2の撮像画像との違いを補正することにより、前記第1の撮像画像から得られた第1の補正画像と前記第2の撮像画像から得られた第2の補正画像とを取得する画像補正部と、
     前記第1の補正画像と前記第2の補正画像の各々について第1の共通特徴量と第2の共通特徴量とを取得する共通特徴量取得部と、
     前記第1の共通特徴量と前記第2の共通特徴量とを使用して、ステレオマッチングにより視差情報を取得する視差情報取得部と、
     を備えた撮像装置。
     
    An imaging apparatus comprising:
    a first imaging unit that receives light in a first wavelength range and acquires a first captured image;
    a second imaging unit that receives light in a second wavelength range different from the first wavelength range and acquires a second captured image;
    an image correction unit that acquires a first corrected image obtained from the first captured image and a second corrected image obtained from the second captured image by correcting a difference between the first captured image and the second captured image;
    a common feature amount acquisition unit that acquires a first common feature amount and a second common feature amount for each of the first corrected image and the second corrected image; and
    a parallax information acquisition unit that acquires parallax information by stereo matching using the first common feature amount and the second common feature amount.
  8.  第1の波長域の光を受光して撮像された第1の撮像画像と、前記第1の波長域とは異なる第2の波長域の光を受光して撮像された第2の撮像画像とを保持し、各々の撮像画像の違いを補正することにより、前記第1の撮像画像から得られた第1の補正画像と前記第2の撮像画像から得られた第2の補正画像とを取得する画像補正部と、
     前記第1の補正画像と前記第2の補正画像との共通特徴量に基づき、各々について第1の共通特徴量と第2の共通特徴量とを取得する共通特徴量取得部と、
     前記第1の共通特徴量と前記第2の共通特徴量とを使用して、ステレオマッチングにより視差情報を取得する視差情報取得部と、
     を備えた画像処理装置。
     
    An image processing apparatus comprising:
    an image correction unit that holds a first captured image captured by receiving light in a first wavelength range and a second captured image captured by receiving light in a second wavelength range different from the first wavelength range, and acquires a first corrected image obtained from the first captured image and a second corrected image obtained from the second captured image by correcting a difference between the captured images;
    a common feature amount acquisition unit that acquires a first common feature amount and a second common feature amount, respectively, based on a common feature amount of the first corrected image and the second corrected image; and
    a parallax information acquisition unit that acquires parallax information by stereo matching using the first common feature amount and the second common feature amount.
  9.  不揮発性の記憶媒体に記録されコンピュータに実行させるプログラムであって、
     第1の波長域の光を受光して、第1の撮像画像を取得するステップと、
     前記第1の波長域とは異なる第2の波長域の光を受光して、第2の撮像画像を取得するステップと、
     前記第1の撮像画像と前記第2の撮像画像との違いを補正することにより、各々について第1の補正画像と第2の補正画像とを取得する画像補正ステップと、
     前記第1の補正画像と前記第2の補正画像との共通特徴量に基づき、各々について第1の共通特徴量と第2の共通特徴量とを取得する共通特徴量取得ステップと、
     前記第1の共通特徴量と前記第2の共通特徴量とを使用して、ステレオマッチングにより視差情報を取得する視差情報取得ステップと、
     を有するコンピュータで読み出し可能な記憶媒体に記録されるプログラム。
     
    A program recorded on a non-volatile storage medium and causing a computer to execute:
    a step of acquiring a first captured image by receiving light in a first wavelength range;
    a step of acquiring a second captured image by receiving light in a second wavelength range different from the first wavelength range;
    an image correction step of acquiring a first corrected image and a second corrected image, respectively, by correcting a difference between the first captured image and the second captured image;
    a common feature amount acquisition step of acquiring a first common feature amount and a second common feature amount, respectively, based on a common feature amount of the first corrected image and the second corrected image; and
    a parallax information acquisition step of acquiring parallax information by stereo matching using the first common feature amount and the second common feature amount,
    the program being recorded on a computer-readable storage medium.
  10.  不揮発性の記憶媒体に記録されコンピュータに実行させるプログラムであって、
     前記第1の共通特徴量および前記第2の共通特徴量は、輝度値の微分値の絶対値である、請求項9に記載のコンピュータで読み出し可能な記憶媒体に記録されるプログラム。
     
    A program recorded in a non-volatile storage medium and executed by a computer,
    The program recorded in the computer-readable storage medium according to claim 9, wherein the first common feature amount and the second common feature amount are absolute values of differential values of luminance values.
  11.  不揮発性の記憶媒体に記録されコンピュータに実行させるプログラムであって、
     前記第1の共通特徴量または前記第2の共通特徴量は、R、G、Bの全て、あるいは少なくともいずれかの輝度値の微分値の絶対値に基づき取得された、
    請求項9に記載のコンピュータで読み出し可能な記憶媒体に記録されるプログラム。
    A program recorded in a non-volatile storage medium and executed by a computer,
    The first common feature amount or the second common feature amount is acquired based on the absolute value of the differential value of the luminance values of all of R, G, and B, or of at least one of them,
    The program recorded on the computer-readable storage medium of Claim 9.
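
The method claims above describe the processing only in prose. Purely as a hedged illustration, and not as the patented implementation, the following Python/NumPy sketch shows one way the common feature amount of claims 1 to 3 (the absolute value of the differential of the luminance value) and the epipolar search of claim 5 could be realized. It assumes the two corrected images have already been rectified so that the epipolar line of claim 5 reduces to a horizontal scanline, and the window size, disparity range, and SAD score are arbitrary choices, not values taken from the description; the variable names are placeholders.

```python
import cv2
import numpy as np

def common_feature(gray):
    """Claims 2-3: absolute value of the differential of the luminance value
    (a horizontal Sobel derivative is used here as one possible differential)."""
    dx = cv2.Sobel(gray.astype(np.float32), cv2.CV_32F, 1, 0, ksize=3)
    return np.abs(dx)

def disparity_by_epipolar_search(feat1, feat2, max_disp=64, win=7):
    """Claim 5 sketch: for each search pixel p1 of the first common feature
    amount image, evaluate candidate pixels p2 along the (horizontal) epipolar
    line of the second image and keep the offset with the best degree of
    coincidence (sum of absolute differences, lower is better)."""
    h, w = feat1.shape
    r = win // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            block1 = feat1[y - r:y + r + 1, x - r:x + r + 1]
            best_d, best_cost = 0, np.inf
            for d in range(max_disp + 1):
                block2 = feat2[y - r:y + r + 1, x - d - r:x - d + r + 1]
                cost = float(np.sum(np.abs(block1 - block2)))
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Hypothetical usage with a visible-light / near-infrared pair after the
# image correction step (variable names are assumptions):
# g1 = cv2.cvtColor(corrected_visible, cv2.COLOR_BGR2GRAY)
# g2 = corrected_nir                      # assumed single-channel already
# disparity = disparity_by_epipolar_search(common_feature(g1), common_feature(g2))
```

Claim 6 does not specify a clustering scheme; as a stand-in only, k-means over joint (brightness, disparity) samples is sketched below, with the number of clusters chosen arbitrarily. Connected regions sharing one label would then play the role of the pixel-point sets enclosing one object.

```python
# corrected: one of the corrected images (float32 grayscale); disp: the
# disparity image from the sketch above. Both are assumptions of this example.
samples = np.stack([corrected.ravel(), disp.ravel()], axis=1).astype(np.float32)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
_, labels, _ = cv2.kmeans(samples, 8, None, criteria, 3, cv2.KMEANS_RANDOM_CENTERS)
label_map = labels.reshape(disp.shape)   # candidate object regions per claim 6
```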
PCT/JP2015/065660 2014-05-30 2015-05-29 Image capturing device, image processing device, image processing method, and computer program WO2015182771A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2014-113173 2014-05-30
JP2014113173 2014-05-30

Publications (1)

Publication Number Publication Date
WO2015182771A1 true WO2015182771A1 (en) 2015-12-03

Family

ID=54699088

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/065660 WO2015182771A1 (en) 2014-05-30 2015-05-29 Image capturing device, image processing device, image processing method, and computer program

Country Status (1)

Country Link
WO (1) WO2015182771A1 (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07225127A (en) * 1994-02-14 1995-08-22 Mitsubishi Motors Corp On-road object recognizing device for vehicle
JPH11153406A (en) * 1997-11-20 1999-06-08 Nissan Motor Co Ltd Obstacle detector for vehicle
WO2007129563A1 (en) * 2006-05-09 2007-11-15 Panasonic Corporation Range finder with image selecting function for finding range
WO2012073722A1 (en) * 2010-12-01 2012-06-07 コニカミノルタホールディングス株式会社 Image synthesis device
JP2013257244A (en) * 2012-06-13 2013-12-26 Sharp Corp Distance measurement device, distance measurement method, and distance measurement program
WO2014054752A1 (en) * 2012-10-04 2014-04-10 アルプス電気株式会社 Image processing device and device for monitoring area in front of vehicle

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3389009A4 (en) * 2015-12-10 2018-12-19 Ricoh Company, Ltd. Image processing device, object recognition device, apparatus control system, image processing method and program
US10546383B2 (en) 2015-12-10 2020-01-28 Ricoh Company, Ltd. Image processing device, object recognizing device, device control system, image processing method, and computer-readable medium
EP3343511A1 (en) * 2016-12-27 2018-07-04 Kabushiki Kaisha Toshiba Image processing apparatus and image processing method
US10726528B2 (en) 2016-12-27 2020-07-28 Kabushiki Kaisha Toshiba Image processing apparatus and image processing method for image picked up by two cameras
CN114108994A (en) * 2022-01-25 2022-03-01 深圳市门罗智能有限公司 Glass curtain wall installation helping hand system

Similar Documents

Publication Publication Date Title
US9749614B2 (en) Image capturing system obtaining scene depth information and focusing method thereof
JP5455124B2 (en) Camera posture parameter estimation device
JP6565188B2 (en) Parallax value deriving apparatus, device control system, moving body, robot, parallax value deriving method, and program
CN113196007B (en) Camera system applied to vehicle
CN107122770B (en) Multi-camera system, intelligent driving system, automobile, method and storage medium
CN107533753A (en) Image processing apparatus
JP6970577B2 (en) Peripheral monitoring device and peripheral monitoring method
TW201403553A (en) Method of automatically correcting bird's eye images
US20130329019A1 (en) Image processing apparatus that estimates distance information, method of controlling the same, and storage medium
JP6337504B2 (en) Image processing apparatus, moving body, robot, device control method and program
JP6375633B2 (en) Vehicle periphery image display device and vehicle periphery image display method
CN105513074B (en) A kind of scaling method of shuttlecock robot camera and vehicle body to world coordinate system
JP6455164B2 (en) Parallax value deriving apparatus, device control system, moving body, robot, parallax value deriving method, and program
WO2015182771A1 (en) Image capturing device, image processing device, image processing method, and computer program
JP6543935B2 (en) PARALLEL VALUE DERIVING DEVICE, DEVICE CONTROL SYSTEM, MOBILE OBJECT, ROBOT, PARALLEL VALUE DERIVING METHOD, AND PROGRAM
WO2014054752A1 (en) Image processing device and device for monitoring area in front of vehicle
CN106846385B (en) Multi-sensing remote sensing image matching method, device and system based on unmanned aerial vehicle
KR101697229B1 (en) Automatic calibration apparatus based on lane information for the vehicle image registration and the method thereof
JP4696925B2 (en) Image processing device
JP7303064B2 (en) Image processing device and image processing method
KR101714896B1 (en) Robust Stereo Matching Method and Apparatus Under Radiometric Change for Advanced Driver Assistance System
WO2019198399A1 (en) Image processing device and method
EP3051494B1 (en) Method for determining an image depth value depending on an image region, camera system and motor vehicle
KR101293263B1 (en) Image processing apparatus providing distacnce information in a composite image obtained from a plurality of image and method using the same
CN114762019A (en) Camera system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15799032

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15799032

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP