WO2019021591A1 - Image processing device, image processing method, program, and image processing system


Info

Publication number
WO2019021591A1
Authority
WO
WIPO (PCT)
Prior art keywords
gradient
polarization
depth
unit
pixel
Application number
PCT/JP2018/019088
Other languages
English (en)
Japanese (ja)
Inventor
穎 陸
康孝 平澤
雄飛 近藤
Original Assignee
Sony Corporation (ソニー株式会社)
Application filed by Sony Corporation
Publication of WO2019021591A1

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01B - MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 - Measuring arrangements characterised by the use of optical techniques
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C3/00 - Measuring distances in line of sight; Optical rangefinders
    • G01C3/02 - Details
    • G01C3/06 - Use of electric means to obtain final indication
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/50 - Depth or shape recovery
    • G06T7/55 - Depth or shape recovery from multiple images
    • G06T7/593 - Depth or shape recovery from multiple images from stereo images

Definitions

  • This technology relates to an image processing apparatus, an image processing method, a program, and an image processing system, and enables depth calculation for an area without texture.
  • An epipolar plane image is generated from images obtained by a plurality of imaging devices to calculate depth, which is distance information to a subject.
  • Calculation of depth is performed based on a gradient calculated from the epipolar plane image using texture information in the images obtained by the plurality of imaging devices.
  • However, the method that relies on texture information cannot detect the gradient from the epipolar plane image when there is no texture. Therefore, the depth cannot be calculated for an area without texture.
  • It is accordingly an object of the present technology to provide an image processing apparatus, an image processing method, a program, and an image processing system capable of calculating the depth of an area without texture.
  • A first aspect of this technology is an image processing apparatus including a depth calculation unit that performs depth calculation of a depth calculation target pixel based on a polarization epipolar plane image generated from a plurality of polarization imaging images having different polarization directions and viewpoint positions.
  • Classification as to whether the depth calculation target pixel is a pixel having texture information or a pixel having no texture information is performed on the basis of an epipolar plane image generated from the plurality of polarization imaging images.
  • For pixels having texture information, the gradient is calculated based on the epipolar plane image; for pixels without texture information, the gradient is calculated based on the polarization epipolar plane image. The calculated gradient is converted into depth, whereby the depth of the depth calculation target pixel is obtained.
  • A gradient whose calculated reliability is equal to or higher than a predetermined reliability threshold may be converted into the depth.
  • For the gradient based on polarization information, a trigonometric function is fitted using pixels on a straight line passing through the position of the depth calculation target pixel in the polarization epipolar plane image, and the slope of the straight line that minimizes the difference between the waveforms of the trigonometric functions obtained with different combinations of pixels used for the fitting is taken as the gradient of the depth calculation target pixel.
  • The polarization directions of the plurality of polarization imaging images are set to four or more directions within a range of angle differences of less than 180 degrees.
  • The normal to the depth calculation target pixel is calculated using the pixels located in the direction of the gradient calculated using the polarization epipolar plane image, and interpolation processing is performed using the depth and the normal of the depth calculation target pixel to obtain depth at a resolution higher than pixel units.
  • For a depth calculation target pixel for which the gradient cannot be calculated, the depth is calculated by interpolation processing using the depths and normals of pixels close to the depth calculation target pixel. Further, the normal to such a pixel is calculated using pixels located in the direction of an arbitrary gradient from the pixel.
  • Either the gradient of the depth calculation target pixel calculated based on a first polarization epipolar plane image or a first epipolar plane image generated from a plurality of polarization imaging images having different viewpoint positions in a first direction, or the gradient of the depth calculation target pixel calculated based on a second polarization epipolar plane image or a second epipolar plane image generated from a plurality of polarization imaging images having different viewpoint positions in a second direction different from the first direction, may be selected, and the selected gradient converted into depth.
  • If one of the gradients is calculated based on texture information and the other based on polarization information, the gradient calculated based on the texture information is selected.
  • If both gradients are calculated based on texture information, or both based on polarization information, the more reliable gradient is selected.
  • A second aspect of this technology is an image processing method including performing depth calculation of a depth calculation target pixel based on a polarization epipolar plane image generated from a plurality of polarization imaging images having different polarization directions and viewpoint positions.
  • A third aspect of this technology is a program that causes a computer to execute image processing using polarization images, the program causing the computer to execute a step of generating a polarization epipolar plane image from a plurality of polarization imaging images having different polarization directions and viewpoint positions, and a step of calculating the depth of a depth calculation target pixel based on the generated polarization epipolar plane image.
  • The program of the present technology is, for example, a program that can be provided in a computer-readable format to a general-purpose computer capable of executing various program codes, by a storage medium such as an optical disc, a magnetic disc, or a semiconductor memory, or by a communication medium such as a network. By providing the program in a computer-readable form, processing according to the program is realized on the computer.
  • A fourth aspect of this technology is an image processing system having an imaging device that acquires a plurality of polarization imaging images with different polarization directions and viewpoint positions, and an image processing device that performs depth calculation of a depth calculation target pixel from the plurality of polarization imaging images acquired by the imaging device.
  • The image processing apparatus includes a polarization epipolar plane image generation unit that generates a polarization epipolar plane image including the depth calculation target pixel from the plurality of polarization imaging images, and a depth calculation unit that calculates the depth of the depth calculation target pixel based on the polarization epipolar plane image generated by the polarization epipolar plane image generation unit.
  • According to this technology, the depth of the depth calculation target pixel is calculated based on a polarization epipolar plane image generated from a plurality of polarization imaging images having different polarization directions and viewpoint positions. It therefore becomes possible to calculate depth based on polarization information for an area without texture.
  • The effects described in the present specification are merely examples and are not limiting; additional effects may be present.
  • FIG. 1 illustrates the configuration of an imaging system using the image processing device of the present technology.
  • the imaging system 10 is configured using an imaging device that acquires a plurality of polarization imaging images different in polarization direction and viewpoint position, and an image processing device that performs image processing using the acquired polarization imaging image.
  • polarizing elements such as a polarizing plate or a polarizing filter, are provided in front of an imaging lens or an imaging element to obtain a plurality of polarized images.
  • a plurality of imaging units are arranged linearly in parallel, and the polarization directions of the polarization elements are set in different directions in each imaging unit.
  • the polarization direction of the polarizing element is set to an angle of 0 degrees or more and less than 180 degrees.
  • six imaging units 20-1 to 20-6 are linearly arranged in parallel.
  • The polarization elements 21-1 to 21-6 of the imaging units 20-1 to 20-6 are set so that the angle difference in polarization direction between adjacent imaging units is, for example, 30 degrees, with the polarization direction angle increasing from one unit to the next.
  • The six imaging units 20-1 to 20-6 perform imaging with, for example, a prismatic (rectangular columnar) object OBa and a cylindrical object OBb, located farther away than the object OBa, included in the imaging range.
  • Polarization imaging images that differ from one another in viewpoint position and polarization direction are acquired and output to the image processing device 30.
  • As described later, the imaging apparatus acquires four or more polarization imaging images differing in viewpoint position and polarization direction.
  • The image processing apparatus 30 generates a polarization epipolar plane image and an epipolar plane image from the plurality of polarization imaging images with different polarization directions and viewpoint positions, and calculates the depth, which is the distance information to the subject, using the generated images.
  • FIG. 2 illustrates the configuration of the image processing apparatus.
  • the image processing apparatus 30 includes a preprocessing unit 31, a parameter holding unit 32, a polarization epipolar plane image generation unit 33, and a depth calculation unit 34.
  • The preprocessing unit 31 performs preprocessing on the polarization imaging images acquired by the imaging device, using the parameters stored in the parameter holding unit 32.
  • The parameter holding unit 32 stores in advance internal parameters and external parameters obtained by performing calibration using a predetermined subject such as a checkerboard.
  • The internal parameters are parameters unique to each imaging unit, such as the focal length of the imaging unit and the lens distortion coefficients.
  • The external parameters specify the arrangement of the imaging units, indicating translation and rotation.
  • The preprocessing unit 31 performs distortion correction and the like on each polarization imaging image using the internal parameters of the imaging unit that acquired it.
  • The preprocessing unit 31 then performs registration of the polarization imaging images using the images processed with the internal parameters and the external parameters. As a result, between the preprocessed polarization imaging images, pixels indicating a desired position on the subject have no vertical positional displacement, and have a lateral positional displacement corresponding to the depth, which is the distance information to the subject.
  • the preprocessing unit 31 outputs the polarization imaging image after the preprocessing to the polarization epipolar plane image generating unit 33.
  • From the plurality of polarization imaging images with different polarization directions and viewpoint positions, the polarization epipolar plane image generation unit 33 extracts line images in the arrangement direction of the viewpoint positions (the horizontal direction in the configuration shown in FIG. 1) so as to include an image indicating a desired position on the subject.
  • The epipolar plane image is generated by arranging the extracted line images, in order of viewpoint position, in the vertical direction orthogonal to the arrangement direction, at intervals according to the intervals of the viewpoint positions (the baseline between the imaging units).
  • The polarization epipolar plane image generation unit 33 likewise generates the polarization epipolar plane image by arranging the extracted line images in order of viewpoint position, in the vertical direction orthogonal to the arrangement direction, at intervals according to the intervals of the viewpoint positions.
  • Each pixel of the polarization epipolar plane image carries polarization direction information indicating the polarization direction of the polarization imaging image from which it was extracted.
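As a concrete illustration of this construction, the following Python sketch stacks one scan line from each registered capture into an EPI, keeping the per-pixel polarization directions alongside as the polarization EPI. Function and variable names are illustrative, and an equal baseline between adjacent viewpoints is assumed, as in the FIG. 1 configuration.

```python
import numpy as np

def build_epis(images, polarization_angles, row):
    """Stack one scan line from each registered capture into an EPI.

    `images`: list of H x W arrays ordered by viewpoint position
    (equal baseline assumed).  `polarization_angles`: polarization
    direction of each capture in degrees.  `row`: the depth calculation
    target line.  All names are illustrative.
    """
    # Vertical axis of the EPI is viewpoint position, horizontal axis is
    # the image position within the extracted line.
    epi = np.stack([img[row, :] for img in images], axis=0)

    # The polarization EPI additionally carries, per pixel, the polarization
    # direction of the capture the pixel was extracted from.
    width = images[0].shape[1]
    pol_directions = np.stack(
        [np.full(width, angle, dtype=float) for angle in polarization_angles],
        axis=0)
    return epi, pol_directions
```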
  • FIG. 3 illustrates the plurality of polarization imaging images, the epipolar plane image, and the polarization epipolar plane image.
  • (A) of FIG. 3 illustrates the polarization directions of the imaging units 20-1 to 20-6.
  • The polarization element 21-1 provided in the imaging unit 20-1 has a polarization direction angle of “0°”, and the polarization element 21-2 provided in the imaging unit 20-2 has a polarization direction angle of “30°”.
  • The polarization elements provided in the other imaging units are likewise set so that the angle difference in polarization direction between adjacent imaging units is 30 degrees, with the polarization direction angle increasing from one unit to the next.
  • A distance ds is set between the respective imaging units; for example, with the imaging unit 20-1 as a reference, the imaging unit 20-6 is positioned at a distance of 5ds.
  • (B) of FIG. 3 shows the polarization imaging images acquired by the imaging units 20-1 to 20-6, with the positions of the images indicating desired positions on the subject marked by broken lines.
  • The broken lines mark the position PS1 on the side surface of the prismatic object OBa, the position PS2 on the side surface of the cylindrical object OBb located farther away than the object OBa, and the position PS3 on the side surface of the prismatic object OBa.
  • The left end of the image line indicated by the broken line is taken as the reference pixel position in the lateral direction of the epipolar plane image and the polarization epipolar plane image.
  • (C) of FIG. 3 shows an epipolar plane image.
  • the horizontal axis is the position in the horizontal direction in the polarization imaging image
  • the vertical axis is the physical distance of the viewpoint position (physical distance between the imaging units).
  • FIG. 3 (d) shows a polarized epipolar plane image.
  • the horizontal axis is the position in the lateral direction in the polarization imaging image
  • the vertical axis is the physical distance of the viewpoint position (physical distance between imaging units).
  • Since the intervals between the viewpoint positions are equal, the line connecting the points (PS1-1 to PS1-6) at the position PS1 in each polarization imaging image is a straight line.
  • Similarly, the lines connecting the points at the positions PS2 and PS3 are straight lines.
  • The polarization epipolar plane image generation unit 33 generates the epipolar plane image and the polarization epipolar plane image using the plurality of preprocessed polarization imaging images, and outputs them to the depth calculation unit 34.
  • The depth calculation unit 34 calculates the depth using the epipolar plane image and the polarization epipolar plane image generated by the polarization epipolar plane image generation unit 33.
  • The depth calculation unit 34 calculates the depth at positions having texture based on the epipolar plane image, and calculates the depth at positions without texture based on the polarization epipolar plane image.
  • FIG. 4 illustrates the configuration of the depth calculation unit.
  • The depth calculation unit 34 includes a gradient calculation unit 341 and a depth conversion unit 343.
  • the gradient calculation unit 341 calculates the gradient of each pixel in the epipolar plane image generated by the polarization epipolar plane image generation unit 33 and the polarization epipolar plane image.
  • the gradient calculating unit 341 includes a pixel classification unit 3411, a texture gradient calculating unit 3412, and a polarization gradient calculating unit 3413.
  • The pixel classification unit 3411 classifies the pixels of the epipolar plane image into pixels having texture information and pixels having no texture information.
  • For the determination target pixel, the pixel classification unit 3411 uses a first-derivative filter such as a Sobel filter or a Prewitt filter, or a second-derivative filter such as a Laplacian filter, to generate the horizontal differential value Iu and the vertical differential value Is.
  • The pixel classification unit 3411 calculates the texture determination value G by performing the calculation of Expression (1) using the horizontal differential value Iu and the vertical differential value Is.
  • If the calculated texture determination value G is equal to or greater than a predetermined determination threshold, the pixel classification unit 3411 determines that the pixel has texture information; if the texture determination value G is smaller than the determination threshold, it determines that the pixel does not have texture information.
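Expression (1) is not reproduced in the text; a common choice consistent with the description is the gradient magnitude computed from the two differential values. The following sketch assumes that form (the names `classify_pixels` and `threshold` are illustrative):

```python
import numpy as np
from scipy.ndimage import sobel

def classify_pixels(epi, threshold):
    """Split EPI pixels into textured / textureless, following the
    description of the pixel classification unit 3411.  The gradient
    magnitude used for G is an assumed stand-in for Expression (1)."""
    iu = sobel(epi.astype(float), axis=1)   # horizontal differential value Iu
    is_ = sobel(epi.astype(float), axis=0)  # vertical differential value Is
    g = np.sqrt(iu ** 2 + is_ ** 2)         # texture determination value G (assumed form)
    has_texture = g >= threshold            # True where the pixel has texture information
    return has_texture, iu, is_
```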
  • The texture gradient calculation unit 3412 calculates the gradient of each pixel classified by the pixel classification unit 3411 as having texture information, using the texture information (the color information and pattern of the texture).
  • The texture gradient calculation unit 3412 calculates the gradient θt by performing the calculation of Equation (2) using, for example, the horizontal differential value Iu and the vertical differential value Is calculated by the pixel classification unit 3411.
  • The texture gradient calculation unit 3412 may also calculate the gradient θt with high accuracy by using pixels in the vicinity region of the gradient calculation target pixel.
  • FIG. 5 is a diagram for explaining how the gradient of a pixel having texture information is calculated with high accuracy.
  • The optimal gradient is the direction that minimizes, within the vicinity region of the calculation target pixel O, the differences in pixel value and in differential value between the calculation target pixel O and the pixels q lying in that direction.
  • This direction satisfies Equation (3).
  • The evaluation value Cθ(O) is the value calculated by Expression (4); the larger the evaluation value Cθ(O), the lower the reliability.
  • “θ” is a gradient direction in the set (0°, 180°).
  • “S(θ)” is the set of pixels in the direction along the gradient θ within the vicinity region of the calculation target pixel O.
  • “I(O)” is the pixel value of the calculation target pixel, and “Iu(O), Is(O)” are its differential values.
  • “I(q)” is the pixel value of a pixel q in the vicinity region, and “Iu(q), Is(q)” are its differential values.
  • The remaining parameter in Expression (4) adjusts the relative weight of the pixel-value difference and the differential-value difference.
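Equations (2)-(4) are likewise not reproduced. The sketch below implements the neighborhood evaluation as described, with an assumed squared-difference form for the evaluation value Cθ(O): for each candidate gradient θ, the pixel-value and differential-value differences between the target pixel O and the pixels q on the line S(θ) are accumulated, and the θ with the smallest evaluation value (highest reliability) is returned.

```python
import numpy as np

def texture_gradient(epi, iu, is_, o_row, o_col, half_len=3, weight=0.5):
    """Neighborhood-refined gradient for a textured pixel O.  The
    squared-difference evaluation value below is an assumed form of
    Expression (4); `half_len` and `weight` are illustrative."""
    best_theta, best_cost = None, np.inf
    for theta in np.arange(1.0, 180.0, 1.0):          # candidate gradients in (0, 180)
        dy = np.sin(np.radians(theta))
        dx = np.cos(np.radians(theta))
        cost = 0.0
        for t in range(-half_len, half_len + 1):
            if t == 0:
                continue
            r = int(round(o_row + t * dy))            # pixel q on the line S(theta)
            c = int(round(o_col + t * dx))
            if not (0 <= r < epi.shape[0] and 0 <= c < epi.shape[1]):
                continue
            # Pixel-value difference plus weighted differential-value
            # difference between O and q.
            cost += (epi[o_row, o_col] - epi[r, c]) ** 2
            cost += weight * ((iu[o_row, o_col] - iu[r, c]) ** 2
                              + (is_[o_row, o_col] - is_[r, c]) ** 2)
        if cost < best_cost:                          # smaller C => higher reliability
            best_theta, best_cost = theta, cost
    return best_theta                                 # gradient theta_t of pixel O
```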
  • The polarization gradient calculation unit 3413 calculates the gradient using polarization information for the pixels classified by the pixel classification unit 3411 as having no texture information.
  • FIG. 6 is a diagram for explaining the relationship between the polarization direction and the pixel value (luminance).
  • A light source LT illuminates the subject OB, and the subject OB is imaged by the imaging device CM via the polarizing element PL. In this case, it is known that, in the polarization imaging image generated by the imaging device CM, the luminance of the subject OB changes in accordance with the rotation of the polarizing element PL.
  • Let the highest luminance obtained when the polarizing element PL is rotated be Imax, and the lowest luminance be Imin.
  • Let the x-axis and y-axis of two-dimensional coordinates lie in the plane of the polarizing element PL, and let the angle on the xy plane with respect to the x-axis when the polarizing element PL is rotated be the polarization angle υ.
  • The polarizing element PL returns to its original polarization state when rotated by 180 degrees, that is, it has a period of 180 degrees.
  • The polarization angle υ at which the maximum luminance Imax is observed is taken as the azimuth angle φ.
  • The luminance I observed when the polarizing element PL is rotated can be expressed as Expression (5). That is, the luminance change caused by changing the polarization direction of the polarizing element follows the waveform of a trigonometric function, and the phase of the waveform changes depending on the polarization angle and the azimuth angle.
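Expression (5) itself is not reproduced in the text. The standard polarization luminance model consistent with the description (a 180° period, with maximum luminance where the polarization angle υ equals the azimuth angle φ) is:

```latex
I(\upsilon) = \frac{I_{\max}+I_{\min}}{2} + \frac{I_{\max}-I_{\min}}{2}\,\cos\bigl(2(\upsilon-\phi)\bigr)
```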
  • FIG. 7 illustrates the relationship between the luminance and the polarization angle.
  • When the gradient is the true value, the pixels located in the gradient direction correspond to physically the same point on the subject viewed from different viewpoint positions.
  • Therefore, when the gradient of the depth calculation target pixel is the true value, the pixel values plotted against the polarization direction of the polarizing element form the waveform of a trigonometric function (for example, a sine wave), and the phase and amplitude of the waveform are equal regardless of which pixels are used.
  • The polarization gradient calculating unit 3413 therefore takes, as the gradient of the depth calculation target pixel, the gradient that minimizes the difference between the waveforms of the trigonometric functions obtained from combinations with different polarization directions.
  • Note that polarization imaging images having four or more polarization directions within a range of angle differences of less than 180 degrees are used as the plurality of polarization imaging images.
  • FIG. 8 is a diagram for explaining the case where the gradient can be calculated using polarization information.
  • (A) of FIG. 8 shows the gradient of the true value by a solid line and an incorrect gradient by a broken line.
  • In the polarization epipolar plane image, the polarization directions in the vertical direction are “0°, 30°, 60°, 90°, 120°, 150°”.
  • Pixel values of six pixels are thus obtained along a candidate line.
  • When the gradient is incorrect, the pixels in the respective polarization directions indicate, for example, different positions on the subject, as shown in (B) of FIG. 8. Therefore, if the combination of polarization directions used to obtain the waveform of the trigonometric function from the pixel values differs, the phase and amplitude of the trigonometric functions differ, as shown in (C) of FIG. 8. For example, the trigonometric function waveform obtained from the pixel values of the three pixels whose polarization directions are “0°, 60°, 120°” and the waveform obtained from the pixel values of the three pixels whose polarization directions are “30°, 90°, 150°” differ in phase and amplitude.
  • When the gradient is the true value, the pixels indicate the same position on the subject; even if the combination of polarization directions used to obtain the waveform of the trigonometric function from the pixel values differs, the phase and amplitude of the trigonometric functions match, as shown in (E) of FIG. 8.
  • The polarization gradient calculating unit 3413 calculates the gradient that minimizes the difference between the waveforms of the trigonometric functions obtained with different combinations of polarization directions. For example, the polarization gradient calculating unit 3413 generates a histogram in which the horizontal axis indicates the angle of the gradient and the vertical axis indicates the difference, and takes the angle at which the difference is minimum as the gradient of the true value. In addition, the polarization gradient calculating unit 3413 may determine that the gradient cannot be calculated if the difference between the maximum value and the minimum value of the waveform difference is smaller than a preset threshold.
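The following Python sketch illustrates this search. For each candidate gradient, the six pixels on the candidate line are split into two combinations with different polarization directions (even and odd viewpoint rows, matching the 0°/60°/120° versus 30°/90°/150° example above), a trigonometric waveform is fitted to each, and the gradient whose two waveforms agree best in amplitude and phase is returned. The sinusoid-fitting helper and the geometric convention (θ measured from the horizontal axis of the EPI) are assumptions.

```python
import numpy as np

def fit_sinusoid(angles_deg, values):
    """Least-squares fit of I = a0 + a1*cos(2v) + a2*sin(2v); returns the
    amplitude and phase of the fitted trigonometric waveform."""
    v = np.radians(np.asarray(angles_deg, dtype=float))
    design = np.column_stack([np.ones_like(v), np.cos(2 * v), np.sin(2 * v)])
    a0, a1, a2 = np.linalg.lstsq(design, np.asarray(values, dtype=float),
                                 rcond=None)[0]
    return np.hypot(a1, a2), 0.5 * np.arctan2(a2, a1)

def polarization_gradient(epi, angles_per_row, o_row, o_col, thresh):
    """Gradient search for a textureless pixel, per the description of the
    polarization gradient calculation unit 3413."""
    n_rows = epi.shape[0]
    best_theta, best_diff, diffs = None, np.inf, []
    for theta in np.arange(1.0, 180.0, 1.0):
        # Columns of the pixels on the candidate line through (o_row, o_col).
        cols = np.round(o_col + (np.arange(n_rows) - o_row)
                        / np.tan(np.radians(theta))).astype(int)
        if cols.min() < 0 or cols.max() >= epi.shape[1]:
            continue
        vals = epi[np.arange(n_rows), cols]
        # Two combinations with different polarization directions, e.g.
        # rows 0,2,4 (0/60/120 deg) versus rows 1,3,5 (30/90/150 deg).
        amp_a, ph_a = fit_sinusoid(angles_per_row[0::2], vals[0::2])
        amp_b, ph_b = fit_sinusoid(angles_per_row[1::2], vals[1::2])
        diff = abs(amp_a - amp_b) + abs(ph_a - ph_b)  # phase wrap ignored for brevity
        diffs.append(diff)
        if diff < best_diff:
            best_theta, best_diff = theta, diff
    # If the waveform difference hardly varies with the gradient (e.g. a
    # point on a flat surface), the gradient cannot be determined.
    if not diffs or max(diffs) - best_diff < thresh:
        return None
    return best_theta
```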
  • The combinations of pixels used to detect the amplitude and phase of the waveform of the trigonometric function are not limited to selecting every other pixel.
  • For example, the polarization gradient calculation unit 3413 may take pixel values at successive positions as a set and detect the amplitude and phase of the waveform of the trigonometric function for each set.
  • Alternatively, the polarization gradient calculation unit 3413 may select pixels at random to form sets, and detect the amplitude and phase of the waveform of the trigonometric function for each set.
  • Selecting the pixels of a combination at random makes the result less susceptible to noise than using pixels at successive positions, since the angle differences in polarization direction within a set are increased.
  • For example, to reduce the influence of noise, the polarization gradient calculation unit 3413 detects the phase and amplitude of the waveform of the trigonometric function from the pixel values of pixels randomly selected from among the pixels located in the direction of an arbitrary gradient, repeats this a plurality of times, and calculates the dispersion (for example, the standard deviation) of the phase and amplitude of the waveform. Furthermore, the polarization gradient calculating unit 3413 repeats this dispersion calculation for gradients within the range of 0° to 180°.
  • The polarization gradient calculating unit 3413 then generates a histogram in which the horizontal axis indicates the angle of the gradient and the vertical axis indicates the dispersion, and determines the gradient with the smallest dispersion to be the gradient of the true value.
  • The polarization gradient calculating unit 3413 may determine that the gradient of a pixel cannot be calculated when the angle difference between the angle at which the phase difference is minimum and the angle at which the amplitude difference is minimum, or between the angle at which the phase dispersion is minimum and the angle at which the amplitude dispersion is minimum, is larger than a preset threshold.
  • Otherwise, the polarization gradient calculating unit 3413 may take either one of these angles as the true value of the gradient, or may take a weighted average of the angle at which the phase difference (dispersion) is minimum and the angle at which the amplitude difference (dispersion) is minimum as the true value of the gradient.
  • When detecting the phase and amplitude of the waveform of the trigonometric function, the polarization gradient calculating unit 3413 may use the pixel values of more than three points. Furthermore, the difference or dispersion of the phase and amplitude of the waveform at the true-value gradient may be used as the evaluation value of the gradient; in this case, the larger the difference or dispersion at the true-value gradient, the lower the reliability.
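A sketch of this random-combination variant, reusing `fit_sinusoid` from the previous sketch; the number of trials and the subset size of three pixels are illustrative, and `angles_per_row` is assumed to be a NumPy array:

```python
import numpy as np

def gradient_by_variance(epi, angles_per_row, o_row, o_col, n_trials=20):
    """Random-combination variant: the gradient with the smallest dispersion
    of fitted phase and amplitude is taken as the true value.  Reuses
    fit_sinusoid from the previous sketch."""
    rng = np.random.default_rng(0)
    n_rows = epi.shape[0]
    angles = np.asarray(angles_per_row, dtype=float)
    best_theta, best_spread = None, np.inf
    for theta in np.arange(1.0, 180.0, 1.0):
        cols = np.round(o_col + (np.arange(n_rows) - o_row)
                        / np.tan(np.radians(theta))).astype(int)
        if cols.min() < 0 or cols.max() >= epi.shape[1]:
            continue
        vals = epi[np.arange(n_rows), cols]
        amps, phases = [], []
        for _ in range(n_trials):
            # Randomly pick three pixels on the candidate line and fit.
            idx = rng.choice(n_rows, size=3, replace=False)
            amp, phase = fit_sinusoid(angles[idx], vals[idx])
            amps.append(amp)
            phases.append(phase)
        spread = np.std(amps) + np.std(phases)  # dispersion of amplitude and phase
        if spread < best_spread:
            best_theta, best_spread = theta, spread
    return best_theta
```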
  • For a point on a flat surface, the amplitude and phase of the trigonometric function waveform are nearly constant regardless of the assumed gradient; compared with a point on a surface whose orientation changes, the dispersion of the phase and amplitude is too small to determine the gradient of the true value.
  • However, the normal direction can be calculated based on the pixel values of polarization images with three or more polarization directions. Therefore, in the present technology, among the pixels having no texture information, a pixel for which the gradient of the true value cannot be calculated but for which the amplitude and phase of the trigonometric function waveform are nearly constant regardless of the gradient (a pixel with small dispersion) is treated as a gradient-uncalculated pixel with a normal.
  • Among the pixels having no texture information, a pixel other than a gradient-uncalculated pixel with a normal whose gradient reliability is lower than a predetermined reliability threshold is treated as a gradient-uncalculated pixel without a normal.
  • The depth conversion unit 343 converts the gradients calculated by the texture gradient calculation unit 3412 and the polarization gradient calculation unit 3413 of the gradient calculation unit 341 into depths.
  • The depth conversion unit 343 performs the calculation of Expression (6) using the gradient θx, and calculates the depth D to the subject position corresponding to the pixel for which the gradient θx was calculated.
  • Here, “f” is the focal length of the imaging unit, and the gradient θx is the gradient θt or the gradient θp.
  • By converting only gradients whose reliability is equal to or higher than the reliability threshold into depth, the depth conversion unit 343 can prevent the calculation of depths with low reliability.
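Expression (6) is not reproduced in the text. With the EPI axes described above (horizontal: image position, vertical: viewpoint distance), a scene point at depth D traces a line with slope ds/du = D/f, so an assumed form of the conversion is D = f·tan θ, with θ measured from the horizontal axis:

```python
import numpy as np

def gradient_to_depth(theta_deg, focal_length):
    """Assumed form of Expression (6): with the EPI axes described above,
    a scene point at depth D traces a line with slope ds/du = D/f, so
    D = f * tan(theta), theta measured from the horizontal axis."""
    return focal_length * np.tan(np.radians(theta_deg))
```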
  • FIG. 9 is a flowchart showing the operation of the first embodiment.
  • In step ST1, the image processing apparatus acquires a plurality of polarization imaging images. The image processing apparatus 30 acquires polarization imaging images from the plurality of imaging units, and proceeds to step ST2.
  • In step ST2, the image processing apparatus performs preprocessing of the polarization imaging images.
  • The image processing apparatus 30 performs distortion correction and the like on each of the plurality of polarization imaging images acquired from the plurality of imaging units, using the internal parameters corresponding to the imaging unit. Furthermore, the image processing apparatus 30 performs registration using the external parameters on each polarization imaging image processed with the internal parameters, and proceeds to step ST3.
  • In step ST3, the image processing apparatus generates an epipolar plane image and a polarization epipolar plane image.
  • The image processing apparatus 30 generates the epipolar plane image and the polarization epipolar plane image by, for example, extracting the image of the depth calculation target line from each preprocessed polarization imaging image and stacking the extracted lines in order of viewpoint position. Each pixel in the polarization epipolar plane image carries information indicating its polarization direction.
  • The image processing device 30 generates the epipolar plane image and the polarization epipolar plane image, and proceeds to step ST4.
  • In step ST4, the image processing apparatus performs depth calculation.
  • The image processing device 30 calculates the depth to the subject position indicated by each depth calculation target pixel in the depth calculation target line, based on the epipolar plane image and the polarization epipolar plane image.
  • FIG. 10 is a flowchart illustrating the depth calculation operation in step ST4.
  • In step ST11, the image processing apparatus performs pixel classification.
  • The image processing device 30 calculates the texture determination value G using the pixel values. Furthermore, the image processing apparatus 30 determines, based on the calculated texture determination value G, whether each pixel has texture information, and proceeds to step ST12.
  • In step ST12, the image processing apparatus calculates the gradient using texture information.
  • The image processing apparatus 30 calculates the gradient θt using the texture information for the pixels determined to have texture information, and proceeds to step ST13.
  • In step ST13, the image processing apparatus calculates the gradient using polarization information.
  • The image processing apparatus 30 calculates the gradient θp using polarization information for the pixels determined to have no texture information, and proceeds to step ST14.
  • In step ST14, the image processing apparatus calculates the depth.
  • The image processing device 30 converts the gradient θx calculated in step ST12 or step ST13 into depth. The gradient θx is the gradient θt or the gradient θp.
  • By performing the above-described processing, the image processing apparatus calculates the depth based on the gradient calculated using texture information for, for example, the position PS1 on the textured side surface of the object OBa, and calculates the depth based on the gradient calculated using polarization information for, for example, the position PS3 on the untextured side surface of the object OBb.
  • <Second embodiment> Next, a second embodiment of the image processing apparatus will be described. In the second embodiment, a case where depth calculation is performed at a resolution higher than that of the first embodiment will be described.
  • As in the first embodiment shown in FIG. 2, the image processing apparatus has the preprocessing unit 31, the parameter holding unit 32, the polarization epipolar plane image generation unit 33, and the depth calculation unit 34.
  • The preprocessing unit 31 performs preprocessing on the polarization imaging images acquired by the plurality of imaging units, using the parameters stored in the parameter holding unit 32.
  • The parameter holding unit 32 stores in advance internal parameters and external parameters obtained by performing calibration using a predetermined subject such as a checkerboard.
  • The preprocessing unit 31 performs distortion correction and the like on each polarization imaging image using the internal parameters of the imaging unit that acquired it. Further, the preprocessing unit 31 performs registration of the polarization imaging images using the images processed with the internal parameters and the external parameters.
  • The preprocessing unit 31 outputs the preprocessed polarization imaging images to the polarization epipolar plane image generation unit 33.
  • From the plurality of polarization imaging images, the polarization epipolar plane image generation unit 33 extracts line images in the arrangement direction of the plurality of imaging units, that is, the arrangement direction of the plurality of viewpoint positions, so as to include an image indicating a desired position on the subject. The epipolar plane image is generated by arranging the extracted line images, in order of the imaging units, in the vertical direction orthogonal to the arrangement direction, at intervals according to the intervals of the imaging units (the intervals of the viewpoint positions). In addition, the polarization epipolar plane image generation unit 33 generates the polarization epipolar plane image by arranging the extracted line images in order of viewpoint position, in the vertical direction orthogonal to the arrangement direction, at intervals according to the intervals of the viewpoint positions.
  • The polarization epipolar plane image generation unit 33 generates the epipolar plane image and the polarization epipolar plane image using the plurality of preprocessed polarization imaging images, and outputs them to the depth calculation unit 34.
  • The second embodiment differs from the first embodiment in the configuration of the depth calculation unit.
  • The depth calculation unit in the second embodiment calculates the normal from the pixel values of the pixels located in the direction of the true-value gradient in the polarization epipolar plane image, and uses the calculated normal to perform depth calculation at a higher resolution than in the first embodiment.
  • FIG. 11 illustrates the configuration of the depth calculation unit according to the second embodiment.
  • the depth calculation unit 34 includes a gradient calculation unit 341, a depth conversion unit 343, a normal calculation unit 344, and an integration processing unit 345.
  • the gradient calculation unit 341 calculates the gradient of each pixel in the epipolar plane image generated by the polarization epipolar plane image generation unit 33 and the polarization epipolar plane image.
  • the gradient calculating unit 341 includes a pixel classification unit 3411, a texture gradient calculating unit 3412, and a polarization gradient calculating unit 3413.
  • The pixel classification unit 3411 classifies the pixels of the epipolar plane image into pixels having texture information and pixels having no texture information.
  • The pixel classification unit 3411 calculates the texture determination value G as described above for each determination target pixel. If the calculated texture determination value G is equal to or greater than the predetermined determination threshold, the pixel classification unit 3411 determines that the pixel has texture information; if the texture determination value G is smaller than the determination threshold, it determines that the pixel does not have texture information.
  • The texture gradient calculation unit 3412 calculates the gradient θt as described above, using the texture information, for the pixels classified by the pixel classification unit 3411 as having texture information.
  • the texture gradient calculation unit 3412 outputs the calculated gradient to the depth conversion unit 343.
  • The polarization gradient calculation unit 3413 calculates the gradient using polarization information for the pixels classified by the pixel classification unit 3411 as not having texture information. As described above, the polarization gradient calculation unit 3413 calculates the waveform of the trigonometric function for each of the different combinations of pixels located in the direction of a candidate gradient in the polarization epipolar plane image, and determines the gradient with the smallest waveform difference to be the gradient of the true value. The polarization gradient calculation unit 3413 outputs the calculated gradient of the true value to the depth conversion unit 343.
  • The polarization gradient calculating unit 3413 also outputs the pixel values of the pixels at which the difference in amplitude and the difference in phase are minimum, that is, the pixel values of the pixels located in the direction of the true-value gradient, to the normal calculation unit 344. Furthermore, the polarization gradient calculating unit 3413 outputs not only the pixel values of the pixels located in the direction of the true-value gradient but also the pixel values of pixels located in the direction of an arbitrary gradient to the normal calculation unit 344.
  • The depth conversion unit 343 converts the gradients calculated by the texture gradient calculation unit 3412 and the polarization gradient calculation unit 3413 of the gradient calculation unit 341 into depths.
  • The depth conversion unit 343 converts the gradient θt calculated by the texture gradient calculation unit 3412 and the gradient θp calculated by the polarization gradient calculation unit 3413 into depth, as described above.
  • the depth conversion unit 343 outputs the converted depth to the normal calculation unit 344 and the integration processing unit 345.
  • The normal calculation unit 344 calculates a normal using the pixel values (luminance) supplied from the polarization gradient calculation unit 3413. As described above, the luminance observed when the polarization direction is rotated can be expressed as Expression (5).
  • The normal calculation unit 344 performs fitting to the function shown in Expression (5) using the luminances of the polarization images with three or more polarization directions, and determines the azimuth angle φ at which the maximum luminance is obtained, based on the function indicating the relationship between luminance and polarization angle.
  • The object surface normal is expressed in a polar coordinate system, with the normal information given as the azimuth angle φ and the zenith angle θz.
  • The zenith angle θz is the angle from the z-axis toward the normal, and the azimuth angle φ is the angle in the y-axis direction with respect to the x-axis, as described above.
  • The degree of polarization ρ can be calculated by performing the operation of Expression (7).
  • The relationship between the degree of polarization and the zenith angle is known, from the Fresnel equations, to have, for example, the characteristic shown in FIG. 12, and the zenith angle θz can be determined from the degree of polarization ρ based on this characteristic.
  • The characteristic shown in FIG. 12 is an example, and changes depending on the refractive index of the subject.
  • In this way, the normal calculation unit 344 obtains the relationship between luminance and polarization angle from the polarization directions and luminances based on the pixel values of the polarization images with three or more polarization directions, and determines the azimuth angle φ. The normal calculation unit 344 also calculates the degree of polarization ρ using the maximum and minimum luminances obtained from the relationship between luminance and polarization angle, and determines the zenith angle θz corresponding to the degree of polarization ρ based on the characteristic curve indicating the relationship between the degree of polarization and the zenith angle.
  • The normal calculation unit 344 thus generates normal information indicating the normal direction of the subject (the azimuth angle φ and the zenith angle θz) based on the pixel values of the polarization images with three or more polarization directions.
  • The normal calculation unit 344 outputs the normal information indicating the calculated normal direction to the integration processing unit 345.
  • Although the azimuth of the normal has an indeterminacy of 180°, this indeterminacy can be eliminated by estimating the gradient of the object surface based on the depth obtained by the depth conversion unit 343.
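The following sketch combines the steps just described: fit Expression (5), take the azimuth φ from the phase, compute the degree of polarization (Expression (7) is not reproduced; the standard form ρ = (Imax − Imin)/(Imax + Imin) is assumed), and look up the zenith angle from the FIG. 12 characteristic. The `zenith_lut` callable is hypothetical, standing in for that characteristic at the subject's refractive index.

```python
import numpy as np

def normal_from_polarization(angles_deg, values, zenith_lut):
    """Sketch of the normal calculation unit 344.  `zenith_lut` is a
    hypothetical callable embodying the FIG. 12 degree-of-polarization /
    zenith-angle characteristic for the subject's refractive index."""
    v = np.radians(np.asarray(angles_deg, dtype=float))
    design = np.column_stack([np.ones_like(v), np.cos(2 * v), np.sin(2 * v)])
    a0, a1, a2 = np.linalg.lstsq(design, np.asarray(values, dtype=float),
                                 rcond=None)[0]
    amplitude = np.hypot(a1, a2)
    i_max, i_min = a0 + amplitude, a0 - amplitude
    azimuth = 0.5 * np.arctan2(a2, a1)          # phi: angle of maximum luminance
    rho = (i_max - i_min) / (i_max + i_min)     # degree of polarization (assumed Expression (7))
    zenith = zenith_lut(rho)                    # theta_z from the FIG. 12 characteristic
    return azimuth, zenith
```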
  • The integration processing unit 345 performs integration processing using the depth obtained by the depth conversion unit 343 and the normal calculated by the normal calculation unit 344 to obtain depth with high resolution and high accuracy.
  • Based on the depth and normal of a pixel for which the true-value gradient was calculated from the polarization information, or the normal of a pixel for which the true-value gradient was not calculated, the integration processing unit 345 calculates depth that is denser than the pixel unit, with high resolution and high accuracy.
  • FIG. 13 is a diagram for explaining depth interpolation processing based on the depth and the normal.
  • Assume that the depth D1 at the pixel position PX1 and the depth D2 at the pixel position PX2 are obtained by the depth conversion unit 343, and that the normal F1 at the pixel position PX1 and the normal F2 at the pixel position PX2 are obtained by the normal calculation unit 344.
  • The integration processing unit 345 performs interpolation processing using the depths D1 and D2 and the normals F1 and F2, and calculates, for example, the depth D12 at the subject position corresponding to the boundary position PX12 between the pixel position PX1 and the pixel position PX2.
  • The depth D12 can be calculated based on Equation (13).
  • Let the focal length of the imaging unit be “f”, and, in the image coordinate system with the image center as the origin, let the coordinates of the pixel position PX1 be (u1, v1) and the coordinates of the pixel position PX2 be (u2, v2). Let the normal vector of the normal F1 be (Nx1, Ny1, Nz1) and the normal vector of the normal F2 be (Nx2, Ny2, Nz2). The pixel positions PX1 and PX2 are adjacent in the u direction (rightward) of the image coordinate system.
  • In this case, the relationships of Equations (14) and (15) hold. Therefore, the depth D12 can be calculated based on Equation (16).
  • By performing the depth interpolation processing based on the depths and the normals indicated by the normal information, the depth D12 at the boundary position PX12 can be calculated with higher accuracy than when it is calculated by linear interpolation using only the depth D1 and the depth D2, without the normal information.
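Equations (13)-(16) are not reproduced in the text. One formulation consistent with the description treats each pixel's depth and normal as defining a local tangent plane and intersects the viewing ray of PX12 with each plane; the sketch below averages the two predictions. This tangent-plane formulation is an assumption.

```python
import numpy as np

def interpolated_depth(u1, v1, d1, n1, u2, v2, d2, n2, u12, v12, f):
    """Depth at a sub-pixel position PX12 = (u12, v12) between PX1 and PX2.
    Each pixel's depth and normal define a local tangent plane; the viewing
    ray of PX12 is intersected with each plane and the two predictions are
    averaged.  This tangent-plane formulation is an assumption."""
    def plane_depth(u, v, d, n, uq, vq):
        point = d * np.array([u / f, v / f, 1.0])  # 3D point of the pixel
        ray = np.array([uq / f, vq / f, 1.0])      # viewing ray of the query pixel
        return np.dot(n, point) / np.dot(n, ray)   # depth of ray/plane intersection
    return 0.5 * (plane_depth(u1, v1, d1, np.asarray(n1, float), u12, v12)
                  + plane_depth(u2, v2, d2, np.asarray(n2, float), u12, v12))
```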
  • Further, by performing integration processing in the same manner as, for example, JP-A-2015-114307, the depth of a gradient-uncalculated pixel with a normal, that is, a pixel for which normal information is generated but no depth is obtained, can be calculated with high accuracy.
  • In this way, the integration processing unit 345 performs integration processing using the depth and the normal, or the normal alone, and calculates depth with higher resolution and higher accuracy than the pixel unit.
  • FIG. 14 is a flow chart showing the operation of the second embodiment.
  • In step ST21, the image processing device performs pixel classification.
  • The image processing device 30 calculates the texture determination value using the pixel values. Furthermore, the image processing apparatus 30 determines, based on the calculated texture determination value, whether each pixel has texture information, and proceeds to step ST22.
  • In step ST22, the image processing apparatus calculates the gradient using texture information.
  • The image processing apparatus 30 calculates the gradient using the texture information for the pixels determined to have texture information, and proceeds to step ST23.
  • In step ST23, the image processing apparatus calculates the gradient using polarization information.
  • The image processing apparatus 30 calculates the gradient of the true value using the polarization information for the pixels determined to have no texture information, and proceeds to step ST24.
  • In step ST24, the image processing apparatus calculates the depth.
  • The image processing device 30 calculates the depth based on the gradients calculated in step ST22 and step ST23, and proceeds to step ST25.
  • In step ST25, the image processing apparatus generates normal information.
  • The image processing device 30 calculates the normal based on the polarization information used when the gradient was calculated in step ST23, generates normal information indicating the calculated normal, and proceeds to step ST26.
  • In step ST26, the image processing apparatus performs integration processing.
  • The image processing device 30 performs integration processing using the depth obtained in pixel units and the normal indicated by the normal information, and calculates the depth with higher resolution and higher accuracy than the pixel unit.
  • <Third embodiment> Next, a third embodiment of the image processing apparatus will be described. In the third embodiment, the depth can be calculated even for a depth calculation target pixel whose gradient cannot be calculated based on texture information or polarization information.
  • As in the first embodiment shown in FIG. 2, the image processing apparatus has the preprocessing unit 31, the parameter holding unit 32, the polarization epipolar plane image generation unit 33, and the depth calculation unit 34.
  • The preprocessing unit 31 performs preprocessing on the polarization imaging images acquired by the plurality of imaging units, using the parameters stored in the parameter holding unit 32.
  • The parameter holding unit 32 stores in advance internal parameters and external parameters obtained by performing calibration using a predetermined subject such as a checkerboard.
  • The preprocessing unit 31 performs distortion correction and the like on each polarization imaging image using the internal parameters of the imaging unit that acquired it. Further, the preprocessing unit 31 performs registration of the polarization imaging images using the images processed with the internal parameters and the external parameters.
  • The preprocessing unit 31 outputs the preprocessed polarization imaging images to the polarization epipolar plane image generation unit 33.
  • From the plurality of polarization imaging images, the polarization epipolar plane image generation unit 33 extracts line images in the arrangement direction of the plurality of imaging units, that is, the arrangement direction of the plurality of viewpoint positions, so as to include an image indicating a desired position on the subject. The epipolar plane image is generated by arranging the extracted line images, in order of the imaging units, in the vertical direction orthogonal to the arrangement direction, at intervals according to the intervals of the imaging units (the intervals of the viewpoint positions). In addition, the polarization epipolar plane image generation unit 33 generates the polarization epipolar plane image by arranging the extracted line images in order of viewpoint position, in the vertical direction orthogonal to the arrangement direction, at intervals according to the intervals of the viewpoint positions.
  • The polarization epipolar plane image generation unit 33 generates the epipolar plane image and the polarization epipolar plane image using the plurality of preprocessed polarization imaging images, and outputs them to the depth calculation unit 34.
  • The third embodiment differs from the first and second embodiments in the configuration of the depth calculation unit.
  • The depth calculation unit in the third embodiment additionally performs processing for calculating the depth of a depth calculation target pixel for which the gradient cannot be calculated based on either the texture information or the polarization information.
  • FIG. 15 illustrates the configuration of the depth calculator in the third embodiment.
  • the depth calculation unit 34 includes a gradient calculation unit 341, a depth conversion unit 343, a normal calculation unit 344, an integration processing unit 345, and a depth interpolation unit 346.
  • the gradient calculation unit 341 calculates the gradient of each pixel in the epipolar plane image generated by the polarization epipolar plane image generation unit 33 and the polarization epipolar plane image.
  • the gradient calculating unit 341 includes a pixel classification unit 3411, a texture gradient calculating unit 3412, and a polarization gradient calculating unit 3413.
  • The pixel classification unit 3411 classifies the pixels of the epipolar plane image into pixels having texture information and pixels having no texture information.
  • The pixel classification unit 3411 calculates the texture determination value G as described above for each determination target pixel. If the calculated texture determination value G is equal to or greater than the predetermined determination threshold, the pixel classification unit 3411 determines that the pixel has texture information; if the texture determination value G is smaller than the determination threshold, it determines that the pixel does not have texture information.
  • The texture gradient calculation unit 3412 calculates the gradient using the texture information for the pixels classified by the pixel classification unit 3411 as having texture information. For example, the texture gradient calculation unit 3412 calculates the gradient θt as described above using the differential values calculated by the pixel classification unit 3411, and outputs the calculated gradient to the depth conversion unit 343.
  • the polarization gradient calculation unit 3413 calculates the gradient using polarization information for pixels that do not have the texture information classified by the pixel classification unit 3411. As described above, the polarization gradient calculation unit 3413 calculates the waveform of the trigonometric function for each different combination of pixels located in the direction of the gradient in the polarization epipolar plane image, and determines the gradient with the smallest waveform difference as the true value. It is a gradient. The polarization gradient calculation unit 3413 outputs the calculated gradient of the true value to the depth conversion unit 343.
  • the polarization gradient calculating unit 3413 outputs the pixel value of the pixel at which the difference in amplitude and the difference in phase are minimum, that is, the pixel value of the pixel positioned in the gradient direction of the true value to the normal calculation unit 344. In addition, the polarization gradient calculating unit 3413 outputs not only the pixel values of the pixels positioned in the direction of the gradient of the true value but also the pixel values of the pixels positioned in the direction of the arbitrary gradient to the normal calculation unit 344. Further, the polarization gradient calculating unit 3413 generates normal-no gradient non-computed pixel information indicating a normal-non-normalized gradient non-computed pixel and outputs the pixel information to the depth interpolation unit 346.
  • the depth conversion unit 343 converts the gradients calculated by the texture gradient calculation unit 3412 and the polarization gradient calculation unit 3413 of the gradient calculation unit 341 into depths.
  • the depth conversion unit 343 converts the gradient ⁇ p calculated by the texture gradient calculation unit 3412 and the polarization gradient calculation unit 3413 into depth as described above.
  • the depth conversion unit 343 outputs the converted depth to the normal calculation unit 344 and the integration processing unit 345.
  • the normal line calculating unit 344 determines the azimuth angle ⁇ at which the brightness is maximum. Further, the normal line calculation unit 344 determines the zenith angle ⁇ z corresponding to the degree of polarization ⁇ calculated using the maximum luminance and the minimum luminance obtained from the relationship between the luminance and the polarization angle. The normal line calculation unit 344 generates normal line information indicating the calculated azimuth angle ⁇ and the zenith angle ⁇ z, and outputs the generated normal line information to the integration processing unit 345.
  • the integration processing unit 345 performs integration processing using the depth obtained by the depth conversion unit 343 and the normal calculated by the normal calculation unit 344 to calculate the depth with high resolution and high accuracy, and the depth interpolation unit 346. Output to
  • The depth interpolation unit 346 calculates, by interpolation processing, the depth of the pixels indicated by the normal/gradient-uncalculated pixel information supplied from the polarization gradient calculation unit 3413. A normal/gradient-uncalculated pixel is a pixel for which the gradient could not be calculated, or a pixel for which the reliability of the gradient or the normal is low, that is, a pixel strongly affected by noise. The depth interpolation unit 346 therefore performs interpolation processing using the depths of pixels that are located around the normal/gradient-uncalculated pixel and for which the depth has been calculated by the integration processing unit 345, and calculates the depth of the normal/gradient-uncalculated pixel.
  • FIG. 16 is a diagram for explaining the depth interpolation processing of the depth interpolation unit 346. The pixel P1 is the closest pixel in the left direction of the normal/gradient-uncalculated pixel Pt for which the depth has been calculated, and the pixel P2 is the closest pixel in the right direction of the pixel Pt for which the depth has been calculated. The distance (for example, the number of pixels) from the pixel Pt to the pixel P1 is "L1", and the distance from the pixel Pt to the pixel P2 is "L2". The depth of the pixel P1 is "D1", and the depth of the pixel P2 is "D2". The depth interpolation unit 346 calculates the depth "Dt" of the normal/gradient-uncalculated pixel Pt based on Expression (17).
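  • Expression (17) itself is not reproduced in this text. A distance-weighted linear interpolation consistent with the description of P1, P2, L1, L2, D1, and D2 above would be (a reconstruction, not a quotation of the patent's formula):

```latex
D_t = \frac{L_2\,D_1 + L_1\,D_2}{L_1 + L_2}
```

With L1 = 0 this reduces to Dt = D1, and with L2 = 0 to Dt = D2, as expected for a pixel coinciding with one of its interpolation neighbors.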
  • In this way, the depth interpolation unit 346 performs interpolation processing using the depths of pixels located around a normal/gradient-uncalculated pixel to calculate the depth of that pixel.
  • FIG. 17 is a flowchart showing the operation of the third embodiment. In step ST31, the image processing apparatus performs pixel classification. The image processing device 30 calculates the texture determination value using the pixel values, determines based on the calculated texture determination value whether each pixel has texture information, and proceeds to step ST32.
  • In step ST32, the image processing apparatus calculates the gradient using texture information. The image processing device 30 calculates the gradient using texture information for the pixels determined to have texture information, and proceeds to step ST33.
  • In step ST33, the image processing apparatus calculates the gradient using polarization information. The image processing device 30 calculates the gradient using polarization information for the pixels determined not to have texture information, and proceeds to step ST34.
  • In step ST34, the image processing apparatus generates normal/gradient-uncalculated pixel information. The image processing device 30 determines the normal/gradient-uncalculated pixels based on the gradient calculation result of step ST33, the reliability of the calculated gradient, and the amplitude and phase of the trigonometric-function waveform used for calculating the gradient. Based on the determination result, it generates the normal/gradient-uncalculated pixel information and proceeds to step ST35.
  • In step ST35, the image processing apparatus calculates the depth. The image processing device 30 calculates the depth based on the gradients calculated in step ST32 and step ST33, and proceeds to step ST36.
  • In step ST36, the image processing apparatus generates normal information. The image processing device 30 calculates the normal based on the polarization information used when the gradient was calculated in step ST33, generates normal information indicating the calculated normal, and proceeds to step ST37.
  • In step ST37, the image processing apparatus performs integration processing. The image processing device 30 performs integration processing using the depth obtained in pixel units and the normal indicated by the normal information, calculates a depth at a resolution higher than the pixel units, and proceeds to step ST38.
  • In step ST38, the image processing apparatus performs depth interpolation processing. The image processing device 30 calculates the depth of the normal/gradient-uncalculated pixels indicated by the normal/gradient-uncalculated pixel information generated in step ST34 by interpolation processing using the depths located near those pixels.
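  • As shown below, steps ST31 to ST38 can be summarized in a single schematic driver. It reuses the fit_sinusoid / polarization_gradient sketch above; texture_gradient is a simple variance-minimizing line search standing in for the texture-based method described earlier, and all names and thresholds are assumptions, not the patent's implementation (the normal calculation and integration of steps ST36 to ST37 are omitted for brevity).

```python
import numpy as np

def texture_gradient(epi, x0, candidates):
    # Pixel values stay (nearly) constant along the true gradient direction,
    # so pick the candidate line with the smallest intensity variance.
    n = epi.shape[0]
    best, best_var = None, np.inf
    for g in candidates:
        xs = np.round(x0 + g * np.arange(n)).astype(int)
        if xs.min() < 0 or xs.max() >= epi.shape[1]:
            continue
        var = epi[np.arange(n), xs].astype(float).var()
        if var < best_var:
            best, best_var = g, var
    return best

def run_pipeline(epi, pol_epi, pol_angles, candidates, g_th, to_depth):
    depth = np.full(epi.shape[1], np.nan)
    uncalculated = []
    for x in range(1, epi.shape[1] - 1):
        textured = abs(float(epi[0, x + 1]) - float(epi[0, x - 1])) / 2 >= g_th  # ST31
        g = (texture_gradient(epi, x, candidates) if textured                    # ST32
             else polarization_gradient(pol_epi, x, pol_angles, candidates))     # ST33
        if g is None:
            uncalculated.append(x)            # ST34: normal/gradient-uncalculated
            continue
        depth[x] = to_depth(g)                # ST35
    valid = np.flatnonzero(~np.isnan(depth))  # ST38: interpolate from neighbors
    for x in uncalculated:                    # (assumes at least one valid depth)
        depth[x] = np.interp(x, valid, depth[valid])
    return depth
```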
  • In this way, according to the third embodiment, the depth can be calculated even for pixels whose depth cannot be calculated in the first embodiment or the second embodiment.
  • FIG. 18 illustrates the configuration of the imaging apparatus according to the fourth embodiment. In this imaging apparatus, six imaging units are arranged in the horizontal direction, and six such rows are stacked in the vertical direction. The imaging units 20-(1,1) to 20-(6,6) are provided with polarizing elements 21-(1,1) to 21-(6,6).
  • FIG. 19 illustrates the polarization directions of the imaging apparatus in the fourth embodiment. The polarizing elements are arranged so that the polarization direction has a predetermined angle difference, for example 30°, between imaging units adjacent vertically and horizontally; for example, the polarization directions take the values "0°, 30°, 60°, 90°, 120°, 150°".
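  • One concrete assignment satisfying this condition (an assumption for illustration; the actual arrangement is the one shown in FIG. 19) sets the direction of the unit in row i, column j to ((i + j) × 30°) mod 180°, so that any two vertically or horizontally adjacent units differ by 30°:

```python
def polarization_direction(i, j, step_deg=30):
    # Hypothetical layout for the 6 x 6 array: the polarization direction
    # advances by step_deg between adjacent units and wraps at 180 degrees.
    return ((i + j) * step_deg) % 180
```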
  • The image processing apparatus includes the preprocessing unit 31, the parameter holding unit 32, the polarization epipolar plane image generation unit 33, and the depth calculation unit 34, as in the first embodiment.
  • The preprocessing unit 31 performs preprocessing on the polarization imaging images acquired by the plurality of imaging units, using the parameters stored in the parameter holding unit 32. The parameter holding unit 32 stores in advance internal parameters and external parameters obtained by performing calibration using a predetermined subject such as a checkerboard.
  • The preprocessing unit 31 performs distortion correction and the like on each polarization imaging image using the internal parameters of the imaging unit that acquired it. Further, the preprocessing unit 31 performs registration of the polarization imaging images using the images processed with the internal parameters and the external parameters. The preprocessing unit 31 outputs the preprocessed polarization imaging images to the polarization epipolar plane image generation unit 33.
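  • A minimal sketch of this preprocessing, assuming OpenCV and assuming the registration warps (here plane-induced homographies) have been derived offline from the external parameters; the function and parameter names are illustrative only.

```python
import cv2

def preprocess(images, intrinsics, dist_coeffs, homographies):
    # Distortion correction with each imaging unit's internal parameters,
    # then registration (alignment) with a warp derived from the external
    # parameters obtained by checkerboard calibration.
    out = []
    for img, K, dist, H in zip(images, intrinsics, dist_coeffs, homographies):
        undistorted = cv2.undistort(img, K, dist)
        h, w = undistorted.shape[:2]
        out.append(cv2.warpPerspective(undistorted, H, (w, h)))
    return out
```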
  • The polarization epipolar plane image generation unit 33 generates a first polarization epipolar plane image and a first epipolar plane image from a plurality of polarization imaging images having different viewpoint positions in a first direction, and generates a second polarization epipolar plane image and a second epipolar plane image from a plurality of polarization imaging images having different viewpoint positions in a second direction different from the first direction.
  • From the polarization imaging images acquired by the plurality of imaging units arranged in the horizontal direction, the polarization epipolar plane image generation unit 33 extracts, in the arrangement direction of the imaging units (the horizontal direction), images that include an image indicating a desired position on the subject. The epipolar plane image is generated by arranging the extracted images, in the order of the imaging units, in the vertical direction orthogonal to the arrangement direction, at intervals according to the intervals of the imaging units (the intervals of the viewpoint positions). Likewise, the polarization epipolar plane image is generated by arranging the extracted images in the order of the viewpoint positions in the vertical direction orthogonal to the arrangement direction, at intervals according to the intervals of the viewpoint positions.
  • Similarly, from the polarization imaging images acquired by the plurality of imaging units arranged in the vertical direction, the polarization epipolar plane image generation unit 33 extracts, in the arrangement direction (the vertical direction), images that include an image indicating a desired position on the subject. The epipolar plane image is generated by arranging the extracted images, in the order of the imaging units, in the lateral direction orthogonal to the arrangement direction, at intervals according to the intervals of the imaging units (the intervals of the viewpoint positions), and the polarization epipolar plane image is generated by arranging the extracted images in the order of the viewpoint positions in the horizontal direction orthogonal to the arrangement direction, at intervals corresponding to the intervals of the viewpoint positions.
  • The polarization epipolar plane image generation unit 33 outputs, to the depth calculation unit 34, the polarization epipolar plane image (hereinafter "lateral polarization epipolar plane image") and the epipolar plane image (hereinafter "lateral epipolar plane image") whose epipolar lines are in the lateral direction, and the polarization epipolar plane image (hereinafter "longitudinal polarization epipolar plane image") and the epipolar plane image (hereinafter "longitudinal epipolar plane image") whose epipolar lines are in the longitudinal direction.
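  • The extraction described above amounts to stacking the same row (or column) from every viewpoint. A minimal sketch, assuming the views are already rectified and evenly spaced so that the stacking interval is uniform:

```python
import numpy as np

def make_lateral_epi(images, row):
    # images: views ordered by viewpoint position along the horizontal array.
    # Stacking the same image row in viewpoint order gives an epipolar plane
    # image whose epipolar lines run in the lateral direction.
    return np.stack([img[row, :] for img in images], axis=0)

def make_longitudinal_epi(images, col):
    # For vertically aligned viewpoints, place the same image column from
    # each view side by side in viewpoint order.
    return np.stack([img[:, col] for img in images], axis=1)
```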
  • The fourth embodiment differs from the first to third embodiments in the configuration of the depth calculation unit. The depth calculation unit according to the fourth embodiment calculates the depth based on the more reliable of the gradient calculated using the polarization imaging images acquired by the plurality of imaging units aligned in the horizontal direction and the gradient calculated using the polarization imaging images acquired by the plurality of imaging units aligned in the vertical direction.
  • FIG. 20 illustrates the configuration of the depth calculation unit according to the fourth embodiment.
  • The depth calculation unit 34 includes gradient calculation units 341H and 341V, a gradient selection unit 342, a depth conversion unit 343, a normal calculation unit 344, an integration processing unit 345, and a depth interpolation unit 346.
  • The gradient calculation unit 341H calculates the gradient of the depth calculation target pixel in the lateral epipolar plane image generated by the polarization epipolar plane image generation unit 33. The gradient calculation unit 341H includes a pixel classification unit 3411H, a texture gradient calculation unit 3412H, and a polarization gradient calculation unit 3413H.
  • The pixel classification unit 3411H classifies the pixels of the horizontal epipolar plane image into pixels having texture information and pixels not having texture information. The pixel classification unit 3411H calculates the texture determination value G, as described above, for each determination target pixel. When the texture determination value G is equal to or larger than the determination threshold, the pixel classification unit 3411H determines that the pixel has texture information; when the texture determination value G is smaller than the determination threshold, it determines that the pixel does not have texture information.
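  • A minimal sketch of this classification, assuming the texture determination value G is the magnitude of the derivative along the epipolar line (the precise definition of G is the one given earlier in the document) and th is the determination threshold:

```python
import numpy as np

def classify_pixels(epi_row, th):
    # Texture determination value G per pixel (here: |d/dx| of the luminance);
    # True means the pixel is treated as having texture information.
    g = np.abs(np.gradient(epi_row.astype(float)))
    return g >= th
```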
  • The texture gradient calculation unit 3412H calculates the gradient θth using texture information for the pixels classified by the pixel classification unit 3411H as having texture information, for example using the differential value calculated by the pixel classification unit 3411H, as described above. The texture gradient calculation unit 3412H outputs the calculated gradient θth and the reliability Rth to the gradient selection unit 342, using as the reliability Rth the dispersion obtained when the true-value gradient was calculated.
  • The polarization gradient calculation unit 3413H calculates the gradient using polarization information for the pixels classified by the pixel classification unit 3411H as not having texture information. As described above, the polarization gradient calculation unit 3413H calculates the waveform of the trigonometric function for each different combination of pixels located in the direction of a candidate gradient in the lateral polarization epipolar plane image, and takes the gradient with the smallest waveform difference as the true-value gradient θph. The polarization gradient calculation unit 3413H outputs the calculated gradient θph and the reliability Rph to the gradient selection unit 342, using as the reliability Rph the dispersion obtained when the true-value gradient θph was calculated.
  • The pixel classification unit 3411V classifies the pixels of the vertical epipolar plane image into pixels having texture information and pixels not having texture information. The pixel classification unit 3411V calculates the texture determination value G, as described above, for each determination target pixel. When the texture determination value G is equal to or larger than the determination threshold, the pixel classification unit 3411V determines that the pixel has texture information; when the texture determination value G is smaller than the determination threshold, it determines that the pixel does not have texture information.
  • The texture gradient calculation unit 3412V calculates the gradient θtv using texture information for the pixels classified by the pixel classification unit 3411V as having texture information, for example using the differential value calculated by the pixel classification unit 3411V, as described above. The texture gradient calculation unit 3412V outputs the calculated gradient θtv and the reliability Rtv to the gradient selection unit 342, using as the reliability Rtv the variance obtained when the true-value gradient was calculated.
  • The polarization gradient calculation unit 3413V calculates the gradient using polarization information for the pixels classified by the pixel classification unit 3411V as not having texture information. As described above, the polarization gradient calculation unit 3413V calculates the waveform of the trigonometric function for each different combination of pixels located in the direction of a candidate gradient in the longitudinal polarization epipolar plane image, and takes the gradient with the smallest waveform difference as the true-value gradient θpv. The polarization gradient calculation unit 3413V outputs the calculated gradient θpv and the reliability Rpv to the gradient selection unit 342, using as the reliability Rpv the dispersion obtained when the true-value gradient θpv was calculated.
  • Based on the gradients θth, θtv, θph, and θpv and the reliabilities Rth, Rtv, Rph, and Rpv obtained from the texture gradient calculation units 3412H and 3412V and the polarization gradient calculation units 3413H and 3413V, the gradient selection unit 342 selects the gradient of the depth calculation target pixel and outputs it to the depth conversion unit 343. The gradient selection unit 342 also outputs, to the normal calculation unit 344, the pixel values of the pixels located in the direction of an arbitrary gradient in the polarization epipolar plane image used for calculating the selected gradient. Further, the gradient selection unit 342 generates normal/gradient-uncalculated pixel information indicating whether the depth calculation target pixel is a normal/gradient-uncalculated pixel, and outputs it to the depth interpolation unit 346.
  • FIG. 21 is a diagram for explaining the gradient selection operation. For example, when the depth calculation target pixel is determined to be a pixel having texture information based on the horizontal epipolar plane image and the gradient θth is calculated based on texture information, and the pixel is also determined to have texture information based on the vertical epipolar plane image and the gradient θtv is calculated based on texture information, the gradient selection unit 342 compares the reliability Rth and the reliability Rtv, selects the gradient with the higher reliability, and outputs it to the depth conversion unit 343. When the reliability of the selected gradient is lower than the reliability threshold, the gradient selection unit 342 may invalidate the calculated gradient and set the depth calculation target pixel as a normal/gradient-uncalculated pixel.
  • When the depth calculation target pixel is determined not to have texture information based on the horizontal epipolar plane image and the gradient θph is calculated based on polarization information, and the pixel is also determined not to have texture information based on the vertical epipolar plane image and the gradient θpv is calculated based on polarization information, the gradient selection unit 342 compares the reliability Rph and the reliability Rpv, selects the gradient with the higher reliability, and outputs it to the depth conversion unit 343. When the reliability of the selected gradient is lower than the reliability threshold, the gradient selection unit 342 may invalidate the calculated gradient and set the depth calculation target pixel as a normal/gradient-uncalculated pixel.
  • When the depth calculation target pixel is determined to have texture information based on the epipolar plane image in one direction (for example, the horizontal direction) and the gradient (for example, θth) is calculated based on texture information, while it is determined not to have texture information based on the epipolar plane image in the other direction and the gradient (for example, θpv) is calculated based on polarization information, the gradient selection unit 342 regards the texture-based gradient as the more reliable one, selects the gradient calculated based on texture information (for example, θth), and outputs it to the depth conversion unit 343. When the reliability of the gradient calculated based on texture information is low, the gradient selection unit 342 may instead select the gradient calculated based on polarization information (for example, θpv) and output it to the depth conversion unit 343. Furthermore, when the gradient selection unit 342 selects the gradient calculated based on polarization information instead of the gradient calculated based on texture information and the reliability of that gradient is lower than the reliability threshold, it may invalidate the calculated gradient and set the depth calculation target pixel as a normal/gradient-uncalculated pixel.
  • When the gradient is calculated based on the epipolar plane image or the polarization epipolar plane image in only one direction, and no gradient is calculated based on the epipolar plane image or the polarization epipolar plane image in the other direction, the gradient selection unit 342 outputs the calculated gradient to the depth conversion unit 343. When the reliability of the calculated gradient is lower than the reliability threshold, the gradient selection unit 342 may set the depth calculation target pixel as a normal/gradient-uncalculated pixel.
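  • The selection rules above can be condensed as follows; this is a hypothetical sketch in which each candidate is None or a (gradient, source, reliability) tuple with source 'texture' or 'polarization', and r_min is the reliability threshold.

```python
def select_gradient(h, v, r_min):
    # h, v: candidates from the horizontal and vertical EPIs, each either
    # None or a (gradient, source, reliability) tuple.
    cands = [c for c in (h, v) if c is not None]
    if not cands:
        return None                              # nothing was calculated
    tex = [c for c in cands if c[1] == 'texture']
    best = max(tex if tex else cands, key=lambda c: c[2])
    if best[2] < r_min and tex and len(tex) < len(cands):
        # texture-based choice too unreliable: fall back to polarization
        best = max((c for c in cands if c[1] == 'polarization'),
                   key=lambda c: c[2])
    # still below threshold: invalidate (normal/gradient-uncalculated pixel)
    return best if best[2] >= r_min else None
```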
  • The depth conversion unit 343 converts the gradient selected by the gradient selection unit 342 into a depth as described above, and outputs the depth to the normal calculation unit 344 and the integration processing unit 345.
  • The normal calculation unit 344 calculates the normal for the normal/gradient-uncalculated pixels. Based on the pixel values (luminance) supplied from the gradient selection unit 342, the normal calculation unit 344 determines the azimuth angle φ at which the luminance is maximum. Further, the normal calculation unit 344 determines the zenith angle θz corresponding to the degree of polarization ρ calculated using the maximum luminance and the minimum luminance obtained from the relationship between luminance and polarization angle. When the gradient selected by the gradient selection unit 342 is the gradient θpv calculated by the gradient calculation unit 341V, the normal may be calculated by the same process as when calculating the azimuth angle φ and the zenith angle θz from the pixel values used by the gradient calculation unit 341H for calculating the gradient θph, with the coordinate axes rotated by 90 degrees. The normal calculation unit 344 generates normal information indicating the azimuth angle φ and the zenith angle θz calculated for the normal/gradient-uncalculated pixels, and outputs the generated normal information to the integration processing unit 345.
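  • A sketch of this normal computation, reusing fit_sinusoid from the gradient sketch earlier; rho_to_zenith stands for a precomputed numerical inverse of the degree-of-polarization model (which depends on the refractive index) and, like the other names, is an assumption for illustration.

```python
import numpy as np

def normal_from_polarization(pol_angles_deg, values, rho_to_zenith):
    # Fit I(v) = A*cos(2(v - phi)) + C to the pixel values sampled along the
    # selected gradient; the azimuth phi is where the luminance peaks, and
    # the zenith follows from the degree of polarization.
    amp, phi, c = fit_sinusoid(pol_angles_deg, values)
    i_max, i_min = c + amp, c - amp
    rho = (i_max - i_min) / (i_max + i_min)   # degree of polarization
    return phi, rho_to_zenith(rho)            # (azimuth, zenith theta_z)
```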
  • The integration processing unit 345 performs integration processing using the depth obtained by the depth conversion unit 343 and the normal calculated by the normal calculation unit 344 to calculate a high-resolution, high-accuracy depth, and outputs it to the depth interpolation unit 346.
  • The depth interpolation unit 346 calculates, by interpolation processing, the depth of the normal/gradient-uncalculated pixels indicated by the normal/gradient-uncalculated pixel information supplied from the gradient selection unit 342. The depth interpolation unit 346 performs the interpolation processing using the depths of pixels that are located around the normal/gradient-uncalculated pixel and for which the depth has been calculated by the integration processing unit 345.
  • FIG. 22 is a flowchart showing the operation of the fourth embodiment. In step ST41, the image processing apparatus performs pixel classification. The image processing device 30 calculates the texture determination value using the pixel values and, based on the calculated texture determination value, determines for each of the horizontal epipolar plane image and the vertical epipolar plane image whether each pixel has texture information. Then, the process proceeds to step ST42.
  • In step ST42, the image processing apparatus calculates the gradient using texture information. The image processing device 30 calculates the gradient and the reliability using texture information for the pixels determined to have texture information in each of the horizontal epipolar plane image and the vertical epipolar plane image, and proceeds to step ST43.
  • In step ST43, the image processing apparatus calculates the gradient using polarization information. The image processing device 30 calculates the gradient and the reliability using polarization information for the pixels determined not to have texture information in each of the horizontal polarization epipolar plane image and the vertical polarization epipolar plane image, and proceeds to step ST44.
  • In step ST44, the image processing apparatus selects a gradient. The image processing device 30 selects the gradient to be used for calculating the depth from the gradients calculated in step ST42 and step ST43, and proceeds to step ST45.
  • In step ST45, the image processing apparatus generates normal/gradient-uncalculated pixel information. The image processing device 30 determines the normal/gradient-uncalculated pixels based on the gradient calculation results of step ST42 and step ST43, the reliability of the gradient selected in step ST44, and the amplitude and phase of the trigonometric-function waveform used for calculating the gradient. Based on the determination result, it generates the normal/gradient-uncalculated pixel information and proceeds to step ST46.
  • In step ST46, the image processing apparatus calculates the depth. The image processing device 30 calculates the depth based on the gradient selected in step ST44, or based on the gradient that was selected in step ST44 and whose reliability is equal to or higher than the preset reliability threshold, and proceeds to step ST47.
  • In step ST47, the image processing apparatus generates normal information. The image processing device 30 calculates the normals of the normal/gradient-uncalculated pixels and generates normal information indicating the calculated normals. For a pixel whose gradient selected in step ST44 was calculated using the polarization epipolar plane image, whose gradient reliability is lower than the reliability threshold, and for which the amplitude and phase of the trigonometric-function waveform are constant regardless of the gradient, that is, a normal/gradient-uncalculated pixel, the normal is calculated using the pixel values of the pixels located in the direction of an arbitrary gradient in the polarization epipolar plane image used for calculating the selected gradient. Normal information indicating the calculated normal is generated, and the process proceeds to step ST48.
  • In step ST48, the image processing apparatus performs integration processing. The image processing device 30 performs integration processing using the depth obtained in pixel units and the normal indicated by the normal information, calculates a depth at a resolution higher than before the integration processing, and proceeds to step ST49.
  • In step ST49, the image processing apparatus performs depth interpolation processing. The image processing device 30 calculates the depth of the normal/gradient-uncalculated pixels indicated by the normal/gradient-uncalculated pixel information generated in step ST45 by interpolation processing using the depths of neighboring pixels.
  • According to the fourth embodiment, it is possible to calculate a high-accuracy, high-reliability depth using a plurality of polarization imaging images obtained with viewpoint positions arranged two-dimensionally.
  • Although the imaging system 10 has illustrated the case where a plurality of imaging units with different polarization directions are arranged linearly, a single imaging unit may instead be moved while the polarization direction of its polarizing element is switched. In this case, the imaging unit is sequentially moved to the positions of the imaging units 20-1 to 20-6 shown in FIG. 1, the polarization direction of the polarizing element 21 is switched according to the movement of the imaging unit, and the motionless subjects OBa and OBb are imaged.
  • In this case, external parameters corresponding to the movement of the imaging unit are used. When an imaging system is configured in this manner, a plurality of polarization imaging images having different polarization directions and viewpoint positions can be acquired by moving one imaging unit in the horizontal direction or the vertical direction, without using a plurality of imaging units, and the depth can be calculated with high resolution and high accuracy.
  • The technology according to the present disclosure can be applied to various products. For example, the technology according to the present disclosure may be realized as a device mounted on any type of mobile object, such as an automobile, an electric vehicle, a hybrid electric vehicle, a motorcycle, a bicycle, a personal mobility device, an airplane, a drone, a ship, or a robot.
  • FIG. 23 is a block diagram showing a schematic configuration example of a vehicle control system that is an example of a mobile control system to which the technology according to the present disclosure can be applied.
  • Vehicle control system 12000 includes a plurality of electronic control units connected via communication network 12001.
  • the vehicle control system 12000 includes a drive system control unit 12010, a body system control unit 12020, an external information detection unit 12030, an in-vehicle information detection unit 12040, and an integrated control unit 12050.
  • a microcomputer 12051, an audio image output unit 12052, and an in-vehicle network I / F (Interface) 12053 are illustrated as a functional configuration of the integrated control unit 12050.
  • The drive system control unit 12010 controls the operation of devices related to the drive system of the vehicle according to various programs. For example, the drive system control unit 12010 functions as a control device of a driving force generation device for generating the driving force of the vehicle, such as an internal combustion engine or a driving motor, a driving force transmission mechanism for transmitting the driving force to the wheels, a steering mechanism for adjusting the steering angle of the vehicle, a braking device for generating the braking force of the vehicle, and the like.
  • The body system control unit 12020 controls the operation of various devices equipped on the vehicle body according to various programs. For example, the body system control unit 12020 functions as a control device of a keyless entry system, a smart key system, a power window device, or various lamps such as headlamps, back lamps, brake lamps, blinkers, or fog lamps. In this case, radio waves transmitted from a portable device substituting for a key, or signals of various switches, can be input to the body system control unit 12020. The body system control unit 12020 receives the input of these radio waves or signals, and controls the door lock device, the power window device, the lamps, and the like of the vehicle.
  • The out-of-vehicle information detection unit 12030 detects information outside the vehicle on which the vehicle control system 12000 is mounted. For example, an imaging unit 12031 is connected to the out-of-vehicle information detection unit 12030. The out-of-vehicle information detection unit 12030 causes the imaging unit 12031 to capture an image of the outside of the vehicle and receives the captured image. The out-of-vehicle information detection unit 12030 may perform object detection processing or distance detection processing for persons, vehicles, obstacles, signs, characters on the road surface, or the like based on the received image.
  • The imaging unit 12031 is an optical sensor that receives light and outputs an electric signal corresponding to the amount of received light. The imaging unit 12031 can output the electric signal as an image or as distance measurement information. The light received by the imaging unit 12031 may be visible light or invisible light such as infrared rays.
  • The in-vehicle information detection unit 12040 detects information inside the vehicle. For example, a driver state detection unit 12041 that detects the state of the driver is connected to the in-vehicle information detection unit 12040. The driver state detection unit 12041 includes, for example, a camera that images the driver, and the in-vehicle information detection unit 12040 may calculate the degree of fatigue or concentration of the driver, or determine whether the driver is dozing off, based on the detection information input from the driver state detection unit 12041.
  • The microcomputer 12051 can calculate a control target value of the driving force generation device, the steering mechanism, or the braking device based on the information inside and outside the vehicle acquired by the out-of-vehicle information detection unit 12030 or the in-vehicle information detection unit 12040, and output a control command to the drive system control unit 12010. For example, the microcomputer 12051 can perform cooperative control for the purpose of automated driving or the like, in which the vehicle travels autonomously without depending on the driver's operation, by controlling the driving force generation device, the steering mechanism, or the braking device based on the information around the vehicle acquired by the out-of-vehicle information detection unit 12030 or the in-vehicle information detection unit 12040.
  • The microcomputer 12051 can also output a control command to the body system control unit 12020 based on the information outside the vehicle acquired by the out-of-vehicle information detection unit 12030. For example, the microcomputer 12051 can perform cooperative control for anti-glare purposes, such as switching from high beam to low beam, by controlling the headlamps according to the position of a preceding vehicle or an oncoming vehicle detected by the out-of-vehicle information detection unit 12030.
  • The audio image output unit 12052 transmits an output signal of at least one of audio and image to an output device capable of visually or audibly notifying information to a passenger of the vehicle or to the outside of the vehicle. An audio speaker 12061, a display unit 12062, and an instrument panel 12063 are illustrated as output devices. The display unit 12062 may include, for example, at least one of an on-board display and a head-up display.
  • FIG. 24 is a diagram illustrating an example of the installation position of the imaging unit 12031.
  • imaging units 12101, 12102, 12103, 12104, and 12105 are provided as the imaging unit 12031.
  • the imaging units 12101, 12102, 12103, 12104, and 12105 are provided, for example, on the front nose of the vehicle 12100, a side mirror, a rear bumper, a back door, an upper portion of a windshield of a vehicle interior, and the like.
  • the imaging unit 12101 provided in the front nose and the imaging unit 12105 provided in the upper part of the windshield in the vehicle cabin mainly acquire an image in front of the vehicle 12100.
  • the imaging units 12102 and 12103 included in the side mirror mainly acquire an image of the side of the vehicle 12100.
  • the imaging unit 12104 provided in the rear bumper or the back door mainly acquires an image of the rear of the vehicle 12100.
  • the imaging unit 12105 provided on the top of the windshield in the passenger compartment is mainly used to detect a leading vehicle or a pedestrian, an obstacle, a traffic light, a traffic sign, a lane, or the like.
  • FIG. 24 shows an example of the imaging range of the imaging units 12101 to 12104.
  • The imaging range 12111 indicates the imaging range of the imaging unit 12101 provided on the front nose, the imaging ranges 12112 and 12113 indicate the imaging ranges of the imaging units 12102 and 12103 provided on the side mirrors, and the imaging range 12114 indicates the imaging range of the imaging unit 12104 provided on the rear bumper or the back door. For example, by superimposing the image data captured by the imaging units 12101 to 12104, a bird's-eye view image of the vehicle 12100 viewed from above can be obtained.
  • At least one of the imaging units 12101 to 12104 may have a function of obtaining distance information. For example, at least one of the imaging units 12101 to 12104 may be a stereo camera including a plurality of imaging elements, or an imaging element having pixels for phase difference detection.
  • For example, based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 obtains the distance to each three-dimensional object in the imaging ranges 12111 to 12114 and the temporal change of this distance (relative velocity with respect to the vehicle 12100), thereby extracting, as a preceding vehicle, in particular the nearest three-dimensional object on the traveling path of the vehicle 12100 that travels at a predetermined speed (for example, 0 km/h or more) in substantially the same direction as the vehicle 12100. Further, the microcomputer 12051 can set in advance an inter-vehicle distance to be secured from the preceding vehicle, and perform automatic brake control (including follow-up stop control), automatic acceleration control (including follow-up start control), and the like. In this way, cooperative control can be performed for the purpose of automated driving or the like, in which the vehicle travels autonomously without depending on the driver's operation.
  • For example, based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 can classify three-dimensional object data relating to three-dimensional objects into two-wheeled vehicles, ordinary vehicles, large vehicles, pedestrians, and other three-dimensional objects such as utility poles, extract them, and use them for automatic avoidance of obstacles. For example, the microcomputer 12051 distinguishes obstacles around the vehicle 12100 into obstacles that are visible to the driver of the vehicle 12100 and obstacles that are difficult to see. Then, the microcomputer 12051 determines a collision risk indicating the degree of risk of collision with each obstacle, and in a situation where the collision risk is equal to or higher than a set value and there is a possibility of collision, it can perform driving assistance for collision avoidance by outputting a warning to the driver via the audio speaker 12061 or the display unit 12062, or by performing forced deceleration or avoidance steering via the drive system control unit 12010.
  • At least one of the imaging units 12101 to 12104 may be an infrared camera that detects infrared rays. For example, the microcomputer 12051 can recognize a pedestrian by determining whether a pedestrian is present in the images captured by the imaging units 12101 to 12104. Such pedestrian recognition is performed, for example, by a procedure of extracting feature points in the images captured by the imaging units 12101 to 12104 as infrared cameras, and a procedure of performing pattern matching processing on a series of feature points indicating the contour of an object to determine whether or not it is a pedestrian. When the microcomputer 12051 determines that a pedestrian is present in the images captured by the imaging units 12101 to 12104 and recognizes the pedestrian, the audio image output unit 12052 controls the display unit 12062 so that a rectangular contour line for emphasis is superimposed on the recognized pedestrian. The audio image output unit 12052 may also control the display unit 12062 so that an icon or the like indicating the pedestrian is displayed at a desired position.
  • Among the configurations described above, the image processing apparatus according to the present disclosure may be applied to the out-of-vehicle information detection unit 12030, and the imaging apparatus according to the present disclosure may be applied to the imaging units 12101, 12102, 12103, 12104, 12105, and the like. For example, if a plurality of polarization imaging images having different polarization directions and viewpoint positions are acquired by the imaging unit 12101 provided on the front nose and the imaging unit 12105 provided in the upper part of the windshield in the vehicle interior, the out-of-vehicle information detection unit 12030 can accurately calculate, with high resolution, the distances to a preceding vehicle, a pedestrian ahead, and obstacles.
  • Also, if a plurality of polarization imaging images having different polarization directions and viewpoint positions are acquired by the imaging units 12102 and 12103 provided on the side mirrors, the out-of-vehicle information detection unit 12030 can accurately calculate, with high resolution, the distance to an object located on the side, for example when performing parallel parking. The imaging units 12102 and 12103 may also perform imaging while switching the polarization direction according to the movement of the vehicle to acquire a plurality of polarization imaging images having different viewpoint positions and polarization directions, making it possible to accurately calculate, with high resolution, the distance to an object located on the side simply by passing in front of a parking space. Similarly, the out-of-vehicle information detection unit 12030 can accurately calculate, with high resolution, the distance to a following vehicle, or the distance to an object located behind when backing up for perpendicular parking.
  • The series of processes described in this specification can be executed by hardware, software, or a combination of both. When the processes are executed by software, a program recording the processing sequence is installed in a memory of a computer incorporated in dedicated hardware and executed, or the program is installed and executed on a general-purpose computer capable of executing various processes.
  • For example, the program can be recorded in advance on a hard disk, an SSD (Solid State Drive), or a ROM (Read Only Memory) as a recording medium. Alternatively, the program can be stored (recorded) temporarily or permanently on a removable recording medium such as a flexible disk, a CD-ROM (Compact Disc Read Only Memory), an MO (Magneto Optical) disc, a DVD (Digital Versatile Disc), a BD (Blu-ray Disc (registered trademark)), a magnetic disk, or a semiconductor memory card. Such removable recording media can be provided as so-called package software. In addition to being installed on a computer from a removable recording medium, the program may be transferred from a download site to a computer wirelessly or by wire via a network such as a LAN (Local Area Network) or the Internet. The computer can receive the program transferred in this manner and install it on a recording medium such as a built-in hard disk.
  • The effects described in this specification are merely examples and are not limiting; there may be additional effects that are not described. The present technology should not be construed as being limited to the embodiments described above. The embodiments disclose the present technology in the form of examples, and it is obvious that those skilled in the art can make modifications or substitutions of the embodiments without departing from the gist of the present technology. That is, the claims should be taken into consideration in order to determine the gist of the present technology.
  • The image processing apparatus of the present technology can also have the following configurations.
  • (1) An image processing apparatus including a depth calculation unit that calculates the depth of a depth calculation target pixel based on polarization epipolar plane images generated from a plurality of polarization imaging images having different polarization directions and viewpoint positions.
  • (2) The image processing apparatus according to (1), in which the depth calculation unit includes a polarization gradient calculation unit that calculates the gradient of the depth calculation target pixel in the polarization epipolar plane image, and a depth conversion unit that converts the gradient calculated by the polarization gradient calculation unit into a depth.
  • (5) The image processing apparatus according to (4), in which four or more polarization directions are used within a range of angle differences of less than 180 degrees.
  • (6) The image processing apparatus according to any one of (2) to (5), further including: a normal calculation unit that calculates the normal of the depth calculation target pixel using pixels located in the direction of the gradient calculated by the polarization gradient calculation unit in the polarization epipolar plane image; and an integration processing unit that performs integration processing using the depth obtained by the depth conversion unit and the normal calculated by the normal calculation unit to obtain a depth that is denser and of higher resolution than the pixel unit.
  • (7) The image processing apparatus according to any one of (2) to (6), further including a depth interpolation unit that calculates the depth of the depth calculation target pixel by interpolation processing using the depths of pixels adjacent to the depth calculation target pixel.
  • (8) The image processing apparatus according to any one of (2) to (7), further including a normal calculation unit that calculates a normal using pixels located in the direction of an arbitrary gradient in the polarization epipolar plane image, in which the normal calculation unit calculates the normal of a depth calculation target pixel for which the gradient cannot be calculated, using pixels located in the direction of an arbitrary gradient from that pixel.
  • (9) The image processing apparatus according to any one of (2) to (8), in which the depth calculation unit further includes: a pixel classification unit that classifies the depth calculation target pixel as a pixel having texture information or a pixel not having texture information, based on an epipolar plane image generated from the plurality of polarization imaging images; and a texture gradient calculation unit that calculates the gradient of the depth calculation target pixel in the epipolar plane image, the texture gradient calculation unit calculating the gradient of pixels having texture information based on the epipolar plane image, and the polarization gradient calculation unit calculating the gradient of pixels not having texture information based on the polarization epipolar plane image.
  • (10) The image processing apparatus according to (9), in which the depth calculation unit further includes a gradient selection unit that selects one of the gradients of the depth calculation target pixel calculated by the polarization gradient calculation unit or the texture gradient calculation unit, based on a first polarization epipolar plane image or a first epipolar plane image generated from a plurality of polarization imaging images having different viewpoint positions in a first direction, and a second polarization epipolar plane image or a second epipolar plane image generated from a plurality of polarization imaging images having different viewpoint positions in a second direction different from the first direction.
  • (11) The image processing apparatus according to (10), in which, when one of the gradients is a gradient calculated by the texture gradient calculation unit and the other gradient is a gradient calculated by the polarization gradient calculation unit, the gradient selection unit selects the gradient calculated by the texture gradient calculation unit.
  • (12) The image processing apparatus according to (10), in which the gradient selection unit selects the gradient with the higher reliability when both gradients are calculated by the texture gradient calculation unit or both by the polarization gradient calculation unit.
  • As described above, according to this technology, the depth of the depth calculation target pixel is calculated based on polarization epipolar plane images generated from a plurality of polarization imaging images having different polarization directions and viewpoint positions. This makes it possible to calculate the depth based on polarization information even in an area without texture. The technology is therefore suitable for apparatuses that use a function of acquiring the distance to a subject from captured images, for example an electronic apparatus mounted on a mobile object such as an automobile.

Abstract

An imaging system (10) uses a plurality of imaging devices (20-1 to 20-6) and polarizing elements (21-1 to 21-6) to acquire a plurality of polarization images having different polarization directions and viewpoint positions. An image processing device (30) classifies pixels subject to depth calculation into pixels having texture information and pixels not having texture information, based on an epipolar plane image generated from the plurality of polarization images. The epipolar plane image is used to calculate the gradients of pixels having texture information. A polarization epipolar plane image is used to calculate the gradients of pixels not having texture information. Converting the calculated gradients into depths makes it possible to calculate not only the depths of pixels that have texture information but also the depths of pixels that do not.
PCT/JP2018/019088 2017-07-28 2018-05-17 Dispositif de traitement d'images, procédé de traitement d'images, programme, et système de traitement d'images WO2019021591A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017-146331 2017-07-28
JP2017146331 2017-07-28

Publications (1)

Publication Number Publication Date
WO2019021591A1 (fr)

Family

ID=65039657

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/019088 WO2019021591A1 (fr) 2017-07-28 2018-05-17 Dispositif de traitement d'images, procédé de traitement d'images, programme, et système de traitement d'images

Country Status (1)

Country Link
WO (1) WO2019021591A1 (fr)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012247356A (ja) * 2011-05-30 2012-12-13 Canon Inc 撮像モジュール、撮像装置、画像処理装置及び画像処理方法。
JP2013044827A (ja) * 2011-08-23 2013-03-04 Sharp Corp 撮像装置
JP2014199241A (ja) * 2012-07-23 2014-10-23 株式会社リコー ステレオカメラ
JP2015114307A (ja) * 2013-12-16 2015-06-22 ソニー株式会社 画像処理装置と画像処理方法および撮像装置
JP2016081088A (ja) * 2014-10-09 2016-05-16 キヤノン株式会社 画像処理装置、画像処理方法及びプログラム

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021010067A1 (fr) * 2019-07-17 2021-01-21 ソニー株式会社 Dispositif, procédé et programme de traitement d'informations
CN112862880A (zh) * 2019-11-12 2021-05-28 Oppo广东移动通信有限公司 深度信息获取方法、装置、电子设备和存储介质
CN114930800A (zh) * 2020-01-09 2022-08-19 索尼集团公司 图像处理装置、图像处理方法和成像装置
CN114930800B (zh) * 2020-01-09 2024-05-28 索尼集团公司 图像处理装置、图像处理方法和成像装置
CN113074661A (zh) * 2021-03-26 2021-07-06 华中科技大学 基于极线采样的投影仪对应点高精度匹配方法及其应用
CN113074661B (zh) * 2021-03-26 2022-02-18 华中科技大学 基于极线采样的投影仪对应点高精度匹配方法及其应用

Legal Events

121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 18839451; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 18839451; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: JP)