WO2020054429A1 - Image processing device - Google Patents

Image processing device

Info

Publication number
WO2020054429A1
Authority
WO
WIPO (PCT)
Prior art keywords
parallax
image processing
stereo camera
image
unit
Prior art date
Application number
PCT/JP2019/033762
Other languages
French (fr)
Japanese (ja)
Inventor
Yusuke Uchida
Keisuke Inada
Shinichi Nonaka
Original Assignee
Hitachi Automotive Systems, Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi Automotive Systems, Ltd.
Priority to CN201980049733.5A (published as CN112513572B)
Publication of WO2020054429A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C3/00Measuring distances in line of sight; Optical rangefinders
    • G01C3/02Details
    • G01C3/06Use of electric means to obtain final indication
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/593Depth or shape recovery from multiple images from stereo images

Definitions

  • the present invention relates to an image processing device.
  • There is known a position measuring device that measures the distance to an object in the camera's field of view by the principle of triangulation, based on a stereo image captured by a stereo camera having a pair of cameras.
  • The principle of triangulation is to calculate the distance from the cameras to an object using the displacement (parallax) between the images of the same object captured by the left and right cameras.
  • The parallax is derived by identifying where the image of an object in one image appears in the other image.
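  • The triangulation relation described above can be sketched as follows. This is an illustrative Python sketch, not code from the patent; the focal length, baseline, and parallax values are assumed for the example.

```python
def disparity_to_distance(disparity_px, focal_length_px, baseline_m):
    """Triangulation: distance Z = f * B / d, where d is the parallax
    (disparity) in pixels, f the focal length in pixels, and B the
    baseline between the left and right cameras in metres."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite distance")
    return focal_length_px * baseline_m / disparity_px

# A nearby object yields a large parallax, a distant one a small parallax
# (f = 800 px and B = 0.35 m are assumed example values):
near = disparity_to_distance(64.0, focal_length_px=800.0, baseline_m=0.35)
far = disparity_to_distance(4.0, focal_length_px=800.0, baseline_m=0.35)
print(near, far)  # 4.375 70.0
```

Note how the distance grows as the parallax shrinks, which is why small parallax values correspond to far-away objects throughout this document.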
  • Patent Literature 1 discloses an obstacle measurement method that measures a distance to an obstacle by parallax calculation using a stereo image. The method measures the parallax in a high-resolution image based on distance information obtained from a low-resolution image.
  • An obstacle measurement method is disclosed in which a sampling interval of a pixel to be subjected to a correlation operation is set to be large at a short distance and small at a long distance.
  • According to Patent Document 1, by setting an appropriate sampling interval, high-accuracy distance information can be obtained only in the necessary areas while unnecessary calculation is suppressed, which makes it possible to reduce both the operation time and the circuit scale of the device. However, since parallax must be calculated from both the low-resolution image and the high-resolution image, the overall amount of calculation cannot be reduced much, and there is room for further improvement in suppressing it.
  • An image processing apparatus according to the present invention calculates parallax based on a stereo camera image captured by a stereo camera, and includes a condition setting unit that sets a condition for determining calculation target pixels in the stereo camera image, and a parallax calculation unit that determines the calculation target pixels in the stereo camera image based on the condition set by the condition setting unit and calculates the parallax using information on those pixels.
  • the amount of calculation required for calculating parallax can be suppressed.
  • FIG. 1 is a block diagram illustrating a basic configuration of an image processing apparatus according to an embodiment of the present invention.
  • FIG. 2 is a block diagram illustrating a configuration of an image processing apparatus according to a first embodiment of the present invention.
  • FIG. 3 is a flowchart illustrating an algorithm of parallax calculation in the image processing apparatus according to the first embodiment of the present invention.
  • FIG. 4 is a block diagram illustrating a configuration of an image processing apparatus according to a second embodiment of the present invention.
  • FIG. 5 is a flowchart illustrating an algorithm of parallax calculation in the image processing apparatus according to the second embodiment of the present invention.
  • FIG. 6 is a block diagram illustrating a configuration of an image processing apparatus according to a third embodiment of the present invention.
  • FIG. 7 is a diagram showing an example of determining the number of scan lines in the image processing apparatus according to the third embodiment of the present invention.
  • FIG. 8 is a block diagram illustrating a configuration of an image processing apparatus according to a fourth embodiment of the present invention.
  • FIG. 9 is a block diagram illustrating a configuration of an image processing apparatus according to a fifth embodiment of the present invention.
  • FIG. 10 is an explanatory diagram of the pixel thinning used for the smoothing calculation in the image processing apparatus according to the fifth embodiment of the present invention.
  • FIG. 11 is an explanatory diagram of the pixel arrangement used for the smoothing calculation in the image processing apparatus according to the fifth embodiment of the present invention.
  • FIG. 12 is a block diagram illustrating a configuration of an image processing apparatus according to a sixth embodiment of the present invention.
  • FIG. 1 is a block diagram illustrating a basic configuration of an image processing apparatus according to an embodiment of the present invention.
  • the image processing apparatus 10 illustrated in FIG. 1 is connected to the camera 20 and the processor 30, and includes a condition setting unit 101 and a parallax calculation unit 102.
  • the image processing device 10 is mounted on a vehicle, for example, and is used for detecting and recognizing an object such as another vehicle or an obstacle existing around the vehicle.
  • the camera 20 is a stereo camera installed at predetermined intervals on the left and right sides, captures a pair of left and right stereo camera images, and outputs the captured data to the image processing apparatus 10.
  • the image processing apparatus 10 calculates parallax in a stereo camera image using the parallax calculation unit 102 based on the shooting data input from the camera 20.
  • The parallax calculation unit 102 determines the calculation target pixels in the stereo camera image based on the condition set by the condition setting unit 101, and calculates the parallax from the information on those pixels using, for example, a calculation method called SGM (Semi-Global Matching).
  • the calculation of the parallax may be performed using a calculation method other than the SGM.
  • the calculation result of the parallax by the parallax calculation unit 102 is output from the image processing device 10 to the processor 30.
  • the processor 30 performs processing such as object detection and object recognition using the information on the parallax input from the image processing apparatus 10.
  • FIG. 2 is a block diagram illustrating a configuration of the image processing apparatus according to the first embodiment of the present invention.
  • the image processing apparatus 10 includes an FPGA 11 and a memory 12.
  • The FPGA 11 is an arithmetic processing device configured by combining a large number of logic circuits, and has, as its functions, the condition setting unit 101 and the parallax calculation unit 102 described with reference to FIG. 1. Note that the condition setting unit 101 and the parallax calculation unit 102 may be implemented in the image processing apparatus 10 using something other than the FPGA 11, for example, another logic circuit such as an ASIC, or software executed by a CPU.
  • the memory 12 is a storage device configured using a readable and writable recording medium such as a RAM, an HDD, and a flash memory, and has a stereo camera captured image storage unit 103 and a parallax image storage unit 104 as its functions.
  • the image data representing the stereo camera image input from the camera 20 to the image processing apparatus 10 is stored in the stereo camera image storage unit 103 in the memory 12 on a frame basis.
  • The parallax calculation unit 102 in the FPGA 11 reads the pixel data of the stereo camera image of each frame from the stereo camera captured image storage unit 103 and performs the parallax calculation on the read data, thereby generating, for each frame, a parallax image that represents the parallax in pixel units.
  • The parallax image generated by the parallax calculation unit 102 is output to the processor 30 and stored in the parallax image storage unit 104 in the memory 12.
  • the parallax calculation unit 102 performs smoothing calculation and dissimilarity calculation in the calculation of parallax using a stereo camera image.
  • The smoothing calculation is arithmetic processing that smooths (averages) the pixel value of each pixel in a stereo camera image together with the pixel values of a plurality of surrounding pixels, thereby smoothing out fluctuations in pixel values between pixels.
  • The dissimilarity calculation is arithmetic processing that obtains the dissimilarity, i.e., a measure of how unlikely it is that the object shown in one of the stereo camera images and the object shown in the other image are the same thing; in other words, how far apart the two image patches are.
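  • As a rough illustration of the two operations just described (not the SGM implementation used by the parallax calculation unit 102), the following Python sketch smooths one image row and scores candidate disparities with a sum-of-absolute-differences dissimilarity; all data values are made up for the example.

```python
def smooth(row, radius=1):
    """Smoothing: average each pixel with its neighbours within `radius`."""
    out = []
    for i in range(len(row)):
        lo, hi = max(0, i - radius), min(len(row), i + radius + 1)
        out.append(sum(row[lo:hi]) / (hi - lo))
    return out

def dissimilarity(left, right, x, d, window=1):
    """Dissimilarity: sum of absolute differences between a small window
    around pixel x in the left row and the window shifted by disparity d
    in the right row (0 means the two patches look identical)."""
    total = 0
    for k in range(-window, window + 1):
        xl, xr = x + k, x + k - d
        if 0 <= xl < len(left) and 0 <= xr < len(right):
            total += abs(left[xl] - right[xr])
    return total

def best_disparity(left, right, x, max_d=8):
    """Pick the candidate disparity with the lowest dissimilarity."""
    return min(range(max_d + 1), key=lambda d: dissimilarity(left, right, x, d))

# The bright blob in the right row sits 3 px to the left of where it
# sits in the left row, so the recovered disparity at x = 6 is 3:
left = [0, 0, 0, 0, 10, 50, 90, 50, 10, 0, 0, 0]
right = [0, 10, 50, 90, 50, 10, 0, 0, 0, 0, 0, 0]
print(best_disparity(left, right, x=6, max_d=5))  # 3
```

A real implementation such as SGM aggregates these per-pixel dissimilarities along multiple scan lines instead of taking the minimum independently per pixel; this sketch only shows the two building blocks the text names.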
  • the condition setting unit 101 has the line number determination unit 111.
  • The line number determination unit 111 reads from the parallax image storage unit 104 the parallax image that the parallax calculation unit 102 generated from the stereo camera image of the previous frame, and, based on that parallax image, determines the number of scan lines used in the smoothing process executed by the parallax calculation unit 102. That is, the line number determination unit 111 of the present embodiment determines the number of scan lines for scanning the stereo camera image of the current frame based on the parallax calculated in the past by the parallax calculation unit 102, and notifies the parallax calculation unit 102 of the determined number of scan lines.
  • the condition setting unit 101 in the present embodiment performs the condition setting for the parallax calculation unit 102 by notifying the number of scan lines determined by the line number determination unit 111 in this manner.
  • For each pixel of the stereo camera image of the current frame read from the stereo camera captured image storage unit 103, the parallax calculation unit 102 sets the notified number of scan lines. It then performs the smoothing calculation using the pixels of the stereo camera image on each set scan line as the calculation target pixels. Thereafter, the dissimilarity is calculated for each pixel of the smoothed stereo camera image, and the parallax of each pixel is calculated based on the calculation result.
  • The parallax calculation unit 102 outputs the per-pixel parallax calculation result thus obtained to the processor 30 as a parallax image, and also stores the parallax image in the parallax image storage unit 104.
  • FIG. 3 is a flowchart showing an algorithm for calculating parallax in the image processing apparatus according to the first embodiment of the present invention.
  • the parallax calculation shown in the flowchart of FIG. 3 is performed in the FPGA 11 when the stereo camera image newly captured by the camera 20 is stored in the stereo camera captured image storage unit 103 as the stereo camera image of the current frame.
  • In step S10, the parallax calculation unit 102 reads the stereo camera image of the current frame from the stereo camera captured image storage unit 103, and sequentially selects pixels starting from the edge of the screen.
  • In step S20, the line number determination unit 111 reads from the parallax image storage unit 104 the parallax of the pixel selected in step S10, out of the parallax image that the parallax calculation unit 102 calculated for the previous frame, and refers to the read parallax.
  • step S30 the number-of-lines determining unit 111 determines whether the parallax referred to in step S20 is larger than a predetermined threshold dth.
  • When the parallax is larger than the threshold dth, the process proceeds to step S40; when the parallax is equal to or smaller than the threshold dth, the process proceeds to step S50.
  • step S40 the line number determination unit 111 determines that the pixel selected in step S10 corresponds to a short-distance image, and sets a small number of scan lines for the pixel.
  • the number of scan lines is determined to be four.
  • step S50 the line number determination unit 111 determines that the pixel selected in step S10 corresponds to a long-distance image, and sets a large number of scan lines for the pixel.
  • the number of scan lines is determined to be eight.
  • In step S60, the parallax calculation unit 102 calculates the parallax of the pixel selected in step S10 based on the determined number of scan lines.
  • the smoothing calculation based on the determined number of scan lines is performed on the pixel, and the dissimilarity is optimized to calculate the parallax. Thereby, the parallax in the stereo camera image of the current frame is calculated for each pixel.
  • In step S70, the parallax calculation unit 102 determines whether the calculation for all pixels has been completed. If there is a pixel for which the calculation has not yet been performed, the process returns to step S10 to select the next pixel, and the above processing is repeated. When the calculation for all pixels is completed, the obtained parallax image of the current frame is stored in the parallax image storage unit 104, and the parallax calculation shown in the flowchart of FIG. 3 ends.
  • the number of scan lines to be set has been exemplified as four and eight, respectively, but these are only examples and do not limit the present invention.
  • A plurality of values may be set as the threshold dth used in the determination in step S30 and compared with the parallax obtained in the previous frame, so that three or more different numbers of scan lines may be set. In any case, it is preferable to judge that the smaller the value of the parallax obtained in the previous frame, the longer the distance to the image, and to increase the number of scan lines accordingly.
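  • The per-pixel decision of steps S30 to S50 can be sketched as follows; the threshold and line counts are the example values used above (4 and 8), and the function names are hypothetical.

```python
D_TH = 16.0       # threshold dth (assumed example value)
LINES_NEAR = 4    # scan lines for near pixels (parallax > dth), as in step S40
LINES_FAR = 8     # scan lines for far pixels (parallax <= dth), as in step S50

def scan_lines_for_pixel(prev_parallax):
    """A large previous-frame parallax means a nearby object, which gets
    fewer scan lines; a small parallax means a distant object, which gets
    more scan lines for a more accurate calculation."""
    return LINES_NEAR if prev_parallax > D_TH else LINES_FAR

def scan_line_map(prev_parallax_image):
    """Apply the rule to every pixel of the previous frame's parallax image."""
    return [[scan_lines_for_pixel(d) for d in row] for row in prev_parallax_image]

prev = [[32.0, 8.0],
        [20.0, 16.0]]
print(scan_line_map(prev))  # [[4, 8], [4, 8]]
```

Adding further thresholds to `scan_lines_for_pixel` would give the three-or-more-level variant mentioned above.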
  • FIG. 4 is a block diagram showing a configuration of an image processing device according to the second embodiment of the present invention.
  • The image processing apparatus 10 according to the present embodiment differs from the first embodiment described with reference to FIG. 2 in that the memory 12 does not include the parallax image storage unit 104, and in that area information output from the processor 30 is input to the line number determination unit 111.
  • This area information is set for each object based on the result of the object detection performed by the processor 30; it indicates which area in the stereo camera image each object corresponds to, and whether the distance of that object is classified into the short distance range or the long distance range.
  • The line number determination unit 111 receives the area information that the processor 30 outputs according to the parallax calculated from the stereo camera image of the previous frame, and determines the number of scan lines used in the smoothing process performed by the parallax calculation unit 102 based on the distance-range information for each area. That is, the line number determination unit 111 of the present embodiment acquires information on the distance range determined based on the parallax calculated in the past by the parallax calculation unit 102, and determines the number of scan lines based on the acquired information. The other points are the same as in the first embodiment.
  • FIG. 5 is a flowchart illustrating an algorithm for calculating parallax in the image processing apparatus according to the second embodiment of the present invention.
  • the parallax calculation shown in the flowchart of FIG. 5 is performed in the FPGA 11 when the stereo camera image newly photographed by the camera 20 is stored in the stereo camera photographed image storage unit 103 as the stereo camera image of the current frame.
  • In step S10A, the line number determination unit 111 reads from the processor 30 the determination result of the distance range for each region, determined by the processor 30 based on the parallax for the stereo camera image of the previous frame.
  • That is, for each area corresponding to an object detected from the stereo camera image of the previous frame, the determination result of whether the distance range corresponds to a short distance or a long distance is read.
  • In step S20A, the parallax calculation unit 102 reads the stereo camera image of the current frame from the stereo camera captured image storage unit 103, and sequentially selects pixels starting from the edge of the screen.
  • In step S30A, the line number determination unit 111 determines, from the previous-frame distance-range results read in step S10A, whether the region corresponding to the pixel selected in step S20A was determined to be at a short distance, that is, whether an object detected in the previous frame was in the short-distance area. If the determination result of the distance range is a short distance, the process proceeds to step S40; if it is a long distance, the object is judged not to have been in the short-distance area in the previous frame, and the process proceeds to step S50.
  • step S40 the line number determination unit 111 determines that the pixel selected in step S20A corresponds to a short-distance image, and sets a smaller number of scan lines for the pixel.
  • the number of scan lines is determined to be four.
  • step S50 the line number determination unit 111 determines that the pixel selected in step S20A corresponds to a long-distance image, and sets a large number of scan lines for the pixel.
  • the number of scan lines is determined to be eight as in the first embodiment.
  • From step S60 onward, the parallax calculation unit 102 performs the same processing as in the flowchart of FIG. 3 described in the first embodiment. That is, in step S60, the parallax based on the determined number of scan lines is calculated for the pixel selected in step S20A, and in step S70 it is determined whether the calculation has been completed for all pixels. If there is a pixel that has not been calculated, the process returns to step S20A to select the next pixel and repeats the above processing. When the calculation for all pixels is completed, the parallax calculation shown in the flowchart of FIG. 5 ends.
  • the number of scan lines set in steps S40 and S50 is not limited to four or eight. Further, by setting the number of distance ranges of each area represented by the area information output from the processor 30 to three or more, three or more types of scan lines may be set. In any case, it is preferable to increase the number of scan lines set for pixels corresponding to the object as the object exists at a longer distance.
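  • The region-based rule of this embodiment might look like the following sketch; the tuple representation of the area information and the fallback for pixels outside any detected region are assumptions of this example, not details given in the patent.

```python
def scan_lines_from_regions(x, y, regions, near_lines=4, far_lines=8):
    """Pick the scan-line count from the distance range of the detected-object
    region containing pixel (x, y). `regions` holds (x0, y0, x1, y1, range)
    tuples; pixels outside every region are treated as far here (an assumption)."""
    for x0, y0, x1, y1, dist_range in regions:
        if x0 <= x <= x1 and y0 <= y <= y1:
            return near_lines if dist_range == "near" else far_lines
    return far_lines

# Two hypothetical regions from the previous frame's object detection:
regions = [(0, 0, 50, 50, "near"), (60, 0, 100, 40, "far")]
print(scan_lines_from_regions(10, 10, regions))  # 4
print(scan_lines_from_regions(70, 20, regions))  # 8
print(scan_lines_from_regions(10, 90, regions))  # 8 (outside all regions)
```

Because the rule is evaluated per region rather than per pixel, a single distance-range label covers every pixel of an object, which is the "more global" behaviour contrasted with the first embodiment below.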
  • In the present embodiment, an example has been described in which the area information obtained from the stereo camera image of the immediately preceding frame is used to determine the number of scan lines, but this does not limit the present invention.
  • In the first embodiment, the number of scan lines is determined based on parallax information in units of individual pixels or of pixel groups around a pixel, whereas in the present embodiment it is determined in units of the areas obtained by object detection. Therefore, compared with the first embodiment, the number of scan lines can be determined based on more global information.
  • FIG. 6 is a block diagram illustrating a configuration of an image processing apparatus according to the third embodiment of the present invention.
  • The image processing apparatus 10 according to the present embodiment differs from the second embodiment described with reference to FIG. 4 in that information on the object detection result output from the processor 30 is input to the line number determination unit 111.
  • This information is information indicating the result of the object detection performed by the processor 30, and includes information indicating in which region in the stereo camera image the object has been detected.
  • The line number determination unit 111 receives the information on the object detection result that the processor 30 outputs according to the parallax calculated from the stereo camera image of the previous frame, and determines the number of scan lines used in the smoothing process performed by the parallax calculation unit 102 based on that information. For example, a small number of scan lines is adopted for a pixel determined to show an already detected object, and a large number of scan lines for a pixel where no object has been detected.
  • the line number determination unit 111 of the present embodiment acquires information on an object detected based on the parallax calculated in the past by the parallax calculation unit 102, and determines the number of scan lines based on the acquired information on the object. decide.
  • the other points are the same as the first and second embodiments.
  • FIG. 7 is a diagram showing an example of determining the number of scan lines in the image processing device according to the third embodiment of the present invention.
  • For an object that has already been detected, a small number of scan lines 911 is set even at a long distance.
  • For a region where no object has been detected, a large number of scan lines 912 is set, so that a more accurate parallax calculation is performed there.
  • A smaller number of scan lines 913 is set for the road surface 903, where an object has already been detected.
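  • A minimal sketch of this detection-based rule, with hypothetical bounding-box coordinates standing in for the object detection result:

```python
def scan_lines_from_detections(x, y, detected_boxes,
                               detected_lines=4, undetected_lines=8):
    """A pixel inside an already-detected object's bounding box gets a small
    number of scan lines; pixels where nothing has been detected get a large
    number so that the parallax there is computed more accurately."""
    inside = any(x0 <= x <= x1 and y0 <= y <= y1
                 for x0, y0, x1, y1 in detected_boxes)
    return detected_lines if inside else undetected_lines

boxes = [(0, 0, 50, 50)]  # one hypothetical detected object
print(scan_lines_from_detections(10, 10, boxes))  # 4
print(scan_lines_from_detections(80, 80, boxes))  # 8
```

The extra scan lines are thus spent on regions where objects may still be found, rather than on objects already being tracked.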
  • FIG. 8 is a block diagram showing a configuration of an image processing apparatus according to the fourth embodiment of the present invention.
  • The image processing apparatus 10 according to the present embodiment differs from the second and third embodiments described with reference to FIGS. 4 and 6 in that distance information output from a LIDAR (Light Detection and Ranging) 40, an example of a sensor capable of acquiring distance information, is input to the line number determination unit 111.
  • the LIDAR 40 in FIG. 8 is an example of another sensor capable of acquiring distance information, and the present invention is not limited to this.
  • distance information output from, for example, an infrared depth sensor, an ultrasonic sensor, a millimeter wave radar, or another camera image may be used.
  • The line number determination unit 111 receives the distance information output from the LIDAR 40 for the range of the stereo camera image captured by the camera 20, and, based on that distance information, determines the number of scan lines used in the smoothing process performed by the parallax calculation unit 102, in the same manner as in the first embodiment.
  • That is, the distance of each pixel represented by the distance information is compared with a predetermined threshold; for pixels whose distance is larger than the threshold, a larger number of scan lines is set, and for pixels whose distance is smaller than the threshold, a smaller number of scan lines is set.
  • the other points are the same as the first to third embodiments.
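  • The threshold comparison of this embodiment can be sketched as below; the 20 m threshold and the line counts are assumed example values, not figures from the patent.

```python
def scan_lines_from_distance(distance_m, threshold_m=20.0,
                             near_lines=4, far_lines=8):
    """Sensor distance replaces past parallax: a pixel measured farther than
    the threshold gets more scan lines, a nearer one gets fewer."""
    return far_lines if distance_m > threshold_m else near_lines

print(scan_lines_from_distance(5.0))   # 4
print(scan_lines_from_distance(50.0))  # 8
```

Because the distance comes directly from another sensor, this rule works even on the first frame, before any parallax image exists.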
  • FIG. 9 is a block diagram showing a configuration of an image processing device according to the fifth embodiment of the present invention.
  • In the image processing apparatus 10 according to the present embodiment, the condition setting unit 101 of the FPGA 11 differs from that of the first embodiment described with reference to FIG. 2 in that a sparse density determination unit 112 is provided in place of the line number determination unit 111.
  • The sparse density determination unit 112 reads from the parallax image storage unit 104 the parallax image that the parallax calculation unit 102 generated from the stereo camera image of the previous frame, and, based on that parallax image, determines the sparse density used in the smoothing process executed by the parallax calculation unit 102.
  • That is, the sparse density determination unit 112 of the present embodiment determines the sparse density based on the parallax calculated in the past by the parallax calculation unit 102, and then notifies the parallax calculation unit 102 of the determined sparse density.
  • the condition setting unit 101 in the present embodiment sets the condition for the parallax calculation unit 102 by notifying the sparse density determined by the sparse density determination unit 112 in this manner.
  • The parallax calculation unit 102 thins out pixels from the stereo camera image of the current frame read from the stereo camera captured image storage unit 103, at a ratio corresponding to the notified sparse density. It then performs the smoothing calculation using the remaining, non-thinned pixels as the calculation target pixels. After that, as in the first embodiment, the dissimilarity is calculated for each pixel of the smoothed stereo camera image, and the parallax of each pixel is calculated based on the calculation result.
  • the parallax calculation unit 102 outputs the parallax calculation result obtained in pixel units as a parallax image to the processor 30 and stores the parallax calculation result in the parallax image storage unit 104.
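  • The thinning step can be sketched as follows; the thinning ratio, threshold, and one-row simplification are assumptions for illustration (the actual patterns are two-dimensional, as in FIGS. 10 and 11).

```python
def thin_pixels(row, step):
    """Keep every `step`-th pixel; the skipped pixels are excluded from
    the smoothing calculation."""
    return row[::step]

def sparse_step(prev_parallax, d_th=16.0):
    """A larger past parallax (nearer object) allows more aggressive thinning:
    step 2 keeps half the pixels, step 1 keeps them all (assumed values)."""
    return 2 if prev_parallax > d_th else 1

row = list(range(8))
print(thin_pixels(row, sparse_step(32.0)))  # [0, 2, 4, 6]  (near: thinned)
print(thin_pixels(row, sparse_step(8.0)))   # [0, 1, 2, 3, 4, 5, 6, 7]
```

The intuition matches FIG. 10: a nearby object occupies many pixels, so sparse sampling over a wide range still covers it, while a distant object needs dense sampling over a narrow range.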
  • FIG. 10 is an explanatory diagram of pixel thinning used in the smoothing calculation in the image processing device according to the fifth embodiment of the present invention.
  • FIG. 10A is a diagram illustrating an example of pixels used in a smoothing calculation of a pixel corresponding to a long-distance image.
  • For a pixel corresponding to a long-distance image, the pixels on each scan line are not thinned out, as shown in FIG. 10A, and a dense smoothing calculation is performed over a narrow range of the image.
  • FIG. 10B is a diagram illustrating an example of a pixel used in a smoothing calculation of a pixel corresponding to an image at a short distance.
  • The sparse density determination unit 112 of the present embodiment judges that the larger the value of the parallax obtained in the previous frame, the shorter the distance to the image, and determines the sparse density so that the ratio of pixels thinned out from the stereo camera image increases accordingly.
  • FIG. 11 is an explanatory diagram of an arrangement of pixels used for smoothing calculation in the image processing device according to the fifth embodiment of the present invention.
  • FIG. 11A shows an example in which pixels arranged side by side on eight scan lines are used for smoothing calculation.
  • FIG. 11B shows an example in which pixels arranged not along scan lines but according to a predetermined arrangement pattern are used for the smoothing calculation.
  • the pixel arrangement shown in FIG. 11 is an example, and the pixel arrangement used for the smoothing calculation is not limited to this.
  • In the present embodiment, the sparse density determination unit 112 determines the sparse density based on the parallax calculated in the past by the parallax calculation unit 102, in the same manner as the line number determination unit 111 described in the first embodiment determines the number of scan lines. However, the present invention is not limited to this.
  • As in the determination of the number of scan lines in the second embodiment, information on the distance range determined based on the parallax calculated in the past by the parallax calculation unit 102 may be acquired from the processor 30, and the sparse density determined based on the acquired distance range information. Similarly to the determination of the number of scan lines in the third embodiment, information on an object detected based on the parallax calculated in the past by the parallax calculation unit 102 may be acquired from the processor 30, and the sparse density determined based on the acquired object information. Alternatively, as in the fourth embodiment, distance information corresponding to the range of the stereo camera image measured by another sensor may be acquired, and the sparse density determined based on the acquired distance information. Other than these, the sparse density can be determined by an arbitrary method.
  • FIG. 12 is a block diagram showing a configuration of an image processing apparatus according to the sixth embodiment of the present invention.
  • In the image processing apparatus 10 according to the present embodiment, the condition setting unit 101 of the FPGA 11 differs from that of the first embodiment described with reference to FIG. 2 in that a sparse density determination unit 112 is further provided in addition to the line number determination unit 111.
  • This sparse density determination unit 112 is the same as that described in the fifth embodiment.
  • Although FIG. 12 illustrates an example in which the memory 12 includes the parallax image storage unit 104, when the line number determination unit 111 and the sparse density determination unit 112 use the same methods as in the second to fourth embodiments, the parallax image storage unit 104 may be omitted from the memory 12. Further, as described in the fourth embodiment, when the number of scan lines or the sparse density is determined using distance information measured by another sensor, the distance information from that sensor is input to the line number determination unit 111 or the sparse density determination unit 112.
  • the image processing apparatus 10 calculates parallax based on a stereo camera image captured by the camera 20.
  • the image processing apparatus 10 includes: a condition setting unit 101 that sets a condition for determining a calculation target pixel in a stereo camera image; and a calculation target pixel in the stereo camera image based on the condition set by the condition setting unit 101. And a parallax calculating unit 102 that calculates parallax using information of the calculation target pixel. With this configuration, the amount of calculation required for calculating the parallax can be suppressed.
  • the condition setting unit 101 has a line number determination unit 111 that determines the number of scan lines for scanning a stereo camera image.
  • the parallax calculation unit 102 sets the number of scan lines determined by the line number determination unit 111 for the stereo camera image, and determines pixels of the stereo camera image on the scan line as calculation target pixels. With this configuration, it is possible to appropriately determine the calculation target pixel according to the condition.
  • The line number determination unit 111 determines the number of scan lines based on the parallax previously calculated by the parallax calculation unit 102. In doing so, the line number determination unit 111 increases the number of scan lines as the parallax value decreases. With this configuration, the optimal number of scan lines according to the distance of each pixel can be determined from past parallax information.
  • The line number determination unit 111 may acquire information on the distance range determined based on the parallax previously calculated by the parallax calculation unit 102, and determine the number of scan lines based on the acquired distance range information. With this configuration, the optimal number of scan lines according to the distance of each object can be determined from past parallax information.
  • The line number determination unit 111 may acquire information on objects detected based on the parallax previously calculated by the parallax calculation unit 102, and determine the number of scan lines based on the acquired object information. With this configuration, the optimal number of scan lines for each object can be determined from past parallax information.
  • The line number determination unit 111 may acquire distance information for the range of the stereo camera image and determine the number of scan lines based on the acquired distance information. With this configuration, the optimal number of scan lines according to the distance of each pixel can be determined even when parallax cannot be calculated from a past stereo camera image.
  • The condition setting unit 101 includes a sparse density determination unit 112 that determines a sparse density representing the proportion of pixels to be thinned out from a stereo camera image.
  • The parallax calculation unit 102 determines the pixels remaining after thinning out the stereo camera image according to the sparse density determined by the sparse density determination unit 112 as the calculation target pixels. With this configuration, the calculation target pixels can be determined appropriately according to the condition.
  • The sparse density determination unit 112 can determine the sparse density based on the parallax previously calculated by the parallax calculation unit 102. In doing so, the sparse density determination unit 112 determines the sparse density such that the larger the parallax value, the greater the proportion of pixels thinned out from the stereo camera image. With this configuration, the optimal sparse density according to the distance of each pixel can be determined from past parallax information.
  • The sparse density determination unit 112 may acquire information on the distance range determined based on the parallax previously calculated by the parallax calculation unit 102, and determine the sparse density based on the acquired distance range information.
  • The sparse density determination unit 112 may also acquire information on objects detected based on the parallax previously calculated by the parallax calculation unit 102, and determine the sparse density based on the acquired object information.
  • Reference signs: 10 ... image processing apparatus, 11 ... FPGA, 12 ... memory, 20 ... camera, 30 ... processor, 40 ... LIDAR, 101 ... condition setting unit, 102 ... parallax calculation unit (SGM calculation), 103 ... stereo camera captured image storage unit, 104 ... parallax image storage unit, 111 ... line number determination unit, 112 ... sparse density determination unit
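As an illustration of the sparse-density behavior summarized above, the following is a minimal sketch, not the patented implementation: the thresholds and stride values below are hypothetical, since the publication does not fix them.

```python
def sparse_density_for(parallax: float) -> int:
    """Return a thinning stride: keep every n-th pixel.

    Larger parallax (nearer object) -> larger stride (more pixels
    thinned out); smaller parallax (farther object) -> denser sampling.
    Threshold and stride values are illustrative only.
    """
    if parallax > 16.0:   # near: coarse sampling is sufficient
        return 4          # keep 1 pixel in 4
    elif parallax > 8.0:
        return 2          # keep 1 pixel in 2
    return 1              # far: keep every pixel


def select_calculation_pixels(prev_parallax_row):
    """Choose calculation-target pixel positions for one image row,
    based on the parallax computed for the previous frame."""
    targets = []
    for x, d in enumerate(prev_parallax_row):
        if x % sparse_density_for(d) == 0:
            targets.append(x)
    return targets
```

A row whose previous-frame parallax is uniformly large keeps only every fourth pixel, while a far (small-parallax) row keeps them all.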

Abstract

This image processing device computes parallax on the basis of a stereo camera image captured by a stereo camera, and is provided with: a condition setting unit that sets conditions for determining calculation target pixels in the stereo camera image; and a parallax computation unit that, on the basis of the conditions set by the condition setting unit, determines the calculation target pixels within the stereo camera image and computes the parallax using information relating to those pixels.

Description

Image processing device

The present invention relates to an image processing device.

There is known a position measuring device that measures the distance to an object in the camera's field of view by the principle of triangulation, based on a stereo image captured by a stereo camera having a pair of cameras. The principle of triangulation here means calculating the distance from the cameras to an object using the positional shift (parallax) between the images of the same object captured by the left and right cameras. The parallax is derived by identifying where the image of an object appearing in one of the pictures is located in the other picture.
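The triangulation relation described above is the standard pinhole stereo formula Z = f·B/d. The following is a minimal sketch of it; the focal length, baseline, and disparity values used in the example are illustrative and not taken from this publication.

```python
def distance_from_disparity(disparity_px: float,
                            focal_length_px: float,
                            baseline_m: float) -> float:
    """Standard stereo triangulation: Z = f * B / d.

    disparity_px    -- horizontal shift of the same point between the
                       left and right images, in pixels
    focal_length_px -- focal length expressed in pixels
    baseline_m      -- distance between the two camera centers, in meters
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px


# Illustrative values: f = 1000 px, B = 0.35 m, d = 50 px
# -> Z = 1000 * 0.35 / 50 = 7.0 m
```

Note that distance is inversely proportional to disparity: near objects produce large disparities and far objects small ones, which is the relationship the later embodiments exploit.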
Various methods have been proposed for deriving parallax. As a classical technique, block matching is known, in which the other image is searched for the region with the lowest dissimilarity to a region of several pixels in one image. Furthermore, methods such as SGM (Semi-Global Matching) have been proposed that reduce mismatching and improve accuracy by including in the matching process not only local information in the image but also information from the entire image. While such methods promise high accuracy, they have the problem that the amount of calculation increases compared with the classical technique.
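To make the classical approach concrete, here is a minimal one-dimensional block-matching sketch; it is not part of this publication, and the window size, search range, and use of SAD (sum of absolute differences) as the dissimilarity measure are illustrative choices. Real matchers use two-dimensional windows, and SGM additionally aggregates matching costs along multiple scan paths across the image.

```python
def sad(block_a, block_b):
    """Sum of absolute differences -- a simple dissimilarity measure."""
    return sum(abs(a - b) for a, b in zip(block_a, block_b))


def block_match_disparity(left_row, right_row, x, block=3, max_d=16):
    """Find the disparity of the pixel at position x on one row by
    searching the right image for the block with the lowest SAD
    relative to the left-image block starting at x."""
    ref = left_row[x:x + block]
    best_d, best_cost = 0, float("inf")
    for d in range(0, min(max_d, x) + 1):
        cand = right_row[x - d:x - d + block]
        if len(cand) < block:
            continue
        cost = sad(ref, cand)
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```

For a pattern that appears shifted two pixels to the right in the left image, the search recovers a disparity of 2.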
Regarding suppression of the calculation amount in parallax calculation using a stereo camera, the technique described in Patent Literature 1, for example, is known. Patent Literature 1 discloses an obstacle measurement method that measures the distance to an obstacle by parallax calculation using stereo images, characterized in that, based on distance information obtained from a low-resolution image, the sampling interval of the pixels subjected to the correlation operation for the parallax calculation in a high-resolution image is set large at short distances and small at long distances.
Japanese Patent Application Publication No. 2008-298533
According to the method of Patent Literature 1, setting an appropriate sampling interval makes it possible to obtain high-accuracy distance information only for the necessary regions while suppressing unnecessary operations, thereby shortening the operation time and reducing the circuit scale of the device. However, since parallax must be calculated from both the low-resolution image and the high-resolution image, the overall amount of calculation cannot be reduced very much, and there is room for further improvement in suppressing the calculation amount.

An image processing apparatus according to the present invention calculates parallax based on a stereo camera image captured by a stereo camera, and comprises: a condition setting unit that sets a condition for determining calculation target pixels in the stereo camera image; and a parallax calculation unit that determines the calculation target pixels in the stereo camera image based on the condition set by the condition setting unit, and calculates the parallax using information on the calculation target pixels.

According to the present invention, the amount of calculation required for calculating parallax can be suppressed.
FIG. 1 is a block diagram showing the basic configuration of an image processing apparatus according to an embodiment of the present invention.
FIG. 2 is a block diagram showing the configuration of an image processing apparatus according to the first embodiment of the present invention.
FIG. 3 is a flowchart showing the parallax calculation algorithm in the image processing apparatus according to the first embodiment of the present invention.
FIG. 4 is a block diagram showing the configuration of an image processing apparatus according to the second embodiment of the present invention.
FIG. 5 is a flowchart showing the parallax calculation algorithm in the image processing apparatus according to the second embodiment of the present invention.
FIG. 6 is a block diagram showing the configuration of an image processing apparatus according to the third embodiment of the present invention.
FIG. 7 is a diagram showing an example of determining the number of scan lines in the image processing apparatus according to the third embodiment of the present invention.
FIG. 8 is a block diagram showing the configuration of an image processing apparatus according to the fourth embodiment of the present invention.
FIG. 9 is a block diagram showing the configuration of an image processing apparatus according to the fifth embodiment of the present invention.
FIG. 10 is an explanatory diagram of the pixel thinning used in the smoothing calculation in the image processing apparatus according to the fifth embodiment of the present invention.
FIG. 11 is an explanatory diagram of the arrangement of pixels used in the smoothing calculation in the image processing apparatus according to the fifth embodiment of the present invention.
FIG. 12 is a block diagram showing the configuration of an image processing apparatus according to the sixth embodiment of the present invention.
(Basic configuration)
First, the basic configuration of the embodiments of the present invention will be described. FIG. 1 is a block diagram showing the basic configuration of an image processing apparatus according to an embodiment of the present invention. The image processing apparatus 10 shown in FIG. 1 is connected to a camera 20 and a processor 30, and includes a condition setting unit 101 and a parallax calculation unit 102. The image processing apparatus 10 is mounted on a vehicle, for example, and is used to detect and recognize objects such as other vehicles and obstacles around the vehicle.

The camera 20 is a stereo camera whose left and right imaging units are installed at a predetermined interval; it captures a pair of left and right stereo camera images and outputs the captured data to the image processing apparatus 10. Based on the captured data input from the camera 20, the image processing apparatus 10 calculates the parallax in the stereo camera images with the parallax calculation unit 102. At this time, the parallax calculation unit 102 determines the calculation target pixels in the stereo camera images based on the condition set by the condition setting unit 101, and calculates the parallax from the information of those pixels using, for example, a method called SGM. A calculation method other than SGM may also be used to calculate the parallax.

The parallax calculated by the parallax calculation unit 102 is output from the image processing apparatus 10 to the processor 30. The processor 30 executes processing such as object detection and object recognition using the parallax information input from the image processing apparatus 10.

Next, embodiments of the present invention using the above basic configuration will be described.
(First embodiment)
FIG. 2 is a block diagram showing the configuration of the image processing apparatus according to the first embodiment of the present invention. As shown in FIG. 2, the image processing apparatus 10 according to this embodiment includes an FPGA 11 and a memory 12. The FPGA 11 is an arithmetic processing device composed of a large number of combined logic circuits, and provides the functions of the condition setting unit 101 and the parallax calculation unit 102 described with reference to FIG. 1. Note that the condition setting unit 101 and the parallax calculation unit 102 may also be implemented in the image processing apparatus 10 using something other than the FPGA 11, for example another logic circuit such as an ASIC, or software executed on a CPU. The memory 12 is a storage device using a readable and writable recording medium such as a RAM, HDD, or flash memory, and provides a stereo camera captured image storage unit 103 and a parallax image storage unit 104.
The captured data representing the stereo camera images input from the camera 20 to the image processing apparatus 10 is stored frame by frame in the stereo camera captured image storage unit 103 in the memory 12. The parallax calculation unit 102 in the FPGA 11 reads the pixel data of each frame's stereo camera image from the stereo camera captured image storage unit 103 and calculates the parallax from it, generating for each frame a parallax image that represents the parallax per pixel. The parallax image generated by the parallax calculation unit 102 is output to the processor 30 and stored in the parallax image storage unit 104 in the memory 12.

In calculating parallax from the stereo camera images, the parallax calculation unit 102 performs a smoothing calculation and a dissimilarity calculation. The smoothing calculation is a process that smooths (averages) the pixel value of each pixel in a stereo camera image together with the pixel values of a plurality of surrounding pixels, thereby evening out variations in pixel value between pixels. The dissimilarity calculation is a process that computes the dissimilarity, that is, the likelihood that an object appearing in one of the stereo camera images and an object appearing in the other are not the same thing, in other words how far apart they are.
In this embodiment, the condition setting unit 101 includes a line number determination unit 111. The line number determination unit 111 reads the parallax image that the parallax calculation unit 102 generated from the previous frame's stereo camera image and stored in the parallax image storage unit 104, and based on that parallax image determines the number of scan lines used in the smoothing process performed by the parallax calculation unit 102. That is, the line number determination unit 111 of this embodiment determines the number of scan lines for scanning the current frame's stereo camera image based on the parallax previously calculated by the parallax calculation unit 102, and notifies the parallax calculation unit 102 of the determined number. The condition setting unit 101 in this embodiment thus sets the condition for the parallax calculation unit 102 by notifying it of the number of scan lines determined by the line number determination unit 111.

When notified of the number of scan lines by the line number determination unit 111, the parallax calculation unit 102 sets, for each pixel, the notified number of scan lines on the current frame's stereo camera image read from the stereo camera captured image storage unit 103. It then performs the smoothing calculation using the pixels of the stereo camera image on each set scan line as calculation target pixels. After that, it performs the dissimilarity calculation for each pixel of the smoothed stereo camera image and computes the parallax of each pixel based on the calculation result. The parallax calculation unit 102 outputs the per-pixel parallax thus obtained to the processor 30 as a parallax image and stores it in the parallax image storage unit 104.
FIG. 3 is a flowchart showing the parallax calculation algorithm in the image processing apparatus according to the first embodiment of the present invention. The parallax calculation shown in the flowchart of FIG. 3 is performed in the FPGA 11 when a stereo camera image newly captured by the camera 20 is stored in the stereo camera captured image storage unit 103 as the current frame's stereo camera image.

In step S10, the parallax calculation unit 102 reads the current frame's stereo camera image from the stereo camera captured image storage unit 103 and selects pixels in order starting from the edge of the screen.

In step S20, the line number determination unit 111 reads from the parallax image storage unit 104 and refers to the parallax of the pixel selected in step S10, out of the parallax image calculated by the parallax calculation unit 102 for the previous frame.

In step S30, the line number determination unit 111 determines whether the parallax referred to in step S20 is larger than a predetermined threshold dth. If the parallax is larger than the threshold dth, the process proceeds to step S40; if it is equal to or smaller than dth, the process proceeds to step S50.
In step S40, the line number determination unit 111 judges that the pixel selected in step S10 corresponds to a near image and sets a small number of scan lines for that pixel. In this embodiment, for example, the number of scan lines is set to four.

In step S50, the line number determination unit 111 judges that the pixel selected in step S10 corresponds to a far image and sets a large number of scan lines for that pixel. In this embodiment, for example, the number of scan lines is set to eight.

Once the number of scan lines has been determined by the line number determination unit 111 in step S40 or S50, in step S60 the parallax calculation unit 102 computes the parallax for the pixel selected in step S10 based on the determined number of scan lines. Here, the smoothing calculation using the determined number of scan lines is performed for the pixel, the dissimilarity is optimized, and the parallax is calculated. The parallax in the current frame's stereo camera image is thereby calculated pixel by pixel.

In step S70, the parallax calculation unit 102 determines whether the computation has been completed for all pixels. If there are pixels not yet processed, the process returns to step S10, selects the next pixel, and repeats the above processing. When the computation for all pixels is complete, the obtained parallax image of the current frame is stored in the parallax image storage unit 104, and the parallax calculation shown in the flowchart of FIG. 3 ends.

Note that although steps S40 and S50 above set the number of scan lines to four and eight respectively, these are merely examples and do not limit the present invention. Also, a plurality of values may be set for the threshold dth used in the determination of step S30 and each compared with the parallax obtained for the previous frame, so that three or more different numbers of scan lines can be set. In either case, it is preferable to judge that the smaller the parallax value obtained for the previous frame, the farther the image is, and to increase the number of scan lines accordingly.
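The threshold logic of steps S30 to S50 can be sketched as follows. The line counts of 4 and 8 follow the embodiment's example, while the threshold value dth is left open in the text and is chosen here purely for illustration.

```python
DTH = 10.0  # parallax threshold; the publication leaves the value open


def scan_line_count(prev_parallax: float,
                    near_lines: int = 4, far_lines: int = 8) -> int:
    """Steps S30-S50: a pixel whose previous-frame parallax exceeds the
    threshold is treated as a near image and gets fewer scan lines;
    otherwise it is treated as far and gets more lines.

    The counts 4 and 8 follow the embodiment's example; DTH is
    an illustrative value only.
    """
    return near_lines if prev_parallax > DTH else far_lines
```

A near pixel (large parallax) is smoothed along fewer lines, which is where the calculation saving comes from; far pixels keep the full set of lines to preserve accuracy.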
(Second embodiment)
Next, a second embodiment of the present invention will be described. This embodiment describes an example in which, when the line number determination unit 111 determines the number of scan lines, it uses a distance range for each region based on the result of object detection performed by the processor 30.
FIG. 4 is a block diagram showing the configuration of the image processing apparatus according to the second embodiment of the present invention. As shown in FIG. 4, the image processing apparatus 10 according to this embodiment differs from the first embodiment described with reference to FIG. 2 in that the parallax image storage unit 104 is not provided in the memory 12, and in that region information output from the processor 30 is input to the line number determination unit 111. This region information is set for each object based on the result of the object detection performed by the processor 30, and includes information indicating which region in the stereo camera image the object corresponds to, and information indicating whether the distance to the object is classified into the near or the far distance range.

In this embodiment, the line number determination unit 111 receives the region information output by the processor 30 according to the parallax calculated from the previous frame's stereo camera image, and determines the number of scan lines used in the smoothing process performed by the parallax calculation unit 102 based on the distance range information for each region in that region information. That is, the line number determination unit 111 of this embodiment acquires information on the distance range determined based on the parallax previously calculated by the parallax calculation unit 102, and determines the number of scan lines based on the acquired distance range information. In all other respects, this embodiment is the same as the first.
FIG. 5 is a flowchart showing the parallax calculation algorithm in the image processing apparatus according to the second embodiment of the present invention. The parallax calculation shown in the flowchart of FIG. 5 is performed in the FPGA 11 when a stereo camera image newly captured by the camera 20 is stored in the stereo camera captured image storage unit 103 as the current frame's stereo camera image.

In step S10A, the line number determination unit 111 reads from the processor 30 the distance range determination result for each region that the processor 30 determined based on the parallax for the previous frame's stereo camera image. Here, by acquiring the region information output by the processor 30 described above, it reads information indicating whether the distance range of each region corresponding to each object detected in the previous frame's stereo camera image is near or far.

In step S20A, the parallax calculation unit 102 reads the current frame's stereo camera image from the stereo camera captured image storage unit 103 and selects pixels in order starting from the edge of the screen.

In step S30A, the line number determination unit 111 determines, based on the distance range determination result for the region corresponding to the pixel selected in step S20A, out of the previous frame's determination results read in step S10A, whether the object detected for that pixel in the previous frame was in the near-distance region. If the distance range determination result is near, it judges that the object was in the near-distance region in the previous frame and proceeds to step S40; if the result is far, it judges that the object was not in the near-distance region in the previous frame and proceeds to step S50.
In step S40, the line number determination unit 111 judges that the pixel selected in step S20A corresponds to a near image and sets a small number of scan lines for that pixel. In this embodiment, as in the first embodiment, the number of scan lines is set to four, for example.

In step S50, the line number determination unit 111 judges that the pixel selected in step S20A corresponds to a far image and sets a large number of scan lines for that pixel. In this embodiment, as in the first embodiment, the number of scan lines is set to eight, for example.

Once the number of scan lines has been determined by the line number determination unit 111 in step S40 or S50, from step S60 onward the parallax calculation unit 102 performs the same processing as in the flowchart of FIG. 3 described in the first embodiment. That is, in step S60 it computes the parallax for the pixel selected in step S20A based on the determined number of scan lines, and in step S70 it determines whether the computation has been completed for all pixels. If there are pixels not yet processed, the process returns to step S20A, selects the next pixel, and repeats the above processing. When the computation for all pixels is complete, the parallax calculation shown in the flowchart of FIG. 5 ends.
As in the first embodiment, the numbers of scan lines set in steps S40 and S50 are not limited to four and eight. Also, by classifying the distance range of each region represented by the region information output from the processor 30 into three or more classes, three or more different numbers of scan lines may be made settable. In either case, it is preferable that the farther an object is, the more scan lines are set for the pixels corresponding to that object.

Furthermore, although the above description shows an example in which the region information obtained from the stereo camera image of the immediately preceding frame is used to determine the number of scan lines, this does not limit the present invention. For example, the region information of the current frame may be predicted from the region information of the previous frame and the frame before it. It is also possible to correct the region information predicted for the current frame based on driving information of the vehicle in which the image processing apparatus 10 is mounted, for example steering information.
　前述の第1の実施形態では、画素単位または画素周辺の画素群単位の視差情報に基づいてスキャンライン数を判断しているのに対して、本実施形態では、物体検知により求められた領域単位の距離レンジの情報を用いてスキャンライン数を判断している。そのため、第1の実施形態と比べて、より大域的な情報を元にスキャンライン数を判断できる効果がある。 In the first embodiment described above, the number of scan lines is determined based on parallax information in units of pixels or of pixel groups around a pixel, whereas in the present embodiment it is determined using distance-range information for each region obtained by object detection. This has the effect that, compared with the first embodiment, the number of scan lines can be determined from more global information.
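A minimal sketch of the region-based decision, assuming distance-range classes are ordered from farthest to nearest; the function name and the concrete line counts (including the longer tuple for three or more range classes) are illustrative assumptions, not fixed by the embodiment:

```python
def scan_lines_for_region(distance_class, counts=(8, 4)):
    """Map a region's distance-range class (0 = farthest) to a
    scan-line count.

    `counts` is ordered far-to-near, so a more distant region never
    gets fewer scan lines than a nearer one, in line with the
    document's preference for more lines at longer distances."""
    if not 0 <= distance_class < len(counts):
        raise ValueError("unknown distance-range class")
    return counts[distance_class]
```

Extending `counts` to, say, `(12, 8, 4)` realizes the variant with three or more distance-range classes mentioned above.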
(第3の実施形態)
 次に本発明の第3の実施形態について説明する。本実施形態では、ライン数決定部111においてスキャンラインの数を決定する際に、プロセッサ30で行われた物体検知の結果を利用する例を説明する。
(Third embodiment)
Next, a third embodiment of the present invention will be described. In the present embodiment, an example will be described in which the number of scan lines is determined by the line number determination unit 111 using the result of object detection performed by the processor 30.
　図6は、本発明の第3の実施形態に係る画像処理装置の構成を示すブロック図である。図6に示すように、本実施形態に係る画像処理装置10は、図4で説明した第2の実施形態と比べて、プロセッサ30から出力された物体検知結果の情報がライン数決定部111へ入力される点が異なっている。この情報は、プロセッサ30で行われた物体検知の結果を表す情報であり、ステレオカメラ画像内のどの領域に対して物体が検知されたかを表す情報を含んでいる。 FIG. 6 is a block diagram showing the configuration of an image processing apparatus according to the third embodiment of the present invention. As shown in FIG. 6, the image processing apparatus 10 according to the present embodiment differs from the second embodiment described with reference to FIG. 4 in that information on the object detection result output from the processor 30 is input to the line number determination unit 111. This information represents the result of the object detection performed by the processor 30, and includes information indicating in which regions of the stereo camera image objects have been detected.
　本実施形態において、ライン数決定部111は、前フレームのステレオカメラ画像から計算された視差に応じてプロセッサ30から出力される物体検知結果の情報を入力し、その情報に基づいて、視差算出部102が実施する平滑化処理において用いられるスキャンラインの数を決定する。例えば、既に検知済みの物体の像と判定した画素に対しては少ないライン数を採用し、物体検知済みでない像と判定した画素に対しては多いライン数を採用する。すなわち、本実施形態のライン数決定部111は、過去に視差算出部102により算出された視差に基づき検知された物体の情報を取得し、取得した物体の情報に基づいて、スキャンラインの数を決定する。これ以外の点では、第1、第2の実施形態と同様である。 In the present embodiment, the line number determination unit 111 receives the object detection result that the processor 30 outputs according to the parallax calculated from the stereo camera image of the previous frame, and based on that information determines the number of scan lines used in the smoothing process performed by the parallax calculation unit 102. For example, a small number of lines is used for pixels determined to belong to the image of an already detected object, and a large number of lines for pixels determined to belong to an image in which no object has yet been detected. That is, the line number determination unit 111 of the present embodiment acquires information on objects detected based on the parallax previously calculated by the parallax calculation unit 102, and determines the number of scan lines based on the acquired object information. The other points are the same as in the first and second embodiments.
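As a sketch, the per-pixel choice might look as follows, assuming the previous frame's detection result arrives as a list of axis-aligned boxes; the box format and the counts 4/8 are assumptions for illustration:

```python
def scan_lines_for_pixel(x, y, detected_boxes, detected_lines=4, undetected_lines=8):
    """Pick the scan-line count for pixel (x, y) from the previous
    frame's object detection result.

    `detected_boxes` is assumed to be a list of (x0, y0, x1, y1)
    rectangles covering already-detected objects."""
    for x0, y0, x1, y1 in detected_boxes:
        if x0 <= x < x1 and y0 <= y < y1:
            return detected_lines    # known object: fewer lines suffice
    return undetected_lines          # not yet detected: compute more accurately
```

A real implementation would receive the boxes from the processor's detection output rather than as a plain list.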
　図7は、本発明の第3の実施形態に係る画像処理装置におけるスキャンライン数決定の例を示す図である。図7に示すように、前フレームで物体検知済みの車両の像901の領域内画素に対する視差計算では、遠距離であってもライン数911を少なく設定する。一方、前フレームの時点では物体検知されていない車両の像902に対しては、ライン数912を多く設定することで、より精度の高い視差計算を行うようにする。また、物体検知済みの路面903に対しては、より少ないライン数913を設定する。 FIG. 7 is a diagram showing an example of determining the number of scan lines in the image processing apparatus according to the third embodiment of the present invention. As shown in FIG. 7, in the parallax calculation for pixels within the image 901 of a vehicle already detected in the previous frame, a small number of lines 911 is set even though the vehicle is at a long distance. On the other hand, for the image 902 of a vehicle not yet detected as of the previous frame, a large number of lines 912 is set so that parallax is calculated with higher accuracy. A smaller number of lines 913 is likewise set for the already-detected road surface 903.
　本実施形態では、前フレームでの物体検知の結果を用いてスキャンライン数を設定することで、過去に距離情報を得たことの無い像に対しては、より精度の高い視差計算を実施することができる。 In the present embodiment, by setting the number of scan lines using the object detection result from the previous frame, parallax can be calculated with higher accuracy for images for which no distance information has been obtained in the past.
(第4の実施形態)
 次に本発明の第4の実施形態について説明する。本実施形態では、ライン数決定部111においてスキャンラインの数を決定する際に、他のセンサで測定した距離情報を利用する例を説明する。
(Fourth embodiment)
Next, a fourth embodiment of the present invention will be described. In the present embodiment, an example will be described in which the line number determination unit 111 uses the distance information measured by another sensor when determining the number of scan lines.
　図8は、本発明の第4の実施形態に係る画像処理装置の構成を示すブロック図である。図8に示すように、本実施形態に係る画像処理装置10は、図4、図6でそれぞれ説明した第2、第3の実施形態と比べて、距離情報を取得可能なセンサの一例であるLIDAR(Light Detection And Ranging)40から出力された距離情報がライン数決定部111へ入力される点が異なっている。なお、図8のLIDAR40は距離情報を取得可能な他のセンサの一例であり、本発明ではこれに限定されない。他にも、例えば赤外線デプスセンサ、超音波センサ、ミリ波レーダー、他のカメラ映像などから出力される距離情報を利用してもよい。 FIG. 8 is a block diagram showing the configuration of an image processing apparatus according to the fourth embodiment of the present invention. As shown in FIG. 8, the image processing apparatus 10 according to the present embodiment differs from the second and third embodiments described with reference to FIGS. 4 and 6 in that distance information output from a LIDAR (Light Detection And Ranging) sensor 40, which is one example of a sensor capable of acquiring distance information, is input to the line number determination unit 111. Note that the LIDAR 40 in FIG. 8 is merely one example of such a sensor, and the present invention is not limited to it; distance information output from, for example, an infrared depth sensor, an ultrasonic sensor, a millimeter-wave radar, or another camera image may also be used.
　本実施形態において、ライン数決定部111は、カメラ20が撮影するステレオカメラ画像の範囲に対してLIDAR40から出力される距離情報を入力し、この距離情報に基づいて、第1の実施形態と同様に、視差算出部102が実施する平滑化処理において用いられるスキャンラインの数を決定する。すなわち、距離情報が表す各画素の距離を所定の閾値と比較して、距離が閾値よりも大きい画素に対してはスキャンラインの数を多く設定し、距離が閾値以下の画素に対してはスキャンラインの数を少なく設定する。これ以外の点では、第1~第3の実施形態と同様である。 In the present embodiment, the line number determination unit 111 receives the distance information output from the LIDAR 40 for the range of the stereo camera image captured by the camera 20, and based on this distance information determines, as in the first embodiment, the number of scan lines used in the smoothing process performed by the parallax calculation unit 102. That is, the distance of each pixel represented by the distance information is compared with a predetermined threshold; a larger number of scan lines is set for pixels whose distance exceeds the threshold, and a smaller number for pixels whose distance is equal to or less than the threshold. The other points are the same as in the first to third embodiments.
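The threshold test can be sketched as below. The defaults follow the document's general preference (stated for the first and second embodiments) of giving more distant pixels, whose expected parallax is small, more scan lines; the 30 m threshold and the 8/4 counts are illustrative assumptions, and both counts are parameters so the policy can be configured:

```python
def scan_lines_from_sensor(distance_m, threshold_m=30.0, far_lines=8, near_lines=4):
    """Choose a scan-line count by comparing an externally measured
    pixel distance (e.g. from LIDAR) with a threshold.

    Defaults give far pixels the larger count; swap the two counts to
    realize the opposite policy."""
    return far_lines if distance_m > threshold_m else near_lines
```
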
(第5の実施形態)
 次に本発明の第5の実施形態について説明する。本実施形態では、第1~第4の実施形態で説明したライン数決定部111の代わりに、平滑化計算において間引く画素の割合を表す疎密度を決定する疎密度決定部112を有する画像処理装置の例を説明する。
(Fifth embodiment)
Next, a fifth embodiment of the present invention will be described. In the present embodiment, an example of an image processing apparatus will be described that has, in place of the line number determination unit 111 described in the first to fourth embodiments, a sparse density determination unit 112 that determines a sparse density representing the proportion of pixels thinned out in the smoothing calculation.
　図9は、本発明の第5の実施形態に係る画像処理装置の構成を示すブロック図である。図9に示すように、本実施形態に係る画像処理装置10は、図2で説明した第1の実施形態と比べて、FPGA11の条件設定部101がライン数決定部111の代わりに疎密度決定部112を有する点が異なっている。疎密度決定部112は、前フレームのステレオカメラ画像から視差算出部102が生成して視差画像保存部104に格納された視差画像を読み出し、その視差画像に基づいて、視差算出部102が実施する平滑化処理における疎密度を決定する。すなわち、本実施形態の疎密度決定部112は、過去に視差算出部102により算出された視差に基づいて、疎密度を決定する。そして、決定した疎密度を視差算出部102に通知する。本実施形態における条件設定部101は、このようにして疎密度決定部112により決定された疎密度を通知することで、視差算出部102に対する条件設定を行う。 FIG. 9 is a block diagram showing the configuration of an image processing apparatus according to the fifth embodiment of the present invention. As shown in FIG. 9, the image processing apparatus 10 according to the present embodiment differs from the first embodiment described with reference to FIG. 2 in that the condition setting unit 101 of the FPGA 11 has a sparse density determination unit 112 in place of the line number determination unit 111. The sparse density determination unit 112 reads out the parallax image that the parallax calculation unit 102 generated from the stereo camera image of the previous frame and stored in the parallax image storage unit 104, and based on that parallax image determines the sparse density used in the smoothing process performed by the parallax calculation unit 102. That is, the sparse density determination unit 112 of the present embodiment determines the sparse density based on parallax previously calculated by the parallax calculation unit 102, and notifies the parallax calculation unit 102 of the determined sparse density. The condition setting unit 101 in the present embodiment thus sets the conditions for the parallax calculation unit 102 by notifying it of the sparse density determined by the sparse density determination unit 112.
　疎密度決定部112から疎密度の通知が行われると、視差算出部102は、ステレオカメラ撮影画像保存部103から読み出した現フレームのステレオカメラ画像に対して、通知された疎密度に応じた割合で画素の間引きを行う。そして、間引かれなかった残りのステレオカメラ画像の画素を計算対象画素として、平滑化計算を実施する。その後は第1の実施形態と同様に、平滑化計算後のステレオカメラ画像に対して非類似度の計算を画素ごとに実施し、その計算結果に基づいて、画素ごとの視差を演算する。こうして得られた画素単位の視差の算出結果を、視差算出部102は視差画像としてプロセッサ30へ出力するとともに、視差画像保存部104に格納する。 When the sparse density is notified by the sparse density determination unit 112, the parallax calculation unit 102 thins out pixels from the stereo camera image of the current frame, read from the stereo camera image storage unit 103, at a rate corresponding to the notified sparse density. The smoothing calculation is then performed with the remaining, non-thinned pixels of the stereo camera image as the calculation target pixels. Thereafter, as in the first embodiment, the dissimilarity is calculated for each pixel of the smoothed stereo camera image, and the parallax of each pixel is computed from the result. The parallax calculation unit 102 outputs the per-pixel parallax thus obtained to the processor 30 as a parallax image, and also stores it in the parallax image storage unit 104.
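The thinning step can be sketched as a uniform stride over the candidate pixels. Treating the sparse density as the fraction of pixels to drop, and the uniform striding itself, are illustrative assumptions:

```python
def thin_pixels(pixels, sparse_density):
    """Keep a subset of `pixels` as calculation targets, dropping the
    fraction given by `sparse_density` (0.0 = keep every pixel,
    0.75 = keep roughly every 4th pixel)."""
    if not 0.0 <= sparse_density < 1.0:
        raise ValueError("sparse_density must be in [0, 1)")
    stride = max(1, round(1.0 / (1.0 - sparse_density)))
    return pixels[::stride]
```

The surviving pixels would then be handed to the smoothing calculation as the calculation target pixels.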
　図10は、本発明の第5の実施形態に係る画像処理装置において平滑化計算に用いられる画素の間引きの説明図である。図10(a)は、遠距離の像に対応する画素の平滑化計算で用いられる画素の例を示す図である。前フレームの視差画像で表された視差が小さく、そのため遠距離の像に対応すると判断された画素については、図10(a)に示すように、各スキャンライン上の画素を間引かずに、画像上の狭い範囲で密な平滑化演算が実施されるようにする。一方、図10(b)は、近距離の像に対応する画素の平滑化計算で用いられる画素の例を示す図である。前フレームの視差画像で表された視差が大きく、そのため近距離の像に対応すると判断された画素については、図10(b)に示すように、各スキャンライン上の画素を間引いて、画像上の広い範囲で少ない画素を用いて平滑化演算が実施されるようにする。このように、本実施形態の疎密度決定部112では、前フレームで求められた視差の値が大きいほど、像までの距離が近いと判断してステレオカメラ画像から間引く画素の割合が増大するように、疎密度を決定することが好ましい。 FIG. 10 is an explanatory diagram of the pixel thinning used in the smoothing calculation in the image processing apparatus according to the fifth embodiment of the present invention. FIG. 10(a) shows an example of the pixels used in the smoothing calculation for a pixel corresponding to a distant image. For a pixel whose parallax in the parallax image of the previous frame is small, and which is therefore judged to correspond to a distant image, the pixels on each scan line are not thinned out, as shown in FIG. 10(a), so that a dense smoothing operation is performed over a narrow range of the image. FIG. 10(b), on the other hand, shows an example of the pixels used for a pixel corresponding to a nearby image. For a pixel whose parallax in the previous frame is large, and which is therefore judged to correspond to a nearby image, the pixels on each scan line are thinned out, as shown in FIG. 10(b), so that the smoothing operation uses fewer pixels over a wide range of the image. In this way, the sparse density determination unit 112 of the present embodiment preferably determines the sparse density such that the larger the parallax value obtained in the previous frame, the nearer the image is judged to be and the greater the proportion of pixels thinned out from the stereo camera image.
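One way to realize the dense-narrow versus sparse-wide sampling of FIG. 10 is to keep the number of sampled pixels fixed while scaling the stride with the previous-frame parallax; the 16 px threshold and the stride/range values below are assumptions for illustration:

```python
def smoothing_offsets(reach, parallax_px, threshold_px=16):
    """Offsets along one scan line used in the smoothing pass for one
    target pixel.

    Large previous-frame parallax means a near image, so the line is
    sampled sparsely over a wide range (stride 4 here); small parallax
    means a far image, sampled densely over a narrow range (stride 1).
    Either way the number of sampled pixels stays the same."""
    if parallax_px > threshold_px:      # near image: sparse, wide
        stride, span = 4, reach * 4
    else:                               # far image: dense, narrow
        stride, span = 1, reach
    return list(range(0, span + 1, stride))
```

Because both branches return the same number of offsets, the computation cost per pixel is constant while the spatial coverage adapts to the distance.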
　なお、図10では平滑化演算に用いる画素がスキャンライン上に並んでいる例を説明したが、ライン上に並ぶものに限る必要はない。図11は、本発明の第5の実施形態に係る画像処理装置において平滑化計算に用いられる画素の配置の説明図である。図11(a)は、8本のスキャンライン上に並んで配置された各画素を平滑化計算に採用する例を示している。一方、図11(b)は、スキャンライン上には並んでおらず、所定の配置パターンに従って配置された各画素を平滑化計算に採用する例を示している。図11(a)、図11(b)いずれの配置例においても、疎密度決定部112により決定された疎密度に基づいて前述のような画素の間引きを行い、平滑化演算を実施することができる。このときどの画素を間引くかについては、予め決定しておくことができる。ただし、図11に示した画素の配置は一例であり、平滑化計算に用いる画素の配置はこれに限定されない。 Although FIG. 10 illustrates an example in which the pixels used for the smoothing operation lie on scan lines, they need not be arranged on lines. FIG. 11 is an explanatory diagram of pixel arrangements used for the smoothing calculation in the image processing apparatus according to the fifth embodiment of the present invention. FIG. 11(a) shows an example in which the pixels used for the smoothing calculation lie on eight scan lines, while FIG. 11(b) shows an example in which they do not lie on scan lines but follow a predetermined arrangement pattern. With either arrangement, the pixel thinning described above can be performed based on the sparse density determined by the sparse density determination unit 112, and the smoothing operation then carried out; which pixels are thinned out can be decided in advance. The arrangements shown in FIG. 11 are only examples, and the arrangement of the pixels used for the smoothing calculation is not limited to them.
　また、本実施形態では疎密度決定部112において、第1の実施形態で説明したライン数決定部111におけるスキャンライン数の決定と同様に、過去に視差算出部102により算出された視差に基づいて疎密度を決定する例を説明したが、本発明はこれに限定されない。例えば、第2の実施形態におけるスキャンライン数の決定と同様に、過去に視差算出部102により算出された視差に基づき判定された距離レンジの情報をプロセッサ30から取得し、取得した距離レンジの情報に基づいて疎密度を決定してもよい。また、第3の実施形態におけるスキャンライン数の決定と同様に、過去に視差算出部102により算出された視差に基づき検知された物体の情報をプロセッサ30から取得し、取得した物体の情報に基づいて疎密度を決定してもよい。あるいは、第4の実施形態におけるスキャンライン数の決定と同様に、他のセンサで測定されたステレオカメラ画像の範囲に対応する距離情報を取得し、取得した距離情報に基づいて疎密度を決定してもよい。これ以外にも、任意の方法で疎密度を決定することができる。 In the present embodiment, an example has been described in which the sparse density determination unit 112 determines the sparse density based on parallax previously calculated by the parallax calculation unit 102, in the same manner as the line number determination unit 111 of the first embodiment determines the number of scan lines; however, the present invention is not limited to this. For example, as in the determination of the number of scan lines in the second embodiment, information on the distance range determined based on previously calculated parallax may be acquired from the processor 30 and the sparse density determined from it. Likewise, as in the third embodiment, information on objects detected based on previously calculated parallax may be acquired from the processor 30 and the sparse density determined from the acquired object information. Alternatively, as in the fourth embodiment, distance information corresponding to the range of the stereo camera image, measured by another sensor, may be acquired and the sparse density determined based on it. The sparse density can also be determined by any other method.
(第6の実施形態)
 次に本発明の第6の実施形態について説明する。本実施形態では、第1~第4の実施形態で説明したライン数決定部111と、第5の実施形態で説明した疎密度決定部112とを条件設定部101が有する画像処理装置の例を説明する。
(Sixth embodiment)
Next, a sixth embodiment of the present invention will be described. In the present embodiment, an example of an image processing apparatus will be described in which the condition setting unit 101 has both the line number determination unit 111 described in the first to fourth embodiments and the sparse density determination unit 112 described in the fifth embodiment.
　図12は、本発明の第6の実施形態に係る画像処理装置の構成を示すブロック図である。図12に示すように、本実施形態に係る画像処理装置10は、図2で説明した第1の実施形態と比べて、FPGA11の条件設定部101がライン数決定部111に加えて疎密度決定部112をさらに有する点が異なっている。この疎密度決定部112は、第5の実施形態で説明したのと同様である。なお、図12ではメモリ12が視差画像保存部104を有している例を示しているが、ライン数決定部111や疎密度決定部112が第2~第4の各実施形態と同様の手法でスキャンライン数や疎密度の決定を行う場合には、視差画像保存部104をメモリ12に設けずに省略してもよい。また、第4の実施形態で説明したように、他のセンサで測定した距離情報を利用してスキャンライン数や疎密度の決定を行う場合には、当該センサからの距離情報をライン数決定部111や疎密度決定部112へ入力すればよい。 FIG. 12 is a block diagram showing the configuration of an image processing apparatus according to the sixth embodiment of the present invention. As shown in FIG. 12, the image processing apparatus 10 according to the present embodiment differs from the first embodiment described with reference to FIG. 2 in that the condition setting unit 101 of the FPGA 11 further has a sparse density determination unit 112 in addition to the line number determination unit 111. This sparse density determination unit 112 is the same as that described in the fifth embodiment. Although FIG. 12 shows an example in which the memory 12 has the parallax image storage unit 104, the parallax image storage unit 104 may be omitted from the memory 12 when the line number determination unit 111 and the sparse density determination unit 112 determine the number of scan lines and the sparse density by the same methods as in the second to fourth embodiments. Further, as described in the fourth embodiment, when the number of scan lines or the sparse density is determined using distance information measured by another sensor, the distance information from that sensor may be input to the line number determination unit 111 or the sparse density determination unit 112.
 以上説明した本発明の実施形態によれば、以下の作用効果が得られる。 According to the embodiment of the present invention described above, the following operational effects can be obtained.
(1)画像処理装置10は、カメラ20により撮影されたステレオカメラ画像に基づいて視差を算出する。画像処理装置10は、ステレオカメラ画像における計算対象画素を決定するための条件を設定する条件設定部101と、条件設定部101により設定された条件に基づいて、ステレオカメラ画像の中で計算対象画素を決定し、計算対象画素の情報を用いて視差を算出する視差算出部102とを備える。このようにしたので、視差の計算に要する計算量を抑制することができる。 (1) The image processing apparatus 10 calculates parallax based on a stereo camera image captured by the camera 20. The image processing apparatus 10 includes a condition setting unit 101 that sets conditions for determining calculation target pixels in the stereo camera image, and a parallax calculation unit 102 that determines the calculation target pixels in the stereo camera image based on the conditions set by the condition setting unit 101 and calculates the parallax using information on the calculation target pixels. This makes it possible to suppress the amount of computation required for the parallax calculation.
(2)第1~第4、第6の各実施形態において、条件設定部101は、ステレオカメラ画像をスキャンするスキャンラインの数を決定するライン数決定部111を有する。視差算出部102は、ライン数決定部111により決定された数のスキャンラインをステレオカメラ画像に対して設定し、スキャンライン上にあるステレオカメラ画像の画素を、計算対象画素に決定する。このようにしたので、条件に応じた計算対象画素を適切に決定することができる。 (2) In each of the first to fourth and sixth embodiments, the condition setting unit 101 has a line number determination unit 111 that determines the number of scan lines for scanning a stereo camera image. The parallax calculation unit 102 sets the number of scan lines determined by the line number determination unit 111 for the stereo camera image, and determines pixels of the stereo camera image on the scan line as calculation target pixels. With this configuration, it is possible to appropriately determine the calculation target pixel according to the condition.
(3)第1の実施形態において、ライン数決定部111は、過去に視差算出部102により算出された視差に基づいて、スキャンラインの数を決定する。このときライン数決定部111は、視差の値が小さいほどスキャンラインの数を増大させるようにする。このようにしたので、過去の視差情報から画素ごとの距離に応じた最適なスキャンラインの数を決定することができる。 (3) In the first embodiment, the line number determination unit 111 determines the number of scan lines based on parallax previously calculated by the parallax calculation unit 102, increasing the number of scan lines as the parallax value decreases. This makes it possible to determine, from past parallax information, the optimal number of scan lines according to the distance of each pixel.
(4)第2の実施形態において、ライン数決定部111は、過去に視差算出部102により算出された視差に基づき判定された距離レンジの情報を取得し、取得した距離レンジの情報に基づいて、スキャンラインの数を決定する。このようにしたので、過去の視差情報から物体ごとの距離に応じた最適なスキャンラインの数を決定することができる。 (4) In the second embodiment, the line number determination unit 111 acquires information on the distance range determined based on parallax previously calculated by the parallax calculation unit 102, and determines the number of scan lines based on the acquired distance range information. This makes it possible to determine, from past parallax information, the optimal number of scan lines according to the distance of each object.
(5)第3の実施形態において、ライン数決定部111は、過去に視差算出部102により算出された視差に基づき検知された物体の情報を取得し、取得した物体の情報に基づいて、スキャンラインの数を決定する。このようにしたので、過去の視差情報から物体ごとに最適なスキャンラインの数を決定することができる。 (5) In the third embodiment, the line number determination unit 111 acquires information on objects detected based on parallax previously calculated by the parallax calculation unit 102, and determines the number of scan lines based on the acquired object information. This makes it possible to determine, from past parallax information, the optimal number of scan lines for each object.
(6)第4の実施形態において、ライン数決定部111は、ステレオカメラ画像の範囲に対する距離情報を取得し、取得した距離情報に基づいて、スキャンラインの数を決定する。このようにしたので、過去のステレオカメラ画像から視差を算出できなかった場合でも、画素ごとの距離に応じた最適なスキャンラインの数を決定することができる。 (6) In the fourth embodiment, the number-of-lines determining unit 111 acquires distance information for a range of a stereo camera image, and determines the number of scan lines based on the acquired distance information. Thus, even when parallax cannot be calculated from a past stereo camera image, it is possible to determine the optimal number of scan lines according to the distance for each pixel.
(7)第5、第6の各実施形態において、条件設定部101は、ステレオカメラ画像から間引く画素の割合を表す疎密度を決定する疎密度決定部112を有する。視差算出部102は、疎密度決定部112により決定された疎密度に基づいてステレオカメラ画像から間引いた残りの画素を、計算対象画素に決定する。このようにしたので、条件に応じた計算対象画素を適切に決定することができる。 (7) In each of the fifth and sixth embodiments, the condition setting unit 101 includes a sparse density determination unit 112 that determines a sparse density representing a ratio of pixels to be thinned out from a stereo camera image. The parallax calculation unit 102 determines the remaining pixels thinned out from the stereo camera image based on the sparse density determined by the sparse density determination unit 112 as calculation target pixels. With this configuration, it is possible to appropriately determine the calculation target pixel according to the condition.
(8)第5、第6の各実施形態において、疎密度決定部112は、過去に視差算出部102により算出された視差に基づいて、疎密度を決定することができる。このとき疎密度決定部112は、視差の値が大きいほどステレオカメラ画像から間引く画素の割合が増大するように疎密度を決定する。このようにしたので、過去の視差情報から画素ごとの距離に応じた最適な疎密度を決定することができる。 (8) In the fifth and sixth embodiments, the sparse density determination unit 112 can determine the sparse density based on the parallax calculated by the parallax calculation unit 102 in the past. At this time, the sparse density determining unit 112 determines the sparse density such that the larger the value of the parallax, the greater the proportion of pixels thinned out from the stereo camera image. With this configuration, it is possible to determine the optimal sparse density according to the distance for each pixel from the past disparity information.
(9)第5、第6の各実施形態において、疎密度決定部112は、過去に視差算出部102により算出された視差に基づき判定された距離レンジの情報を取得し、取得した距離レンジの情報に基づいて、疎密度を決定することもできる。また、過去に視差算出部102により算出された視差に基づき検知された物体の情報を取得し、取得した物体の情報に基づいて、疎密度を決定することもできる。さらに、ステレオカメラ画像の範囲に対する距離情報を取得し、取得した距離情報に基づいて、疎密度を決定することもできる。このようにすれば、状況に応じて最適な疎密度を決定することができる。 (9) In the fifth and sixth embodiments, the sparse density determination unit 112 can also acquire information on the distance range determined based on parallax previously calculated by the parallax calculation unit 102 and determine the sparse density from the acquired distance range information. It can likewise acquire information on objects detected based on previously calculated parallax and determine the sparse density from the acquired object information, or acquire distance information for the range of the stereo camera image and determine the sparse density from it. In this way, the optimal sparse density can be determined according to the situation.
　以上、本発明の実施形態について述べたが、本発明は前述の実施形態に限定されるものでなく、特許請求の範囲に記載された範囲を逸脱しない範囲で種々の変更を行うことができる。例えば、上記の各実施形態では車両に搭載されて使用される画像処理装置10について説明したが、他のシステムで用いられる画像処理装置においても本発明を適用可能である。 Although embodiments of the present invention have been described above, the present invention is not limited to them, and various changes can be made without departing from the scope of the claims. For example, each of the above embodiments describes the image processing apparatus 10 mounted on and used in a vehicle, but the present invention is also applicable to image processing apparatuses used in other systems.
 以上説明した実施形態や変形例はあくまで一例であり、発明の特徴が損なわれない限り、本発明はこれらの内容に限定されるものではない。また、上記では種々の実施形態や変形例を説明したが、本発明はこれらの内容に限定されるものではない。本発明の技術的思想の範囲内で考えられるその他の態様も本発明の範囲内に含まれる。 The embodiments and modifications described above are merely examples, and the present invention is not limited to these contents as long as the features of the invention are not impaired. Although various embodiments and modifications have been described above, the present invention is not limited to these contents. Other embodiments that can be considered within the scope of the technical concept of the present invention are also included in the scope of the present invention.
 次の優先権基礎出願の開示内容は引用文としてここに組み込まれる。
 日本国特許出願2018-168862(2018年9月10日出願)
The disclosure of the following priority application is incorporated herein by reference.
Japanese patent application 2018-168862 (filed on September 10, 2018)
10…画像処理装置、11…FPGA、12…メモリ、20…カメラ、30…プロセッサ、40…LIDAR、101…条件設定部、102…視差算出部(SGM演算)、103…ステレオカメラ撮影画像保存部、104…視差画像保存部、111…ライン数決定部、112…疎密度決定部 DESCRIPTION OF SYMBOLS 10 ... Image processing apparatus, 11 ... FPGA, 12 ... Memory, 20 ... Camera, 30 ... Processor, 40 ... LIDAR, 101 ... Condition setting part, 102 ... Parallax calculation part (SGM calculation), 103 ... Stereo camera photography image storage part , 104: parallax image storage unit, 111: line number determination unit, 112: sparse density determination unit

Claims (13)

  1.  ステレオカメラにより撮影されたステレオカメラ画像に基づいて視差を算出する画像処理装置であって、
     前記ステレオカメラ画像における計算対象画素を決定するための条件を設定する条件設定部と、
     前記条件設定部により設定された条件に基づいて、前記ステレオカメラ画像の中で前記計算対象画素を決定し、前記計算対象画素の情報を用いて前記視差を算出する視差算出部と、を備える画像処理装置。
    An image processing apparatus that calculates parallax based on a stereo camera image captured by a stereo camera, the image processing apparatus comprising:
    a condition setting unit that sets a condition for determining calculation target pixels in the stereo camera image; and
    a parallax calculation unit that determines the calculation target pixels in the stereo camera image based on the condition set by the condition setting unit, and calculates the parallax using information on the calculation target pixels.
  2.  請求項1に記載の画像処理装置において、
     前記条件設定部は、前記ステレオカメラ画像をスキャンするスキャンラインの数を決定するライン数決定部を有し、
     前記視差算出部は、前記ライン数決定部により決定された数の前記スキャンラインを前記ステレオカメラ画像に対して設定し、前記スキャンライン上にある前記ステレオカメラ画像の画素を、前記計算対象画素に決定する画像処理装置。
    The image processing apparatus according to claim 1, wherein
    the condition setting unit has a line number determination unit that determines the number of scan lines for scanning the stereo camera image, and
    the parallax calculation unit sets the determined number of scan lines on the stereo camera image and determines, as the calculation target pixels, the pixels of the stereo camera image lying on the scan lines.
  3.  請求項2に記載の画像処理装置において、
     前記ライン数決定部は、過去に前記視差算出部により算出された前記視差に基づいて、前記スキャンラインの数を決定する画像処理装置。
    The image processing apparatus according to claim 2,
    wherein the line number determination unit determines the number of scan lines based on the parallax calculated in the past by the parallax calculation unit.
  4.  請求項3に記載の画像処理装置において、
     前記ライン数決定部は、前記視差の値が小さいほど前記スキャンラインの数を増大させる画像処理装置。
    The image processing apparatus according to claim 3,
    wherein the line number determination unit increases the number of scan lines as the value of the parallax decreases.
  5.  請求項2に記載の画像処理装置において、
     前記ライン数決定部は、過去に前記視差算出部により算出された前記視差に基づき判定された距離レンジの情報を取得し、取得した前記距離レンジの情報に基づいて、前記スキャンラインの数を決定する画像処理装置。
    The image processing apparatus according to claim 2,
    wherein the line number determination unit acquires information on a distance range determined based on the parallax calculated in the past by the parallax calculation unit, and determines the number of scan lines based on the acquired distance range information.
  6.  請求項2に記載の画像処理装置において、
     前記ライン数決定部は、過去に前記視差算出部により算出された前記視差に基づき検知された物体の情報を取得し、取得した前記物体の情報に基づいて、前記スキャンラインの数を決定する画像処理装置。
    The image processing apparatus according to claim 2,
    wherein the line number determination unit acquires information on an object detected based on the parallax calculated in the past by the parallax calculation unit, and determines the number of scan lines based on the acquired object information.
  7.  請求項2に記載の画像処理装置において、
     前記ライン数決定部は、前記ステレオカメラ画像の範囲に対する距離情報を取得し、取得した前記距離情報に基づいて、前記スキャンラインの数を決定する画像処理装置。
    The image processing apparatus according to claim 2,
    wherein the line number determination unit acquires distance information for the range of the stereo camera image, and determines the number of scan lines based on the acquired distance information.
  8.  請求項1から請求項7のいずれか一項に記載の画像処理装置において、
     前記条件設定部は、前記ステレオカメラ画像から間引く画素の割合を表す疎密度を決定する疎密度決定部を有し、
     前記視差算出部は、前記疎密度決定部により決定された前記疎密度に基づいて前記ステレオカメラ画像から間引いた残りの画素を、前記計算対象画素に決定する画像処理装置。
    The image processing apparatus according to any one of claims 1 to 7,
    wherein the condition setting unit has a sparse density determination unit that determines a sparse density representing a proportion of pixels to be thinned out from the stereo camera image, and
    the parallax calculation unit determines, as the calculation target pixels, the pixels remaining after pixels are thinned out from the stereo camera image based on the sparse density determined by the sparse density determination unit.
  9.  請求項8に記載の画像処理装置において、
     前記疎密度決定部は、過去に前記視差算出部により算出された前記視差に基づいて、前記疎密度を決定する画像処理装置。
    The image processing device according to claim 8,
    wherein the sparse density determination unit determines the sparse density based on the parallax calculated in the past by the parallax calculation unit.
  10.  請求項9に記載の画像処理装置において、
     前記疎密度決定部は、前記視差の値が大きいほど前記ステレオカメラ画像から間引く画素の割合が増大するように前記疎密度を決定する画像処理装置。
    The image processing apparatus according to claim 9,
    wherein the sparse density determination unit determines the sparse density such that the larger the value of the parallax, the greater the proportion of pixels thinned out from the stereo camera image.
  11.  The image processing device according to claim 8, wherein the sparse density determination unit obtains information on a distance range determined from the parallax previously calculated by the parallax calculation unit, and determines the sparse density based on the obtained distance range information.
  12.  The image processing device according to claim 8, wherein the sparse density determination unit obtains information on an object detected based on the parallax previously calculated by the parallax calculation unit, and determines the sparse density based on the obtained object information.
  13.  The image processing device according to claim 8, wherein the sparse density determination unit obtains distance information for the range of the stereo camera image and determines the sparse density based on the obtained distance information.
PCT/JP2019/033762 2018-09-10 2019-08-28 Image processing device WO2020054429A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201980049733.5A CN112513572B (en) 2018-09-10 2019-08-28 Image processing apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018-168862 2018-09-10
JP2018168862A JP7066580B2 (en) 2018-09-10 2018-09-10 Image processing device

Publications (1)

Publication Number Publication Date
WO2020054429A1 true WO2020054429A1 (en) 2020-03-19

Family

ID=69777549

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/033762 WO2020054429A1 (en) 2018-09-10 2019-08-28 Image processing device

Country Status (3)

Country Link
JP (1) JP7066580B2 (en)
CN (1) CN112513572B (en)
WO (1) WO2020054429A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001266128A (en) * 2000-03-21 2001-09-28 Nippon Telegr & Teleph Corp <Ntt> Method and device for obtaining depth information and recording medium recording depth information obtaining program
JP2004279031A (en) * 2003-03-12 2004-10-07 Toyota Central Res & Dev Lab Inc Apparatus and method for detecting distance distribution
JP2008241273A (en) * 2007-03-26 2008-10-09 Ihi Corp Laser radar device and its control method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004309868A (en) * 2003-04-08 2004-11-04 Sony Corp Imaging device and stereoscopic video generating device
JPWO2012111755A1 (en) * 2011-02-18 2014-07-07 ソニー株式会社 Image processing apparatus and image processing method
CN104871058B (en) * 2012-12-19 2017-04-12 富士胶片株式会社 Image processing device, imaging device and image processing method
CN104537668B (en) * 2014-12-29 2017-08-15 浙江宇视科技有限公司 A kind of quick parallax image computational methods and device
WO2018037479A1 (en) * 2016-08-23 2018-03-01 株式会社日立製作所 Image processing device, stereo camera device, and image processing method


Also Published As

Publication number Publication date
CN112513572B (en) 2022-10-18
JP2020041891A (en) 2020-03-19
CN112513572A (en) 2021-03-16
JP7066580B2 (en) 2022-05-13

Similar Documents

Publication Publication Date Title
RU2529594C1 (en) Calibration device, distance measurement system, calibration method and calibration programme
JP5404263B2 (en) Parallax calculation method and parallax calculation device
JP6589926B2 (en) Object detection device
JP5752618B2 (en) Stereo parallax calculation device
KR102015706B1 (en) Apparatus and method for measureing a crack in a concrete surface
US9898669B2 (en) Traveling road surface detection device and traveling road surface detection method
CN105335955A (en) Object detection method and object detection apparatus
JP6566768B2 (en) Information processing apparatus, information processing method, and program
JP6515650B2 (en) Calibration apparatus, distance measuring apparatus and calibration method
JP2006090896A (en) Stereo image processor
JP2019207456A (en) Geometric transformation matrix estimation device, geometric transformation matrix estimation method, and program
JP2013174494A (en) Image processing device, image processing method, and vehicle
JP2017181476A (en) Vehicle location detection device, vehicle location detection method and vehicle location detection-purpose computer program
JP2008309637A (en) Obstruction measuring method, obstruction measuring apparatus, and obstruction measuring system
TWI571099B (en) Device and method for depth estimation
JP2015230703A (en) Object detection device and object detection method
JP2009092551A (en) Method, apparatus and system for measuring obstacle
JP2021051347A (en) Distance image generation apparatus and distance image generation method
WO2020054429A1 (en) Image processing device
JP2006072757A (en) Object detection system
JPWO2017090097A1 (en) Vehicle external recognition device
JP2018092547A (en) Image processing apparatus, image processing method, and program
JPH1096607A (en) Object detector and plane estimation method
WO2020095549A1 (en) Imaging device
JP6936557B2 (en) Search processing device and stereo camera device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19859413

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19859413

Country of ref document: EP

Kind code of ref document: A1