WO2021245972A1 - Computation device and parallax calculation method


Info

Publication number
WO2021245972A1
Authority
WO
WIPO (PCT)
Prior art keywords
parallax
correction
unit
comparison
pixel
Prior art date
Application number
PCT/JP2021/003114
Other languages
French (fr)
Japanese (ja)
Inventor
圭介 稲田
裕介 内田
進一 野中
雅士 高田
Original Assignee
日立Astemo株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日立Astemo株式会社
Priority to DE112021001906.6T (DE112021001906T5)
Publication of WO2021245972A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/593 Depth or shape recovery from multiple images from stereo images
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C3/00 Measuring distances in line of sight; Optical rangefinders
    • G01C3/02 Details
    • G01C3/06 Use of electric means to obtain final indication
    • G01C3/08 Use of electric radiation detectors
    • G01C3/085 Use of electric radiation detectors with electronic parallax measurement

Definitions

  • the present invention relates to an arithmetic unit and a parallax calculation method.
  • Patent Document 1 discloses an image processing apparatus that performs image processing on a plurality of image data having different viewpoints. The apparatus comprises acquisition means for acquiring the plurality of image data, and arithmetic processing means that sets a standard image region in the image of the first image data (the standard image), sets a reference image region in the image of the second image data (the reference image), and calculates the amount of misalignment between the plurality of image data by a correlation calculation between these regions. The arithmetic processing means determines a standard image region having a shape such that the brightness change between adjacent pixels at the boundary of the region in the parallax direction is less than a threshold value, and performs the correlation calculation between the determined standard image region and a reference image region having a shape corresponding to the shape of the standard image region.
  • The arithmetic unit includes: an input unit into which the reference image acquired by the first imaging unit and the comparison image acquired by the second imaging unit are input; a target point setting unit that sets a parallax calculation target pixel, which is the pixel in the reference image for which parallax is calculated; a comparison point setting unit that sets a comparison target pixel, which is a candidate for the pixel in the comparison image corresponding to the parallax calculation target pixel; a storage unit that stores standard shape information defining an evaluation target area relative to a predetermined reference pixel; a pre-correction reference area specifying unit that specifies a pre-correction reference area based on the parallax calculation target pixel and the standard shape information; a pre-correction comparison area specifying unit that specifies a pre-correction comparison area based on the comparison target pixel; an evaluation value generation unit that calculates, for each pixel included in the pre-correction reference area, an evaluation value that is an index of similarity to the corresponding pixel in the pre-correction comparison area; a correction unit that maintains or narrows the evaluation target area by correcting the standard shape information into corrected shape information based on a comparison between the evaluation value and a threshold value; and a parallax calculation unit that calculates the parallax of the parallax calculation target pixel based on the information of the reference image in the corrected reference area, specified based on the parallax calculation target pixel and the corrected shape information, and the information of the comparison image in the corrected comparison area, specified based on the comparison target pixel and the corrected shape information.
  • The parallax calculation method includes: setting a parallax calculation target pixel in the reference image; setting a comparison target pixel, which is a candidate for the pixel in the comparison image corresponding to the parallax calculation target pixel; specifying a pre-correction reference region based on the parallax calculation target pixel and the standard shape information; specifying a pre-correction comparison region based on the comparison target pixel; calculating, for each pixel included in the pre-correction reference region, an evaluation value that is an index of similarity to the corresponding pixel in the pre-correction comparison region; maintaining or narrowing the evaluation target area by correcting the standard shape information into corrected shape information based on a comparison between the evaluation value and a threshold value; and calculating the parallax of the parallax calculation target pixel based on the information of the reference image in the corrected reference region, specified based on the parallax calculation target pixel and the corrected shape information, and the information of the comparison image in the corrected comparison region, specified based on the comparison target pixel and the corrected shape information.
  • According to the present invention, the calculation accuracy of parallax can be improved.
  • Figure showing another example of standard shape information and corrected shape information
  • Functional configuration diagram of the matching block generation unit in the second embodiment
  • Functional block diagram of the arithmetic unit in the fourth embodiment
  • Flowchart showing the processing of the parallax generation unit in the fourth embodiment
  • FIG. 1 is a functional configuration diagram of the arithmetic unit 1 according to the present invention.
  • the arithmetic unit 1 is connected to the first image pickup unit 2 and the second image pickup unit 3.
  • The arithmetic unit 1 has, as its functions, an input unit 10, an evaluation value generation unit 11, a matching block generation unit 12, a parallax generation unit 13, a recognition processing unit 4, a vehicle control unit 5, and a storage unit 15, which is a non-volatile storage device.
  • The input unit 10 is a communication interface conforming to, for example, IEEE 802.3.
  • The input unit 10 acquires the captured images obtained by the first image pickup unit 2 and the second image pickup unit 3.
  • the captured image of the first imaging unit 2 is referred to as a reference image 100
  • the captured image of the second imaging unit 3 is referred to as a reference image 101.
  • the reference image 100 and the reference image 101 are convenient names, and both may be interchanged.
  • the input unit 10 inputs the reference image 100 and the reference image 101 to the evaluation value generation unit 11, the matching block generation unit 12, and the parallax generation unit 13.
  • The evaluation value generation unit 11, the matching block generation unit 12, the parallax generation unit 13, the recognition processing unit 4, and the vehicle control unit 5 are realized, for example, by a CPU (central processing unit) expanding a program stored in the storage unit 15 into a RAM, which is a readable/writable storage device, and executing it. However, instead of the combination of the CPU, the storage unit 15, and the RAM, they may be realized by an FPGA (Field Programmable Gate Array), which is a rewritable logic circuit, or by an ASIC (Application Specific Integrated Circuit), which is an integrated circuit for specific applications. They may also be realized by a combination of different configurations, for example, a combination of the CPU, the storage unit 15, the RAM, and an FPGA. Note that the CPU and RAM are not shown in FIG. 1.
  • the evaluation value generation unit 11 calculates the evaluation value 103 and outputs it to the matching block generation unit 12 as described later. The processing of the evaluation value generation unit 11 will be described in detail later.
  • To the matching block generation unit 12, the reference image 100 and the reference image 101 are input from the input unit 10, the search area information 106 is input from the parallax generation unit 13, and the evaluation value 108 is input from the evaluation value generation unit 11.
  • The matching block generation unit 12 generates the pre-correction reference area 921 and the pre-correction comparison area 923 and outputs them to the evaluation value generation unit 11, and generates the correction shape information 912 and outputs it to the parallax generation unit 13. More specifically, the matching block generation unit 12 generates the pre-correction reference region 921 and the pre-correction comparison region 923 using the search area information 106. Further, the matching block generation unit 12 generates the correction shape information 912 using the reference image 100, the reference image 101, and the evaluation value 108.
  • To the parallax generation unit 13, the reference image 100 and the reference image 101 are input from the input unit 10, and the correction shape information 912 is input from the matching block generation unit 12.
  • the parallax generation unit 13 generates the parallax information 105 and outputs it to the recognition processing unit 4.
  • the recognition processing unit 4 performs various recognition processes by inputting the parallax information 105.
  • An example of the recognition processing in the recognition processing unit 4 is three-dimensional object detection using parallax information.
  • Examples of the recognition target include position information, type information, motion information, and danger information of the subject.
  • Examples of position information include the direction and distance from the own vehicle.
  • Examples of type information include pedestrians, adults, children, the elderly, animals, rockfalls, bicycles, peripheral vehicles, peripheral structures, and curbs.
  • Examples of motion information include pedestrian and bicycle wobbling, jumping out, crossing, moving direction, moving speed, and movement trajectory.
  • Examples of danger information include a pedestrian jumping out, falling rocks, and abnormal operation of surrounding vehicles such as sudden stops, sudden deceleration, and sudden steering.
  • the recognition result generated by the recognition processing unit 4 is supplied to the vehicle control unit 5.
  • the vehicle control unit 5 performs various vehicle controls based on the recognition result.
  • Examples of vehicle control performed by the vehicle control unit 5 include brake control, steering wheel control, accelerator control, in-vehicle lamp control, horn generation, in-vehicle camera control, and output of information about observation objects around the image pickup device to peripheral vehicles connected via a network and to remote center equipment. A specific example is speed and brake control according to the parallax information 105 of an obstacle existing in front of the vehicle. Instead of the parallax information, distance information that can be generated based on the parallax information may be used.
  • The vehicle control unit 5 may perform subject detection processing based on an image processing result using the reference image 100 or the reference image 101.
  • An image obtained through the first image pickup unit 2 or the second image pickup unit 3 may be displayed on a display device connected to the vehicle control unit 5 so that a viewer can recognize it.
  • Information on the observation target detected based on the image processing result may be supplied to an information device that processes map information, congestion information, and other traffic information.
  • FIG. 2 is a diagram illustrating definitions of terms in the present embodiment.
  • the upper part of FIG. 2 shows that the third information is generated by combining the first information and the second information.
  • the lower part of FIG. 2 is a diagram illustrating the first information and the third information.
  • the first information in FIG. 2 is the parallax calculation target pixel 901 and the comparison target pixel 902. Both the parallax calculation target pixel 901 and the comparison target pixel 902 are information for defining a region to be evaluated, that is, an evaluation target region.
  • The parallax calculation target pixel 901 is the pixel in the reference image 100 for which the parallax is calculated, more specifically, information on the coordinates of that pixel.
  • The parallax calculation target pixel 901 has a width of 1 pixel in the X direction and a width of 1 pixel in the Y direction.
  • That is, the parallax calculation target pixel 901 in this embodiment is a single pixel.
  • The comparison target pixel 902 is a pixel in the reference image 101 set as a candidate to pair with the parallax calculation target pixel 901, more specifically, information on the coordinates of that pixel.
  • the number of pixels of the comparison target pixel 902 is the same as that of the parallax calculation target pixel 901.
  • the comparison target pixel 902 exists inside the search range 930 set with reference to the parallax calculation target pixel 901.
  • the second information in FIG. 2 is standard shape information 911 and corrected shape information 912.
  • The standard shape information 911 is information indicating a region relative to a reference pixel, for example, information such as "a region of ±7 pixels in the X direction and ±3 pixels in the Y direction centered on the reference pixel".
  • the standard shape information 911 is common to all the parallax calculation target pixels 901 and the comparison target pixels 902.
  • the standard shape information 911 is preset and stored in the storage unit 15.
  • Each region represented by the standard shape information 911 and the corrected shape information 912 can be called a "matching block" because it is used for matching against the reference image 100 and the reference image 101. Further, the region represented by the standard shape information 911 is also referred to as the "matching block before correction", and the region represented by the correction shape information 912 is also referred to as the "matching block after correction".
  • the corrected shape information 912 is information that is calculated for each comparison target pixel 902 and indicates a region relative to the reference pixel.
  • The corrected shape information 912 is, for example, information such as "a region excluding a total of 4 pixels, one at each corner, from a rectangular region of ±7 pixels in the X direction and ±3 pixels in the Y direction centered on the reference pixel".
  • the corrected shape information 912 is commonly used for the reference image 100 and the reference image 101.
  • the area represented by the corrected shape information 912 is an area equal to or smaller than the area represented by the standard shape information 911. Details will be described later.
  • the third information in FIG. 2 is the pre-correction reference area 921, the post-correction reference area 922, the pre-correction comparison area 923, and the post-correction comparison area 924. All of these four pieces of information are information indicating the coordinates of a plurality of pixels in the reference image 100 or the reference image 101.
  • the pre-correction reference area 921 is an area obtained by applying the standard shape information 911 with the parallax calculation target pixel 901 as a reference in the reference image 100.
  • Since the standard shape information 911 is common to all pixels as described above, if, for example, the parallax calculation target pixel 901 moves by one pixel in the X direction, the entire pre-correction reference region 921 moves by one pixel in the X direction; and no matter how the comparison target pixel 902 changes, the pre-correction reference region 921 does not change.
  • The corrected reference area 922 is an area obtained by applying the corrected shape information 912 with the parallax calculation target pixel 901 as a reference in the reference image 100. Since the corrected shape information 912 exists for each combination of the parallax calculation target pixel 901 and the comparison target pixel 902 as described above, the corrected reference region 922 may change when either the parallax calculation target pixel 901 or the comparison target pixel 902 changes. Since the standard shape information 911 and the corrected shape information 912 may be the same, in that case the pre-correction reference area 921 and the post-correction reference area 922 are the same.
  • the pre-correction comparison area 923 is an area obtained by applying the standard shape information 911 with reference to the comparison target pixel 902 in the reference image 101. Since the standard shape information 911 is common to all pixels as described above, for example, if the comparison target pixel 902 moves by one pixel in the X direction, the entire pre-correction comparison area 923 moves by one pixel in the X direction.
  • The corrected comparison area 924 is an area obtained by applying the corrected shape information 912 with the comparison target pixel 902 as a reference in the reference image 101. Since the corrected shape information 912 exists for each combination of the parallax calculation target pixel 901 and the comparison target pixel 902 as described above, the corrected comparison area 924 may change when either the parallax calculation target pixel 901 or the comparison target pixel 902 changes.
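The relationship between the first, second, and third information above can be sketched as follows. This is an illustrative Python sketch under the assumption that shape information is represented as a set of pixel offsets; the helper names are hypothetical and not from the patent.

```python
# Hypothetical sketch: shape information (second information) as a set of
# (dx, dy) offsets relative to a reference pixel; applying it to a concrete
# pixel (first information) yields a region of coordinates (third information).

def make_standard_shape(half_x=7, half_y=3):
    """Standard shape information 911: +/-half_x by +/-half_y rectangle."""
    return {(dx, dy) for dx in range(-half_x, half_x + 1)
                     for dy in range(-half_y, half_y + 1)}

def apply_shape(shape, ref_pixel):
    """Apply shape information with ref_pixel as the reference, giving a region."""
    x, y = ref_pixel
    return {(x + dx, y + dy) for dx, dy in shape}

standard_shape = make_standard_shape()                # 15 x 7 = 105 offsets
region_100 = apply_shape(standard_shape, (300, 300))  # pre-correction reference area
region_101 = apply_shape(standard_shape, (250, 300))  # pre-correction comparison area
# Moving the reference pixel by one pixel in X shifts the entire region by one pixel.
```

Because the same shape is applied relative to each reference pixel, moving the reference pixel translates the region without changing its shape, which matches the behavior described for the pre-correction areas.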
  • FIG. 3 is a detailed view of the functional configuration of the matching block generation unit 12. However, in FIG. 3, the operation of the evaluation value generation unit 11 will also be described.
  • the matching block generation unit 12 includes a first generation unit 20, a second generation unit 21, and a threshold value generation unit 22.
  • The first generation unit 20 repeatedly generates the pre-correction reference area 921 and the pre-correction comparison area 923 based on the search area information 106 supplied from the parallax generation unit 13, and supplies them to the second generation unit 21 and the evaluation value generation unit 11.
  • The search area information 106 consists of the parallax calculation target pixel 901, the standard shape information 911, and information on the search range in the reference image 101 corresponding to the parallax calculation target pixel 901. That is, the first generation unit 20 generates the pre-correction reference region 921 from the parallax calculation target pixel 901 and the standard shape information 911, and generates the pre-correction comparison areas 923 from the search range in the reference image 101 and the standard shape information 911.
  • For example, when the search range spans 100 pixels, 100 pre-correction comparison areas 923 are generated. The first generation unit 20 then outputs, for example, 100 combinations of the pre-correction reference region 921 and a pre-correction comparison region 923, without changing the pre-correction reference region 921.
  • The evaluation value generation unit 11 calculates the evaluation value 103 each time a combination of the pre-correction reference region 921 and the pre-correction comparison region 923 is received from the first generation unit 20, and outputs it to the threshold value generation unit 22 and the second generation unit 21.
  • the evaluation value generation unit 11 acquires specific pixel information in the pre-correction reference region 921, for example, a luminance value, by combining the pre-correction reference region 921 and the reference image 100.
  • the evaluation value generation unit 11 acquires specific pixel information in the pre-correction comparison area 923, for example, a luminance value, by combining the pre-correction comparison area 923 and the reference image 101.
  • the evaluation value generation unit 11 calculates the evaluation value 103 for each pixel by using the information of the pixel of the pre-correction reference area 921 and the information of the pixel of the pre-correction comparison area 923.
  • An example of the evaluation value 103 is the amount of luminance difference between each pixel in the matching block of the reference image 100 and the corresponding pixel in the matching block of the reference image 101.
  • SAD: Sum of Absolute Differences
  • ZSAD: Zero-mean Sum of Absolute Differences
  • SSD: Sum of Squared Differences
  • NCC: Normalized Cross Correlation
  • ZNCC: Zero-mean Normalized Cross Correlation
  • Since the pre-correction reference area 921 and the pre-correction comparison area 923 are generated from the standard shape information 911, which is common to their different reference points, the two areas have the same number of pixels.
  • For example, when the standard shape information 911 defines an area of 15 pixels in the X direction and 7 pixels in the Y direction, the pre-correction reference area 921 and the pre-correction comparison area 923 each have 105 pixels, and 105 evaluation values 103 are calculated.
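The per-pixel evaluation value computation described above can be sketched as follows. This is a minimal illustration that uses the absolute luminance difference as the evaluation value 103; the block contents are toy values, not data from the patent.

```python
# Illustrative sketch: one evaluation value 103 per pixel, computed as the
# absolute luminance difference between corresponding pixels of the
# pre-correction reference area (reference image 100) and the pre-correction
# comparison area (reference image 101). A 15 x 7 block yields 105 values.

def evaluation_values(ref_block, cmp_block):
    """Return per-pixel evaluation values (absolute luminance differences)."""
    return [[abs(r - c) for r, c in zip(ref_row, cmp_row)]
            for ref_row, cmp_row in zip(ref_block, cmp_block)]

# Toy 15 x 7 luminance blocks (values are illustrative only).
ref_block = [[(x + y) % 256 for x in range(15)] for y in range(7)]
cmp_block = [[(x + y + 2) % 256 for x in range(15)] for y in range(7)]

ev = evaluation_values(ref_block, cmp_block)
```

With a 15 x 7 standard shape, `ev` holds exactly 105 evaluation values, one per pixel of the matching block, as stated in the text.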
  • Other examples of the evaluation value 103 include the luminance fluctuation amount, the luminance fluctuation direction, or the luminance fluctuation pattern between each pixel in the matching blocks of the reference image 100 and the reference image 101 and its peripheral pixels.
  • An example of the luminance fluctuation pattern is the increase/decrease pattern, read from the left, over three horizontal pixels consisting of the target pixel and its left and right adjacent pixels. For example, when the left adjacent pixel value is 10, the target pixel value is 25, and the right adjacent pixel value is 80, the increase/decrease pattern is "increase → increase".
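The increase/decrease pattern in the example above can be sketched in a few lines. The function name and the "flat" case for equal values are illustrative assumptions.

```python
# Illustrative sketch: luminance increase/decrease pattern over three
# horizontal pixels (left neighbor, target, right neighbor), read from the left.

def fluctuation_pattern(left, target, right):
    def step(a, b):
        if b > a:
            return "increase"
        if b < a:
            return "decrease"
        return "flat"  # assumed handling for equal values (not in the text)
    return (step(left, target), step(target, right))

# Example from the text: 10 -> 25 -> 80 gives "increase -> increase".
print(fluctuation_pattern(10, 25, 80))  # ('increase', 'increase')
```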
  • To the second generation unit 21, the pre-correction reference region 921 and the pre-correction comparison region 923 are supplied from the first generation unit 20, the similarity determination threshold value 201 is supplied from the threshold value generation unit 22, and the evaluation value 103 is supplied from the evaluation value generation unit 11.
  • The second generation unit 21 generates the correction shape information 912 by removing, from the standard shape information 911, the pixels whose evaluation value 103 exceeds the similarity determination threshold value 201 (hereinafter referred to as invalid pixels), and supplies it to the parallax generation unit 13.
  • pixels other than the invalid pixels in the standard shape information 911 will be referred to as effective pixels.
  • the corrected shape information 912 may be expressed as an effective pixel flag for each pixel.
  • This flag information is assigned to each pixel of the correction shape information 912; a pixel is an effective pixel when the value is "1" and an invalid pixel when the value is "0".
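The thresholding step that produces the effective-pixel flags can be sketched as follows. This is an illustrative sketch with toy evaluation values; the flag layout (1 = effective, 0 = invalid) follows the text.

```python
# Illustrative sketch: the second generation unit removes pixels whose
# evaluation value 103 exceeds the similarity determination threshold 201,
# expressing the corrected shape information 912 as per-pixel flags
# (1 = effective pixel, 0 = invalid pixel).

def corrected_shape_flags(evaluation_values, threshold):
    return [[1 if v <= threshold else 0 for v in row]
            for row in evaluation_values]

# Toy 3 x 5 evaluation values: the four corners exceed the threshold,
# leaving a region of effective pixels with the corners removed (cf. FIG. 4).
ev = [[9, 1, 1, 1, 9],
      [1, 1, 1, 1, 1],
      [9, 1, 1, 1, 9]]
flags = corrected_shape_flags(ev, threshold=5)
# flags == [[0, 1, 1, 1, 0], [1, 1, 1, 1, 1], [0, 1, 1, 1, 0]]
```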
  • FIG. 4 is a diagram showing an example of processing of the second generation unit 21.
  • the upper part of FIG. 4 shows the luminance value in each pixel of the pre-correction reference area 921
  • the middle part of FIG. 4 shows the luminance value in each pixel of the pre-correction comparison area 923.
  • Here, the shape of the pre-correction reference region 921 and the pre-correction comparison region 923 is the shape defined by the standard shape information 911.
  • In this example, the evaluation value 103 is the absolute value of the luminance difference.
  • the lower part of FIG. 4 shows the absolute value of the difference in the luminance value between the upper part of FIG. 4 and the middle part of FIG.
  • the value of each pixel in the lower part of FIG. 4 is provided by the evaluation value generation unit 11 as the evaluation value 103.
  • As shown in the lower part of FIG. 4, the second generation unit 21 determines that each of the four corner pixels is an invalid pixel, and calculates, as the correction shape information 912, the cross-shaped region of effective pixels centered on the reference pixel.
  • the threshold value generation unit 22 calculates the similarity determination threshold value 201.
  • The similarity determination threshold value 201 may be a predetermined fixed threshold value, or, although not described in this embodiment, a threshold value designated from the outside via the input unit may be used.
  • Alternatively, there is a method of using a function based on the evaluation value 103 of the parallax calculation target pixel in the reference image 100 and the corresponding pixel in the reference image 101, for example, a similarity determination threshold value generation function.
  • FIG. 5 is a diagram showing an example of the similarity determination threshold generation function 900.
  • the horizontal axis of FIG. 5 is an evaluation value 103 (SV) of the parallax calculation target pixel 901 in the reference image 100 and the comparison target pixel 902 in the reference image 101.
  • the vertical axis of FIG. 5 is the similarity determination threshold TH0.
  • LMmax, LMmin, and OFFSET may be fixed values, or may be specified from the outside via the input unit, although not described in this embodiment.
  • the similarity determination threshold generation function 900 is expressed by, for example, Equation 1 below.
  • Similarity determination threshold TH0 = LMmin (SV ≤ LMmin); TH0 = K0 × SV + OFFSET (LMmin < SV ≤ LMmax); TH0 = LMmax (LMmax < SV) ... (Equation 1)
  • Equation 1 shows that there are three cases: when SV is LMmin or less, when SV is larger than LMmin and LMmax or less, and when SV is larger than LMmax.
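Equation 1 can be sketched directly in code. The concrete values of K0, OFFSET, LMmin, and LMmax are left unspecified in this embodiment, so they are free parameters here.

```python
# Illustrative sketch of the similarity determination threshold generation
# function 900 (Equation 1). K0, OFFSET, LMmin, and LMmax are free
# parameters; the patent does not fix their concrete values.

def similarity_threshold(sv, k0, offset, lm_min, lm_max):
    if sv <= lm_min:          # case 1: SV is LMmin or less
        return lm_min
    if sv <= lm_max:          # case 2: LMmin < SV <= LMmax
        return k0 * sv + offset
    return lm_max             # case 3: SV is larger than LMmax

# Example with illustrative parameters k0=1, offset=0, lm_min=10, lm_max=100:
print(similarity_threshold(50, 1.0, 0.0, 10, 100))  # 50.0
```

The middle case is the linear segment of FIG. 5, clamped between LMmin and LMmax at the two ends.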
  • Another example of the similarity determination threshold value 201 is a function based on distance information. For example, the fixed threshold value, the threshold value specified from the outside, and the various parameters (LMmin, LMmax, OFFSET, etc.) used in the function based on the evaluation value 103 may be changed based on parallax information and distance information obtained in advance.
  • FIG. 6 is a diagram showing an example of standard shape information 911 and corrected shape information 912.
  • In FIG. 6, the size of the area indicated by the standard shape information 911 is 7 pixels in the vertical direction and 15 pixels in the horizontal direction, so the pre-correction reference area 921 has the same size.
  • the parallax calculation target pixel 901 is arranged in the center of the pre-correction reference region 921. Further, the pixels shown by hatching in the figure are examples of invalid pixels.
  • The region indicated by the correction shape information 912 generated by the second generation unit 21 is a region in which the invalid pixels are removed from the pre-correction reference region 921, and is a region based on the parallax calculation target pixel 901.
  • FIG. 7 is a flowchart showing the processing of the parallax generation unit 13.
  • the parallax generation unit 13 executes the process shown in FIG. 7 each time the reference image 100 and the reference image 101 are received from the input unit 10.
  • the parallax generation unit 13 specifies the parallax calculation target pixel 901 from the reference image 100.
  • In this step, among the pixels included in the reference image 100, one pixel for which the parallax has not yet been calculated is specified as the parallax calculation target pixel 901.
  • the parallax generation unit 13 reads the standard shape information 911 from the storage unit 15.
  • the parallax generation unit 13 specifies the search range in the reference image 101.
  • Assume that the search range in the reference image 101 is fixed at ±100 pixels in the X direction and ±5 pixels in the Y direction, centered on the coordinates of the parallax calculation target pixel 901 in the reference image 100.
  • Then, when the coordinates of the parallax calculation target pixel 901 are (300, 300), the search range in the reference image 101 is specified as (200, 295) to (400, 305).
  • The number of pixels included in the search range is 2211, which is the product of 100×2+1 and 5×2+1.
  • the search range in the reference image 101 may be predetermined, or different values may be set for each pixel of the reference image 100 by a method not described in the present embodiment.
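The search-range arithmetic in the example above can be checked with a short sketch; note that (100×2+1)×(5×2+1) = 201 × 11 = 2211. The function name is illustrative.

```python
# Illustrative sketch of the search-range example: +/-100 pixels in X and
# +/-5 pixels in Y around the parallax calculation target pixel (300, 300).

def search_range(center, half_x=100, half_y=5):
    cx, cy = center
    top_left = (cx - half_x, cy - half_y)
    bottom_right = (cx + half_x, cy + half_y)
    count = (2 * half_x + 1) * (2 * half_y + 1)  # pixels in the range
    return top_left, bottom_right, count

tl, br, n = search_range((300, 300))
print(tl, br, n)  # (200, 295) (400, 305) 2211
```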
  • the parallax generation unit 13 transmits the search area information 106 to the matching block generation unit 12.
  • In the above example, the information that the coordinates of the parallax calculation target pixel 901 are (300, 300), the standard shape information, and the information on the search range in the reference image 101 are transmitted to the matching block generation unit 12.
  • Next, the parallax generation unit 13 receives the correction shape information 912 for the entire search area; in the above example, the parallax generation unit 13 receives 2211 pieces of correction shape information 912.
  • In step S306, the parallax generation unit 13 identifies the corrected comparison area 924 most similar to the corrected reference area 922 within the search area of the reference image 101.
  • In this identification, the corrected shape information 912 acquired in step S305 is used. The details of this step will be described later with reference to FIG. 8.
  • Next, the parallax generation unit 13 calculates the distance of the parallax calculation target pixel from the arithmetic unit 1. This distance is calculated using, for example, the difference between the X coordinate of the parallax calculation target pixel in the reference image 100 and the X coordinate of the center of the block identified in step S306 in the reference image 101, together with a look-up table created in advance.
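The disparity-to-distance conversion via a precomputed look-up table can be sketched with the standard stereo relation Z = f·B/d. The focal length and baseline below are purely hypothetical values introduced for illustration; the patent only states that a look-up table created in advance is used.

```python
# Hypothetical sketch: distance from disparity via a precomputed look-up
# table, using the standard stereo relation Z = f * B / d. The focal length
# (pixels) and baseline (meters) are illustrative, not from the patent.

FOCAL_LENGTH_PX = 1000.0   # assumed focal length in pixels
BASELINE_M = 0.35          # assumed camera baseline in meters

# Look-up table "created in advance": disparity (pixels) -> distance (m).
DISTANCE_LUT = {d: FOCAL_LENGTH_PX * BASELINE_M / d for d in range(1, 201)}

def distance_from_disparity(x_ref, x_cmp):
    """Distance for the parallax calculation target pixel, from the X
    coordinate in image 100 and the matched block center's X in image 101."""
    disparity = abs(x_ref - x_cmp)
    return DISTANCE_LUT.get(disparity)  # None if outside the table

# Example: disparity of 50 pixels -> 1000 * 0.35 / 50 = 7.0 m
print(distance_from_disparity(300, 250))  # 7.0
```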
  • Next, the parallax generation unit 13 determines whether the distance has been calculated for all the pixels in the reference image 100. When it determines that the distance has been calculated for all the pixels, the process shown in FIG. 7 is terminated; when it determines that there are pixels of the reference image 100 for which the distance has not been calculated, the process returns to step S301.
  • FIG. 8 is a flowchart showing the details of step S306 in FIG. 7.
  • the search area information is specified by the process up to step S303, and the corrected shape information 912 is obtained for the entire search area in step S305.
  • In step S321, which is the first process in FIG. 8, the parallax generation unit 13 specifies the comparison target pixel 902 in the reference image 101.
  • The comparison target pixel 902 is any pixel whose coordinates are included in the range specified by the information of the search range in the reference image 101 contained in the search area information and that has not yet been processed in steps S322 to S325.
  • In step S322, the parallax generation unit 13 reads the correction shape information 912 corresponding to the comparison target pixel 902.
  • the target of reading in this step is included in the information obtained in step S305 of FIG.
  • In step S323, the parallax generation unit 13 applies the corrected shape information 912 with the parallax calculation target pixel 901 as a reference in the reference image 100, and specifies the corrected reference region 922.
  • In step S324, the parallax generation unit 13 applies the corrected shape information 912 with the comparison target pixel 902 as a reference in the reference image 101, and specifies the corrected comparison region 924.
  • In step S325, the parallax generation unit 13 calculates the similarity between the luminance information included in the corrected reference region 922 specified in step S323 and the luminance information included in the corrected comparison region 924 specified in step S324.
  • In step S326, the parallax generation unit 13 determines whether or not the similarity has been calculated for the entire region included in the range of coordinates specified by the information of the search range in the reference image 101.
  • the parallax generation unit 13 proceeds to step S327 when it is determined that the similarity has been calculated for the entire region, and returns to step S321 when it is determined that there are pixels for which the similarity has not been calculated.
  • In step S327, the parallax generation unit 13 identifies the corrected comparison area 924 most similar to the corrected reference area 922, for example the corrected comparison area 924 having the smallest SAD (Sum of Absolute Differences), and ends the process shown in FIG. 8.
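The similarity of step S325 and the selection of step S327 rely on SAD. A minimal sketch of SAD restricted to the pixels kept by the corrected shape information, with hypothetical 2×2 regions and masks, shows why narrowing the mask removes the influence of an outlier pixel:

```python
def sad(region_a, region_b, mask):
    """SAD over the pixels the corrected shape information keeps (mask value 1)."""
    total = 0
    for row_a, row_b, row_m in zip(region_a, region_b, mask):
        for a, b, m in zip(row_a, row_b, row_m):
            if m:
                total += abs(a - b)
    return total

# Hypothetical luminance regions and masks.
ref = [[10, 12], [14, 16]]
cmp_a = [[10, 12], [14, 200]]      # identical except one outlier pixel
cmp_b = [[11, 13], [15, 17]]
mask_full = [[1, 1], [1, 1]]       # standard shape information: keep everything
mask_corrected = [[1, 1], [1, 0]]  # corrected shape information: drop the outlier
```

With the full mask the outlier dominates the SAD of `cmp_a`; the corrected mask excludes it, so the truly matching region is no longer penalized.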
  • FIG. 9 is a flowchart showing the processing of the matching block generation unit 12.
  • The matching block generation unit 12 executes the process shown in FIG. 9 each time the search area information 106 is received from the parallax generation unit 13. Although this reception is the trigger of the process, it is drawn as the first step in FIG. 9 in order to show the reception of the search area information 106 explicitly.
  • In step S331, the matching block generation unit 12 receives the search area information 106 from the parallax generation unit 13. In the following step S332, the matching block generation unit 12 generates one pre-correction reference area 921 and a plurality of pre-correction comparison areas 923 using the received search area information 106, and proceeds to step S333. In step S333, the matching block generation unit 12 transmits one set of the pre-correction reference area 921 and a pre-correction comparison area 923 to the evaluation value generation unit 11; the pre-correction comparison area 923 transmitted in this step is one that has not been transmitted yet.
  • In step S334, the matching block generation unit 12 receives the evaluation values 103 from the evaluation value generation unit 11.
  • The number of evaluation values 103 received in this step equals the number of pixels of the standard shape information 911, and each evaluation value corresponds to one pixel of the standard shape information 911, in other words, to one pixel of the pre-correction reference region 921.
  • To calculate the evaluation values 103, the evaluation value generation unit 11 also refers to the reference image 100 and the reference image 101 to acquire the luminance information.
  • In step S335, the matching block generation unit 12 causes the threshold value generation unit 22 to generate the similarity determination threshold value 201.
  • In step S336, the second generation unit 21 compares each evaluation value 103 with the similarity determination threshold value 201.
  • The second generation unit 21 then generates the corrected shape information 912 based on the comparison result in step S336 and transmits it to the parallax generation unit 13.
  • Next, the matching block generation unit 12 determines whether or not all the pre-correction comparison regions 923 generated in step S332 have been processed. When it determines that all have been processed, it ends the process shown in FIG. 9; when an unprocessed pre-correction comparison area 923 exists, it returns to step S333.
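The generation of the corrected shape information 912 from the per-pixel comparison in step S336 can be sketched as follows. The evaluation values and the threshold are hypothetical; 1 and 0 stand for kept and dropped pixels, in the same spirit as the "1111" notation used earlier:

```python
def correct_shape(evaluation_values, threshold):
    """Keep a pixel (1) when its evaluation value is below the similarity
    determination threshold, drop it (0) otherwise; the evaluation target area is
    therefore maintained or narrowed, never widened."""
    return [[1 if v < threshold else 0 for v in row] for row in evaluation_values]

# Hypothetical per-pixel evaluation values 103 (e.g. absolute luminance differences)
# and a hypothetical similarity determination threshold 201.
evaluation_values = [[3, 5, 120],
                     [2, 4, 6]]
threshold = 10
corrected = correct_shape(evaluation_values, threshold)
```

Only the pixel with the extreme evaluation value (120) is dropped, so the corrected shape is the standard shape narrowed by one pixel.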
  • FIG. 10 is a diagram showing details of the functions of the arithmetic unit 1.
  • FIG. 10 shows the details of the functional configuration diagram shown in FIG. 1, but the recognition processing unit 4 and the vehicle control unit 5 are not shown for convenience of illustration.
  • Newly described in FIG. 10 are a target point setting unit 801, a comparison point setting unit 802, a pre-correction reference area specifying unit 803, a pre-correction comparison area specifying unit 804, a correction unit 805, a post-correction reference area specifying unit 806, a post-correction comparison area specifying unit 807, and a parallax calculation unit 808.
  • the target point setting unit 801 sets the parallax calculation target pixel 901, which is the target pixel for which the parallax is calculated in the reference image 100.
  • the target point setting unit 801 is included in the parallax generation unit 13.
  • the comparison point setting unit 802 identifies the comparison target pixel 902 which is a pixel candidate in the reference image 101 corresponding to the parallax calculation target pixel 901.
  • the comparison point setting unit 802 is included in the first generation unit 20 of the matching block generation unit 12.
  • The pre-correction reference area specifying unit 803 specifies the pre-correction reference area 921, with the parallax calculation target pixel 901 as a reference, based on the standard shape information 911.
  • the pre-correction reference region specifying unit 803 is included in the first generation unit 20 of the matching block generation unit 12.
  • the pre-correction comparison area specifying unit 804 specifies the pre-correction comparison area 923 with reference to the comparison target pixel 902 based on the standard shape information 911.
  • the pre-correction comparison area specifying unit 804 is included in the first generation unit 20 of the matching block generation unit 12.
  • the correction unit 805 maintains or narrows the evaluation target area by correcting the standard shape information 911 to the correction shape information 912 based on the comparison between the evaluation value 103 and the similarity determination threshold value 201.
  • the correction unit 805 is included in the second generation unit 21 of the matching block generation unit 12.
  • The corrected reference area specifying unit 806 specifies the corrected reference area 922, with the parallax calculation target pixel 901 as a reference, based on the corrected shape information 912.
  • the corrected reference region specifying unit 806 is included in the parallax generation unit 13.
  • The corrected comparison area specifying unit 807 specifies the corrected comparison area 924, with the comparison target pixel 902 as a reference, based on the corrected shape information 912.
  • the corrected comparison area specifying unit 807 is included in the parallax generation unit 13.
  • The parallax calculation unit 808 calculates the parallax of the parallax calculation target pixel 901 based on the information of the reference image 100 in the corrected reference region 922, which is specified with the parallax calculation target pixel 901 as a reference based on the corrected shape information 912, and the information of the reference image 101 in the corrected comparison area 924, which is specified with the comparison target pixel 902 as a reference based on the corrected shape information 912.
  • According to the first embodiment described above, the arithmetic unit 1 includes the input unit 10 into which the reference image 100 acquired by the first imaging unit 2 and the reference image 101 acquired by the second imaging unit 3 are input, the storage unit 15 that stores the standard shape information 911 defining an evaluation target area based on a predetermined pixel, the pre-correction reference area specifying unit 803 that specifies the pre-correction reference area 921 with the parallax calculation target pixel 901 as a reference based on the standard shape information 911, the correction unit 805 that maintains or narrows the evaluation target area by correcting the standard shape information 911 to the corrected shape information 912, and the parallax calculation unit 808 that calculates the parallax of the parallax calculation target pixel 901 based on the information of the reference image 100 in the corrected reference area 922 specified with the parallax calculation target pixel 901 as a reference based on the corrected shape information 912 and the information of the reference image 101 in the corrected comparison region 924 specified with the comparison target pixel 902 as a reference based on the corrected shape information 912. In this way, the arithmetic unit 1 corrects the evaluation target area according to the degree of similarity between the pixels of the reference image 100 and the reference image 101 before correction, so the adverse effect of a small number of pixels having an extreme difference can be eliminated and the calculation accuracy of the parallax can be improved.
  • The similarity determination threshold value 201 is determined based on the degree of similarity between the information of the reference image 100 in the pre-correction reference region 921 and the information of the reference image 101 in the pre-correction comparison region 923. Therefore, a more appropriate threshold value can be set than when the similarity determination threshold value 201 is a fixed value.
  • The evaluation value generation unit 11 calculates an evaluation value for each pixel included in the pre-correction reference region 921 based on the difference in luminance from the corresponding pixel in the pre-correction comparison region 923. Therefore, the deviation between the reference image 100 and the reference image 101 can be evaluated.
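The per-pixel evaluation value just described can be sketched as follows, taken here as the absolute luminance difference between corresponding pixels. The luminances are hypothetical 8-bit values:

```python
def evaluation_values(reference_region, comparison_region):
    """One evaluation value per pixel of the pre-correction reference region:
    the absolute luminance difference from the corresponding pixel of the
    pre-correction comparison region."""
    return [[abs(a - b) for a, b in zip(row_r, row_c)]
            for row_r, row_c in zip(reference_region, comparison_region)]

# Hypothetical luminances; the last pixel deviates strongly between the images.
ref_region = [[100, 102], [98, 250]]
cmp_region = [[101, 101], [97, 40]]
```

The large value (210) at the deviating pixel is exactly what the later threshold comparison uses to drop that pixel from the evaluation target area.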
  • FIG. 11 is a diagram showing another example of the standard shape information 911 and the corrected shape information 912.
  • the size of the region indicated by the standard shape information 911 is 6 pixels in the vertical direction and 12 pixels in the horizontal direction, and the reference region 921 before correction also has the same size.
  • the parallax calculation target pixel 901 has a size of 2 vertical pixels and 2 horizontal pixels.
  • the corrected shape information 912 is the same as the standard shape information 911.
  • the parallax calculation target pixel 901 may be one pixel as shown in the first embodiment, or may be a pixel block composed of a plurality of pixels as shown in FIG.
  • When the parallax calculation target pixel 901 is composed of a plurality of pixels, any of the average value, the minimum value, the maximum value, and the median value of the similarities of the pixels included in the parallax calculation target pixel 901 may be adopted.
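The aggregation choices allowed by this modification can be sketched as follows; the similarity values are hypothetical:

```python
import statistics

def aggregate(similarities, mode="average"):
    """Aggregate the per-pixel similarities of a multi-pixel parallax calculation
    target pixel by the average, minimum, maximum, or median, as the modification
    allows."""
    if mode == "average":
        return sum(similarities) / len(similarities)
    if mode == "minimum":
        return min(similarities)
    if mode == "maximum":
        return max(similarities)
    if mode == "median":
        return statistics.median(similarities)
    raise ValueError(mode)

# Hypothetical similarities of the pixels inside one 2x2 target pixel block.
sims = [4, 8, 6, 2]
```

Which aggregate is preferable is a design choice: the minimum favors the best-matching pixel, the maximum is conservative, and the average and median smooth over noise.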
  • the arithmetic unit 1 does not have to include the recognition processing unit 4 and the vehicle control unit 5. Further, the function of the arithmetic unit 1 may be realized by a plurality of hardware devices, for example, two or more electronic control devices.
  • Modification 3: The parallax generation unit 13 only needs to be able to specify the shift in the baseline length direction between a pixel in the reference image 100 and the corresponding pixel of the reference image 101, and does not have to calculate a specific distance. That is, the process of step S307 in FIG. 7 may be omitted.
  • Second Embodiment: A second embodiment of the arithmetic unit will be described with reference to FIGS. 12 and 13.
  • the same components as those in the first embodiment are designated by the same reference numerals, and the differences will be mainly described.
  • the points not particularly described are the same as those in the first embodiment.
  • This embodiment differs from the first embodiment mainly in that the matching block generation unit includes a parallax invalidity determination unit.
  • FIG. 12 is a functional configuration diagram of the matching block generation unit 12A in the second embodiment.
  • the matching block generation unit 12A further includes a parallax invalidity determination unit 30 in addition to the configuration of the first embodiment.
  • The parallax invalidity determination unit 30 determines, based on the corrected shape information 912 supplied from the second generation unit 21, whether or not the total number of valid pixels in the corrected shape information 912 is equal to or less than a predetermined number. When it determines that the total number is equal to or less than the predetermined number, the parallax invalidity determination unit 30 outputs the parallax invalid information 300, which is information indicating that the parallax corresponding to the comparison target pixel 902 is invalid. When it determines that the total number is larger than the predetermined number, the parallax invalidity determination unit 30 outputs the corrected shape information 912 as it is.
  • the parallax generation unit 13 determines that the combination of the parallax calculation target pixel 901 and the comparison target pixel 902 corresponding to the parallax invalid information 300 is invalid, and excludes them from the candidates for calculating the parallax.
  • FIG. 13 is a diagram showing an example of the parallax invalidity determination unit 30.
  • the size of the area indicated by the standard shape information 911 is 7 pixels in the vertical direction and 15 pixels in the horizontal direction, so that the reference area 921 before correction also has the same size.
  • the parallax calculation target pixel 901 is arranged in the center of the pre-correction reference region 921. Further, the pixels indicated by hatching in the figure indicate invalid pixels.
  • In the example of FIG. 13, the number of effective pixels is small; for example, the effective pixels do not reach the threshold value of 20% of the pixels of the standard shape information 911, so the parallax invalidity determination unit 30 determines that the parallax is invalid.
  • As described above, the arithmetic unit 1A includes the parallax invalidity determination unit 30 that changes the corrected shape information 912 to the parallax invalid information 300 when the number of pixels in the evaluation target area defined by the corrected shape information 912 is smaller than a predetermined threshold value.
  • the parallax calculation unit 808 sets the parallax corresponding to the combination of the parallax calculation target pixel 901 and the comparison target pixel 902 corresponding to the parallax invalid information 300 as the invalid parallax. Therefore, it is possible to prevent the output of poor quality parallax information that occurs when the total number of effective pixels is small.
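The determination by the parallax invalidity determination unit 30 can be sketched as follows, using the 20% ratio mentioned in the example of FIG. 13; the masks are hypothetical:

```python
# Example threshold from the text: parallax is invalid when fewer than 20% of the
# pixels of the standard shape remain valid in the corrected shape information.
VALID_RATIO_THRESHOLD = 0.20

def parallax_is_invalid(corrected_shape):
    """True when the ratio of valid pixels (1) in the corrected shape information
    falls below the threshold, i.e. parallax invalid information 300 is output."""
    total = sum(len(row) for row in corrected_shape)
    valid = sum(sum(row) for row in corrected_shape)
    return valid / total < VALID_RATIO_THRESHOLD

# Hypothetical 2x5 corrected shape masks.
mostly_invalid = [[1, 0, 0, 0, 0],
                  [0, 0, 0, 0, 0]]   # 1 of 10 pixels valid -> 10%, invalid
mostly_valid = [[1, 1, 1, 0, 0],
                [1, 1, 0, 0, 0]]     # 5 of 10 pixels valid -> 50%, valid
```

Rejecting such sparse masks prevents a SAD computed over only a handful of pixels from producing an unreliable match.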
  • FIG. 14 is a functional configuration diagram of the matching block generation unit 12B in the third embodiment.
  • the matching block generation unit 12B further includes an area determination unit 40 in addition to the configuration of the first embodiment.
  • The area determination unit 40 detects objects by image processing on at least the pre-correction reference area 921 of the reference image 100. Then, for an area where the same object as that at the parallax calculation target pixel 901 exists, the area determination unit 40 outputs the corrected shape information 912 output by the second generation unit 21 as it is, and for an area where an object different from that at the parallax calculation target pixel 901 exists, it outputs foreign-object parallax invalid information 400, which is information indicating that the parallax is invalid.
  • For example, when the area determination unit 40 detects a road and a tire in the pre-correction reference area 921 and a road exists at the parallax calculation target pixel 901, the area determination unit 40 outputs the foreign-object parallax invalid information 400 for the tire area.
  • the object detection targeting the reference image 100 may be executed by another configuration included in the arithmetic unit 1, or may be executed by an apparatus different from the arithmetic unit 1.
  • The object detection by the area determination unit 40 may be realized simply by using, for example, a change in luminance. That is, when there is a change in luminance equal to or greater than a predetermined threshold value from the parallax calculation target pixel 901 toward the peripheral portion of the reference image 100, the pixel region from the pixel having that change to the boundary of the corrected reference region 922 may be made subject to the foreign-object parallax invalid information 400.
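The luminance-based variant above can be sketched in one dimension as follows; the luminance row, target position, and threshold are hypothetical:

```python
def mask_beyond_edge(luminances, center, threshold):
    """Scan outward from the target pixel; once a luminance step equal to or greater
    than the threshold is found, mark the pixels from that step to the region
    boundary as invalid (0)."""
    mask = [1] * len(luminances)
    # scan to the right of the target pixel
    for i in range(center + 1, len(luminances)):
        if abs(luminances[i] - luminances[i - 1]) >= threshold:
            for j in range(i, len(luminances)):
                mask[j] = 0
            break
    # scan to the left of the target pixel
    for i in range(center - 1, -1, -1):
        if abs(luminances[i] - luminances[i + 1]) >= threshold:
            for j in range(i, -1, -1):
                mask[j] = 0
            break
    return mask

# Hypothetical luminance row with a sharp edge between indices 4 and 5.
row = [90, 92, 91, 90, 92, 180, 182]
```

Pixels beyond the sharp edge are presumed to belong to a different object and are excluded, which is the one-dimensional analogue of marking the tire area invalid in the road example.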
  • The parallax generation unit 13 determines that the combination of the parallax calculation target pixel 901 and the comparison target pixel 902 corresponding to the foreign-object parallax invalid information 400 is invalid, and excludes it from the candidates for calculating the parallax.
  • the arithmetic unit 1B includes an area determination unit 40 that identifies the boundary of an object based on the information of the reference image 100 in the pre-correction reference area 921.
  • The parallax calculation unit 808 calculates the parallax using the information of the pixels that, according to the boundary in the reference image 100, belong to the same region as the parallax calculation target pixel 901. Therefore, the parallax can be calculated without using the information of pixels that belong to an object different from the parallax calculation target pixel 901 and are therefore expected to have a different parallax.
  • the area determination unit 40 may recognize the object by using the information output by the sensor (not shown). For example, the area determination unit 40 may perform object detection using an output of a monocular camera, LiDAR (Light Detection and Ranging), an ultrasonic sensor, a millimeter wave radar, or the like.
  • Fourth Embodiment: A fourth embodiment of the arithmetic unit will be described with reference to FIGS. 15 to 19.
  • the same components as those in the first embodiment are designated by the same reference numerals, and the differences will be mainly described.
  • the points not particularly described are the same as those in the first embodiment.
  • the present embodiment is different from the first embodiment mainly in that the calculation is made more efficient.
  • In the first embodiment, the degree of similarity was not designated by a reference numeral; in the present embodiment, the degree of similarity is designated by reference numeral 107.
  • FIG. 15 is a functional configuration diagram of the arithmetic unit 1C according to the fourth embodiment.
  • the operations of the matching block generation unit 12C and the parallax generation unit 13C are different from those of the first embodiment. Further, in the present embodiment, the matching block generation unit 12C transmits the similarity degree 107 to the parallax generation unit 13C instead of the correction shape information 912.
  • FIG. 16 is a flowchart showing the processing of the parallax generation unit 13C in the fourth embodiment.
  • Compared with FIG. 7 in the first embodiment, FIG. 16 replaces steps S305 and S306 with steps S305A and S306A.
  • the points not particularly described are the same as those in FIG. 7.
  • In step S305A, the parallax generation unit 13C receives the similarity 107 for the entire search range from the matching block generation unit 12C.
  • In step S306A, the parallax generation unit 13C uses the similarity 107 for the entire search range received in step S305A to identify the corrected comparison area 924 most similar to the corrected reference area 922, for example the corrected comparison area 924 with the minimum SAD. Since the processes from step S307 onward are the same as in FIG. 7, their description is omitted.
  • FIGS. 17 and 18 are flowcharts showing the processing of the matching block generation unit 12C in the fourth embodiment.
  • FIG. 17 is the same as FIG. 9 in the first embodiment up to step S335, and the processes after that differ.
  • The points not particularly described are the same as those in FIG. 9.
  • In step S351, which is executed after step S335, the matching block generation unit 12C initializes the variable sum to zero and proceeds to step S352.
  • In step S352, the matching block generation unit 12C selects one of the evaluation values received in step S334; the evaluation value selected here is one that has not been selected so far.
  • In step S353, the matching block generation unit 12C compares the evaluation value selected in step S352 with the threshold value generated in step S335. The matching block generation unit 12C then proceeds to step S354 in FIG. 18 via the circled A.
  • In step S354, the matching block generation unit 12C evaluates the result of the comparison in step S353.
  • the matching block generation unit 12C proceeds to step S355 when it is determined that the evaluation value is less than the threshold value, and proceeds to step S356 when it is determined that the evaluation value is equal to or more than the threshold value.
  • In step S355, the matching block generation unit 12C adds the evaluation value to the variable sum and proceeds to step S356.
  • In step S356, the matching block generation unit 12C determines whether or not all the evaluation values received in step S334 have been evaluated.
  • When it determines that an unevaluated value remains, the matching block generation unit 12C returns to step S352 in FIG. 17 via the circled B.
  • In step S357, the matching block generation unit 12C transmits the value of the variable sum as the similarity 107 to the parallax generation unit 13C.
  • In the next step, the matching block generation unit 12C determines whether or not all the pre-correction comparison regions 923 have been processed. When it determines that all have been processed, the processing shown in FIG. 18 is terminated; when an unprocessed pre-correction comparison area 923 exists, the process returns to step S333 in FIG. 17 via the circled C.
  • the above is the processing of the matching block generation unit 12C.
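The accumulation of steps S351 to S357 can be sketched as follows; the evaluation values and threshold are hypothetical. The resulting total equals the SAD over the pixels kept by the threshold comparison, which is why the parallax generation unit 13C no longer needs to re-read pixel values:

```python
def similarity_107(evaluation_values, threshold):
    """Threshold-gated accumulation of the evaluation values 103 into the
    similarity 107, mirroring the flow of FIGS. 17 and 18."""
    total = 0                              # the variable sum initialized in step S351
    for value in evaluation_values:        # the loop of steps S352 and S356
        if value < threshold:              # the comparison of steps S353 and S354
            total += value                 # the accumulation of step S355
    return total                           # the value transmitted in step S357

# Hypothetical evaluation values 103 and similarity determination threshold 201.
values = [3, 5, 120, 2]
threshold = 10
```

Only the below-threshold values contribute (3 + 5 + 2), so the single pass produces the same masked SAD that the first embodiment computed in two stages.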
  • FIG. 19 is a diagram showing details of the function in the fourth embodiment, and corresponds to FIG. 10 in the first embodiment.
  • In FIG. 19, compared with FIG. 10, the corrected reference area specifying unit 806 and the corrected comparison area specifying unit 807 are moved from the parallax generation unit 13 to the second generation unit 21 of the matching block generation unit 12C. Since the second generation unit 21 compares the evaluation values 103 with the similarity determination threshold value 201 and adds them to the variable sum (S353 to S355), it can be regarded as collectively performing the generation of the corrected shape information 912, the generation of the corrected reference area 922 and the corrected comparison area 924, and the calculation of the similarity 107.
  • The parallax generation unit 13C including the parallax calculation unit 808 calculates the parallax using the evaluation values 103 calculated by the evaluation value generation unit 11. Specifically, the parallax generation unit 13C acquires and uses the similarity 107 obtained by the second generation unit 21 of the matching block generation unit 12C aggregating the evaluation values 103 calculated by the evaluation value generation unit 11. Therefore, the parallax generation unit 13C does not need to specify the corrected comparison area 924 using the comparison target pixel 902 again and does not need to refer to the pixel values of the reference image 100 and the reference image 101, so the arithmetic processing of the arithmetic unit 1C as a whole can be reduced. That is, compared with the prior art, it is possible to perform higher-speed processing with the same arithmetic resources, or to reduce the arithmetic resources while keeping the processing speed the same.
  • In the fourth embodiment, the parallax generation unit 13C used the similarity 107 received from the matching block generation unit 12C as it was for the evaluation. However, the parallax generation unit 13C may process the similarity 107 received from the matching block generation unit 12C and use the result for the evaluation; for example, when the similarity 107 is a SAD value, a ZSAD value may be calculated using it and used for the evaluation.
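As a sketch of the ZSAD variant mentioned above: ZSAD (zero-mean SAD) subtracts each region's mean luminance before taking absolute differences, which makes the evaluation insensitive to a uniform brightness offset between the two images. The regions below are hypothetical, and how the patent derives ZSAD from the transmitted similarity 107 is not specified here; this shows only the common definition:

```python
def zsad(region_a, region_b):
    """Zero-mean SAD: SAD after removing each region's mean luminance."""
    mean_a = sum(region_a) / len(region_a)
    mean_b = sum(region_b) / len(region_b)
    return sum(abs((a - mean_a) - (b - mean_b)) for a, b in zip(region_a, region_b))

# Hypothetical luminances: the same pattern with a uniform +5 brightness offset.
a = [10, 20, 30, 40]
b = [15, 25, 35, 45]
```

Plain SAD of these regions is 20, but ZSAD is 0, correctly treating the offset pair as a perfect match.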
  • the present invention is not limited to the above-described embodiment, but includes various modifications.
  • the above-described embodiment has been described in detail in order to explain the present invention in an easy-to-understand manner, and is not necessarily limited to the one including all the described configurations.
  • it is possible to replace a part of the configuration of one embodiment with the configuration of another embodiment and it is also possible to add the configuration of another embodiment to the configuration of one embodiment.
  • each of the above configurations may be partially or wholly configured by hardware or may be configured to be realized by executing a program on a processor.
  • control lines and information lines indicate those that are considered necessary for explanation, and do not necessarily indicate all the control lines and information lines in the product. In practice, it can be considered that almost all configurations are interconnected.
  • the configuration of the functional block is only an example.
  • Several functional configurations shown as separate functional blocks may be integrally configured, or the configuration represented by one functional block diagram may be divided into two or more functions. Further, a configuration in which a part of the functions of each functional block is provided in another functional block may be provided.
  • the storage unit 15 in which the program of the arithmetic unit 1 is stored may be a rewritable storage device such as a flash memory. Further, the program may be read from another device via a medium that can be used by the input unit 10 of the arithmetic unit 1.
  • The medium refers to, for example, a storage medium that can be attached to and detached from an input/output interface, or a communication medium, that is, a wired, wireless, or optical network, or a carrier wave or digital signal propagating in the network.
  • Some or all of the functions realized by the program may be realized by a hardware circuit or an FPGA.
  • Reference signs: 803 … pre-correction reference area specifying unit; 804 … pre-correction comparison area specifying unit; 805 … correction unit; 806 … corrected reference area specifying unit; 807 … corrected comparison area specifying unit; 808 … parallax calculation unit; 900 … parallax determination threshold generation function; 901 … parallax calculation target pixel; 902 … comparison target pixel; 911 … standard shape information; 912 … corrected shape information; 921 … pre-correction reference area; 922 … corrected reference area; 923 … pre-correction comparison area; 924 … corrected comparison area; 930 … search range


Abstract

This computation device comprises an input part into which a base image acquired by a first imaging part and a reference image acquired by a second imaging part are inputted, a target point setting part for setting a parallax calculation target pixel which is a target pixel for calculating parallax in the base image, a comparison point setting part for setting a comparison target pixel which is a pixel candidate corresponding to the parallax calculation target pixel in the reference image, a storage part for storing standard shape information which prescribes an estimation target region based on a predetermined pixel, a pre-correction base region specification part for specifying a pre-correction base region based on the parallax calculation target pixel from the standard shape information, a pre-correction comparison region specification part for specifying a pre-correction comparison region based on the comparison target pixel from the standard shape information, an estimation value generation part for calculating an estimation value for each pixel included in the pre-correction base region, the estimation value being an index of similarity to a corresponding pixel in the pre-correction comparison region, a correction part for correcting the standard shape information into corrected shape information from comparison between the estimation value and a threshold value, thereby maintaining or narrowing the estimation target region, and a parallax calculation part for calculating parallax of the parallax calculation target pixel from information of the base image in a post-correction base region specified on the basis of the parallax calculation target pixel from the corrected shape information and information of the reference image in a post-correction comparison region specified on the basis of the comparison target pixel from the corrected shape information.

Description

Arithmetic unit and parallax calculation method
The present invention relates to an arithmetic unit and a parallax calculation method.
A method of calculating the distance between a camera and an observation object using a stereo camera and performing recognition processing of the observation object is widely known. Patent Document 1 discloses an image processing apparatus that performs image processing of a plurality of image data having different viewpoints, comprising: an acquisition means for acquiring the plurality of image data; and an arithmetic processing means that sets a base image area in the image of first image data serving as a base image among the plurality of image data acquired by the acquisition means, sets a reference image area in the image of second image data serving as a reference image, and calculates the parallax amount of the plurality of image data by a correlation operation between the base image area and the reference image area, wherein the arithmetic processing means determines the base image area to have a shape in which the luminance change value of adjacent pixels at the boundary portion of the base image area in the parallax direction is less than a threshold value, and performs the correlation operation between the determined base image area and the reference image area having a shape corresponding to the shape of the base image area.
Japanese Patent Application Laid-Open No. 2020-21126
In the invention described in Patent Document 1, there is room for improvement in the accuracy of parallax calculation.
A computation device according to a first aspect of the present invention comprises: an input unit to which a base image acquired by a first imaging unit and a reference image acquired by a second imaging unit are input; a target point setting unit that sets a parallax calculation target pixel, which is a pixel in the base image for which parallax is to be calculated; a comparison point setting unit that sets a comparison target pixel, which is a candidate pixel in the reference image corresponding to the parallax calculation target pixel; a storage unit that stores standard shape information defining an evaluation target region relative to a given pixel; a pre-correction base region specifying unit that specifies, based on the standard shape information, a pre-correction base region relative to the parallax calculation target pixel; a pre-correction comparison region specifying unit that specifies, based on the standard shape information, a pre-correction comparison region relative to the comparison target pixel; an evaluation value generation unit that calculates, for each pixel included in the pre-correction base region, an evaluation value that is an index of similarity to the corresponding pixel in the pre-correction comparison region; a correction unit that maintains or narrows the evaluation target region by correcting the standard shape information into corrected shape information based on a comparison between the evaluation values and a threshold; and a parallax calculation unit that calculates the parallax of the parallax calculation target pixel based on information of the base image in a post-correction base region specified relative to the parallax calculation target pixel based on the corrected shape information, and information of the reference image in a post-correction comparison region specified relative to the comparison target pixel based on the corrected shape information.
A parallax calculation method according to a second aspect of the present invention is executed by a computation device comprising an input unit to which a base image acquired by a first imaging unit and a reference image acquired by a second imaging unit are input, and a storage unit that stores standard shape information defining an evaluation target region relative to a given pixel, the method comprising: setting a parallax calculation target pixel, which is a pixel in the base image for which parallax is to be calculated; specifying a comparison target pixel, which is a candidate pixel in the reference image corresponding to the parallax calculation target pixel; specifying, based on the standard shape information, a pre-correction base region relative to the parallax calculation target pixel; specifying, based on the standard shape information, a pre-correction comparison region relative to the comparison target pixel; calculating, for each pixel included in the pre-correction base region, an evaluation value that is an index of similarity to the corresponding pixel in the pre-correction comparison region; maintaining or narrowing the evaluation target region by correcting the standard shape information into corrected shape information based on a comparison between the evaluation values and a threshold; and calculating the parallax of the parallax calculation target pixel based on information of the base image in a post-correction base region specified relative to the parallax calculation target pixel based on the corrected shape information, and information of the reference image in a post-correction comparison region specified relative to the comparison target pixel based on the corrected shape information.
According to the present invention, the accuracy of parallax calculation can be improved.
Fig. 1: Functional configuration diagram of the computation device
Fig. 2: Diagram explaining the definitions of terms
Fig. 3: Detailed diagram of the functional configuration of the matching block generation unit
Fig. 4: Diagram showing an example of the processing of the second generation unit
Fig. 5: Diagram showing an example of the similarity determination threshold generation function
Fig. 6: Diagram showing an example of standard shape information and corrected shape information
Fig. 7: Flowchart showing the processing of the parallax generation unit
Fig. 8: Flowchart showing the details of step S306 in Fig. 7
Fig. 9: Flowchart showing the processing of the matching block generation unit
Fig. 10: Diagram showing details of the functions of the computation device in the first embodiment
Fig. 11: Diagram showing another example of standard shape information and corrected shape information
Fig. 12: Functional configuration diagram of the matching block generation unit in the second embodiment
Fig. 13: Diagram showing an example of the parallax invalidity determination unit
Fig. 14: Functional configuration diagram of the matching block generation unit in the third embodiment
Fig. 15: Functional configuration diagram of the computation device in the fourth embodiment
Fig. 16: Flowchart showing the processing of the parallax generation unit in the fourth embodiment
Fig. 17: Flowchart showing the processing of the matching block generation unit in the fourth embodiment
Fig. 18: Flowchart showing the processing of the matching block generation unit in the fourth embodiment
Fig. 19: Diagram showing details of the functions of the computation device in the fourth embodiment
-First Embodiment-
A first embodiment of the computation device will now be described with reference to Figs. 1 to 10.
(Overview of Functional Configuration)
Fig. 1 is a functional configuration diagram of the computation device 1 according to the present invention. The computation device 1 is connected to a first imaging unit 2 and a second imaging unit 3. The computation device 1 includes, as its functions, an input unit 10, an evaluation value generation unit 11, a matching block generation unit 12, a parallax generation unit 13, a recognition processing unit 4, a vehicle control unit 5, and a storage unit 15, which is a non-volatile storage device.
The input unit 10 is, for example, a communication interface conforming to IEEE 802.3. The input unit 10 acquires the captured images obtained by the first imaging unit 2 and the second imaging unit 3. Hereinafter, the captured image of the first imaging unit 2 is referred to as the base image 100, and the captured image of the second imaging unit 3 is referred to as the reference image 101. Note that "base image" and "reference image" are names of convenience, and the two may be interchanged. The input unit 10 supplies the base image 100 and the reference image 101 to the evaluation value generation unit 11, the matching block generation unit 12, and the parallax generation unit 13.
The evaluation value generation unit 11, the matching block generation unit 12, the parallax generation unit 13, the recognition processing unit 4, and the vehicle control unit 5 are realized, for example, by a CPU (central processing unit) loading a program stored in the storage unit 15 into a RAM, which is a readable and writable storage device, and executing it. Alternatively, instead of the combination of the CPU, the storage unit 15, and the RAM, they may be realized by an FPGA (Field Programmable Gate Array), which is a rewritable logic circuit, or by an ASIC (Application Specific Integrated Circuit). They may also be realized by a different combination of components, for example the CPU, the storage unit 15, the RAM, and an FPGA. Note that the CPU and the RAM are not shown in Fig. 1.
The evaluation value generation unit 11 receives the base image 100 and the reference image 101 from the input unit 10, and receives the pre-correction base region 921 and the pre-correction comparison region 923 from the matching block generation unit 12. As described later, the evaluation value generation unit 11 calculates the evaluation value 103 and outputs it to the matching block generation unit 12. The processing of the evaluation value generation unit 11 is detailed later.
The matching block generation unit 12 receives the base image 100 and the reference image 101 from the input unit 10, the search region information 106 from the parallax generation unit 13, and the evaluation value 108 from the evaluation value generation unit 11. The matching block generation unit 12 generates the pre-correction base region 921 and the pre-correction comparison region 923 and outputs them to the evaluation value generation unit 11, and generates the corrected shape information 912 and outputs it to the parallax generation unit 13. More specifically, the matching block generation unit 12 generates the pre-correction base region 921 and the pre-correction comparison region 923 using the search region information 106, and generates the corrected shape information 912 using the base image 100, the reference image 101, and the evaluation value 108.
The parallax generation unit 13 receives the base image 100 and the reference image 101 from the input unit 10, and the corrected shape information 912 from the matching block generation unit 12. The parallax generation unit 13 generates the parallax information 105 and outputs it to the recognition processing unit 4.
The recognition processing unit 4 takes the parallax information 105 as input and performs various recognition processes. One example of recognition processing in the recognition processing unit 4 is three-dimensional object detection using parallax information. Examples of recognition targets include position information, type information, motion information, and danger information of a subject. Examples of position information include the direction and distance from the host vehicle. Examples of type information include pedestrians, adults, children, the elderly, animals, falling rocks, bicycles, surrounding vehicles, surrounding structures, and curbs. Examples of motion information include wobbling, darting out, crossing, movement direction, movement speed, and movement trajectory of pedestrians or bicycles. Examples of danger information include a pedestrian darting out, falling rocks, and abnormal behavior of surrounding vehicles such as sudden stops, sudden deceleration, or sudden steering. The recognition results generated by the recognition processing unit 4 are supplied to the vehicle control unit 5, which performs various vehicle controls based on them.
Examples of vehicle control performed by the vehicle control unit 5 include brake control, steering control, accelerator control, on-board lamp control, warning sound generation, on-board camera control, and output of information on observation objects around the imaging device to surrounding vehicles or remote-center equipment connected via a network. A concrete example is speed and brake control according to the parallax information 105 of an obstacle ahead of the vehicle. Note that distance information, which can be generated from the parallax information, may be used instead of the parallax information.
Although not described in this embodiment, the vehicle control unit 5 may also perform subject detection processing based on the results of image processing using the base image 100 or the reference image 101; may cause a display device connected to the vehicle control unit 5 to display images obtained via the first imaging unit 2 or the second imaging unit 3, or displays intended to be recognized by a viewer; and may supply information on observation objects detected based on the image processing results to information devices that process traffic information such as map information and congestion information.
(Definitions of Terms)
Fig. 2 is a diagram explaining the definitions of terms used in this embodiment. The upper part of Fig. 2 shows that third information is generated by combining first information and second information. The lower part of Fig. 2 illustrates the first information and the third information.
The first information in Fig. 2 consists of the parallax calculation target pixel 901 and the comparison target pixel 902. Both are information for defining the region to be evaluated, that is, the evaluation target region. The parallax calculation target pixel 901 is the pixel in the base image 100 for which parallax is to be calculated, more specifically the coordinate information of that pixel. In this embodiment, the parallax calculation target pixel 901 is one pixel wide in the X direction and one pixel wide in the Y direction.
That is, the parallax calculation target pixel 901 in this embodiment is a single pixel. The comparison target pixel 902 is a pixel in the reference image 101 set as a candidate to be paired with the parallax calculation target pixel 901, more specifically the coordinate information of that pixel. The number of pixels of the comparison target pixel 902 is the same as that of the parallax calculation target pixel 901. The comparison target pixel 902 lies within the search range 930 set with reference to the parallax calculation target pixel 901.
The second information in Fig. 2 consists of the standard shape information 911 and the corrected shape information 912. The standard shape information 911 is information indicating a region relative to an anchor pixel, for example "a region of ±7 pixels in the X direction and ±3 pixels in the Y direction centered on the anchor pixel". In this embodiment, the standard shape information 911 is common to all parallax calculation target pixels 901 and all comparison target pixels 902. The standard shape information 911 is set in advance and stored in the storage unit 15.
The regions represented by the standard shape information 911 and the corrected shape information 912 are used for matching the base image 100 against the reference image 101, and can therefore be called "matching blocks". The region represented by the standard shape information 911 is also called the "pre-correction matching block", and the region represented by the corrected shape information 912 the "post-correction matching block".
The corrected shape information 912 is information, calculated for each comparison target pixel 902, indicating a region relative to an anchor pixel. The corrected shape information 912 is, for example, "the region obtained by removing the four corner pixels, one at each corner, from a rectangular region of ±7 pixels in the X direction and ±3 pixels in the Y direction centered on the anchor pixel". The corrected shape information 912 is used in common for the base image 100 and the reference image 101. The area of the region represented by the corrected shape information 912 is less than or equal to the area of the region represented by the standard shape information 911. Details are described later.
The third information in Fig. 2 consists of the pre-correction base region 921, the post-correction base region 922, the pre-correction comparison region 923, and the post-correction comparison region 924. Each of these four items is information indicating the coordinates of a plurality of pixels in the base image 100 or the reference image 101. The pre-correction base region 921 is the region obtained by applying the standard shape information 911 in the base image 100 with the parallax calculation target pixel 901 as the anchor. Since the standard shape information 911 is common to all pixels as described above, if, for example, the parallax calculation target pixel 901 moves by one pixel in the X direction, the entire pre-correction base region 921 moves by one pixel in the X direction, and the pre-correction base region 921 does not change no matter how the comparison target pixel 902 changes.
The post-correction base region 922 is the region obtained by applying the corrected shape information 912 in the base image 100 with the parallax calculation target pixel 901 as the anchor. Since, as described above, there is one item of corrected shape information 912 for each combination of the parallax calculation target pixel 901 and the comparison target pixel 902, the post-correction base region 922 may change when either of them changes. Note that the standard shape information 911 and the corrected shape information 912 can be identical, in which case the pre-correction base region 921 and the post-correction base region 922 are identical.
The pre-correction comparison region 923 is the region obtained by applying the standard shape information 911 in the reference image 101 with the comparison target pixel 902 as the anchor. Since the standard shape information 911 is common to all pixels as described above, if, for example, the comparison target pixel 902 moves by one pixel in the X direction, the entire pre-correction comparison region 923 moves by one pixel in the X direction.
The post-correction comparison region 924 is the region obtained by applying the corrected shape information 912 in the reference image 101 with the comparison target pixel 902 as the anchor. Since, as described above, there is one item of corrected shape information 912 for each combination of the parallax calculation target pixel 901 and the comparison target pixel 902, the post-correction comparison region 924 may change when either of them changes.
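As an illustration of how these regions can be derived mechanically, the following sketch represents shape information as a set of pixel offsets relative to an anchor pixel and applies it to concrete anchors. The data layout and the coordinate values are assumptions chosen for illustration, not the patent's implementation:

```python
# Hypothetical representation: shape information as a set of (dx, dy) offsets
# relative to an anchor pixel; a region is the shape applied to an anchor.

def make_rect_shape(half_w, half_h):
    """Standard shape info, e.g. +/-7 px in X and +/-3 px in Y around the anchor."""
    return {(dx, dy) for dx in range(-half_w, half_w + 1)
                     for dy in range(-half_h, half_h + 1)}

def apply_shape(shape, anchor):
    """Apply shape information to an anchor pixel, yielding absolute coordinates."""
    ax, ay = anchor
    return {(ax + dx, ay + dy) for (dx, dy) in shape}

standard_shape = make_rect_shape(7, 3)                  # 15 x 7 = 105 pixels
pre_corr_base = apply_shape(standard_shape, (100, 50))  # pre-correction base region
pre_corr_cmp = apply_shape(standard_shape, (93, 50))    # pre-correction comparison region
```

Because the shape is stored as relative offsets, moving the anchor by one pixel shifts the whole region by one pixel, exactly as described for the pre-correction base and comparison regions above.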
(Details of Functions)
Fig. 3 is a detailed diagram of the functional configuration of the matching block generation unit 12; the operation of the evaluation value generation unit 11 is also described together with Fig. 3. The matching block generation unit 12 includes a first generation unit 20, a second generation unit 21, and a threshold generation unit 22.
The first generation unit 20 repeatedly generates the pre-correction base region 921 and the pre-correction comparison region 923 based on the search region information 106 supplied from the parallax generation unit 13, and supplies them to the second generation unit 21 and the evaluation value generation unit 11. The search region information 106 consists of the parallax calculation target pixel 901, the standard shape information 911, and information on the search range in the reference image 101 corresponding to the parallax calculation target pixel 901. That is, the first generation unit 20 generates the pre-correction base region 921 from the parallax calculation target pixel 901 and the standard shape information 911, and generates the pre-correction comparison regions 923 from the standard shape information 911 and the search range in the reference image 101 corresponding to the parallax calculation target pixel 901. If, for example, the search range spans 100 pixels, 100 pre-correction comparison regions 923 are generated. The first generation unit 20 then generates and outputs, for example, 100 pairs of the pre-correction base region 921 and a pre-correction comparison region 923, the pre-correction base region 921 remaining unchanged across the pairs.
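The pairing behavior described here, where one fixed base region is matched against every candidate along the search range, can be sketched as follows. The anchor coordinates and the 100-pixel search range are assumptions for illustration:

```python
def candidate_pairs(base_anchor, search_xs, row):
    """One fixed base anchor paired with every comparison anchor in the
    search range on the given row of the reference image."""
    return [(base_anchor, (x, row)) for x in search_xs]

# e.g. a 100-pixel search range on the same row as the base anchor
pairs = candidate_pairs((200, 50), range(100, 200), 50)
```

Each pair would then be expanded into a (pre-correction base region, pre-correction comparison region) pair by applying the common standard shape information to both anchors.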
Each time it receives a pair of the pre-correction base region 921 and a pre-correction comparison region 923 from the first generation unit 20, the evaluation value generation unit 11 calculates the evaluation value 103 and outputs it to the threshold generation unit 22 and the second generation unit 21. By combining the pre-correction base region 921 with the base image 100, the evaluation value generation unit 11 obtains concrete pixel information in the pre-correction base region 921, for example luminance values. Similarly, by combining the pre-correction comparison region 923 with the reference image 101, it obtains concrete pixel information in the pre-correction comparison region 923, for example luminance values. The evaluation value generation unit 11 then calculates the evaluation value 103 for each pixel using the pixel information of the pre-correction base region 921 and that of the pre-correction comparison region 923.
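A minimal sketch of this per-pixel evaluation, assuming the absolute luminance difference used in the Fig. 4 example; the images-as-nested-lists layout and the sample values are assumptions for illustration:

```python
def per_pixel_eval(base_img, ref_img, base_anchor, cmp_anchor, shape):
    """For each (dx, dy) offset in the shape, the evaluation value is the
    absolute luminance difference between the base-image pixel and the
    corresponding reference-image pixel."""
    (bx, by), (cx, cy) = base_anchor, cmp_anchor
    return {(dx, dy): abs(base_img[by + dy][bx + dx] - ref_img[cy + dy][cx + dx])
            for (dx, dy) in shape}

# toy 3x3 images (rows of luminance values), anchors at the centers
base = [[10, 20, 30],
        [40, 50, 60],
        [70, 80, 90]]
ref = [[10, 25, 30],
       [40, 50, 65],
       [70, 80, 90]]
shape = {(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)}
evals = per_pixel_eval(base, ref, (1, 1), (1, 1), shape)
```

The result is one evaluation value per pixel of the shape, which is what the second generation unit consumes.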
One example of the evaluation value 103 is the amount of luminance variation between each pixel in the matching block of the base image 100 and the corresponding pixel in the matching block of the reference image 101. Examples of common luminance variation measures include SAD (Sum of Absolute Differences), ZSAD (Zero-mean Sum of Absolute Differences), SSD (Sum of Squared Differences), NCC (Normalized Cross-Correlation), and ZNCC (Zero-mean Normalized Cross-Correlation).
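For reference, two of the named block-matching costs can be sketched as follows. These are the standard textbook formulations over flattened pixel lists, not code from the patent:

```python
import math

def sad(a, b):
    """Sum of Absolute Differences over two equal-length pixel lists."""
    return sum(abs(x - y) for x, y in zip(a, b))

def zncc(a, b):
    """Zero-mean Normalized Cross-Correlation: 1.0 means identical up to an
    affine brightness/contrast change; values near 0 mean no correlation."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    da = [x - ma for x in a]
    db = [y - mb for y in b]
    denom = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return sum(x * y for x, y in zip(da, db)) / denom if denom else 0.0
```

SAD is cheap and sensitive to brightness offsets; the zero-mean variants (ZSAD, ZNCC) subtract each block's mean first, which makes them robust to exposure differences between the two cameras.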
Since the pre-correction base region 921 and the pre-correction comparison region 923 are generated from different anchor points using the common standard shape information 911, they have the same number of pixels. When the standard shape information 911 describes a region of 15 pixels in the X direction and 7 pixels in the Y direction, the pre-correction base region 921 and the pre-correction comparison region 923 each contain 105 pixels, and 105 evaluation values 103 are calculated.
Another example of the evaluation value 103 is the amount, direction, or pattern of luminance variation between each pixel in the matching blocks of the base image 100 and reference image 101 and its neighboring pixels. One example of a luminance variation pattern is the left-to-right increase/decrease pattern over the three horizontal pixels consisting of the target pixel and its left and right neighbors. For example, when the left neighbor value is 10, the target pixel value is 25, and the right neighbor value is 80, the increase/decrease pattern is "increase → increase".
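The three-pixel increase/decrease pattern in the example above can be sketched as follows (the "flat" label for equal neighbors is an assumption, since the text only gives increase/decrease cases):

```python
def pattern3(left, center, right):
    """Left-to-right increase/decrease pattern over a horizontal 3-pixel
    window around the target pixel."""
    def step(a, b):
        return "increase" if b > a else ("decrease" if b < a else "flat")
    return (step(left, center), step(center, right))
```

With the values from the text (left 10, target 25, right 80) this yields ("increase", "increase"); comparing such patterns between the two images is one way to judge per-pixel similarity without relying on absolute luminance.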
The second generation unit 21 receives the pre-correction base region 921 and the pre-correction comparison region 923 from the first generation unit 20, the similarity determination threshold 201 from the threshold generation unit 22, and the evaluation value 103 from the evaluation value generation unit 11. The second generation unit 21 generates the corrected shape information 912 by removing from the standard shape information 911 the pixels whose evaluation value 103 exceeds the similarity determination threshold 201 (hereinafter called invalid pixels), and supplies it to the parallax generation unit 13. Hereinafter, the pixels of the standard shape information 911 other than the invalid pixels are called valid pixels. The corrected shape information 912 may be expressed as a per-pixel valid-pixel flag: flag information assigned to each pixel of the corrected shape information 912, where "1" indicates a valid pixel and "0" an invalid pixel.
Fig. 4 is a diagram showing an example of the processing of the second generation unit 21. The upper part of Fig. 4 shows the luminance value of each pixel in the pre-correction base region 921, and the middle part shows the luminance value of each pixel in the pre-correction comparison region 923; the shape of both regions is given by the standard shape information 911. In the example shown in Fig. 4, the evaluation value 103 is the absolute value of the luminance difference, and the lower part of Fig. 4 shows the absolute differences between the luminance values in the upper and middle parts. The value of each pixel in the lower part of Fig. 4 is provided as the evaluation value 103 by the evaluation value generation unit 11.
 When the similarity determination threshold 201 calculated by the threshold generation unit 22 is "50", the second generation unit 21 judges each of the four corner pixels to be an invalid pixel, as shown in the lower part of FIG. 4, and the cross-shaped region of valid pixels at the center is calculated as the corrected shape information 912.
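The masking step of the second generation unit 21 can be sketched as follows. The luminance values below are hypothetical stand-ins, not the actual data of FIG. 4; what matters is that the four corner pixels exceed the similarity determination threshold of "50" and are removed, leaving a cross-shaped set of valid pixels.

```python
# Sketch of the second generation unit's masking step (hypothetical data,
# not the actual values of FIG. 4).
THRESHOLD = 50  # similarity determination threshold 201

# 3x3 stand-in blocks: pre-correction reference region 921 and
# pre-correction comparison region 923.
reference = [
    [200, 100, 210],
    [100, 100, 100],
    [220, 100, 230],
]
comparison = [
    [100, 110,  90],
    [ 95, 105, 110],
    [ 80, 108,  70],
]

# Evaluation value 103: absolute luminance difference per pixel.
evaluation = [
    [abs(r - c) for r, c in zip(r_row, c_row)]
    for r_row, c_row in zip(reference, comparison)
]

# Corrected shape information 912 as per-pixel valid flags
# ("1" = valid pixel, "0" = invalid pixel).
mask = [
    [1 if e <= THRESHOLD else 0 for e in row]
    for row in evaluation
]
```

With these values the corners are removed and the central cross remains valid.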
 The threshold generation unit 22 calculates the similarity determination threshold 201. The similarity determination threshold 201 may be a predetermined threshold: either a fixed threshold or, although not described in this embodiment, a threshold specified externally via the input unit. As another example, the similarity determination threshold 201 may be obtained from a function based on the evaluation values 103 of the parallax calculation target pixel in the reference image 100 and the corresponding pixel in the comparison image 101, for example a similarity determination threshold generation function.
 FIG. 5 is a diagram showing an example of the similarity determination threshold generation function 900. The horizontal axis of FIG. 5 is the evaluation value 103 (SV) between the parallax calculation target pixel 901 in the reference image 100 and the comparison target pixel 902 in the comparison image 101. The vertical axis of FIG. 5 is the similarity determination threshold TH0. LMmax, LMmin, and OFFSET may be fixed values or, although not described in this embodiment, values specified externally via the input unit. The similarity determination threshold generation function 900 is expressed, for example, by Equation 1 below.
 Similarity determination threshold TH0 =
   LMmin               (SV ≤ LMmin)
   K0 × SV + OFFSET    (LMmin < SV ≤ LMmax)
   LMmax               (LMmax < SV)   ... (Equation 1)
 That is, Equation 1 distinguishes three cases: SV is at most LMmin; SV is greater than LMmin and at most LMmax; and SV is greater than LMmax.
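The three cases of Equation 1 can be sketched directly; the parameter defaults below (K0, OFFSET, LMmin, LMmax) are illustrative placeholders, since the text leaves them as fixed or externally specified values.

```python
def similarity_threshold(sv, k0=1.0, offset=10.0, lm_min=20.0, lm_max=80.0):
    """Similarity determination threshold TH0 of Equation 1.

    sv is the evaluation value SV; k0, offset, lm_min, lm_max stand for
    K0, OFFSET, LMmin, LMmax of the similarity determination threshold
    generation function 900. The numeric defaults are illustrative only.
    """
    if sv <= lm_min:          # SV <= LMmin
        return lm_min
    if sv <= lm_max:          # LMmin < SV <= LMmax
        return k0 * sv + offset
    return lm_max             # LMmax < SV
```

The function is a clamped linear ramp: constant at LMmin for small SV, linear in the middle band, and saturating at LMmax for large SV.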
 As yet another example, the similarity determination threshold 201 may be a function based on distance information. For example, the fixed threshold, the externally specified threshold, or the parameters used in the function based on the evaluation value 103 (LMmin, LMmax, OFFSET, etc.) may be varied according to parallax information or distance information obtained in advance.
 FIG. 6 is a diagram showing an example of the standard shape information 911 and the corrected shape information 912. In this example, the region indicated by the standard shape information 911 is 7 pixels high and 15 pixels wide, so the pre-correction reference region 921 has the same size. The parallax calculation target pixel 901 is placed at the center of the pre-correction reference region 921. The hatched pixels in the figure are examples of invalid pixels. In the example shown in FIG. 6, the region indicated by the corrected shape information 912 generated by the second generation unit 21 is the pre-correction reference region 921 with the invalid pixels removed, taken relative to the parallax calculation target pixel 901.
 FIG. 7 is a flowchart showing the processing of the parallax generation unit 13. The parallax generation unit 13 executes the processing shown in FIG. 7 each time it receives the reference image 100 and the comparison image 101 from the input unit 10. In step S301, the parallax generation unit 13 selects the parallax calculation target pixel 901 from the reference image 100: among the pixels of the reference image 100, one pixel whose parallax has not yet been calculated is selected as the parallax calculation target pixel 901.
 In the following step S302, the parallax generation unit 13 reads the standard shape information 911 from the storage unit 15. In the following step S303, the parallax generation unit 13 specifies the search range in the comparison image 101. For simplicity, assume the search range in the comparison image is fixed at ±100 pixels in the X direction and ±5 pixels in the Y direction, centered on the parallax calculation target pixel 901 of the reference image. In this case, when the coordinates of the parallax calculation target pixel 901 are (300, 300), the search range in the comparison image 101 is specified as (200, 295) to (400, 305); that is, the search range contains (100×2+1) × (5×2+1) = 2211 pixels. The search range in the comparison image 101 may be predetermined, or a different value may be set for each pixel of the reference image 100 by a method not described in this embodiment.
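The search-range computation of step S303 can be sketched as follows, using the fixed ±100 / ±5 pixel range of the simplified example (the function name and return shape are illustrative, not from the text).

```python
def search_range(target_x, target_y, dx=100, dy=5):
    """Search range in the comparison image 101 for the parallax
    calculation target pixel 901 at (target_x, target_y).

    Fixed +/-dx pixels in X and +/-dy pixels in Y around the target
    coordinates, as in the simplified example of step S303.
    """
    top_left = (target_x - dx, target_y - dy)
    bottom_right = (target_x + dx, target_y + dy)
    n_pixels = (2 * dx + 1) * (2 * dy + 1)  # candidate comparison pixels
    return top_left, bottom_right, n_pixels
```

For the target pixel (300, 300) this yields the range (200, 295) to (400, 305) and 201 × 11 = 2211 candidate comparison target pixels.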
 In the following step S304, the parallax generation unit 13 transmits the search region information 106 to the matching block generation unit 12. In the example above, the fact that the coordinates of the parallax calculation target pixel 901 are (300, 300), the standard shape information, and the information on the search range in the comparison image are transmitted to the matching block generation unit 12. In the following step S305, the parallax generation unit 13 receives the corrected shape information 912 for the entire search region; in the example above, the parallax generation unit 13 receives 2211 pieces of corrected shape information 912.
 In the following step S306, the parallax generation unit 13 identifies, within the search region of the comparison image 101, the post-correction comparison region 924 most similar to the post-correction reference region 922, using the corrected shape information 912 acquired in step S305. The details of this step are described later with reference to FIG. 8.
 In the following step S307, the parallax generation unit 13 calculates the distance of the parallax calculation target pixel from the arithmetic device 1. This is calculated, for example, from the difference between the X coordinate of the parallax calculation target pixel in the reference image 100 and the X coordinate of the center of the block identified in step S306 in the comparison image 101, using a lookup table created in advance. In the following step S308, the parallax generation unit 13 judges whether the distance has been calculated for all pixels of the reference image. If it judges that the distance has been calculated for all pixels of the reference image 100, the parallax generation unit 13 ends the processing shown in FIG. 7; if it judges that pixels of the reference image 100 remain whose distance has not been calculated, it returns to step S301.
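A sketch of the distance calculation of step S307. The text only states that a lookup table is created in advance; the table below is assumed, for illustration, to be built from the standard stereo relation Z = f·B/d (focal length f in pixels, baseline B in meters), which is one common way such a table could be constructed. The focal length and baseline values are hypothetical.

```python
# Hypothetical camera parameters (assumptions, not from the text).
FOCAL_PX = 1000.0    # assumed focal length in pixels
BASELINE_M = 0.35    # assumed baseline between the two imaging units
MAX_DISPARITY = 200

# Lookup table "created in advance": index = disparity in pixels.
# Disparity 0 has no finite distance, so it is left as None.
distance_lut = [None] + [
    FOCAL_PX * BASELINE_M / d for d in range(1, MAX_DISPARITY + 1)
]

def distance_for(target_x, matched_x):
    """Distance for the parallax calculation target pixel, from the
    X-coordinate difference between the pixel in the reference image 100
    and the center of the matched block in the comparison image 101."""
    disparity = abs(target_x - matched_x)
    return distance_lut[disparity]
```

For example, a 50-pixel disparity with these assumed parameters maps to 1000 × 0.35 / 50 = 7.0 meters.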
 FIG. 8 is a flowchart showing the details of step S306 in FIG. 7. When the processing of FIG. 8 starts, the search region information has been specified by the processing up to step S303, and the corrected shape information 912 has been obtained for the entire search region in step S305.
 In step S321, the first step in FIG. 8, the parallax generation unit 13 specifies the comparison target pixel 902 in the comparison image 101. The comparison target pixel 902 is any coordinate that lies within the coordinate range specified by the search-range information included in the search region information and that has not yet undergone the processing of steps S322 to S325.
 In the following step S322, the parallax generation unit 13 reads the corrected shape information 912 corresponding to the comparison target pixel 902; the data read in this step is contained in the information obtained in step S305 of FIG. 7. In the following step S323, the parallax generation unit 13 applies the corrected shape information 912 to the reference image 100, relative to the parallax calculation target pixel 901, and specifies the post-correction reference region 922.
 In the following step S324, the parallax generation unit 13 applies the corrected shape information 912 to the comparison image 101, relative to the comparison target pixel 902, and specifies the post-correction comparison region 924. In the following step S325, the parallax generation unit 13 calculates the similarity between the luminance information contained in the post-correction reference region 922 specified in step S323 and the luminance information contained in the post-correction comparison region 924 specified in step S324.
 In the following step S326, the parallax generation unit 13 judges whether the similarity has been calculated for all regions within the coordinate range specified by the search-range information of the comparison image. If it judges that the similarity has been calculated for all regions, the parallax generation unit 13 proceeds to step S327; if it judges that pixels remain whose similarity has not been calculated, it returns to step S321. In step S327, the parallax generation unit 13 identifies the post-correction comparison region 924 most similar to the post-correction reference region 922, for example the post-correction comparison region 924 with the smallest SAD (Sum of Absolute Differences), and ends the processing shown in FIG. 8.
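The loop of FIG. 8 amounts to a masked block-matching search. A minimal sketch, assuming SAD as the similarity measure and representing each piece of corrected shape information 912 as a 0/1 valid-pixel mask (the function names and candidate representation are illustrative):

```python
def masked_sad(ref_block, cmp_block, mask):
    """SAD over valid pixels only, i.e. over the post-correction
    reference/comparison regions defined by the corrected shape
    information (mask entry 1 = valid pixel, 0 = invalid pixel)."""
    return sum(
        abs(r - c)
        for r_row, c_row, m_row in zip(ref_block, cmp_block, mask)
        for r, c, m in zip(r_row, c_row, m_row)
        if m
    )

def best_match(ref_block, candidates):
    """Step S327 sketch: pick the candidate comparison region most
    similar to the corrected reference region (smallest masked SAD).
    Each candidate is a (comparison_block, mask) pair, one per
    comparison target pixel in the search range."""
    scores = [masked_sad(ref_block, blk, msk) for blk, msk in candidates]
    return min(range(len(scores)), key=lambda i: scores[i])
```

Note that each candidate carries its own mask, matching the text: the corrected shape information is computed per comparison target pixel, so the SAD of each candidate is taken over that candidate's own valid pixels.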
 FIG. 9 is a flowchart showing the processing of the matching block generation unit 12. The matching block generation unit 12 executes the processing shown in FIG. 9 each time it receives the search region information 106 from the parallax generation unit 13; in FIG. 9 this reception is shown as the first step to make it explicit.
 In step S331, the matching block generation unit 12 first receives the search region information 106 from the parallax generation unit 13. In the following step S332, the matching block generation unit 12 uses the received search region information 106 to generate one pre-correction reference region 921 and a plurality of pre-correction comparison regions 923, and proceeds to step S333. In step S333, the matching block generation unit 12 transmits one pair consisting of the pre-correction reference region 921 and a pre-correction comparison region 923 to the evaluation value generation unit 11; the pair transmitted in this step is one that has not yet been transmitted.
 In the following step S334, the matching block generation unit 12 receives the evaluation values 103 from the evaluation value generation unit 11. The evaluation values 103 received in this step are equal in number to the pixels of the standard shape information 911, one for each pixel of the standard shape information 911, in other words for each pixel of the pre-correction reference region 921. What was transmitted to the evaluation value generation unit 11 in step S333 was coordinate information; the evaluation value generation unit 11 also referred to the reference image 100 and the comparison image 101, acquired their luminance information, and calculated the evaluation values 103. In the following step S335, the threshold generation unit 22 of the matching block generation unit 12 generates the similarity determination threshold 201.
 In the following step S336, the second generation unit 21 compares each evaluation value 103 with the similarity determination threshold 201. In the following step S337, the corrected shape information 912 is generated based on the comparison results of step S336 and transmitted to the parallax generation unit 13. In the following step S338, the matching block generation unit 12 judges whether all the pre-correction comparison regions 923 generated in step S332 have been processed. If it judges that all pre-correction comparison regions 923 have been processed, the matching block generation unit 12 ends the processing shown in FIG. 9; if it judges that unprocessed pre-correction comparison regions 923 remain, it returns to step S333.
(Details of the functions)
 FIG. 10 is a diagram showing the details of the functions of the arithmetic device 1. FIG. 10 details the functional configuration diagram shown in FIG. 1, but for convenience of illustration the recognition processing unit 4 and the vehicle control unit 5 are not shown. Newly shown in FIG. 10 are the target point setting unit 801, the comparison point setting unit 802, the pre-correction reference region specifying unit 803, the pre-correction comparison region specifying unit 804, the correction unit 805, the post-correction reference region specifying unit 806, the post-correction comparison region specifying unit 807, and the parallax calculation unit 808.
 The target point setting unit 801 sets the parallax calculation target pixel 901, the pixel of the reference image 100 whose parallax is to be calculated. The target point setting unit 801 is included in the parallax generation unit 13. The comparison point setting unit 802 specifies the comparison target pixel 902, a candidate pixel in the comparison image 101 corresponding to the parallax calculation target pixel 901. The comparison point setting unit 802 is included in the first generation unit 20 of the matching block generation unit 12.
 The pre-correction reference region specifying unit 803 specifies, based on the standard shape information 911, the pre-correction reference region 921 relative to the parallax calculation target pixel 901. The pre-correction reference region specifying unit 803 is included in the first generation unit 20 of the matching block generation unit 12. The pre-correction comparison region specifying unit 804 specifies, based on the standard shape information 911, the pre-correction comparison region 923 relative to the comparison target pixel 902. The pre-correction comparison region specifying unit 804 is included in the first generation unit 20 of the matching block generation unit 12.
 The correction unit 805 maintains or narrows the evaluation target region by correcting the standard shape information 911 into the corrected shape information 912 based on the comparison between the evaluation values 103 and the similarity determination threshold 201. The correction unit 805 is included in the second generation unit 21 of the matching block generation unit 12.
 The post-correction reference region specifying unit 806 specifies, based on the corrected shape information 912, the post-correction reference region 922 relative to the parallax calculation target pixel 901. The post-correction reference region specifying unit 806 is included in the parallax generation unit 13. The post-correction comparison region specifying unit 807 specifies, based on the corrected shape information 912, the post-correction comparison region 924 relative to the comparison target pixel 902. The post-correction comparison region specifying unit 807 is included in the parallax generation unit 13.
 The parallax calculation unit 808 calculates the parallax of the parallax calculation target pixel 901 based on the information of the reference image 100 in the post-correction reference region 922, specified relative to the parallax calculation target pixel 901 based on the corrected shape information 912, and on the information of the comparison image 101 in the post-correction comparison region 924, specified relative to the comparison target pixel 902 based on the corrected shape information 912.
 According to the first embodiment described above, the following effects are obtained.
(1) The arithmetic device 1 comprises: the input unit 10, to which the reference image 100 acquired by the first imaging unit 2 and the comparison image 101 acquired by the second imaging unit 3 are input; the target point setting unit 801, which sets the parallax calculation target pixel 901, the pixel of the reference image 100 whose parallax is to be calculated; the comparison point setting unit 802, which specifies the comparison target pixel 902, a candidate pixel in the comparison image 101 corresponding to the parallax calculation target pixel 901; the storage unit 15, which stores the standard shape information 911 defining an evaluation target region relative to a given pixel; the pre-correction reference region specifying unit 803, which specifies, based on the standard shape information 911, the pre-correction reference region 921 relative to the parallax calculation target pixel 901; the pre-correction comparison region specifying unit 804, which specifies, based on the standard shape information 911, the pre-correction comparison region 923 relative to the comparison target pixel 902; the evaluation value generation unit 11, which calculates, for each pixel of the pre-correction reference region 921, the evaluation value 103, an index of similarity to the corresponding pixel of the pre-correction comparison region 923; the correction unit 805, which maintains or narrows the evaluation target region by correcting the standard shape information 911 into the corrected shape information 912 based on the comparison between the evaluation values 103 and the similarity determination threshold 201; and the parallax calculation unit 808, which calculates the parallax of the parallax calculation target pixel 901 based on the information of the reference image 100 in the post-correction reference region 922, specified relative to the parallax calculation target pixel 901 based on the corrected shape information 912, and on the information of the comparison image 101 in the post-correction comparison region 924, specified relative to the comparison target pixel 902 based on the corrected shape information 912. Because the arithmetic device 1 thus corrects the evaluation target region according to the degree of similarity of each pixel of the reference image 100 and the comparison image 101 before correction, it can eliminate the adverse effect of a small number of pixels with extreme differences, and the accuracy of parallax calculation can be improved.
 (2) As shown in FIG. 5, the similarity determination threshold 201 is determined based on the degree of similarity between the information of the reference image 100 in the pre-correction reference region 921 and the information of the comparison image 101 in the pre-correction comparison region 923. A more appropriate threshold can therefore be set than when the similarity determination threshold 201 is a fixed value.
 (3) The evaluation value generation unit 11 calculates, for each pixel of the pre-correction reference region 921, an evaluation value based on the luminance difference from the corresponding pixel of the pre-correction comparison region 923. The divergence between the reference image 100 and the comparison image 101 can therefore be evaluated.
(Variation 1)
 FIG. 11 is a diagram showing another example of the standard shape information 911 and the corrected shape information 912. In the example shown in FIG. 11, the region indicated by the standard shape information 911 is 6 pixels high and 12 pixels wide, and the pre-correction reference region 921 has the same size. In this example, the parallax calculation target pixel 901 is 2 pixels high and 2 pixels wide. Because no invalid pixel exists in the example of FIG. 11, the corrected shape information 912 is identical to the standard shape information 911.
 Thus the parallax calculation target pixel 901 may be a single pixel, as in the first embodiment, or a pixel block composed of a plurality of pixels, as shown in FIG. 11. When the parallax calculation target pixel 901 is composed of a plurality of pixels, the similarity may be taken as any of the mean, minimum, maximum, or median of the similarities of the pixels it contains.
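The aggregation choices for a multi-pixel target block can be sketched as follows (the function name and interface are illustrative, not from the text):

```python
from statistics import median

def aggregate_similarity(per_pixel, mode="mean"):
    """Aggregate the per-pixel similarities of the pixels contained in
    the parallax calculation target block. As the text notes, any of
    the mean, minimum, maximum, or median may be used."""
    if mode == "mean":
        return sum(per_pixel) / len(per_pixel)
    if mode == "min":
        return min(per_pixel)
    if mode == "max":
        return max(per_pixel)
    if mode == "median":
        return median(per_pixel)
    raise ValueError(f"unknown mode: {mode}")
```

The choice is a design trade-off: the minimum is the most permissive toward outliers in the block, the maximum the strictest, and the mean and median sit in between.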
(Variation 2)
 The arithmetic device 1 need not include the recognition processing unit 4 and the vehicle control unit 5. The functions of the arithmetic device 1 may also be realized by a plurality of hardware devices, for example two or more electronic control units.
(Variation 3)
 The parallax generation unit 13 need only be able to specify, for each pixel of the reference image 100, the pixel offset in the baseline direction from the corresponding pixel of the comparison image 101; it need not calculate an actual distance. That is, the processing of step S307 in FIG. 7 may be omitted.
―Second Embodiment―
 A second embodiment of the arithmetic device is described with reference to FIGS. 12 and 13. In the following description, the same components as in the first embodiment are given the same reference signs, and mainly the differences are described. Points not specifically described are the same as in the first embodiment. This embodiment differs from the first mainly in that the matching block generation unit 12 includes a parallax invalidity determination unit.
 FIG. 12 is a functional configuration diagram of the matching block generation unit 12A in the second embodiment. In addition to the configuration of the first embodiment, the matching block generation unit 12A further includes the parallax invalidity determination unit 30.
 Based on the corrected shape information 912 supplied from the second generation unit 21, the parallax invalidity determination unit 30 judges whether the total number of valid pixels present in the corrected shape information 912 is at most a predetermined number. If it judges that the total is at most the predetermined number, the parallax invalidity determination unit 30 outputs the parallax invalidity information 300, information indicating that the parallax corresponding to the comparison target pixel 902 is invalid. If it judges that the total exceeds the predetermined number, the parallax invalidity determination unit 30 outputs the corrected shape information 912 as-is.
 The parallax generation unit 13 judges that the combination of the parallax calculation target pixel 901 and the comparison target pixel 902 corresponding to the parallax invalidity information 300 is invalid, and excludes it from the candidates when calculating the parallax.
 FIG. 13 is a diagram showing an example for the parallax invalidity determination unit 30. In this example, the region indicated by the standard shape information 911 is 7 pixels high and 15 pixels wide, so the pre-correction reference region 921 has the same size. The parallax calculation target pixel 901 is placed at the center of the pre-correction reference region 921. The hatched pixels in the figure indicate invalid pixels. In the example shown in FIG. 13 there are few valid pixels; for example, the valid pixels do not reach a threshold of 20% or more of the standard shape information 911, so the parallax invalidity determination unit 30 judges the parallax to be invalid.
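The validity check of FIG. 13 can be sketched as a ratio test. The 20% figure comes from the example in the text; the valid-pixel mask representation (1 = valid, 0 = invalid) and the function interface are assumptions for illustration.

```python
def parallax_valid(mask, min_ratio=0.20):
    """Parallax invalidity check of the second embodiment: the parallax
    is judged invalid when the valid pixels of the corrected shape
    information fall below a ratio of the standard shape region
    (20% in the example of FIG. 13)."""
    total = sum(len(row) for row in mask)
    valid = sum(sum(row) for row in mask)
    return valid >= min_ratio * total
```

When this returns False, the unit would emit the parallax invalidity information 300 instead of the corrected shape information 912.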
 According to the second embodiment described above, the following effect is obtained.
(4) The arithmetic device 1A includes the parallax invalidity determination unit 30, which replaces the corrected shape information 912 with the parallax invalidity information 300 when the number of pixels of the evaluation target region defined by the corrected shape information 912 is smaller than a predetermined threshold. The parallax calculation unit 808 treats the parallax corresponding to the combination of the parallax calculation target pixel 901 and the comparison target pixel 902 corresponding to the parallax invalidity information 300 as invalid parallax. This prevents the output of low-quality parallax information, which occurs when the total number of valid pixels is small.
-Third Embodiment-
 A third embodiment of the computation device will be described with reference to FIG. 14. In the following description, components that are the same as in the first embodiment are given the same reference numerals, and the description focuses on the differences. Points not specifically described are the same as in the first embodiment. This embodiment differs from the first embodiment mainly in that the matching block generation unit 12 includes a region determination unit.
 FIG. 14 is a functional configuration diagram of the matching block generation unit 12B in the third embodiment. In addition to the configuration of the first embodiment, the matching block generation unit 12B further includes a region determination unit 40.
 The region determination unit 40 performs object detection by image processing on at least the pre-correction reference area 921 of the base image 100. For a region in which the same object as at the parallax calculation target pixel 901 is present, the region determination unit 40 outputs the corrected shape information 912 produced by the second generation unit 21 as-is; for a region in which an object different from that at the parallax calculation target pixel 901 is present, it outputs foreign-object parallax invalid information 400, which indicates that the parallax is invalid. For example, if the region determination unit 40 detects a road and a tire in the pre-correction reference area 921 and the road is present at the parallax calculation target pixel 901, the region determination unit 40 outputs the foreign-object parallax invalid information 400 for the tire region. The object detection on the base image 100 may be executed by another component included in the computation device 1, or by a device other than the computation device 1.
 Object detection by the region determination unit 40 may also be realized simply, for example by using changes in luminance. That is, when the luminance changes by a predetermined threshold or more going from the parallax calculation target pixel 901 toward the periphery of the base image 100, the pixel region from the pixel at which a change of the predetermined value or more occurs to the boundary of the corrected reference area 922 may be assigned the foreign-object parallax invalid information 400.
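For a single image row, this luminance-based variant can be sketched as below (an illustrative sketch only, not the patent's implementation; it scans outward from the target pixel in one dimension, and all names are hypothetical):

```python
def mask_beyond_luminance_jump(row, center, threshold):
    """Mark pixels invalid from the first large luminance change outward.

    row: luminance values along one line of the matching area,
    center: index of the parallax calculation target pixel in that row,
    threshold: minimum absolute luminance change treated as a boundary.
    Returns a list of booleans: True = usable, False = invalid.
    """
    valid = [True] * len(row)
    # Scan to the right of the target pixel.
    for i in range(center + 1, len(row)):
        if abs(row[i] - row[i - 1]) >= threshold:
            for j in range(i, len(row)):
                valid[j] = False
            break
    # Scan to the left of the target pixel.
    for i in range(center - 1, -1, -1):
        if abs(row[i] - row[i + 1]) >= threshold:
            for j in range(i, -1, -1):
                valid[j] = False
            break
    return valid

row = [100, 101, 99, 100, 180, 181, 182]  # jump between index 3 and 4
print(mask_beyond_luminance_jump(row, center=1, threshold=50))
```

Everything past the luminance jump is treated as belonging to a different object and excluded from the matching area.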
 The parallax generation unit 13 determines that a combination of the parallax calculation target pixel 901 and a comparison target pixel 902 associated with the foreign-object parallax invalid information 400 is invalid, and excludes it from the candidates used when calculating the parallax.
 According to the third embodiment described above, the following effect is obtained.
(5) The computation device 1B includes the region determination unit 40, which identifies object boundaries based on the information of the base image 100 in the pre-correction reference area 921. The parallax calculation unit 808 calculates the parallax using the information of the pixels that, according to those boundaries, belong to the same region of the base image 100 as the parallax calculation target pixel 901. The parallax can therefore be calculated without using information from pixels that belong to an object different from the parallax calculation target pixel 901 and whose parallax is expected to differ.
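The effect of restricting the calculation to same-region pixels can be illustrated with a masked SAD (a sketch only; the patent does not prescribe this exact form, and all names are hypothetical):

```python
def masked_sad(base, ref, same_region):
    """Sum of absolute differences over only those pixels that lie in
    the same region as the parallax calculation target pixel."""
    return sum(
        abs(b - r)
        for b, r, keep in zip(base, ref, same_region)
        if keep
    )

base = [10, 12, 200, 11]              # base-image luminances
ref = [11, 12, 90, 10]                # reference-image luminances
same_region = [True, True, False, True]  # third pixel: different object
print(masked_sad(base, ref, same_region))  # 1 + 0 + 1 = 2
```

Without the mask, the third pixel (which belongs to another object) would dominate the score and distort the match.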
(Modification 1 of the Third Embodiment)
 The region determination unit 40 may recognize objects using information output by a sensor (not shown). For example, the region determination unit 40 may perform object detection using the output of a monocular camera, LiDAR (Light Detection and Ranging), an ultrasonic sensor, a millimeter-wave radar, or the like.
-Fourth Embodiment-
 A fourth embodiment of the computation device will be described with reference to FIGS. 15 to 19. In the following description, components that are the same as in the first embodiment are given the same reference numerals, and the description focuses on the differences. Points not specifically described are the same as in the first embodiment. This embodiment differs from the first embodiment mainly in that the computation is made more efficient. In the preceding embodiments the similarity was not given a reference numeral; in this embodiment the similarity is denoted by reference numeral 107.
 FIG. 15 is a functional configuration diagram of the computation device 1C in the fourth embodiment. The operations of the matching block generation unit 12C and the parallax generation unit 13C differ from those of the first embodiment. In this embodiment, the matching block generation unit 12C also transmits the similarity 107 to the parallax generation unit 13C instead of the corrected shape information 912.
 FIG. 16 is a flowchart showing the processing of the parallax generation unit 13C in the fourth embodiment. Compared with FIG. 7 of the first embodiment, steps S305 and S306 of FIG. 7 are replaced by steps S305A and S306A. Points not specifically described are the same as in FIG. 7.
 In step S305A, the parallax generation unit 13C receives the similarities 107 for the entire search range from the matching block generation unit 12C. In the following step S306A, the parallax generation unit 13C uses the similarities 107 for the entire search range received in step S305A to identify the corrected comparison area 924 most similar to the corrected reference area 922, for example the corrected comparison area 924 with the smallest SAD. The processing from step S307 onward is the same as in FIG. 7, so its description is omitted.
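Selecting the most similar corrected comparison area from the similarities 107 of the whole search range amounts to taking the minimum, for example (hypothetical names; SAD-style similarities are assumed, where smaller means more similar):

```python
def best_disparity(similarities):
    """Pick the candidate disparity whose similarity (e.g. SAD value)
    is smallest over the search range."""
    return min(similarities, key=similarities.get)

# Similarity 107 received for the whole search range,
# keyed by candidate disparity in pixels:
sims = {0: 420, 1: 310, 2: 95, 3: 180, 4: 260}
print(best_disparity(sims))  # disparity 2 has the smallest SAD
```

The winning candidate identifies the comparison target pixel 902 whose corrected comparison area 924 best matches the corrected reference area 922.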
 FIGS. 17 and 18 are flowcharts showing the processing of the matching block generation unit 12C in the fourth embodiment. Compared with FIG. 9 of the first embodiment, the processing in FIG. 17 up to step S335 is identical, and the subsequent processing differs. Points not specifically described are the same as in FIG. 9.
 In step S351, executed after step S335, the matching block generation unit 12C initializes the variable sum to zero and proceeds to step S352. In step S352, the matching block generation unit 12C selects one of the evaluation values received in step S334; the evaluation value selected here is one that has not yet been selected. In the following step S353, the matching block generation unit 12C compares the evaluation value selected in step S352 with the threshold generated in step S335. The matching block generation unit 12C then proceeds, via the circled A, to step S354 in FIG. 18.
 In step S354, the matching block generation unit 12C examines the result of the comparison in step S353. If it determines that the evaluation value is less than the threshold, it proceeds to step S355; if it determines that the evaluation value is equal to or greater than the threshold, it proceeds to step S356. In step S355, the matching block generation unit 12C adds the evaluation value to the variable sum and proceeds to step S356. In step S356, the matching block generation unit 12C determines whether all the evaluation values received in step S334 have been evaluated. If it determines that all the evaluation values have been evaluated, it proceeds to step S357; if it determines that unevaluated evaluation values remain, it returns, via the circled B, to step S352 in FIG. 17.
 In step S357, the matching block generation unit 12C transmits the value of the variable sum to the parallax generation unit 13C as the similarity 107. In the following step S338, the matching block generation unit 12C determines whether all the pre-correction comparison areas 923 have been processed. If it determines that all the pre-correction comparison areas 923 have been processed, it ends the processing shown in FIG. 18; if it determines that an unprocessed pre-correction comparison area 923 remains, it returns, via the circled C, to step S333 in FIG. 17. The above is the processing of the matching block generation unit 12C.
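The accumulation in steps S351 to S357 can be sketched as follows (an illustrative reading of the flowchart; the function and variable names are hypothetical):

```python
def similarity_107(evaluation_values, threshold):
    """Steps S351-S357: accumulate only the evaluation values that fall
    below the threshold; the total is sent as the similarity 107."""
    total = 0                        # S351: sum := 0
    for value in evaluation_values:  # S352: pick each value once
        if value < threshold:        # S353/S354: compare with threshold
            total += value           # S355: add to sum
    return total                     # S357: transmit as similarity 107

print(similarity_107([3, 10, 7, 25, 1], threshold=10))  # 3 + 7 + 1 = 11
```

Values at or above the threshold are simply skipped, which is what implicitly narrows the evaluation target area to well-matching pixels.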
 FIG. 19 is a diagram showing details of the functions in the fourth embodiment, and corresponds to FIG. 10 of the first embodiment. Compared with FIG. 10, in FIG. 19 the corrected reference area specifying unit 806 and the corrected comparison area specifying unit 807 have moved from the parallax generation unit 13 to the second generation unit 21 of the matching block generation unit 12C. Since the second generation unit 21 compares the evaluation values 103 with the similarity determination threshold 201 and adds them to the variable sum (S353 to S355), it can be regarded as collectively performing the generation of the corrected shape information 912, the generation of the corrected reference area 922 and the corrected comparison area 924, and the calculation of the similarity 107.
 According to the fourth embodiment described above, the following effect is obtained.
(6) The parallax generation unit 13C, which includes the parallax calculation unit 808, calculates the parallax by making use of the evaluation values 103 computed by the evaluation value generation unit 11. Specifically, the parallax generation unit 13C acquires and uses the similarity 107 that the second generation unit 21 of the matching block generation unit 12C obtains by aggregating the evaluation values 103 computed by the evaluation value generation unit 11. The parallax generation unit 13C therefore does not need to identify the corrected comparison area 924 using the comparison target pixel 902 a second time, nor to refer to the pixel values of the base image 100 or the reference image 101, so the computational processing of the computation device 1C as a whole can be reduced. That is, compared with the prior art, processing can be performed faster with the same computational resources, or the computational resources can be reduced while keeping the processing speed the same.
(Modification of the Fourth Embodiment)
 In the fourth embodiment described above, the parallax generation unit 13C used the similarity 107 received from the matching block generation unit 12C as-is for the evaluation. However, the parallax generation unit 13C may instead process the similarity 107 received from the matching block generation unit 12C before using it for the evaluation. For example, when the similarity 107 is a SAD value, a ZSAD value may be calculated from it and used for the evaluation.
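One common definition of ZSAD subtracts each block's mean luminance before taking absolute differences, which makes the measure robust to a uniform brightness offset between the two cameras. The sketch below contrasts SAD and ZSAD on the same pixel data (a definitional illustration with hypothetical names; it computes ZSAD from the pixel values rather than from the SAD value itself, and is not the patent's processing):

```python
def sad(a, b):
    """Sum of absolute differences between two pixel blocks."""
    return sum(abs(x - y) for x, y in zip(a, b))

def zsad(a, b):
    """Zero-mean SAD: subtract each block's mean before differencing,
    which cancels a uniform brightness offset between the two images."""
    mean_a = sum(a) / len(a)
    mean_b = sum(b) / len(b)
    return sum(abs((x - mean_a) - (y - mean_b)) for x, y in zip(a, b))

a = [10, 20, 30, 40]
b = [15, 25, 35, 45]   # same pattern with a +5 brightness offset
print(sad(a, b))        # 20: penalised by the offset
print(zsad(a, b))       # 0.0: the offset is cancelled
```

For both measures a smaller value means a better match, so the minimum-selection step of FIG. 16 is unchanged.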
 The present invention is not limited to the embodiments described above, and various modifications are included. For example, the embodiments above have been described in detail to explain the present invention in an easy-to-understand manner, and the invention is not necessarily limited to configurations including all the described elements. Part of the configuration of one embodiment can be replaced by the configuration of another embodiment, and the configuration of another embodiment can be added to the configuration of one embodiment. For part of the configuration of each embodiment, other configurations can be added, deleted, or substituted.
 Each of the configurations above may be implemented partly or wholly in hardware, or may be realized by executing a program on a processor. The control lines and information lines shown are those considered necessary for explanation; not all the control lines and information lines of a product are necessarily shown. In practice, almost all configurations may be considered to be interconnected.
 In the embodiments and modifications described above, the configuration of the functional blocks is merely an example. Several functional configurations shown as separate functional blocks may be integrated, or a configuration represented by one functional block diagram may be divided into two or more functions. Part of the functions of each functional block may instead be provided in another functional block.
 The storage unit 15 in which the program of the computation device 1 is stored may be a rewritable storage device such as a flash memory. The program may also be read from another device via a medium usable by the input unit 10 of the computation device 1. Here, a medium refers to, for example, a storage medium attachable to and detachable from an input/output interface, or a communication medium, that is, a wired, wireless, or optical network, or a carrier wave or digital signal propagating over such a network. Some or all of the functions realized by the program may also be realized by a hardware circuit or an FPGA.
 The embodiments and modifications described above may be combined with one another. Although various embodiments and modifications have been described above, the present invention is not limited to their contents. Other aspects conceivable within the scope of the technical idea of the present invention are also included within the scope of the present invention.
1, 1A, 1B, 1C ... computation device
10 ... input unit
11 ... evaluation value generation unit
12, 12A, 12B, 12C ... matching block generation unit
13, 13C ... parallax generation unit
15 ... storage unit
20 ... first generation unit
21 ... second generation unit
22 ... threshold generation unit
30 ... parallax invalidity determination unit
40 ... region determination unit
100 ... base image
101 ... reference image
103 ... evaluation value
106 ... search area information
107 ... similarity
201 ... similarity determination threshold
300 ... parallax invalid information
400 ... foreign-object parallax invalid information
801 ... target point setting unit
802 ... comparison point setting unit
803 ... pre-correction reference area specifying unit
804 ... pre-correction comparison area specifying unit
805 ... correction unit
806 ... corrected reference area specifying unit
807 ... corrected comparison area specifying unit
808 ... parallax calculation unit
900 ... similarity determination threshold generation function
901 ... parallax calculation target pixel
902 ... comparison target pixel
911 ... standard shape information
912 ... corrected shape information
921 ... pre-correction reference area
922 ... corrected reference area
923 ... pre-correction comparison area
924 ... corrected comparison area
930 ... search range

Claims (7)

  1.  A computation device comprising:
     an input unit into which a base image acquired by a first imaging unit and a reference image acquired by a second imaging unit are input;
     a target point setting unit that sets a parallax calculation target pixel, which is a pixel in the base image for which parallax is to be calculated;
     a comparison point setting unit that sets a comparison target pixel, which is a candidate pixel in the reference image corresponding to the parallax calculation target pixel;
     a storage unit that stores standard shape information defining an evaluation target area relative to a given pixel;
     a pre-correction reference area specifying unit that specifies, based on the standard shape information, a pre-correction reference area relative to the parallax calculation target pixel;
     a pre-correction comparison area specifying unit that specifies, based on the standard shape information, a pre-correction comparison area relative to the comparison target pixel;
     an evaluation value generation unit that calculates, for each pixel included in the pre-correction reference area, an evaluation value that is an index of similarity to the corresponding pixel in the pre-correction comparison area;
     a correction unit that maintains or narrows the evaluation target area by correcting the standard shape information into corrected shape information based on a comparison between the evaluation values and a threshold; and
     a parallax calculation unit that calculates the parallax of the parallax calculation target pixel based on the information of the base image in a corrected reference area specified relative to the parallax calculation target pixel based on the corrected shape information, and the information of the reference image in a corrected comparison area specified relative to the comparison target pixel based on the corrected shape information.
  2.  The computation device according to claim 1, wherein
     the threshold is determined based on the degree of similarity between the information of the base image in the pre-correction reference area and the information of the reference image in the pre-correction comparison area.
  3.  The computation device according to claim 1, wherein
     the evaluation value generation unit calculates, for each pixel included in the pre-correction reference area, the evaluation value based on the luminance difference from the corresponding pixel in the pre-correction comparison area.
  4.  The computation device according to claim 1, further comprising
     a parallax invalidity determination unit that changes the corrected shape information to parallax invalid information when the number of pixels in the evaluation target area defined by the corrected shape information is smaller than a predetermined threshold, wherein
     the parallax calculation unit treats the parallax corresponding to the combination of the parallax calculation target pixel and the comparison target pixel associated with the parallax invalid information as invalid parallax.
  5.  The computation device according to claim 1, further comprising
     a region determination unit that identifies object boundaries based on the information of the base image in the pre-correction reference area, wherein
     the parallax calculation unit calculates the parallax using the information of the pixels that, according to the boundaries, belong to the same region of the base image as the parallax calculation target pixel.
  6.  The computation device according to claim 1, wherein
     the parallax calculation unit calculates the parallax using the evaluation values calculated by the evaluation value generation unit.
  7.  A parallax calculation method executed by a computation device comprising an input unit into which a base image acquired by a first imaging unit and a reference image acquired by a second imaging unit are input, and a storage unit that stores standard shape information defining an evaluation target area relative to a given pixel, the method comprising:
     setting a parallax calculation target pixel, which is a pixel in the base image for which parallax is to be calculated;
     identifying a comparison target pixel, which is a candidate pixel in the reference image corresponding to the parallax calculation target pixel;
     specifying, based on the standard shape information, a pre-correction reference area relative to the parallax calculation target pixel;
     specifying, based on the standard shape information, a pre-correction comparison area relative to the comparison target pixel;
     calculating, for each pixel included in the pre-correction reference area, an evaluation value that is an index of similarity to the corresponding pixel in the pre-correction comparison area;
     maintaining or narrowing the evaluation target area by correcting the standard shape information into corrected shape information based on a comparison between the evaluation values and a threshold; and
     calculating the parallax of the parallax calculation target pixel based on the information of the base image in a corrected reference area specified relative to the parallax calculation target pixel based on the corrected shape information, and the information of the reference image in a corrected comparison area specified relative to the comparison target pixel based on the corrected shape information.
PCT/JP2021/003114 2020-06-05 2021-01-28 Computation device and parallax calculation method WO2021245972A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
DE112021001906.6T DE112021001906T5 (en) 2020-06-05 2021-01-28 COMPUTING DEVICE AND PARALLAX CALCULATION METHOD

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020098674A JP2021192174A (en) 2020-06-05 2020-06-05 Arithmetic device, and parallax calculation method
JP2020-098674 2020-06-05

Publications (1)

Publication Number Publication Date
WO2021245972A1

Family

ID=78830353

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/003114 WO2021245972A1 (en) 2020-06-05 2021-01-28 Computation device and parallax calculation method

Country Status (3)

Country Link
JP (1) JP2021192174A (en)
DE (1) DE112021001906T5 (en)
WO (1) WO2021245972A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117372431B (en) * 2023-12-07 2024-02-20 青岛天仁微纳科技有限责任公司 Image detection method of nano-imprint mold

Citations (5)

Publication number Priority date Publication date Assignee Title
JP2006221603A (en) * 2004-08-09 2006-08-24 Toshiba Corp Three-dimensional-information reconstructing apparatus, method and program
JP2011165117A (en) * 2010-02-15 2011-08-25 Nec System Technologies Ltd Apparatus, method and program for processing image
JP2011164905A (en) * 2010-02-09 2011-08-25 Konica Minolta Holdings Inc Device for retrieving corresponding point
JP2018132897A (en) * 2017-02-14 2018-08-23 日立オートモティブシステムズ株式会社 In-vehicle environment recognition apparatus
JP2020021126A (en) * 2018-07-30 2020-02-06 キヤノン株式会社 Image processing device and control method thereof, distance detection device, imaging device, program


Also Published As

Publication number Publication date
DE112021001906T5 (en) 2023-01-26
JP2021192174A (en) 2021-12-16


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21816794

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 21816794

Country of ref document: EP

Kind code of ref document: A1