WO2017002367A1 - Disparity image generation device, disparity image generation method, disparity image generation program, object recognition device, and equipment control system - Google Patents


Info

Publication number
WO2017002367A1
Authority
WO
WIPO (PCT)
Prior art keywords
disparity
valid
value
pixel
pair
Prior art date
Application number
PCT/JP2016/003129
Other languages
French (fr)
Inventor
Sadao Takahashi
Hiroyoshi Sekiguchi
Original Assignee
Ricoh Company, Ltd.
Priority date
Filing date
Publication date
Priority claimed from JP2016088603A external-priority patent/JP6805534B2/en
Application filed by Ricoh Company, Ltd.
Priority to CN201680037648.3A priority Critical patent/CN107735812B/en
Priority to KR1020177037261A priority patent/KR102038570B1/en
Priority to EP16817477.9A priority patent/EP3317850B1/en
Publication of WO2017002367A1 publication Critical patent/WO2017002367A1/en
Priority to US15/854,461 priority patent/US10520309B2/en


Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/04Interpretation of pictures
    • G01C11/06Interpretation of pictures by comparison of two or more pictures of the same area
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C3/00Measuring distances in line of sight; Optical rangefinders
    • G01C3/02Details
    • G01C3/06Use of electric means to obtain final indication
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/593Depth or shape recovery from multiple images from stereo images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/98Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/98Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
    • G06V10/993Evaluation of the quality of the acquired pattern
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle

Definitions

  • the present invention relates to a disparity image generation device, a disparity image generation method, a disparity image generation program, an object recognition device, and an equipment control system.
  • a technique is known for rapidly detecting an object such as a person or an automobile by measuring distances with millimeter-wave radar, laser radar, or a stereo camera, for example.
  • a position of a road surface is detected after interpolating a disparity of the object, and an object in contact with the road surface is then detected.
  • a detection output of such an object is used for automatic brake control, automatic steering wheel control, or the like.
  • a disparity in a horizontal direction needs to be detected.
  • as methods for detecting a disparity, a block matching method and a sub-pixel interpolation method are known.
  • PTL 1 Japanese Laid-open Patent Publication No. 11-351862 discloses a technique for creating an interpolated disparity image, when there are pixels having the same disparity on the left and the right in a disparity image from which an object at the same height as the road surface is eliminated, by substituting a disparity value into a pixel between the left pixel and the right pixel to detect a forward vehicle in a driver's own lane and obtain a distance.
  • a disparity of a portion having a virtually vertical edge or texture can be detected with high accuracy.
  • there is difficulty in detecting the disparity at a virtually horizontal edge in the block matching method in the related art; even if the disparity can be detected, much noise is disadvantageously included therein.
  • a three-dimensional object such as a preceding vehicle is a box-shaped object, and can be regarded as a collection of perpendicular lines on the left and right ends and horizontal lines connecting those perpendicular lines. It is hard to detect the disparity of this object except at the perpendicular lines on both ends. This means that a valid disparity exists at a portion where a vertical edge is present.
  • as a result, the perpendicular lines are not recognized as one object, and are erroneously recognized as two objects running side by side.
  • a technique has been developed in which the perpendicular lines can be recognized as one object by interpolating the disparity.
  • this technique has a problem in that there is difficulty in recognizing the object correctly because the disparity is interpolated between the automobile and another automobile running side by side, a nearby sign, and another three-dimensional object.
  • disparities between the automobile and the other three-dimensional object are interpolated with the same value, so that objects having incorrect sizes are recognized.
  • moreover, the disparity always includes an error, so that interpolation cannot be performed only between disparities having exactly the same value, and difficulty still arises in recognizing the object.
  • the present invention is made in view of the above described problem, and provides a disparity image generation device, a disparity image generation method, a disparity image generation program, an object recognition device, and an equipment control system for generating a disparity image appropriate for recognizing an object.
  • one aspect of the present invention includes a valid pixel determination unit configured to determine a valid pixel based on a feature value of each pixel in a captured image; and a validation unit configured to validate a disparity, which is not a valid disparity, near a valid disparity corresponding to the valid pixel in a disparity image corresponding to the captured image.
  • a disparity image appropriate for recognizing an object can be generated.
  • Fig. 1 is a schematic diagram illustrating a schematic configuration of an equipment control system according to a first embodiment.
  • Fig. 2 is a block diagram illustrating a schematic configuration of an imaging unit and an analyzing unit disposed in the equipment control system according to the first embodiment.
  • Fig. 3 is a block diagram illustrating a functional configuration of the analyzing unit according to the first embodiment.
  • Fig. 4 is a block diagram illustrating a functional configuration of a principal part of a disparity arithmetic unit according to the first embodiment.
  • Fig. 5 is a flowchart illustrating processing performed by the disparity arithmetic unit according to the first embodiment.
  • Fig. 6 is a diagram illustrating a specific example of processing performed by the disparity arithmetic unit illustrated in Fig. 5.
  • Fig. 7 is a diagram for explaining a case in which a result of invalidation determination in the processing illustrated in Fig. 5 is true.
  • Fig. 8 is a diagram for explaining a case in which the result of invalidation determination in the processing illustrated in Fig. 5 is false.
  • Fig. 9 is a diagram for explaining a threshold that can prevent a false result of invalidation determination from occurring in the processing illustrated in Fig. 5.
  • Fig. 10 is another diagram for explaining a threshold that can prevent a false result of invalidation determination from occurring in the processing illustrated in Fig. 5.
  • Fig. 11 is a diagram for explaining minimum value processing and exception processing performed by the disparity arithmetic unit.
  • Fig. 12 is a diagram illustrating an example of an arithmetic result of the disparity arithmetic unit according to the first embodiment.
  • Fig. 13 is a diagram illustrating another example of the arithmetic result of the disparity arithmetic unit according to the first embodiment.
  • Fig. 14 is a diagram illustrating an example of a graph of a matching processing result.
  • Fig. 15 is a diagram illustrating another example of the graph of the matching processing result.
  • Fig. 16 is a specific functional block diagram of a valid disparity determination unit.
  • Fig. 17 is a further specific functional block diagram of the valid disparity determination unit.
  • Fig. 18 is a schematic diagram for explaining an operation of setting a valid disparity pair in a valid disparity pair setting unit.
  • Fig. 19 is a functional block diagram of a valid disparity determination unit in an equipment control system according to a second embodiment.
  • Fig. 20 is a schematic diagram for explaining a search range of the valid disparity pair.
  • Fig. 21 is a functional block diagram of a valid disparity determination unit in an equipment control system according to a third embodiment.
  • Fig. 22 is a functional block diagram of a principal part of an equipment control system according to a fourth embodiment.
  • Fig. 23 is a flowchart illustrating a procedure of disparity image generation processing in a disparity image generator in the equipment control system according to the fourth embodiment.
  • Fig. 24 is a diagram for explaining a conventional determination method that erroneously determines a pixel in which erroneous matching occurs to be a valid pixel.
  • Fig. 25 is a diagram for explaining the disparity image generator in the equipment control system according to the fourth embodiment that accurately detects the valid pixel to be output.
  • First Embodiment: Fig. 1 is a schematic diagram illustrating a schematic configuration of the equipment control system according to the first embodiment.
  • the equipment control system is disposed in a vehicle 1 such as an automobile as an example of equipment.
  • the equipment control system includes an imaging unit 2, an analyzing unit 3, a control unit 4, and a display unit 5.
  • the imaging unit 2 is disposed near the rear-view mirror on a windshield 6 of the vehicle 1, and captures an image in the traveling direction of the vehicle 1, for example.
  • Various pieces of data including image data obtained through an imaging operation of the imaging unit 2 are supplied to the analyzing unit 3.
  • the analyzing unit 3 analyzes an object to be recognized such as a road surface on which the vehicle 1 is traveling, a vehicle preceding the vehicle 1, a pedestrian, and an obstacle based on the various pieces of data supplied from the imaging unit 2.
  • the control unit 4 gives a warning and the like to a driver of the vehicle 1 via the display unit 5 based on an analysis result of the analyzing unit 3.
  • the control unit 4 supports traveling by controlling various onboard devices, performing steering wheel control or brake control of the vehicle 1, for example, based on the analysis result.
  • Fig. 2 is a schematic block diagram of the imaging unit 2 and the analyzing unit 3.
  • the imaging unit 2 has a stereo camera configuration including two imaging units 10A and 10B, for example.
  • the two imaging units 10A and 10B have the same configuration.
  • the imaging units 10A and 10B include imaging lenses 11A and 11B, image sensors 12A and 12B in which light receiving elements are two-dimensionally arranged, and controllers 13A and 13B that drive the image sensors 12A and 12B to take an image.
  • the analyzing unit 3 is an example of an object recognition device, and includes a field-programmable gate array (FPGA) 14, a random access memory (RAM) 15, and a read only memory (ROM) 16.
  • the analyzing unit 3 also includes a serial interface (serial IF) 18 and a data IF 19.
  • the components from the FPGA 14 to the data IF 19 are connected to each other via a data bus line 21 of the analyzing unit 3.
  • the imaging unit 2 and the analyzing unit 3 are connected with each other via the data bus line 21 and a serial bus line 20.
  • the RAM 15 stores disparity image data and the like generated based on luminance image data supplied from the imaging unit 2.
  • the ROM 16 stores an operating system and various programs such as an object detection program including a disparity image generation program.
  • the FPGA 14 operates in accordance with the disparity image generation program included in the object detection program. As described later in detail, the FPGA 14 causes one of captured images captured by the imaging units 10A and 10B to be a reference image, and causes the other one thereof to be a comparative image. The FPGA 14 calculates a position shift amount between a corresponding image portion on the reference image and a corresponding image portion on the comparative image, both corresponding to the same point in an imaging region, as a disparity value (disparity image data) of the corresponding image portion.
  • in calculating the disparity from a stereo image captured by the imaging unit 2, the FPGA 14 first calculates many disparities based on block matching, both at pixel positions having an edge and at other portions. Thereafter, the disparity of each pixel having an edge is validated. When a difference between a validated disparity and another validated disparity positioned nearby is equal to or smaller than a predetermined value, a disparity positioned between them is also validated. “Validate” means to specify (or extract) the disparity as information used for the processing of recognizing an object.
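The validation rule above can be sketched for a single image row as follows. This is a minimal Python illustration, not the FPGA implementation; the function name and the thresholds `max_diff` and `max_gap` are assumptions for the sketch, not values from the patent.

```python
def validate_between(disparities, valid_mask, max_diff=2, max_gap=8):
    """Validate non-valid disparities lying between two nearby validated ones.

    disparities: disparity values for one image row.
    valid_mask:  True where the disparity is already validated
                 (e.g. the pixel lies on a vertical edge).
    When two validated disparities are at most max_gap pixels apart and
    their values differ by at most max_diff, the disparities between
    them are validated as well.
    """
    out = list(valid_mask)
    valid_idx = [i for i, v in enumerate(valid_mask) if v]
    for a, b in zip(valid_idx, valid_idx[1:]):
        if b - a <= max_gap and abs(disparities[a] - disparities[b]) <= max_diff:
            for i in range(a + 1, b):
                out[i] = True
    return out
```

For example, a row with validated disparities only at the two vehicle edges would have the in-between disparities validated as well, so the vehicle is treated as one object rather than as two perpendicular lines.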
  • the equipment control system can appropriately generate disparity information of the preceding vehicle not only at a vehicle edge but also in the vehicle and other spaces.
  • accordingly, the vehicle can be recognized as one object with a correct size and distance, and can be prevented from being coupled with another object and erroneously detected.
  • a CPU 17 operates based on the operating system stored in the ROM 16, and performs overall imaging control on the imaging units 10A and 10B.
  • the CPU 17 loads the object detection program from the ROM 16, and performs various pieces of processing using the disparity image data written into the RAM 15.
  • the CPU 17 refers to controller area network (CAN) information such as vehicle speed, acceleration, a steering angle, and a yaw rate acquired from each sensor disposed in the vehicle 1 via the data IF 19, and performs processing of recognizing the object to be recognized such as a road surface, a guardrail, a vehicle, and a person, disparity calculation, calculation of a distance to the object to be recognized, and the like.
  • the CPU 17 supplies a processing result to the control unit 4 illustrated in Fig. 1 via the serial IF 18 or the data IF 19.
  • the control unit 4 is an example of a control device, and performs, for example, brake control, vehicle speed control, and steering wheel control based on data as the processing result.
  • the control unit 4 causes the display unit 5 to display a warning and the like based on the data as the processing result. This configuration can support driving of the vehicle 1 by the driver.
  • the following specifically describes an operation of generating the disparity image and an operation of recognizing the object to be recognized in the equipment control system according to the first embodiment.
  • image correction uses a polynomial expression, for example, a quintic polynomial regarding x (a horizontal direction position of the image) and y (a vertical direction position of the image). Accordingly, a parallel luminance image can be obtained in which distortion of the optical system in the imaging units 10A and 10B is corrected.
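The correction can be pictured as evaluating a low-order polynomial map at each pixel coordinate. The Python sketch below is a generic quintic-polynomial evaluation under the assumption that calibrated coefficients are available; the function name and the coefficient values shown are illustrative only, not from the patent.

```python
def correct_coords(x, y, coeffs):
    """Evaluate a polynomial coordinate-correction map at (x, y).

    coeffs maps exponent pairs (i, j) to the coefficient of x**i * y**j,
    with i + j <= 5 for a quintic polynomial. In practice the
    coefficients would come from stereo calibration.
    """
    return sum(c * (x ** i) * (y ** j) for (i, j), c in coeffs.items())

# illustrative map: identity in x plus a tiny quintic distortion term
coeffs_x = {(1, 0): 1.0, (3, 2): 1e-12}
corrected_x = correct_coords(100.0, 50.0, coeffs_x)
```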
  • Such luminance images (a right captured image and a left captured image) are supplied to the FPGA 14 of the analyzing unit 3.
  • Fig. 3 is a functional block diagram of each function that is implemented when the FPGA 14 executes the object detection program stored in the ROM 16 in the equipment control system according to the first embodiment.
  • the FPGA 14 implements a captured image corrector 31, a disparity arithmetic unit 32, a disparity image generator 33, and a recognition processor 34 by executing the object detection program.
  • the captured image corrector 31 performs correction such as gamma correction and distortion correction (parallelization of left and right captured images) on a left captured image and a right captured image.
  • the disparity arithmetic unit 32 calculates a disparity value d from the left and right captured images corrected by the captured image corrector 31. Details about the disparity arithmetic unit 32 will be described later.
  • the disparity image generator 33 generates a disparity image using the disparity value d calculated by the disparity arithmetic unit 32.
  • the disparity image is an image in which the pixel value of each pixel represents the disparity value d calculated for the corresponding pixel on the reference image.
  • the recognition processor 34 recognizes an object preceding the vehicle and generates recognition data as a recognition result using the disparity image generated by the disparity image generator 33.
  • Part or all of the captured image corrector 31 to the recognition processor 34 may be implemented as hardware such as an integrated circuit (IC).
  • the object detection program may be recorded and provided in a computer-readable recording medium such as a compact disc read only memory (CD-ROM), a flexible disk (FD), a compact disc recordable (CD-R), a DVD, a Blu-ray Disc (registered trademark), and a semiconductor memory, as an installable or executable file.
  • the DVD is an abbreviation for a “digital versatile disc”.
  • the object detection program may be provided to be installed via a network such as the Internet.
  • the object detection program may be embedded and provided in a ROM in a device, for example.
  • the disparity arithmetic unit 32 assumes luminance image data of the imaging unit 10A as reference image data, assumes luminance image data of the imaging unit 10B as comparative image data, and generates disparity image data representing the disparity between the two. Specifically, the disparity arithmetic unit 32 defines a block including a plurality of pixels (for example, 16 pixels × 1 pixel) around one pixel of interest for a predetermined “row” of the reference image data.
  • in the comparative image data, a block having the same size as the defined block of the reference image data is shifted in the horizontal line direction (X-direction) pixel by pixel.
  • the disparity arithmetic unit 32 calculates a correlation value representing a correlation between a feature value indicating a feature of a pixel value of the defined block in the reference image data and a feature value indicating a feature of a pixel value of each block in the comparative image data.
  • the disparity arithmetic unit 32 performs matching processing of selecting a block in the comparative image data that is most correlated with the block in the reference image data from among blocks in the comparative image data based on the calculated correlation value. Thereafter, the disparity arithmetic unit 32 calculates, as the disparity value d, a position shift amount between the pixel of interest of the block in the reference image data and the corresponding pixel of the block in the comparative image data selected in the matching processing. By performing such processing of calculating the disparity value d on the entire region or a specific region of the reference image data, the disparity image data is obtained.
  • as the feature value, a value (luminance value) of each pixel in the block can be used.
  • as the correlation value, the sum total of absolute values of differences between the values (luminance values) of the respective pixels in the block in the reference image data and the values (luminance values) of the corresponding pixels in the block in the comparative image data can be used, for example. In this case, the block in which the sum total is the smallest is detected as the most correlated block.
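The matching processing described above can be sketched for one row as follows. This Python sketch uses the sum of absolute differences as the correlation value; the function name, block size, and search range are illustrative assumptions.

```python
def block_match_row(ref, cmp_, x, block=5, max_d=16):
    """Return (disparity, cost) for pixel x of the reference row `ref`.

    A block centred on x in `ref` is compared against blocks of the
    comparative row `cmp_` shifted pixel by pixel; the shift with the
    smallest sum of absolute differences (SAD) wins.
    """
    half = block // 2
    ref_blk = ref[x - half:x + half + 1]
    best_d, best_cost = 0, float("inf")
    for d in range(max_d + 1):
        if x - d - half < 0:
            break  # shifted block would leave the image
        cmp_blk = cmp_[x - d - half:x - d + half + 1]
        cost = sum(abs(a - b) for a, b in zip(ref_blk, cmp_blk))
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d, best_cost
```

Applying such a function to every pixel of every row (or of a specific region) yields the disparity image data.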
  • Fig. 4 is a block diagram illustrating a functional configuration of a principal part of the disparity arithmetic unit 32.
  • the disparity arithmetic unit 32 includes an information processor 440 and an information storage unit 450, which can communicate with each other.
  • the information processor 440 includes a non-similarity calculator 441, an inclination calculator 442, a local minimum value detector 443, a threshold setting unit 444, a flag controller 445, a counter controller 446, and a validity determination unit 447.
  • the information storage unit 450 includes a non-similarity register 451, an inclination register 452, a threshold register 453, a flag register 454, and a local minimum value counter 455.
  • the validity determination unit 447 also operates as an invalidation unit.
  • the non-similarity calculator 441 is an example of an evaluation value calculator, calculates a non-similarity as an evaluation value of a correlation between the reference image and the comparative image (an evaluation value of matching) using a zero-mean sum of squared difference (ZSSD) method disclosed in a reference (Japanese Laid-open Patent Publication No. 2013-45278), for example, and writes the non-similarity into the non-similarity register 451.
  • in place of the zero-mean sum of squared differences (ZSSD) method, a sum of squared differences (SSD) method, a sum of absolute differences (SAD) method, or a zero-mean sum of absolute differences (ZSAD) method may be used.
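The four evaluation values named here can be sketched directly; the zero-mean variants subtract each block's mean luminance first, which makes the cost insensitive to a uniform brightness offset between the two cameras. A minimal Python sketch (function names are illustrative):

```python
def ssd(a, b):
    # sum of squared differences
    return sum((x - y) ** 2 for x, y in zip(a, b))

def sad(a, b):
    # sum of absolute differences
    return sum(abs(x - y) for x, y in zip(a, b))

def _zero_mean(blk):
    m = sum(blk) / len(blk)
    return [x - m for x in blk]

def zssd(a, b):
    # zero-mean SSD: robust to a uniform brightness offset
    return ssd(_zero_mean(a), _zero_mean(b))

def zsad(a, b):
    # zero-mean SAD
    return sad(_zero_mean(a), _zero_mean(b))
```

For two blocks that differ only by a constant brightness offset, `ssd` and `sad` report a large cost while `zssd` and `zsad` report (nearly) zero.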
  • when a disparity value at the sub-pixel level, that is, a value finer than one pixel, is required, an estimation value is used.
  • as a method of estimating the estimation value, for example, an equiangular linear method or a quadratic curve method can be used.
  • an error occurs in the estimated disparity value at the sub-pixel level.
  • therefore, an estimation error correction (EEC) method or the like may be used to reduce the estimation error.
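The two estimation methods can be written down from the minimum cost and its two neighbours. These are the standard equiangular-line and parabola (quadratic curve) formulas, shown as a hedged Python sketch rather than the patent's exact computation:

```python
def subpixel_parabola(c_prev, c_min, c_next):
    """Quadratic-curve method: offset of the true minimum from the
    integer disparity, in the range (-0.5, 0.5)."""
    denom = c_prev - 2.0 * c_min + c_next
    return 0.0 if denom == 0 else (c_prev - c_next) / (2.0 * denom)

def subpixel_equiangular(c_prev, c_min, c_next):
    """Equiangular linear method: fit two lines of equal and opposite
    slope through the three costs around the minimum."""
    if c_prev >= c_next:
        return 0.5 * (c_prev - c_next) / (c_prev - c_min)
    return 0.5 * (c_prev - c_next) / (c_next - c_min)
```

The sub-pixel disparity is the integer disparity plus the returned offset.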
  • the inclination calculator 442 calculates an inclination of non-similarity from a difference value of non-similarity at adjacent shift positions in a case in which the comparative image is shifted with respect to the reference image, and writes the inclination into the inclination register 452.
  • the local minimum value detector 443 is an example of an extreme value detector, and detects the local minimum value of non-similarity as an extreme value of the evaluation value of correlation based on the fact that the inclination value calculated by the inclination calculator 442 is changed from negative to positive.
  • the threshold setting unit 444 is an example of an updater. When a value held by the flag register 454 is “0” (when a flag is off), the threshold setting unit 444 generates an upper threshold Uth and a lower threshold Lth as set values for a range of the local minimum value above and below the local minimum value based on the local minimum value detected by the local minimum value detector 443, and writes the upper threshold Uth and the lower threshold Lth into the threshold register 453. At this point, the flag controller 445 writes a value “1” indicating that the upper threshold Uth and the lower threshold Lth are updated into the flag register 454.
  • the counter controller 446, as an example of a counter, counts up the value of the local minimum value counter 455. The value of the local minimum value counter 455 represents the number of local minimum values of non-similarity within the range of the thresholds held by the threshold register 453.
  • the counter controller 446 counts up the value of the local minimum value counter 455.
  • the counter controller 446 includes a resetting unit.
  • the counter controller 446 resets the value of the local minimum value counter 455.
  • the flag controller 445 writes “0” into the flag register 454, and resets the flag.
  • Fig. 5 is a flowchart illustrating processing performed by the disparity arithmetic unit 32 illustrated in Fig. 4.
  • Fig. 6 is a diagram illustrating a specific example of the processing illustrated in Fig. 5.
  • the horizontal axis in Fig. 6 indicates a search range, that is, a shift amount (deviation) of a pixel position in the comparative image with respect to a pixel position in the reference image, and the vertical axis indicates the non-similarity as the evaluation value for matching.
  • the following describes an operation performed by the disparity arithmetic unit 32 with reference to these drawings.
  • the flowchart illustrated in Fig. 5 is performed for each pixel of the reference image.
  • the search range of the comparative image with respect to the pixels in the reference image is 1 to 68 pixels.
  • in the initial state, no data is written in the non-similarity register 451, the inclination register 452, or the threshold register 453; “0” is set in the flag register 454; and the initial value of the local minimum value counter 455 is “0”.
  • a value (flag) held by the flag register 454 is represented by C
  • a count value of the local minimum value counter 455 is represented by cnt.
  • the process proceeds in order of Step S301, Step S302, and Step S303, and the inclination calculator 442 calculates an inclination between the data(1) and the data(2) (Step S303).
  • the inclination is calculated from a difference between two pieces of data “data(2) - data(1)”.
  • the data(2) is written into the non-similarity register 451 to be held (Step S306); whether t is the last value is determined (Step S307); t is incremented to 3 (Step S312) based on the determination result (No at Step S307); and the process proceeds to Step S301.
  • when “data(2) - data(1)” is negative and “data(3) - data(2)” is positive, it is determined that the inclination has changed from negative to positive (Yes at Step S313). In this case, the held data(t-1), that is, the data(2), is determined to be the local minimum value (Step S317).
  • the data(1) to the data(3) in Fig. 6 correspond to the processes described above. That is, the data(2) is determined to be the local minimum value at Step S317, and the upper threshold Uth1 and the lower threshold Lth1 are updated (as initial setting in this case) above and below the local minimum value at Step S320.
  • the upper threshold Uth and the lower threshold Lth are assumed to be “data(2) + predetermined value” and “data(2) - predetermined value”, respectively.
  • the counter controller 446 counts up the value of the local minimum value counter 455 (Step S321).
  • the count value is “1”.
  • the count value represents the number of local minimum values within a range of the threshold (equal to or smaller than the upper threshold Uth1, and equal to or larger than the lower threshold Lth1) set at Step S320.
  • the process proceeds in order of Step S305, Step S306, and Step S307; the sign of the inclination (positive in this case) and the matching data (the data(3) in this case) are held; whether t is the last is determined (not the last in this case, so No at Step S307); t is incremented to 4 (Step S312); and the process proceeds to Step S301.
  • the process proceeds in order of Step S301, Step S302, Step S303, Step S304, and Step S313, and it is determined whether the inclination has changed from negative to positive in order to detect a local minimum value (Step S313).
  • for the next shift position, the process again proceeds in order of Step S301, Step S302, Step S303, Step S304, and Step S313, and it is determined whether the inclination has changed from negative to positive (Step S313).
  • it is determined whether a local minimum value has previously been generated and the data(t) (the data(5) in this case) has become lower than the lower threshold (Step S315). In the case of No at Step S315, that is, when no local minimum value has previously been generated, or when the data(t) is not lower than the lower threshold even though a local minimum value has been generated, the process proceeds to Step S305.
  • the process proceeds in order of Step S301, Step S302, Step S303, Step S304, and Step S313, and it is determined whether the inclination has changed from negative to positive.
  • the upper threshold Uth1 and the lower threshold Lth1 are updated to be an upper threshold Uth2 and a lower threshold Lth2, respectively, as illustrated in Fig. 6.
  • the upper threshold Uth2 and the lower threshold Lth2 are “data(7) + predetermined value” and “data(7) - predetermined value”, respectively.
  • it is determined whether the data(9), determined to be the local minimum value at Step S317, is within the range between the lower threshold (Lth2) and the upper threshold (Uth2) (Step S319). If the data(9) is within the range (Yes at Step S319), the counter is counted up (Step S321), and the process proceeds to Step S305. If the data(9) is out of the range (No at Step S319), the process directly proceeds to Step S305.
  • the data(9) is within the range of the lower threshold (Lth2) and the upper threshold (Uth2), so that the data(9) is counted up and the value of the local minimum value counter 455 becomes “2”.
  • the count value “2” means that there are two local minimum values within the range of the latest thresholds (the upper threshold Uth2 and the lower threshold Lth2 in this case).
  • Step S301 the processes are repeated in order of Step S301 → Step S302 → Step S303 → Step S304 → Step S313 until the last t (68 in this case) of the search range is reached.
  • Step S307 the counter controller 446 outputs the count value of the local minimum value counter 455 (Step S308).
  • the validity determination unit 447 determines whether the count value is equal to or larger than a predetermined value (for example, 2) (Step S309). If the count value is equal to or larger than the predetermined value (Yes at Step S309), the validity determination unit 447 determines that the count value is invalid (Step S310), and sets the flag so that the recognition processor 34 does not use the disparity value of the pixel in the reference image (Step S311).
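The walkthrough above can be condensed into a one-pass sketch. This is an illustrative simplification, not the exact Step S301 to S321 flow: `k` stands in for the "predetermined value" that forms the threshold band, and plateaus in the data receive only minimal handling.

```python
def scan_local_minima(data, k, invalid_count=2):
    """One-pass scan of the matching data: track local minima of the
    non-similarity, re-centre the threshold band on each newly found
    smallest local minimum, and count minima inside the current band."""
    uth = lth = None          # upper/lower thresholds, set at the first minimum
    count = 0                 # cf. the local minimum value counter 455
    best_t = None             # position of the smallest local minimum so far
    prev_slope = 0
    for t in range(1, len(data)):
        slope = data[t] - data[t - 1]
        if prev_slope < 0 and slope > 0:   # inclination changed: minimum at t-1
            v = data[t - 1]
            if lth is None or v < lth:     # new smallest minimum: reset the band
                uth, lth = v + k, v - k
                count, best_t = 1, t - 1
            elif v <= uth:                 # another minimum inside the band
                count += 1
        if slope != 0:
            prev_slope = slope             # carry the slope sign across plateaus
    return best_t, count, count < invalid_count  # (disparity, count, valid?)
```

A repetitive pattern such as [5, 1, 5, 1, 5, 1, 5] yields a count of 3 and is rejected, while data with a single sharp minimum keeps its disparity.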
  • Fig. 14 illustrates a graph of an example of a matching processing result.
  • the horizontal axis indicates the search range, that is, the shift amount (deviation) of the pixel position in the comparative image with respect to the pixel position in the reference image
  • the vertical axis indicates the non-similarity as the evaluation value for correlation.
  • the non-similarity is the smallest for the seventh search pixel surrounded with a circle, so that 7 is the most probable disparity value.
  • a negative value on the horizontal axis is of a search range for obtaining a sub-pixel disparity.
  • In a process of calculating the disparity of an object having a repetitive pattern on its external appearance, such as a building on which windows having the same design are lined up, a tile wall on which the same shapes and figures are lined up, a fence, a load-carrying platform of a truck vehicle, or a load-carrying platform of a trailer vehicle, as illustrated in Fig. 15, two or more (six in the example of Fig. 15) matched portions may appear in some cases, so that the most probable disparity value may be erroneously output.
  • an erroneous disparity value is actually output (erroneous matching), the erroneous disparity value indicating that the object having the repetitive pattern present at a distant position is positioned nearby.
  • the disparity value for the distance of 5 m and the disparity value for the distance of 2 m are mixed to be output. Due to this, in object recognition processing at a rear stage, one wall is recognized as two walls including a wall having a distance of 2 m from the own vehicle and a wall having a distance of 5 m from the own vehicle. Then a brake is operated although the distance between the tile wall and the own vehicle is 5 m, which is called “erroneous braking”.
  • the disparity arithmetic unit 32 searches for neither the number of values of non-similarity close to each other nor the most probable disparity value after calculation of the non-similarity in the search range is finished.
  • the disparity arithmetic unit 32 counts the number of local minimum values of non-similarity and searches for the disparity at the same time.
  • the disparity arithmetic unit 32 updates the predetermined range, and counts the number of local minimum values of non-similarity in the updated range. Due to this, when a repetitive pattern appears, the time until the disparity arithmetic unit 32 determines whether to use the repetitive pattern for the object recognition processing can be shortened without increasing the processing time.
  • “erroneous braking” can be suppressed.
  • search is performed in an ascending order of t. Search may also be performed in a descending order of t.
  • the upper threshold and the lower threshold are set in accordance with the local minimum value. Alternatively, an arbitrary upper threshold and lower threshold may be initially set at the time when the procedure is started.
  • the non-similarity is used as the evaluation value for correlation, the value of the non-similarity being reduced as the correlation is increased.
  • a similarity, the value of which increases as the correlation increases, may also be used.
  • the upper threshold Uth1 and the lower threshold Lth1 are “data(2) + predetermined value” and “data(2) - predetermined value”, respectively, and the upper threshold Uth2 and the lower threshold Lth2 are “data(7) + predetermined value” and “data(7) - predetermined value”, respectively. That is, the upper threshold Uth and the lower threshold Lth are calculated to be set using expressions of “newly detected local minimum value + predetermined value” and “newly detected local minimum value - predetermined value”, respectively.
  • the upper threshold and the lower threshold calculated by the above expressions are referred to as a first upper threshold and a first lower threshold, respectively.
  • The following describes a case in which a result of invalidation determination (Step S310) in the processing illustrated in Fig. 5 is true, and a case in which the result is false.
  • the following also describes a second upper threshold and a second lower threshold serving as an upper threshold and a lower threshold by which the occurrence of the false result can be reduced.
  • FIG. 7 is a diagram for explaining a case in which a result of invalidation determination in the processing illustrated in Fig. 5 is true.
  • the horizontal axis and the vertical axis in Fig. 7 and Fig. 8 to Fig. 13 are the same as those in Fig. 6, and indicate the search range and the non-similarity, respectively.
  • Fig. 7 illustrates a matching processing result in a case in which an image in the search range has a repetitive pattern having much texture. Amplitude of non-similarity (for example, ZSSD) is large due to the much texture.
  • Uth (first upper threshold) and Lth (first lower threshold) are set to be “data(ta) + k (predetermined value)” and “data(ta) - k (predetermined value)”, respectively, with respect to data(ta) as the local minimum value, and three local minimum values in the range of threshold are counted. Based on this correct count value, a true determination result (invalid) can be obtained.
  • FIG. 8 is a diagram for explaining a case in which the result of invalidation determination in the processing illustrated in Fig. 5 is false.
  • Fig. 8 illustrates a matching processing result in a case in which the image in the search range has no repetitive pattern and has less texture. The amplitude of non-similarity is small due to the less texture.
  • the minimum value data(tc) is present at one point. Accordingly, although a correct disparity value tc can be obtained, five local minimum values are counted in a range of Uth (first upper threshold) and Lth (first lower threshold) set above and below data(tb) as the local minimum value, not being the minimum value. Then invalidation is determined based on the count value.
  • Fig. 9 and Fig. 10 are examples of diagrams for explaining a second threshold that can prevent a false result of invalidation determination from occurring in the processing illustrated in Fig. 5.
  • Fig. 9 and Fig. 10 illustrate a matching processing result in a case in which the image in the search range is the same as that in Fig. 7 and Fig. 8, respectively.
  • the second upper threshold and the second lower threshold are set to values corresponding to the newly detected local minimum value. That is, in the case of Fig. 9 for example, Uth (second upper threshold) and Lth (second lower threshold) are set to be “data(ta) × Um” and “data(ta) × Lm”, respectively, with respect to the data(ta) as the local minimum value.
  • Um and Lm are coefficients representing a ratio. Values of Um and Lm satisfy “Um > 1 > Lm”, and may be any values so long as an updated upper threshold is smaller than the lower threshold before updating. In the case of Fig. 9, similarly to the case of Fig. 7, three local minimum values within the range of threshold are counted.
  • Uth (second upper threshold) and Lth (second lower threshold) are set to be “data(tc) × Um” and “data(tc) × Lm”, respectively, with respect to the data(tc) as the smallest local minimum value.
  • the count value of the local minimum value is “1”, so that a correct disparity value tc is employed.
  • the upper threshold and the lower threshold corresponding to the local minimum value are calculated by multiplying the local minimum value by the coefficient.
  • k in Fig. 7 and Fig. 8 may be changed depending on the local minimum value in place of being fixed at a predetermined value.
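The two threshold schemes can be written side by side. This is an illustrative sketch; the numeric values are made up. The multiplicative band scales with the local minimum itself, so it stays wide for high-amplitude (much-texture) data and narrow for low-amplitude (less-texture) data, which is why it reduces false invalidation.

```python
def first_band(local_min, k):
    # First scheme: fixed additive offset k around the local minimum.
    return local_min + k, local_min - k

def second_band(local_min, um, lm):
    # Second scheme: multiplicative coefficients with Um > 1 > Lm.
    return local_min * um, local_min * lm
```

With the same coefficients, `second_band(1000, 1.2, 0.8)` gives the wide band (1200.0, 800.0) while `second_band(100, 1.2, 0.8)` gives the narrow band (120.0, 80.0), unlike the fixed offset of `first_band`.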
  • Minimum value processing and exception processing: Fig. 11 is a diagram for explaining minimum value processing and exception processing performed by the disparity arithmetic unit 32.
  • As a basis of disparity value calculation, finding the disparity value at which the non-similarity such as ZSSD takes the minimum value is a prerequisite. Accordingly, in addition to the algorithm illustrated in Fig. 5 for counting the number of local minimum values within the range of the upper and lower thresholds at a minimum level, a pure minimum value and the disparity value corresponding thereto need to be successively searched for.
  • the pure minimum value is successively processed, and when the minimum value is smaller than the lower threshold Lth that is finally updated, the disparity that gives the minimum value is output.
  • invalidation determination is forcibly performed as exception processing.
  • the local minimum value counter 455 is reset to 0 at Step S316 based on data(68) as the minimum value.
  • invalidation determination is forcibly performed.
  • the search range t indicated by the horizontal axis of A in Fig. 11 is -2 to 65, and data(65) on the right end is the minimum value.
  • the search range t indicated by the horizontal axis of B in Fig. 11 is -2 to 65, and data(-2) on the left end is the minimum value.
  • The local minimum value counter 455 is counted up. For example, if only the left end is included in the finally determined threshold range, the output count value is counted up by 1. If only the right end is included in the finally determined threshold range, the output count value is counted up by 1. If both the left end and the right end are included in the finally determined threshold range, the output count value is counted up by 2.
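The end-of-range exception can be sketched as a post-scan step. It assumes the scan has already produced the finally updated band (lth, uth) and a count; the slope sign-change test can never fire at either end of the search range, so the ends are checked separately here.

```python
def add_end_exceptions(data, lth, uth, count):
    # A minimum sitting at the left or right end of the search range has
    # no slope sign change around it, so it is counted afterwards: +1 if
    # only one end falls in the final band, +2 if both ends do.
    for end_value in (data[0], data[-1]):
        if lth <= end_value <= uth:
            count += 1
    return count
```

For example, with both ends inside the final band the count grows by 2, matching the three cases described above.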
  • Fig. 12 is a diagram illustrating a first example of an arithmetic result of the disparity arithmetic unit 32
  • Fig. 13 is a diagram illustrating a second example thereof.
  • the horizontal axis indicates the search range
  • the non-similarity indicated by the vertical axis is ZSSD calculated using a block of 7 pixels × 7 pixels.
  • a negative portion of the search range is used for obtaining the sub-pixel disparity.
  • Fig. 12 is obtained by calculating disparity values of a captured image of a window of a building.
  • the upper threshold and the lower threshold are finally updated values (set in accordance with the local minimum value of the 8th pixel in the search range, in this case).
  • the number of local minimum values in this threshold range is 4, and invalidation is determined by the validity determination unit 447.
  • Fig. 13 is obtained by calculating disparity values of a captured image of a tile wall.
  • the upper threshold and the lower threshold are finally updated values (set in accordance with the local minimum value of the 23rd pixel in the search range, in this case).
  • the number of local minimum values in this threshold range is 2, and invalidation is determined by the validity determination unit 447.
  • the smallest non-similarity is detected from among the first non-similarity, the second non-similarity, and the third non-similarity, and the smallest non-similarity is compared with the fourth and subsequent non-similarities.
  • Detection processing may be performed for each pixel to detect the minimum value of non-similarity so that the minimum value of non-similarity within a predetermined threshold is detected. In this case, a load of arithmetic processing of non-similarity on the FPGA 14 can be reduced.
  • the non-similarity indicating the minimum value may be detected, and the number of non-similarities included within the threshold determined based on the minimum value of non-similarity may be detected to detect the minimum value of non-similarity.
  • the disparity image generator 33 includes an edge validation unit 103, a pair position calculator 104, and an intra-pair disparity validation unit 105.
  • the edge validation unit 103 is an example of a valid pixel determination unit to which the disparity value d (disparity image) calculated by the disparity arithmetic unit 32 and the luminance image generated by the captured image corrector 31 are supplied.
  • the edge validation unit 103 determines, to be an edge pixel, a pixel in which an amount of an edge component is equal to or larger than a predetermined component amount in the luminance image, and validates the disparity value at the pixel position.
  • the amount of edge component in the luminance image is an example of a feature value.
  • the edge pixel is a valid pixel, and the disparity corresponding to the edge pixel on the disparity image is a valid disparity.
  • the pair position calculator 104 is an example of a calculator, assumes two adjacent valid disparities on the same line in the disparity image as a valid disparity pair, and calculates a distance difference in a depth direction in a real space and an interval in a horizontal direction (positional relation) of the disparities. The pair position calculator 104 then determines whether the distance difference is within the predetermined threshold range, and whether the interval in the horizontal direction is within another predetermined threshold range in accordance with the disparity value of the valid disparity pair.
  • the intra-pair disparity validation unit 105 is an example of a validation unit, and validates the intra-pair disparity in the valid disparity pair (disparity between the valid disparity pair) when both of the distance difference and the interval in the horizontal direction determined by the pair position calculator 104 are within the threshold range.
  • An intra-pair disparity to be validated is in the vicinity of two disparity values of the valid disparity pair.
  • Fig. 17 illustrates a more detailed functional block diagram of the disparity image generator 33.
  • a valid disparity determination unit 102 includes the edge validation unit 103, the pair position calculator 104, and the intra-pair disparity validation unit 105.
  • the edge validation unit 103 includes an edge amount calculator 106 and a comparator 107.
  • the pair position calculator 104 includes a valid disparity pair setting unit 108, a pair interval calculator 109, a comparator 110, a pair depth difference calculator 111, a comparator 112, and a parameter memory 113.
  • the intra-pair disparity validation unit 105 includes a validation determination unit 114 and a valid disparity determination unit 115.
  • the valid disparity pair setting unit 108 is an example of a pair setting unit.
  • Each of the pair interval calculator 109 and the pair depth difference calculator 111 is an example of a calculator.
  • the valid disparity determination unit 115 is an example of a validation unit.
  • the edge amount calculator 106 of the edge validation unit 103 calculates an edge amount from the luminance image.
  • As a method for calculating the edge amount, for example, a Sobel filter or a secondary differential filter can be used. Considering reduction of hardware and characteristics of block matching processing, a difference between the pixels on both ends on the same line as the pixel of interest may be used.
  • the comparator 107 of the edge validation unit 103 compares an absolute value of the edge amount calculated by the edge amount calculator 106 with an edge amount threshold determined in advance, and supplies a comparison output thereof as a valid disparity flag to the valid disparity pair setting unit 108 of the pair position calculator 104.
  • When the absolute value of the calculated edge amount is larger than the edge amount threshold, the comparison output at high level is supplied to the valid disparity pair setting unit 108 (the valid disparity flag is turned on).
  • Otherwise, the comparison output at low level is supplied to the valid disparity pair setting unit 108 (the valid disparity flag is turned off).
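A minimal sketch of this edge test, using the pixels-on-both-ends variant mentioned above; `half_width` and `edge_threshold` are assumed example parameters, not values given in the text.

```python
def edge_valid_flags(row, half_width, edge_threshold):
    """Edge amount per pixel: difference between the pixels on both ends
    of a horizontal window on the same line as the pixel of interest.
    The valid disparity flag is on where |edge| exceeds the threshold."""
    flags = [False] * len(row)
    for x in range(half_width, len(row) - half_width):
        edge = row[x + half_width] - row[x - half_width]
        flags[x] = abs(edge) > edge_threshold
    return flags
```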
  • the valid disparity flag and the disparity image within the predetermined range described above are supplied to the valid disparity pair setting unit 108 of the pair position calculator 104.
  • the valid disparity pair setting unit 108 sets, as the valid disparity pair, two disparities that are not adjacent to each other at pixel positions closest to each other on the same line for which the valid disparity flag is turned on.
  • Fig. 18 illustrates an example in which a first valid disparity pair, a second valid disparity pair, and a third valid disparity pair are set.
  • the pair interval calculator 109 calculates the interval between the pair in the horizontal direction in the real space from the disparity value of the left pixel of the valid disparity pair and the interval between the pair (pixel unit) on the disparity image.
  • the pair interval calculator 109 calculates depths from the respective two disparities of the valid disparity pair, and calculates an absolute value of a difference between the depths.
  • the comparator 110 compares the interval between the pair in the horizontal direction calculated by the pair interval calculator 109 with a pair interval threshold.
  • the pair interval threshold is determined in advance with reference to an actual width of an object to be detected. For example, to detect a person alone, the pair interval threshold is set to be a width occupied by a person. For example, a width of a large-size vehicle is prescribed to be 2500 mm at the maximum in Japan. Thus, in detecting a vehicle, the pair interval threshold is set to be the maximum width of the vehicle that is legally prescribed.
  • the comparator 110 compares such a pair interval threshold with the interval between the pair in the horizontal direction calculated by the pair interval calculator 109, and supplies a comparison output to the validation determination unit 114 of the intra-pair disparity validation unit 105.
  • the pair depth difference calculator 111 calculates a depth difference in the valid disparity pair described above.
  • a depth difference threshold read from the parameter memory 113 using the pair of disparity values is supplied to the comparator 112.
  • the depth difference threshold read from the parameter memory 113 is determined in accordance with a distance calculated from the disparity value of the left pixel of the valid disparity pair.
  • the depth difference threshold is determined in accordance with the distance because resolution of the disparity obtained from the stereo image of the imaging unit 2 is lowered when the distance to the object to be detected is large, and variance of detection distance becomes large. Accordingly, corresponding to the valid disparity value or a distance calculated therefrom, depth difference thresholds such as 10%, 15%, and 20% of the distance are stored in the parameter memory 113.
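The distance-dependent threshold can be illustrated with the usual stereo relation Z = B·f/d. The baseline, focal length, and the distance break points below are assumed example values; only the 10%, 15%, and 20% fractions come from the text.

```python
def depth_from_disparity(disparity, baseline_m=0.2, focal_px=800.0):
    # Standard stereo relation Z = B * f / d (example camera parameters).
    return baseline_m * focal_px / disparity

def depth_diff_threshold(distance_m):
    # Farther objects have coarser disparity resolution, so a larger
    # fraction of the distance is tolerated (break points are assumed).
    if distance_m < 10.0:
        return 0.10 * distance_m
    if distance_m < 30.0:
        return 0.15 * distance_m
    return 0.20 * distance_m
```

In this sketch the thresholds stored in the parameter memory 113 would simply be the precomputed outputs of `depth_diff_threshold`, indexed by distance band.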
  • the comparator 112 compares the depth difference threshold with the depth difference in the valid disparity pair, and supplies a comparison output to the validation determination unit 114 of the intra-pair disparity validation unit 105.
  • the validation determination unit 114 of the intra-pair disparity validation unit 105 performs intra-pair region validation determination. That is, when the comparison outputs supplied from the comparators 110 and 112 respectively indicate that the interval between the pair in the horizontal direction is equal to or smaller than the pair interval threshold and the pair depth difference is equal to or smaller than the depth difference threshold, the validation determination unit 114 determines that an intra-pair region is valid.
  • the disparity (intra-pair disparity) present in the intra-pair region determined to be valid is supplied to the valid disparity determination unit 115.
  • the valid disparity determination unit 115 determines the supplied intra-pair disparity to be the valid disparity, and outputs the intra-pair disparity as the valid disparity.
  • the disparity value range of the pair of disparities means, assuming that two values of the pair of disparities are D1 and D2 (D1 > D2), a range of “D2 - α, D1 + α” with α as a constant.
  • the constant α is determined based on a variance of a disparity of a subject obtained from the imaging unit 2 (stereo camera).
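The validated range follows directly from this definition. In this sketch the constant margin is written as `alpha`, with an arbitrary example value.

```python
def validate_intra_pair(intra_disparities, d1, d2, alpha):
    # A disparity between the valid disparity pair is validated when it
    # lies within [min(d1, d2) - alpha, max(d1, d2) + alpha].
    lo = min(d1, d2) - alpha
    hi = max(d1, d2) + alpha
    return [d for d in intra_disparities if lo <= d <= hi]
```

For instance, with the pair (12, 10) and a margin of 1, disparities near the pair values survive while outliers such as 3 are dropped.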
  • the recognition processor 34 recognizes, for example, an object, a person, and a guardrail preceding the vehicle using the disparity image generated by the disparity image generator 33 as described above, and outputs recognition data as a recognition result.
  • In calculating the disparity from the stereo image captured by the imaging unit 2, the equipment control system calculates the disparity based on block matching not only for the pixel position having an edge but also for other portions, and calculates many disparities in advance. Thereafter, the equipment control system validates only the disparity of the pixel having an edge, and when a difference between the validated disparity and another validated disparity positioned nearby is equal to or smaller than a predetermined value, validates a disparity having the same value present between the two validated disparities.
  • an appropriate disparity can be generated not only at a boundary of a three-dimensional object but also in the three-dimensional object and other spaces. That is, disparity information of a preceding vehicle can be appropriately generated not only at a vehicle edge but also in the vehicle and other spaces.
  • the disparity image appropriate for recognizing an object can be generated, and the vehicle can be accurately recognized as one object with correct size and distance. This configuration can prevent the preceding vehicle from being erroneously detected as coupled with another object.
  • two or more matched portions may appear in some cases, so that the most probable disparity value may be erroneously output.
  • When an erroneous disparity value is actually output (erroneous matching), the erroneous disparity value indicates that the object having the repetitive pattern present at a distant position is positioned nearby.
  • the recognition processor 34 at a rear stage recognizes one wall as two walls including one wall having a distance of 2 m from the own vehicle and the other wall having a distance of 5 m from the own vehicle. Then a brake is operated although the distance between the wall and the own vehicle is 5 m, which is called “erroneous braking”.
  • the disparity arithmetic unit 32 does not search for the number of values of non-similarity close to each other and the most probable disparity value after calculation of non-similarity in the search range is finished.
  • the disparity arithmetic unit 32 counts the number of local minimum values of non-similarity and searches for the disparity at the same time.
  • the disparity arithmetic unit 32 updates the predetermined range, and counts the number of local minimum values of non-similarity in the updated range.
  • the disparity image generator 33 has functions illustrated in Fig. 19.
  • the second embodiment described below is different from the first embodiment only in the operation of the disparity image generator 33.
  • the following describes only differences, and redundant description will not be repeated.
  • a part in Fig. 19 that operates similarly to that in Fig. 17 is denoted by the same reference numeral, and detailed description thereof will not be repeated.
  • the pair position calculator 104 of the disparity image generator 33 includes a valid disparity setting unit 120, a search range setting unit 121 for a disparity to be paired, a setting unit 122 for a disparity to be paired, a pair depth difference calculator 123, and the parameter memory 113.
  • the intra-pair disparity validation unit 105 includes a comparator 124 and the valid disparity determination unit 115.
  • the search range setting unit 121 for a disparity to be paired is an example of a search range setting unit.
  • the pair depth difference calculator 123 is an example of a difference detector.
  • the valid disparity setting unit 120 of the pair position calculator 104 selects a pixel for which the valid disparity flag is turned on (valid pixel) as a comparison output from the edge validation unit 103.
  • Based on the disparity value (valid disparity) of the selected valid pixel and the maximum value of the interval between the pair, the search range setting unit 121 for a disparity to be paired calculates and sets a range in which a disparity to be paired with the valid disparity is searched for, in the right direction of the pixel of the valid disparity on the same line as the selected pixel.
  • the maximum value of the interval between the pair is an example of pair interval information, and synonymous with the pair interval threshold indicating the actual width of the object to be detected.
  • Fig. 20 is a diagram schematically illustrating an operation of searching for a disparity to be paired performed by the search range setting unit 121 for a disparity to be paired.
  • a black solid pixel represents a pixel SG of the valid disparity.
  • Each of the pixel P1 to pixel P4 represents a pixel of the disparity to be paired in the search range set in accordance with the disparity value of the pixel SG of the valid disparity.
  • the search range setting unit 121 for a disparity to be paired calculates a maximum width (right direction) for searching for a disparity to be paired on the disparity image based on the maximum value of the interval between the pair and the disparity value of the selected pixel.
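The pixel width of this search range follows from the same stereo geometry: a real-world offset X at the depth implied by disparity d projects to u = f·X/Z pixels, and with Z = B·f/d this reduces to u = X·d/B, independent of the focal length. The baseline value below is an assumed example.

```python
def pair_search_width_px(valid_disparity, max_pair_width_m, baseline_m=0.2):
    # u = f * X / Z, and with Z = B * f / d this becomes u = X * d / B:
    # the nearer the object (larger disparity), the wider the pixel range.
    return max_pair_width_m * valid_disparity / baseline_m
```

For example, with the legally prescribed 2.5 m maximum vehicle width and a disparity of 8, the search would extend 100 pixels to the right of the valid-disparity pixel.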
  • the setting unit 122 for a disparity to be paired detects a disparity closest to the valid disparity in the search range for a disparity to be paired, and causes the detected disparity to be the disparity to be paired.
  • processing subsequent to setting processing for a disparity to be paired is not performed by the setting unit 122 for a disparity to be paired, and the search range for a disparity to be paired and the disparity to be paired are set based on the valid disparity that is subsequently set.
  • the disparity to be paired set by the setting unit 122 for a disparity to be paired is input to the pair depth difference calculator 123 together with the valid disparity as a pair.
  • the pair depth difference calculator 123 calculates an absolute value of a difference in distance based on the input valid disparity and the disparity to be paired.
  • the comparator 124 of the intra-pair disparity validation unit 105 compares the depth difference threshold read from the parameter memory 113 with the depth difference calculated by the pair depth difference calculator 123 based on the valid disparity.
  • the valid disparity determination unit 115 determines that an intra-pair disparity between the valid disparity and the disparity to be paired is the valid disparity to be output.
  • the valid disparity determination unit 115 determines that the disparity within the range of the two disparity values including the valid disparity and the disparity to be paired is the valid disparity to be output.
  • the range of the two disparity values including the valid disparity and the disparity to be paired means, assuming that the two disparity values are D1 and D2 (D1 > D2), a range of “D2 - α, D1 + α” with α as a constant.
  • the constant α can be determined based on a variance of a disparity of a predetermined subject obtained from the imaging unit 2.
  • the number of disparity points can be controlled without increasing disparity noise, and the same effect as that of the first embodiment can be obtained.
  • the disparity image generator 33 has functions illustrated in Fig. 21.
  • the third embodiment described below is different from the first embodiment only in the operation of the disparity image generator 33. Thus, the following describes only differences, and redundant description will not be repeated.
  • a part in Fig. 21 that operates similarly to that in Fig. 17 is denoted by the same reference numeral, and detailed description thereof will not be repeated.
  • the edge validation unit 103 includes the edge amount calculator 106, a comparator 131, and a comparator 132.
  • a first valid disparity flag from the comparator 131 is supplied to the valid disparity pair setting unit 108 of the pair position calculator 104, and a second valid disparity flag from the comparator 132 is supplied to the valid disparity determination unit 115 of the intra-pair disparity validation unit 105.
  • the first valid disparity flag is an example of first valid disparity information.
  • the second valid disparity flag is an example of second valid disparity information.
  • the disparity value of the edge pixel is validated using a plurality of thresholds such as two thresholds (alternatively, three or more thresholds may be used). Specifically, a first edge amount threshold is larger than a second edge amount threshold. The first edge amount threshold is supplied to the comparator 131, and the second edge amount threshold is supplied to the comparator 132.
  • the comparator 131 compares an absolute value of the edge amount calculated by the edge amount calculator 106 with the first edge amount threshold, and supplies the first valid disparity flag for validating a pixel that makes a valid disparity pair to the valid disparity pair setting unit 108 of the pair position calculator 104.
  • the comparator 132 compares the absolute value of the edge amount calculated by the edge amount calculator 106 with the second edge amount threshold, and supplies the second valid disparity flag for finally validating the pixel of the intra-pair disparity to the valid disparity determination unit 115.
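The two comparators reduce to a pair of comparisons against thresholds with `first_th > second_th`; the threshold values in the test below are illustrative assumptions.

```python
def edge_disparity_flags(edge_amount, first_th, second_th):
    # first_th > second_th: the stricter first flag selects pixels allowed
    # to form valid disparity pairs; the looser second flag selects the
    # intra-pair pixels whose disparities may finally be validated.
    magnitude = abs(edge_amount)
    return magnitude > first_th, magnitude > second_th
```

A pixel can thus fail the pair-forming test yet still have its intra-pair disparity validated, which is how the number of validated disparities is controlled.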
  • the number of disparities to be validated among intra-pair disparities can be controlled, a disparity image optimum for object detection processing at a rear stage can be generated, and the same effect as that in the above embodiments can be obtained.
  • Validation processing of the edge pixel using a plurality of thresholds performed by the edge validation unit 103 can be applied to the second embodiment.
  • the valid disparity setting unit 120 and the search range setting unit 121 for a disparity to be paired are assumed to perform processing on the same line of the disparity image.
  • the valid disparity pair may be set within a range of three lines in total including the same line of the disparity image and lines upper and lower than the same line of the disparity image.
  • Equipment control system according to a fourth embodiment: With the equipment control system according to the fourth embodiment, one object can be correctly recognized as one object by reducing the number of disparity values caused by the erroneous matching described above using Fig. 14 and Fig. 15, and by performing object recognition processing with a disparity image including many valid disparity values. Due to this, a correct support operation can be performed.
  • Fig. 22 is a functional block diagram of the disparity image generator 33 disposed in the equipment control system according to the fourth embodiment. As illustrated in Fig. 22, the disparity image generator 33 includes a matching cost calculator 501, an edge detector 502, a repetitive pattern detector 503, an entire surface disparity image generator 504, and a generator 505.
  • The disparity image generator 33 is implemented when the FPGA 14 executes the object detection program stored in the ROM 16.
  • The matching cost calculator 501 to the generator 505 are implemented as software.
  • Alternatively, part or all of the matching cost calculator 501 to the generator 505 may be implemented as hardware such as an integrated circuit (IC).
  • The object detection program may be recorded and provided in a computer-readable recording medium such as a compact disc read only memory (CD-ROM) or a flexible disk (FD), as an installable or executable file.
  • The object detection program may also be recorded and provided in a computer-readable recording medium such as a compact disc recordable (CD-R), a DVD, a Blu-ray Disc (registered trademark), or a semiconductor memory.
  • Here, DVD is an abbreviation for “digital versatile disc”.
  • The object detection program may be provided to be installed via a network such as the Internet.
  • The object detection program may be embedded in, and provided with, a ROM in a device, for example.
  • The flowchart of Fig. 23 illustrates a procedure of the disparity image generation processing performed by the disparity image generator 33.
  • The matching cost calculator 501 calculates a non-similarity (matching cost) for each pixel of a reference image and a comparative image present on the same scanning line, among the reference images and comparative images captured by the imaging unit 2.
  • The entire surface disparity image generator 504 generates an entire surface disparity image, in which all pixels are represented by disparity values, based on the calculated non-similarity.
  • The repetitive pattern detector 503, as an example of a discrimination unit and a pattern detector, discriminates the validity of each pixel of the stereo image based on the number of local minimum values of non-similarity within the range finally updated within the search range, as described above.
  • The repetitive pattern detector 503 performs the detection processing of a repetitive pattern described in the first embodiment for each pixel. That is, as described in the first embodiment, the repetitive pattern detector 503 counts the number of local minimum values of non-similarity while searching for the disparity, updates the predetermined range when a local minimum value of non-similarity falls outside the predetermined range, and counts the number of local minimum values of non-similarity within the updated range for each pixel. The repetitive pattern detector 503 adds, to a pixel in which no repetition occurs, validation information indicating that there is no repetition (sets a validation flag).
  • The edge detector 502 adds, to a pixel having a luminance larger than a predetermined threshold, edge information indicating that the pixel corresponds to an edge of the object (sets an edge flag).
  • The generator 505, as an example of an extractor, extracts, as a pixel of valid disparity, a pixel to which both the validation information and the edge information are added in the entire surface disparity image. That is, the generator 505 extracts a valid disparity at each pixel for which both the validation flag and the edge flag are turned on in the entire surface disparity image.
  • Fig. 24 illustrates an extraction result obtained by a conventional method for extracting the valid disparity.
  • The region in which four disparity values of “4” are continuous is a region in which a correct disparity is obtained.
  • The subsequent region, in which the disparity values are “10, 10, 22, 22, 22, 22, 22, 22”, is a region in which erroneous matching occurs due to an object with a repetitive pattern positioned at a long distance.
  • The region subsequent thereto, in which seven disparity values of “4” are continuous, is again a region in which a correct disparity is obtained.
  • With the conventional method, erroneous determination occurs: a pixel is determined to be a valid pixel even though erroneous matching occurs and its disparity values are incorrect.
  • The generator 505 in the equipment control system, by contrast, extracts, as a pixel of valid disparity, a pixel for which both the validation flag and the edge flag are turned on, as illustrated in Fig. 25. That is, the generator 505 performs, as it were, processing for inputting the validation flag and the edge flag to an AND gate and taking the output. Accordingly, as illustrated in Fig. 25, a pixel having the disparity value of “4”, for which the validation flag and the edge flag are both “1”, is determined to be a valid pixel. The generator 505 also determines, to be a valid pixel, a pixel between the pixels having the disparity value of “4” that have been determined to be valid. In contrast, the pixels in the region in which erroneous matching occurs are all determined to be invalid pixels.
  • The generator 505 outputs, to the recognition processor 34 at the rear stage, a disparity image in which the disparity of erroneous matching is reduced (noise is reduced) and many valid disparities are included (Step S205), and ends the processing in the flowchart of Fig. 23.
  • Thus, one object can be correctly recognized as one object, object recognition processing can be performed correctly, and correct driving support can be performed.
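The flag-based extraction described in the bullets above can be illustrated with a short sketch. This is a simplified Python illustration by the editor, not the patented implementation: the function name is an assumption, and "a pixel between the pixels" is read here as a one-pixel gap between two valid pixels of equal disparity.

```python
def extract_valid_disparities(disparities, validation_flags, edge_flags):
    """AND the no-repetition (validation) flag with the edge flag, keep the
    disparity only where both are set, then also validate a pixel sandwiched
    between two valid pixels sharing the same disparity value."""
    n = len(disparities)
    both = [bool(v) and bool(e) for v, e in zip(validation_flags, edge_flags)]
    out = [d if ok else None for d, ok in zip(disparities, both)]
    # Validate a pixel lying between two already-valid pixels of equal value.
    for i in range(1, n - 1):
        if out[i] is None and out[i - 1] is not None and out[i - 1] == out[i + 1]:
            out[i] = out[i - 1]
    return out
```

Pixels in a region of erroneous matching, where the edge flag (or the validation flag) stays off, remain `None`, i.e. invalid.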


Abstract

A disparity image generation device includes a valid pixel determination unit configured to determine a valid pixel based on a feature value of each pixel in a captured image; and a validation unit configured to validate a disparity, which is not a valid disparity, near a valid disparity corresponding to the valid pixel in a disparity image corresponding to the captured image.

Description

DISPARITY IMAGE GENERATION DEVICE, DISPARITY IMAGE GENERATION METHOD, DISPARITY IMAGE GENERATION PROGRAM, OBJECT RECOGNITION DEVICE, AND EQUIPMENT CONTROL SYSTEM
The present invention relates to a disparity image generation device, a disparity image generation method, a disparity image generation program, an object recognition device, and an equipment control system.
In recent years, a technique has been known for rapidly detecting an object such as a person or an automobile by measuring distances with millimetric wave radar, laser radar, or a stereo camera. For example, to detect the three-dimensional position and size of an object such as a person or a preceding vehicle by measuring distance with a stereo camera, the position of the road surface is detected after interpolating the disparity of the object, and the object in contact with the road surface is detected. A detection output of such an object is used for automatic brake control, automatic steering wheel control, or the like.
To accurately detect the three-dimensional position and size of an object such as a person or a preceding vehicle with a stereo camera, the disparity in the horizontal direction needs to be detected. As methods for detecting a disparity, a block matching method and a sub-pixel interpolation method are known.
PTL 1 (Japanese Laid-open Patent Publication No. 11-351862) discloses a technique for creating an interpolated disparity image: when there are pixels having the same disparity on the left and the right in a disparity image from which objects at the same height as the road surface have been eliminated, a disparity value is substituted into the pixels between the left pixel and the right pixel, in order to detect a forward vehicle in the driver's own lane and obtain its distance.
In the block matching method of the related art, the disparity of a portion having a virtually vertical edge or texture can be detected with high accuracy. However, it is difficult to detect the disparity at a virtually horizontal edge, and even where the disparity can be detected, much noise is disadvantageously included therein.
A three-dimensional object such as a preceding vehicle is a box-shaped object, and can be regarded as a collection of perpendicular lines at the left and right ends and horizontal lines connecting those perpendicular lines. It is hard to detect the disparity of this object except at the perpendicular lines on both ends. This means that a valid disparity is obtained only at portions where a vertical edge is present.
In this case, the perpendicular lines are not recognized as one object, but are erroneously recognized as two objects running side by side. A technique has been developed in which the perpendicular lines can be recognized as one object by interpolating the disparity. However, this technique has a problem in that it is difficult to recognize the object correctly because the disparity is also interpolated between the automobile and another automobile running side by side, a nearby sign, or another three-dimensional object.
In the case of the technique disclosed in PTL 1, disparities between the automobile and another three-dimensional object are interpolated with the same value, so that objects are recognized with incorrect sizes. Moreover, because the disparity always includes an error, interpolation cannot be performed by simply interpolating between disparities of the same disparity value, and difficulty still arises in recognizing the object.
The present invention is made in view of the above described problem, and provides a disparity image generation device, a disparity image generation method, a disparity image generation program, an object recognition device, and an equipment control system for generating a disparity image appropriate for recognizing an object.
To solve the problem as described and achieve the object, one aspect of the present invention includes a valid pixel determination unit configured to determine a valid pixel based on a feature value of each pixel in a captured image; and a validation unit configured to validate a disparity, which is not a valid disparity, near a valid disparity corresponding to the valid pixel in a disparity image corresponding to the captured image.
According to the present invention, a disparity image appropriate for recognizing an object can be generated.
Fig. 1 is a schematic diagram illustrating a schematic configuration of an equipment control system according to a first embodiment. Fig. 2 is a block diagram illustrating a schematic configuration of an imaging unit and an analyzing unit disposed in the equipment control system according to the first embodiment. Fig. 3 is a block diagram illustrating a functional configuration of the analyzing unit according to the first embodiment. Fig. 4 is a block diagram illustrating a functional configuration of a principal part of a disparity arithmetic unit according to the first embodiment. Fig. 5 is a flowchart illustrating processing performed by the disparity arithmetic unit according to the first embodiment. Fig. 6 is a diagram illustrating a specific example of processing performed by the disparity arithmetic unit illustrated in Fig. 5. Fig. 7 is a diagram for explaining a case in which a result of invalidation determination in the processing illustrated in Fig. 5 is true. Fig. 8 is a diagram for explaining a case in which the result of invalidation determination in the processing illustrated in Fig. 5 is false. Fig. 9 is a diagram for explaining a threshold that can prevent a false result of invalidation determination from occurring in the processing illustrated in Fig. 5. Fig. 10 is another diagram for explaining a threshold that can prevent a false result of invalidation determination from occurring in the processing illustrated in Fig. 5. Fig. 11 is a diagram for explaining minimum value processing and exception processing performed by the disparity arithmetic unit. Fig. 12 is a diagram illustrating an example of an arithmetic result of the disparity arithmetic unit according to the first embodiment. Fig. 13 is a diagram illustrating another example of the arithmetic result of the disparity arithmetic unit according to the first embodiment. Fig. 14 is a diagram illustrating an example of a graph of a matching processing result. Fig. 15 is a diagram illustrating another example of the graph of the matching processing result. Fig. 16 is a specific functional block diagram of a valid disparity determination unit. Fig. 17 is a further specific functional block diagram of the valid disparity determination unit. Fig. 18 is a schematic diagram for explaining an operation of setting a valid disparity pair in a valid disparity pair setting unit. Fig. 19 is a functional block diagram of a valid disparity determination unit in an equipment control system according to a second embodiment. Fig. 20 is a schematic diagram for explaining a search range of the valid disparity pair. Fig. 21 is a functional block diagram of a valid disparity determination unit in an equipment control system according to a third embodiment. Fig. 22 is a functional block diagram of a principal part of an equipment control system according to a fourth embodiment. Fig. 23 is a flowchart illustrating a procedure of disparity image generation processing in a disparity image generator in the equipment control system according to the fourth embodiment. Fig. 24 is a diagram for explaining a conventional determination method that erroneously determines a pixel in which erroneous matching occurs to be a valid pixel. Fig. 25 is a diagram for explaining the disparity image generator in the equipment control system according to the fourth embodiment that accurately detects the valid pixel to be output.
The following describes embodiments in detail with reference to the attached drawings.
First Embodiment
Fig. 1 is a schematic diagram illustrating a schematic configuration of an equipment control system according to a first embodiment. As illustrated in Fig. 1, the equipment control system is disposed in a vehicle 1 such as an automobile as an example of equipment. The equipment control system includes an imaging unit 2, an analyzing unit 3, a control unit 4, and a display unit 5.
The imaging unit 2 is disposed near a room mirror on a windshield 6 of the vehicle 1, and takes an image in the traveling direction of the vehicle 1, for example. Various pieces of data including image data obtained through an imaging operation of the imaging unit 2 are supplied to the analyzing unit 3. The analyzing unit 3 analyzes an object to be recognized such as a road surface on which the vehicle 1 is traveling, a vehicle preceding the vehicle 1, a pedestrian, and an obstacle based on the various pieces of data supplied from the imaging unit 2. The control unit 4 gives a warning and the like to a driver of the vehicle 1 via the display unit 5 based on an analysis result of the analyzing unit 3. The control unit 4 supports traveling by controlling various onboard devices, performing steering wheel control or brake control of the vehicle 1, for example, based on the analysis result.
Fig. 2 is a schematic block diagram of the imaging unit 2 and the analyzing unit 3. As illustrated in Fig. 2, the imaging unit 2 has a stereo camera configuration including two imaging units 10A and 10B, for example. The two imaging units 10A and 10B have the same configuration. Specifically, the imaging units 10A and 10B include imaging lenses 11A and 11B, image sensors 12A and 12B in which light receiving elements are two-dimensionally arranged, and controllers 13A and 13B that drive the image sensors 12A and 12B to take an image.
The analyzing unit 3 is an example of an object recognition device, and includes a field-programmable gate array (FPGA) 14, a random access memory (RAM) 15, and a read only memory (ROM) 16. The analyzing unit 3 also includes a serial interface (serial IF) 18 and a data IF 19. The FPGA 14 to the data IF 19 are connected with each other via a data bus line 21 of the analyzing unit 3. The imaging unit 2 and the analyzing unit 3 are connected with each other via the data bus line 21 and a serial bus line 20.
The RAM 15 stores disparity image data and the like generated based on luminance image data supplied from the imaging unit 2. The ROM 16 stores an operation system and various programs such as an object detection program including a disparity image generation program.
The FPGA 14 operates in accordance with the disparity image generation program included in the object detection program. As described later in detail, the FPGA 14 causes one of captured images captured by the imaging units 10A and 10B to be a reference image, and causes the other one thereof to be a comparative image. The FPGA 14 calculates a position shift amount between a corresponding image portion on the reference image and a corresponding image portion on the comparative image, both corresponding to the same point in an imaging region, as a disparity value (disparity image data) of the corresponding image portion.
Specifically, in calculating the disparity from a stereo image captured by the imaging unit 2, the FPGA 14 first calculates many disparities based on block matching, both at pixel positions having an edge and at other portions. Thereafter, the disparity of each pixel having an edge is validated. When the difference between a validated disparity and another validated disparity positioned nearby is equal to or smaller than a predetermined value, the disparities between the two validated disparities are also validated. “Validate” means to specify (or extract) the disparity as information used for the processing of recognizing an object.
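The neighbor-based validation just described, in which the disparities lying between two nearby validated disparities are themselves validated when the two differ by no more than a predetermined value, might be sketched as follows. The function name and the `max_diff`/`max_gap` parameters are illustrative assumptions, not values from the specification.

```python
def validate_between(disparities, edge_valid, max_diff=1, max_gap=5):
    """Given raw disparities and a per-pixel flag marking edge-validated
    disparities, also validate the pixels between two validated disparities
    that are close together (within max_gap pixels) and whose disparity
    values differ by at most max_diff."""
    valid = list(edge_valid)
    idx = [i for i, v in enumerate(edge_valid) if v]
    for a, b in zip(idx, idx[1:]):
        if b - a <= max_gap and abs(disparities[a] - disparities[b]) <= max_diff:
            for i in range(a + 1, b):
                valid[i] = True
    return valid
```

With this rule, the interior of a preceding vehicle (between its left and right vertical edges) gains validated disparities, while two unrelated edges with very different disparity values do not get bridged.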
Thus, the equipment control system according to the first embodiment can appropriately generate disparity information of the preceding vehicle not only at a vehicle edge but also in the vehicle and other spaces. The vehicle 1 can be recognized as one object with correct size and distance, and the vehicle 1 can be prevented from being coupled with another object to be erroneously detected.
A CPU 17 operates based on the operation system stored in the ROM 16, and performs overall imaging control on the imaging units 10A and 10B. The CPU 17 loads the object detection program from the ROM 16, and performs various pieces of processing using the disparity image data written into the RAM 15. Specifically, based on the object detection program, the CPU 17 refers to controller area network (CAN) information such as vehicle speed, acceleration, a steering angle, and a yaw rate acquired from each sensor disposed in the vehicle 1 via the data IF 19, and performs processing of recognizing the object to be recognized such as a road surface, a guardrail, a vehicle, and a person, disparity calculation, calculation of a distance to the object to be recognized, and the like.
The CPU 17 supplies a processing result to the control unit 4 illustrated in Fig. 1 via the serial IF 18 or the data IF 19. The control unit 4 is an example of a control device, and performs, for example, brake control, vehicle speed control, and steering wheel control based on data as the processing result. The control unit 4 causes the display unit 5 to display a warning and the like based on the data as the processing result. This configuration can support driving of the vehicle 1 by the driver.
The following specifically describes an operation of generating the disparity image and an operation of recognizing the object to be recognized in the equipment control system according to the first embodiment.
First, the imaging units 10A and 10B of the imaging unit 2 constituting the stereo camera generate luminance image data. Specifically, when the imaging units 10A and 10B have color specifications, each of the imaging units 10A and 10B performs an arithmetic operation of Y = 0.3R + 0.59G + 0.11B. Due to this, color luminance conversion processing is performed for generating a luminance (Y) signal from each signal of RGB (red, green, and blue).
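The color luminance conversion above is a single weighted sum per pixel; as a minimal sketch:

```python
def rgb_to_luminance(r, g, b):
    # Y = 0.3R + 0.59G + 0.11B, the color-to-luminance conversion
    # performed by the imaging units.
    return 0.3 * r + 0.59 * g + 0.11 * b
```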
Each of the imaging units 10A and 10B converts the luminance image data generated in the color luminance conversion processing into an ideal parallel stereo image obtained when two pinhole cameras are mounted in parallel. Specifically, each of the imaging units 10A and 10B converts each pixel in the luminance image data using a calculation result of a distortion amount of each pixel calculated by using a polynomial expression as follows: Δx = f(x, y), Δy = g(x, y). The polynomial expression is, for example, based on a quintic polynomial expression regarding x (a horizontal direction position of the image) and y (a vertical direction position of the image). Accordingly, a parallel luminance image can be obtained in which distortion of an optical system in the imaging units 10A and 10B is corrected. Such luminance images (a right captured image and a left captured image) are supplied to the FPGA 14 of the analyzing unit 3.
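The distortion correction can be sketched as evaluating the polynomial shifts Δx = f(x, y) and Δy = g(x, y) per pixel and offsetting the sampling position. The coefficient representation below is a placeholder for illustration, not real calibration data, and the helper name is an assumption (the specification uses quintic polynomials in x and y).

```python
def correct_pixel(x, y, fx_coeffs, fy_coeffs):
    """Return the source position to sample in the uncorrected image.
    fx_coeffs and fy_coeffs map (i, j) exponent pairs to coefficients,
    i.e. each polynomial is sum(c * x**i * y**j)."""
    def poly(coeffs, x, y):
        return sum(c * (x ** i) * (y ** j) for (i, j), c in coeffs.items())
    return x + poly(fx_coeffs, x, y), y + poly(fy_coeffs, x, y)
```

Applying this to every pixel of both cameras yields the parallel (rectified) luminance images described above.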
Fig. 3 is a functional block diagram of each function that is implemented when the FPGA 14 executes the object detection program stored in the ROM 16 in the equipment control system according to the first embodiment. As illustrated in Fig. 3, the FPGA 14 implements a captured image corrector 31, a disparity arithmetic unit 32, a disparity image generator 33, and a recognition processor 34 by executing the object detection program.
The captured image corrector 31 performs correction such as gamma correction and distortion correction (parallelization of left and right captured images) on a left captured image and a right captured image. The disparity arithmetic unit 32 calculates a disparity value d from the left and right captured images corrected by the captured image corrector 31. Details about the disparity arithmetic unit 32 will be described later. The disparity image generator 33 generates a disparity image using the disparity value d calculated by the disparity arithmetic unit 32. The disparity image represents a pixel value corresponding to the disparity value d calculated for each pixel on the reference image as a pixel value of each pixel. The recognition processor 34 recognizes an object preceding the vehicle and generates recognition data as a recognition result using the disparity image generated by the disparity image generator 33.
Part or all of the captured image corrector 31 to the recognition processor 34 may be implemented as hardware such as an integrated circuit (IC). The object detection program may be recorded and provided in a computer-readable recording medium such as a compact disc read only memory (CD-ROM), a flexible disk (FD), a compact disc recordable (CD-R), a DVD, a Blu-ray Disc (registered trademark), and a semiconductor memory, as an installable or executable file. The DVD is an abbreviation for a “digital versatile disc”. The object detection program may be provided to be installed via a network such as the Internet. The object detection program may be embedded and provided in a ROM in a device, for example.
Outline of disparity arithmetic unit
The disparity arithmetic unit 32 assumes the luminance image data of the imaging unit 10A as reference image data, assumes the luminance image data of the imaging unit 10B as comparative image data, and generates disparity image data representing the disparity between the reference image data and the comparative image data. Specifically, the disparity arithmetic unit 32 defines a block including a plurality of pixels (for example, 16 pixels × 1 pixel) around one pixel of interest in a predetermined “row” of the reference image data. In the same “row” of the comparative image data, a block having the same size as the defined block of the reference image data is shifted in the horizontal line direction (X-direction) pixel by pixel. The disparity arithmetic unit 32 calculates a correlation value representing the correlation between a feature value indicating a feature of the pixel values of the defined block in the reference image data and a feature value indicating a feature of the pixel values of each block in the comparative image data.
The disparity arithmetic unit 32 performs matching processing of selecting a block in the comparative image data that is most correlated with the block in the reference image data from among blocks in the comparative image data based on the calculated correlation value. Thereafter, the disparity arithmetic unit 32 calculates, as the disparity value d, a position shift amount between the pixel of interest of the block in the reference image data and the corresponding pixel of the block in the comparative image data selected in the matching processing. By performing such processing of calculating the disparity value d on the entire region or a specific region of the reference image data, the disparity image data is obtained.
As the feature value of the block used for matching processing, for example, a value of each pixel (luminance value) in the block can be used. As the correlation value, the sum total of absolute values of differences between values of the respective pixels (luminance values) in the block in the reference image data and values of the respective pixels (luminance values) in the block in the comparative image data corresponding to the former pixels can be used, for example. In this case, a block in which the sum total is the smallest is detected as a most correlated block.
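As a simplified illustration of the matching processing described above, the following sketch searches a single scanline using the sum of absolute differences of luminance values as the correlation value and returns the shift with the smallest sum as the disparity. The function name, the leftward shift direction, and the parameter defaults (a 16 × 1 block, a 68-pixel search range taken from the flowchart description later in this document) are assumptions for illustration.

```python
def block_match_disparity(ref_line, cmp_line, x, block=16, max_d=68):
    """Return the disparity of the pixel at column x of the reference line:
    the shift d whose candidate block in the comparative line has the
    smallest sum of absolute differences (SAD) against the reference block."""
    ref_block = ref_line[x:x + block]
    best_d, best_cost = 0, float("inf")
    for d in range(1, max_d + 1):
        if x - d < 0:  # candidate block would fall outside the image
            break
        cand = cmp_line[x - d:x - d + block]
        cost = sum(abs(a - b) for a, b in zip(ref_block, cand))  # SAD
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```

Running this for every pixel of a row, and every row of the reference image, produces the disparity image data described above.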
Configuration of disparity arithmetic unit
Fig. 4 is a block diagram illustrating a functional configuration of a principal part of the disparity arithmetic unit 32. As illustrated in the drawing, the disparity arithmetic unit 32 includes an information processor 440 and an information storage unit 450, which can communicate with each other.
The information processor 440 includes a non-similarity calculator 441, an inclination calculator 442, a local minimum value detector 443, a threshold setting unit 444, a flag controller 445, a counter controller 446, and a validity determination unit 447. The information storage unit 450 includes a non-similarity register 451, an inclination register 452, a threshold register 453, a flag register 454, and a local minimum value counter 455. The validity determination unit 447 performs an operation including that of an invalidation unit.
The non-similarity calculator 441, as an example of an evaluation value calculator, calculates a non-similarity as an evaluation value of the correlation between the reference image and the comparative image (an evaluation value of matching) using, for example, the zero-mean sum of squared differences (ZSSD) method disclosed in a reference (Japanese Laid-open Patent Publication No. 2013-45278), and writes the non-similarity into the non-similarity register 451.
In place of the zero-mean sum of squared differences (ZSSD) method, a sum of squared differences (SSD) method, a sum of absolute differences (SAD) method, or a zero-mean sum of absolute differences (ZSAD) method may be used. When a disparity value at the sub-pixel level, smaller than one pixel, is required in the matching processing, an estimation value is used. As an estimation method, for example, an equiangular linear method or a quadratic curve method can be used. However, an error occurs in the estimated disparity value at the sub-pixel level, so an estimation error correction (EEC) method or the like may be used to reduce the estimation error.
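The ZSSD evaluation value and the equiangular (V-shaped line) sub-pixel estimation mentioned above can be sketched as follows; these are simplified illustrations, and the exact implementation in the specification may differ.

```python
def zssd(block_a, block_b):
    """Zero-mean sum of squared differences: subtract each block's mean
    before summing squared differences, which cancels a uniform brightness
    offset between the two cameras."""
    ma = sum(block_a) / len(block_a)
    mb = sum(block_b) / len(block_b)
    return sum(((a - ma) - (b - mb)) ** 2 for a, b in zip(block_a, block_b))

def equiangular_subpixel(c_prev, c_min, c_next):
    """Equiangular-line estimate of the sub-pixel offset around the integer
    shift with minimum cost c_min; returns a value in (-0.5, 0.5)."""
    if c_prev >= c_next:
        return 0.5 * (c_prev - c_next) / (c_prev - c_min)
    return 0.5 * (c_prev - c_next) / (c_next - c_min)
```

The integer disparity from block matching plus the sub-pixel offset gives the final disparity value at the sub-pixel level.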
Next, the inclination calculator 442 calculates an inclination of non-similarity from a difference value of non-similarity at adjacent shift positions in a case in which the comparative image is shifted with respect to the reference image, and writes the inclination into the inclination register 452. The local minimum value detector 443 is an example of an extreme value detector, and detects the local minimum value of non-similarity as an extreme value of the evaluation value of correlation based on the fact that the inclination value calculated by the inclination calculator 442 is changed from negative to positive.
The threshold setting unit 444 is an example of an updater. When a value held by the flag register 454 is “0” (when a flag is off), the threshold setting unit 444 generates an upper threshold Uth and a lower threshold Lth as set values for a range of the local minimum value above and below the local minimum value based on the local minimum value detected by the local minimum value detector 443, and writes the upper threshold Uth and the lower threshold Lth into the threshold register 453. At this point, the flag controller 445 writes a value “1” indicating that the upper threshold Uth and the lower threshold Lth are updated into the flag register 454. The counter controller 446 as an example of a counter counts up the value of the local minimum value counter 455. The value of the local minimum value counter 455 represents the number of minimum values of non-similarity within a range of the threshold held by the threshold register 453.
When the value held by the flag register 454 is “1” (when the flag is on) and the local minimum value detected by the local minimum value detector 443 is within the range of the threshold held by the threshold register 453, the counter controller 446 counts up the value of the local minimum value counter 455.
The counter controller 446 includes a resetting unit. When the value held by the flag register 454 is “1”, and the inclination calculated by the inclination calculator 442 is kept negative and the non-similarity calculated by the non-similarity calculator 441 becomes lower than the lower threshold Lth held by the threshold register 453, the counter controller 446 resets the value of the local minimum value counter 455. In this case, the flag controller 445 writes “0” into the flag register 454, and resets the flag.
Processing of disparity arithmetic unit
Fig. 5 is a flowchart illustrating processing performed by the disparity arithmetic unit 32 illustrated in Fig. 4. Fig. 6 is a diagram illustrating a specific example of the processing illustrated in Fig. 5. The horizontal axis in Fig. 6 indicates a search range, that is, a shift amount (deviation) of a pixel position in the comparative image with respect to a pixel position in the reference image, and the vertical axis indicates the non-similarity as the evaluation value for matching. The following describes an operation performed by the disparity arithmetic unit 32 with reference to these drawings.
The flowchart illustrated in Fig. 5 is performed for each pixel of the reference image. The search range of the comparative image with respect to a pixel in the reference image is 1 to 68 pixels. At the time the flowchart is started, no data has been written into the non-similarity register 451, the inclination register 452, or the threshold register 453. “0” is set in the flag register 454, and the initial value of the local minimum value counter 455 is “0”. In the explanation of this procedure, the value (flag) held by the flag register 454 is represented by C, and the count value of the local minimum value counter 455 is represented by cnt.
When matching data: data(t) representing the non-similarity is input by the non-similarity calculator 441 (Step S301), a numerical value of t is determined (Step S302). At first, t = 1 (t = 1 at Step S302), so that data(1) is written into the non-similarity register 451 to be held (Step S306), and whether t is the last value, that is, whether t is the last of the search range is determined (Step S307). Here, t = 1 is not the last value of the search range (No at Step S307), so that t is incremented to satisfy t = 2 (Step S312), and the process proceeds to Step S301.
In this case, the process proceeds in order of Step S301, Step S302, and Step S303, and the inclination calculator 442 calculates the inclination between data(1) and data(2) (Step S303). The inclination is calculated from the difference between the two pieces of data, “data(2) - data(1)”. Next, the numerical value of t is determined (Step S304). In this case, t = 2 (t = 2 at Step S304), so that the sign of the inclination is written into the inclination register 452 to be held (Step S305).
Next, the data(2) is written into the non-similarity register 451 to be held (Step S306), whether t is the last value is determined (Step S307), t is incremented to 3 (Step S312) based on a determination result (No at Step S307), and the process proceeds to Step S301.
In this case, the processes at Step S301, Step S302, and Step S303 are the same as those in the former case (when t = 2). However, t = 3 in this case, so that the process proceeds from Step S304 to Step S313, and whether the inclination is changed from negative to positive is determined. This determination processing is performed by the local minimum value detector 443.
When “data(2) - data(1)” is negative and “data(3) - data(2)” is positive, it is determined that the inclination is changed from negative to positive (Yes at Step S313). In this case, the held data(t-1), that is, data(2), is determined to be the local minimum value (Step S317).
When data(2) is determined to be the local minimum value at Step S317, whether no other local minimum value has been detected yet, that is, whether C = 0 is satisfied, is determined (Step S318). When C = 0 is satisfied (Yes at Step S318), the threshold setting unit 444 updates the upper threshold Uth and the lower threshold Lth held by the threshold register 453, and the flag controller 445 sets C = 1 to set the flag (Step S320).
Data(1) to data(3) in Fig. 6 correspond to the processes described above. That is, data(2) is determined to be the local minimum value at Step S317, and the upper threshold Uth1 and the lower threshold Lth1 are set (as the initial setting in this case) above and below the local minimum value at Step S320. The upper threshold Uth and the lower threshold Lth are assumed to be “data(2) + predetermined value” and “data(2) - predetermined value”, respectively. In this case, the circuit configuration can be simplified by employing “predetermined value = 2^n”.
After Step S320, the counter controller 446 counts up the value of the local minimum value counter 455 (Step S321). In this case, the count value is “1”. The count value represents the number of local minimum values within a range of the threshold (equal to or smaller than the upper threshold Uth1, and equal to or larger than the lower threshold Lth1) set at Step S320.
After Step S321, the process proceeds in order of Step S305, Step S306, and Step S307: the sign of the inclination (positive in this case) and the matching data (data(3) in this case) are held, whether t is the last is determined (not the last in this case, so No at Step S307), t is incremented to 4 (Step S312), and the process proceeds to Step S301.
Also in this case in which t = 4 is satisfied (the same applies to the following numbers to the last of the search range, that is, t = 68), the process proceeds in order of Step S301, Step S302, Step S303, Step S304, and Step S313, and whether the inclination is changed from negative to positive is determined to detect the local minimum value (Step S313). The case in which the inclination is changed from negative to positive (Yes at Step S313) has been described above (when t = 3), so that the following describes another case (No at Step S313).
In this case, it is determined whether the inclination is kept negative (Step S314). If the inclination is not kept negative (No at Step S314), the process proceeds in order of Step S305, Step S306, and Step S307. Content of the processes at Step S305, Step S306, and Step S307 and subsequent steps is the same as that in the case of t = 3.
In the case of Fig. 6, when t = 4, the process proceeds as follows: No at Step S313, No at Step S314, Step S305, Step S306, Step S307, and Step S312. Thus, the sign of the inclination (negative in this case) and data(4) are held, t is incremented to 5, and the process proceeds to Step S301.
Also in this case, the process proceeds in order of Step S301, Step S302, Step S303, Step S304, and Step S313, and it is determined whether the inclination is changed from negative to positive (Step S313). The cases of Yes at Step S313 and No at Step S313 to No at Step S314 have been already described for the cases of t = 3 and t = 4, respectively, so that the following describes a case of No at Step S313 to Yes at Step S314.
In this case, it is determined whether a local minimum value has been previously detected and the data(t) (data(5) in this case) has become lower than the lower threshold (Step S315). In the case of No at Step S315, that is, when no local minimum value has been previously detected, or when the data(t) is not lower than the lower threshold even though a local minimum value has been detected, the process proceeds to Step S305.
In the case of Yes at Step S315, that is, when a local minimum value has been previously detected and the data(t) has become lower than the lower threshold, the process proceeds to Step S316. At Step S316, the flag controller 445 sets C = 0 to reset the flag, and the counter controller 446 resets the local minimum value counter 455 to “0”.
In the case of Fig. 6, when t = 5, the inclination is kept negative, so that the process proceeds as follows: No at Step S313, Yes at Step S314, Yes at Step S315, Step S316, Step S305, Step S306, Step S307, and Step S312. Accordingly, the sign of the inclination (negative in this case) and the matching data (data(5) in this case) are held, whether t is the last is determined (not the last in this case, so No at Step S307), t is incremented to 6 (Step S312), and the process proceeds to Step S301.
Also in this case, the process proceeds in order of Step S301, Step S302, Step S303, Step S304, and Step S313, and it is determined whether the inclination is changed from negative to positive. In a case of Fig. 6, similarly to the case of t = 5, the inclination is kept negative in a case of t = 6, so that the steps through which the process passes after Step S313 are the same as those in the case of t = 5.
Subsequently, also in the case of t = 7, similarly to the cases of t = 5 and t = 6, the inclination is kept negative, so that the steps through which the process passes after Step S313 are the same as those in the case of t = 5. In the case of t = 8, similarly to the case of t = 3, the inclination is changed from negative to positive, so that the process proceeds from Step S317 to Step S318. Because C was set to 0 when t = 5, the determination at Step S318 is Yes, and the process proceeds in order of Step S320, Step S321, and Step S305.
In this case, at Step S320, the upper threshold Uth1 and the lower threshold Lth1 are updated to be an upper threshold Uth2 and a lower threshold Lth2, respectively, as illustrated in Fig. 6. At this point, the upper threshold Uth2 and the lower threshold Lth2 are “data(7) + predetermined value” and “data(7) - predetermined value”, respectively. The count value (= 1) counted up at Step S321 represents the number of local minimum values within a range of the updated threshold that is updated at Step S320 (equal to or smaller than the upper threshold Uth2, and equal to or larger than the lower threshold Lth2).
In the case of Fig. 6, the inclination is changed from positive to negative when t = 9, so that the process proceeds as follows: No at Step S313, No at Step S314, and Step S305. The inclination is changed from negative to positive when t = 10, so that the process proceeds from Step S317 to Step S318. Because C was set to 1 when t = 8, the determination at Step S318 is No, and the process proceeds to Step S319.
At Step S319, it is determined whether data(9), the local minimum value determined at Step S317, is within the range between the lower threshold (Lth2) and the upper threshold (Uth2). If data(9) is within the range (Yes at Step S319), the local minimum value counter 455 is counted up (Step S321), and the process proceeds to Step S305. If data(9) is out of the range (No at Step S319), the process directly proceeds to Step S305.
In the case of Fig. 6, data(9) is within the range between the lower threshold (Lth2) and the upper threshold (Uth2), so that the counter is counted up and the value of the local minimum value counter 455 becomes “2”. The count value “2” means that there are two local minimum values within the range of the latest thresholds (the upper threshold Uth2 and the lower threshold Lth2 in this case).
Subsequently, the processes are repeated in order of Step S301 → Step S302 → Step S303 → Step S304 → Step S313 until the last t (68 in this case) of the search range is reached. When the last t is reached (Yes at Step S307), the counter controller 446 outputs the count value of the local minimum value counter 455 (Step S308).
Next, the validity determination unit 447 determines whether the count value is equal to or larger than a predetermined value (for example, 2) (Step S309). If the count value is equal to or larger than the predetermined value (Yes at Step S309), the validity determination unit 447 determines that the disparity value is invalid (Step S310), and sets a flag so that the recognition processor 34 does not use the disparity value of the pixel in the reference image (Step S311).
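The single-pass counting procedure described above can be sketched in software as follows. This is an illustrative sketch, not the circuit of Fig. 4: the function name, the fixed offset k used for the thresholds, and the return convention are assumptions. A local minimum is detected when the inclination changes from negative to positive; the first one sets the thresholds and the flag C, later minima inside the band are counted, and a fall below the lower threshold while the inclination is kept negative resets both the flag and the counter.

```python
def count_local_minima(data, k):
    """Count local minima of the non-similarity within a band of
    +/- k around the most recent threshold-setting local minimum.

    Returns (cnt, C): the value of the local minimum value counter
    and the final flag value."""
    uth = lth = None   # upper / lower thresholds (Uth, Lth)
    c = 0              # flag register value C
    cnt = 0            # local minimum value counter
    prev_slope = None
    for t in range(1, len(data)):
        slope = data[t] - data[t - 1]          # inclination (S303)
        if prev_slope is not None:
            if prev_slope < 0 and slope > 0:   # negative -> positive (S313)
                if c == 0:                     # first minimum: set band (S320)
                    uth, lth = data[t - 1] + k, data[t - 1] - k
                    c = 1
                    cnt += 1                   # count it (S321)
                elif lth <= data[t - 1] <= uth:
                    cnt += 1                   # minimum inside band (S319, S321)
            elif prev_slope < 0 and slope < 0: # inclination kept negative (S314)
                if c == 1 and data[t] < lth:   # fell below Lth (S315)
                    c = 0                      # reset flag and counter (S316)
                    cnt = 0
        prev_slope = slope
    return cnt, c
```

With data shaped like Fig. 6 (a shallow first minimum, a descent below its lower threshold, then two deep minima of similar depth), the function returns a count of 2, which would be judged invalid at Step S309 with a predetermined value of 2.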
Fig. 14 illustrates a graph of an example of a matching processing result. In Fig. 14, the horizontal axis indicates the search range, that is, the shift amount (deviation) of the pixel position in the comparative image with respect to the pixel position in the reference image, and the vertical axis indicates the non-similarity as the evaluation value for correlation. In Fig. 14, the non-similarity is the smallest for the seventh search pixel, enclosed in a circle, so that 7 is the most probable disparity value. Negative values on the horizontal axis belong to the part of the search range used for obtaining a sub-pixel disparity.
However, in the process of calculating the disparity of an object having a repetitive pattern in its external appearance, such as a building on which windows of the same design are lined up, a tile wall on which the same shapes and figures are lined up, a fence, the load-carrying platform of a truck, or the load-carrying platform of a trailer, two or more (six in the example of Fig. 15) matched portions may appear as illustrated in Fig. 15, so that the most probable disparity value may be erroneously output.
As a result, an erroneous disparity value may actually be output (erroneous matching), indicating that the object having the repetitive pattern, present at a distant position, is positioned nearby. Specifically, when the distance between the own vehicle and a tile wall on which the same shapes and figures are lined up is 5 m, the disparity value for a distance of 5 m and the disparity value for a distance of 2 m are output mixed together. Due to this, in the object recognition processing at the rear stage, one wall is recognized as two walls: a wall at a distance of 2 m from the own vehicle and a wall at a distance of 5 m from the own vehicle. The brake may then be operated although the distance between the tile wall and the own vehicle is 5 m, which is called “erroneous braking”.
In contrast, the disparity arithmetic unit 32 does not wait until calculation of the non-similarity over the entire search range is finished before searching for the number of close non-similarity values and the most probable disparity value. The disparity arithmetic unit 32 counts the number of local minimum values of the non-similarity while searching for the disparity. When a local minimum value of the non-similarity falls out of a predetermined range, the disparity arithmetic unit 32 updates the predetermined range and counts the number of local minimum values of the non-similarity in the updated range. Due to this, when a repetitive pattern appears, the time until the disparity arithmetic unit 32 determines whether to use the disparity of the repetitive pattern for the object recognition processing can be shortened without increasing the processing time. Thus, with the equipment control system according to the first embodiment including the disparity arithmetic unit 32, “erroneous braking” can be suppressed.
In the flowchart illustrated in Fig. 5, the search is performed in ascending order of t; the search may also be performed in descending order of t. In the flowchart illustrated in Fig. 5, when the local minimum value is first detected, the upper threshold and the lower threshold are set in accordance with that local minimum value. Alternatively, arbitrary upper and lower thresholds may be set initially at the time when the procedure is started. In the flowchart illustrated in Fig. 5, the non-similarity, whose value decreases as the correlation increases, is used as the evaluation value for correlation. A similarity, whose value increases as the correlation increases, may also be used.
Details about upper threshold and lower threshold
In Fig. 5 and Fig. 6 described above, the upper threshold Uth1 and the lower threshold Lth1 are “data(2) + predetermined value” and “data(2) - predetermined value”, respectively, and the upper threshold Uth2 and the lower threshold Lth2 are “data(7) + predetermined value” and “data(7) - predetermined value”, respectively. That is, the upper threshold Uth and the lower threshold Lth are set using the expressions “newly detected local minimum value + predetermined value” and “newly detected local minimum value - predetermined value”, respectively. Hereinafter, the upper threshold and the lower threshold calculated by these expressions are referred to as the first upper threshold and the first lower threshold, respectively.
Next, the following describes a case in which a result of invalidation determination (Step S310) in the processing illustrated in Fig. 5 is true, and a case in which the result is false. The following also describes a second upper threshold and a second lower threshold serving as an upper threshold and a lower threshold by which the occurrence of the false result can be reduced.
Case in which result of invalidation determination is true
Fig. 7 is a diagram for explaining a case in which a result of invalidation determination in the processing illustrated in Fig. 5 is true. The horizontal axis and the vertical axis in Fig. 7 and Fig. 8 to Fig. 13 (described later) are the same as those in Fig. 6, and indicate the search range and the non-similarity, respectively.
Fig. 7 illustrates a matching processing result in a case in which the image in the search range has a repetitive pattern with abundant texture. The amplitude of the non-similarity (for example, ZSSD) is large due to the abundant texture. In the case of Fig. 7, Uth (first upper threshold) and Lth (first lower threshold) are set to “data(ta) + k (predetermined value)” and “data(ta) - k (predetermined value)”, respectively, with respect to data(ta) as the local minimum value, and three local minimum values within the threshold range are counted. Based on this correct count value, a true determination result (invalid) can be obtained.
Case in which result of invalidation determination is false
Fig. 8 is a diagram for explaining a case in which the result of invalidation determination in the processing illustrated in Fig. 5 is false.
Fig. 8 illustrates a matching processing result in a case in which the image in the search range has no repetitive pattern and has little texture. The amplitude of the non-similarity is small due to the scarce texture. In the case of Fig. 8, the minimum value, data(tc), is present at one point. Accordingly, although the correct disparity value tc could be obtained, five local minimum values are counted within the range of Uth (first upper threshold) and Lth (first lower threshold) set above and below data(tb), a local minimum value that is not the minimum value. Invalidation is then erroneously determined based on this count value.
Regarding second upper threshold and second lower threshold
Fig. 9 and Fig. 10 are diagrams for explaining a second threshold that can prevent a false result of invalidation determination from occurring in the processing illustrated in Fig. 5. Fig. 9 and Fig. 10 illustrate matching processing results in cases in which the image in the search range is the same as that in Fig. 7 and Fig. 8, respectively.
The second upper threshold and the second lower threshold are set to values corresponding to the newly detected local minimum value. That is, in the case of Fig. 9 for example, Uth (second upper threshold) and Lth (second lower threshold) are set to be “data(ta) × Um” and “data(ta) × Lm”, respectively, with respect to the data(ta) as the local minimum value. In this case, Um and Lm are coefficients representing a ratio. Values of Um and Lm satisfy “Um > 1 > Lm”, and may be any values so long as an updated upper threshold is smaller than the lower threshold before updating. In the case of Fig. 9, similarly to the case of Fig. 7, three local minimum values within the range of threshold are counted.
In the case of Fig. 10, Uth (second upper threshold) and Lth (second lower threshold) are set to be “data(tc) × Um” and “data(tc) × Lm”, respectively, with respect to the data(tc) as the smallest local minimum value. In the case of Fig. 10, unlike the case of Fig. 8, the count value of the local minimum value is “1”, so that a correct disparity value tc is employed.
In this way, by setting the upper threshold and the lower threshold to values corresponding to the newly detected local minimum value, the probability of counting only the smallest local minimum value present at one point can be increased when the image in the search range has no repetitive pattern and has little texture. That is, by changing the first upper threshold and the first lower threshold in the repetitive pattern detection algorithm illustrated in Fig. 5 into the second upper threshold and the second lower threshold, respectively, it is possible to prevent a situation in which invalidation is determined although a correct disparity value is obtained.
In Fig. 9 and Fig. 10, the upper threshold and the lower threshold corresponding to the local minimum value are calculated by multiplying the local minimum value by a coefficient. Alternatively, k in Fig. 7 and Fig. 8 may be changed depending on the local minimum value instead of being fixed at a predetermined value.
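The difference between the two threshold schemes can be stated compactly as follows. This sketch is illustrative; the coefficient values Um = 1.25 and Lm = 0.75 used below are assumptions, chosen only to satisfy Um > 1 > Lm.

```python
def thresholds_fixed(local_min, k):
    # First upper/lower thresholds: a fixed offset k around the
    # newly detected local minimum (Fig. 7 and Fig. 8).
    return local_min + k, local_min - k

def thresholds_ratio(local_min, um, lm):
    # Second upper/lower thresholds: proportional to the newly
    # detected local minimum, with Um > 1 > Lm (Fig. 9 and Fig. 10).
    return local_min * um, local_min * lm
```

For a deep minimum (abundant texture) both schemes behave similarly, but for a shallow minimum (little texture) the ratio-based band shrinks with the minimum: `thresholds_fixed(8, 8)` gives the wide band (16, 0), while `thresholds_ratio(8, 1.25, 0.75)` gives the narrow band (10.0, 6.0), making it less likely that spurious shallow minima are counted.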
Minimum value processing and exception processing
Fig. 11 is a diagram for explaining minimum value processing and exception processing performed by the disparity arithmetic unit 32.
As a basis of disparity value calculation, calculating the disparity value at which the non-similarity such as ZSSD takes the minimum value is a prerequisite. Therefore, in addition to the algorithm illustrated in Fig. 5 for counting the number of local minimum values within the range of the upper and lower thresholds at the minimum level, the pure minimum value and the disparity value corresponding to it need to be successively processed and searched for.
To cope with a case in which the minimum value of the non-similarity is at the last position in the search range as illustrated in the upper graph (A) in Fig. 11, or a case in which the minimum value of the non-similarity is at the first position in the search range as illustrated in the lower graph (B) in Fig. 11, the pure minimum value is successively processed, and when the minimum value is smaller than the finally updated lower threshold Lth, the disparity that gives the minimum value is output. In this case, after the algorithm illustrated in Fig. 5 for the 68 pieces of non-similarity data is finished, invalidation determination is forcibly performed as exception processing.
That is, in the case of (A) in Fig. 11 for example, when the algorithm illustrated in Fig. 5 is finished, the local minimum value counter 455 has been reset to 0 at Step S316 based on data(68) as the minimum value. However, invalidation determination is forcibly performed. To obtain a sub-pixel disparity, the search range t indicated by the horizontal axis of (A) in Fig. 11 is -2 to 65, and data(65) on the right end is the minimum value.
For example, in the case of (B) in Fig. 11, when the algorithm illustrated in Fig. 5 is finished, the count value of the local minimum value counter 455 is “3”. However, data(1), which is smaller than Lth, is present, so that invalidation determination is finally forcibly performed. To obtain the sub-pixel disparity, the search range t indicated by the horizontal axis of (B) in Fig. 11 is -2 to 65, and data(-2) on the left end is the minimum value.
The minimum value processing and the exception processing are summarized as the following (i) to (iii).
(i) When the non-similarity at an end of the search range is the minimum value, and the disparity value (the value of the search range t) at which the minimum value is detected is negative, invalidation is forcibly determined regardless of the count value of the local minimum value counter 455.
(ii) When the non-similarity at an end of the search range is included in the finally determined threshold range, the local minimum value counter 455 is counted up. For example, when only the left end is included in the finally determined threshold range, the output count value is counted up by 1; when only the right end is included, the output count value is counted up by 1; and when both the left end and the right end are included, the output count value is counted up by 2.
(iii) In the cases of monotonic increase and monotonic decrease, no local minimum value is detected and the count value remains 0. However, invalidation is forcibly determined.
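Rules (i) and (iii) above can be sketched as a final check run after the counting pass; rule (ii), which adjusts the count for endpoints inside the final threshold band, is omitted here for brevity. The function name, the search_origin parameter (the disparity of the first sample, negative for the sub-pixel portion), and the return convention are assumptions, not the patent's implementation.

```python
def forced_invalid(data, search_origin=-2):
    """Return True when the pixel must be forcibly invalidated by
    exception rule (i) or (iii)."""
    # (i) minimum at an end of the search range, detected at a
    # negative disparity value
    t_min = min(range(len(data)), key=lambda i: data[i])
    at_end = t_min in (0, len(data) - 1)
    if at_end and (t_min + search_origin) < 0:
        return True
    # (iii) monotonic increase or decrease: no local minimum exists,
    # the count stays 0, but the result is still invalid
    diffs = [b - a for a, b in zip(data, data[1:])]
    if all(d > 0 for d in diffs) or all(d < 0 for d in diffs):
        return True
    return False
```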
Arithmetic result of disparity arithmetic unit
Fig. 12 is a diagram illustrating a first example of an arithmetic result of the disparity arithmetic unit 32, and Fig. 13 is a diagram illustrating a second example thereof. In these figures, the horizontal axis indicates the search range, and the non-similarity indicated by the vertical axis is ZSSD calculated using a block of 7 pixels × 7 pixels. A negative portion of the search range is used for obtaining the sub-pixel disparity.
Fig. 12 is obtained by calculating disparity values of a captured image of a window of a building. The upper threshold and the lower threshold are finally updated values (set in accordance with the local minimum value of the 8th pixel in the search range, in this case). The number of local minimum values in this threshold range is 4, and invalidation is determined by the validity determination unit 447.
Fig. 13 is obtained by calculating disparity values of a captured image of a tile wall. The upper threshold and the lower threshold are finally updated values (set in accordance with the local minimum value of the 23rd pixel in the search range, in this case). The number of local minimum values in this threshold range is 2, and invalidation is determined by the validity determination unit 447.
To detect the minimum value of the non-similarity in the search range based on ZSSD or the like, for example, the smallest non-similarity is first detected from among the first, second, and third non-similarities, and that smallest non-similarity is then compared with the fourth and subsequent non-similarities in turn. This detection processing may be performed for each pixel so that the minimum value of the non-similarity within the predetermined threshold is detected. In this case, the load of the arithmetic processing of the non-similarity on the FPGA 14 can be reduced.
Alternatively, for example, after the 64 non-similarities in the search range are once stored in a memory, the non-similarity indicating the minimum value may be detected, and the number of non-similarities included within the threshold range determined based on that minimum value may then be counted.
Operation of disparity image generator
Next, the following describes an operation of the disparity image generator 33. As illustrated in Fig. 16, the disparity image generator 33 includes an edge validation unit 103, a pair position calculator 104, and an intra-pair disparity validation unit 105. The edge validation unit 103 is an example of a valid pixel determination unit to which the disparity value d (disparity image) calculated by the disparity arithmetic unit 32 and the luminance image generated by the captured image corrector 31 are supplied. When the luminance image and the disparity image are input, the edge validation unit 103 determines, to be an edge pixel, a pixel in which an amount of an edge component is equal to or larger than a predetermined component amount in the luminance image, and validates the disparity value at the pixel position. The amount of edge component in the luminance image is an example of a feature value. The edge pixel is a valid pixel, and the disparity corresponding to the edge pixel on the disparity image is a valid disparity.
The pair position calculator 104 is an example of a calculator, assumes two adjacent valid disparities on the same line in the disparity image as a valid disparity pair, and calculates a distance difference in a depth direction in a real space and an interval in a horizontal direction (positional relation) of the disparities. The pair position calculator 104 then determines whether the distance difference is within the predetermined threshold range, and whether the interval in the horizontal direction is within another predetermined threshold range in accordance with the disparity value of the valid disparity pair.
The intra-pair disparity validation unit 105 is an example of a validation unit, and validates the intra-pair disparity of the valid disparity pair (the disparity present between the valid disparity pair) when both the distance difference and the interval in the horizontal direction determined by the pair position calculator 104 are within the threshold ranges. An intra-pair disparity to be validated has a value in the vicinity of the two disparity values of the valid disparity pair.
Fig. 17 illustrates a more detailed functional block diagram of the disparity image generator 33. In Fig. 17, as described above, a valid disparity determination unit 102 includes the edge validation unit 103, the pair position calculator 104, and the intra-pair disparity validation unit 105.
The edge validation unit 103 includes an edge amount calculator 106 and a comparator 107. The pair position calculator 104 includes a valid disparity pair setting unit 108, a pair interval calculator 109, a comparator 110, a pair depth difference calculator 111, a comparator 112, and a parameter memory 113. The intra-pair disparity validation unit 105 includes a validation determination unit 114 and a valid disparity determination unit 115. The valid disparity pair setting unit 108 is an example of a pair setting unit. Each of the pair interval calculator 109 and the pair depth difference calculator 111 is an example of a calculator. The valid disparity determination unit 115 is an example of a validation unit.
To the edge amount calculator 106 of the edge validation unit 103, a luminance image within a predetermined processing target range in an input image is input. The edge amount calculator 106 calculates an edge amount from the luminance image. As a method for calculating the edge amount, for example, a Sobel filter or a secondary differential filter can be used. Considering reduction of hardware and characteristics of block matching processing, a difference between pixels on both ends on the same line as the pixel of interest may be used.
The comparator 107 of the edge validation unit 103 compares the absolute value of the edge amount calculated by the edge amount calculator 106 with an edge amount threshold determined in advance, and supplies the comparison output as a valid disparity flag to the valid disparity pair setting unit 108 of the pair position calculator 104. For example, when the absolute value of the calculated edge amount is larger than the edge amount threshold, a high-level comparison output is supplied to the valid disparity pair setting unit 108 (the valid disparity flag is turned on). When the absolute value of the calculated edge amount is smaller than the edge amount threshold, a low-level comparison output is supplied to the valid disparity pair setting unit 108 (the valid disparity flag is turned off).
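The lightweight edge amount mentioned above (a difference between the pixels at both ends of the window on the same line as the pixel of interest) and its comparison against the threshold can be sketched as follows. The half-window of 3 pixels and all names are assumptions; a Sobel filter or a secondary differential filter could be substituted, as stated above.

```python
def edge_amount(row, x, half_window=3):
    """Difference between the pixels at both ends of the window on
    the same line as the pixel of interest at column x (clamped at
    the image border)."""
    left = row[max(x - half_window, 0)]
    right = row[min(x + half_window, len(row) - 1)]
    return right - left

def valid_disparity_flag(row, x, edge_threshold):
    """Comparator 107: the flag is on when the absolute edge amount
    is larger than the edge amount threshold."""
    return abs(edge_amount(row, x)) > edge_threshold
```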
The valid disparity flag and the disparity image within the predetermined range described above are supplied to the valid disparity pair setting unit 108 of the pair position calculator 104. By way of example, as illustrated in Fig. 18, the valid disparity pair setting unit 108 sets, as a valid disparity pair, two non-adjacent disparities at the mutually closest pixel positions on the same line for which the valid disparity flag is turned on. Fig. 18 illustrates an example in which a first valid disparity pair, a second valid disparity pair, and a third valid disparity pair are set.
Next, the interval of the pair in the horizontal direction and the difference of the pair in the depth direction are calculated using the respective disparity values of the valid disparity pair described above. The pair interval calculator 109 calculates the interval of the pair in the horizontal direction in the real space from the disparity value of the left pixel of the valid disparity pair and the interval of the pair (in pixel units) on the disparity image. The pair depth difference calculator 111 calculates depths from the respective two disparities of the valid disparity pair, and calculates the absolute value of the difference between the depths.
The comparator 110 compares the interval between the pair in the horizontal direction calculated by the pair interval calculator 109 with a pair interval threshold. The pair interval threshold is determined in advance with reference to an actual width of an object to be detected. For example, to detect a person alone, the pair interval threshold is set to be a width occupied by a person. For example, a width of a large-size vehicle is prescribed to be 2500 mm at the maximum in Japan. Thus, in detecting a vehicle, the pair interval threshold is set to be the maximum width of the vehicle that is legally prescribed. The comparator 110 compares such a pair interval threshold with the interval between the pair in the horizontal direction calculated by the pair interval calculator 109, and supplies a comparison output to the validation determination unit 114 of the intra-pair disparity validation unit 105.
The pair depth difference calculator 111 calculates a depth difference in the valid disparity pair described above. A depth difference threshold read from the parameter memory 113 using the pair of disparity values is supplied to the comparator 112. The depth difference threshold read from the parameter memory 113 is determined in accordance with a distance calculated from the disparity value of the left pixel of the valid disparity pair. The depth difference threshold is determined in accordance with the distance because resolution of the disparity obtained from the stereo image of the imaging unit 2 is lowered when the distance to the object to be detected is large, and variance of detection distance becomes large. Accordingly, corresponding to the valid disparity value or a distance calculated therefrom, depth difference thresholds such as 10%, 15%, and 20% of the distance are stored in the parameter memory 113. The comparator 112 compares the depth difference threshold with the depth difference in the valid disparity pair, and supplies a comparison output to the validation determination unit 114 of the intra-pair disparity validation unit 105.
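The two quantities compared against the thresholds can be computed with the standard stereo relations, as a sketch. The depth of a point with disparity d is Z = B·f/d (B: baseline, f: focal length in pixels), and a pixel interval Δu on the image corresponds to a real-space horizontal interval of Δu·Z/f. The parameter names and units (millimetres) are assumptions.

```python
def pair_geometry(d_left, d_right, pixel_interval, baseline_mm, focal_px):
    """Return (horizontal interval in mm, depth difference in mm)
    for a valid disparity pair. The horizontal interval is scaled
    using the depth of the left pixel, as described above."""
    z_left = baseline_mm * focal_px / d_left    # depth of left disparity
    z_right = baseline_mm * focal_px / d_right  # depth of right disparity
    interval_mm = pixel_interval * z_left / focal_px
    depth_diff_mm = abs(z_left - z_right)
    return interval_mm, depth_diff_mm
```

With a 100 mm baseline and a 1000-pixel focal length, disparities of 10 and 12.5 pixels map to depths of 10 m and 8 m; a 50-pixel interval then corresponds to 0.5 m in real space, and the comparators would check 500 mm against the pair interval threshold (for example, the 2500 mm maximum legal vehicle width) and 2000 mm against the distance-dependent depth difference threshold.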
Next, the validation determination unit 114 of the intra-pair disparity validation unit 105 performs intra-pair region validation determination. That is, when the comparison outputs supplied from the comparators 110 and 112 respectively indicate that the interval between the pair in the horizontal direction is equal to or smaller than the pair interval threshold and the pair depth difference is equal to or smaller than the depth difference threshold, the validation determination unit 114 determines that an intra-pair region is valid. The disparity (intra-pair disparity) present in the intra-pair region determined to be valid is supplied to the valid disparity determination unit 115.
If the supplied intra-pair disparity has a value within a disparity value range that is determined in accordance with the disparity values of the pair of disparities, the valid disparity determination unit 115 determines the supplied intra-pair disparity to be the valid disparity, and outputs the intra-pair disparity as the valid disparity. The disparity value range of the pair of disparities means, assuming that two values of the pair of disparities are D1 and D2 (D1 > D2), a range of “D2 - α, D1 + α” with α as a constant. The constant α is determined based on a variance of a disparity of a subject obtained from the imaging unit 2 (stereo camera).
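The range check performed by the valid disparity determination unit 115 can be sketched as follows, with `d1`, `d2`, and `alpha` corresponding to D1, D2, and the constant α above:

```python
def is_valid_intra_pair_disparity(d: float, d1: float, d2: float,
                                  alpha: float) -> bool:
    """Check that an intra-pair disparity d lies within [D2 - a, D1 + a],
    where D1 > D2 are the disparity values of the pair and a is a constant
    determined from the disparity variance of the stereo camera."""
    hi, lo = max(d1, d2), min(d1, d2)
    return (lo - alpha) <= d <= (hi + alpha)
```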
The recognition processor 34 recognizes, for example, an object, a person, and a guardrail preceding the vehicle using the disparity image generated by the disparity image generator 33 as described above, and outputs recognition data as a recognition result.
Effect of first embodiment
As is clear from the above description, in calculating the disparity from the stereo image captured by the imaging unit 2, the equipment control system according to the first embodiment calculates the disparity based on block matching not only for pixel positions having an edge but also for other portions, thereby calculating many disparities in advance. Thereafter, the equipment control system validates only the disparity of the pixel having an edge, and when a difference between the validated disparity and another validated disparity positioned nearby is equal to or smaller than a predetermined value, validates a disparity having the same value present between the two validated disparities.
Accordingly, an appropriate disparity can be generated not only at a boundary of a three-dimensional object but also in the three-dimensional object and other spaces. That is, disparity information of a preceding vehicle can be appropriately generated not only at a vehicle edge but also in the vehicle and other spaces. Thus, the disparity image appropriate for recognizing an object can be generated, and the vehicle can be accurately recognized as one object with correct size and distance. This configuration can prevent the preceding vehicle from being coupled with another object to be erroneously detected.
In a process of calculating the disparity of an object having a repetitive pattern on its external appearance, two or more matched portions may appear (see Fig. 15), so that the most probable disparity value may be erroneously output. In such a case, an erroneous disparity value is actually output (erroneous matching), indicating that the object having the repetitive pattern, although present at a distant position, is positioned nearby. Due to this, the recognition processor 34 at a rear stage recognizes one wall as two walls, that is, one wall at a distance of 2 m from the own vehicle and another wall at a distance of 5 m from the own vehicle. A brake is then operated although the distance between the wall and the own vehicle is 5 m, which is called “erroneous braking”.
However, in the equipment control system according to the first embodiment, the disparity arithmetic unit 32 does not search for the number of values of non-similarity close to each other and the most probable disparity value after calculation of non-similarity in the search range is finished. The disparity arithmetic unit 32 counts the number of local minimum values of non-similarity and searches for the disparity at the same time. When a local minimum value of non-similarity falls out of the predetermined range, the disparity arithmetic unit 32 updates the predetermined range, and counts the number of local minimum values of non-similarity in the updated range. Due to this, when a repetitive pattern appears, the time until the disparity arithmetic unit 32 determines whether to use it for the object recognition processing of the recognition processor 34 can be shortened without increasing the processing time. Additionally, an erroneous disparity value is prevented from being output, and the “erroneous braking” described above can be suppressed.
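A minimal sketch of this single-pass counting scheme follows, under assumed parameter names: `margin` for the half-width of the predetermined range around the best local minimum, and `count_threshold` for the count at which a repetitive pattern is declared. Neither name comes from the patent.

```python
def detect_repetitive_pattern(costs, margin, count_threshold):
    """Single pass over the non-similarity curve: find local minima,
    keep a range centred on the best (lowest) minimum found so far, and
    count how many minima fall inside the finally updated range. A large
    count suggests a repetitive pattern (erroneous-matching risk).
    Returns (is_repetitive, best_cost)."""
    lo = hi = best = None
    count = 0
    for i in range(1, len(costs) - 1):
        c = costs[i]
        if not (costs[i - 1] > c <= costs[i + 1]):
            continue                  # not a local minimum
        if best is None or c < lo:
            # New, better minimum outside the current range:
            # re-centre the range on it and restart the count.
            best, lo, hi = c, c - margin, c + margin
            count = 1
        elif c <= hi:
            count += 1                # minimum inside the current range
    return count >= count_threshold, best
```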
Second Embodiment
Next, the following describes the equipment control system according to a second embodiment. In the equipment control system according to the second embodiment, the disparity image generator 33 has functions illustrated in Fig. 19. The second embodiment described below is different from the first embodiment only in the operation of the disparity image generator 33. Thus, the following describes only differences, and redundant description will not be repeated. A part in Fig. 19 that operates similarly to that in Fig. 17 is denoted by the same reference numeral, and detailed description thereof will not be repeated.
That is, in the equipment control system according to the second embodiment, the pair position calculator 104 of the disparity image generator 33 includes a valid disparity setting unit 120, a search range setting unit 121 for a disparity to be paired, a setting unit 122 for a disparity to be paired, a pair depth difference calculator 123, and the parameter memory 113. The intra-pair disparity validation unit 105 includes a comparator 124 and the valid disparity determination unit 115. The search range setting unit 121 for a disparity to be paired is an example of a search range setting unit. The pair depth difference calculator 123 is an example of a difference detector.
When the disparity image within the predetermined processing target range is input, the valid disparity setting unit 120 of the pair position calculator 104 selects a pixel for which the valid disparity flag is turned on (valid pixel) as a comparison output from the edge validation unit 103. The search range setting unit 121 for a disparity to be paired calculates and sets a range in which a disparity to be paired with the valid disparity is searched for in the right direction of the pixel of the valid disparity on the same line as the selected pixel, based on the disparity value (valid disparity) of the selected valid pixel and the maximum value of the interval between the pair. The maximum value of the interval between the pair is an example of pair interval information, and is synonymous with the pair interval threshold indicating the actual width of the object to be detected. Fig. 20 is a diagram schematically illustrating an operation of searching for a disparity to be paired performed by the search range setting unit 121 for a disparity to be paired. In Fig. 20, a black solid pixel represents a pixel SG of the valid disparity. Each of the pixels P1 to P4 represents a pixel of the disparity to be paired in the search range set in accordance with the disparity value of the pixel SG of the valid disparity. The search range setting unit 121 for a disparity to be paired calculates a maximum width (right direction) for searching for a disparity to be paired on the disparity image based on the maximum value of the interval between the pair and the disparity value of the selected pixel.
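The conversion from the actual pair interval to a search width in pixels can be sketched with the usual pinhole-stereo relation w_px = W · d / B (an object of width W observed at disparity d with baseline B spans W · d / B pixels). The baseline parameter is an assumption introduced for illustration, not a value from the patent.

```python
def pair_search_width_px(valid_disparity_px: float,
                         max_pair_width_m: float,
                         baseline_m: float) -> int:
    """Maximum number of pixels to scan to the right of a valid-disparity
    pixel when searching for the disparity to be paired, derived from the
    pinhole-stereo relation w_px = W * d / B."""
    return int(round(max_pair_width_m * valid_disparity_px / baseline_m))
```

For example, with a 2500 mm maximum vehicle width and an assumed 250 mm baseline, a valid disparity of 10 pixels yields a 100-pixel search width.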
Next, the setting unit 122 for a disparity to be paired detects a disparity closest to the valid disparity in the search range for a disparity to be paired, and causes the detected disparity to be the disparity to be paired. When the disparity to be paired is not present in the search range for a disparity to be paired, processing subsequent to setting processing for a disparity to be paired is not performed by the setting unit 122 for a disparity to be paired, and the search range for a disparity to be paired and the disparity to be paired are set based on the valid disparity that is subsequently set.
The disparity to be paired set by the setting unit 122 for a disparity to be paired is input to the pair depth difference calculator 123 together with the valid disparity as a pair. Similarly to the pair depth difference calculator 111 illustrated in Fig. 17, the pair depth difference calculator 123 calculates an absolute value of a difference in distance based on the input valid disparity and the disparity to be paired.
The comparator 124 of the intra-pair disparity validation unit 105 compares the depth difference threshold read from the parameter memory 113 with the depth difference calculated by the pair depth difference calculator 123 based on the valid disparity. When the comparison output supplied from the comparator 124 indicates that the depth difference is equal to or smaller than the depth difference threshold, the valid disparity determination unit 115 determines that an intra-pair disparity between the valid disparity and the disparity to be paired is the valid disparity to be output, provided that its value is within the range of the two disparity values including the valid disparity and the disparity to be paired.
The range of the two disparity values including the valid disparity and the disparity to be paired means, assuming that the two disparity values are D1 and D2 (D1 > D2), a range of “D2 - α, D1 + α” with α as a constant. The constant α can be determined based on a variance of a disparity of a predetermined subject obtained from the imaging unit 2.
With the equipment control system according to the second embodiment, the number of disparity points can be controlled without increasing disparity noise, and the same effect as that of the first embodiment can be obtained.
Third Embodiment
Next, the following describes the equipment control system according to a third embodiment. In the equipment control system according to the third embodiment, the disparity image generator 33 has functions illustrated in Fig. 21. The third embodiment described below is different from the first embodiment only in the operation of the disparity image generator 33. Thus, the following describes only differences, and redundant description will not be repeated. A part in Fig. 21 that operates similarly to that in Fig. 17 is denoted by the same reference numeral, and detailed description thereof will not be repeated.
That is, in the equipment control system according to the third embodiment, the edge validation unit 103 includes the edge amount calculator 106, a comparator 131, and a comparator 132. A first valid disparity flag from the comparator 131 is supplied to the valid disparity pair setting unit 108 of the pair position calculator 104, and a second valid disparity flag from the comparator 132 is supplied to the valid disparity determination unit 115 of the intra-pair disparity validation unit 105. The first valid disparity flag is an example of first valid disparity information. The second valid disparity flag is an example of second valid disparity information.
That is, in the equipment control system according to the third embodiment, the disparity value of the edge pixel is validated using a plurality of thresholds such as two thresholds (alternatively, three or more thresholds may be used). Specifically, a first edge amount threshold is larger than a second edge amount threshold. The first edge amount threshold is supplied to the comparator 131, and the second edge amount threshold is supplied to the comparator 132. The comparator 131 compares an absolute value of the edge amount calculated by the edge amount calculator 106 with the first edge amount threshold, and supplies the first valid disparity flag for validating a pixel that makes a valid disparity pair to the valid disparity pair setting unit 108 of the pair position calculator 104. The comparator 132 compares the absolute value of the edge amount calculated by the edge amount calculator 106 with the second edge amount threshold, and supplies the second valid disparity flag for finally validating the pixel of the intra-pair disparity to the valid disparity determination unit 115.
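The two comparators can be sketched as follows. The threshold values are assumptions for illustration and must satisfy first > second, as stated above.

```python
def edge_validation_flags(edge_amount: float,
                          first_threshold: float,
                          second_threshold: float):
    """Sketch of the comparators 131 and 132: the first (larger) threshold
    selects pixels allowed to form a valid disparity pair, while the second
    (smaller) threshold selects pixels whose intra-pair disparity may be
    finally validated. Assumes first_threshold > second_threshold."""
    magnitude = abs(edge_amount)
    first_flag = magnitude > first_threshold    # to valid disparity pair setting unit 108
    second_flag = magnitude > second_threshold  # to valid disparity determination unit 115
    return first_flag, second_flag
```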
Accordingly, the number of disparities to be validated among intra-pair disparities can be controlled, a disparity image optimum for object detection processing at a rear stage can be generated, and the same effect as that in the above embodiments can be obtained. The validation processing of the edge pixel using a plurality of thresholds performed by the edge validation unit 103 can also be applied to the second embodiment. The valid disparity setting unit 120 and the search range setting unit 121 for a disparity to be paired are assumed to perform processing on the same line of the disparity image. Alternatively, for example, the valid disparity pair may be set within a range of three lines in total, including the same line of the disparity image and the lines immediately above and below it.
Fourth Embodiment
Next, the following describes the equipment control system according to a fourth embodiment. With the equipment control system according to the fourth embodiment, one object can be correctly recognized as one object by reducing the number of disparity values of erroneous matching described above with reference to Fig. 14 and Fig. 15, and by performing object recognition processing with a disparity image including many valid disparity values. Due to this, a correct support operation can be performed.
Fig. 22 is a functional block diagram of the disparity image generator 33 disposed in the equipment control system according to the fourth embodiment. As illustrated in Fig. 22, the disparity image generator 33 includes a matching cost calculator 501, an edge detector 502, a repetitive pattern detector 503, an entire surface disparity image generator 504, and a generator 505.
The disparity image generator 33 is implemented when the FPGA 14 executes the object detection program stored in the ROM 16. In this example, the matching cost calculator 501 to generator 505 are implemented as software. Alternatively, part or all of the matching cost calculator 501 to generator 505 may be implemented as hardware such as an integrated circuit (IC).
The object detection program may be recorded and provided in a computer-readable recording medium such as a compact disc read only memory (CD-ROM) and a flexible disk (FD) as an installable or executable file. The object detection program may also be recorded and provided in a computer-readable recording medium such as a compact disc recordable (CD-R), a DVD, a Blu-ray Disc (registered trademark), and a semiconductor memory. The DVD is an abbreviation for a “digital versatile disc”. The object detection program may be provided to be installed via a network such as the Internet. The object detection program may be embedded and provided in a ROM in a device, for example.
The flowchart of Fig. 23 illustrates a procedure of disparity image generation processing performed by the disparity image generator 33. First, at Step S201, the matching cost calculator 501 calculates a non-similarity (matching cost) of each pixel of a reference image and a comparative image present on the same scanning line among reference images and comparative images captured by the imaging unit 2. The entire surface disparity image generator 504 generates an entire surface disparity image in which all pixels are represented by disparity values based on the calculated non-similarity.
At Step S202, the repetitive pattern detector 503, as an example of a discrimination unit and a pattern detector, discriminates the validity of each pixel of the stereo image based on the number of local minimum values of non-similarity within the range finally updated within the search range as described above.
Specifically, the repetitive pattern detector 503 performs detection processing of a repetitive pattern described in the first embodiment for each pixel. That is, as described in the first embodiment, the repetitive pattern detector 503 counts the number of local minimum values of non-similarity and searches for the disparity at the same time, updates the predetermined range when the local minimum value of non-similarity becomes out of the predetermined range, and performs detection processing of a repetitive pattern for counting the number of local minimum values of non-similarity within the updated range for each pixel. The repetitive pattern detector 503 adds, to a pixel in which no repetition occurs, validation information indicating that there is no repetition (sets a validation flag).
Next, at Step S203, the edge detector 502 adds, to a pixel having a luminance change larger than a predetermined threshold, edge information indicating that the pixel corresponds to an edge of the object (sets an edge flag).
Next, at Step S204, the generator 505, as an example of an extractor, extracts, as a pixel of valid disparity, a pixel to which both the validation information and the edge information are added in the entire surface disparity image. That is, the generator 505 extracts the valid disparity based on the pixels for which both the validation flag and the edge flag are turned on in the entire surface disparity image.
Fig. 24 illustrates an extraction result obtained with a conventional method for extracting the valid disparity. In Fig. 24, the region in which four disparity values of “4” are continuous is a region in which a correct disparity is obtained. The subsequent region in which the disparity values are “10, 10, 22, 22, 22, 22, 22, 22” is a region in which erroneous matching occurs due to an object having a repetitive pattern positioned at a long distance. The subsequent region in which seven disparity values of “4” are continuous is again a region in which a correct disparity is obtained. With the conventional method for extracting the valid disparity, as indicated by the thick line frame in Fig. 24, an erroneous determination occurs in which pixels are determined to be valid although erroneous matching occurs and their disparity values are incorrect.
The generator 505 in the equipment control system according to the fourth embodiment extracts, as a pixel of valid disparity, a pixel for which both the validation flag and the edge flag are turned on, as illustrated in Fig. 25. That is, the generator 505 performs, as it were, processing for inputting the validation flag and the edge flag to an AND gate to obtain an output. Accordingly, as illustrated in Fig. 25, the pixels having the disparity value of “4” in which the validation flag and the edge flag are both “1” are determined to be valid pixels. The generator 505 also determines, to be valid pixels, the pixels between the pixels having the disparity value of “4” that have been determined to be valid. In contrast, the pixels in the region in which erroneous matching occurs are all determined to be invalid pixels.
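The extraction performed by the generator 505, including the validation of same-valued pixels lying between two validated pixels of equal disparity, can be sketched as follows (a simplification over one scanning line, with `None` marking invalid pixels):

```python
def extract_valid_disparities(disparities, validation_flags, edge_flags):
    """Sketch of the generator 505 over one line: a pixel is valid when
    both its validation flag (no repetitive pattern) and its edge flag
    are set (an AND of the two flags); pixels carrying the same disparity
    value between two such valid pixels of equal disparity are also
    validated. Returns the disparity for valid pixels, None otherwise."""
    n = len(disparities)
    valid = [v and e for v, e in zip(validation_flags, edge_flags)]
    out = [disparities[i] if valid[i] else None for i in range(n)]
    prev = None
    for i in range(n):
        if not valid[i]:
            continue
        if prev is not None and disparities[prev] == disparities[i]:
            # Validate same-valued disparities lying between the pair.
            for j in range(prev + 1, i):
                if disparities[j] == disparities[i]:
                    out[j] = disparities[j]
        prev = i
    return out
```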
Finally, the generator 505 outputs, to the recognition processor 34 at a rear stage, a disparity image in which the disparity of erroneous matching is reduced (noise is reduced) and many valid disparities are included (Step S205), and ends the processing in the flowchart of Fig. 23.
By reducing such erroneous matching and performing the recognition processing with a disparity image in which the number of valid disparity values is increased, one object can be correctly recognized as one object, and correct driving support can be performed.
The embodiments described above are exemplary only and are not intended to limit the scope of the present invention. These novel embodiments can be implemented in other various forms, and can be variously omitted, replaced, and modified without departing from the gist of the present invention. For example, the same configuration, processing, and effect as described above can be obtained by using a distance image and a distance value in place of the disparity image and the disparity value. These embodiments and modifications thereof are included in the scope and the gist of the present invention, and are also included in the invention described in the Claims and equivalents thereof.
1 Vehicle
2 Imaging unit
3 Analyzing unit
4 Control unit
14 FPGA
17 CPU
31 Captured image corrector
32 Disparity arithmetic unit
33 Disparity image generator
34 Recognition processor
103 Edge validation unit
104 Pair position calculator
105 Intra-pair disparity validation unit
106 Edge amount calculator
107 Comparator
108 Valid disparity pair setting unit
109 Pair interval calculator
110 Comparator
111 Pair depth difference calculator
112 Comparator
113 Parameter memory
114 Validation determination unit
115 Valid disparity determination unit
120 Valid disparity setting unit
121 Search range setting unit for a disparity to be paired
122 Setting unit for a disparity to be paired
123 Pair depth difference calculator
124 Comparator
131 Comparator
132 Comparator
440 Information processor
441 Non-similarity calculator
442 Inclination calculator
443 Local minimum value detector
444 Threshold setting unit
445 Flag controller
446 Counter controller
447 Validity determination unit
450 Information storage unit
451 Non-similarity register
452 Inclination register
453 Threshold register
454 Flag register
455 Local minimum value counter
501 Matching cost calculator
502 Edge detector
503 Repetitive pattern detector
504 Entire surface disparity image generator
505 Generator
Japanese Unexamined Patent Publication No. 11-351862

Claims (15)

  1. A disparity image generation device comprising:
    a valid pixel determination unit configured to determine a valid pixel based on a feature value of each pixel in a captured image; and
    a validation unit configured to validate a disparity, which is not a valid disparity, near a valid disparity corresponding to the valid pixel in a disparity image corresponding to the captured image.
  2. The disparity image generation device according to claim 1, further comprising:
    a calculator configured to set a valid disparity pair of two adjacent valid disparities out of valid disparities each corresponding to the valid pixel in the disparity image corresponding to the captured image, and calculate a positional relation between the valid disparities of the valid disparity pair, wherein
    the validation unit validates a disparity between the valid disparity pair when a disparity value of the disparity between the valid disparity pair is within a disparity value range determined in accordance with disparity values of the valid disparity pair.
  3. The disparity image generation device according to claim 2, wherein
    the calculator includes
    a pair setting unit configured to set, as the valid disparity pair, a disparity of interest out of valid disparities each corresponding to the valid pixel and a closest valid disparity in the same line as the disparity of interest; and
    a pair position calculator configured to calculate a positional relation of the valid disparity pair.
  4. The disparity image generation device according to claim 2, wherein
    the calculator includes
    a search range setting unit configured to set a search range of a disparity to be paired with the valid disparity in the same line as a selected pixel in accordance with a valid disparity as a disparity value of the valid pixel and pair interval information indicating an actual width of an object to be detected;
    a setting unit for a disparity to be paired configured to set the disparity to be paired in the search range; and
    a difference detector configured to detect a difference between the valid disparity and the disparity to be paired, and
    the validation unit validates a disparity between the valid disparity and the disparity to be paired when the difference between the valid disparity and the disparity to be paired is equal to or smaller than a predetermined threshold.
  5. The disparity image generation device according to any one of claims 2 to 4, wherein
    the valid pixel determination unit supplies first valid disparity information to the calculator when an edge amount obtained as the feature value from the captured image is larger than a first edge amount threshold, and supplies second valid disparity information to the validation unit when the edge amount obtained as the feature value from the captured image is larger than a second edge amount threshold smaller than the first edge amount threshold,
    the calculator sets the valid disparity pair based on the first valid disparity information, and
    the validation unit validates a disparity between the valid disparity pair based on the second valid disparity information.
  6. The disparity image generation device according to any one of claims 1 to 5, further comprising a disparity arithmetic unit configured to calculate a disparity value of the captured image through matching processing, wherein
    the disparity arithmetic unit includes
    an evaluation value calculator configured to calculate an evaluation value of correlation within a predetermined search range;
    an extreme value detector configured to detect an extreme value of the evaluation value;
    a counter configured to count the number of the extreme values having values within a predetermined range; and
    an updater configured to update the predetermined range when an extreme value representing higher correlation than the value within the range is detected, wherein
    the counter counts the number of extreme values within a range finally updated within the search range.
  7. The disparity image generation device according to claim 6, wherein the updater updates the predetermined range so that the extreme value representing higher correlation becomes the center of the updated predetermined range.
  8. The disparity image generation device according to claim 6 or 7, wherein the disparity arithmetic unit invalidates a disparity value corresponding to the extreme value within the finally updated range when a count value of the counter is equal to or larger than a predetermined value.
  9. The disparity image generation device according to any one of claims 6 to 8, wherein the disparity arithmetic unit includes a resetting unit configured to reset the counter.
  10. The disparity image generation device according to any one of claims 1 to 9, further comprising:
    an edge detector configured to detect an edge of the captured image;
    a discrimination unit configured to discriminate validity of each pixel in the captured image based on the number of extreme values within the range finally updated within the search range; and
    an extractor configured to extract, as a valid disparity, a disparity value of a pixel that is discriminated to be valid by the discrimination unit and detected as the edge by the edge detector out of pixels in the captured image.
  11. The disparity image generation device according to claim 10, wherein
    the discrimination unit is a pattern detector configured to detect a repetitive pattern using the captured image and discriminate a pixel that is not the repetitive pattern as a valid pixel, and
    the extractor extracts, as a valid disparity, a disparity value of the pixel that is not the repetitive pattern and detected as the edge out of pixels in the disparity image.
  12. A disparity image generation method comprising:
    determining a valid pixel based on a feature value of each pixel in a captured image; and
    validating a disparity, which is not a valid disparity, near a valid disparity corresponding to the valid pixel in a disparity image corresponding to the captured image.
  13. A computer program causing a computer to perform:
    determining a valid pixel based on a feature value of each pixel in a captured image; and
    validating a disparity, which is not a valid disparity, near a valid disparity corresponding to the valid pixel in a disparity image corresponding to the captured image.
  14. An object recognition device comprising:
    the disparity image generation device according to any one of claims 1 to 11; and
    an object recognition unit configured to recognize an object using a disparity image generated by the disparity image generation device.
  15. An equipment control system comprising:
    the disparity image generation device according to any one of claims 1 to 11;
    an object recognition unit configured to recognize an object using a disparity image generated by the disparity image generation device; and
    a control device configured to control a device using an object recognition result of the object recognition device.
PCT/JP2016/003129 2015-07-02 2016-06-29 Disparity image generation device, disparity image generation method, disparity image generation program, object recognition device, and equipment control system WO2017002367A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN201680037648.3A CN107735812B (en) 2015-07-02 2016-06-29 Object recognition apparatus, object recognition method, device control system, and image generation apparatus
KR1020177037261A KR102038570B1 (en) 2015-07-02 2016-06-29 Parallax image generating device, parallax image generating method, parallax image generating program, object recognition device, and device control system
EP16817477.9A EP3317850B1 (en) 2015-07-02 2016-06-29 Disparity image generation device, disparity image generation method, disparity image generation program, object recognition device, and equipment control system
US15/854,461 US10520309B2 (en) 2015-07-02 2017-12-26 Object recognition device, object recognition method, equipment control system, and distance image generation device

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
JP2015133967 2015-07-02
JP2015-133967 2015-07-02
JP2015-178002 2015-09-09
JP2015178002 2015-09-09
JP2016088603A JP6805534B2 (en) 2015-07-02 2016-04-26 Parallax image generator, parallax image generation method and parallax image generation program, object recognition device, device control system
JP2016-088603 2016-04-26

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/854,461 Continuation US10520309B2 (en) 2015-07-02 2017-12-26 Object recognition device, object recognition method, equipment control system, and distance image generation device

Publications (1)

Publication Number Publication Date
WO2017002367A1 true WO2017002367A1 (en) 2017-01-05

Family

ID=57608049

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/003129 WO2017002367A1 (en) 2015-07-02 2016-06-29 Disparity image generation device, disparity image generation method, disparity image generation program, object recognition device, and equipment control system

Country Status (1)

Country Link
WO (1) WO2017002367A1 (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008065634A (en) * 2006-09-07 2008-03-21 Fuji Heavy Ind Ltd Object detection apparatus and object detection method
JP2013164351A (en) * 2012-02-10 2013-08-22 Toyota Motor Corp Stereo parallax calculation device
JP2013250907A (en) * 2012-06-04 2013-12-12 Ricoh Co Ltd Parallax calculation device, parallax calculation method and parallax calculation program
JP2015011619A (en) * 2013-07-01 2015-01-19 株式会社リコー Information detection device, mobile equipment control system, mobile body and program for information detection

Non-Patent Citations (1)

Title
See also references of EP3317850A4 *

Cited By (4)

Publication number Priority date Publication date Assignee Title
US11062613B2 (en) 2018-04-05 2021-07-13 Everdrone Ab Method and system for interpreting the surroundings of a UAV
EP3761220A1 (en) * 2019-07-05 2021-01-06 Everdrone AB Method for improving the interpretation of the surroundings of a vehicle
US11423560B2 (en) 2019-07-05 2022-08-23 Everdrone Ab Method for improving the interpretation of the surroundings of a vehicle
CN113965697A (en) * 2021-10-21 2022-01-21 北京的卢深视科技有限公司 Parallax imaging method based on continuous frame information, electronic device and storage medium

Similar Documents

Publication Publication Date Title
US10520309B2 (en) Object recognition device, object recognition method, equipment control system, and distance image generation device
US10580155B2 (en) Image processing apparatus, imaging device, device control system, frequency distribution image generation method, and recording medium
US9794543B2 (en) Information processing apparatus, image capturing apparatus, control system applicable to moveable apparatus, information processing method, and storage medium of program of method
JP3780922B2 (en) Road white line recognition device
US10776946B2 (en) Image processing device, object recognizing device, device control system, moving object, image processing method, and computer-readable medium
KR101609303B1 (en) Method to calibrate camera and apparatus therefor
JP7206583B2 (en) Information processing device, imaging device, device control system, moving object, information processing method and program
US20010002936A1 (en) Image recognition system
CN111971682B (en) Road surface detection device, image display device, obstacle detection device, road surface detection method, image display method, and obstacle detection method
US11151395B2 (en) Roadside object detection device, roadside object detection method, and roadside object detection system
JP6592991B2 (en) Object detection apparatus, object detection method, and program
JPH11351862A (en) Foregoing vehicle detecting method and equipment
WO2017002367A1 (en) Disparity image generation device, disparity image generation method, disparity image generation program, object recognition device, and equipment control system
JP6668922B2 (en) Information processing device, imaging device, moving object control system, information processing method, and program
JP6569416B2 (en) Image processing apparatus, object recognition apparatus, device control system, image processing method, and image processing program
JP7064400B2 (en) Object detection device
JP2000259997A (en) Height of preceding vehicle and inter-vehicle distance measuring device
JP3532896B2 (en) Smear detection method and image processing apparatus using the smear detection method
WO2018097269A1 (en) Information processing device, imaging device, equipment control system, mobile object, information processing method, and computer-readable recording medium
WO2020036039A1 (en) Stereo camera device
EP2919191B1 (en) Disparity value deriving device, equipment control system, movable apparatus, robot, and disparity value producing method
JP2015172846A (en) Image processing apparatus, equipment control system, and image processing program
JP2006113051A (en) Image recognizing device

Legal Events

Date Code Title Description
121  Ep: the epo has been informed by wipo that ep was designated in this application
     Ref document number: 16817477
     Country of ref document: EP
     Kind code of ref document: A1
ENP  Entry into the national phase
     Ref document number: 20177037261
     Country of ref document: KR
     Kind code of ref document: A
NENP Non-entry into the national phase
     Ref country code: DE
WWE  Wipo information: entry into national phase
     Ref document number: 2016817477
     Country of ref document: EP