WO2021176819A1 - Processing device - Google Patents

Processing device

Info

Publication number
WO2021176819A1
Authority
WIPO (PCT)
Prior art keywords
parallax
image
control target
value
pair
Application number
PCT/JP2020/048694
Other languages
French (fr)
Japanese (ja)
Inventor
真穂 堀永
直也 多田
裕史 大塚
Original Assignee
Hitachi Astemo, Ltd. (日立Astemo株式会社)
Application filed by Hitachi Astemo, Ltd.
Priority to JP2022504989A (patent JP7250211B2)
Priority to DE112020005059.9T (patent DE112020005059T5)
Publication of WO2021176819A1

Classifications

    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/50 Depth or shape recovery
    • G06T 7/55 Depth or shape recovery from multiple images
    • G06T 7/593 Depth or shape recovery from stereo images
    • G01C 25/00 Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass
    • G01C 3/06 Measuring distances in line of sight; optical rangefinders; use of electric means to obtain final indication
    • G06T 2207/10021 Image acquisition modality: stereoscopic video; stereoscopic image sequence
    • G06T 2207/20076 Special algorithmic details: probabilistic image processing
    • G06T 2207/30252 Subject of image: vehicle exterior; vicinity of vehicle
    • G06T 2207/30256 Lane; road marking

Definitions

  • The present invention relates to, for example, a processing device that processes a captured image captured by a stereo camera mounted on a vehicle.
  • Patent Document 1 discloses a technique for estimating which part of a road marking is imaged from a plurality of images captured as the vehicle moves.
  • When a relative vertical deviation occurs between the optical axis of the left camera and the optical axis of the right camera, and the stereo camera images road surface paint in which lines are drawn obliquely and repeatedly toward the depth direction, such as a channelizing zone (zebra zone), the painted area may be erroneously detected as a three-dimensional object. If the area where the road surface paint is drawn is erroneously detected as a three-dimensional object, a safety system provided by the stereo camera, such as AEB or ACC, may be activated, and unnecessary warnings or braking may be applied, giving the occupants a sense of discomfort.
  • The above-mentioned conventional method corrects the vertical deviation between the left and right cameras, and the correction process takes time. If the vehicle travels on a road surface with a channelizing zone before the correction of the vertical deviation is completed, the zone may be erroneously detected as a three-dimensional object. Therefore, during the vertical deviation correction process, a mechanism such as stopping the system provided by the stereo camera is required.
  • In that case, however, the safety system may not operate in a situation where it is actually needed, which may lead to a decrease in safety performance.
  • An object of the present invention is to provide a processing device that differentiates a three-dimensional object from a non-three-dimensional object regardless of whether or not a vertical optical axis shift occurs between the left and right cameras of a stereo camera, and that determines whether or not an object recognized as a three-dimensional object by the stereo camera should be controlled.
  • The processing device of the present invention that solves the above problems is a processing device that processes a pair of captured images captured by a pair of cameras, and includes a first parallax image generation unit that generates a first parallax image from the pair of captured images, a first parallax value acquisition unit that acquires a parallax value of a candidate region of the first parallax image, a second parallax image generation unit that generates a second parallax image by shifting the relative vertical positions of the pair of captured images, and a second parallax value acquisition unit that acquires a parallax value of a region of the second parallax image corresponding to the candidate region of the first parallax image.
  • A block diagram showing the overall configuration of the imaging device in the first embodiment.
  • A diagram schematically showing the left and right captured images and the parallax image.
  • A diagram showing an example of the recognition area of a control target candidate.
  • A diagram showing the difference between distant parallax and near parallax in the left and right captured images.
  • A diagram explaining the parallax when a channelizing zone is imaged by a stereo camera whose left and right optical axes are relatively displaced in the vertical direction.
  • FIG. 3 is a diagram schematically showing the left and right captured images and the parallax image.
  • FIG. 3A is a schematic view of a captured image of the area in front of the vehicle captured by the left camera.
  • FIG. 3B is a schematic view of the captured image of the area in front of the vehicle captured by the right camera at the same time as the left camera.
  • FIG. 3C is a parallax image created by using the captured image of FIG. 3A and the captured image of FIG. 3B.
  • An intermediate region 322 is also shown in the parallax image.
  • The images captured by the left and right cameras capture a state in which the preceding vehicle 311 is traveling in front of the own vehicle.
  • The own vehicle is traveling in the traveling lane 312, and channelizing zones 313 are provided on both sides of the traveling lane 312.
  • The stereo camera detects a three-dimensional object from the parallax image as shown in FIG. 3C and recognizes it as a control target candidate.
  • In the absence of a three-dimensional object, the parallax gradually decreases from near to far; accordingly, in the parallax image it gradually decreases from the bottom toward the top in the vertical direction.
  • When there is a three-dimensional object, the parallax is the same at near and far positions on the object, and in the parallax image it has the same value along the vertical direction. Therefore, in the parallax image, it is determined that a three-dimensional object exists in a region having the same parallax along the vertical direction.
  • In FIG. 3C, the preceding vehicle 311, which is actually a three-dimensional object, is detected as a three-dimensional object 331, but part of the channelizing zone 313, which is not actually a three-dimensional object, is also detected as a three-dimensional object 332.
  • FIG. 4 is a diagram showing an example of a recognition area of the control target candidate.
  • the candidate area recognized as the control target candidate is shown as a rectangular frame 402.
  • the position of the rectangular frame 402 indicating the candidate region is defined by the xy coordinates of the parallax image 401.
  • FIG. 5 is a diagram showing the difference between the distant parallax and the near parallax in the left and right captured images.
  • Two preceding vehicles 511 and 521 are imaged in both the image 501 captured by the left camera and the image 502 captured by the right camera. Due to the characteristics of the stereo camera, as shown in FIG. 5, the parallax with respect to the distant preceding vehicle 521 is small, and the parallax with respect to the nearby preceding vehicle 511 is large.
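The near/far behavior follows from the standard stereo relation d = f·B/Z, in which parallax is inversely proportional to distance. A minimal sketch, with illustrative focal-length and baseline values that are not taken from the patent:

```python
def disparity_px(distance_m, focal_px=1400.0, baseline_m=0.35):
    """Stereo parallax in pixels for an object at distance_m.
    focal_px and baseline_m are invented example values."""
    return focal_px * baseline_m / distance_m

near = disparity_px(10.0)   # nearby preceding vehicle (like 511)
far = disparity_px(50.0)    # distant preceding vehicle (like 521)
assert near > far           # near parallax is large, far parallax is small
```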
  • FIG. 6 is a diagram for explaining the reason why the parallax error differs depending on the inclination angle of the line captured in the image.
  • FIG. 6A is an image of a line extending straight along the traveling direction of the own vehicle; a vertical line 611 extending straight along the vertical direction of the image is imaged.
  • FIGS. 6B and 6C are images of lines extending diagonally with respect to the traveling direction of the own vehicle. In the image of FIG. 6B, a diagonal line 612 with a large inclination with respect to the lateral direction of the image is imaged, and in the image of FIG. 6C, a diagonal line 613 with a small inclination with respect to the lateral direction of the image is imaged.
  • For the vertical line 611 of the image 601 shown in FIG. 6A, the position P1 when imaged by the left camera and the position P2 when imaged by the right camera coincide in the left-right direction, so the deviation error is zero.
  • For the diagonal line 612 of the image 602 shown in FIG. 6B, an error δ1 occurs in the left-right direction between the position P3 when imaged by the left camera and the position P4 when imaged by the right camera.
  • The diagonal line 613 of the image 603 shown in FIG. 6C has a smaller inclination with respect to the lateral direction than the diagonal line 612 of the image 602 shown in FIG. 6B, and its error δ2 is larger (δ1 < δ2). That is, when the optical axis of the left camera and the optical axis of the right camera are relatively displaced in the vertical direction, the error increases as the inclination of the imaged line with respect to the lateral direction decreases.
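The dependence of the matching error on line inclination can be illustrated geometrically: for a vertical misalignment of Δv pixels, a line inclined at angle θ from the image's lateral direction is displaced horizontally by roughly Δv / tan θ when matched at the wrong height. A sketch under that assumption (the angles and shift amount are invented values, not from the patent):

```python
import math

def horizontal_error_px(vertical_shift_px, line_angle_deg):
    """Left-right matching error caused by a vertical optical-axis shift,
    for a line inclined line_angle_deg from the image's lateral direction.
    A vertical line (90 deg) gives no error; the shallower the line, the
    larger the error."""
    return vertical_shift_px / math.tan(math.radians(line_angle_deg))

dv = 2.0                                      # assumed misalignment in pixels
e_vertical = horizontal_error_px(dv, 90.0)    # vertical line 611
e_steep = horizontal_error_px(dv, 60.0)       # steep diagonal 612 (delta-1)
e_shallow = horizontal_error_px(dv, 30.0)     # shallow diagonal 613 (delta-2)
assert abs(e_vertical) < 1e-12 and e_steep < e_shallow   # delta-1 < delta-2
```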
  • FIG. 7 is a diagram for explaining the parallax when the diversion zone is imaged by a stereo camera in which the optical axes of the left and right cameras are relatively displaced in the vertical direction.
  • The white line of the channelizing zone 702 is imaged so that its inclination is large in the vicinity and gradually decreases with distance. Therefore, when the optical axes of the left and right cameras are relatively displaced in the vertical direction, the left-right parallax errors 721 to 723 of the white line gradually increase from near to far, as described with reference to FIG. 6.
  • Meanwhile, the correct parallaxes 711 to 713 of the white line gradually decrease from near to far, as described with reference to FIG. 5, regardless of the optical axis deviation.
  • Consequently, the measured parallax in the vicinity is the sum of the relatively large correct parallax 711 and the relatively small parallax error 721; the measured parallax in the distance is the sum of the relatively small correct parallax 713 and the relatively large parallax error 723; and the measured parallax between near and far is the sum of the medium correct parallax 712 and the medium parallax error 722.
  • As a result, the measured parallax 703 is nearly equal in the vicinity, in the distance, and in between, so a region in which the parallax is the same along the vertical direction appears in the parallax image, and there is a risk of erroneously detecting that a three-dimensional object exists.
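This cancellation can be shown with invented numbers: a correct parallax that shrinks with distance, plus an error that grows with distance, can sum to a constant, mimicking a three-dimensional object:

```python
# Hypothetical per-row values for a zebra-zone white line (first = near).
correct_parallax = [30.0, 20.0, 10.0]   # like 711, 712, 713: shrinks with distance
parallax_error   = [ 2.0, 12.0, 22.0]   # like 721, 722, 723: grows with distance

measured = [c + e for c, e in zip(correct_parallax, parallax_error)]
# The sums are equal at near, middle, and far, i.e. constant parallax along
# the vertical direction of the image, which looks like a solid object.
assert measured == [32.0, 32.0, 32.0]
```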
  • Therefore, the imaging device of the present embodiment differentiates a three-dimensional object from a non-three-dimensional object regardless of whether or not a vertical optical axis shift occurs between the left and right cameras of the stereo camera, and performs a process of determining whether or not an object recognized as a three-dimensional object by the stereo camera is to be controlled.
  • Specifically, the imaging device of the present embodiment calculates the parallax average value when the pair of captured images captured by the pair of imaging units is not shifted in the vertical direction and the parallax average values when the pair of captured images is relatively shifted in the vertical direction, and determines the necessity of control of the control target from the distribution of the shift amount versus the parallax average value.
  • FIG. 1 is a block diagram showing the overall configuration of the imaging device according to the first embodiment.
  • The imaging device 100 is mounted on a vehicle such as an automobile, and is a stereo camera that images the area in front of the vehicle with a pair of left and right cameras and performs processing such as determining the presence or absence of a three-dimensional object based on parallax and identifying the type of the three-dimensional object.
  • The imaging device 100 includes a first imaging unit 11 and a second imaging unit 12, which are a pair of cameras, and a processing device 1 that processes the two captured images captured by the first imaging unit 11 and the second imaging unit 12.
  • In the present embodiment, an example in which the processing device 1 is configured inside the imaging device 100, which is a stereo camera, will be described; however, the location of the processing device 1 is not limited to the inside of the imaging device 100, and it may instead be configured as an ECU or the like provided separately from the imaging device 100.
  • the first imaging unit 11 and the second imaging unit 12 are arranged at positions separated from each other in the vehicle interior of the vehicle in the vehicle width direction, and pass through the windshield to image an overlapping region in front of the vehicle.
  • The first imaging unit 11 and the second imaging unit 12 are each composed of an assembly combining, for example, optical components such as lenses and an image sensor such as a CCD or CMOS.
  • the first imaging unit 11 and the second imaging unit 12 are adjusted so that their optical axes are parallel to each other and have the same height.
  • the captured images captured by the first imaging unit 11 and the second imaging unit 12 are input to the processing device 1.
  • the processing device 1 is composed of hardware having, for example, a CPU and memory, and software installed and executed on the hardware.
  • The processing device 1 includes, as internal functions, a first image acquisition unit 13, a second image acquisition unit 14, a first parallax image generation unit 15, a control target candidate recognition unit 16, a first parallax value acquisition unit 17, a second parallax image generation unit 18, a second parallax value acquisition unit 19, and a control target determination unit 20.
  • the first image acquisition unit 13 and the second image acquisition unit 14 acquire captured images that are simultaneously and periodically captured by the first imaging unit 11 and the second imaging unit 12.
  • The first image acquisition unit 13 and the second image acquisition unit 14 cut out images of predetermined regions that overlap each other from the pair of captured images simultaneously captured by the first imaging unit 11 and the second imaging unit 12, and acquire them as a first image and a second image, respectively.
  • the first parallax image generation unit 15 generates a first parallax image using a pair of captured images acquired by the first image acquisition unit 13 and the second image acquisition unit 14.
  • A conventionally known method can be used to generate the first parallax image. For example, with the first image as a reference, a pixel array in the second image at the same vertical height as in the first image is scanned in the horizontal direction to find a matching point with the first image, and the amount of lateral deviation between the first image and the second image is calculated as the parallax; this is so-called stereo matching.
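The stereo matching just described can be sketched as a toy one-dimensional block match that minimizes the sum of absolute differences (SAD); the function name, block size, and search range are illustrative assumptions, not taken from the patent:

```python
def row_disparity(left_row, right_row, x, block=3, max_disp=8):
    """Disparity of pixel x in left_row, found by scanning right_row at the
    same vertical height and minimizing the sum of absolute differences.
    A toy 1-D sketch of stereo matching."""
    half = block // 2
    patch = left_row[x - half:x + half + 1]
    best_d, best_cost = 0, float("inf")
    for d in range(0, max_disp + 1):
        xr = x - d                 # a match shifts left in the right image
        if xr - half < 0:
            break
        cand = right_row[xr - half:xr + half + 1]
        cost = sum(abs(a - b) for a, b in zip(patch, cand))
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

# The right row is the left row shifted by 2 pixels, so the disparity is 2.
left  = [0, 0, 0, 9, 5, 1, 0, 0, 0, 0]
right = [0, 9, 5, 1, 0, 0, 0, 0, 0, 0]
assert row_disparity(left, right, x=3) == 2
```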
  • The control target candidate recognition unit 16 detects a three-dimensional object from the first parallax image generated by the first parallax image generation unit 15 and recognizes the three-dimensional object as a control target candidate. For example, it judges that a three-dimensional object exists in a region where the amount of change in parallax in the vertical direction of the image is small compared with the amount of change in parallax in the vertical direction of an image of a continuous flat surface without a three-dimensional object.
  • When a plurality of three-dimensional objects are detected, the control target candidate recognition unit 16 recognizes each of them as a control target candidate.
  • the control target candidate recognition unit 16 acquires the coordinate information of the candidate region in which the control target candidate is recognized in the first parallax image (for example, the regions 331 and 332 in FIG. 3 and the region 402 in FIG. 4).
  • the first parallax value acquisition unit 17 acquires the first parallax value, which is the parallax calculation value in the candidate region where the control target candidate exists in the first parallax image.
  • the first parallax value acquisition unit 17 executes a process of acquiring the first parallax value only when the control target candidate is recognized by the control target candidate recognition unit 16.
  • the first parallax value acquisition unit 17 does not perform the process of acquiring the first parallax value when no control target candidate is recognized by the control target candidate recognition unit 16.
  • As the first parallax value, the parallax average value in the candidate region can be used.
  • The parallax average value in the candidate region is calculated from the parallax values and the number of parallaxes in the candidate region.
  • The calculated parallax value is not limited to the parallax average value; the parallax dispersion value, the maximum parallax value, the minimum parallax value, the difference between the maximum and minimum parallax values, the ratio of the maximum and minimum parallax values, the average of the parallax maximum values, or the average of the parallax minimum values can also be used.
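The candidate-region statistics listed above might be computed as follows (a sketch with hypothetical region values; the averaging of per-row maxima and minima mentioned in the text is omitted):

```python
def parallax_stats(parallaxes):
    """Statistics of the valid parallax values inside a candidate region.
    'parallaxes' is a hypothetical list of region parallax values."""
    n = len(parallaxes)
    ave = sum(parallaxes) / n
    var = sum((p - ave) ** 2 for p in parallaxes) / n   # dispersion value
    mx, mn = max(parallaxes), min(parallaxes)
    return {
        "ave": ave, "sigma": var, "max": mx, "min": mn,
        "max-min": mx - mn, "max/min": mx / mn,
    }

stats = parallax_stats([31.0, 32.0, 33.0])
assert stats["ave"] == 32.0 and stats["max-min"] == 2.0
```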
  • The second parallax image generation unit 18 generates a plurality of second parallax images using the pair of captured images used when the first parallax image was generated.
  • The second parallax image generation unit 18 generates each second parallax image using a pair of shifted images cut out by shifting the relative vertical positions of the first image and the second image, for each vertical coordinate or for each equally divided section.
  • The second parallax image generation unit 18 cuts out a pair of shifted images by shifting the position of at least one of the first image and the second image only in the upward direction, only in the downward direction, or in both the upward and downward directions.
  • The second parallax image generation unit 18 changes the amount of shift in the vertical direction to generate a plurality of pairs of shifted images, and uses these to generate a plurality of second parallax images. That is, the second parallax image generation unit 18 shifts the position of at least one of the first image and the second image a plurality of times to generate a plurality of pairs of shifted images, and generates a second parallax image from each pair of shifted images.
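One possible way to cut out vertically shifted image pairs, treating an image as a list of rows (a sketch, not the patent's actual implementation; the shift amounts are invented):

```python
def shifted_pair(first_img, second_img, dv):
    """Cut out a pair of images with the second shifted vertically by dv rows
    relative to the first (negative dv shifts the other way). Images are
    lists of rows; only the overlapping rows are kept."""
    if dv >= 0:
        return first_img[dv:], second_img[:len(second_img) - dv]
    return first_img[:len(first_img) + dv], second_img[-dv:]

img1 = [[1], [2], [3], [4]]
img2 = [[5], [6], [7], [8]]
# Illustrative shift amounts used to build the plural second parallax images.
pairs = {dv: shifted_pair(img1, img2, dv) for dv in (-2, -1, 1, 2)}
a, b = pairs[1]
assert a == [[2], [3], [4]] and b == [[5], [6], [7]]
```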
  • the second parallax value acquisition unit 19 acquires the second parallax value, which is the calculated parallax value in the corresponding area, from the corresponding area of the second parallax image.
  • the corresponding area of the second parallax image is set at the same position as the candidate area of the first parallax image. Since a plurality of second parallax images are generated, a corresponding region is set at the same position as the candidate region of the first parallax image for each second parallax image, and the second parallax value is obtained from each corresponding region. Further, when there are a plurality of candidate regions of the first parallax image, the corresponding regions of the second parallax image are set corresponding to each, and the second parallax value of each corresponding region is obtained.
  • As the second parallax value, the parallax average value in the corresponding region can be used in the same manner as the first parallax value.
  • The parallax average value in the corresponding region is calculated from the parallax values and the number of parallaxes in the corresponding region.
  • As with the first parallax value, the parallax dispersion value, the maximum parallax value, the minimum parallax value, the difference or ratio of the maximum and minimum parallax values, the average of the parallax maximum values, or the average of the parallax minimum values can also be used.
  • The second parallax value is calculated for each of the plurality of second parallax images.
  • The control target determination unit 20 uses the first parallax image generated by the first parallax image generation unit 15 and the plurality of second parallax images generated by the second parallax image generation unit 18 to determine the necessity of control, that is, whether or not the control target candidate detected as a three-dimensional object is recognized as a control target.
  • the control target determination unit 20 determines the necessity of the control target based on the parallax calculation values calculated for each of the first parallax image and the plurality of second parallax images.
  • FIG. 2 is a flowchart illustrating the control target determination process of the imaging device according to the first embodiment.
  • In step S201, image acquisition is performed.
  • a pair of captured images simultaneously captured by the first imaging unit 11 and the second imaging unit 12 are acquired by the first image acquisition unit 13 and the second image acquisition unit 14.
  • In step S202, the first parallax image is generated.
  • the first parallax image is generated by the first parallax image generation unit 15 using a pair of captured images acquired by the first image acquisition unit 13 and the second image acquisition unit 14.
  • In step S203, it is determined whether or not a control target candidate exists. Whether or not there is a control target candidate is determined by whether or not a three-dimensional object is detected from the first parallax image.
  • When a three-dimensional object is detected from the first parallax image, it is determined that a control target candidate exists (YES in step S203), and the process proceeds to step S204.
  • When no three-dimensional object is detected, it is determined that there is no control target candidate in front of the own vehicle (NO in step S203), and the process returns to step S201. That is, in step S203, three-dimensional objects and non-three-dimensional objects are differentiated, and only objects recognized as three-dimensional objects are subjected to the process of determining, in step S204 and later, whether or not they are to be controlled.
  • In step S204, a process of shifting the images in the vertical direction is performed.
  • A process is performed of shifting the first image and the second image relative to each other in the vertical direction, by coordinates or by equally divided sections, to acquire a pair of shifted images.
  • In step S205, the parallax is calculated by stereo matching using the pair of shifted images, and a second parallax image is generated.
  • In step S206, it is determined whether or not a plurality of second parallax images exist. When the number of second parallax images is less than a preset number of two or more, it is determined that a plurality of parallax images do not yet exist (NO in step S206), and the process returns to step S204. Then, in steps S204 and S205, the vertical shift amount of the pair of images is changed to generate another pair of shifted images with a different shift amount, and a further second parallax image is generated.
  • Until the determination in step S206 is satisfied, the process of generating a second parallax image from a pair of images with a changed shift amount is repeated.
  • The amount of shift can be, for example, one pixel or a plurality of pixels.
  • The shifting direction may be, for example, at least one of upward and downward of the second image with respect to the first image. In the present embodiment, the second image is shifted both upward and downward with respect to the first image to generate pairs of shifted images, and a second parallax image is generated from each of the plurality of pairs of shifted images.
  • In step S207, a distribution of parallax average values is created.
  • The first parallax value of the candidate region in the first parallax image and the second parallax values of the corresponding regions in the plurality of second parallax images are calculated, and an approximate straight line is obtained from the distribution of these parallax values.
  • FIG. 8 is a diagram showing the relationship between the shift amount and the parallax average value.
  • FIG. 8A shows a state in which the parallax average value is substantially constant regardless of the shift amount and the slope of the approximate straight line of the distribution is zero.
  • FIG. 8B shows a state in which the parallax average value changes according to the shift amount and the slope of the approximate straight line of the distribution is equal to or greater than a threshold value.
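The FIG. 8 slope test can be sketched with an ordinary least-squares line fit; the threshold and the parallax averages below are invented illustration values, not from the patent:

```python
def fit_slope(shift_amounts, parallax_averages):
    """Least-squares slope of the parallax-average distribution over the
    vertical shift amounts (the approximate straight line of step S207)."""
    n = len(shift_amounts)
    mx = sum(shift_amounts) / n
    my = sum(parallax_averages) / n
    num = sum((x - mx) * (y - my)
              for x, y in zip(shift_amounts, parallax_averages))
    den = sum((x - mx) ** 2 for x in shift_amounts)
    return num / den

shifts = [-2, -1, 0, 1, 2]                     # vertical shift in pixels
true_object = [32.0, 32.0, 32.0, 32.0, 32.0]   # FIG. 8A: slope is zero
zebra_zone = [28.0, 30.0, 32.0, 34.0, 36.0]    # FIG. 8B: slope over threshold

THRESHOLD = 0.5  # assumed tuning value
assert abs(fit_slope(shifts, true_object)) <= THRESHOLD
assert abs(fit_slope(shifts, zebra_zone)) > THRESHOLD  # excluded from control
```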
  • In step S208, the slope of the approximate straight line of the distribution obtained in step S207 is compared with a preset threshold value.
  • When the control target candidate is a correctly recognized three-dimensional object, the parallax value in its recognition area does not change with the vertical shift, so the distribution forms an approximate straight line with zero slope.
  • When the slope of the approximate straight line of the distribution is larger than the threshold value, it is determined that the parallax value of the corresponding region has changed due to the vertical shift between the first image and the plurality of second images.
  • When the control target candidate recognized as a three-dimensional object in the first parallax image by the control target candidate recognition unit 16 is a true three-dimensional object such as a preceding vehicle, a bicycle, or a pedestrian, the parallax value does not change between the candidate region and the corresponding regions regardless of the presence or absence of a vertical shift.
  • On the other hand, when the control target candidate recognized as a three-dimensional object in the first parallax image by the control target candidate recognition unit 16 is a non-three-dimensional object such as a channelizing zone (zebra zone) painted on the road surface, the parallax value of the corresponding region changes from the parallax value of the candidate region depending on the presence or absence of a vertical shift.
  • In step S209, a control target candidate for which the slope of the approximate straight line of the distribution is determined in step S208 to be larger than the threshold value is regarded as not requiring control, and a process of excluding it from the control targets is executed. The control target candidates that remain without being excluded are determined to be control targets requiring control, and control target information is output.
  • When the parallax average value is substantially constant, that is, when the parallax calculation value of the candidate region does not change regardless of the relative vertical shift amount of the first image and the second image, the control target candidate is recognized as a control target.
  • When the parallax average values differ from one another, that is, when the parallax calculation value of the corresponding region changes from that of the candidate region according to the relative vertical shift amount of the first image and the second image, it is determined that the control target candidate is not a three-dimensional object but a channelizing zone erroneously detected as a three-dimensional object, and a process of excluding it from the control targets is performed.
  • The example described above creates a distribution of the parallax average value in step S207 and determines in step S208 whether to exclude the control target candidate from the control targets based on the slope of the approximate straight line of the distribution; however, the present invention is not limited to this.
  • Instead of the slope of the approximate straight line of the distribution, the parallax dispersion value, the maximum parallax value, the minimum parallax value, the difference between the maximum and minimum parallax values, the ratio of the maximum and minimum parallax values, the average of the parallax maximum values, or the average of the parallax minimum values can be used.
  • FIG. 9 is a diagram illustrating an example of determining the necessity of a control target based on statistical results.
  • FIG. 9 shows the parallax average values for shift amounts 1 to 6, which are the six types of shifted images, in the cases of correct detection and false detection, together with the average (ave), dispersion (sigma), minimum (min), maximum (max), difference between the maximum and minimum (max-min), ratio of the minimum to the average (min/ave), and ratio of the maximum to the average (max/ave) of these parallax average values. The necessity of the control target can also be judged by comparing these values with threshold values.
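A sketch of the FIG. 9-style statistical judgment; here the max-min spread is compared with an assumed threshold value, though any of the listed statistics could be thresholded in the same way (the input parallax averages and the threshold are invented):

```python
def judge_by_statistics(parallax_averages, max_min_threshold=1.0):
    """Judge control necessity from the parallax averages of the shifted
    images (the six shift amounts of FIG. 9). Returns (is_control_target,
    features). The threshold is an assumed example value."""
    n = len(parallax_averages)
    ave = sum(parallax_averages) / n
    sigma = sum((v - ave) ** 2 for v in parallax_averages) / n
    mn, mx = min(parallax_averages), max(parallax_averages)
    features = {"ave": ave, "sigma": sigma, "min": mn, "max": mx,
                "max-min": mx - mn, "min/ave": mn / ave, "max/ave": mx / ave}
    return features["max-min"] <= max_min_threshold, features

positive, _ = judge_by_statistics([32.0, 32.1, 31.9, 32.0, 32.1, 31.9])
false_pos, _ = judge_by_statistics([28.0, 30.0, 31.0, 33.0, 34.0, 36.0])
assert positive and not false_pos   # flat averages keep the control target
```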
  • the imaging device 100 of the present embodiment acquires a first image and a second image from a pair of captured images, generates a first parallax image using the first image and the second image, and a three-dimensional object from the first parallax image. Is detected and recognized as a control target candidate, and the parallax value in the candidate area where the control target candidate exists is acquired as the first parallax value. Then, from the pair of captured images obtained from the first image and the second image, a pair of shifted images in which the positional relationship in the vertical direction relative to the first image and the second image is shifted is acquired, and the pair of these shifted images is obtained. A second disparity image is generated from the shifted image, a corresponding area corresponding to the candidate area of the first disparity image is set in the second disparity image, and the disparity value in the corresponding area is acquired as the second disparity value.
  • Further, pairs of shifted images with different shift amounts are acquired; for each pair, a second parallax image is generated, a corresponding area is set, and the second parallax value in the corresponding area is acquired, yielding a plurality of second parallax values with different shift amounts.
  • Then, it is determined from the distribution of the first parallax value and the plurality of second parallax values whether the control target candidate is a channelizing zone (zebra zone) erroneously detected as a three-dimensional object, and a candidate judged to be a false detection is excluded from the control target.
  • Specifically, when the plurality of second parallax values remain close to the first parallax value regardless of the shift amount, the detection is judged to be a true detection; when the plurality of second parallax values change according to the shift amount relative to the first parallax value, the detection is judged to be a false detection.
  • As described above, with the imaging device 100 of the present embodiment, it is possible to determine whether control of a control target is necessary even when the optical axes of the left and right cameras are misaligned. A channelizing zone erroneously detected as a three-dimensional object can therefore be excluded from the control target, preventing safety systems such as AEB and ACC from issuing unnecessary warnings or applying unnecessary braking, and thus preventing the occupants from feeling uncomfortable.
  • Note that even a real three-dimensional object characterized by a repeating pattern, such as a fence or railing, recognized as a control target has the property that the parallax value in the corresponding region of the second parallax image obtained in step S205 changes as the relative vertical shift of the pair of images is applied, so such an object could wrongly be judged a false detection.
  • When it is determined in step S203 that a control target candidate exists, it is determined in step S210 whether the own vehicle speed is greater than a threshold value. A fence or railing can be recognized as a control target only when it lies on the traveling path of the own vehicle, in which case it can be inferred that the own vehicle is traveling at low speed. Therefore, only when the own vehicle speed is greater than the preset vehicle speed threshold (YES in step S210) does the process proceed to step S204; when the speed of the vehicle on which the imaging device 100 is mounted is equal to or less than the vehicle speed threshold (NO in step S210), no second parallax image is generated.
  • With this configuration, when a channelizing zone is erroneously detected as a three-dimensional object due to vertical misalignment of the optical axes of the left and right cameras, it can be determined that the channelizing zone is not a three-dimensional object, and the safety system can be prevented from operating unnecessarily.
  • The present invention is not limited to the above-described embodiments, and various design changes can be made without departing from the spirit of the present invention described in the claims.
  • The above-described embodiments have been described in detail in order to explain the present invention in an easy-to-understand manner, and the present invention is not necessarily limited to embodiments including all of the described configurations.
  • 100 Imaging device, 11 First imaging unit, 12 Second imaging unit, 13 First image acquisition unit, 14 Second image acquisition unit, 15 First parallax image generation unit, 16 Control target candidate recognition unit, 17 First parallax value acquisition unit, 18 Second parallax image generation unit, 19 Second parallax value acquisition unit, 20 Control target determination unit

Abstract

This processing device can determine the necessity of controlling a control target even when the optical axis of a stereo camera shifts. This processing device, for processing a pair of captured images captured with a pair of cameras, performs processing for generating a first parallax image from the pair of captured images, recognizing a control target candidate from the first parallax image, acquiring a first parallax value inside of a candidate region where the control target candidate is present in the first parallax image, shifting the relative vertical position of the pair of captured images to generate a second parallax image, acquiring a second parallax value inside of the corresponding region of the second parallax image that corresponds to the candidate region of the first parallax image, and using the first parallax value and the second parallax value to determine whether or not to identify the control target candidate as the control target.

Description

Processing device
The present invention relates to a processing device that processes captured images captured by, for example, a stereo camera mounted on a vehicle.
Conventionally, a technique is known for estimating which part of a road marking is imaged from a plurality of images captured as the own vehicle moves (Patent Document 1).
JP-A-2010-107435
When a stereo camera images a road surface painted with repeated diagonal lines extending in the depth direction, such as a channelizing zone (zebra zone), while a relative vertical misalignment exists between the optical axis of the left camera and the optical axis of the right camera, the area containing the road surface paint may be erroneously detected as a three-dimensional object. If the painted area is erroneously detected as a three-dimensional object, a safety system provided by the stereo camera, such as AEB (autonomous emergency braking) or ACC (adaptive cruise control), may be activated, issuing unnecessary warnings or applying unnecessary braking and giving the occupants a sense of discomfort.
The conventional method described above corrects the vertical misalignment between the left and right cameras, and the correction process takes time. If the vehicle travels on a road surface with a channelizing zone before the correction of the vertical misalignment is completed, the channelizing zone may be erroneously detected as a three-dimensional object. Therefore, during the vertical misalignment correction process, a mechanism such as suspending the system provided by the stereo camera is required.
However, if the system provided by the stereo camera is completely stopped, the safety system may fail to operate in situations where it is actually needed, which may degrade safety performance.
An object of the present invention is to provide a processing device that differentiates three-dimensional objects from non-three-dimensional objects, regardless of whether a vertical optical axis misalignment exists between the left and right cameras of a stereo camera, and determines whether an object recognized by the stereo camera as a three-dimensional object should be treated as a control target.
The processing device of the present invention that solves the above problems is a processing device that processes a pair of captured images captured by a pair of cameras, and includes: a first parallax image generation unit that generates a first parallax image from the pair of captured images; a control target candidate recognition unit that recognizes a control target candidate from the first parallax image; a first parallax value acquisition unit that acquires a first parallax value within a candidate region of the first parallax image in which the control target candidate exists; a second parallax image generation unit that generates a second parallax image by shifting the relative vertical positions of the pair of captured images; a second parallax value acquisition unit that acquires a second parallax value within a corresponding region of the second parallax image that corresponds to the candidate region of the first parallax image; and a control target determination unit that determines, using the first parallax value and the second parallax value, whether to certify the control target candidate as a control target.
According to the present invention, it is possible to determine whether an object recognized by a stereo camera as a three-dimensional object should be treated as a control target. Further features relating to the present invention will become apparent from the description herein and the accompanying drawings. Problems, configurations, and effects other than those described above will be clarified by the following description of the embodiments.
FIG. 1 is a block diagram showing the overall configuration of the imaging device according to the first embodiment. FIG. 2 is a flowchart explaining the control target determination process of the imaging device according to the first embodiment. FIG. 3 is a diagram schematically showing left and right captured images and a parallax image. FIG. 4 is a diagram showing an example of a recognition region of a control target. FIG. 5 is a diagram showing the difference between distant parallax and nearby parallax in the left and right captured images. FIG. 6 is a diagram explaining why the parallax error differs depending on the inclination angle of a line captured in an image. FIG. 7 is a diagram explaining the parallax when a channelizing zone is imaged by a stereo camera whose left and right optical axes are relatively misaligned in the vertical direction. FIG. 8 is a diagram showing the relationship between the shift amount and the parallax average value. FIG. 9 is a diagram explaining an example of determining the necessity of control of a control target based on statistical results. FIG. 10 is a flowchart explaining the control target determination process of the imaging device according to the second embodiment.
First, the reason why a stereo camera erroneously detects a channelizing zone as a three-dimensional object will be described.
FIG. 3 is a diagram schematically showing left and right captured images and a parallax image. FIG. 3(a) is a schematic view of an image of the area ahead of the vehicle captured by the left camera, and FIG. 3(b) is a schematic view of an image of the same area captured by the right camera at the same time. FIG. 3(c) is a parallax image created from the captured images of FIGS. 3(a) and 3(b), showing a region 321 near the own vehicle, a region 323 far from the own vehicle, and an intermediate region 322.
As shown in FIGS. 3(a) and 3(b), the images captured by the left and right cameras show a preceding vehicle 311 traveling ahead of the own vehicle. The own vehicle is traveling in a traveling lane 312, and channelizing zones 313 are provided on both sides of the traveling lane 312.
The stereo camera detects a three-dimensional object from a parallax image such as that shown in FIG. 3(c) and recognizes it as a control target candidate. In the absence of a three-dimensional object, parallax gradually decreases from near to far, and thus gradually decreases from bottom to top in the parallax image. In the presence of a three-dimensional object, on the other hand, the parallax is the same at near and far points, and thus takes the same value along the vertical direction in the parallax image. Therefore, a three-dimensional object is judged to exist in a region of the parallax image where the parallax is the same along the vertical direction. In the parallax image shown in FIG. 3(c), the preceding vehicle 311, which actually is a three-dimensional object, is detected as a three-dimensional object 331, but part of the channelizing zone 313, which is not actually a three-dimensional object, is also detected as a three-dimensional object 332.
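The detection rule just described can be sketched as a column-wise scan of the parallax image. The following is a minimal illustration only: the per-column scan, the 0.5-pixel range threshold, and the function name are assumptions for the example, not details taken from the embodiment.

```python
import numpy as np

def solid_object_columns(parallax_image, max_column_range=0.5):
    """Flag columns whose parallax stays (nearly) constant along the
    vertical direction, which the text associates with a three-dimensional
    object. Road-surface columns instead decrease gradually toward the top
    of the image. NaN marks pixels with no valid parallax."""
    flags = []
    for col in parallax_image.T:            # iterate over image columns
        valid = col[~np.isnan(col)]
        if valid.size < 2:
            flags.append(False)
            continue
        flags.append(float(valid.max() - valid.min()) <= max_column_range)
    return flags
```

For example, a column that reads 8, 6, 4, 2 from top to bottom behaves like road surface, while a column that reads 5, 5, 5, 5 behaves like a three-dimensional object.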
FIG. 4 is a diagram showing an example of a recognition region of a control target candidate.
In the example shown in FIG. 4, the candidate region recognized as a control target candidate is shown as a rectangular frame 402. The position of the rectangular frame 402 indicating the candidate region is defined by the x-y coordinates of the parallax image 401.
FIG. 5 is a diagram showing the difference between distant parallax and nearby parallax in the left and right captured images.
Two preceding vehicles 511 and 521 appear in both the image 501 captured by the left camera and the image 502 captured by the right camera. Owing to the characteristics of a stereo camera, as shown in FIG. 5, the parallax of the distant preceding vehicle 521 is small, and the parallax of the nearby preceding vehicle 511 is large.
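This near/far behavior follows from the ideal stereo relation d = f · B / Z, where f is the focal length in pixels, B the baseline, and Z the distance. A minimal illustration; the focal length, baseline, and distances below are assumed example values, not values from this specification:

```python
def parallax_px(distance_m, focal_px=1400.0, baseline_m=0.35):
    """Ideal stereo relation: parallax d = f * B / Z, in pixels.
    focal_px and baseline_m are assumed example values."""
    return focal_px * baseline_m / distance_m

# A nearby vehicle (e.g. 10 m) yields a much larger parallax than a
# distant one (e.g. 50 m), matching FIG. 5.
```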
FIG. 6 is a diagram explaining why the parallax error differs depending on the inclination angle of a line captured in an image. FIG. 6(a) shows an image of a line extending straight along the traveling direction of the own vehicle, captured as a vertical line 611 extending straight along the vertical direction of the image. FIGS. 6(b) and 6(c) show images of lines extending diagonally with respect to the traveling direction of the own vehicle: in the image of FIG. 6(b), a diagonal line 612 with a large inclination with respect to the horizontal direction of the image is captured, and in the image of FIG. 6(c), a diagonal line 613 with a small inclination with respect to the horizontal direction is captured.
For example, when the lines shown in FIGS. 6(a) to 6(c) are imaged with the optical axis of the left camera L shifted upward relative to that of the right camera R, the vertical line 611 of the image 601 shown in FIG. 6(a) appears at the same horizontal positions P1 and P2 when captured by the left and right cameras, so the parallax error is 0. In contrast, for the diagonal line 612 of the image 602 shown in FIG. 6(b), a horizontal error δ1 occurs between the position P3 captured by the left camera and the position P4 captured by the right camera. For the diagonal line 613 of the image 603 shown in FIG. 6(c), a horizontal error δ2 occurs between the position P5 captured by the left camera L and the position P6 captured by the right camera R.
Comparing the errors δ1 and δ2, the diagonal line 613 of the image 603 shown in FIG. 6(c) is inclined more gently with respect to the horizontal direction than the diagonal line 612 of the image 602 shown in FIG. 6(b), so the error δ2 is larger than the error δ1 (δ1 &lt; δ2). In other words, when the optical axes of the left camera L and the right camera R are vertically misaligned, the error becomes larger the closer the captured line is to horizontal.
Note that when the optical axis of the left camera L and the optical axis of the right camera R are completely parallel and not relatively misaligned in the vertical direction, the horizontal position of a line is the same in the left and right images, and no parallax error occurs. The parallax errors shown in FIGS. 6(b) and 6(c) occur only when the optical axes of the left and right cameras are relatively misaligned in the vertical direction.
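The dependence of the error on the line's slant can be sketched with simple geometry: when matching along a straight line inclined at angle θ from the image's horizontal axis, a vertical shift Δv between the cameras produces a horizontal matching error of Δv / tan θ. The helper below and the angles in its example are illustrative assumptions:

```python
import math

def horizontal_error(vertical_shift_px, line_angle_deg):
    """Horizontal matching error caused by a vertical optical-axis shift
    when matching along a straight painted line.

    line_angle_deg is measured from the image's horizontal axis: a
    vertical line (90 degrees) gives no error, and the error grows as the
    line approaches horizontal, matching FIGS. 6(a) to 6(c).
    """
    return vertical_shift_px / math.tan(math.radians(line_angle_deg))
```

For a 2-pixel vertical shift, a line at 60 degrees gives a smaller error (δ1) than a line at 30 degrees (δ2), consistent with δ1 &lt; δ2 in the text.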
FIG. 7 is a diagram explaining the parallax when a channelizing zone is imaged by a stereo camera whose left and right optical axes are relatively misaligned in the vertical direction. When the channelizing zone 702 is imaged by the stereo camera 701, the white lines of the channelizing zone 702 appear steeply inclined nearby and gradually closer to horizontal with distance. Therefore, when the optical axes of the left and right cameras are relatively misaligned in the vertical direction, the horizontal parallax errors 721 to 723 of the white lines gradually increase from near to far, as explained with reference to FIG. 6. Meanwhile, the correct parallaxes 711 to 713 of the white lines gradually decrease from near to far, as explained with reference to FIG. 5, regardless of the optical axis misalignment.
Therefore, when the optical axes of the left and right cameras are relatively misaligned in the vertical direction, the measured parallax nearby is the sum of a relatively large correct parallax 711 and a relatively small parallax error 721; the measured parallax far away is the sum of a relatively small correct parallax 713 and a relatively large parallax error 723; and the measured parallax at the intermediate distance is the sum of an intermediate correct parallax 712 and an intermediate parallax error 722. As a result, the measured parallaxes 703 at near, intermediate, and far distances become equal to one another, producing a region of the parallax image in which the parallax is the same along the vertical direction, which may be erroneously detected as a three-dimensional object.
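A small numeric illustration of this cancellation; all parallax values below are invented for the example, not measurements. The correct parallax falls with distance while the slant-induced error grows, so the measured sum can stay constant along the line:

```python
# correct parallax decreases with distance while the slant-induced error
# increases, so their sum is nearly constant along the white line
# (all numbers are assumed example values)
true_parallax = {"near": 40.0, "mid": 30.0, "far": 20.0}
parallax_error = {"near": 2.0, "mid": 12.0, "far": 22.0}
measured = {k: true_parallax[k] + parallax_error[k] for k in true_parallax}
# measured parallax is the same at every distance, so the region looks
# like a three-dimensional object to the vertical-constancy test
```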
To address this problem, the imaging device of the present embodiment differentiates three-dimensional objects from non-three-dimensional objects, regardless of whether a vertical optical axis misalignment exists between the left and right cameras of the stereo camera, and performs processing to determine whether an object recognized by the stereo camera as a three-dimensional object should be treated as a control target.
Hereinafter, embodiments of the present invention will be described with reference to the drawings.
The imaging device of the present embodiment calculates the parallax average value when the pair of captured images captured by the pair of imaging units is not shifted in the vertical direction and the parallax average values when the pair is relatively shifted in the vertical direction, and determines whether control of the control target is necessary from the distribution shape of the shift amount versus the parallax average value.
<First Embodiment>
FIG. 1 is a block diagram showing the overall configuration of the imaging device according to the first embodiment.
The imaging device 100 is a stereo camera mounted on a vehicle such as an automobile; it images the area ahead of the vehicle with a pair of left and right cameras and, based on parallax, determines the presence or absence of a three-dimensional object or identifies the type of the three-dimensional object.
The imaging device 100 includes a first imaging unit 11 and a second imaging unit 12, which are a pair of cameras, and a processing device 1 that processes the two captured images captured by the first imaging unit 11 and the second imaging unit 12. In the present embodiment, the case where the processing device 1 is configured inside the imaging device 100, which is a stereo camera, is described as an example; however, the location of the processing device 1 is not limited to the inside of the imaging device 100, and it may instead be configured in an ECU or the like provided separately from the imaging device 100.
The first imaging unit 11 and the second imaging unit 12 are arranged in the vehicle interior at positions separated from each other in the vehicle width direction, and image an overlapping region ahead of the vehicle through the windshield. Each of the first imaging unit 11 and the second imaging unit 12 is an assembly combining optical components such as lenses with an imaging element such as a CCD or CMOS sensor. The first imaging unit 11 and the second imaging unit 12 are adjusted so that their optical axes are parallel to each other and at the same height. The images captured by the first imaging unit 11 and the second imaging unit 12 are input to the processing device 1.
The processing device 1 is composed of hardware including, for example, a CPU and memory, and software installed and executed on that hardware. As internal functions, the processing device 1 includes a first image acquisition unit 13, a second image acquisition unit 14, a first parallax image generation unit 15, a control target candidate recognition unit 16, a first parallax value acquisition unit 17, a second parallax image generation unit 18, a second parallax value acquisition unit 19, and a control target determination unit 20.
The first image acquisition unit 13 and the second image acquisition unit 14 acquire the images captured simultaneously and periodically by the first imaging unit 11 and the second imaging unit 12. From the pair of images captured simultaneously by the first imaging unit 11 and the second imaging unit 12, the first image acquisition unit 13 and the second image acquisition unit 14 each cut out an image of a predetermined mutually overlapping region and acquire it as the first image and the second image, respectively.
The first parallax image generation unit 15 generates a first parallax image using the pair of captured images acquired by the first image acquisition unit 13 and the second image acquisition unit 14. A conventionally known method can be used to generate the first parallax image. For example, with the first image as a reference, a pixel row of the second image at the same vertical height as in the first image is scanned in the horizontal direction to find the matching point with the first image, and the horizontal shift between the first image and the second image is calculated as the parallax; this is so-called stereo matching.
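A minimal sketch of the stereo matching just described, using a sum-of-absolute-differences cost over a small horizontal window. The window size, search range, and function name are assumptions for illustration; practical implementations add subpixel interpolation and validity checks:

```python
import numpy as np

def row_disparity(left_row, right_row, x, window=3, max_disp=16):
    """Find the parallax of one pixel of the reference (first) image by
    scanning the same-height row of the second image horizontally and
    returning the horizontal shift with the best (lowest SAD) match."""
    half = window // 2
    ref = left_row[x - half:x + half + 1].astype(float)
    best_d, best_cost = 0, float("inf")
    for d in range(0, max_disp + 1):
        if x - d - half < 0:                # candidate window off the image
            break
        cand = right_row[x - d - half:x - d + half + 1].astype(float)
        cost = np.abs(ref - cand).sum()     # sum of absolute differences
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```

Applying this at every pixel of every row yields a parallax image such as FIG. 3(c).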
The control target candidate recognition unit 16 detects a three-dimensional object from the first parallax image generated by the first parallax image generation unit 15 and recognizes the three-dimensional object as a control target candidate. For example, it judges that a three-dimensional object exists in a region where the amount of change in parallax along the vertical direction of the image is small compared to the amount of change that would be observed for a continuing flat surface with no three-dimensional object. When a plurality of three-dimensional objects are detected in the first parallax image, the control target candidate recognition unit 16 recognizes each of them as a control target candidate. The control target candidate recognition unit 16 acquires the coordinate information of each candidate region in which a control target candidate is recognized in the first parallax image (for example, the regions 331 and 332 in FIG. 3 and the region 402 in FIG. 4).
The first parallax value acquisition unit 17 acquires the first parallax value, which is a parallax statistic computed within the candidate region of the first parallax image in which the control target candidate exists. The first parallax value acquisition unit 17 executes the process of acquiring the first parallax value only when a control target candidate has been recognized by the control target candidate recognition unit 16; when no control target candidate has been recognized, it does not perform the process. As the first parallax value, for example, the parallax average value within the candidate region can be used; it is calculated from the parallax values and the parallax count in the candidate region. The parallax statistic is not limited to the average: the parallax variance, the maximum parallax value, the minimum parallax value, the difference between the maximum and minimum parallax values, the ratio of the maximum to the minimum parallax value, the average of the maximum parallax values, or the average of the minimum parallax values can also be used.
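The region statistics listed above can be sketched as follows. The function name, the (x0, y0, x1, y1) box format, and the use of NaN for invalid pixels are assumptions for illustration, not details from the embodiment:

```python
import numpy as np

def region_parallax_statistic(parallax_image, box, kind="average"):
    """Parallax statistic inside a candidate region.

    box is (x0, y0, x1, y1) in parallax-image coordinates; pixels with no
    valid parallax are NaN and are excluded from the parallax count."""
    x0, y0, x1, y1 = box
    valid = parallax_image[y0:y1, x0:x1]
    valid = valid[~np.isnan(valid)]
    if valid.size == 0:
        return None
    if kind == "average":                 # parallax sum / parallax count
        return float(valid.sum() / valid.size)
    if kind == "variance":
        return float(valid.var())
    if kind == "max-min":
        return float(valid.max() - valid.min())
    if kind == "max/min":
        return float(valid.max() / valid.min())
    raise ValueError(kind)
```

The same helper can compute the second parallax value over a corresponding region, so that both values are the same kind of statistic and directly comparable.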
The second parallax image generation unit 18 generates a plurality of second parallax images using the same pair of captured images used to generate the first parallax image. The second parallax image generation unit 18 generates each second parallax image using a pair of shifted images cut out by shifting the relative vertical positions of the first image and the second image, either in steps of one vertical coordinate or in equally divided fractional steps. The second parallax image generation unit 18 cuts out the pair of shifted images by shifting the position of at least one of the first image and the second image upward only, downward only, or in both the upward and downward directions.
The second parallax image generation unit 18 generates a plurality of pairs of shifted images by changing the vertical shift amount, and generates a plurality of second parallax images from them. In other words, the second parallax image generation unit 18 shifts the position of at least one of the first image and the second image multiple times to generate a plurality of pairs of shifted images, and generates a second parallax image from each pair of shifted images.
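Generating pairs of shifted images by cropping can be sketched as follows. The shift amounts and the choice to move only the second image are illustrative assumptions; shifting the first image, or shifting upward, works the same way:

```python
import numpy as np

def shifted_pairs(first, second, shifts=(1, 2, 3)):
    """Cut out pairs of images whose relative vertical position is
    shifted. For each shift amount s (in rows), the second image is
    cropped s rows lower than the first, so the pair simulates a vertical
    optical-axis misalignment of s rows."""
    pairs = []
    for s in shifts:
        h = first.shape[0] - s              # common height after cropping
        pairs.append((first[:h, :], second[s:s + h, :]))
    return pairs
```

Each returned pair would then be fed to the same stereo matching used for the first parallax image to obtain one second parallax image per shift amount.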
The second parallax value acquisition unit 19 acquires the second parallax value, which is the parallax statistic within the corresponding region of a second parallax image. The corresponding region of the second parallax image is set at the same position as the candidate region of the first parallax image. Since a plurality of second parallax images are generated, a corresponding region is set at the same position as the candidate region of the first parallax image in each second parallax image, and a second parallax value is obtained from each corresponding region. When there are a plurality of candidate regions in the first parallax image, a corresponding region of the second parallax image is set for each of them, and the second parallax value of each corresponding region is obtained.
 Like the first parallax value, the second parallax value can be, for example, the average parallax value within the corresponding region, calculated from the parallax values and the number of parallax points in that region.
Alternatively, so that it can be compared with the first parallax value as the same kind of calculated value, the variance of the parallax, the maximum parallax value, the minimum parallax value, the difference between the maximum and minimum values, the ratio of the maximum to the minimum value, the ratio of the maximum value to the average, or the ratio of the minimum value to the average can be used. The second parallax value is calculated for each of the plurality of second parallax images.
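The per-region statistics listed above can be sketched in one helper. This is a minimal illustration under assumed conventions: disparities are given as a flat list of valid values, and the dictionary keys follow the labels later used in Fig. 9 (ave, sigma, min, max, max-min, min/ave, max/ave).

```python
def region_parallax_stats(disparities):
    """Parallax statistics for one region (candidate or corresponding);
    any of these can serve as the 'calculated value' that is compared
    across shift amounts."""
    n = len(disparities)
    ave = sum(disparities) / n                             # average parallax
    sigma = sum((d - ave) ** 2 for d in disparities) / n   # variance
    lo, hi = min(disparities), max(disparities)
    return {"ave": ave, "sigma": sigma, "min": lo, "max": hi,
            "max-min": hi - lo, "min/ave": lo / ave, "max/ave": hi / ave}

stats = region_parallax_stats([4.0, 5.0, 6.0])
```

Computing the same statistic for the candidate region and for every corresponding region keeps the comparison like-for-like, as the text requires.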
 The control target determination unit 20 uses the first parallax image generated by the first parallax image generation unit 15 and the plurality of second parallax images generated by the second parallax image generation unit 18 to determine whether control is required, that is, whether a control target candidate detected as a three-dimensional object is to be recognized as a control target. The determination is made on the basis of the parallax calculation values computed for the first parallax image and for each of the second parallax images.
 FIG. 2 is a flowchart illustrating the control target determination process of the imaging device according to the first embodiment.
 In step S201, images are acquired: a pair of images captured simultaneously by the first imaging unit 11 and the second imaging unit 12 is acquired by the first image acquisition unit 13 and the second image acquisition unit 14. In step S202, the first parallax image is generated by the first parallax image generation unit 15 from the acquired pair of captured images.
 In step S203, it is determined whether a control target candidate exists, according to whether a three-dimensional object is detected in the first parallax image. If at least one three-dimensional object is detected, it is determined that a control target candidate exists ahead of the host vehicle (YES in step S203), and the process proceeds to step S204.
If no three-dimensional object is detected, it is determined that no control target candidate exists ahead of the host vehicle (NO in step S203), and the process returns to step S201. Step S203 thus distinguishes three-dimensional from non-three-dimensional objects, and only objects recognized as three-dimensional are passed to step S204 and onward, where it is decided whether they become control targets.
 In step S204, the images are shifted vertically: from the pair of captured images used to generate the first parallax image, a pair of shifted images is obtained by displacing the first image and the second image relative to each other, either per vertical coordinate or per equally divided section.
 In step S205, parallax is calculated by stereo matching using the pair of shifted images, and a second parallax image is generated. In step S206, it is determined whether a plurality of parallax images exist. If the number of second parallax images is smaller than a preset prescribed number of two or more, it is determined that a plurality of parallax images do not yet exist (NO in step S206), and the process returns to step S204. Steps S204 and S205 are then repeated with a different vertical shift amount, producing another pair of shifted images and a further second parallax image. This generation of second parallax images from pairs with changed shift amounts continues until it is determined in step S206 that the prescribed number of second parallax images exists (YES in step S206).
 The shift amount can be, for example, one pixel or several pixels. The shift direction only needs to be at least one of upward and downward of the second image relative to the first image; in the present embodiment, the second image is shifted both upward and downward relative to the first image, a pair of shifted images is generated for each shift, and a second parallax image is generated from each of the resulting pairs.
 In step S207, a distribution of average parallax values is created: the first parallax value of the candidate region in the first parallax image and the second parallax values of the corresponding regions in the plurality of second parallax images are calculated, and an approximate straight line is fitted to the distribution of these values.
 FIG. 8 shows the relationship between the shift amount and the average parallax value. FIG. 8(a) shows a state in which the average parallax value is substantially constant regardless of the shift amount, so that the slope of the line fitted to the distribution is zero; FIG. 8(b) shows a state in which the average parallax value changes with the shift amount, so that the slope of the fitted line is equal to or greater than a threshold.
 In step S208, the slope of the line fitted in step S207 is compared with a preset threshold. If the control target candidate is a correctly recognized three-dimensional object, the parallax values in its recognition region do not change with the vertical shift, so the distribution approximates a line of zero slope. Conversely, when the slope of the fitted line is larger than the threshold, it is determined that the vertical displacement between the first image and the plurality of second images changed the parallax values of the corresponding regions.
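The slope test of step S208 can be illustrated with an ordinary least-squares fit over (shift amount, average parallax) points. The function names, sample values, and the threshold default are assumptions for illustration; the patent only states that the threshold is preset.

```python
def fitted_slope(shifts, averages):
    """Slope of the least-squares line through (shift, average parallax)."""
    n = len(shifts)
    sx, sy = sum(shifts), sum(averages)
    sxx = sum(x * x for x in shifts)
    sxy = sum(x * y for x, y in zip(shifts, averages))
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

def keep_as_control_target(shifts, averages, slope_threshold=0.1):
    """A true three-dimensional object keeps a near-constant average
    parallax under vertical shifting, so the fitted slope stays small."""
    return abs(fitted_slope(shifts, averages)) <= slope_threshold

shifts = [-2, -1, 0, 1, 2]
solid = [8.0, 8.0, 8.0, 8.0, 8.0]   # parallax unaffected by the shift
zebra = [6.0, 7.0, 8.0, 9.0, 10.0]  # parallax drifts with the shift
```

With these sample values the solid object would be kept (slope 0) and the drifting candidate would be excluded (slope 1 per pixel of shift).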
 If the control target candidate recognized as a three-dimensional object in the first parallax image by the control target candidate recognition unit 16 is a true three-dimensional object such as a preceding vehicle, a bicycle, or a pedestrian, the parallax value does not change between the candidate region and the corresponding regions, regardless of whether vertical misalignment has occurred. If, however, the candidate is a non-three-dimensional object such as a channelizing zone (zebra zone) painted on the road surface, the parallax values of the candidate region and the corresponding regions differ depending on whether vertical misalignment is present.
 In step S209, any control target candidate for which the slope of the fitted line was determined in step S208 to exceed the threshold is treated as an object not requiring control and is excluded from the control targets. The candidates that remain without being excluded are judged to be control targets requiring control, and information on these control targets is output.
 For example, as shown in FIG. 8(a), when the average parallax value is substantially constant, that is, when the parallax calculation value of the candidate region and those of the plurality of corresponding regions are almost the same regardless of the relative vertical shift amount between the first and second images, the control target candidate is recognized as a control target.
Conversely, as shown in FIG. 8(b), when the average parallax values differ from one another, that is, when the average parallax value of the corresponding regions increases or decreases relative to that of the candidate region according to the relative vertical shift amount, it is determined that the candidate is not a three-dimensional object but a channelizing zone erroneously detected as one, and it is excluded from the control targets.
 Although an example has been described in which a distribution of average parallax values is created in step S207 and the decision of step S208 is based on the slope of the fitted line, the invention is not limited to this. As another modification, instead of the slope of the fitted line, the variance of the parallax, the maximum parallax value, the minimum parallax value, the difference between the maximum and minimum values, the ratio of the maximum to the minimum value, the ratio of the maximum value to the average, or the ratio of the minimum value to the average can be used.
 FIG. 9 illustrates an example in which the necessity of control is judged from statistical results.
 In the example of FIG. 9, for both a correct detection and a false detection, the average parallax values of six shifted images (shifts 1 to 6) are shown together with their mean (ave), variance (sigma), minimum (min), maximum (max), difference between the maximum and minimum (max-min), ratio of the minimum to the mean (min/ave), and ratio of the maximum to the mean (max/ave); these values can also be compared with thresholds to judge whether control is required.
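As a sketch of this statistics-based variant (the variance case also appears in claim 6), the per-shift average parallax values can be aggregated and their spread compared with a threshold. The sample values and the threshold are illustrative assumptions, not figures from the patent.

```python
def false_detection_by_variance(shift_averages, sigma_threshold):
    """Flag a candidate as a false detection when the variance of the
    average parallax over the shifted images exceeds the threshold."""
    ave = sum(shift_averages) / len(shift_averages)
    sigma = sum((m - ave) ** 2 for m in shift_averages) / len(shift_averages)
    return sigma > sigma_threshold

# Six shifted images ("shift 1" to "shift 6") as in Fig. 9:
correct_detection = [8.0, 8.0, 8.1, 7.9, 8.0, 8.0]   # nearly flat
false_detection = [6.0, 7.0, 8.0, 9.0, 10.0, 11.0]   # drifts with shift
```

The other Fig. 9 statistics (max-min, min/ave, max/ave) would be thresholded in the same way, each replacing `sigma` in the comparison.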
 The imaging device 100 of the present embodiment acquires a first image and a second image from a pair of captured images, generates a first parallax image from them, detects a three-dimensional object in the first parallax image and recognizes it as a control target candidate, and acquires the parallax value within the candidate region where the candidate exists as the first parallax value. From the same pair of captured images, it then obtains a pair of shifted images in which the relative vertical positions of the first and second images are displaced, generates a second parallax image from this pair, sets in the second parallax image a corresponding region matching the candidate region of the first parallax image, and acquires the parallax value within the corresponding region as the second parallax value.
 It further obtains pairs of shifted images with different shift amounts from the same pair of captured images and repeats the generation of a second parallax image, the setting of a corresponding region, and the acquisition of the second parallax value, thereby acquiring a plurality of second parallax values with different shift amounts. From the distribution of the first parallax value and the plurality of second parallax values, it judges whether the control target candidate is a channelizing zone erroneously detected as a three-dimensional object, and if it is judged to be a false detection, the candidate is excluded from the control targets. For example, if the first parallax value and the plurality of second parallax values are almost the same, the detection is judged to be correct; if the second parallax values change with the shift amount relative to the first parallax value, the detection is judged to be false.
 According to the imaging device 100 of the present embodiment, whether control of a target is necessary can be judged even while an optical axis misalignment exists between the left and right cameras. A channelizing zone erroneously detected as a three-dimensional object can therefore be excluded from the control targets, preventing safety systems such as AEB and ACC from issuing unnecessary warnings or applying unnecessary braking, and thus preventing discomfort to the driver.
<Second Embodiment>
 A characteristic feature of this embodiment is that three-dimensional objects with repeating patterns, such as fences and railings, are still recognized as control targets. Even for such objects, the parallax value in the corresponding region of the second parallax image obtained in step S205 changes as vertical misalignment occurs between the pair of cameras.
 However, fences and railings are targets that must be controlled by the safety system, so processing is needed to avoid excluding them from the control targets. The flow of this processing is described below with reference to the flowchart of FIG. 10; except for step S210, it is the same as the flowchart of FIG. 2.
 When it is determined in step S203 that a control target candidate exists, step S210 determines whether the host vehicle speed exceeds a threshold. A fence or railing is recognized as a control target when it lies on the travel path of the host vehicle, from which it can be inferred that the vehicle is traveling at low speed. Therefore, the process proceeds to step S204 only when the host vehicle speed exceeds a preset speed threshold (YES in step S210); when the speed of the vehicle carrying the imaging device 100 is at or below the threshold (NO in step S210), no second parallax image is generated.
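The speed gate of step S210 amounts to a single comparison; a minimal sketch follows, where the threshold value is an arbitrary placeholder since the patent only states that it is preset.

```python
def second_parallax_check_enabled(vehicle_speed_kmh, threshold_kmh=15.0):
    """Run the shifted-image re-check only above the speed threshold, so
    that fences and railings encountered at low speed skip the re-check
    and therefore remain recognized as control targets."""
    return vehicle_speed_kmh > threshold_kmh
```

At or below the threshold the function returns False and the flow stays with the first-parallax-image result, matching claim 7.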
 According to the present invention, when a channelizing zone is erroneously detected as a three-dimensional object because of a vertical optical axis misalignment between the left and right cameras, it can be determined that the object is not three-dimensional, so that the safety system is prevented from operating.
 Although embodiments of the present invention have been described in detail above, the invention is not limited to these embodiments, and various design changes can be made without departing from the spirit of the invention as set forth in the claims. For example, the embodiments have been described in detail for ease of understanding and are not necessarily limited to configurations including all of the described elements. Part of the configuration of one embodiment can be replaced with that of another embodiment, the configuration of another embodiment can be added to that of one embodiment, and parts of each embodiment's configuration can have other configurations added, deleted, or substituted.
100 Imaging device
11 First imaging unit
12 Second imaging unit
13 First image acquisition unit
14 Second image acquisition unit
15 First parallax image generation unit
16 Control target candidate recognition unit
17 First parallax value acquisition unit
18 Second parallax image generation unit
19 Second parallax value acquisition unit
20 Control target determination unit

Claims (7)

  1.  A processing device that processes a pair of captured images captured by a pair of cameras, comprising:
     a first parallax image generation unit that generates a first parallax image from the pair of captured images;
     a control target candidate recognition unit that recognizes a control target candidate from the first parallax image;
     a first parallax value acquisition unit that acquires a first parallax value within a candidate region of the first parallax image in which the control target candidate exists;
     a second parallax image generation unit that generates a second parallax image by shifting the relative vertical positions of the pair of captured images;
     a second parallax value acquisition unit that acquires a second parallax value within a corresponding region of the second parallax image that corresponds to the candidate region of the first parallax image; and
     a control target determination unit that determines, using the first parallax value and the second parallax value, whether the control target candidate is to be recognized as a control target.
  2.  The processing device according to claim 1, wherein the first parallax image generation unit cuts out a first image and a second image from the pair of captured images and generates the first parallax image using the first image and the second image, and
     the second parallax image generation unit generates the second parallax image using a pair of shifted images cut out from the pair of captured images by displacing the relative vertical positions of the first image and the second image per vertical coordinate or per equally divided section.
  3.  The processing device according to claim 2, wherein the second parallax image generation unit cuts out the pair of shifted images by displacing the position of at least one of the first image and the second image upward only, downward only, or in both the upward and downward directions.
  4.  The processing device according to claim 2, wherein the second parallax image generation unit shifts the position of at least one of the first image and the second image a plurality of times to generate a plurality of pairs of shifted images, and generates the second parallax image from each of the plurality of pairs of shifted images.
  5.  The processing device according to claim 2, wherein the control target determination unit obtains an average parallax value within the candidate region from the parallax values and the number of parallax points within the candidate region of the first parallax image, obtains an average parallax value within the corresponding region from the parallax values and the number of parallax points within the corresponding region of the second parallax image, and excludes the control target candidate recognized by the control target candidate recognition unit from the control targets when the slope of the distribution of the average parallax value within the candidate region of the first parallax image and the average parallax value within the corresponding region of the second parallax image is larger than a threshold.
  6.  The processing device according to claim 2, wherein the control target determination unit obtains an average parallax value within the candidate region from the parallax values and the number of parallax points within the candidate region of the first parallax image, obtains an average parallax value within the corresponding region from the parallax values and the number of parallax points within the corresponding region of the second parallax image, and excludes the control target candidate recognized by the control target candidate recognition unit from the control targets when the variance of the distribution of the average parallax values is larger than a threshold.
  7.  The processing device according to claim 2, wherein the second parallax image generation unit does not generate a second parallax image when the traveling speed of a vehicle on which the processing device is mounted is at or below a threshold.
PCT/JP2020/048694 2020-03-06 2020-12-25 Processing device WO2021176819A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2022504989A JP7250211B2 (en) 2020-03-06 2020-12-25 processing equipment
DE112020005059.9T DE112020005059T5 (en) 2020-03-06 2020-12-25 PROCESSING DEVICE

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020-038552 2020-03-06
JP2020038552 2020-03-06

Publications (1)

Publication Number Publication Date
WO2021176819A1 (en) 2021-09-10

Family

ID=77613225

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/048694 WO2021176819A1 (en) 2020-03-06 2020-12-25 Processing device

Country Status (3)

Country Link
JP (1) JP7250211B2 (en)
DE (1) DE112020005059T5 (en)
WO (1) WO2021176819A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012017650A1 (en) * 2010-08-03 2012-02-09 パナソニック株式会社 Object detection device, object detection method, and program
WO2013062087A1 (en) * 2011-10-28 2013-05-02 富士フイルム株式会社 Image-capturing device for three-dimensional measurement, three-dimensional measurement device, and measurement program
JP2015190921A (en) * 2014-03-28 2015-11-02 富士重工業株式会社 Vehicle stereo-image processing apparatus
WO2019003771A1 (en) * 2017-06-26 2019-01-03 日立オートモティブシステムズ株式会社 Imaging device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5097681B2 (en) 2008-10-31 2012-12-12 日立オートモティブシステムズ株式会社 Feature position recognition device

Also Published As

Publication number Publication date
JPWO2021176819A1 (en) 2021-09-10
DE112020005059T5 (en) 2022-07-21
JP7250211B2 (en) 2023-03-31

Similar Documents

Publication Publication Date Title
JP6013884B2 (en) Object detection apparatus and object detection method
EP2422320B1 (en) Object detection device
JP4956452B2 (en) Vehicle environment recognition device
WO2016002405A1 (en) Parking space recognition device
JP6274557B2 (en) Moving surface information detection apparatus, moving body device control system using the same, and moving surface information detection program
KR100941271B1 (en) Prevention method of lane departure for vehicle
US7542835B2 (en) Vehicle image processing device
JP5371725B2 (en) Object detection device
JP6707022B2 (en) Stereo camera
JP4937933B2 (en) Outside monitoring device
JP2009110172A (en) Object detection device
US8730325B2 (en) Traveling lane detector
JP6592991B2 (en) Object detection apparatus, object detection method, and program
US10984258B2 (en) Vehicle traveling environment detecting apparatus and vehicle traveling controlling system
JP6722084B2 (en) Object detection device
CN109522779B (en) Image processing apparatus and method
JP2008309519A (en) Object detection device using image processing
JP5073700B2 (en) Object detection device
WO2011016257A1 (en) Distance calculation device for vehicle
JP2007299045A (en) Lane recognition device
WO2021176819A1 (en) Processing device
WO2017154305A1 (en) Image processing device, apparatus control system, imaging device, image processing method, and program
JP7232005B2 (en) VEHICLE DRIVING ENVIRONMENT DETECTION DEVICE AND DRIVING CONTROL SYSTEM
WO2023053477A1 (en) Image processing device
JP5822866B2 (en) Image processing device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20922789

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022504989

Country of ref document: JP

Kind code of ref document: A

122 Ep: pct application non-entry in european phase

Ref document number: 20922789

Country of ref document: EP

Kind code of ref document: A1