WO2011036807A1 - Object detection device and object detection method - Google Patents

Object detection device and object detection method

Info

Publication number
WO2011036807A1
WO2011036807A1 (PCT/JP2009/066802; JP2009066802W)
Authority
WO
WIPO (PCT)
Prior art keywords
target
radar
image
extension line
target information
Prior art date
Application number
PCT/JP2009/066802
Other languages
French (fr)
Japanese (ja)
Inventor
剛 名波
Original Assignee
Toyota Motor Corporation (トヨタ自動車株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Toyota Motor Corporation
Priority to PCT/JP2009/066802
Publication of WO2011036807A1

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R21/00 Arrangements or fittings on vehicles for protecting or preventing injuries to occupants or pedestrians in case of accidents or other traffic risks
    • B60R21/01 Electrical circuits for triggering passive safety arrangements, e.g. airbags, safety belt tighteners, in case of vehicle accidents or impending vehicle accidents
    • B60R21/013 Electrical circuits for triggering passive safety arrangements, e.g. airbags, safety belt tighteners, in case of vehicle accidents or impending vehicle accidents including means for detecting collisions, impending collisions or roll-over
    • B60R21/0134 Electrical circuits for triggering passive safety arrangements, e.g. airbags, safety belt tighteners, in case of vehicle accidents or impending vehicle accidents including means for detecting collisions, impending collisions or roll-over responsive to imminent contact with an obstacle, e.g. using radar systems
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86 Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/867 Combination of radar systems with cameras
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88 Radar or analogous systems specially adapted for specific applications
    • G01S13/93 Radar or analogous systems specially adapted for specific applications for anti-collision purposes
    • G01S13/931 Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/16 Anti-collision systems
    • G08G1/165 Anti-collision systems for passive traffic, e.g. including static obstacles, trees
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/16 Anti-collision systems
    • G08G1/166 Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/16 Anti-collision systems
    • G08G1/167 Driving aids for lane monitoring, lane changing, e.g. blind spot detection
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88 Radar or analogous systems specially adapted for specific applications
    • G01S13/93 Radar or analogous systems specially adapted for specific applications for anti-collision purposes
    • G01S13/931 Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • G01S2013/9327 Sensor installation details
    • G01S2013/93271 Sensor installation details in the front of the vehicles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T2207/30261 Obstacle

Definitions

  • The present invention relates to an object detection apparatus and an object detection method for detecting an object based on information acquired by a radar and a camera.
  • From radar target information, the distance between the host vehicle and a target can be calculated.
  • From image target information acquired from a camera image, the lateral position and width of a target can be calculated with higher accuracy than from radar target information. Therefore, by fusing the radar target information and the image target information, the position and size of an object, its distance from the host vehicle, and its relative speed can be grasped more accurately.
  • Patent Document 1 discloses an obstacle recognition method that recognizes a preceding vehicle ahead of the host vehicle by sensor-fusion recognition processing based on distance measurement ahead of the host vehicle by a radar and imaging ahead of the host vehicle by a camera.
  • In that method, a candidate area for an obstacle is set in an image of a monocular camera based on a distance measurement result of a laser radar. Horizontal and vertical binarized image-edge histograms of the obstacle candidate area are then calculated.
  • A stereo camera or a monocular camera can be used as the camera for acquiring image target information.
  • When a monocular camera is used, cost can be reduced compared with a stereo camera.
  • The present invention has been made in view of the problems described below, which arise in an apparatus or method that detects an object by fusing radar target information acquired by a radar and image target information acquired by a camera.
  • An object of the present invention is to provide a technique capable of further improving object detection accuracy in such an object detection apparatus or object detection method.
  • In the present invention, when no second target recognized by the radar exists on or near the edge extension line described below, the image target information about the lateral end of the first target is used for the fusion of the radar target information and the image target information.
  • The object detection device according to a first aspect detects an object by fusing radar target information acquired by a radar and image target information acquired from an image of a monocular camera. The extension line extending beyond the lateral end of a first target recognized by the monocular camera, along the line segment connecting the base position of the electromagnetic wave emitted from the radar and that lateral end, is defined as an edge extension line.
  • This device comprises: determination means for determining whether a second target recognized by the radar exists on the edge extension line or in its vicinity; distance calculation means for calculating, when the determination means determines that such a second target exists, the depth-direction distance between the first target and the second target based on the radar target information for both targets; and fusion means for fusing the radar target information and the image target information for the first target without using the image target information about the lateral end of the first target when the calculated depth-direction distance exceeds a predetermined distance.
  • The predetermined distance is set to the maximum depth-direction length of an obstacle (such as a vehicle other than the host vehicle) that may exist in the host vehicle's lane, or to that maximum value plus a certain margin.
  • When a second target exists on or near the edge extension line, the lateral end of the first target recognized by the monocular camera is highly likely to be, in reality, the lateral edge of an object or pattern that exists behind the first target as viewed from the host vehicle.
  • When, in addition, the depth-direction distance between the first target and the second target exceeds the predetermined distance, the object or pattern recognized by the monocular camera as the lateral end of the first target is highly likely to be an object other than an obstacle in the host vehicle's lane, or a pattern on such an object.
  • Therefore, when a second target exists on or near the edge extension line and the depth-direction distance between the first target and the second target exceeds the predetermined distance, the radar target information and the image target information for the first target are fused while excluding the image target information about the lateral end of the first target.
  • The object detection device according to a second aspect likewise detects an object by fusing radar target information acquired by a radar and image target information acquired from an image of a monocular camera, with the edge extension line defined as above. This device comprises: determination means for determining whether a second target recognized by the radar exists on the edge extension line or in its vicinity; and fusion means for fusing the radar target information and the image target information for the first target using the image target information about the lateral end of the first target when the determination means determines that no such second target exists.
  • When the reliability of the image target information about the lateral end of the first target is high, that information is thus used for fusion with the radar target information, so erroneous detection of the lateral width of the target can be suppressed.
  • Object detection methods according to third and fourth aspects detect an object by fusing radar target information acquired by a radar and image target information acquired from an image of a monocular camera, using the same edge extension line definition and the corresponding determination, distance calculation, and fusion steps.
  • According to the present invention, the object detection accuracy can be further improved, and malfunction of a safety system such as a PCS can be suppressed.
  • FIG. 1 is a block diagram illustrating a schematic configuration of a part of the PCS according to the present embodiment.
  • The PCS 1 is mounted on the vehicle 100.
  • The PCS 1 includes a millimeter wave radar 2, a monocular camera 3, a camera ECU 4, and a PCS ECU 5.
  • The PCS ECU 5 includes a fusion calculation unit 6, a physical value calculation unit 7, and a collision determination unit 8.
  • The PCS 1 detects vehicles other than the host vehicle, obstacles, pedestrians, and the like based on information acquired by the millimeter wave radar 2 and the monocular camera 3. When it determines that a collision with such an object (including a pedestrian) is highly likely, it issues a warning to the driver and performs collision damage reduction control.
  • Examples of the collision damage reduction control include retraction of the seat belt and reduction of the collision speed by pre-crash braking.
  • The millimeter wave radar 2 is attached to the front center of the vehicle 100.
  • The millimeter wave radar 2 scans ahead of and obliquely ahead of the vehicle 100 in the horizontal direction with millimeter-wave electromagnetic waves, and receives the electromagnetic waves reflected from the surfaces of objects outside the vehicle.
  • The millimeter wave radar 2 thereby recognizes a target as an electromagnetic wave reflection point.
  • The millimeter wave radar 2 acquires target information from the millimeter-wave transmission/reception data.
  • The target information acquired from the millimeter-wave transmission/reception data is the radar target information.
  • The radar target information here consists of the lateral position of the target, the distance between the host vehicle 100 and the target, and the relative speed between the host vehicle 100 and the target.
  • The radar target information is input to the camera ECU 4 and to the fusion calculation unit 6 of the PCS ECU 5.
  • The monocular camera 3 is a CCD camera and is attached to the front center of the vehicle 100.
  • The monocular camera 3 captures images ahead of and obliquely ahead of the vehicle 100, thereby recognizing a target as an image.
  • An image captured by the monocular camera 3 is input to the camera ECU 4 as an image signal.
  • The camera ECU 4 acquires target information from the input image signal.
  • The target information acquired from the image signal input from the monocular camera 3 is the image target information.
  • The image target information here consists of the lateral position of the target and the lateral width and height of the target.
  • The image target information is input to the fusion calculation unit 6 of the PCS ECU 5.
  • The image target information is input to the fusion calculation unit 6 in association with the radar target information for the same target.
  • Whether the target recognized by the millimeter wave radar 2 and the target recognized by the monocular camera 3 are the same target is determined based on the lateral positions of the targets in the radar target information and the image target information. For example, when the target recognized by the millimeter wave radar 2 and the target recognized by the monocular camera 3 overlap, or when the target recognized by the monocular camera 3 is within a predetermined range of the target recognized by the millimeter wave radar 2, the two targets are determined to be the same.
  • The mounting positions of the millimeter wave radar 2 and the monocular camera 3 are not limited to the front center of the vehicle.
  • For example, they may be attached to the rear of the vehicle.
  • The fusion calculation unit 6 fuses the radar target information input from the millimeter wave radar 2 with the image target information input from the camera ECU 4. The fusion method will be described later.
  • The calculation result of the fusion calculation unit 6 is input to the physical value calculation unit 7.
  • Based on the calculation result of the fusion calculation unit 6, the physical value calculation unit 7 calculates the lateral position of the target, the lateral width and height of the target, the distance between the host vehicle 100 and the target, and the relative speed between the host vehicle 100 and the target.
  • The calculation result of the physical value calculation unit 7 is input to the collision determination unit 8.
  • The collision determination unit 8 determines whether the host vehicle 100 and the target are likely to collide based on the calculation result of the physical value calculation unit 7. When the collision determination unit 8 determines that a collision between the host vehicle 100 and the target is highly likely, the PCS ECU 5 transmits control signals to a seat belt retracting actuator, a pre-crash brake actuator, and the like (not shown) to implement collision damage reduction control. In this case, the PCS ECU 5 also transmits a warning-ON signal to a warning device (not shown) for the driver.
  • The millimeter wave radar 2, the monocular camera 3, the camera ECU 4, the fusion calculation unit 6, and the physical value calculation unit 7 constitute the object detection apparatus according to the present invention.
  • Image target information according to the present embodiment is acquired from an image of the monocular camera 3.
  • A target recognized by the millimeter wave radar 2 as an electromagnetic wave reflection point is referred to as a radar target, and a target recognized by the monocular camera 3 as an image is referred to as an image target.
  • For example, when a guardrail 200 is installed beside a corner of the road, the front portion 200A of the guardrail 200 may be recognized as a radar target by the millimeter wave radar 2.
  • The monocular camera 3 also recognizes the front portion 200A of the guardrail 200 as an image target.
  • However, because the guardrail 200 is inclined with respect to the traveling direction of the vehicle 100, the monocular camera 3 may recognize portions of the guardrail 200 behind the front portion 200A as being at the same position as the front portion 200A. As a result, in the image target information, the lateral width of the front portion 200A of the guardrail 200 becomes larger than its actual width.
  • When image target information in which the width of the target has been erroneously detected as larger than the actual width is fused with the radar target information, and object detection is performed based on the result, the detected object is determined to protrude into the host vehicle's lane (in the case of FIG. 2, the front portion 200A of the guardrail 200 is determined to protrude into the host vehicle's lane). In this case, the PCS 1 may malfunction.
  • Therefore, in this embodiment, the radar target information and the image target information are fused while excluding image target information of low reliability, as follows.
  • The base position of the electromagnetic wave emitted from the millimeter wave radar 2 (that is, the mounting position of the millimeter wave radar 2 on the vehicle 100) is defined as the base position 100a.
  • The front portion 200A of the guardrail 200 is taken as the first target.
  • The extension line that extends beyond the lateral end of the image target for the first target, along the line segment connecting the base position 100a and that lateral end (here, the lateral end of the image target for the front portion 200A of the guardrail 200), is defined as the edge extension line.
  • The vicinity of the edge extension line is a range within which a second target, if present, can be treated as being on the edge extension line (for example, a range of 0.5 m to the left and right of the edge extension line). The determination is performed for each of the left and right lateral ends of the image target for the first target.
  • In the case of FIG. 2, the rear portion 200B of the guardrail 200 exists as a second target on the edge extension line on the right end side of the image target for the first target.
  • In that case, the depth-direction distance Ld between the first target and the second target is calculated based on the radar target information for both targets. It is then determined whether the distance Ld between the two targets exceeds a predetermined distance L0.
  • The predetermined distance L0 is set to the maximum depth-direction length of an obstacle (such as a vehicle other than the host vehicle) that may exist in the host vehicle's lane, plus a certain margin (for example, 5 m).
  • For example, a vehicle 300 other than the host vehicle 100 (hereinafter also referred to as "another vehicle") may exist as an obstacle at an angle to the traveling direction of the host vehicle 100.
  • In such a case, the front portion 300A of the other vehicle 300 may be recognized as the first target and the rear portion 300B of the other vehicle 300 as the second target.
  • Here too, the lateral width of the image target for the first target is larger than the actual lateral width of the front portion 300A of the other vehicle 300.
  • However, in this case the lateral width of the image target for the first target is a value that should be treated as the lateral width of the other vehicle 300 as an obstacle.
  • By comparing the distance Ld with the predetermined distance L0, it is determined whether the lateral width of the image target for the first target should be treated as the width of an obstacle in the host vehicle's lane.
  • When the depth-direction distance Ld between the first target and the second target is within the predetermined distance L0, the front side and the far side of a single obstacle in the host vehicle's lane may have been recognized as the first and second targets, as described above.
  • In that case, the lateral width of the image target for the first target is treated as the lateral width of the obstacle in the host vehicle's lane.
  • On the other hand, when a second target recognized as a radar target exists on or near the edge extension line of the image target for the first target and the depth-direction distance Ld between the first target and the second target exceeds the predetermined distance L0, the radar target information and the image target information for the first target are fused without using the image target information about that lateral end of the first target.
  • In this case, the fusion target is recognized accordingly. Note that a fusion target is a target recognized by fusing radar target information and image target information.
  • Otherwise, the radar target information and the image target information for the first target are fused using the image target information about the lateral end of the first target.
  • In this way, according to the embodiment, the image target information and the radar target information can be fused while excluding the portion of low reliability. This suppresses erroneous detection of the width of a target, so the object detection accuracy can be further improved.
  • Depending on the scene, a second target (radar target) may exist on the edge extension lines of both the left and right ends of the image target for the first target.
  • In that case, the image target information about the first target is not fused with the radar target information. That is, as shown in FIG. 6B, the radar target alone serves as the fusion target.
  • Alternatively, a second target (radar target) may exist only on the edge extension line of the end (the right end) of the image target for the first target that is farther from the radar target.
  • In that case, the image target information about the end (the left end) of the image target closer to the radar target is fused with the radar target information, and the fusion target is recognized accordingly.
  • The present embodiment is also effective in suppressing erroneous detection of the width of a wall provided along a narrow road (see FIG. 8). Suppose that point A of a wall 400 is recognized as the first target while the vehicle 100 is traveling on such a road. The monocular camera 3 may recognize the wall behind point A as being at point A; in that case, the image target is recognized as if the right end of the first target protrudes into the host vehicle's lane.
  • However, the wall on the edge extension line at the right end of the image target for the first target is recognized as a second target by the millimeter wave radar 2. Therefore, in this embodiment, the radar target information and the image target information are fused while excluding the image target information at the right end of the first target, and the fusion target is recognized accordingly. This suppresses erroneous detection of the width of the wall.
  • In step S101, radar target information and image target information are read.
  • In step S102, it is determined whether there are radar target information and image target information that can be fused, that is, whether image target information exists for the same target as a target recognized by the millimeter wave radar 2. If the determination in step S102 is affirmative, the target recognized by both the millimeter wave radar 2 and the monocular camera 3 is taken as the first target, and the process of step S103 is executed next. If the determination is negative, execution of this flow is terminated for the time being.
  • In step S103, it is determined whether a second target recognized as a radar target exists on or near the edge extension line of the image target for the first target. This determination is performed for both the left and right ends of the image target for the first target. If the determination in step S103 is affirmative, step S104 is executed next; if it is negative, step S107 is executed next.
  • In step S104, the depth-direction distance Ld between the first target and the second target is calculated based on the radar target information for both targets.
  • In step S105, it is determined whether the depth-direction distance Ld between the first target and the second target exceeds the predetermined distance L0. If the determination is affirmative, step S106 is executed next; if it is negative, step S107 is executed next.
  • In step S106, the radar target information and the image target information for the first target are fused without using the image target information about the lateral end in question. For example, when the second target lies on or near the right-end edge extension line and the above conditions are met, the image target information about the right end of the first target is not used for fusion; likewise, when a second target is recognized on or near the left-end edge extension line and the conditions are met, the image target information about the left end of the first target is not used for fusion.
  • In step S107, the radar target information and the image target information for the first target are fused using the image target information about the lateral end of the first target. That is, the image target information about the first target read in step S101 is fused as-is with the radar target information about the first target. (A code sketch of this flow appears after this list.)
  • The fusion calculation unit 6 of the PCS ECU 5 executing step S103 of the above flow corresponds to the determination means according to the present invention, and step S103 corresponds to the determination step according to the present invention.
  • The fusion calculation unit 6 of the PCS ECU 5 executing step S104 of the above flow corresponds to the distance calculation means according to the present invention, and step S104 corresponds to the distance calculation step according to the present invention.
  • The fusion calculation unit 6 of the PCS ECU 5 executing step S106 or step S107 of the above flow corresponds to the fusion means according to the present invention, and step S106 or step S107 corresponds to the fusion step according to the present invention.
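The following is a minimal Python sketch of the fusion flow of FIG. 9 (steps S101 to S107) summarized above. It is not the patent's implementation: the coordinate convention (radar base position 100a at the origin, y as the depth direction), the data structures, the fallback to the radar lateral position for an excluded end, and the 0.5 m and 5 m values (the examples given in this document) are all assumptions made for illustration.

```python
import math
from dataclasses import dataclass

NEIGHBORHOOD_M = 0.5  # example vicinity half-width of the edge extension line
L0_M = 5.0            # example predetermined distance L0

@dataclass
class RadarTarget:
    x: float  # lateral position [m]; radar base position 100a is the origin
    y: float  # depth-direction distance [m]

@dataclass
class ImageTarget:
    left_x: float   # lateral position of the left end of the image target [m]
    right_x: float  # lateral position of the right end of the image target [m]
    y: float        # depth of the associated (first) radar target [m]

def on_or_near_edge_extension_line(end_x: float, end_y: float,
                                   other: RadarTarget) -> bool:
    """True if `other` lies beyond the lateral end (end_x, end_y) and within
    NEIGHBORHOOD_M of the ray from the origin through that end (step S103)."""
    t = (other.x * end_x + other.y * end_y) / (end_x ** 2 + end_y ** 2)
    if t <= 1.0:  # not beyond the lateral end of the first target
        return False
    return math.hypot(other.x - t * end_x, other.y - t * end_y) <= NEIGHBORHOOD_M

def fuse(first: RadarTarget, image: ImageTarget,
         radar_targets: list[RadarTarget]) -> dict:
    """Steps S103-S107 for one first target already associated with `image`."""
    use_end = {"left": True, "right": True}  # default: fuse as-is (step S107)
    for end_name, end_x in (("left", image.left_x), ("right", image.right_x)):
        for second in radar_targets:
            if second is first:
                continue
            if on_or_near_edge_extension_line(end_x, image.y, second):  # S103
                ld = second.y - first.y                                 # S104
                if ld > L0_M:                                           # S105
                    use_end[end_name] = False                           # S106
    return {
        "depth_m": first.y,  # distance comes from the radar target information
        "left_m": image.left_x if use_end["left"] else first.x,
        "right_m": image.right_x if use_end["right"] else first.x,
    }
```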

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Mechanical Engineering (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

Disclosed are an object detection device and an object detection method with which object detection accuracy can be improved by detecting an object through the fusion of target information obtained from a radar and a camera. When a second target recognized by the radar does not exist on or near an extension line that extends beyond the lateral end of a first target recognized by a monocular camera, along the line segment connecting the base position of the electromagnetic wave radiated from the radar and that lateral end, image target information regarding the lateral end of the first target is used in the fusion of radar target information and image target information.

Description

Object detection apparatus and object detection method
The present invention relates to an object detection apparatus and an object detection method for detecting an object based on information acquired by a radar and a camera.
In recent years, safety systems such as the PCS (Pre-crash Safety System) have been developed for avoiding collisions or reducing collision damage. To realize such a safety system suitably, it is necessary to accurately grasp the positions and sizes of vehicles other than the host vehicle, obstacles, pedestrians, and the like, as well as their distances and relative speeds with respect to the host vehicle. As a technique for grasping these, an object detection apparatus is known that detects an object by fusing radar target information acquired by a radar with image target information acquired from a camera image. Here, a target is an object to be detected.
From the radar target information acquired by the radar, the distance between the host vehicle and a target can be calculated. However, it is difficult to calculate the lateral position and lateral width of the target with high accuracy from radar target information. On the other hand, from the image target information acquired from the camera image, the lateral position and width of the target can be calculated with higher accuracy than from the radar target information. Therefore, by fusing the radar target information and the image target information, the position and size of an object, its distance from the host vehicle, and its relative speed can be grasped more accurately.
Patent Document 1 discloses an obstacle recognition method that recognizes a preceding vehicle ahead of the host vehicle by sensor-fusion recognition processing based on distance measurement ahead of the host vehicle by a radar and imaging ahead of the host vehicle by a camera. In the obstacle recognition method of Patent Document 1, a candidate area for an obstacle is set in an image of a monocular camera based on the distance measurement result of a laser radar. Horizontal and vertical binarized image-edge histograms of the obstacle candidate area are then calculated.
JP 2005-329779 A
JP 2008-265412 A
Japanese Patent No. 3638323
Japanese Patent No. 3400875
JP 04-193641 A
In an object detection apparatus, a stereo camera or a monocular camera can be used as the camera for acquiring image target information. When a monocular camera is used, cost can be reduced compared with a stereo camera. However, it is difficult to acquire accurate depth-direction target information (distance) from the image of a monocular camera. Therefore, in image target information acquired from a monocular camera image, an object or pattern that actually exists behind the target as viewed from the host vehicle may be recognized as the same object as the target. In this case, the lateral width of the target is detected as larger than its actual width.
If such erroneous detection leads to a determination that the detected object protrudes into the host vehicle's lane, a safety system such as the PCS may malfunction.
The present invention has been made in view of the above problems, and its object is to provide a technique capable of further improving object detection accuracy in an object detection apparatus or object detection method that detects an object by fusing radar target information acquired by a radar with image target information acquired by a camera.
In the present invention, when no second target recognized by the radar exists on, or in the vicinity of, an extension line that extends beyond the lateral end of a first target recognized by the monocular camera, along the line segment connecting the base position of the electromagnetic wave emitted from the radar and that lateral end, the image target information about the lateral end of the first target is used for the fusion of the radar target information and the image target information.
More specifically, the object detection device according to the first invention is
an object detection device that detects an object by fusing radar target information acquired by a radar and image target information acquired from an image of a monocular camera, comprising:
determination means for defining, as an edge extension line, the extension line that extends beyond the lateral end of a first target recognized by the monocular camera, along the line segment connecting the base position of the electromagnetic wave emitted from the radar and that lateral end, and for determining whether a second target recognized by the radar exists on the edge extension line or in the vicinity of the edge extension line;
distance calculation means for calculating, when the determination means determines that the second target recognized by the radar exists on or near the edge extension line, the depth-direction distance between the first target and the second target based on the radar target information for both targets; and
fusion means for fusing the radar target information and the image target information for the first target without using the image target information about the lateral end of the first target when the depth-direction distance between the first target and the second target calculated by the distance calculation means exceeds a predetermined distance.
Here, the predetermined distance is set to the maximum depth-direction length of an obstacle (such as a vehicle other than the host vehicle) that may exist in the host vehicle's lane, or to a value obtained by adding a certain margin to that maximum value.
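As a minimal numeric sketch of this rule (not from the patent: the split of the predetermined distance into a maximum obstacle length plus a margin uses illustrative values; the embodiment later gives 5 m as an example):

```python
MAX_OBSTACLE_DEPTH_M = 4.5  # assumed maximum depth-direction obstacle length
MARGIN_M = 0.5              # assumed margin
L0_M = MAX_OBSTACLE_DEPTH_M + MARGIN_M  # predetermined distance (example: 5 m)

def use_lateral_end_info(second_on_or_near_extension_line: bool,
                         ld_m: float) -> bool:
    """Return True if the image target information about the lateral end may be
    used for fusion; False if it must be excluded, because a second target lies
    on or near the edge extension line and the depth-direction distance Ld
    between the two targets exceeds the predetermined distance L0."""
    return not (second_on_or_near_extension_line and ld_m > L0_M)
```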
In the present invention, when a second target exists on or near the edge extension line, the lateral end of the first target recognized by the monocular camera is highly likely to be, in reality, the lateral edge of an object or pattern that exists behind the first target as viewed from the host vehicle. In addition, when the depth-direction distance between the first target and the second target exceeds the predetermined distance, the object or pattern recognized as the lateral end of the first target is highly likely to be an object other than an obstacle in the host vehicle's lane, or a pattern on such an object.
Therefore, when a second target exists on or near the edge extension line and the depth-direction distance between the first target and the second target exceeds the predetermined distance, the radar target information and the image target information for the first target are fused while excluding the image target information about the lateral end of the first target.
This makes it possible to fuse the image target information and the radar target information while excluding the portion of low reliability. As a result, erroneous detection of the lateral width of the target can be suppressed, and the object detection accuracy can be further improved.
The object detection device according to the second invention is
an object detection device that detects an object by fusing radar target information acquired by a radar and image target information acquired from an image of a monocular camera, comprising:
determination means for defining, as an edge extension line, the extension line that extends beyond the lateral end of a first target recognized by the monocular camera, along the line segment connecting the base position of the electromagnetic wave emitted from the radar and that lateral end, and for determining whether a second target recognized by the radar exists on the edge extension line or in the vicinity of the edge extension line; and
fusion means for fusing the radar target information and the image target information for the first target using the image target information about the lateral end of the first target when the determination means determines that no second target recognized by the radar exists on or near the edge extension line.
According to this invention, when the reliability of the image target information about the lateral end of the first target is high, that information is used for fusion with the radar target information. Therefore, erroneous detection of the lateral width of the target can be suppressed.
The object detection method according to the third invention is
an object detection method for detecting an object by fusing radar target information acquired by a radar and image target information acquired from an image of a monocular camera, comprising:
a determination step of defining, as an edge extension line, the extension line that extends beyond the lateral end of a first target recognized by the monocular camera, along the line segment connecting the base position of the electromagnetic wave emitted from the radar and that lateral end, and determining whether a second target recognized by the radar exists on the edge extension line or in the vicinity of the edge extension line;
a distance calculation step of calculating, when it is determined in the determination step that the second target exists on or near the edge extension line, the depth-direction distance between the first target and the second target based on the radar target information for both targets; and
a fusion step of fusing the radar target information and the image target information for the first target without using the image target information about the lateral end of the first target when the depth-direction distance calculated in the distance calculation step exceeds a predetermined distance.
This invention can also suppress erroneous detection of the lateral width of a target, for the same reason as the first invention.
The object detection method according to the fourth invention is
an object detection method for detecting an object by fusing radar target information acquired by a radar and image target information acquired from an image of a monocular camera, comprising:
a determination step of defining, as an edge extension line, the extension line that extends beyond the lateral end of a first target recognized by the monocular camera, along the line segment connecting the base position of the electromagnetic wave emitted from the radar and that lateral end, and determining whether a second target recognized by the radar exists on the edge extension line or in the vicinity of the edge extension line; and
a fusion step of fusing the radar target information and the image target information for the first target using the image target information about the lateral end of the first target when it is determined in the determination step that no second target recognized by the radar exists on or near the edge extension line.
This invention can also suppress erroneous detection of the lateral width of a target, for the same reason as the second invention.
According to the present invention, the object detection accuracy can be further improved. As a result, malfunction of a safety system such as the PCS can be suppressed.
FIG. 1 is a block diagram showing a schematic configuration of a part of the PCS according to the embodiment.
FIG. 2 is a first diagram for explaining the method of fusing radar target information and image target information.
FIG. 3 is a second diagram for explaining the method of fusing radar target information and image target information.
FIG. 4 is a third diagram for explaining the method of fusing radar target information and image target information.
FIG. 5 is a fourth diagram for explaining the method of fusing radar target information and image target information.
FIG. 6 is a first diagram for explaining the fusion target in a case where the radar target and the image target for the first target do not overlap.
FIG. 7 is a second diagram for explaining the fusion target in a case where the radar target and the image target for the first target do not overlap.
FIG. 8 is a diagram for explaining how a wall provided along a narrow road is recognized as a fusion target.
FIG. 9 is a flowchart showing the flow of fusion of radar target information and image target information according to the embodiment.
Hereinafter, specific embodiments of the present invention will be described with reference to the drawings. The dimensions, materials, shapes, relative arrangements, and the like of the components described in this embodiment are not intended to limit the technical scope of the invention unless otherwise specified.
<Example>
(Schematic configuration of the system)
Here, an embodiment in which the object detection apparatus according to the present invention is applied to a PCS will be described. FIG. 1 is a block diagram showing a schematic configuration of a part of the PCS according to this embodiment. In this embodiment, a PCS 1 is mounted on a vehicle 100. The PCS 1 includes a millimeter wave radar 2, a monocular camera 3, a camera ECU 4, and a PCS ECU 5. The PCS ECU 5 includes a fusion calculation unit 6, a physical value calculation unit 7, and a collision determination unit 8.
The PCS 1 detects vehicles other than the host vehicle, obstacles, pedestrians, and the like based on information acquired by the millimeter wave radar 2 and the monocular camera 3. When it determines that a collision with such an object (including a pedestrian) is highly likely, it issues a warning to the driver and performs collision damage reduction control. Examples of the collision damage reduction control include retraction of the seat belt and reduction of the collision speed by pre-crash braking.
The millimeter wave radar 2 is attached to the front center of the vehicle 100. The millimeter wave radar 2 scans ahead of and obliquely ahead of the vehicle 100 in the horizontal direction with millimeter-wave electromagnetic waves, and receives the electromagnetic waves reflected from the surfaces of objects outside the vehicle. The millimeter wave radar 2 thereby recognizes a target as an electromagnetic wave reflection point. Further, the millimeter wave radar 2 acquires target information from the millimeter-wave transmission/reception data; this is the radar target information. The radar target information here consists of the lateral position of the target, the distance between the host vehicle 100 and the target, and the relative speed between the host vehicle 100 and the target. The radar target information is input to the camera ECU 4 and to the fusion calculation unit 6 of the PCS ECU 5.
The monocular camera 3 is a CCD camera and is attached to the front center of the vehicle 100. The monocular camera 3 captures images ahead of and obliquely ahead of the vehicle 100, thereby recognizing a target as an image. An image captured by the monocular camera 3 is input to the camera ECU 4 as an image signal. The camera ECU 4 acquires target information from the input image signal; this is the image target information. The image target information here consists of the lateral position of the target and the lateral width and height of the target. The image target information is input to the fusion calculation unit 6 of the PCS ECU 5.
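As a rough illustration only (the field names and units are assumptions, not the patent's), the two information sets described above could be carried in structures like these:

```python
from dataclasses import dataclass

@dataclass
class RadarTargetInfo:
    """Acquired by the millimeter wave radar 2 from transmission/reception data."""
    lateral_position_m: float  # lateral position of the target
    distance_m: float          # distance between host vehicle 100 and the target
    relative_speed_mps: float  # relative speed between host vehicle 100 and the target

@dataclass
class ImageTargetInfo:
    """Acquired by the camera ECU 4 from the image signal of the monocular camera 3."""
    lateral_position_m: float  # lateral position of the target
    width_m: float             # lateral width of the target
    height_m: float            # height of the target
```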
At this time, the image target information is input to the fusion calculation unit 6 in association with the radar target information for the same target. Whether the target recognized by the millimeter wave radar 2 and the target recognized by the monocular camera 3 are the same target is determined based on the lateral positions of the targets in the radar target information and the image target information, and the like. For example, when the target recognized by the millimeter wave radar 2 and the target recognized by the monocular camera 3 overlap, or when the target recognized by the monocular camera 3 is within a predetermined range of the target recognized by the millimeter wave radar 2, the two targets are determined to be the same.
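A sketch of such an association test, reusing the structures above; the form of the overlap test and the value of the predetermined range are assumptions made for illustration:

```python
PREDETERMINED_RANGE_M = 1.0  # assumed lateral range for the "within range" test

def same_target(radar: RadarTargetInfo, image: ImageTargetInfo) -> bool:
    """True if the radar target and the image target are judged to be the same:
    they overlap laterally, or the image target lies within a predetermined
    range of the radar target."""
    offset = abs(image.lateral_position_m - radar.lateral_position_m)
    overlaps = offset <= image.width_m / 2.0  # radar point inside the image target
    return overlaps or offset <= PREDETERMINED_RANGE_M
```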
Note that the mounting positions of the millimeter wave radar 2 and the monocular camera 3 are not limited to the front center of the vehicle. For example, they may be attached to the rear of the vehicle.
The fusion calculation unit 6 fuses the radar target information input from the millimeter wave radar 2 with the image target information input from the camera ECU 4. The fusion method will be described later. The calculation result of the fusion calculation unit 6 is input to the physical value calculation unit 7.
Based on the calculation result of the fusion calculation unit 6, the physical value calculation unit 7 calculates the lateral position of the target, the lateral width and height of the target, the distance between the host vehicle 100 and the target, and the relative speed between the host vehicle 100 and the target. The calculation result of the physical value calculation unit 7 is input to the collision determination unit 8.
The collision determination unit 8 determines, based on the calculation result of the physical value calculation unit 7, whether the host vehicle 100 and the target are likely to collide. When the collision determination unit 8 determines that a collision between the host vehicle 100 and the target is highly likely, the PCS ECU 5 transmits control signals to a seat belt retracting actuator, a pre-crash brake actuator, and the like (not shown) to implement collision damage reduction control. In this case, the PCS ECU 5 also transmits a warning-ON signal to a warning device (not shown) for the driver.
In this embodiment, the millimeter wave radar 2, the monocular camera 3, the camera ECU 4, the fusion calculation unit 6, and the physical value calculation unit 7 constitute the object detection apparatus according to the present invention.
(Fusion method)
Hereinafter, the method by which the fusion calculation unit 6 fuses the radar target information input from the millimeter wave radar 2 with the image target information input from the camera ECU 4 will be described with reference to FIGS. 2 to 5. The image target information in this embodiment is acquired from an image of the monocular camera 3. However, it is difficult to acquire accurate depth-direction target information (distance) from an image of a monocular camera. Therefore, in the image target information acquired from the image of the monocular camera 3, an object or pattern that actually exists behind the target as viewed from the host vehicle 100 may be recognized as the same object as the target. In this case, the lateral width of the target is detected as larger than its actual width.
Here, a target recognized by the millimeter wave radar 2 as an electromagnetic wave reflection point is referred to as a radar target, and a target recognized by the monocular camera 3 as an image is referred to as an image target. For example, as shown in FIG. 2, suppose that a guardrail 200 is installed beside a corner of the road and the millimeter wave radar 2 recognizes the front portion 200A of the guardrail 200 as a radar target. At this time, the monocular camera 3 also recognizes the front portion 200A of the guardrail 200 as an image target. However, because the guardrail 200 is inclined with respect to the traveling direction of the vehicle 100, the monocular camera 3 may recognize the portions of the guardrail 200 behind the front portion 200A as being at the same position as the front portion 200A. As a result, in the image target information, the lateral width of the front portion 200A of the guardrail 200 becomes larger than its actual width.
 If image target information in which the lateral width of a target has been erroneously detected as larger than its actual width is fused with the radar target information and object detection is performed based on the result, the detected object is judged to protrude into the host vehicle's lane (in the case of FIG. 2, the near portion 200A of the guardrail 200 is judged to protrude into the lane). In this case, the PCS 1 may operate erroneously.
 In this embodiment, therefore, the following method excludes low-reliability image target information when fusing the radar target information with the image target information.
 In FIG. 2, the base position of the electromagnetic waves emitted from the millimeter wave radar 2 (that is, the position at which the millimeter wave radar 2 is mounted on the vehicle 100) is denoted as base position 100a, and the near portion 200A of the guardrail 200 is taken as the first target. An edge extension line is then defined as the extension, beyond the lateral end of the image target for the first target, of the line segment connecting the base position 100a and that lateral end.
 In this embodiment, it is then determined whether or not a second target, recognized by the millimeter wave radar 2 as a target distinct from the first target, exists on or near the edge extension line. Here, "near the edge extension line" denotes a range set such that a second target within it can be treated as lying on the edge extension line (for example, within 0.5 m to the left or right of the edge extension line). This determination is performed for each of the left and right lateral ends of the image target for the first target.
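 This edge-line test reduces to simple planar geometry. The following is a minimal sketch, not taken from the patent, assuming a vehicle-fixed frame with the base position 100a at the origin, x running left-right and y running in the travel (depth) direction; the 0.5 m tolerance mirrors the example above, and the function name is illustrative.

```python
import math

EDGE_TOLERANCE_M = 0.5  # example "near the edge extension line" band from the text

def is_on_edge_extension_line(edge_point, radar_point, tol=EDGE_TOLERANCE_M):
    """Return True if radar_point lies on or near the extension, beyond
    edge_point, of the ray from the origin (base position 100a) through
    edge_point -- that is, on or near the edge extension line."""
    ex, ey = edge_point
    px, py = radar_point
    ray_len = math.hypot(ex, ey)
    if ray_len == 0.0:
        return False
    # Perpendicular (left-right) distance from the infinite line through the origin.
    lateral = abs(ex * py - ey * px) / ray_len
    # Signed projection along the ray; the extension begins beyond the edge point.
    along = (ex * px + ey * py) / ray_len
    return lateral <= tol and along > ray_len

# Example: image edge at (2 m, 20 m); a radar point at (3 m, 30 m) lies
# exactly on the extension of that ray, so the check returns True.
print(is_on_edge_extension_line((2.0, 20.0), (3.0, 30.0)))  # True
```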
 In the case of FIG. 2, no second target exists on or near the edge extension line for the left end of the image target for the first target. On the other hand, the far portion 200B of the guardrail 200 exists as a second target on the edge extension line for the right end of the image target for the first target.
 When a second target recognized as a radar target exists on or near an edge extension line of the image target for the first target in this way, it is highly likely that the image target has merged a second target that actually lies farther away than the first target, as viewed from the host vehicle 100, into the same object as the first target. In that case, the lateral width of the image target for the first target is likely to be larger than the actual lateral width of the first target.
 Furthermore, in this embodiment, in the above case, the depth-direction distance Ld between the first target and the second target is calculated based on the radar target information for both targets, and it is determined whether or not Ld exceeds a predetermined distance L0. Here, the predetermined distance L0 is set to the maximum depth-direction length of an obstacle that could exist in the host vehicle's lane (such as a vehicle other than the host vehicle) plus a certain margin (for example, 5 m).
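 As a companion to the sketch above, the depth comparison itself is a one-liner; a minimal illustration follows, carrying over the assumption that the depth direction is the y axis, with the 5 m value for L0 taken from the example in the text.

```python
L0_M = 5.0  # example L0: max depth extent of a plausible on-lane obstacle plus margin

def depth_distance(first_xy, second_xy):
    """Ld: depth-direction separation of two radar targets."""
    return abs(second_xy[1] - first_xy[1])

# Guardrail-like case (FIG. 2): near portion at y = 20 m, far portion at y = 32 m.
print(depth_distance((2.0, 20.0), (3.2, 32.0)) > L0_M)  # True -> not one obstacle
```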
 For example, as shown in FIG. 3, a vehicle 300 other than the host vehicle 100 (hereinafter also referred to as the other vehicle) may exist as an obstacle in the host vehicle's lane, oblique to the traveling direction of the host vehicle 100. Suppose that the millimeter wave radar 2 recognizes the near portion 300A of the other vehicle 300 as the first target and the far portion 300B of the other vehicle 300 as the second target, and further that the second target lies on the edge extension line of the image target for the first target.
 In this case, the lateral width of the image target for the first target is larger than the actual lateral width of the near portion 300A of the other vehicle 300. In such a case, however, the lateral width of the image target for the first target is a value that should be treated as the lateral width of the other vehicle 300 as an obstacle.
 Therefore, in this embodiment, by determining whether or not the depth-direction distance Ld between the first target and the second target exceeds the predetermined distance L0 as described above, it is decided whether or not the lateral width of the image target for the first target is to be treated as the lateral width of an obstacle in the host vehicle's lane.
 When the depth-direction distance Ld between the first target and the second target exceeds the predetermined distance L0, it is highly likely either that an object outside the host vehicle's lane, such as the guardrail 200 in FIG. 2, has been recognized as the first and second targets, or that another vehicle traveling directly ahead of the host vehicle 100 in its lane has been recognized as the first target and a further vehicle traveling ahead of that vehicle has been recognized as the second target. In other words, objects other than a single obstacle in the host vehicle's lane are likely to have been recognized as the first and second targets. In this case, therefore, the lateral width of the image target for the first target is not treated as the lateral width of an obstacle in the host vehicle's lane.
 On the other hand, if the depth-direction distance Ld between the first target and the second target is within the predetermined distance L0, it is highly likely that, as in FIG. 3, the near side and far side of an obstacle in the host vehicle's lane have been recognized as the first and second targets. In this case, therefore, the lateral width of the image target for the first target is treated as the lateral width of an obstacle in the host vehicle's lane.
 Accordingly, in this embodiment, when a second target recognized as a radar target exists on or near an edge extension line of the image target for the first target, and the depth-direction distance Ld between the first target and the second target exceeds the predetermined distance L0, the radar target information and the image target information for the first target are fused without using the image target information for that lateral end of the first target.
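 Combining the two helpers sketched earlier gives the per-edge decision rule just described. This is again an illustrative sketch under the same assumed coordinate frame, not the patent's implementation; keep_image_edge is a hypothetical name.

```python
def keep_image_edge(first_radar_xy, edge_xy, other_radar_targets):
    """Return False when the image-derived lateral edge is low-reliability:
    some other radar target sits on or near this edge extension line AND
    lies more than L0 deeper than the first target (the two conditions
    described above)."""
    for other_xy in other_radar_targets:
        if (is_on_edge_extension_line(edge_xy, other_xy)
                and depth_distance(first_radar_xy, other_xy) > L0_M):
            return False  # exclude this edge's image information from fusion
    return True
```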
 Consequently, when a radar target and an image target are recognized as shown in FIG. 2, the fusion target is recognized as shown in FIG. 4. A fusion target is a target recognized by fusing radar target information with image target information.
 On the other hand, even when a second target recognized as a radar target exists on or near an edge extension line of the image target for the first target, if the depth-direction distance Ld between the first target and the second target is equal to or less than the predetermined distance L0, the radar target information and the image target information for the first target are fused using the image target information for that lateral end of the first target.
 Consequently, when a radar target and an image target are recognized as shown in FIG. 3, the fusion target is recognized as shown in FIG. 5.
 As described above, according to this embodiment, the image target information, excluding its low-reliability portions, can be fused with the radar target information. This suppresses erroneous detection of the lateral width of a target and therefore further improves object detection accuracy.
 (Fusion targets in cases other than the above)
 A fusion target in the case where the radar target and the image target for the first target do not overlap will be described with reference to FIGS. 6 and 7. In FIGS. 6 and 7, it is assumed that the depth-direction distance Ld between the first target and the second target exceeds the predetermined distance L0.
 As shown in FIG. 6(a), second targets (radar targets) may exist on the edge extension lines for both the left and right ends of the image target for the first target. In this case, the image target information for the first target is not fused with the radar target information; that is, as shown in FIG. 6(b), the radar target itself becomes the fusion target.
 As shown in FIG. 7(a), a second target (radar target) may exist only on the edge extension line for the lateral end of the image target farther from the radar target (the right end). In this case, the image target information for the end closer to the radar target (the left end) is fused with the radar target information; that is, the fusion target is recognized as shown in FIG. 7(b). A sketch of how these per-edge decisions could shape the fused lateral extent follows.
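 The sketch below, which reuses keep_image_edge() from above, is one illustrative reading of FIGS. 6 and 7: a discarded image edge falls back to the radar point's lateral position, and when both edges are discarded the radar target alone remains. The data shapes are assumptions, not the patent's representation.

```python
def fuse_lateral_extent(first_radar_xy, left_edge_xy, right_edge_xy, others):
    """Derive the fused target's lateral extent (left_x, right_x) from the
    per-edge decisions."""
    keep_left = keep_image_edge(first_radar_xy, left_edge_xy, others)
    keep_right = keep_image_edge(first_radar_xy, right_edge_xy, others)
    if not (keep_left or keep_right):
        # FIG. 6(b): neither image edge is trusted; the radar point alone
        # stands in as the fusion target.
        return (first_radar_xy[0], first_radar_xy[0])
    # FIG. 7(b): only the trusted edge keeps its image-derived position.
    left_x = left_edge_xy[0] if keep_left else first_radar_xy[0]
    right_x = right_edge_xy[0] if keep_right else first_radar_xy[0]
    return (left_x, right_x)
```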
 The fusion method of this embodiment for radar target information and image target information is also effective in suppressing erroneous detection of the width of a wall provided along a narrow road, as shown in FIG. 8. Suppose that, while the vehicle 100 is traveling on such a road, point A on the wall 400 is recognized as the first target. At this time, the monocular camera 3 may recognize the wall beyond point A as also being at point A. In that case, the image target is recognized as having its right end protruding into the host vehicle's lane.
 In this case, however, the wall on the edge extension line for the right end of the image target for the first target is recognized by the millimeter wave radar 2 as a second target. Therefore, in this embodiment, the radar target information and the image target information are fused excluding the image target information for the right end of the first target; that is, the fusion target is recognized as shown in FIG. 8. This suppresses erroneous detection of the width of the wall 400.
 (Fusion flow)
 The flow of fusing radar target information with image target information in the fusion calculation unit according to this embodiment will be described based on the flowchart shown in FIG. 9. This flow is executed repeatedly in the fusion calculation unit 6.
 In this flow, first, in step S101, the radar target information and the image target information are read.
 Next, in step S102, it is determined whether there is image target information that can be fused with the radar target information, that is, whether there is image target information for the same target as a target recognized by the millimeter wave radar 2. If the determination in step S102 is affirmative, the target recognized by both the millimeter wave radar 2 and the monocular camera 3 is designated as the first target, and the process of step S103 is executed next. If the determination in step S102 is negative, execution of this flow is terminated for the current cycle.
 In step S103, it is determined whether or not a second target recognized as a radar target exists on or near an edge extension line of the image target for the first target. This determination is performed for both the left and right ends of the image target for the first target. If the determination in step S103 is affirmative, the process of step S104 is executed next; if negative, the process of step S107 is executed next.
 In step S104, the depth-direction distance Ld between the first target and the second target is calculated based on the radar target information for both targets.
 Next, in step S105, it is determined whether or not the depth-direction distance Ld between the first target and the second target exceeds the predetermined distance L0. If the determination in step S105 is affirmative, the process of step S106 is executed next; if negative, the process of step S107 is executed next.
 In step S106, the radar target information and the image target information for the first target are fused without using the image target information for the relevant lateral end of the first target.
 That is, when a second target recognized as a radar target exists on or near the edge extension line for the right end of the image target for the first target, and the depth-direction distance Ld between the first target and the second target exceeds the predetermined distance L0, the image target information for the right end of the first target is not used for fusion. Likewise, when a second target recognized as a radar target exists on or near the edge extension line for the left end of the image target for the first target, and the depth-direction distance Ld between the first target and the second target exceeds the predetermined distance L0, the image target information for the left end of the first target is not used for fusion.
 In step S107, on the other hand, the radar target information and the image target information for the first target are fused using the image target information for the lateral end of the first target; that is, the image target information for the first target read in step S101 is fused as-is with the radar target information for the first target.
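 Put together, steps S101 to S107 can be summarized as one cycle of a loop. The sketch below reuses the helpers from the earlier sketches, treats the S102 association between a radar target and an image target as given (the matching logic is outside this flow), and uses illustrative container types throughout; it is an assumed reading of the flowchart, not the patent's code.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

Point = Tuple[float, float]

@dataclass
class ImageTarget:
    left_edge: Point
    right_edge: Point

def fusion_cycle(radar_targets: List[Point],
                 matched: Optional[Tuple[Point, ImageTarget]]):
    """One pass of the repeatedly executed flow. `matched` stands in for the
    S102 result: the radar/image pair recognized as the same (first) target."""
    # S101: radar and image target information are read (passed in here).
    if matched is None:
        return None                       # S102 negative: end this cycle
    first_radar_xy, image_target = matched
    others = [t for t in radar_targets if t != first_radar_xy]
    fused_edges = []
    for edge_xy in (image_target.left_edge, image_target.right_edge):
        # S103: does a second radar target sit on/near this edge extension line?
        second = next((t for t in others
                       if is_on_edge_extension_line(edge_xy, t)), None)
        # S104/S105: compute Ld and compare it with L0.
        if second is not None and depth_distance(first_radar_xy, second) > L0_M:
            fused_edges.append(None)      # S106: image info for this edge unused
        else:
            fused_edges.append(edge_xy)   # S107: image info for this edge used
    return first_radar_xy, fused_edges
```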
 In this embodiment, the fusion calculation unit 6 of the PCS ECU 5 that executes step S103 of the above flow corresponds to the determination means according to the present invention, and step S103 corresponds to the determination step according to the present invention. The fusion calculation unit 6 of the PCS ECU 5 that executes step S104 corresponds to the distance calculation means according to the present invention, and step S104 corresponds to the distance calculation step according to the present invention. The fusion calculation unit 6 of the PCS ECU 5 that executes step S106 or step S107 corresponds to the fusion means according to the present invention, and step S106 or step S107 corresponds to the fusion step according to the present invention.
1 ... PCS (Pre-crash safety system)
2 ... Millimeter wave radar
3 ... Monocular camera
4 ... Camera ECU
5 ... PCS ECU
6 ... Fusion calculation unit
7 ... Physical value calculation unit
8 ... Collision determination unit
100 ... Vehicle (host vehicle)
200 ... Guardrail
300 ... Vehicle (other vehicle)
400 ... Wall

Claims (4)

  1.  An object detection device that detects an object by fusing radar target information acquired by a radar with image target information acquired from an image of a monocular camera, the device comprising:
     determination means for determining whether or not a second target recognized by the radar exists on or near an edge extension line, the edge extension line being the extension, beyond a lateral end of a first target recognized by the monocular camera, of the line segment connecting the base position of the electromagnetic waves emitted from the radar and that lateral end;
     distance calculation means for calculating, when the determination means determines that the second target recognized by the radar exists on or near the edge extension line, the depth-direction distance between the first target and the second target based on the radar target information for both targets; and
     fusion means for fusing, when the depth-direction distance between the first target and the second target calculated by the distance calculation means exceeds a predetermined distance, the radar target information and the image target information for the first target without using the image target information for the lateral end of the first target recognized by the monocular camera.
  2.  An object detection device that detects an object by fusing radar target information acquired by a radar with image target information acquired from an image of a monocular camera, the device comprising:
     determination means for determining whether or not a second target recognized by the radar exists on or near an edge extension line, the edge extension line being the extension, beyond a lateral end of a first target recognized by the monocular camera, of the line segment connecting the base position of the electromagnetic waves emitted from the radar and that lateral end; and
     fusion means for fusing, when the determination means determines that the second target recognized by the radar does not exist on or near the edge extension line, the radar target information and the image target information for the first target using the image target information for the lateral end of the first target recognized by the monocular camera.
  3.  An object detection method for detecting an object by fusing radar target information acquired by a radar with image target information acquired from an image of a monocular camera, the method comprising:
     a determination step of determining whether or not a second target recognized by the radar exists on or near an edge extension line, the edge extension line being the extension, beyond a lateral end of a first target recognized by the monocular camera, of the line segment connecting the base position of the electromagnetic waves emitted from the radar and that lateral end;
     a distance calculation step of calculating, when it is determined in the determination step that the second target exists on or near the edge extension line, the depth-direction distance between the first target and the second target based on the radar target information for both targets; and
     a fusion step of fusing, when the depth-direction distance between the first target and the second target calculated in the distance calculation step exceeds a predetermined distance, the radar target information and the image target information for the first target without using the image target information for the lateral end of the first target recognized by the monocular camera.
  4.  An object detection method for detecting an object by fusing radar target information acquired by a radar with image target information acquired from an image of a monocular camera, the method comprising:
     a determination step of determining whether or not a second target recognized by the radar exists on or near an edge extension line, the edge extension line being the extension, beyond a lateral end of a first target recognized by the monocular camera, of the line segment connecting the base position of the electromagnetic waves emitted from the radar and that lateral end; and
     a fusion step of fusing, when it is determined in the determination step that the second target recognized by the radar does not exist on or near the edge extension line, the radar target information and the image target information for the first target using the image target information for the lateral end of the first target recognized by the monocular camera.
PCT/JP2009/066802 2009-09-28 2009-09-28 Object detection device and object detection method WO2011036807A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2009/066802 WO2011036807A1 (en) 2009-09-28 2009-09-28 Object detection device and object detection method

Publications (1)

Publication Number Publication Date
WO2011036807A1 (en)

Family ID=43795577

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2009/066802 WO2011036807A1 (en) 2009-09-28 2009-09-28 Object detection device and object detection method

Country Status (1)

Country Link
WO (1) WO2011036807A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001099930A (en) * 1999-09-29 2001-04-13 Fujitsu Ten Ltd Sensor for monitoring periphery
JP2002098754A (en) * 2000-09-22 2002-04-05 Fujitsu Ten Ltd Radar system for vehicle
JP2005141517A (en) * 2003-11-07 2005-06-02 Daihatsu Motor Co Ltd Vehicle detecting method and device
JP2006048568A (en) * 2004-08-09 2006-02-16 Daihatsu Motor Co Ltd Object recognition method and object recognizing device
JP2008276689A (en) * 2007-05-07 2008-11-13 Mitsubishi Electric Corp Obstacle-recognition device for vehicle

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013108664A1 (en) * 2012-01-16 2013-07-25 トヨタ自動車株式会社 Object detection device
JP2015206797A (en) * 2012-11-22 2015-11-19 株式会社デンソー Target detection device
US9798002B2 (en) 2012-11-22 2017-10-24 Denso Corporation Object detection apparatus
JP2018092483A (en) * 2016-12-06 2018-06-14 トヨタ自動車株式会社 Object recognition device
CN110940974A (en) * 2018-09-21 2020-03-31 丰田自动车株式会社 Object detection device
CN110940974B (en) * 2018-09-21 2023-10-10 丰田自动车株式会社 Object detection device
CN111098815A (en) * 2019-11-11 2020-05-05 武汉市众向科技有限公司 ADAS front vehicle collision early warning method based on monocular vision fusion millimeter waves
CN112017241A (en) * 2020-08-20 2020-12-01 广州小鹏汽车科技有限公司 Data processing method and device


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09849837

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09849837

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP