JP2011145166A - Vehicle detector - Google Patents

Vehicle detector

Info

Publication number
JP2011145166A
Authority
JP
Japan
Prior art keywords
vehicle
detection
means
lidar
end
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP2010006100A
Other languages
Japanese (ja)
Inventor
Hiroshi Nakamura
弘 中村
Original Assignee
Toyota Motor Corp
トヨタ自動車株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Toyota Motor Corp (トヨタ自動車株式会社)
Priority to JP2010006100A
Publication of JP2011145166A
Application status: Pending


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06 Systems determining position data of a target
    • G01S17/42 Simultaneous measurement of distance and other co-ordinates
    • G01S17/86
    • G01S17/931
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4808 Evaluating distance, position or velocity data

Abstract

PROBLEM TO BE SOLVED: To provide a vehicle detector that accurately detects the end of a vehicle using the detection points of a laser detection means.

SOLUTION: The vehicle detector includes: a laser detection means that obtains detection points on an object by irradiating a laser beam in many directions, differing from one another vertically and horizontally, and receiving the laser beam reflected by the object; an imaging means that images the side being detected by the laser detection means; a region search means that searches for a region including a vehicle in the picture captured by the imaging means; and a vehicle end identification means that extracts the detection points obtained by the laser detection means which fall within a specific region among the regions found by the region search means, and identifies the end of the vehicle based on the extracted detection points.

COPYRIGHT: (C)2011,JPO&INPIT

Description

  The present invention relates to a vehicle detection device.

  Conventionally, in collision prevention devices, ACC [Adaptive Cruise Control] systems, and the like, a vehicle detection device that detects the position and orientation of another vehicle has been used to avoid collisions with other vehicles such as a preceding vehicle or an oncoming vehicle. Some vehicle detection devices use LIDAR [LIght Detection And Ranging] (laser radar). In the vehicle detection device described in Patent Document 1, detection points ahead of the host vehicle are acquired with LIDAR, a captured image ahead of the host vehicle is acquired with a camera, the end of the other vehicle ahead is identified using the detection points and the captured image, and the position and orientation of the other vehicle are detected from that end.

JP 2009-98025 A
JP 2005-90974 A

  Usually, if straight-line fitting is performed using only the detection points on the bumper, which protrudes from the vehicle body and extends to the left and right ends of the vehicle, the end of the vehicle can be identified with high accuracy. However, since the laser beam also strikes parts of the vehicle other than the bumper, detection points for those parts are obtained as well. If straight-line fitting is performed using detection points for parts other than the bumper, the fitting error increases and the end of the vehicle cannot be identified accurately.

  Therefore, an object of the present invention is to provide a vehicle detection device that detects an end portion of a vehicle with high accuracy using the detection points of a laser detection means.

  A vehicle detection apparatus according to the present invention comprises: laser detection means for acquiring detection points for an object by irradiating laser light in a plurality of directions that differ from each other in the vertical direction and the horizontal direction and receiving the laser light reflected by the object; imaging means for imaging the side detected by the laser detection means; area search means for searching for an area including a vehicle from a captured image captured by the imaging means; and vehicle end specifying means for extracting the detection points detected by the laser detection means that are included in a specific area among the areas searched by the area search means, and specifying an end of the vehicle based on the extracted detection points.

  In this vehicle detection device, detection points in directions that differ vertically and horizontally are acquired by the laser detection means, and a captured image of the side detected by the laser detection means is acquired by the imaging means. The area search means then searches the captured image for an area including a vehicle. Further, the vehicle end specifying means extracts the detection points included in a specific area among the searched areas and specifies the end of the vehicle based on the extracted detection points. The specific area is an area that includes a part of the vehicle suitable for specifying the end of the vehicle. Thus, by using only the detection points within this limited specific area of the area including the vehicle, the vehicle detection device can eliminate detection points unnecessary for specifying the end of the vehicle and detect the end of the vehicle with high accuracy.

  In the vehicle detection device of the present invention, the vehicle end specifying means preferably extracts the detection points detected by the laser detection means that are included in the area below the vertical center of the area searched by the area search means, and identifies the end of the vehicle based on the extracted detection points.

  In this vehicle detection device, the vehicle end specifying means extracts the detection points included in the area below the vertical center of the searched area and identifies the end of the vehicle based on the extracted detection points. The area below the vertical center is very likely to include the vehicle bumper, which is suitable for detecting the front end or the rear end of the vehicle. By using only the detection points (in particular, the detection points on the bumper) in this limited lower area of the area including the vehicle, the vehicle detection device eliminates detection points unnecessary for specifying the end of the vehicle and can detect the end of the vehicle with even higher accuracy.

  According to the present invention, by using only the detection points within a limited specific area of the area including the vehicle, detection points unnecessary for specifying the vehicle end can be eliminated, and the vehicle end can be detected with high accuracy.

FIG. 1 is a configuration diagram of the collision prevention apparatus according to the present embodiment. FIG. 2 is a schematic diagram showing the horizontal scanning by the LIDAR and a captured image of the camera. FIG. 3 is a schematic diagram showing the vertical scanning by the LIDAR and a captured image of the camera. FIG. 4 is an example of a grouped LIDAR point sequence. FIG. 5 is an example in which a grouped LIDAR point sequence and layers are superimposed. FIG. 6 is a flowchart showing the flow of processing in the ECU of FIG. 1.

  Hereinafter, an embodiment of a vehicle detection device according to the present invention will be described with reference to the drawings. Identical or corresponding elements in the drawings are given the same reference numerals, and duplicate descriptions are omitted.

  In the present embodiment, the vehicle detection device according to the present invention is applied to a collision prevention device mounted on a vehicle. The collision prevention apparatus according to the present embodiment includes a vehicle detection apparatus for detecting other vehicles in front of the host vehicle and a braking unit for performing automatic collision avoidance braking.

  The collision prevention apparatus 1 according to the present embodiment will be described with reference to FIGS. 1 to 5. FIG. 1 is a configuration diagram of the collision prevention apparatus according to the present embodiment. FIG. 2 is a schematic diagram illustrating the horizontal scanning by the LIDAR and a captured image of the camera. FIG. 3 is a schematic diagram illustrating the vertical scanning by the LIDAR and a captured image of the camera. FIG. 4 is an example of a grouped LIDAR point sequence. FIG. 5 is an example in which the grouped LIDAR point sequence and the layers are superimposed.

  The collision prevention device 1 detects other vehicles ahead of the host vehicle and, when there is a possibility of collision with a detected vehicle, activates the brake to avoid the collision. To this end, the collision prevention device 1 includes a vehicle detection device 10 and a braking unit 20.

  The vehicle detection device 10 identifies the end portion (corresponding to the bumper) of the other vehicle ahead using detection points obtained by laser light and a captured image, and detects the position and orientation of the other vehicle from that end. In particular, to specify the end of the other vehicle with high accuracy, the vehicle detection device 10 identifies the end using only the detection points on the lower side of the rectangular area that contains the image of the other vehicle found in the captured image. For this purpose, the vehicle detection device 10 includes a LIDAR 11, a camera 12, and an ECU [Electronic Control Unit] 13.

  In the present embodiment, the LIDAR 11 corresponds to the laser detection means described in the claims, the camera 12 corresponds to the imaging means described in the claims, and the processes in the ECU 13 correspond to the area search means and the vehicle end specifying means described in the claims.

  The LIDAR 11 is a multi-layer, scanning laser radar that detects objects (in particular, vehicles) using laser light. The LIDAR 11 is attached to the center of the front end of the host vehicle. The LIDAR 11 has a mechanism for rotating its laser irradiation unit and light receiving unit by a constant angle in the left-right direction of the host vehicle, and a mechanism for rotating them by a constant angle in the up-down direction. The LIDAR 11 scans in the left-right direction by irradiating the laser beam while changing its direction by the constant angle in the left-right direction at a given vertical angle and receiving the reflected laser beam. Each time a left-right scan is completed, the LIDAR 11 changes its direction by the constant angle in the up-down direction, thereby scanning vertically as well. The LIDAR 11 scans in the left-right and up-down directions at regular intervals and transmits to the ECU 13 a radar signal consisting of data on each reflection point (each detection point) from which reflected light was received (relative distance and relative direction with respect to the host vehicle (LIDAR 11), and the like). The relative distance is calculated from the speed of the laser beam and the time difference between irradiation and reception. The relative direction is obtained from the horizontal and vertical scanning angles. This relative distance and relative direction correspond to the relative position of the detection point with respect to the host vehicle.
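  As a rough illustration of these two calculations (not taken from the patent), the sketch below converts a single echo into a relative position, assuming a simple spherical-to-Cartesian conversion from the round-trip time and the two scan angles; the function name and the axis conventions are hypothetical.

```python
import math

C = 299_792_458.0  # speed of light [m/s]

def detection_point(t_emit, t_receive, azimuth_deg, elevation_deg):
    """One LIDAR echo -> 3D point in the sensor frame (hypothetical helper).

    Relative distance comes from the round-trip time of flight; relative
    direction from the horizontal (azimuth) and vertical (elevation) scan
    angles, as the text describes.
    """
    r = C * (t_receive - t_emit) / 2.0   # round trip -> one-way range
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = r * math.cos(el) * math.sin(az)  # lateral (left-right)
    y = r * math.sin(el)                 # vertical (up-down)
    z = r * math.cos(el) * math.cos(az)  # forward
    return (x, y, z)
```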

  As shown in FIG. 2, the LIDAR 11 irradiates the laser beam LH toward the front of the host vehicle MV at every predetermined angle in the left-right direction and receives the light reflected by the other vehicle OV, thereby acquiring detection points PH, PH, ... at different horizontal positions on the other vehicle OV. Likewise, as shown in FIG. 3, the LIDAR 11 irradiates the laser beam LV at every predetermined angle in the up-down direction toward the front of the host vehicle MV and receives the light reflected by the other vehicle OV, thereby acquiring detection points PV, PV, ... at different vertical positions on the other vehicle OV. Through this horizontal and vertical scanning, the LIDAR 11 can detect one or more other vehicles OV within a predetermined range ahead of the host vehicle MV and acquire a plurality of detection points for each of them. In the present embodiment, the sequence of detection points detected by the LIDAR 11 is a three-dimensional point sequence, referred to as the LIDAR point sequence.

  The camera 12 captures images of the area ahead of the host vehicle (the side on which detection is performed by the LIDAR 11). The camera 12 is attached near the LIDAR 11 at the center of the front end of the host vehicle. The camera 12 images the front of the host vehicle at regular intervals and transmits an image signal containing the captured image information to the ECU 13. FIGS. 2 and 3 schematically show the captured image I placed between the host vehicle MV and the other vehicle OV at a predetermined position determined by the magnification setting of the camera 12 and the like. In FIGS. 2 and 3, the captured image I is drawn inclined in three dimensions for ease of understanding.

  The ECU 13 is an electronic control unit including a CPU [Central Processing Unit], ROM [Read Only Memory], RAM [Random Access Memory], and the like, and comprehensively controls the vehicle detection device 10. The ECU 13 receives the radar signal from the LIDAR 11 and acquires the detection point data (LIDAR point sequence), and receives the image signal from the camera 12 and acquires the captured image. Based on the relative distance and relative direction (relative position) of each detection point, the ECU 13 converts the detection point from the LIDAR 11 into a position (x, y) in the camera coordinate system with the lens center of the camera 12 as the origin (i.e., projects it onto the captured image). The ECU 13 then specifies the end portion (bumper portion) of the other vehicle based on the detection point data (LIDAR point sequence) and the captured image, and estimates the position and orientation of the other vehicle from that end. Further, the ECU 13 transmits an other-vehicle information signal containing the estimated position and orientation of the other vehicle to the braking unit 20.
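  The patent does not state the projection formula. A minimal sketch under the usual pinhole-camera assumption might look as follows; the intrinsic parameters fx, fy, cx, cy are assumed to come from calibration and are not part of the patent.

```python
def project_to_image(point_xyz, fx, fy, cx, cy):
    """Project a 3D detection point, expressed in the camera frame with
    the lens center as origin, onto the captured image using a pinhole
    model. fx, fy are focal lengths in pixels; cx, cy is the principal
    point (all assumed known from calibration).
    """
    x, y, z = point_xyz
    if z <= 0:
        return None                      # behind the lens: not visible
    u = fx * x / z + cx
    v = fy * y / z + cy
    return (u, v)
```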

  The ECU 13 performs a grouping process on the LIDAR point sequence (all detection points), applying a conventional method. As a result, the LIDAR point sequence is grouped by vehicle. FIG. 4 shows an example of a LIDAR point sequence G grouped for another vehicle OV traveling diagonally ahead of the host vehicle; the grouped sequence is shown projected onto the captured image I in the camera coordinate system. The grouped LIDAR point sequence G consists of a plurality of detection points obtained by receiving reflected waves from the other vehicle OV, acquired in different vertical and horizontal directions within the angular range in which the other vehicle OV exists. The following processing is performed for each grouped LIDAR point sequence.
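  The grouping is attributed only to "a conventional method". One plausible stand-in is a simple Euclidean-distance clustering, sketched below under that assumption; the gap threshold is illustrative.

```python
def group_points(points, max_gap=1.0):
    """Greedy Euclidean clustering of 3D detection points: a point joins
    the first group containing a member closer than max_gap [m],
    otherwise it starts a new group. A simplified stand-in for the
    unspecified 'conventional method'.
    """
    groups = []
    for p in points:
        for g in groups:
            if any(sum((a - b) ** 2 for a, b in zip(p, q)) < max_gap ** 2
                   for q in g):
                g.append(p)
                break
        else:
            groups.append([p])
    return groups
```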

  The ECU 13 extracts a rectangular area containing the vehicle image from the captured image. Here, pattern matching with a vehicle template is performed on the peripheral region around the grouped LIDAR point sequence projected onto the captured image, and an image containing a vehicle is searched for. A rectangular area containing the found vehicle image is then extracted. This rectangular area is extracted so that its left and right ends roughly coincide with the left and right ends of the vehicle image and its upper and lower ends roughly coincide with the upper and lower ends of the vehicle image. The size of the rectangular area changes according to the relative positional relationship between the host vehicle and the other vehicle, becoming larger as the other vehicle gets closer. The vehicle image search method may be any other method, and the search range may be all or part of the captured image regardless of the position of the grouped LIDAR point sequence.
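  As a sketch of the region search, normalized cross-correlation template matching (here via OpenCV's matchTemplate) could play the role of the pattern matching described above; the acceptance threshold and the single fixed-scale template are assumptions, since the patent only requires that some pattern matching with a vehicle template be performed.

```python
import cv2

def find_vehicle_region(image, template, search_rect, threshold=0.6):
    """Search search_rect = (x, y, w, h) of image for the vehicle
    template by normalized cross-correlation. Returns the rectangular
    area (x, y, w, h) around the best match, or None if the match is
    too weak. Threshold and fixed template scale are assumptions.
    """
    x, y, w, h = search_rect
    roi = image[y:y + h, x:x + w]
    scores = cv2.matchTemplate(roi, template, cv2.TM_CCOEFF_NORMED)
    _, best, _, (bx, by) = cv2.minMaxLoc(scores)
    if best < threshold:
        return None
    th, tw = template.shape[:2]
    return (x + bx, y + by, tw, th)
```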

  FIG. 5 shows the rectangular area RA extracted around the grouped LIDAR point sequence G of FIG. 4. Here, the grouped LIDAR point sequence and the plurality of layers included in the rectangular area RA are superimposed. In this example, the rectangular area RA contains eight layers L1 to L8.

  Note that the layers are set according to the constant-angle vertical scanning of the LIDAR 11, and their number corresponds to the number of vertical scan lines of the LIDAR 11. For example, when the vertical scanning range is 30° and scanning is performed at a constant angle of 1°, 30 layers are set. As described above, since the size of the rectangular area changes according to the relative positional relationship between the host vehicle and the other vehicle, the number of layers included in the rectangular area containing the vehicle also changes with that relative positional relationship, increasing as the other vehicle gets closer. The layers lie at different heights in the vertical direction, each extending in the horizontal direction at its height. In the present embodiment, each layer is a region for counting the number of LIDAR points (detection points) at the same height.

  The ECU 13 extracts the lower layers from the rectangular area. The lower layers to be extracted are, for example, the layers included in the lower 1/2, 1/3, or 1/4 of the rectangular region measured from its lower end. In the example illustrated in FIG. 5, the four layers L1 to L4 from the lower end are extracted from the eight layers L1 to L8 included in the rectangular area RA.
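  A minimal sketch of the lower-layer extraction, assuming layers are indexed from the lower end of the rectangular area upward (1 = lowest) and taking the fraction (1/2, 1/3, or 1/4) as a parameter:

```python
def lower_layers(points_by_layer, fraction=0.5):
    """Keep the layers in the lower `fraction` of the rectangular area
    (1/2, 1/3 or 1/4 as the text suggests). points_by_layer maps a
    layer index (1 = lowest) to the detection points at that height;
    with 8 layers and fraction 1/2 this keeps L1-L4, matching the
    FIG. 5 example.
    """
    keep = max(1, int(len(points_by_layer) * fraction))
    return {i: pts for i, pts in points_by_layer.items() if i <= keep}
```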

  The lower layers are extracted in this way so that only the lower part of the rectangular area is processed, in order to reliably extract only the LIDAR point sequence for the bumper located in the lower part of the vehicle. The bumper protrudes from the vehicle body and extends to the left and right ends of the vehicle, so the end of the vehicle can be specified with high accuracy by performing straight-line fitting or the like using only the LIDAR point sequence for the bumper. In the upper part of the vehicle there are parts such as the roof that also extend to the left and right ends of the vehicle, and the LIDAR point sequences for these parts, which are unnecessary for specifying the end of the vehicle, must be reliably eliminated.

  The ECU 13 counts the LIDAR points (detection points) included in each extracted lower layer. In the example shown in FIG. 5, the layer L1 contains zero LIDAR points, the layer L2 contains two LIDAR points P2, P2, the layer L3 contains six LIDAR points P3, ..., P3, and the layer L4 contains two LIDAR points P4, P4. Incidentally, the layer L5 contains two LIDAR points P5, P5, the layer L6 contains one LIDAR point P6, the layer L7 contains zero, and the layer L8 contains three LIDAR points P8, P8, P8; however, the point counts of these layers above the extracted region are not used in the following processing.

  The ECU 13 compares the point counts of the lower layers and determines the layer with the largest count (hereinafter, the "most frequent layer"). The ECU 13 then extracts the LIDAR point sequence of the most frequent layer. This point sequence corresponds to the surface of the other vehicle that faces the host vehicle (the opposing surface) and can be regarded as the detection points for the bumper of the other vehicle. In the example shown in FIG. 5, the point counts of the layers L1, L2, L3, and L4 are 0, 2, 6, and 2, respectively, so the layer L3 is determined to be the most frequent layer, and the LIDAR point sequence P3, ..., P3 included in the layer L3 is extracted. This point sequence of the layer L3 consists of the detection points for the bumper of the other vehicle OV. When a layer adjacent above or below the most frequent layer also contains many points, that adjacent layer may be extracted as well.
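  The count-and-select step then reduces to taking the layer with the maximum point count over the extracted lower layers; a minimal sketch:

```python
def most_frequent_layer(points_by_layer):
    """Return the layer index with the largest point count and its point
    sequence, taken as the bumper detection points. For the FIG. 5
    example, counts {1: 0, 2: 2, 3: 6, 4: 2} select layer 3.
    """
    best = max(points_by_layer, key=lambda i: len(points_by_layer[i]))
    return best, points_by_layer[best]
```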

  The bumper presents a surface that directly faces the laser beam emitted from the LIDAR 11 (the opposing surface of the other vehicle) and can reflect the emitted laser beam back toward the LIDAR 11. Therefore, the layer corresponding to the bumper contains many LIDAR points (detection points). In contrast, the parts of the vehicle that may fall within the lower layers, such as the area below the bumper, the front grille, the headlights, and the bonnet, are inclined with respect to the laser beam and therefore reflect little light back toward the LIDAR 11. Consequently, the layers corresponding to these parts contain few LIDAR points. The parts of the vehicle that fall within the upper layers are excluded from the comparison of layer point counts. Hence, the LIDAR point sequence of the most frequent layer can be regarded as the detection points for the bumper of the other vehicle.

  The ECU 13 fits a straight line to the LIDAR point sequence of the most frequent layer. The ECU 13 then extracts, from the fitted straight line, the line segment whose end points are the left and right end points of that point sequence. Further, the ECU 13 fits this line segment as the front or rear surface (end portion) of the vehicle by enlarging or reducing a rectangular template corresponding to a general vehicle shape. The ECU 13 then calculates the center position and orientation of the other vehicle from the center position and orientation of the sized rectangle. Other methods may be used to obtain the center position and orientation of the other vehicle from the LIDAR point sequence of the most frequent layer; for example, a rectangular template corresponding to a general vehicle shape may be fitted directly to the point sequence.
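  A sketch of the fitting step, assuming a least-squares line fit in the ground plane (lateral x, forward z) and deriving the segment, center, and heading directly from the fitted line; the rectangular-template sizing step is omitted here.

```python
import numpy as np

def fit_bumper_line(points_xz):
    """Least-squares fit z = a*x + b to the most-frequent-layer points
    in the ground plane, then take the segment between the leftmost and
    rightmost points; its midpoint and slope give a center and heading
    for the opposing surface. Assumes the bumper is not seen edge-on
    (a near-vertical line in x would need another parameterization).
    """
    pts = np.asarray(points_xz, dtype=float)
    a, b = np.polyfit(pts[:, 0], pts[:, 1], 1)
    x_l, x_r = pts[:, 0].min(), pts[:, 0].max()
    segment = ((x_l, a * x_l + b), (x_r, a * x_r + b))
    center = ((x_l + x_r) / 2.0, a * (x_l + x_r) / 2.0 + b)
    heading = np.arctan(a)   # inclination of the end surface [rad]
    return segment, center, heading
```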

  When there is a possibility of a collision with another vehicle, the braking unit 20 controls a brake (not shown) to apply a braking force to the host vehicle. When the braking unit 20 receives the other-vehicle information signal from the ECU 13, it acquires the position and orientation of the other vehicle from that signal. The braking unit 20 then determines whether the host vehicle may collide with the other vehicle based on that position and orientation, and operates the brake to decelerate the host vehicle when it determines that a collision is possible. For example, a brake actuator that adjusts the brake hydraulic pressure of each wheel cylinder is provided, and this actuator is controlled to adjust the hydraulic pressure.
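  The collision criterion itself is left unspecified. Purely as a hypothetical illustration, a check based on lateral offset and the time needed to reach the other vehicle might look like this; every parameter is an assumption.

```python
def collision_possible(other_center, own_speed,
                       lateral_margin=1.5, ttc_threshold=2.0):
    """Hypothetical collision check (the patent leaves the criterion to
    the braking unit 20): brake when the other vehicle's end lies
    within the host's corridor and would be reached within
    ttc_threshold seconds. All parameters are illustrative.
    """
    x, z = other_center          # lateral offset, forward distance [m]
    if abs(x) > lateral_margin or own_speed <= 0.0:
        return False
    return z / own_speed < ttc_threshold
```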

  The operation of the collision prevention apparatus 1, and in particular the processing in the ECU 13, will be described along the flowchart of FIG. 6. FIG. 6 is a flowchart showing the flow of processing in the ECU of FIG. 1. The ECU 13 repeatedly executes the following processing from engine start until engine stop.

  At regular intervals, the LIDAR 11 irradiates the laser beam while changing its direction by a constant angle in the vertical and horizontal directions, receives the reflected laser beam, and transmits a radar signal composed of detection point data to the ECU 13. The ECU 13 receives this radar signal and acquires the detection point data (LIDAR point sequence) (S1).

  The camera 12 images the front of the host vehicle at regular intervals, and transmits an image signal including captured image information to the ECU 13. The ECU 13 receives this image signal and acquires a captured image (S2).

  The ECU 13 performs a grouping process on all acquired LIDAR point sequences to obtain each grouped LIDAR point sequence (S3).

  For each grouped LIDAR point sequence, the ECU 13 performs an image recognition process on the captured image, and extracts a rectangular area including an image of another vehicle corresponding to the grouped LIDAR point sequence (S4).

  The ECU 13 extracts the lower layers from the layers included in the extracted rectangular area (S5). Further, the ECU 13 counts the LIDAR points included in each extracted lower layer to obtain each layer's point count (S6). The ECU 13 compares the point counts of the layers to determine the most frequent layer, and extracts the LIDAR point sequence belonging to it (S7).

  The ECU 13 identifies the end of the other vehicle based on the LIDAR point sequence of the most frequent layer (S8) and estimates the position and orientation of the other vehicle from that end (S9). Then, the ECU 13 transmits an other-vehicle information signal containing the position and orientation of the other vehicle to the braking unit 20.
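  Tying the sketches above together, one cycle of S1 to S9 might be orchestrated as follows. split_into_layers is an additional hypothetical helper that bins a group's points into constant-height layers, and searching the whole captured image (an option the text explicitly allows) stands in for the peripheral-region search.

```python
def split_into_layers(points, n_layers=8):
    """Bin a group's 3D points into n_layers constant-height layers
    (index 1 = lowest); a stand-in for the LIDAR's scan layers.
    """
    ys = [p[1] for p in points]
    lo, hi = min(ys), max(ys)
    step = (hi - lo) / n_layers or 1.0
    layers = {i: [] for i in range(1, n_layers + 1)}
    for p in points:
        i = min(n_layers, int((p[1] - lo) / step) + 1)
        layers[i].append(p)
    return layers

def process_cycle(lidar_points, image, template):
    """One ECU cycle following FIG. 6: group (S3), search (S4), extract
    lower layers (S5), count and select (S6-S7), fit and estimate pose
    (S8-S9). Returns a (center, heading) estimate per detected vehicle.
    """
    h, w = image.shape[:2]
    results = []
    for group in group_points(lidar_points):
        rect = find_vehicle_region(image, template, (0, 0, w, h))
        if rect is None:
            continue
        lower = lower_layers(split_into_layers(group))
        _, bumper = most_frequent_layer(lower)
        if len(bumper) < 2:
            continue
        _, center, heading = fit_bumper_line([(x, z) for x, _, z in bumper])
        results.append((center, heading))
    return results
```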

  When the braking unit 20 receives the other-vehicle information signal, it determines whether the host vehicle may collide with the other vehicle based on the position and orientation of the other vehicle, and activates the brake when it determines that a collision is possible. As a result, the host vehicle decelerates and the collision can be avoided.

  According to this collision prevention device 1 (in particular, the vehicle detection device 10), by using only the detection points in the limited lower region of the rectangular area containing the other vehicle, detection points for parts unnecessary for specifying the end of the vehicle can be eliminated, and the end of the other vehicle (and thus its position and orientation) can be detected with high accuracy.

  The collision prevention apparatus 1 compares the point counts of the layers included in the lower part of the rectangular area and extracts the layer with the largest count, so that the end of the other vehicle can be identified with high accuracy using only the detection points for the bumper.

  Although an embodiment according to the present invention has been described above, the present invention is not limited to the above embodiment and can be implemented in various forms.

  For example, in the present embodiment the present invention is applied to the vehicle detection device included in a collision prevention device, but it may also be a stand-alone vehicle detection device or a vehicle detection device included in another system such as an ACC system.

  Further, although the present embodiment is applied to detecting other vehicles ahead of the host vehicle, the present invention can also be applied to detecting other vehicles present in directions other than ahead, such as to the side of or behind the host vehicle.

  Also, in the present embodiment, the lower layers among the vertically arranged layers within the rectangular area of the vehicle are extracted, and the layer with the largest point count among them is used to identify the end of the other vehicle. However, the LIDAR points (detection points) in the lower part of the rectangular area may instead be extracted without using layers, and the end of the other vehicle may be specified using the extracted LIDAR points. In addition, when a part other than the bumper is targeted as the end of the vehicle, the end of the vehicle may be detected using LIDAR points on another side of the rectangular area, such as the upper, left, or right side, instead of the lower side.

  DESCRIPTION OF SYMBOLS 1 ... Collision prevention apparatus, 10 ... Vehicle detection apparatus, 11 ... LIDAR, 12 ... Camera, 13 ... ECU, 20 ... Braking part.

Claims (2)

  1. A vehicle detection apparatus comprising:
    laser detection means for acquiring detection points for an object by irradiating laser light in a plurality of directions that differ from each other in the vertical direction and the horizontal direction and receiving the laser light reflected by the object;
    imaging means for imaging the side detected by the laser detection means;
    area search means for searching for an area including a vehicle from a captured image captured by the imaging means; and
    vehicle end specifying means for extracting the detection points detected by the laser detection means that are included in a specific area among the areas searched by the area search means, and specifying an end of the vehicle based on the extracted detection points.
  2. The vehicle detection apparatus according to claim 1, wherein the vehicle end specifying means extracts the detection points detected by the laser detection means that are included in the area below the vertical center of the area searched by the area search means, and specifies the end of the vehicle based on the extracted detection points.
JP2010006100A 2010-01-14 2010-01-14 Vehicle detector Pending JP2011145166A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2010006100A 2010-01-14 2010-01-14 Vehicle detector

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2010006100A 2010-01-14 2010-01-14 Vehicle detector

Publications (1)

Publication Number Publication Date
JP2011145166A (en) 2011-07-28

Family

ID=44460153

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2010006100A Pending JP2011145166A (en) 2010-01-14 2010-01-14 Vehicle detector

Country Status (1)

Country Link
JP (1) JP2011145166A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018060572A (en) * 2012-09-05 2018-04-12 ウェイモ エルエルシー Construction zone detection using plural information sources
US10436898B2 (en) 2013-12-26 2019-10-08 Hitachi, Ltd. Object recognition device
US10466709B2 (en) 2013-11-08 2019-11-05 Hitachi, Ltd. Autonomous driving vehicle and autonomous driving system

