WO2020071133A1 - Sign recognition device - Google Patents

Sign recognition device

Info

Publication number
WO2020071133A1
Authority
WO
WIPO (PCT)
Prior art keywords
sign
unit
candidate
dimensional object
recognition
Prior art date
Application number
PCT/JP2019/036681
Other languages
English (en)
Japanese (ja)
Inventor
雄飛 椎名
永崎 健
啓佑 岩崎
Original Assignee
日立オートモティブシステムズ株式会社 (Hitachi Automotive Systems, Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi Automotive Systems, Ltd. (日立オートモティブシステムズ株式会社)
Priority to JP2020550281A (granted as JP7145227B2)
Priority to CN201980063009.8A (published as CN112840349A)
Publication of WO2020071133A1

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 - Traffic control systems for road vehicles
    • G08G 1/09 - Arrangements for giving variable traffic instructions

Definitions

  • The present invention relates to a sign recognition device that recognizes road signs.
  • An in-vehicle camera device observes various objects around the vehicle (people, cars, three-dimensional objects, white lines, road surfaces, signs, and the like) in detail, grasping visual information from images and distance information to the objects simultaneously, and thereby contributes to improved safety during driving assistance.
  • A sign is one of the targets of object recognition for an in-vehicle camera device.
  • The sign recognition function is used, for example, for accelerating or decelerating a vehicle that performs automatic driving in cooperation with map information.
  • Euro NCAP (updated from 2016 to 2020), an evaluation index for advanced driving assistance systems, also includes an evaluation item relating to the Speed Assist System (SAS), so the importance of sign recognition is increasing.
  • A device for recognizing a sign is described in Patent Document 1.
  • The road sign recognition function of an in-vehicle camera device must output recognition results for target road signs under various driving conditions, such as urban areas and expressways, while improving the accuracy of not outputting a recognition result for objects that are not road signs; the latter is an object of the present invention.
  • Since the device of Patent Document 1 identifies signs from their appearance by image processing, an object that closely resembles a target road sign, such as a signboard resembling a road sign installed in an urban area or on an expressway, or a speed compliance sticker attached to the rear of a large vehicle such as a truck, may be erroneously recognized as a road sign.
  • The present invention has been made to solve the above problem, and its object is to provide a sign recognition device that suppresses erroneous recognition, as a road sign, of an object such as a speed compliance sticker that closely resembles a road sign.
  • To this end, the present invention provides a sign recognition device comprising: at least one imaging unit; a sign estimation unit that estimates a sign candidate from an image obtained by the imaging unit; a three-dimensional object recognition unit that recognizes a predetermined three-dimensional object included in the imaging range of the imaging unit; and a sign determination unit that determines whether or not the sign candidate is a road sign from information on the estimation of the sign candidate by the sign estimation unit and information on the recognition of the three-dimensional object by the three-dimensional object recognition unit.
  • According to the sign recognition device of the present invention, it is possible to suppress the output of an erroneous sign recognition result for an object that closely resembles a target sign and appears under various driving conditions, such as in an urban area or on a highway.
  • FIG. 1 is a block diagram showing a schematic configuration of a sign recognition device according to an embodiment of the invention. FIG. 2 is a flowchart of processing by the camera device. FIG. 3 is a flowchart showing the processing content of the sign determination of FIG. 2. FIG. 4A is a diagram of an example of road sign candidates provided behind a bus, together with enlarged images of the sign candidates. FIG. 5 is a diagram of road sign candidates provided on the road. FIG. 6 is a diagram of another example of a road sign candidate provided behind a bus. FIG. 7 is a top view of the road sign candidate provided behind the bus.
  • FIG. 1 is a block diagram illustrating a schematic configuration of a sign recognition device according to an embodiment of the present invention.
  • The sign recognition device 1 according to the present embodiment is mounted on a vehicle and recognizes the environment outside the vehicle based on image processing of a target area photographed in front of the vehicle.
  • The sign recognition device 1 has a function of recognizing, for example, white lines on the road, pedestrians, vehicles, other three-dimensional objects, traffic lights, road signs, lighting lamps, and the like by image processing, and of determining whether a road sign is genuine.
  • A function of adjusting the brake and steering of the vehicle equipped with the sign recognition device 1 can also be provided, but this function can be omitted.
  • The sign recognition device 1 is a road sign recognition system mounted on a vehicle, and includes a camera device 100, a CAN (Controller Area Network) 110, an output device 114, and the like.
  • The camera device 100 includes a left camera 101, a right camera 102, an image input interface 103, an image processing unit 104, an arithmetic processing unit 105, a storage unit 106, a CAN interface 107, a control processing unit 108, and a bus 109.
  • The image input interface 103, the image processing unit 104, the arithmetic processing unit 105, the storage unit 106, the CAN interface 107, and the control processing unit 108 can be configured by a single computer unit or a plurality of computer units. This embodiment exemplifies a case where a single computer unit is used.
  • The image processing unit 104, the control processing unit 108, the storage unit 106, the arithmetic processing unit 105, the image input interface 103, and the CAN interface 107 are interconnected via the internal bus 109 of the camera device 100.
  • The camera device 100 is connected to other computer units and to the output device 114 via the CAN 110 described above. Hereinafter, each part will be described in order.
  • The left camera 101 and the right camera 102 are arranged on the left and right to acquire image information.
  • The left camera 101 and the right camera 102 are mounted on the vehicle at a set left-right interval so that predetermined areas are included in their respective fields of view.
  • The image input interface 103 controls the imaging of the left camera 101 and the right camera 102 and captures the photographed images.
  • The image data captured through the image input interface 103 is sent to, for example, the image processing unit 104 and the arithmetic processing unit 105 via the bus 109.
  • The image processing unit 104 compares the first image obtained from the image sensor of the left camera 101 with the second image obtained from the image sensor of the right camera 102, corrects device-specific deviations originating in the respective image sensors, performs image correction such as noise interpolation, and stores the results in the storage unit 106. It also finds mutually corresponding portions of the first image and the second image, calculates parallax information (the difference between the two directions in which the same point is seen from two different viewpoints), and stores it in the storage unit 106.
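As an illustration of how parallax can be computed from corresponding portions of the two images, the sketch below uses deliberately naive SAD block matching. This is an assumption for illustration only: the patent does not specify the matching algorithm, and `block_matching_disparity` and its parameters are hypothetical names. Distance then follows from disparity as Z = f·B/d (focal length times baseline over disparity).

```python
import numpy as np

def block_matching_disparity(left, right, block=5, max_disp=16):
    """For each pixel of the left image, search along the same row of the
    right image for the block with the smallest sum of absolute differences
    (SAD); the horizontal offset of the best match is the disparity.
    Naive O(h*w*max_disp) sketch, no rectification or sub-pixel refinement."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.int32)
            best_cost, best_d = None, 0
            # Only offsets that keep the right-image block inside the frame.
            for d in range(min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1].astype(np.int32)
                cost = np.abs(patch - cand).sum()
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

Production stereo pipelines replace this brute-force search with cost aggregation (e.g. semi-global matching), which is why the patent leaves the concrete method to the implementation.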
  • The arithmetic processing unit 105 includes a sign estimation unit 111, a three-dimensional object recognition unit 112, and a sign determination unit 113.
  • The sign estimation unit 111 estimates a sign candidate from the images captured by the left camera 101 and the right camera 102 and stored in the storage unit 106.
  • The image used as the basis for estimating a sign candidate may be either an image taken by the left camera 101 or an image taken by the right camera 102, or images taken by both cameras may be used.
  • Specifically, a shape recognized by image processing is compared with, for example, a database of road signs stored in the storage unit 106, and a shape having at least a certain degree of matching with any of the road signs is estimated to be a road sign candidate.
  • The three-dimensional object recognition unit 112 recognizes a three-dimensional object included in the imaging range of the left camera 101 and the right camera 102 from the parallax information calculated by the image processing unit 104. Specifically, a three-dimensional object is recognized from the images obtained by the left camera 101 and the right camera 102 by stereo image processing. The three-dimensional object recognition unit 112 uses the images and the parallax information (distance information for each point on the image) stored in the storage unit 106 to recognize the predetermined three-dimensional objects necessary for perceiving the environment around the vehicle.
  • The three-dimensional objects to be recognized are people, cars, and other obstacles such as traffic lights, road signs, car tail lamps, and headlights.
  • A database (for example, reference data for identification) of the target three-dimensional objects is stored in, for example, the storage unit 106, and the three-dimensional object recognition unit 112 extracts a cut-out image of each subject from the input captured images and their disparity information and recognizes three-dimensional objects based on the database.
  • The sign determination unit 113 determines whether or not the sign candidate estimated by the sign estimation unit 111 is a road sign based on the information on the estimation of the sign by the sign estimation unit 111 and the information on the recognition of the three-dimensional object by the three-dimensional object recognition unit 112.
  • The recognition result of the road sign and the intermediate calculation results (for example, the calculation results of the sign estimation unit 111 and the three-dimensional object recognition unit 112) produced by the arithmetic processing unit 105 are stored in the storage unit 106 as appropriate. This determination algorithm will be described later.
  • The storage unit 106 is, for example, a memory, and stores information obtained by the image processing unit 104 and the arithmetic processing unit 105.
  • The CAN interface 107 is an input/output unit for the CAN 110; calculation information of the camera device 100 is output to the CAN 110 via the CAN interface 107 and from there to the control system of the own vehicle. Specifically, when the sign determination unit 113 determines that a sign candidate is a road sign, the CAN interface 107 outputs the sign recognition result to the output device 114 via the CAN 110.
  • The control processing unit 108 plays the role of monitoring the operation of the camera device 100, checking whether each processing unit has performed an abnormal operation and whether an error has occurred during data transfer, and preventing abnormal operation.
  • The determination result of the road sign calculated by the camera device 100 as described above is transmitted via the CAN 110 to the output device 114 and to other on-vehicle computer units (for example, a unit that executes vehicle control).
  • The output device 114 is a device mounted in the driver's cab, for example, a monitor, a lamp, a buzzer, or a speaker, that outputs a display or sound, and it visually or audibly informs the occupants of a road sign based on information input from the camera device 100.
  • The method of notification may be such that only the presence or absence of a road sign is notified, or the type of road sign may also be notified together.
  • FIG. 2 is a flowchart of processing by the camera device 100.
  • The in-vehicle camera device 200 shown in FIG. 2 schematically represents the camera device 100 of FIG. 1.
  • First, imaging 201 by the left camera 101 and imaging 202 by the right camera 102 are performed.
  • The captured image data 203 and 204 undergo image processing, such as correction for absorbing the peculiarities of each image sensor, and stereo image processing 205, such as calculating the measured distance (parallax information) to objects in front, is then performed.
  • This stereo image processing 205 is executed by the image processing unit 104.
  • Next, three-dimensional object detection 206, which is image processing such as predetermined clipping from the stereo image or gradation image, is performed.
  • The measured distance result 207, the result of the three-dimensional object detection 206, and the like are stored in the storage unit 106 of FIG. 1.
  • Then, object recognition 208 is performed as image processing.
  • The processing of the three-dimensional object detection 206 and the object recognition 208 is performed by the three-dimensional object recognition unit 112 of FIG. 1.
  • The three-dimensional objects recognized here are people, cars, and other obstacles such as traffic lights, road signs, car tail lamps, and headlights.
  • The object recognition result 216 (including information on the recognized three-dimensional objects and other parallax information not recognized as a three-dimensional object) obtained in the processing of the three-dimensional object detection 206 and the object recognition 208 is stored in the storage unit 106 by a storage process 215.
  • The object recognition result 216 of the three-dimensional object recognition unit 112 is stored in the storage unit 106 and at the same time output via the CAN 110 to other computer units, and also passed to the sign determination unit 113.
  • In parallel, sign recognition processing 209 is performed using a monocular image as input.
  • The sign recognition processing 209 is made up of four processes: sign detection 210, which extracts circular objects from the input image; sign identification 211, which specifies the type of a circular object; sign tracking 212, which associates candidates between images; and sign determination 213, which makes a comprehensive judgment over a plurality of frames.
  • In the present embodiment, a circular shape is assumed as the outer shape of the road sign to be determined.
  • In the sign detection 210, a shape whose degree of matching with a perfect circle is estimated by image processing to be equal to or greater than a certain value becomes a sign candidate.
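The "degree of matching with a perfect circle" is not defined in the text. One hedged sketch, shown below under that assumption, scores a detected contour by how little its points' distances to the centroid vary (exactly 1.0 for a perfect circle); the function names and the 0.9 threshold are illustrative, not from the patent.

```python
import math

def circularity(points):
    """Roundness score for a closed contour given as (x, y) points:
    1.0 for a perfect circle, lower as the shape deviates.  Computed from
    the spread of contour-point distances around the centroid."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    radii = [math.hypot(x - cx, y - cy) for x, y in points]
    mean_r = sum(radii) / n
    var = sum((r - mean_r) ** 2 for r in radii) / n
    return 1.0 - min(1.0, math.sqrt(var) / mean_r)

def is_sign_candidate(points, threshold=0.9):
    """Keep the contour as a sign candidate when its roundness clears
    a fixed matching threshold."""
    return circularity(points) >= threshold
```

A production detector would more likely use a Hough circle transform or a learned detector, but the thresholding structure is the same.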
  • In the sign identification 211, the sign candidates detected in the sign detection 210 are compared with, for example, a sign database (recognition dictionary) stored in the storage unit 106, and the type of each sign candidate (which sign the candidate may be) is specified.
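The comparison against a recognition dictionary could, for example, score a candidate patch against stored templates by normalized cross-correlation. The dictionary format and the `classify_candidate` / `min_score` names are hypothetical; the patent only says that candidates are compared with a sign database.

```python
import numpy as np

def classify_candidate(patch, dictionary, min_score=0.6):
    """Return the dictionary label whose template best matches the patch
    by normalized cross-correlation (1.0 = identical up to brightness and
    contrast), or None if nothing clears the score floor."""
    p = (patch - patch.mean()) / (patch.std() + 1e-9)
    best_label, best_score = None, min_score
    for label, template in dictionary.items():
        t = (template - template.mean()) / (template.std() + 1e-9)
        score = float((p * t).mean())
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```

Normalizing each patch makes the score invariant to overall brightness, which matters for signs seen under varying lighting.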
  • In the sign tracking 212, a sign candidate, which can be displaced within the camera field of view as the vehicle travels, is tracked by association between consecutive frames (camera images) by image processing.
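Frame-to-frame association is not spelled out in the text; a common minimal approach, shown here as an assumption, greedily matches candidate bounding boxes between consecutive frames by intersection-over-union (IoU).

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def associate(prev_boxes, curr_boxes, min_iou=0.3):
    """Greedy frame-to-frame association of sign-candidate boxes:
    each previous box takes the still-unclaimed current box with the
    highest IoU above the floor.  Returns (prev_index, curr_index) pairs."""
    matches, used = [], set()
    for i, p in enumerate(prev_boxes):
        best_j, best = None, min_iou
        for j, c in enumerate(curr_boxes):
            if j in used:
                continue
            score = iou(p, c)
            if score > best:
                best_j, best = j, score
        if best_j is not None:
            matches.append((i, best_j))
            used.add(best_j)
    return matches
```

Real trackers usually add motion prediction so that fast-moving candidates still overlap their predicted boxes, but the association step keeps this shape.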
  • Each process up to the sign detection 210, the sign identification 211, and the sign tracking 212 is executed by, for example, the sign estimation unit 111.
  • In the sign determination 213, it is determined from a plurality of frames of images whether the sign candidate is a real road sign.
  • A feature of the sign determination 213 is that it is executed by the sign determination unit 113 based on the object recognition result 216 obtained in the object recognition 208 processing by the three-dimensional object recognition unit 112.
  • Specifically, whether a candidate is a true road sign is determined from the inclusion relationship and the positional relationship between the three-dimensional objects and other parallax information recognized by the three-dimensional object recognition unit 112 and the sign candidates estimated by the sign estimation unit 111.
  • The details of the sign determination process will be described later.
  • Finally, a storage process 214 saves the sign recognition result obtained in the above sign recognition processing 209 in the storage unit 106.
  • The result of the sign recognition by the sign recognition processing 209 is stored in the storage unit 106 and is also output via the CAN 110 to other computer units and to the output device 114.
  • For an object judged not to be a true road sign, the sign recognition result is not output to the output device 114; only for an object judged to be a true road sign is the sign recognition result output to the output device 114 and notified to the occupants.
  • FIG. 3 is a flowchart showing the processing contents of the sign determination in FIG.
  • The processing flow of the sign determination 213 of the present invention will be described below with reference to FIG. 3.
  • Step 501: Using the result of the sign tracking 212 in FIG. 2 as input, the results of identifying the sign candidate in a plurality of images are acquired. Thereafter, the processing moves to step 502.
  • Step 502: A final identification result is output in consideration of the identification results of the acquired images.
  • For example, a majority vote of the per-image identification results may be taken, or the result may be determined in consideration of the reliability obtained together with each identification result. Note that the lower the traveling speed of the host vehicle, the more image identification results can be obtained and the less the images blur, which is an advantageous situation. Thereafter, the processing moves to step 503.
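The two fusion options mentioned (a plain majority vote, or weighting by the reliability returned with each identification) can be sketched in one helper. The `(label, reliability)` tuple format is an assumption for illustration:

```python
from collections import defaultdict

def fuse_identifications(frame_results):
    """frame_results: one (label, reliability) pair per frame.
    Sums reliability per label and returns the best-scoring label; with
    all reliabilities equal to 1.0 this reduces to a plain majority vote."""
    scores = defaultdict(float)
    for label, reliability in frame_results:
        scores[label] += reliability
    return max(scores, key=scores.get)
```

This shows why more frames (i.e. a slower host vehicle) help: a single misidentified frame is outvoted by the others.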
  • Step 503: As a first check, the degree of overlap between the sign candidate and the three-dimensional objects recognized by the three-dimensional object recognition unit 112, that is, the inclusion relationship between the sign candidate and the three-dimensional objects, is checked. If the sign candidate is entirely included in a three-dimensional object, it is judged that the candidate is unlikely to be a road sign installed on the road, and it is not output as a sign result. Thereafter, the process ends.
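With boxes represented as (x1, y1, x2, y2) image rectangles, the step-503 inclusion check can be sketched as follows. The box representation and function names are assumptions; the patent speaks only of an inclusion relationship between sign candidates and recognized three-dimensional objects.

```python
def contains(outer, inner):
    """True if box `inner` lies entirely inside box `outer` (x1, y1, x2, y2)."""
    return (outer[0] <= inner[0] and outer[1] <= inner[1]
            and inner[2] <= outer[2] and inner[3] <= outer[3])

def step_503_filter(candidates, objects):
    """Drop every sign candidate fully contained in a recognized
    three-dimensional object (e.g. the rear surface of a bus): a genuine
    roadside sign would not be inside such an object."""
    return [c for c in candidates
            if not any(contains(o, c) for o in objects)]
```

Candidates that survive this filter proceed to step 504 for the shielding-object check.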
  • Step 504: This step is a process for detecting a sign candidate that could be recognized as a road sign in step 503 but is not actually a road sign (that is, a determination omission in step 503); for example, it is determined whether the position of the candidate is outside the left and right sides of the image, or how it relates to a shielding object.
  • Here, a shielding object refers to parallax information that could not be matched with the three-dimensional object database by the three-dimensional object recognition unit 112 and was therefore not recognized as a predetermined three-dimensional object.
  • One typical example of parallax information treated as a shielding object is an object that actually is a predetermined three-dimensional object but, although its parallax information is input, does not appear completely in the captured image and is therefore not recognized as such.
  • Step 505: The sign determination 213 outputs a sign determination result indicating that the sign candidate is a road sign. Thereafter, the process ends.
  • FIG. 4A is a diagram of an example of a road sign candidate provided behind a bus.
  • The bus 601 has a sign candidate 603 and a sign candidate 604 on its rear surface.
  • The rear surface of the bus 601 is recognized as a three-dimensional object 602 by the camera device 100.
  • Since the sign candidates 603 and 604 are included in the three-dimensional object 602, the sign recognition device 1 of the present invention judges in step 503 of FIG. 3 that they are unlikely to be road signs installed on the road, and suppresses their output as a sign recognition result.
  • In the case of FIG. 4A, the sign candidate 603 and the sign candidate 604 are thus identified as not being road signs as described above, and a false report to the occupants is avoided.
  • FIG. 5 is a diagram of road sign candidates provided on a road.
  • The sign candidate 701 is included neither in the three-dimensional object 702 nor in the three-dimensional object 703.
  • Therefore, the procedure shifts from step 503 to step 504 in FIG. 3.
  • FIG. 6 is a diagram of another example of a road sign candidate provided behind a bus.
  • In FIG. 6, the speed compliance sticker attached to the rear surface 804 of the bus 801 is recognized as the sign candidate 803.
  • The sign candidate 803 is included in the rear surface 804 of the bus 801.
  • However, when the rear surface 804 is not recognized as a predetermined three-dimensional object, the sign candidate 803, which is a speed compliance sticker, can pass the determination of step 503 in FIG. 3 and be determined to be a road sign.
  • In this scene, the entire side surface 802 of the bus 801 appears in the image; it is not recognized as a predetermined three-dimensional object and is classified as a shielding object.
  • Therefore, the determination of step 504 in FIG. 3 applies, and erroneous output of the sign candidate 803 to the output device 114 as a road sign can be avoided.
  • FIG. 7 is a top view of the road sign candidate provided behind the bus (a model diagram showing the example of FIG. 6 from a bird's-eye view).
  • Looking at the positional relationship between the side surface 802 of the bus 801 and the sign candidate 803 in FIG. 7, if the sign candidate 803 were a road sign 805 installed on the road, the road sign 805 would be shielded by the side surface 802 of the bus 801 and should not appear in the captured image.
  • Therefore, from the positional relationship between the sign candidate 803 and the side surface 802, it is possible to prevent an erroneous sign recognition result from being output, as in step 504 of FIG. 3.
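In a bird's-eye (top) view with the camera at the origin, this reasoning can be sketched as a ray test: if something solid lies on the line of sight strictly before the candidate's hypothesized position, a real roadside sign there could not be visible, so the candidate is probably attached to the occluder itself. The function names and the 2-D segment representation of the shielding object are illustrative assumptions.

```python
def ray_hits_segment(px, py, ax, ay, bx, by):
    """Does the ray from the origin (camera) through (px, py) cross the
    segment A-B strictly before reaching (px, py)?
    Solves origin + t*(px, py) = A + u*(B - A) for t in (0, 1), u in [0, 1]."""
    dx, dy = bx - ax, by - ay
    den = px * dy - py * dx
    if abs(den) < 1e-12:              # line of sight parallel to the segment
        return False
    t = (ax * dy - ay * dx) / den     # position along the ray
    u = (ax * py - ay * px) / den     # position along the segment
    return 0.0 < t < 1.0 and 0.0 <= u <= 1.0

def likely_sticker(candidate_xy, occluder_segments):
    """Step-504-style plausibility test: True if any shielding segment
    (e.g. a bus side surface) blocks the line of sight to the
    candidate's hypothesized roadside position."""
    x, y = candidate_xy
    return any(ray_hits_segment(x, y, ax, ay, bx, by)
               for ax, ay, bx, by in occluder_segments)
```

In the FIG. 7 scenario, the bus side surface 802 would be one such segment, and the hypothesized position of road sign 805 would fail the visibility test.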
  • Although the embodiment described above uses a stereo camera, the present invention is not limited to this; it is also possible to omit one of the left camera 101 and the right camera 102 and to execute object recognition by monocular processing.
  • In FIG. 3, the flow having the procedures of steps 503 and 504 has been described; however, if, for example, sufficient accuracy is obtained for the authenticity determination of the road sign by the procedure of step 503 alone, the procedure of step 504 may be omitted.
  • 1 sign recognition device, 100 camera device, 101 left camera, 102 right camera, 103 image input interface, 104 image processing unit, 105 arithmetic processing unit, 106 storage unit, 107 CAN interface, 108 control processing unit, 109 bus, 110 CAN, 111 sign estimation unit, 112 three-dimensional object recognition unit, 113 sign determination unit, 114 output device, 200 camera device, 201 imaging, 202 imaging, 203 image data, 204 image data, 205 stereo image processing, 206 three-dimensional object detection, 207 measured distance result, 208 object recognition, 209 sign recognition processing, 210 sign detection, 211 sign identification, 212 sign tracking, 213 sign determination, 214 storage processing, 215 storage processing, 216 object recognition result, 601 bus, 602 three-dimensional object, 603 sign candidate, 604 sign candidate, 701 sign candidate, 702 three-dimensional object, 703 three-dimensional object, 801 bus, 802 three-dimensional object (side surface), 803 sign candidate, 804 rear surface, 805 road sign

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The object of the present invention is to provide a sign recognition device that minimizes incidents in which objects such as speed-limit stickers resembling road signs are erroneously recognized as road signs. The invention relates to a sign recognition device characterized in that it comprises: at least one image capture unit; a sign inference unit 111 that infers a sign candidate from images obtained with the image capture unit 101, 102; a solid object recognition unit 112 that recognizes a solid object 602 included in the imaging range of the image capture unit 101, 102; and a sign determination unit 113 that determines whether the sign candidate 603 is a road sign from information relating to the inference of the sign candidate 603 by the sign inference unit 111 and information relating to the recognition of the solid object 602 by the solid object recognition unit 112.
PCT/JP2019/036681 2018-10-04 2019-09-19 Sign recognition device WO2020071133A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2020550281A JP7145227B2 (ja) 2018-10-04 2019-09-19 Sign recognition device
CN201980063009.8A CN112840349A (zh) 2018-10-04 2019-09-19 Sign recognition device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018189045 2018-10-04
JP2018-189045 2018-10-04

Publications (1)

Publication Number Publication Date
WO2020071133A1 true WO2020071133A1 (fr) 2020-04-09

Family

ID=70055884

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/036681 WO2020071133A1 (fr) 2018-10-04 2019-09-19 Sign recognition device

Country Status (3)

Country Link
JP (1) JP7145227B2 (fr)
CN (1) CN112840349A (fr)
WO (1) WO2020071133A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017057043A1 (fr) * 2015-09-30 2017-04-06 ソニー株式会社 Image processing device, image processing method, and program
JP2017091232A (ja) * 2015-11-11 2017-05-25 日立オートモティブシステムズ株式会社 Object detection device
JP2017091283A (ja) * 2015-11-12 2017-05-25 三菱電機株式会社 Driving assistance device
JP2017228131A (ja) * 2016-06-23 2017-12-28 日産自動車株式会社 Reflective target detection method and reflective target detection device

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10200784A1 (de) * 2002-01-11 2003-07-31 Audi Ag Motor vehicle
JP2010086268A (ja) 2008-09-30 2010-04-15 Mazda Motor Corp Display object recognition device for vehicle
DE102010062633A1 (de) * 2010-12-08 2012-06-14 Robert Bosch Gmbh Method and device for recognizing traffic signs in the surroundings of a vehicle and matching them against traffic sign information from a digital map
US20140025227A1 (en) * 2011-04-08 2014-01-23 Toyota Jidosha Kabushiki Kaisha Road shape estimating system
JP6251577B2 (ja) * 2014-01-17 2017-12-20 矢崎エナジーシステム株式会社 In-vehicle information recording device
US10402665B2 (en) 2014-05-14 2019-09-03 Mobileye Vision Technologies, Ltd. Systems and methods for detecting traffic signs
CN105740877A (zh) * 2014-12-09 2016-07-06 比亚迪股份有限公司 Traffic sign recognition method, device and vehicle
JP6062473B2 (ja) * 2015-03-17 2017-01-18 本田技研工業株式会社 Road sign determination device and road sign determination method
JP6381137B2 (ja) * 2015-07-21 2018-08-29 日本電信電話株式会社 Sign detection device, method, and program
JP6319712B2 (ja) * 2015-09-15 2018-05-09 マツダ株式会社 Sign recognition display device
DE102016003424B4 (de) * 2016-03-21 2023-09-28 Elektrobit Automotive Gmbh Method and device for recognizing traffic signs
JP6414567B2 (ja) * 2016-06-02 2018-10-31 トヨタ自動車株式会社 Speed limit display device for vehicle
US10670416B2 (en) 2016-12-30 2020-06-02 DeepMap Inc. Traffic sign feature creation for high definition maps used for navigating autonomous vehicles

Also Published As

Publication number Publication date
JP7145227B2 (ja) 2022-09-30
JPWO2020071133A1 (ja) 2021-09-02
CN112840349A (zh) 2021-05-25

Similar Documents

Publication Publication Date Title
CN106485233B Drivable area detection method, device and electronic equipment
JP4420011B2 Object detection device
US9180814B2 Vehicle rear left and right side warning apparatus, vehicle rear left and right side warning method, and three-dimensional object detecting device
JP5345350B2 Vehicle driving assistance device
KR101075615B1 Apparatus and method for generating driver assistance information for a traveling vehicle
EP3161507B1 Method for tracking a target vehicle approaching a motor vehicle by means of a camera system of the motor vehicle, camera system and motor vehicle
CN108263279 Sensor-integration-based pedestrian detection and pedestrian collision avoidance device and method
CN108027422 Automatic detection of a dangerously departing vehicle with the aid of automotive sensors
CN114375467B System and method for detecting emergency vehicles
CN110400478 Road condition notification method and device
CN111932901B Road vehicle tracking detection device, method and storage medium
US20170309181A1 Apparatus for recognizing following vehicle and method thereof
CN106537180 Method for mitigating radar sensor limitations with camera input for active braking for pedestrians
TWI535589B Active automatic driving assistance system and method
CN107408338 Driver assistance system
CN110942623 Traffic accident handling assistance method and system
CN115877343 Person-vehicle matching method and device based on radar target tracking, and electronic equipment
CN110969843B Traffic sign recognition and alarm method with suppression strategy
CN109515316 Intersection intelligent driving assistance system and method
JP4848644B2 Obstacle recognition system
US11893802B2 Systems and methods for traffic light identification
US20200410788A1 Management apparatus, vehicle, inspection apparatus, and vehicle inspection system and information processing method therefor
CN112149560B Lane departure detection method
KR20160131196 Obstacle detection device
CN109895694B Lane departure warning method and device, and vehicle

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19869556

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020550281

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19869556

Country of ref document: EP

Kind code of ref document: A1