WO2022262307A1 - Traffic sign recognition system and method, and vehicle - Google Patents

Traffic sign recognition system and method, and vehicle

Info

Publication number
WO2022262307A1
Authority
WO
WIPO (PCT)
Prior art keywords
traffic sign
sign recognition
vehicle
neural network
module
Prior art date
Application number
PCT/CN2022/078052
Other languages
English (en)
French (fr)
Inventor
石笑生
张九才
朱东华
张金池
黎国荣
Original Assignee
广州汽车集团股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广州汽车集团股份有限公司
Priority to CN202280001829.6A (publication CN116868246A)
Publication of WO2022262307A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/582 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of traffic signs
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/02 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
    • B60W40/04 Traffic conditions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G06N3/0442 Recurrent networks, e.g. Hopfield networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/049 Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/09 Arrangements for giving variable traffic instructions
    • G08G1/0962 Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G1/09623 Systems involving the acquisition of information from passive traffic signs by means mounted on the vehicle
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2554/00 Input parameters relating to objects
    • B60W2554/40 Dynamic objects, e.g. animals, windblown objects
    • B60W2554/404 Characteristics
    • B60W2554/4046 Behavior, e.g. aggressive or erratic

Definitions

  • the present application relates to road safety, in particular to a traffic sign recognition system, method and vehicle.
  • Traffic sign detection and recognition is the basis of automatic driving, so automatic driving technology requires vehicles to be able to recognize traffic signs.
  • traffic sign recognition technology only uses cameras to detect and recognize traffic signs, and cannot be combined with vehicle behavior information.
  • the present application provides a traffic sign recognition system, including: a camera module, used to obtain a first traffic sign recognition result; a sensor, used to obtain behavior information of the own vehicle and nearby vehicles; a training module connected to the sensor and used to output traffic sign recognition parameters according to the behavior information of the own vehicle and nearby vehicles; and a recurrent neural network module connected to the training module and the camera module; wherein the recurrent neural network module is used to output a second traffic sign recognition result according to the traffic sign recognition parameters and the first traffic sign recognition result.
  • the traffic sign recognition system can train the traffic sign recognition process using the behavior information of the own vehicle and nearby vehicles together with the first traffic sign recognition result obtained by the camera module, which can improve the recognition accuracy of traffic signs.
  • the training module is a long short-term memory artificial neural network module.
  • the long-short-term memory artificial neural network module includes an information input layer; the information input layer is connected to the sensor to acquire behavior information of the own vehicle and nearby vehicles.
  • the long-short-term memory artificial neural network module further includes an information screening layer; the information screening layer is used to output the traffic sign recognition parameters according to the behavior information of the own vehicle and nearby vehicles.
  • the traffic sign recognition parameters include: the position of the vehicle center, the yaw angle, the speed, the yaw rate, the acceleration of the vehicle, the width and height of the vehicle, and the category of the vehicle.
  • the traffic sign recognition system further includes: a Kalman filter connected to the recurrent neural network module, wherein the Kalman filter is used to fuse the first traffic sign recognition result and the second traffic sign recognition result to generate a third traffic sign recognition result.
  • the training module includes a first convolutional neural network module and a second convolutional neural network module; the first convolutional neural network module is connected to the camera module and the recurrent neural network module, and the second convolutional neural network module is connected to the sensor and the recurrent neural network module; wherein the camera module is used to acquire image information, the first convolutional neural network module is used to generate the first traffic sign recognition result according to the image information, and the second convolutional neural network module is used to output the traffic sign recognition parameters according to the behavior information of the own vehicle and nearby vehicles.
  • the present application provides a traffic sign recognition method, including: obtaining a first traffic sign recognition result; obtaining behavior information of the own vehicle and nearby vehicles; outputting traffic sign recognition parameters according to the behavior information of the own vehicle and nearby vehicles; and outputting a second traffic sign recognition result according to the traffic sign recognition parameters and the first traffic sign recognition result, where the training parameters of the second traffic sign recognition result include the traffic sign recognition parameters and the first traffic sign recognition result.
  • the method further includes: outputting the traffic sign identification parameters according to behavior information of the own vehicle and nearby vehicles.
  • the traffic sign recognition parameters include: the position of the vehicle center, the yaw angle, the speed, the yaw rate, the acceleration of the vehicle, the width and height of the vehicle, and the category of the vehicle.
  • the traffic sign recognition method further includes: fusing the first traffic sign recognition result and the second traffic sign recognition result to generate a third traffic sign recognition result.
  • the method further includes: acquiring image information; and generating a first traffic sign recognition result according to the image information.
  • the method further includes: outputting the traffic sign identification parameters according to behavior information of the own vehicle and nearby vehicles.
  • the present application provides a vehicle, including a traffic sign recognition system, the traffic sign recognition system including: a camera module, used to obtain a first traffic sign recognition result; a sensor, used to obtain behavior information of the own vehicle and nearby vehicles; a training module connected to the sensor and used to output traffic sign recognition parameters according to the behavior information of the own vehicle and nearby vehicles; and a recurrent neural network module connected to the training module and the camera module;
  • the recurrent neural network module is configured to output a second traffic sign recognition result according to the traffic sign recognition parameters and the first traffic sign recognition result.
  • the training module is a long short-term memory artificial neural network module.
  • the long-short-term memory artificial neural network module includes an information input layer; the information input layer is connected to the sensor to acquire behavior information of the own vehicle and nearby vehicles.
  • the long-short-term memory artificial neural network module further includes an information screening layer; the information screening layer is used to output the traffic sign recognition parameters according to the behavior information of the own vehicle and nearby vehicles.
  • the traffic sign recognition parameters include: the position of the vehicle center, the yaw angle, the speed, the yaw rate, the acceleration of the vehicle, the width and height of the vehicle, and the category of the vehicle.
  • the vehicle further includes: a Kalman filter connected to the recurrent neural network module, wherein the Kalman filter is used to fuse the first traffic sign recognition result and the second traffic sign recognition result to generate a third traffic sign recognition result.
  • the training module includes a first convolutional neural network module and a second convolutional neural network module; the first convolutional neural network module is connected to the camera module and the recurrent neural network module, and the second convolutional neural network module is connected to the sensor and the recurrent neural network module; wherein the camera module is used to acquire image information, the first convolutional neural network module is used to generate the first traffic sign recognition result according to the image information, and the second convolutional neural network module is used to output the traffic sign recognition parameters according to the behavior information of the own vehicle and nearby vehicles.
  • FIG. 1 is a traffic sign recognition system applied to automatic driving technology provided by an embodiment of the present application.
  • FIG. 2 is a traffic sign recognition system provided by another embodiment of the present application.
  • FIG. 3 is a traffic sign recognition system provided by another embodiment of the present application.
  • FIG. 4 is a schematic flowchart of a traffic sign recognition method for an autonomous vehicle provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of an autonomous vehicle provided by an embodiment of the present application.
  • Traffic sign recognition system 10; 20 — camera module 100 — sensor 200
  • Training module 300; 300a — long short-term memory network module 310 — first CNN module 320
  • Second CNN module 330 — RNN module 400 — Kalman filter 500 — vehicle 30
  • Coupled is defined as connected, whether directly or indirectly through intermediate components, and is not necessarily limited to physical connections. Connection may be such that the objects are permanently connected or releasably connected.
  • comprising means “including, but not necessarily limited to”; it specifically indicates open inclusion or membership in such described combinations, groups, series, etc.
  • FIG. 1 is a traffic sign recognition system 10 provided by an embodiment of the present application, including a camera module 100, a sensor 200, a training module 300 and a recurrent neural network (Recurrent Neural Network, RNN) module 400.
  • the sensor 200 is connected in sequence to the training module 300 and the RNN module 400, and the training module 300 is connected to the RNN module 400.
  • the traffic sign recognition system 10 obtains the recognition result of the traffic sign through the camera module 100 .
  • the traffic sign recognition system 10 obtains information about the behavior of the own vehicle and nearby vehicles through the sensor 200 .
  • the traffic sign recognition system 10 inputs the behavior information of the own vehicle and nearby vehicles from the sensor 200 to the training module 300 .
  • the training module 300 is used to output traffic sign recognition parameters to the RNN module 400 .
  • the RNN module 400 fuses the recognition result obtained by the camera module 100 and the traffic sign recognition parameters from the training module 300 , and outputs the traffic sign recognition result according to the information input by the training module 300 and the camera module 100 .
  • the sensor 200 includes, but is not limited to, radar, locator and lidar sensors. Since traffic signs have the same constraint effect on the own vehicle and nearby vehicles, the recognition rate of traffic signs can be improved by analyzing the behavior of the own vehicle and nearby vehicles. For example, after the camera module 100 detects a traffic sign, the sensor 200 collects behavior information of the own vehicle and nearby vehicles, and the training module 300 performs learning and information screening according to the detected traffic sign and vehicle behavior. The training module 300 further outputs the learning information to the RNN module 400 . The RNN module 400 judges the type of the traffic sign according to the information acquired by the camera module 100 and the information trained by the training module 300 .
  • the behavior information of the own vehicle and nearby vehicles collected by the sensor 200 includes but not limited to the distance between the own vehicle and nearby vehicles, speed and vehicle light information.
  • the sensor 200 integrates the acquired information to form parameters of the own vehicle and nearby vehicles.
  • the parameter (x, y) represents the position of the center point of the vehicle.
  • the parameter ⁇ represents the yaw angle.
  • the parameter v represents the velocity, and the parameter ⁇ represents the yaw rate.
  • the parameter a represents the acceleration of the vehicle.
  • the parameters (w, h) represent the width and height of the vehicle.
  • The parameter c indicates the category of the vehicle. In one embodiment, only the information of moving vehicles is collected, so that the training module 300 learns to recognize traffic signs according to the own vehicle and nearby vehicles.
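The parameter set above can be sketched as a small data structure, together with the moving-vehicle filter described in the last bullet. This is a minimal illustration, not part of the patent; the class name, field names, and the speed threshold `v_min` are assumptions.

```python
from dataclasses import dataclass

@dataclass
class VehicleState:
    """Behavior parameters integrated by the sensor for one vehicle."""
    x: float          # x of the vehicle center point (parameter (x, y))
    y: float          # y of the vehicle center point
    yaw: float        # yaw angle (theta)
    v: float          # speed (v)
    yaw_rate: float   # yaw rate (omega)
    a: float          # acceleration (a)
    w: float          # vehicle width (parameter (w, h))
    h: float          # vehicle height
    c: str            # vehicle category (c)

def moving_vehicles(states, v_min=0.5):
    """Keep only moving vehicles, since only their behavior is collected
    for training; v_min is a hypothetical threshold."""
    return [s for s in states if abs(s.v) >= v_min]

parked = VehicleState(0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.8, 1.5, "car")
moving = VehicleState(5.0, 2.0, 0.1, 12.0, 0.02, 0.3, 1.8, 1.5, "car")
training_inputs = moving_vehicles([parked, moving])
```

Only `moving` survives the filter, so a stationary vehicle contributes nothing to the training module.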
  • the training module 300 is a Long Short-Term Memory network (Long Short-Term Memory network, LSTM) module 310.
  • the traffic sign recognition system 10 further includes a Kalman filter 500 .
  • the LSTM module 310 is connected to the sensor 200 and the RNN module 400 .
  • the Kalman filter 500 is connected to the RNN module 400 and the camera module 100 to output traffic sign recognition results.
  • the Kalman filter 500 can fuse the traffic sign recognition result obtained by the camera module 100 (hereinafter referred to as the first recognition result) with the traffic sign recognition result inferred from the behavior of the own vehicle and nearby vehicles collected by the sensor 200 (hereinafter referred to as the second recognition result). The Kalman filter 500 fuses the two recognition results to obtain a new traffic sign recognition result (hereinafter referred to as the third recognition result). Since nearby vehicles observe traffic signs from different angles, taking their behavior into account reduces the possibility of misidentifying individual traffic signs. Therefore, compared with related technologies that recognize traffic signs only by camera, the traffic sign recognition results have higher robustness and accuracy.
  • the Kalman filter 500 can fuse the first recognition result and the second recognition result, that is, the traffic sign recognition result derived from image recognition and the behavior of the own vehicle and nearby vehicles, so that the traffic sign recognition system 10 has high accuracy.
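The patent does not give the filter equations, but the core of a Kalman-style fusion of two estimates can be sketched in scalar form: each recognition result carries an uncertainty (variance), and the fused result weights the two by a gain. The variance values below are purely illustrative assumptions.

```python
def kalman_fuse(x1, var1, x2, var2):
    """Scalar Kalman-style fusion of two estimates of the same quantity.
    x1/var1: first recognition result and its variance (camera-based);
    x2/var2: second recognition result and its variance (behavior-based).
    Returns the fused (third) estimate and its variance."""
    gain = var1 / (var1 + var2)        # how much to trust the second estimate
    x3 = x1 + gain * (x2 - x1)         # fused estimate
    var3 = (1.0 - gain) * var1         # fused variance never exceeds var1
    return x3, var3

# Illustrative: camera reads a 60 km/h limit with variance 0.04; the
# behavior-based inference agrees with a tighter variance of 0.01.
x3, var3 = kalman_fuse(60.0, 0.04, 60.0, 0.01)
```

Because the two sources agree, the fused value stays at 60 while the variance shrinks, which is the sense in which fusing the camera result with the behavior-inferred result improves robustness.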
  • the LSTM module 310 generally has a two-layer structure: one layer is the information input layer, and the other is the information screening layer.
  • the sensor 200 can continuously collect behavior information of the own vehicle and nearby vehicles, or periodically collect behavior information of the own vehicle and nearby vehicles. The sensor 200 inputs the behavior information of the host vehicle and nearby vehicles to the information input layer of the LSTM module 310 .
  • the behavior information of the host vehicle and the nearby vehicles are given different weights.
  • the weight of the information of the own vehicle and the information of nearby vehicles can be set to 1:1 to increase the importance of the behavior information of the own vehicle in training.
  • alternatively, the weights can be set according to the number of nearby vehicles, with the own vehicle and each nearby vehicle given the same weight.
  • the weights of the own vehicle and nearby vehicles can be set in the gate structure of the information screening layer in the LSTM module 310 .
  • the training can be performed only based on the behavior information of the own vehicle.
  • the training results may be inaccurate if training is based only on the behavior information of the own vehicle. Therefore, if there are no other moving vehicles near the own vehicle during one period of time and many other vehicles during another period, the weight of the period with many other vehicles is increased to improve the accuracy of the training results. That is, the gate structure in the information screening layer reduces the overall weight of behavior information when only the own vehicle is present, and increases the overall weight when there are many nearby vehicles.
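The weighting idea in the bullet above can be sketched as a function that assigns each time window a weight proportional to how many vehicles contributed behavior information. The function name and the simple "count plus one" scheme are assumptions for illustration; the patent implements this inside the LSTM gate structure.

```python
def window_weights(nearby_counts):
    """Weight each time window of behavior information by the number of
    vehicles observed in it. A window with only the own vehicle
    (0 nearby vehicles) gets a reduced share; a crowded window gets an
    increased share. Weights are normalized to sum to 1."""
    raw = [1 + n for n in nearby_counts]   # own vehicle always contributes
    total = sum(raw)
    return [r / total for r in raw]

# Three windows: own vehicle alone, alone again, then four nearby vehicles.
weights = window_weights([0, 0, 4])
```

The crowded third window ends up with weight 5/7, so its behavior evidence dominates the training signal, matching the stated goal of boosting periods with many nearby vehicles.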
  • the sensor 200 is also used to acquire driver behavior information of the own vehicle and nearby vehicles.
  • the LSTM module 310 screens the driver behavior information to delete information that will have a negative impact on training. Due to the characteristics of the LSTM network, driver behavior information can be added or deleted from the cell state through the gate structure that the information screening layer has. Therefore, the LSTM module 310 can retain information that needs to be retained, and delete driver behavior information that does not need to be retained, so as to realize screening of driver behavior information.
  • the analysis results may be disturbed by the behavior of drivers who do not obey traffic rules.
  • the behavior of a driver who does not obey traffic rules will cause false learning and affect the recognition accuracy of the RNN module 400 . Therefore, it is necessary to screen driver behavior information.
  • the LSTM module 310 can delete the right-turn behavior information from the cell state through the gate structure in the information screening layer, so as to improve the recognition accuracy of the RNN module 400 and reduce the training time of the RNN module 400 .
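The screening step described in the bullets above amounts to dropping driver-behavior events that would cause false learning. A minimal sketch, assuming a hypothetical event list and rule-violation predicate (the patent performs this inside the gate structure of the LSTM's information screening layer):

```python
def screen_driver_behavior(events, violates_rules):
    """Drop driver-behavior events that would negatively affect training,
    e.g. an illegal right turn made despite a no-right-turn sign.
    `violates_rules` is a hypothetical predicate over events."""
    return [e for e in events if not violates_rules(e)]

events = ["decelerate", "illegal_right_turn", "stop"]
kept = screen_driver_behavior(events, lambda e: e == "illegal_right_turn")
```

Only the rule-conforming events are retained, so the RNN module is not trained on behavior that contradicts the actual traffic sign.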
  • the weight of the vehicle behavior during a given period of time may be reduced in the fusion of recognition results. For example, when a traffic accident occurs in the middle of the road, nearby vehicles slow down and drive around the accident vehicle; the LSTM module 310 may misinterpret this behavior as corresponding to a deceleration-and-detour traffic sign, causing a misjudgment. Therefore, reducing the weight of the recognition result within this period when an abnormal state is detected can improve the recognition accuracy of the traffic sign recognition system 10.
  • Fig. 3 is a traffic sign recognition system 20 provided by another embodiment of the present application. It can be understood that the traffic sign recognition system 20 includes a camera module 100 , a sensor 200 , a convolutional neural network (Convolutional Neural Networks, CNN) module 300 a and an RNN module 400 .
  • the traffic sign recognition system 20 is similar to the traffic sign recognition system 10 shown in FIG. 1, the difference being that the training module 300a is a CNN module.
  • the CNN module 300 a includes a first CNN module 320 and a second CNN module 330 .
  • the first CNN module 320 is connected to the camera module 100 and the RNN module 400 .
  • the second CNN module 330 connects the sensor 200 and the RNN module 400 .
  • the image information acquired by the camera module 100 may be input to the first CNN module 320 to obtain the first traffic sign recognition result.
  • the first CNN module 320 then inputs the first traffic sign recognition feature to the RNN module 400 .
  • the sensor 200 acquires the behavior information of the own vehicle and nearby vehicles, and inputs the information into the second CNN module 330 , so as to obtain traffic sign recognition features through the second CNN module 330 .
  • the second CNN module 330 then inputs the traffic sign recognition features to the RNN module 400, and the RNN module 400 outputs the second traffic sign recognition result.
  • the first CNN module 320 and the second CNN module 330 can directly process the image information acquired by the camera module 100 and perform feature extraction according to the image information.
  • the second CNN module 330 is trained according to the vehicle behavior information of the own vehicle and nearby vehicles acquired by the sensor 200 to obtain traffic sign recognition results based on the vehicle behavior information of the own vehicle and nearby vehicles.
  • the RNN module 400 is trained according to the information input by the first CNN module 320 and the second CNN module 330 to obtain a traffic sign recognition result.
  • the traffic sign recognition system 20 does not need to perform feature fusion based on the extracted traffic signs, which can improve the applicability of the traffic sign recognition system 20 , and can perform traffic sign recognition training based on the original images acquired by the camera module 100 .
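The wiring of FIG. 3 described above (two parallel CNN branches feeding one recurrent fusion stage) can be sketched with plain-function stand-ins. These stubs are hypothetical, not the patent's networks; only the data flow matches the description.

```python
# Hypothetical stand-ins for the modules of traffic sign recognition
# system 20; the real modules are neural networks, but the wiring is the
# same: camera branch and sensor branch feed one fusion stage.

def first_cnn(image):
    """Camera branch: raw image info -> sign recognition features."""
    return {"sign_features": image["pixels_summary"]}

def second_cnn(behavior):
    """Sensor branch: vehicle behavior info -> sign recognition features."""
    return {"behavior_features": behavior["mean_speed"]}

def rnn(cam_feat, beh_feat):
    """Fusion stage: both feature sets -> second recognition result."""
    return {"second_result": (cam_feat["sign_features"],
                              beh_feat["behavior_features"])}

def recognize(image, behavior):
    return rnn(first_cnn(image), second_cnn(behavior))

out = recognize({"pixels_summary": "round-red"}, {"mean_speed": 11.2})
```

Note that `recognize` takes the raw camera image directly, mirroring the point that system 20 trains on original images without a separate feature-fusion step.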
  • Fig. 4 is a flowchart of a traffic sign recognition method.
  • the embodiments are provided by way of example, since there are various ways to perform the method. For example, the methods described below may be performed using the systems shown in Figures 1-3, and reference is made to various elements of these Figures in explaining the embodiments.
  • Each step shown in FIG. 4 represents one or more procedures, methods or sub-processes performed in this embodiment. Also, the order of steps shown is an example only, and the order of steps may be changed. Additional steps may be added or fewer steps may be utilized without departing from the application. This embodiment may start at step S100.
  • step S100 a first traffic sign recognition result is acquired.
  • the camera module 100 may be used to obtain the first traffic sign recognition result.
  • step S200 behavior information of the own vehicle and nearby vehicles is acquired.
  • the sensor 200 can be used to obtain behavior information of the own vehicle and nearby vehicles.
  • the present application does not limit the sequence of step S100 and step S200.
  • step S300 traffic sign identification parameters are generated according to the behavior information.
  • the training module 300 can be trained through the behavior information of the own vehicle and nearby vehicles collected by the sensor 200 to output traffic sign recognition parameters to the RNN module 400 . It can be understood that the method of outputting the traffic sign recognition parameters is the same as that in the traffic sign recognition system 10 and the traffic sign recognition system 20 , and will not be repeated here.
  • step S400 a second traffic sign recognition result is output according to the traffic sign recognition parameter and the first traffic sign recognition result.
  • the RNN module 400 may receive the first traffic sign recognition result from the camera module 100 and the output traffic sign recognition parameters from the training module 300, and generate a second traffic sign recognition result.
  • the training module 300 can be the LSTM module 310 shown in FIG. 2 or the CNN module 300a shown in FIG. 3, and its specific functions and electrical connections can refer to the descriptions of FIG. 2 and FIG. 3, which will not be repeated here.
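Steps S100 to S400 above can be composed as one pipeline function. The callables passed in are hypothetical stand-ins for the camera, sensor, training module, and RNN module; only the step order and data flow come from FIG. 4.

```python
def traffic_sign_recognition(camera, sensor, training_module, rnn_module):
    """Steps S100-S400 of the method, with each module as a callable."""
    first_result = camera()                  # S100: first recognition result
    behavior = sensor()                      # S200: own/nearby vehicle behavior
    params = training_module(behavior)       # S300: recognition parameters
    return rnn_module(params, first_result)  # S400: second recognition result

# Illustrative run with toy stand-ins for each module.
second = traffic_sign_recognition(
    camera=lambda: "speed-limit-60",
    sensor=lambda: {"mean_speed": 58.0},
    training_module=lambda b: {"expected_limit": round(b["mean_speed"], -1)},
    rnn_module=lambda p, r: (r, p["expected_limit"]),
)
```

Since S100 and S200 do not depend on each other, their order could be swapped, which is exactly the point made about the sequence of the two steps.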
  • the traffic sign recognition system 10 provided in the embodiment of the present application can be used in L3, L4 or L5 automatic driving, and can recognize traffic signs according to the behavior of nearby vehicles and camera modules.
  • Vehicle 30 includes traffic sign recognition system 10 .
  • the vehicle 30 provided in the embodiment of the present application can recognize traffic signs according to the information collected by different types of sensors on the vehicle. Compared with the related art that recognizes traffic signs only through cameras, combining other sensors for traffic sign recognition training can improve the accuracy of the training results.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Biology (AREA)
  • Databases & Information Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

A system, method and vehicle for recognizing traffic signs during automatic driving, including: a camera module, used to obtain a first traffic sign recognition result; a sensor, used to obtain behavior information of the own vehicle and nearby vehicles; a training module connected to the sensor and used to output traffic sign recognition parameters according to the behavior information of the own vehicle and nearby vehicles; and a recurrent neural network module connected to the training module and the camera module; wherein the recurrent neural network module is used to output a second traffic sign recognition result according to the traffic sign recognition parameters and the first traffic sign recognition result. The training parameters of the second traffic sign recognition result include the traffic sign recognition parameters and the first traffic sign recognition result; combining other sensors for traffic sign recognition training improves the accuracy of the second traffic sign recognition result.

Description

Traffic sign recognition system and method, and vehicle
Technical field
The present application relates to road safety, and in particular to a traffic sign recognition system, method and vehicle.
Background
Detecting and recognizing traffic signs is fundamental to driving. Traffic sign detection and recognition is the basis of automatic driving, so automatic driving technology requires vehicles to be able to recognize traffic signs. At present, traffic sign recognition technology only uses cameras to detect and recognize traffic signs, and cannot be combined with vehicle behavior information.
Therefore, there is room for improvement.
Summary
In view of this, it is necessary to provide a traffic sign recognition system, method and vehicle capable of combining the traffic signs recognized by a camera with vehicle behavior information for traffic sign recognition.
In a first aspect, the present application provides a traffic sign recognition system, including: a camera module, used to obtain a first traffic sign recognition result; a sensor, used to obtain behavior information of the own vehicle and nearby vehicles; a training module connected to the sensor and used to output traffic sign recognition parameters according to the behavior information of the own vehicle and nearby vehicles; and a recurrent neural network module connected to the training module and the camera module; wherein the recurrent neural network module is used to output a second traffic sign recognition result according to the traffic sign recognition parameters and the first traffic sign recognition result. Clearly, the traffic sign recognition system can train the traffic sign recognition process using the behavior information of the own vehicle and nearby vehicles together with the first traffic sign recognition result obtained by the camera module, which can improve the recognition accuracy of traffic signs.
In one possible design, the training module is a long short-term memory artificial neural network module.
In one possible design, the long short-term memory artificial neural network module includes an information input layer; the information input layer is connected to the sensor to acquire the behavior information of the own vehicle and nearby vehicles.
In one possible design, the long short-term memory artificial neural network module further includes an information screening layer; the information screening layer is used to output the traffic sign recognition parameters according to the behavior information of the own vehicle and nearby vehicles.
In one possible design, the traffic sign recognition parameters include: the position of the vehicle center, the yaw angle, the speed, the yaw rate, the acceleration of the vehicle, the width and height of the vehicle, and the category of the vehicle.
In one possible design, the traffic sign recognition system further includes a Kalman filter connected to the recurrent neural network module, wherein the Kalman filter is used to fuse the first traffic sign recognition result and the second traffic sign recognition result to generate a third traffic sign recognition result.
In one possible design, the training module includes a first convolutional neural network module and a second convolutional neural network module; the first convolutional neural network module is connected to the camera module and the recurrent neural network module, and the second convolutional neural network module is connected to the sensor and the recurrent neural network module; wherein the camera module is used to acquire image information, the first convolutional neural network module is used to generate the first traffic sign recognition result according to the image information, and the second convolutional neural network module is used to output the traffic sign recognition parameters according to the behavior information of the own vehicle and nearby vehicles.
In a second aspect, this application provides a traffic sign recognition method, including: obtaining a first traffic sign recognition result; obtaining behavior information of the ego vehicle and nearby vehicles; outputting traffic sign recognition parameters based on that behavior information; and outputting a second traffic sign recognition result based on the traffic sign recognition parameters and the first traffic sign recognition result, the training parameters for the second result including the traffic sign recognition parameters and the first traffic sign recognition result.

In one possible design, the method further includes: outputting the traffic sign recognition parameters based on the behavior information of the ego vehicle and nearby vehicles.

In one possible design, the traffic sign recognition parameters include: the position of a vehicle's center, its yaw angle, speed, yaw rate, acceleration, width and height, and class.

In one possible design, the method further includes: fusing the first traffic sign recognition result with the second traffic sign recognition result to generate a third traffic sign recognition result.

In one possible design, the method further includes: obtaining image information and generating the first traffic sign recognition result from the image information.

In one possible design, the method further includes: outputting the traffic sign recognition parameters based on the behavior information of the ego vehicle and nearby vehicles.
In a third aspect, this application provides a vehicle including a traffic sign recognition system. The traffic sign recognition system includes: a camera module for obtaining a first traffic sign recognition result; a sensor for obtaining behavior information of the ego vehicle and nearby vehicles; a training module connected to the sensor and configured to output traffic sign recognition parameters based on that behavior information; and a recurrent neural network module connected to the training module and the camera module, where the recurrent neural network module is configured to output a second traffic sign recognition result based on the traffic sign recognition parameters and the first traffic sign recognition result.

In one possible design, the training module is a long short-term memory (LSTM) neural network module.

In one possible design, the LSTM module includes an information input layer; the information input layer is connected to the sensor to obtain the behavior information of the ego vehicle and nearby vehicles.

In one possible design, the LSTM module further includes an information filtering layer; the information filtering layer is configured to output the traffic sign recognition parameters based on that behavior information.

In one possible design, the traffic sign recognition parameters include: the position of a vehicle's center, its yaw angle, speed, yaw rate, acceleration, width and height, and class.

In one possible design, the vehicle further includes a Kalman filter connected to the recurrent neural network module, the Kalman filter being configured to fuse the first traffic sign recognition result with the second traffic sign recognition result to generate a third traffic sign recognition result.

In one possible design, the training module includes a first CNN module and a second CNN module; the first CNN module is connected to the camera module and the recurrent neural network module, and the second CNN module is connected to the sensor and the recurrent neural network module. The camera module is configured to obtain image information; the first CNN module is configured to generate the first traffic sign recognition result from the image information; and the second CNN module is configured to output the traffic sign recognition parameters based on the behavior information of the ego vehicle and nearby vehicles.

For the technical effects of the second and third aspects, refer to the description of the traffic sign recognition system in the first aspect; they are not repeated here.
Brief Description of the Drawings

Implementations of this application are described below by way of embodiments with reference to the accompanying drawings.

FIG. 1 shows a traffic sign recognition system applied to autonomous driving according to an embodiment of this application.

FIG. 2 shows a traffic sign recognition system according to another embodiment of this application.

FIG. 3 shows a traffic sign recognition system according to another embodiment of this application.

FIG. 4 is a flowchart of a traffic sign recognition method for an autonomous vehicle according to an embodiment of this application.

FIG. 5 is a schematic diagram of an autonomous vehicle according to an embodiment of this application.

Description of main reference signs:

traffic sign recognition system 10; 20; camera module 100; sensor 200; training module 300; 300a; long short-term memory network module 310; first CNN module 320; second CNN module 330; RNN module 400; Kalman filter 500; vehicle 30

The following detailed description further explains this application with reference to the above drawings.
Detailed Description

It should be understood that, for simplicity and clarity of illustration, reference numerals are reused in different figures where appropriate to denote corresponding or similar elements. In addition, numerous specific details are set forth to provide a thorough understanding of the embodiments described herein. However, a person of ordinary skill in the art will understand that the embodiments described in this application can be practiced without these specific details. In other instances, methods, processes, and components are not described in detail so as not to obscure the related features. The drawings are not necessarily to scale, and the proportions of certain parts may be exaggerated to better show details and features. The description should not be considered as limiting the scope of the embodiments described herein.

Several definitions of terms used throughout this application will now be given.

The term "coupled" is defined as connected, whether directly or indirectly through intervening components, and is not necessarily limited to physical connections. The connection may be such that the objects are permanently connected or releasably connected. The term "comprising" means "including, but not necessarily limited to"; it specifically indicates open-ended inclusion or membership in the described combination, group, series, or the like.
FIG. 1 shows a traffic sign recognition system 10 according to an embodiment of this application, including a camera module 100, a sensor 200, a training module 300, and a recurrent neural network (RNN) module 400. The sensor 200 is connected in sequence to the training module 300 and the RNN module 400, and the training module 300 is connected to the RNN module 400.

In an embodiment of this application, the traffic sign recognition system 10 obtains a traffic sign recognition result through the camera module 100 and obtains behavior information about the ego vehicle and nearby vehicles through the sensor 200. The system feeds the behavior information from the sensor 200 into the training module 300, which outputs traffic sign recognition parameters to the RNN module 400. The RNN module 400 fuses the recognition result obtained by the camera module 100 with the traffic sign recognition parameters from the training module 300 and outputs a traffic sign recognition result based on the inputs from both.

In an embodiment of this application, the sensor 200 includes, but is not limited to, radar, a locator, and a lidar sensor. Because a traffic sign constrains the ego vehicle and nearby vehicles in the same way, analyzing the behavior of both can improve the sign recognition rate. For example, after the camera module 100 detects a traffic sign, the sensor 200 collects behavior information of the ego vehicle and nearby vehicles, and the training module 300 learns from and filters the detected sign and vehicle behavior. The training module 300 then outputs the learned information to the RNN module 400, which determines the type of the traffic sign from the information provided by the camera module 100 and the trained information from the training module 300.

In an embodiment of this application, the behavior information collected by the sensor 200 includes, but is not limited to, the distance between the ego vehicle and nearby vehicles, their speeds, and their light signals. The sensor 200 integrates the acquired information into parameters for the ego vehicle and nearby vehicles, output to the training module 300 as X = {(x, y), Φ, v, ω, a, (w, h), c}. In this formula, (x, y) is the position of the vehicle's center point, Φ is the yaw angle, v is the speed, ω is the yaw rate, a is the vehicle's acceleration, (w, h) are the vehicle's width and height, and c is the vehicle's class. In one embodiment, only information about vehicles in motion is collected to train the module 300 to recognize traffic signs from the ego vehicle and nearby vehicles.
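The parameter vector X = {(x, y), Φ, v, ω, a, (w, h), c} can be sketched as a simple per-vehicle container. The field names, units, and the flattening order below are illustrative assumptions for clarity, not specified by the patent:

```python
from dataclasses import dataclass

@dataclass
class VehicleState:
    """Hypothetical container for one vehicle's parameter vector X."""
    x: float      # center position, longitudinal (m)
    y: float      # center position, lateral (m)
    phi: float    # yaw angle (rad)
    v: float      # speed (m/s)
    omega: float  # yaw rate (rad/s)
    a: float      # acceleration (m/s^2)
    w: float      # vehicle width (m)
    h: float      # vehicle height (m)
    c: int        # vehicle class label

    def as_vector(self) -> list:
        """Flatten into the ordered feature vector fed to the training module."""
        return [self.x, self.y, self.phi, self.v, self.omega,
                self.a, self.w, self.h, float(self.c)]
```

A state such as `VehicleState(1.0, 2.0, 0.1, 10.0, 0.0, 0.5, 1.8, 1.5, 2)` flattens to a nine-element vector, one vector per moving vehicle per time step.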
Referring to FIG. 2, in one embodiment the training module 300 is a long short-term memory (LSTM) network module 310, and the traffic sign recognition system 10 further includes a Kalman filter 500. The LSTM module 310 is connected to the sensor 200 and the RNN module 400. The Kalman filter 500 is connected to the RNN module 400 and the camera module 100 and outputs the traffic sign recognition result.

In an embodiment of this application, the Kalman filter 500 can fuse the traffic sign recognition result obtained by the camera module 100 (hereinafter, the first recognition result) with the result inferred from the sensor 200 and the behavior of the ego vehicle and nearby vehicles (hereinafter, the second recognition result) to obtain a new traffic sign recognition result (hereinafter, the third recognition result). Because nearby vehicles observe a traffic sign from different angles, taking their behavior into account reduces the likelihood of a single misrecognition. Compared with related techniques that recognize signs with a camera alone, the recognition result is therefore more robust and accurate.

In an embodiment of this application, fusing the first recognition result with the second recognition result, that is, the results derived from image recognition and from the behavior of the ego vehicle and nearby vehicles, gives the traffic sign recognition system 10 higher accuracy.
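The fusion step performed by the Kalman filter 500 can be illustrated at its simplest as a scalar, inverse-variance-weighted update. This sketch assumes each recognition result has been reduced to a single estimate with a known variance, an illustrative simplification rather than the patent's actual filter:

```python
def fuse_estimates(z1: float, var1: float, z2: float, var2: float):
    """One Kalman-style update step: fuse two independent estimates of the
    same quantity by inverse-variance weighting.

    z1/var1: camera-based (first) recognition estimate and its variance.
    z2/var2: behavior-based (second) recognition estimate and its variance.
    Returns the fused (third) estimate and its reduced variance.
    """
    gain = var1 / (var1 + var2)   # Kalman gain for the scalar case
    z = z1 + gain * (z2 - z1)     # fused estimate
    var = (1.0 - gain) * var1     # fused variance, never larger than var1
    return z, var
```

With equal variances the fused estimate is the midpoint of the two inputs, and the fused variance is halved, which is the sense in which a second, independent observation angle reduces the chance of a single misrecognition.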
In an embodiment of this application, the LSTM module 310 generally has a two-layer structure: one layer is an information input layer and the other an information filtering layer. The sensor 200 may collect behavior information of the ego vehicle and nearby vehicles continuously or periodically, and inputs it into the information input layer of the LSTM module 310.

In this embodiment, when the sensor 200 inputs the behavior information into the information input layer of the LSTM module 310, the behavior information of the ego vehicle and of nearby vehicles is assigned different weights. For example, the weight ratio of ego vehicle information to nearby vehicle information can be set to 1:1 to raise the importance of the ego vehicle's behavior information in training. In other embodiments, weights can instead be set according to the number of nearby vehicles, with the ego vehicle and each nearby vehicle given the same weight. The weights of the ego vehicle and nearby vehicles can be set in the gate structure of the information filtering layer of the LSTM module 310.

In this embodiment, when there are no other moving vehicles near the ego vehicle, training can proceed on the ego vehicle's behavior information alone. However, training only on the ego vehicle's behavior information may yield inaccurate results. Therefore, if there is a period with no other moving vehicles nearby and another period with many other vehicles, the weight of the period with many other vehicles is increased to improve the accuracy of the training result. That is, the gate structure of the information filtering layer lowers the overall weight of behavior information from periods with only the ego vehicle and raises the overall weight of behavior information from periods with many nearby vehicles.
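One way to realize the per-window weighting described above, under the scheme where the ego vehicle and each nearby moving vehicle contribute equally, could look like the following sketch. The specific weight function is an assumption for illustration:

```python
def window_weight(num_nearby: int, base: float = 1.0) -> float:
    """Weight for one time window's behavior information. The ego vehicle
    always counts once, and each nearby moving vehicle adds an equal share,
    so windows with only the ego vehicle are naturally down-weighted
    relative to windows with many observers (assumed scheme)."""
    return base * (1 + num_nearby)

def weighted_average(samples):
    """samples: list of (value, num_nearby) pairs, one per time window.
    Returns the behavior statistic averaged with per-window weights."""
    total_w = sum(window_weight(n) for _, n in samples)
    return sum(v * window_weight(n) for v, n in samples) / total_w
```

For instance, a window observed alone (`num_nearby=0`) gets weight 1, while a window with three nearby vehicles gets weight 4, so the crowded window dominates the average, mirroring the gate-level adjustment described above.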
In this embodiment, the sensor 200 is also used to obtain driver behavior information for the ego vehicle and nearby vehicles. The LSTM module 310 filters the driver behavior information to delete information that would negatively affect training. Owing to the nature of LSTM networks, driver behavior information can be added to or removed from the cell state through the gate structure of the information filtering layer. The LSTM module 310 can thus retain the information that should be kept and delete the driver behavior information that should not, achieving the desired filtering.

For example, when analyzing driver behavior information, the result may be distorted by drivers who do not obey traffic rules: such behavior causes mislearning and degrades the recognition accuracy of the RNN module 400, so the driver behavior information must be filtered. Suppose a no-right-turn sign is present on the road, and analysis of the information obtained through the sensor 200 shows that most drivers of the ego vehicle and nearby vehicles go straight or turn left. A right turn can then be judged to be rule-violating driver behavior, and the LSTM module 310 can delete that right-turn record from the cell state through the gate structure of the information filtering layer, improving the recognition accuracy of the RNN module 400 and reducing its training time.
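The gate-based deletion of rule-violating behavior can be approximated outside an LSTM by a simple majority filter. The 20% threshold and the maneuver labels below are illustrative assumptions, not values from the patent:

```python
from collections import Counter

def filter_rule_violators(maneuvers, min_share: float = 0.2):
    """Drop observed maneuvers performed by too small a share of drivers,
    on the assumption that a rare maneuver (e.g. one right turn where a
    no-right-turn sign is suspected) is a rule violation that would
    mislead training."""
    counts = Counter(maneuvers)
    total = len(maneuvers)
    keep = {m for m, n in counts.items() if n / total >= min_share}
    return [m for m in maneuvers if m in keep]
```

Given six "straight", three "left", and one "right" observation, the lone right turn falls below the 20% share and is removed, while the majority maneuvers are kept for training.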
In an embodiment of this application, if the behavior of the ego vehicle and nearby vehicles becomes abnormal due to other factors, the weight of vehicle behavior from that period can be reduced when fusing recognition results. For example, when a traffic accident occurs in the middle of the road, nearby vehicles slow down and drive around the crashed vehicle; the LSTM module 310 might misinterpret this behavior as corresponding to a slow-down-and-detour sign, causing a misjudgment. Lowering the weight of recognition results for periods in which an abnormal state is detected therefore improves the recognition accuracy of the traffic sign recognition system 10.
FIG. 3 shows a traffic sign recognition system 20 according to another embodiment of this application. The traffic sign recognition system 20 includes a camera module 100, a sensor 200, a convolutional neural network (CNN) module 300a, and an RNN module 400.

In this embodiment, the traffic sign recognition system 20 is similar to the traffic sign recognition system 10 shown in FIG. 1, except that the training module 300a is a CNN module. The CNN module 300a includes a first CNN module 320 and a second CNN module 330. The first CNN module 320 is connected to the camera module 100 and the RNN module 400; the second CNN module 330 is connected to the sensor 200 and the RNN module 400.

In an embodiment of this application, the image information obtained by the camera module 100 can be input to the first CNN module 320 to obtain the first traffic sign recognition result, and the first CNN module 320 inputs the resulting recognition features to the RNN module 400. The sensor 200 obtains the behavior information of the ego vehicle and nearby vehicles and inputs it to the second CNN module 330, which extracts traffic sign recognition features from it. The second CNN module 330 inputs these features to the RNN module 400, which outputs the second traffic sign recognition result.

In this embodiment, with the first CNN module 320 and second CNN module 330 introduced, the first CNN module 320 can directly process the image information obtained by the camera module 100 and extract features from it. The second CNN module 330 is trained on the vehicle behavior information obtained by the sensor 200 to produce a traffic sign recognition result based on the behavior of the ego vehicle and nearby vehicles. The RNN module 400 is trained on the inputs from the first CNN module 320 and the second CNN module 330 to produce the traffic sign recognition result. Because the traffic sign recognition system 20 does not need to fuse features of already-extracted traffic signs, it is more broadly applicable and can train traffic sign recognition on the raw images obtained by the camera module 100.
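The data flow of this two-branch design (first CNN on raw images, second CNN on behavior vectors, RNN fusing both) can be sketched with stub functions. The toy feature extractors below only illustrate the wiring between the modules, not real CNN or RNN computations:

```python
def image_branch(image):
    """Stand-in for the first CNN module 320: extract features from a raw
    camera frame. Here it just averages pixel intensities per row."""
    return [sum(row) / len(row) for row in image]

def behavior_branch(states):
    """Stand-in for the second CNN module 330: summarize the per-vehicle
    behavior vectors by averaging each component across vehicles."""
    n = len(states)
    return [sum(col) / n for col in zip(*states)]

def rnn_fuse(image_feats, behavior_feats):
    """Stand-in for the RNN module 400: combine both feature streams into
    a single recognition score (here, a plain sum of all features)."""
    return sum(image_feats) + sum(behavior_feats)
```

The point of the sketch is the topology: the camera branch never sees behavior data, the behavior branch never sees pixels, and only the final module fuses the two streams, which is what lets the system train on raw images without a separate sign-feature fusion step.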
FIG. 4 is a flowchart of a traffic sign recognition method. The embodiment is provided by way of example, as there are multiple ways to carry out the method. For example, the method described below can be carried out using the systems shown in FIGS. 1-3, and various elements of those figures are referenced in explaining the embodiment. Each step shown in FIG. 4 represents one or more processes, methods, or subprocesses carried out in this embodiment. Furthermore, the order of the steps shown is only an example and may be changed; additional steps may be added, or fewer steps used, without departing from this application. The embodiment can begin at step S100.

In step S100, a first traffic sign recognition result is obtained. In an embodiment of this application, the first traffic sign recognition result can be obtained through the camera module 100.

In step S200, behavior information of the ego vehicle and nearby vehicles is obtained. In an embodiment of this application, this behavior information can be obtained through the sensor 200. This application does not limit the order of steps S100 and S200.

In step S300, traffic sign recognition parameters are generated from the behavior information. In an embodiment of this application, the training module 300 can be trained on the behavior information of the ego vehicle and nearby vehicles collected by the sensor 200 so as to output traffic sign recognition parameters to the RNN module 400. It should be understood that the method of outputting the traffic sign recognition parameters is the same as in the traffic sign recognition systems 10 and 20 and is not repeated here.

In step S400, a second traffic sign recognition result is output based on the traffic sign recognition parameters and the first traffic sign recognition result. In an embodiment of this application, the RNN module 400 can receive the first traffic sign recognition result from the camera module 100 and the traffic sign recognition parameters from the training module 300, and generate the second traffic sign recognition result.

In an embodiment of this application, the training module 300 may be the LSTM module 310 shown in FIG. 2 or the CNN module 300a shown in FIG. 3; their specific functions and electrical connections are described with reference to FIGS. 2 and 3 and are not repeated here.
The traffic sign recognition system 10 provided in the embodiments of this application can be used in L3, L4, or L5 autonomous driving and can recognize traffic signs based on the behavior of nearby vehicles together with the camera module.

As shown in FIG. 5, an embodiment of this application further provides a vehicle 30 that includes the traffic sign recognition system 10. The vehicle 30 can recognize traffic signs from information collected by different types of on-board sensors; compared with related techniques that recognize signs with a camera alone, training traffic sign recognition with additional sensors improves the accuracy of the result.

Although many features and advantages of the present technique, together with details of the structure and function of this application, have been set forth in the foregoing description, the disclosure is illustrative only, and changes may be made in detail, especially in matters of shape, size, and arrangement of parts, within the principles of this application, up to and including the full extent established by the broad general meaning of the terms used in the claims. It should therefore be understood that the exemplary embodiments described above may be modified within the scope of the claims.

Claims (20)

  1. A traffic sign recognition system, comprising:
    a camera module for obtaining a first traffic sign recognition result;
    a sensor for obtaining behavior information of the ego vehicle and nearby vehicles;
    a training module connected to the sensor, the training module being configured to output traffic sign recognition parameters based on the behavior information of the ego vehicle and nearby vehicles; and
    a recurrent neural network module connected to the training module and the camera module;
    wherein the recurrent neural network module is configured to output a second traffic sign recognition result based on the traffic sign recognition parameters and the first traffic sign recognition result.
  2. The traffic sign recognition system of claim 1, wherein the training module is a long short-term memory neural network module.
  3. The traffic sign recognition system of claim 2, wherein the long short-term memory neural network module comprises an information input layer;
    the information input layer is connected to the sensor to obtain the behavior information of the ego vehicle and nearby vehicles.
  4. The traffic sign recognition system of claim 3, wherein the long short-term memory neural network module further comprises an information filtering layer;
    the information filtering layer is configured to output the traffic sign recognition parameters based on the behavior information of the ego vehicle and nearby vehicles.
  5. The traffic sign recognition system of claim 4, wherein the traffic sign recognition parameters comprise:
    the position of a vehicle's center, its yaw angle, speed, yaw rate, acceleration, width and height, and class.
  6. The traffic sign recognition system of claim 1, further comprising:
    a Kalman filter connected to the recurrent neural network module, wherein the Kalman filter is configured to fuse the first traffic sign recognition result with the second traffic sign recognition result to generate a third traffic sign recognition result.
  7. The traffic sign recognition system of claim 1, wherein the training module comprises a first convolutional neural network module and a second convolutional neural network module, the first convolutional neural network module being connected to the camera module and the recurrent neural network module, and the second convolutional neural network module being connected to the sensor and the recurrent neural network module;
    wherein the camera module is configured to obtain image information;
    the first convolutional neural network module is configured to generate the first traffic sign recognition result from the image information; and
    the second convolutional neural network module is configured to output the traffic sign recognition parameters based on the behavior information of the ego vehicle and nearby vehicles.
  8. A traffic sign recognition method, comprising:
    obtaining a first traffic sign recognition result;
    obtaining behavior information of the ego vehicle and nearby vehicles;
    outputting traffic sign recognition parameters based on the behavior information of the ego vehicle and nearby vehicles; and
    outputting a second traffic sign recognition result based on the traffic sign recognition parameters and the first traffic sign recognition result, the training parameters for the second traffic sign recognition result comprising the traffic sign recognition parameters and the first traffic sign recognition result.
  9. The traffic sign recognition method of claim 8, further comprising:
    outputting the traffic sign recognition parameters based on the behavior information of the ego vehicle and nearby vehicles.
  10. The traffic sign recognition method of claim 9, wherein the traffic sign recognition parameters comprise:
    the position of a vehicle's center, its yaw angle, speed, yaw rate, acceleration, width and height, and class.
  11. The traffic sign recognition method of claim 8, further comprising:
    fusing the first traffic sign recognition result with the second traffic sign recognition result to generate a third traffic sign recognition result.
  12. The traffic sign recognition method of claim 11, further comprising:
    obtaining image information; and
    generating the first traffic sign recognition result from the image information.
  13. The traffic sign recognition method of claim 12, further comprising:
    outputting the traffic sign recognition parameters based on the behavior information of the ego vehicle and nearby vehicles.
  14. A vehicle, comprising:
    a traffic sign recognition system, the traffic sign recognition system comprising:
    a camera module for obtaining a first traffic sign recognition result;
    a sensor for obtaining behavior information of the ego vehicle and nearby vehicles;
    a training module connected to the sensor, the training module being configured to output traffic sign recognition parameters based on the behavior information of the ego vehicle and nearby vehicles; and
    a recurrent neural network module connected to the training module and the camera module;
    wherein the recurrent neural network module is configured to output a second traffic sign recognition result based on the traffic sign recognition parameters and the first traffic sign recognition result.
  15. The vehicle of claim 14, wherein the training module is a long short-term memory neural network module.
  16. The vehicle of claim 15, wherein the long short-term memory neural network module comprises an information input layer;
    the information input layer is connected to the sensor to obtain the behavior information of the ego vehicle and nearby vehicles.
  17. The vehicle of claim 16, wherein the long short-term memory neural network module further comprises an information filtering layer;
    the information filtering layer is configured to output the traffic sign recognition parameters based on the behavior information of the ego vehicle and nearby vehicles.
  18. The vehicle of claim 14, wherein the traffic sign recognition parameters comprise:
    the position of a vehicle's center, its yaw angle, speed, yaw rate, acceleration, width and height, and class.
  19. The vehicle of claim 14, wherein the vehicle further comprises:
    a Kalman filter connected to the recurrent neural network module, wherein the Kalman filter is configured to fuse the first traffic sign recognition result with the second traffic sign recognition result to generate a third traffic sign recognition result.
  20. The vehicle of claim 14, wherein the training module comprises a first convolutional neural network module and a second convolutional neural network module, the first convolutional neural network module being connected to the camera module and the recurrent neural network module, and the second convolutional neural network module being connected to the sensor and the recurrent neural network module;
    wherein the camera module is configured to obtain image information;
    the first convolutional neural network module is configured to generate the first traffic sign recognition result from the image information; and
    the second convolutional neural network module is configured to output the traffic sign recognition parameters based on the behavior information of the ego vehicle and nearby vehicles.
PCT/CN2022/078052 2021-06-17 2022-02-25 System, method, and vehicle for recognition of traffic signs WO2022262307A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202280001829.6A 2021-06-17 2022-02-25 System, method, and vehicle for recognition of traffic signs

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US17/350,251 2021-06-17
US17/350,251 US20220405517A1 (en) 2021-06-17 2021-06-17 System, method, and vehicle for recognition of traffic signs

Publications (1)

Publication Number Publication Date
WO2022262307A1 true WO2022262307A1 (zh) 2022-12-22

Family

ID=84489236

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/078052 WO2022262307A1 (zh) 2021-06-17 2022-02-25 交通标识识别系统、方法及车辆

Country Status (3)

Country Link
US (1) US20220405517A1 (zh)
CN (1) CN116868246A (zh)
WO (1) WO2022262307A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102956118A * 2011-08-24 2013-03-06 Ford Global Technologies, LLC Device and method for traffic signal recognition
CN103832282A * 2014-02-25 2014-06-04 Great Wall Motor Company Limited Intelligent vehicle speed-limiting system and vehicle speed-limiting method
CN105957379A * 2016-05-30 2016-09-21 Leshi Holding (Beijing) Co., Ltd. Traffic information recognition method and device, and vehicle
DE102015210117A1 * 2015-06-02 2016-12-08 Conti Temic Microelectronic Gmbh Driver assistance device for traffic sign recognition for a motor vehicle, and method for checking a possible misrecognition of a traffic sign recognized by the driver assistance device
CN107679508A * 2017-10-17 2018-02-09 Guangzhou Automobile Group Co., Ltd. Traffic sign detection and recognition method, device, and system
CN109118797A * 2018-10-29 2019-01-01 Baidu Online Network Technology (Beijing) Co., Ltd. Information sharing method, apparatus, device, and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160162743A1 (en) * 2014-12-05 2016-06-09 Magna Electronics Inc. Vehicle vision system with situational fusion of sensor data
US10235601B1 (en) * 2017-09-07 2019-03-19 7D Labs, Inc. Method for image analysis
US10929696B2 (en) * 2018-04-19 2021-02-23 Here Global B.V. Method, apparatus, and system for determining a negative observation of a road feature
US11308395B2 (en) * 2018-04-27 2022-04-19 Alibaba Group Holding Limited Method and system for performing machine learning
TWI745752B * 2019-09-23 2021-11-11 MiTAC Digital Technology Corp. Driving assistance method and system, and computer program product
CN113096411A * 2021-03-17 2021-07-09 Wuhan University Vehicle collision warning method at intersections based on an Internet-of-Vehicles environment system


Also Published As

Publication number Publication date
CN116868246A (zh) 2023-10-10
US20220405517A1 (en) 2022-12-22


Legal Events

Date Code Title Description

WWE: WIPO information: entry into national phase (Ref document number: 202280001829.6; Country of ref document: CN)

121: EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 22823800; Country of ref document: EP; Kind code of ref document: A1)

NENP: Non-entry into the national phase (Ref country code: DE)

122: EP: PCT application non-entry in European phase (Ref document number: 22823800; Country of ref document: EP; Kind code of ref document: A1)