CN110889378B - Multi-view fusion traffic sign detection and identification method and system thereof - Google Patents


Info

Publication number: CN110889378B
Application number: CN201911193295.8A
Authority: CN (China)
Legal status: Active (granted)
Other versions: CN110889378A (Chinese-language application publication)
Inventor: 张春阳 (Zhang Chunyang)
Assignee (original and current): Hunan Shuaiwei Control Technology Co., Ltd.
Prior art keywords: feature, training, different visual angles, traffic sign
Application filed by Hunan Shuaiwei Control Technology Co., Ltd.; application published as CN110889378A, grant published as CN110889378B.

Classifications

    • G06V 20/582 — Scenes; context exterior to a vehicle from sensors mounted on the vehicle; recognition of traffic signs
    • G06F 18/214 — Pattern recognition; generating training patterns, e.g. bagging or boosting
    • G06F 18/253 — Pattern recognition; fusion techniques of extracted features
    • Y02T 10/40 — Climate change mitigation technologies related to transportation; engine management systems

Abstract

The invention discloses a multi-view fusion traffic sign detection and identification method and system for the unmanned-driving field. The method comprises the following steps: S1, acquiring traffic sign images from different visual angles in real time; S2, inputting the traffic sign images from different visual angles into a trained neural network model to obtain feature maps for the different visual angles; S3, establishing correspondences between the feature maps of the different visual angles, performing feature fusion, then performing traffic sign detection and information identification on the multi-view fusion result, and sending the identification result to the auxiliary driving system. The method uses a neural network model to detect and identify traffic signs and combines it with a multi-view data fusion algorithm, so that the model can process traffic sign data from multiple angles. It is therefore better suited to complex real driving environments, can avoid the influence of environmental occlusion and clutter interference, and improves detection precision through the fusion of view data.

Description

Multi-view fusion traffic sign detection and identification method and system thereof
Technical Field
The invention relates to the technical field of unmanned intelligent vehicles, in particular to a multi-view fusion traffic sign detection and identification method and a system thereof.
Background
With the rise of unmanned intelligent vehicles, guaranteeing road safety during unmanned driving has become an urgent problem. Traffic signs beside the road provide real and accurate traffic information in real time, and operations such as the vehicle's driving speed, advancing, and steering can be assisted according to this road traffic information. Therefore, accurately detecting and identifying traffic signs and assisting in controlling the intelligent vehicle is a key technology in the unmanned-driving field.
Currently, target detection algorithms based on deep learning are the main way to realize traffic sign recognition and detection. A traffic sign recognition system mainly acquires natural scene images through cameras, sensors, and other equipment mounted on the intelligent vehicle, then detects and interprets the signs in the scene in real time through image processing, pattern recognition, and related technologies, and finally feeds back recognition information such as prohibitions, warnings, and indications in time so that the vehicle can be controlled effectively.
However, the prior art still processes a single image during traffic sign recognition to obtain feedback. Single-view data can suffer from incomplete information due to clutter and occlusion during processing, and a single-image processing approach clearly cannot satisfy the safety requirements of an unmanned vehicle.
Therefore, providing a multi-view fusion traffic sign detection and recognition method and system thereof is a problem that needs to be solved by those skilled in the art.
Disclosure of Invention
In view of the above, the invention provides a multi-view fusion traffic sign detection and identification method and system. Multiple cameras mounted on an intelligent vehicle acquire multi-view data of traffic signs, providing richer traffic sign information. Through fusion of the multi-view data, combined with a network model of simple structure and high detection precision, the invention solves the problem of real-time identification and detection of traffic signs by an intelligent vehicle in a real driving environment with occlusion and interference.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
a multi-view fusion traffic sign detection and identification method, the identified result is sent to an auxiliary driving system, comprising the following steps:
s1, acquiring traffic sign images of different visual angles in real time;
s2, inputting traffic sign images of different visual angles into a trained neural network model to obtain feature images of different visual angles;
s3, establishing corresponding relations among feature graphs of different visual angles, carrying out feature fusion, carrying out traffic sign detection and information identification after obtaining a multi-view fusion result, and sending the identification result to the auxiliary driving system.
Preferably, the specific content of establishing the correspondence between feature maps of different visual angles in S3 is:

$$x_i^{u,\text{fused}} = x_i^u + \sum_{j} w_{j,i}\, x_j^v$$

where $x_i^u$ is any feature point on the view plane u, $x_j^v$ is any feature point on the view plane v, and $w_{j,i}$ is a scalar to be determined.

For a particular point i, only one $w_{j,i}$ is positive while the rest are all 0; and $w_{j,i}$ is positive only when $x_i^u$ in the u plane and $x_j^v$ in the v plane correspond to the same 3D point.
Preferably, the training of the neural network model comprises training for feature extraction according to training sample images and multi-view fusion training.
Preferably, the step of performing multi-view fusion training by the neural network includes the following:
(1) Inputting training sample pictures from different visual angles into the neural network to obtain feature maps for the different visual angles, and fusing the feature maps;
(2) Comparing the ground-truth feature map with the feature maps of the different visual angles and with the fused feature map respectively to obtain comparison results, and adjusting the neural network according to the comparison results.
A multi-view fusion traffic sign detection and recognition system connected with an auxiliary driving system, comprising: an image acquisition subsystem, a feature extraction subsystem, and a sign recognition subsystem;
the image acquisition subsystem is used for acquiring traffic sign images from different visual angles in real time;
the feature extraction subsystem is used for training a neural network model and inputting traffic sign images from different visual angles into the trained neural network model to obtain feature maps for the different visual angles;
the sign recognition subsystem is used for performing feature fusion on the extracted feature maps of the different visual angles, performing traffic sign detection and information recognition after the multi-view fusion result is obtained, and sending the recognition result to the auxiliary driving system.
Preferably, the feature extraction subsystem comprises a model training module and a feature extraction module;
the model training module is used for acquiring training data, training the neural network model and obtaining a trained neural network model;
and the feature extraction module inputs the traffic sign images with different visual angles into the trained neural network model to obtain feature images with different visual angles.
Preferably, the model training module comprises image feature extraction training and multi-view fusion training;
the image feature extraction training is used for training the feature extraction process;
the multi-view fusion training is used for training the multi-view fusion process.
Preferably, the sign recognition subsystem comprises a correspondence establishing module, a feature fusion module, a sign detection module, and an information recognition module;
the correspondence establishing module is used for establishing the correspondence between feature maps of different visual angles;
the feature fusion module is used for fusing the feature maps of the different visual angles to obtain a multi-view fusion result;
the sign detection module is used for detecting traffic signs according to the multi-view fusion result to obtain a detection result;
and the information identification module is used for identifying the information represented by the traffic sign according to the detection result and further transmitting the identification result to the auxiliary driving system.
Compared with the prior art, the invention discloses a multi-view fusion traffic sign detection and identification method and a system thereof, wherein the method fuses characteristic graphs obtained after multi-view data are processed by a neural network model, so that the neural network model can process traffic sign data of multiple angles to obtain more accurate target detection and identification results, the obtained neural network model is more suitable for complex real driving environments, the influence of environmental shielding and sundry interference can be avoided, and the detection precision is improved through fusion of the visual data.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings required by the embodiments or by the description of the prior art are briefly introduced below. It is apparent that the drawings in the following description show only embodiments of the present invention, and a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of a multi-view fusion traffic sign detection and identification method provided by the invention;
FIG. 2 is a flow chart of two-view fusion training of a neural network in a multi-view fusion traffic sign detection and identification method provided by the invention;
FIG. 3 is a schematic view of a P-point spatial imaging structure according to a first embodiment of the present invention;
fig. 4 is a schematic diagram of feature point fusion according to a first embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Embodiment one:
the embodiment of the invention discloses a multi-view fusion traffic sign detection and identification method, wherein the identified result is sent to an auxiliary driving system, as shown in fig. 1, and the method comprises the following steps:
s1, acquiring traffic sign images of different visual angles in real time;
s2, inputting traffic sign images of different visual angles into a trained neural network model to obtain feature images of different visual angles;
s3, establishing corresponding relations among feature graphs of different view angles, carrying out feature fusion, carrying out traffic sign detection and information identification after obtaining a multi-view fusion result, and sending the identification result to an auxiliary driving system.
Preferably, the specific content of establishing the correspondence between feature maps of different visual angles in S3 is:

$$x_i^{u,\text{fused}} = x_i^u + \sum_{j} w_{j,i}\, x_j^v$$

where $x_i^u$ is any feature point on the view plane u, $x_j^v$ is any feature point on the view plane v, and $w_{j,i}$ is a scalar to be determined.

For a particular point i, only one $w_{j,i}$ is positive while the rest are all 0; and $w_{j,i}$ is positive only when $x_i^u$ in the u plane and $x_j^v$ in the v plane correspond to the same 3D point.
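Under this reading of the correspondence (the original equation is rendered as an image and reconstructed here from the surrounding definitions), the fusion of two flattened feature maps can be sketched with NumPy. The array sizes and weight values are illustrative assumptions.

```python
import numpy as np

def fuse_feature_points(x_u, x_v, w):
    """Fuse flattened feature maps from view planes u and v:
    fused_i = x_u[i] + sum_j w[j, i] * x_v[j].
    Ideally w[j, i] is positive only when point j on plane v and point i
    on plane u correspond to the same 3D point, and 0 otherwise."""
    return x_u + w.T @ x_v

# Example: feature point i=2 on plane u matches feature point j=0 on plane v
x_u = np.array([0.1, 0.0, 0.3, 0.2])   # responses on plane u
x_v = np.array([0.9, 0.05, 0.0])       # responses on plane v
w = np.zeros((3, 4))
w[0, 2] = 1.0                          # the single positive weight (ideal case)
fused = fuse_feature_points(x_u, x_v, w)
# fused -> [0.1, 0.0, 1.2, 0.2]: only the matched point is reinforced
```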
Preferably, the training of the neural network model comprises training for feature extraction according to training sample images and multi-view fusion training.
Preferably, as shown in fig. 2, the neural network performs multi-view fusion training in the following steps:
(1) Inputting training sample pictures from different visual angles into the neural network to obtain feature maps for the different visual angles, and fusing the feature maps;
(2) Comparing the ground-truth feature map with the feature maps of the different visual angles and with the fused feature map respectively to obtain comparison results, and adjusting the neural network according to the comparison results.
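Steps (1) and (2) can be sketched as a training loss: compare the ground-truth feature map with each per-view map and with the fused map. The use of mean squared error as the comparison metric is an assumption for this sketch; the text only states that the maps are compared with the ground truth.

```python
import numpy as np

def fusion_training_loss(gt_map, per_view_maps, fused_map):
    """Compare the ground-truth feature map with each view's feature map
    and with the fused feature map; the summed MSE is the comparison
    result used to adjust the network (MSE is an assumed metric)."""
    mse = lambda a, b: float(np.mean((a - b) ** 2))
    return sum(mse(gt_map, m) for m in per_view_maps) + mse(gt_map, fused_map)

gt = np.ones((4, 4))                              # ground-truth feature map
view_maps = [np.ones((4, 4)), np.zeros((4, 4))]   # per-view feature maps
fused = np.full((4, 4), 0.5)                      # fused feature map
loss = fusion_training_loss(gt, view_maps, fused) # 0.0 + 1.0 + 0.25 = 1.25
```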
What needs to be further explained is:
As shown in fig. 3, three points A, B, and P exist in three-dimensional space. The projections of point P on the two different viewing planes u and v are $x^u$ and $x^v$ respectively, where $x^u$ and $x^v$ denote pixel positions. Points A and B project onto the viewing plane u at the same point as P, while their projections onto the viewing plane v lie with $x^v$ on the same straight line l.
The heatmaps of the view planes u and v contain the feature points $x_i^u$ and $x_j^v$ respectively. The core of feature fusion between different views is to establish the correspondence between the views:

$$x_i^{u,\text{fused}} = x_i^u + \sum_{j} w_{j,i}\, x_j^v$$

where $w_{j,i}$ is a scalar to be determined. Ideally, for a particular point i, only one $w_{j,i}$ is positive while the rest are all 0, and $w_{j,i}$ is positive only when $x_i^u$ in the u plane and $x_j^v$ in the v plane correspond to the same 3D point.
If only the position of $x^u$ in the u plane is known, the geometric relationship determines only that point P lies on the straight line through the camera center and $x^u$. The projection l of this straight line onto the plane v can then be determined, and the projection $x^v$ of point P must lie on the line l. In the feature map, the position of the true $x^v$ has a large response value, whereas the other points on the line l have almost no response, so $x_i^u$ is fused with all the features on the line l, as shown in fig. 4. At the same time, feature fusion is implemented as a layer of the neural network, so that the network automatically learns the weights from the training data, and more accurate target detection information is obtained by fusing the multi-view information.
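A minimal sketch of this epipolar fusion, assuming the pixels of the line l in the v-plane feature map are already known and using fixed weights in place of the ones the fusion layer would learn:

```python
import numpy as np

def fuse_along_epipolar_line(feat_u, feat_v, line_pixels, weights, i):
    """Fuse feature point i on plane u with every v-plane feature lying on
    its epipolar line l. In the described method the weights are learned
    by the fusion layer; here they are fixed scalars for the sketch."""
    responses = np.array([feat_v[r, c] for r, c in line_pixels])
    return feat_u[i] + float(weights @ responses)

feat_u = np.array([0.2, 0.7])             # two feature points on plane u
feat_v = np.zeros((5, 5))                 # v-plane heatmap
feat_v[2, 3] = 1.0                        # strong response at the true x_v
line = [(2, c) for c in range(5)]         # epipolar line l: row 2 of plane v
w = np.array([0.0, 0.0, 0.0, 1.0, 0.0])   # ideally positive only at the match
fused = fuse_along_epipolar_line(feat_u, feat_v, line, w, i=1)  # 0.7 + 1.0
```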
Further, specific contents of training the neural network model by taking the R-FCN neural network model as an example include:
and acquiring driving video data containing traffic signs of different scenes, marking the data by using an image marking tool Labelling, and establishing a traffic sign data set based on the intelligent vehicle platform. And after the equipment condition is finished, acquiring actual road data to perfect a traffic sign data set. Because the data set contains continuous traffic sign labeling images, the influences of undersize, shielding, shape distortion and the like of traffic signs in complex road scenes can be well eliminated, and a reliable data basis is provided for traffic sign detection and identification based on the intelligent vehicle platform.
Deep convolutional neural networks possess powerful feature extraction and expression capabilities, but the network itself requires a large or even massive amount of data to drive model training; otherwise training may fall into overfitting. The traffic sign data set produced in this embodiment comprises 6000 pictures and is expanded: effective data expansion enlarges the number of training samples and increases their diversity, which avoids overfitting and improves model performance. The R-FCN framework uses image-level flipping for data-set expansion, which doubles the original data set.
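The image-level flip used for data-set expansion can be sketched as follows. The `[x_min, y_min, x_max, y_max]` bounding-box format is an assumption for this sketch, as the text does not specify how the annotations are stored.

```python
import numpy as np

def flip_image_and_boxes(image, boxes):
    """Horizontally flip an image and its [x_min, y_min, x_max, y_max]
    bounding boxes; each flipped sample doubles the data set."""
    w = image.shape[1]
    flipped = image[:, ::-1].copy()
    flipped_boxes = [[w - x2, y1, w - x1, y2] for x1, y1, x2, y2 in boxes]
    return flipped, flipped_boxes

img = np.arange(12).reshape(3, 4)             # toy 3x4 "image"
f_img, f_boxes = flip_image_and_boxes(img, [[0, 0, 2, 3]])
# the left-edge box [0, 0, 2, 3] becomes the right-edge box [2, 0, 4, 3]
```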
Besides adding training data and expanding the data to prevent network overfitting, a subset can be randomly split off from the training set before model training to serve as a validation set, so that model prediction performance can be evaluated during the training stage. After each round or each batch of training, the network is run forward on the training set and the validation set respectively to predict their sample labels, and a learning curve is drawn to test the generalization ability of the model.
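A minimal sketch of carving a validation subset off the training set. The 10% fraction and the fixed seed are assumptions for illustration; the text does not specify the split ratio.

```python
import random

def split_train_val(samples, val_fraction=0.1, seed=0):
    """Randomly split off a validation subset from the training data so
    prediction performance can be tracked during training."""
    rng = random.Random(seed)
    indices = list(range(len(samples)))
    rng.shuffle(indices)                       # deterministic shuffle
    n_val = int(len(samples) * val_fraction)
    val = [samples[i] for i in indices[:n_val]]
    train = [samples[i] for i in indices[n_val:]]
    return train, val

# 6000 samples, matching the data-set size mentioned in the text
train_set, val_set = split_train_val(list(range(6000)))
```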
Even after data expansion, the traffic sign samples available for training are still few, while deep learning target detection needs a large labeled training set; with too few data samples, even a very good network structure cannot achieve a high detection effect. The idea of fine-tuning a model solves this problem well: the ResNet model pre-trained on ImageNet is fine-tuned and then applied to the established intelligent vehicle traffic sign data set. In the process of training the R-FCN model, images are resized to a certain size before being input into the base network for feature extraction, and this size greatly influences the traffic sign recognition results of the trained model.
In this embodiment, a GPU server platform is used for training and a Jetson TX1 embedded platform for testing. R-FCN is combined with a ResNet-50 network for model training; 5000 images are selected for training and 1000 for testing, over 500,000 iterations, with four resize settings including 600 x 600, 1000 x 1000, and 1111 x 889 (original image size training); the other parameter settings are the same, the learning rate is set to 0.01, and it is decayed once every 100,000 iterations with a decay amplitude of one tenth.
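The stated schedule (learning rate 0.01, decayed by one tenth every 100,000 iterations over a 500,000-iteration run; the original's "10W"/"50W" use the Chinese convention W = 10,000) can be written as a step-decay function:

```python
def learning_rate(iteration, base_lr=0.01, decay_every=100_000, factor=0.1):
    """Step schedule: start at base_lr and multiply by `factor` once every
    `decay_every` iterations."""
    return base_lr * factor ** (iteration // decay_every)

# Spot-check the schedule over the 500,000-iteration run described above
lrs = [learning_rate(i) for i in (0, 99_999, 100_000, 400_000)]
```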
On the basis of the originally trained neural network model, a multi-view fusion feature layer is added, multi-view traffic sign data is input, and feature fusion is treated end-to-end as weight learning of a neural network layer, allowing the network to learn the weights freely from the training data.
Embodiment two:
the embodiment discloses a multi-view fusion traffic sign detection and identification system, which is connected with an auxiliary driving system and is characterized by comprising: an image acquisition subsystem, a feature extraction subsystem and a mark recognition subsystem;
the image acquisition subsystem is used for acquiring traffic sign images of different visual angles in real time;
the feature extraction subsystem is used for training the neural network model, inputting traffic sign images with different visual angles into the trained neural network model, and obtaining feature images with different visual angles;
the sign recognition subsystem is used for carrying out feature fusion on the extracted feature graphs with different visual angles, carrying out traffic sign detection and information recognition after a multi-view fusion result is obtained, and sending the recognition result to the auxiliary driving system.
Preferably, the feature extraction subsystem comprises a model training module and a feature extraction module;
the model training module is used for acquiring training data, training the neural network model and obtaining a trained neural network model;
and the feature extraction module is used for inputting the traffic sign images with different visual angles into the trained neural network model to obtain feature images with different visual angles.
Preferably, the model training module comprises image feature extraction training and multi-view fusion training;
image feature extraction training for training the feature extraction process;
and the multi-view fusion training is used for training the multi-view fusion process.
Preferably, the sign recognition subsystem comprises a correspondence establishing module, a feature fusion module, a sign detection module, and an information recognition module;
the correspondence establishing module is used for establishing the correspondence between the feature maps of different visual angles;
the feature fusion module is used for fusing the feature maps of the different visual angles to obtain a multi-view fusion result;
the sign detection module is used for detecting traffic signs according to the multi-view fusion result to obtain a detection result;
and the information identification module is used for identifying the information represented by the traffic sign according to the detection result and further transmitting the identification result to the auxiliary driving system.
In this specification, the embodiments are described in a progressive manner, each embodiment focusing mainly on its differences from the other embodiments; for identical and similar parts, the embodiments may refer to one another. Since the device disclosed in an embodiment corresponds to the method disclosed in an embodiment, its description is relatively brief, and the relevant points can be found in the description of the method section.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (2)

1. A multi-view fusion traffic sign detection and recognition method, the recognized result being transmitted to a driving assistance system, characterized by comprising the steps of:
s1, acquiring traffic sign images of different visual angles in real time;
s2, inputting traffic sign images of different visual angles into a trained neural network model to obtain feature images of different visual angles;
s3, establishing corresponding relations among feature graphs of different visual angles, carrying out feature fusion, carrying out traffic sign detection and information identification after obtaining a multi-view fusion result, and sending the identification result to the auxiliary driving system;
and S3, establishing the correspondence between feature maps of different visual angles, wherein the specific content of the feature fusion is:

$$x_i^{u,\text{fused}} = x_i^u + \sum_{j} w_{j,i}\, x_j^v$$

wherein $x_i^u$ is any feature point on the view plane u, $x_j^v$ is any feature point on the view plane v, and $w_{j,i}$ is a scalar to be determined;

for a particular point i, only one $w_{j,i}$ is positive while the rest are all 0; and $w_{j,i}$ is positive only when $x_i^u$ in the u plane and $x_j^v$ in the v plane correspond to the same 3D point;
training of the neural network model comprises training of feature extraction and multi-view fusion training according to training sample images;
the step of the neural network performing multi-view fusion training comprises the following:
(1) inputting training sample pictures from different visual angles into the neural network to obtain feature maps for the different visual angles, and fusing the feature maps;
(2) comparing the ground-truth feature map with the feature maps of the different visual angles and with the fused feature map respectively to obtain comparison results, and adjusting the neural network according to the comparison results.
2. A multi-view fusion traffic sign detection and recognition system connected with an auxiliary driving system, comprising: an image acquisition subsystem, a feature extraction subsystem and a mark recognition subsystem;
the image acquisition subsystem is used for acquiring traffic sign images of different visual angles in real time;
the feature extraction subsystem is used for training a neural network model, inputting traffic sign images with different visual angles into the trained neural network model, and obtaining feature images with different visual angles;
the sign recognition subsystem is used for carrying out feature fusion on the extracted feature graphs with different visual angles, carrying out traffic sign detection and information recognition after a multi-view fusion result is obtained, and sending the recognition result to the auxiliary driving system;
the feature extraction subsystem comprises a model training module and a feature extraction module;
the model training module is used for acquiring training data, training the neural network model and obtaining a trained neural network model;
the feature extraction module inputs traffic sign images with different visual angles into the trained neural network model to obtain feature images with different visual angles;
the model training module comprises image feature extraction training and multi-view fusion training;
the image feature extraction training is used for training the feature extraction process;
the multi-view fusion training is used for training a multi-view fusion process;
the sign recognition subsystem comprises a correspondence establishing module, a feature fusion module, a sign detection module, and an information recognition module;
the corresponding relation establishing module is used for establishing corresponding relation between feature graphs of different visual angles;
the feature fusion module is used for fusing the feature images of different visual angles, comparing the feature image true values with the feature images of different visual angles and the fused feature images respectively to obtain a comparison result, and further adjusting the neural network according to the comparison result to obtain a multi-view fusion result;
the sign detection module is used for detecting traffic signs according to the multi-view fusion result to obtain a detection result;
and the information identification module is used for identifying the information represented by the traffic sign according to the detection result and further transmitting the identification result to the auxiliary driving system.
CN201911193295.8A 2019-11-28 2019-11-28 Multi-view fusion traffic sign detection and identification method and system thereof Active CN110889378B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911193295.8A CN110889378B (en) 2019-11-28 2019-11-28 Multi-view fusion traffic sign detection and identification method and system thereof


Publications (2)

Publication Number — Publication Date
CN110889378A (en) — 2020-03-17
CN110889378B (en) — 2023-06-09


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111832388B (en) * 2020-05-22 2022-07-26 南京邮电大学 Method and system for detecting and identifying traffic sign in vehicle running

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110119768A (en) * 2019-04-24 2019-08-13 苏州感测通信息科技有限公司 Visual information emerging system and method for vehicle location

Family Cites Families (15)

Publication number Priority date Publication date Assignee Title
WO2006042142A2 (en) * 2004-10-07 2006-04-20 Bernard Widrow Cognitive memory and auto-associative neural network based pattern recognition and searching
KR101912914B1 (en) * 2014-01-17 2018-10-29 주식회사 만도 Method and system for recognition of speed limit sign using front camera
CN104063877B (en) * 2014-07-16 2017-05-24 中电海康集团有限公司 Hybrid judgment identification method for candidate lane lines
CN104463933A (en) * 2014-11-05 2015-03-25 南京师范大学 Three-view-based automatic 2.5-dimensional cartoon animation generation method
CN104535070B (en) * 2014-12-26 2017-11-14 上海交通大学 Graph data structure, collection and processing system and method in high-precision
CN104700099B (en) * 2015-03-31 2017-08-11 百度在线网络技术(北京)有限公司 The method and apparatus for recognizing traffic sign
WO2017149526A2 (en) * 2016-03-04 2017-09-08 May Patents Ltd. A method and apparatus for cooperative usage of multiple distance meters
CN106980855B (en) * 2017-04-01 2020-04-17 公安部交通管理科学研究所 Traffic sign rapid identification and positioning system and method
CN108154102B (en) * 2017-12-21 2021-12-10 安徽师范大学 Road traffic sign identification method
CN109405824A (en) * 2018-09-05 2019-03-01 武汉契友科技股份有限公司 A kind of multi-source perceptual positioning system suitable for intelligent network connection automobile
CN109374008A (en) * 2018-11-21 2019-02-22 深动科技(北京)有限公司 A kind of image capturing system and method based on three mesh cameras
CN109657584B (en) * 2018-12-10 2022-12-09 西安汇智信息科技有限公司 Improved LeNet-5 fusion network traffic sign identification method for assisting driving
CN110059691B (en) * 2019-03-29 2022-10-14 南京邮电大学 Multi-view distorted document image geometric correction method based on mobile terminal
CN110070139B (en) * 2019-04-28 2021-10-19 吉林大学 Small sample in-loop learning system and method facing automatic driving environment perception
CN110210362A (en) * 2019-05-27 2019-09-06 中国科学技术大学 A kind of method for traffic sign detection based on convolutional neural networks

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
CN110119768A (en) * 2019-04-24 2019-08-13 苏州感测通信息科技有限公司 Visual information emerging system and method for vehicle location

Also Published As

Publication number Publication date
CN110889378A (en) 2020-03-17

Similar Documents

Publication Publication Date Title
EP3822852B1 (en) Method, apparatus, computer storage medium and program for training a trajectory planning model
CN111126453A (en) Fine-grained image classification method and system based on attention mechanism and cut filling
CN109886210A (en) A kind of traffic image recognition methods, device, computer equipment and medium
DE102018205915A1 (en) Monocular localization in urban environments using road markings
CN107481292A (en) The attitude error method of estimation and device of vehicle-mounted camera
KR20210078530A (en) Lane property detection method, device, electronic device and readable storage medium
KR20210080459A (en) Lane detection method, apparatus, electronic device and readable storage medium
CN111832410B (en) Forward train detection method based on fusion of vision and laser radar
CN111160205A (en) Embedded multi-class target end-to-end unified detection method for traffic scene
CN111091023A (en) Vehicle detection method and device and electronic equipment
CN112712052A (en) Method for detecting and identifying weak target in airport panoramic video
CN111738071B (en) Inverse perspective transformation method based on motion change of monocular camera
CN110889378B (en) Multi-view fusion traffic sign detection and identification method and system thereof
CN110472508B (en) Lane line distance measurement method based on deep learning and binocular vision
CN109389095B (en) Pavement marking image recognition method and training method
CN112785610B (en) Lane line semantic segmentation method integrating low-level features
CN112654998B (en) Lane line detection method and device
CN114120270A (en) Point cloud target detection method based on attention and sampling learning
Wang et al. Lane detection algorithm based on temporal–spatial information matching and fusion
CN116259040A (en) Method and device for identifying traffic sign and electronic equipment
CN115393655A (en) Method for detecting industrial carrier loader based on YOLOv5s network model
WO2022243337A2 (en) System for detection and management of uncertainty in perception systems, for new object detection and for situation anticipation
CN112215042A (en) Parking space limiter identification method and system and computer equipment
CN115063594B (en) Feature extraction method and device based on automatic driving
US20230342944A1 (en) System and Method for Motion Prediction in Autonomous Driving

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant