CN110889378A - Multi-view fusion traffic sign detection and identification method and system - Google Patents


Info

Publication number
CN110889378A
CN110889378A (application CN201911193295.8A)
Authority
CN
China
Prior art keywords
training, traffic sign, fusion, different visual angles
Prior art date
Legal status
Granted
Application number
CN201911193295.8A
Other languages
Chinese (zh)
Other versions
CN110889378B (en)
Inventor
张春阳 (Zhang Chunyang)
Current Assignee
Hunan Rate Control Technology Co Ltd
Original Assignee
Hunan Rate Control Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hunan Rate Control Technology Co Ltd filed Critical Hunan Rate Control Technology Co Ltd
Priority to CN201911193295.8A priority Critical patent/CN110889378B/en
Publication of CN110889378A publication Critical patent/CN110889378A/en
Application granted granted Critical
Publication of CN110889378B publication Critical patent/CN110889378B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V 20/582 Recognition of traffic objects, e.g. traffic signs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Abstract

The invention discloses a multi-view fusion traffic sign detection and identification method and system for the field of unmanned driving. The method comprises the following steps: S1, collecting traffic sign images from different viewing angles in real time; S2, inputting the traffic sign images of the different viewing angles into a trained neural network model to obtain feature maps for the different viewing angles; and S3, establishing correspondences among the feature maps of the different viewing angles, performing feature fusion, performing traffic sign detection and information identification on the multi-view fusion result, and sending the identification result to a driving assistance system. The method detects and identifies traffic signs with a neural network model combined with a multi-view data fusion algorithm, so that the model can process traffic sign data from multiple angles. The resulting model is better suited to complex real driving environments, avoids the influence of environmental occlusion and clutter interference, and improves detection precision through the fusion of multi-view data.

Description

Multi-view fusion traffic sign detection and identification method and system
Technical Field
The invention relates to the technical field of unmanned intelligent vehicles, and in particular to a multi-view fusion traffic sign detection and identification method and a multi-view fusion traffic sign detection and identification system.
Background
With the rise of unmanned intelligent vehicles, ensuring road safety during unmanned driving has become an urgent problem. Traffic signs along the road provide real, accurate, real-time traffic information that can assist in controlling the speed, advance, steering and other operations of the vehicle. Accurately detecting and identifying traffic signs to assist in controlling an intelligent vehicle has therefore become a key technology in the field of unmanned driving.
At present, target detection algorithms based on deep learning are the main approach to traffic sign recognition and detection. A traffic sign recognition system typically acquires natural-scene images through cameras, sensors and other equipment installed on the intelligent vehicle, detects and interprets the signs in the scene in real time through image processing, pattern recognition and related techniques, and finally feeds back the recognized prohibition, warning and indication information in time to control the vehicle effectively.
In the prior art, however, traffic sign recognition still processes a single image to obtain feedback. Because of clutter and occlusion, the information in single-view data is incomplete, so processing a single image in an unmanned vehicle clearly cannot meet the safety requirement.
Therefore, providing a multi-view fusion traffic sign detection and identification method and system is a problem that urgently needs to be solved.
Disclosure of Invention
In view of the above, the invention provides a multi-view fusion traffic sign detection and identification method and system. Multiple cameras installed on the intelligent vehicle acquire multi-view data of a traffic sign, providing richer traffic sign information. By fusing the multi-view data and combining it with a network model of simple structure and high detection precision, the invention solves the problem of real-time traffic sign recognition and detection for an intelligent vehicle in an actual driving environment with occlusion and interference.
In order to achieve the purpose, the invention adopts the following technical scheme:
a multi-view fusion traffic sign detection and identification method is disclosed, and an identified result is sent to an auxiliary driving system, and the method comprises the following steps:
s1, collecting traffic sign images at different visual angles in real time;
s2, inputting the traffic sign images with different visual angles into a trained neural network model to obtain characteristic maps with different visual angles;
and S3, establishing corresponding relations among the feature graphs at different visual angles, performing feature fusion, performing traffic sign detection and information identification after obtaining a multi-view fusion result, and sending the identification result to the driving assisting system.
Preferably, the correspondence among the feature maps of different viewing angles in S3 is established as follows:

$\hat{x}_i^u = x_i^u + \sum_j w_{j,i}\, x_j^v$

where $x_i^u$ is any feature point on the viewing-angle plane $u$, $x_j^v$ is any feature point on the viewing-angle plane $v$, and $w_{j,i}$ is a scalar to be determined. For a particular point $i$, only one $w_{j,i}$ is positive and the rest are 0; $w_{j,i}$ is positive only when $x_i^u$ on the $u$ plane and $x_j^v$ on the $v$ plane correspond to the same 3D point.
Preferably, the training of the neural network model includes feature-extraction training and multi-view fusion training based on training sample images.
Preferably, the multi-view fusion training of the neural network includes the following steps:
(1) inputting training sample pictures of different viewing angles into the neural network to obtain feature maps for the different viewing angles, and fusing the feature maps;
(2) comparing the feature-map ground truth with the feature maps of the different viewing angles and with the fused feature map to obtain comparison results, and adjusting the neural network according to the comparison results.
A multi-view fusion traffic sign detection and recognition system connected with a driving assistance system comprises: an image acquisition subsystem, a feature extraction subsystem and a sign recognition subsystem;
the image acquisition subsystem is used for collecting traffic sign images from different viewing angles in real time;
the feature extraction subsystem is used for training the neural network model and inputting the traffic sign images of the different viewing angles into the trained neural network model to obtain feature maps for the different viewing angles;
and the sign recognition subsystem is used for fusing the extracted feature maps of the different viewing angles, performing traffic sign detection and information identification on the multi-view fusion result, and sending the identification result to the driving assistance system.
Preferably, the feature extraction subsystem comprises a model training module and a feature extraction module;
the model training module is used for acquiring training data and training the neural network model to obtain a trained neural network model;
and the feature extraction module is used for inputting the traffic sign images of the different viewing angles into the trained neural network model to obtain feature maps for the different viewing angles.
Preferably, the model training module performs image feature-extraction training and multi-view fusion training;
the image feature-extraction training trains the feature-extraction process;
the multi-view fusion training trains the multi-view fusion process.
Preferably, the sign recognition subsystem comprises a correspondence establishing module, a feature fusion module, a sign detection module and an information identification module;
the correspondence establishing module is used for establishing correspondences among the feature maps of the different viewing angles;
the feature fusion module is used for fusing the feature maps of the different viewing angles to obtain a multi-view fusion result;
the sign detection module is used for detecting the traffic sign according to the multi-view fusion result to obtain a detection result;
and the information identification module is used for identifying the information represented by the traffic sign according to the detection result and sending the identification result to the driving assistance system.
According to the above technical scheme, compared with the prior art, the multi-view fusion traffic sign detection and identification method and system fuse multi-view data through the feature maps produced by the neural network model, so that the model can process traffic sign data from multiple angles and obtain more accurate target detection and identification results. The resulting model is better suited to complex real driving environments, avoids the influence of environmental occlusion and clutter interference, and improves detection precision through the fusion of multi-view data.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of a multi-view fusion traffic sign detection and identification method provided by the invention;
FIG. 2 is a flow chart of a neural network two-view fusion training in the multi-view fusion traffic sign detection and recognition method provided by the invention;
FIG. 3 is a schematic diagram of a P-point spatial imaging structure according to a first embodiment of the present invention;
fig. 4 is a schematic diagram illustrating feature point fusion according to a first embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The first embodiment is as follows:
the embodiment of the invention discloses a multi-view fusion traffic sign detection and identification method, wherein an identified result is sent to an assistant driving system, and as shown in figure 1, the method comprises the following steps:
s1, collecting traffic sign images at different visual angles in real time;
s2, inputting the traffic sign images with different visual angles into a trained neural network model to obtain characteristic maps with different visual angles;
and S3, establishing corresponding relations among the feature graphs at different visual angles, performing feature fusion, performing traffic sign detection and information identification after obtaining a multi-view fusion result, and sending the identification result to an auxiliary driving system.
Preferably, the correspondence among the feature maps of different viewing angles in S3 is established as follows:

$\hat{x}_i^u = x_i^u + \sum_j w_{j,i}\, x_j^v$

where $x_i^u$ is any feature point on the viewing-angle plane $u$, $x_j^v$ is any feature point on the viewing-angle plane $v$, and $w_{j,i}$ is a scalar to be determined. For a particular point $i$, only one $w_{j,i}$ is positive and the rest are 0; $w_{j,i}$ is positive only when $x_i^u$ on the $u$ plane and $x_j^v$ on the $v$ plane correspond to the same 3D point.
Preferably, the training of the neural network model includes feature-extraction training and multi-view fusion training based on training sample images.
Preferably, as shown in FIG. 2, the multi-view fusion training of the neural network includes the following steps:
(1) inputting training sample pictures of different viewing angles into the neural network to obtain feature maps for the different viewing angles, and fusing the feature maps;
(2) comparing the feature-map ground truth with the feature maps of the different viewing angles and with the fused feature map to obtain comparison results, and adjusting the neural network according to the comparison results.
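The two-step training loop above compares each single-view feature map and the fused map against the ground-truth map. A minimal pure-Python stand-in is sketched below; the mean-squared-error comparison and the function names are assumptions for illustration, since the patent does not specify the loss function.

```python
# Sketch of the comparison step in multi-view fusion training: each per-view
# feature map and the fused map are compared against the ground-truth map,
# and the summed error would drive the network adjustment.

def mse(a, b):
    """Mean squared error between two equal-length flat feature maps."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def fusion_training_loss(view_maps, fused_map, truth_map):
    """Sum of per-view errors plus the fused-map error (illustrative)."""
    per_view = [mse(m, truth_map) for m in view_maps]
    return sum(per_view) + mse(fused_map, truth_map)

# Toy 1-D "feature maps": the true sign response is at index 1.
truth = [0.0, 1.0, 0.0]
views = [[0.0, 0.8, 0.0], [0.0, 0.6, 0.2]]
fused = [0.0, 0.9, 0.1]
loss = fusion_training_loss(views, fused, truth)
```

In a real implementation this scalar would be the objective minimized by back-propagation; here it only shows how the three comparisons of step (2) combine.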
It needs to be further explained that:
as shown in FIG. 3, let three-dimensional space have three points A, B and P, where the projection of P on two different view planes u and v is
Figure BDA0002294106170000053
And
Figure BDA0002294106170000054
wherein
Figure BDA0002294106170000055
And
Figure BDA0002294106170000056
respectively, the projection of points A and B on the viewing angle plane u is the same as point P, and the projection on the viewing angle plane v is the same as point P
Figure BDA0002294106170000057
On the same straight line l.
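The geometric relation above is the standard epipolar constraint: a point on plane u restricts its correspondence on plane v to a line. A small sketch under assumed calibration follows; the fundamental matrix `F` here is a toy value for a rectified horizontal stereo pair, not data from the patent.

```python
# Epipolar-line sketch: for homogeneous point p_u = (x, y, w) on plane u,
# the corresponding point on plane v must lie on the line l = F @ p_u.

def epipolar_line(F, p_u):
    """Return line coefficients (a, b, c) with a*x + b*y + c*w = 0 on plane v."""
    x, y, w = p_u
    return tuple(F[i][0] * x + F[i][1] * y + F[i][2] * w for i in range(3))

def on_line(line, p_v, tol=1e-9):
    """Check whether homogeneous point p_v lies on the given line."""
    a, b, c = line
    x, y, w = p_v
    return abs(a * x + b * y + c * w) < tol

# Toy fundamental matrix for two horizontally displaced, rectified cameras:
# corresponding points then share the same y coordinate.
F = [[0.0, 0.0, 0.0],
     [0.0, 0.0, -1.0],
     [0.0, 1.0, 0.0]]

line = epipolar_line(F, (3.0, 2.0, 1.0))  # line y = 2 on plane v
```

Any candidate correspondence for the point (3, 2) must then satisfy `on_line(line, ...)`, which is exactly the restriction to line l used in the fusion rule below.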
The heat maps of viewing-angle planes u and v contain the feature points $x_i^u$ and $x_j^v$, respectively. The core of feature fusion between different views is to establish the correspondence between the views:

$\hat{x}_i^u = x_i^u + \sum_j w_{j,i}\, x_j^v$

where $w_{j,i}$ is a scalar to be determined. Ideally, for a particular point $i$ only one $w_{j,i}$ is positive and the rest are 0, and $w_{j,i}$ is positive only when $x_i^u$ on the u plane and $x_j^v$ on the v plane correspond to the same 3D point.

If only the position of $x_P^u$ on the u plane is known, the geometric relation determines only that point P lies on the ray through $x_P^u$; the projection of that ray onto plane v is the straight line l, so the projection $x_P^v$ of P on plane v necessarily lies on l. In the feature map, the true $x_P^v$ position has a large response value while the other points on line l have almost no response; therefore $x_i^u$ is fused with all the features on line l, as shown in FIG. 4. Feature fusion is treated as a layer of the neural network, so that the network automatically learns the weights from the training data and obtains more accurate target detection information by fusing multi-view information.
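The fusion rule can be sketched numerically. The additive weighted-sum form and the fixed example weights below are illustrative (in the full method the weights $w_{j,i}$ are learned as a network layer, and the exact layer form in the patent is given only as an image).

```python
# Sketch of cross-view feature fusion: the response at point i on plane u is
# augmented by a weighted sum over candidate points j on its epipolar line
# in plane v. Weights are fixed here purely to illustrate the computation.

def fuse_response(x_u_i, line_responses, weights):
    """x_hat = x_u_i + sum_j w[j] * x_v[j] over points j on the epipolar line."""
    assert len(line_responses) == len(weights)
    return x_u_i + sum(w * x for w, x in zip(weights, line_responses))

# Responses sampled along the epipolar line on plane v; only index 1 is the
# true corresponding 3D point, so ideally only w[1] is positive, the rest 0.
line_responses = [0.05, 0.90, 0.10]
weights = [0.0, 1.0, 0.0]

fused = fuse_response(0.6, line_responses, weights)
```

With the ideal one-hot weights, the fused response is the u-plane response plus the true v-plane response, reinforcing a weakly detected sign that the other view sees clearly.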
Further, taking the R-FCN neural network model as an example, training the neural network model specifically includes the following.
Driving video data containing traffic signs in different scenes are acquired, the data are labeled with the image labeling tool Labelling, and a traffic sign data set based on the intelligent-vehicle platform is established. Once the equipment is in place, actual road data are collected to complete the traffic sign data set. Because the data set contains consecutive labeled traffic sign images, the influence of undersized signs, occlusion, shape distortion and the like in complex road scenes can be largely eliminated, providing a reliable data basis for traffic sign detection and recognition on the intelligent-vehicle platform.
A deep convolutional neural network has strong feature extraction and expression capability, but it needs a large, even massive, amount of data to drive model training; otherwise the model overfits. The traffic sign data set produced in this embodiment contains 6000 pictures and is further expanded: effective data expansion not only enlarges the number and diversity of training samples but also helps avoid overfitting and improves model performance. The R-FCN framework uses horizontal image flipping for data set expansion, which doubles the original data set.
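The horizontal-flip expansion described above can be sketched as follows. `hflip` is an illustrative helper, not R-FCN's actual code; images are nested lists (H x W) and boxes are `(x_min, y_min, x_max, y_max)` in pixel coordinates.

```python
# Horizontal flip for data-set expansion: mirror the image columns and the
# bounding-box x coordinates, keeping the label unchanged. This doubles the
# data set when applied to every training image.

def hflip(image, boxes):
    width = len(image[0])
    flipped_image = [row[::-1] for row in image]
    flipped_boxes = [(width - x_max, y_min, width - x_min, y_max)
                     for (x_min, y_min, x_max, y_max) in boxes]
    return flipped_image, flipped_boxes

img = [[1, 2, 3],
       [4, 5, 6]]
boxes = [(0, 0, 1, 2)]  # a box hugging the left edge
f_img, f_boxes = hflip(img, boxes)
```

Note the box transform uses `width - x_max` for the new `x_min` so the box stays axis-aligned after mirroring.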
Besides adding training data and expanding the data to prevent overfitting, a subset can be randomly split from the training set before model training to serve as a validation set for evaluating the predictive performance of the model during training. Generally, after each round or batch of training, a forward pass of the network is run on the training set and the validation set, the sample labels of both are predicted, and a learning curve is drawn to check the generalization ability of the model.
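The hold-out split described above can be sketched in a few lines. `split_train_val` and the 5000/1000 proportions are illustrative (they mirror the split used later in this embodiment, not a prescribed API).

```python
# Random hold-out split: shuffle once with a fixed seed for reproducibility,
# then carve off n_val samples as the validation set.
import random

def split_train_val(samples, n_val, seed=0):
    rng = random.Random(seed)
    shuffled = samples[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    return shuffled[n_val:], shuffled[:n_val]

samples = list(range(6000))        # the data set in this embodiment: 6000 images
train, val = split_train_val(samples, n_val=1000)
```

Evaluating on `val` after each training round, and plotting both errors, gives the learning curve used to watch for overfitting.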
Even after data expansion the traffic sign training samples remain few, while deep-learning target detection needs a large training set of labeled data; with too few samples, even a very good network structure cannot reach a high detection performance. Fine-tuning solves this problem well: a ResNet model pre-trained on ImageNet is fine-tuned and then applied to the established intelligent-vehicle traffic sign data set. In R-FCN training, the image is resized to a certain size before being fed into the base network for feature extraction, and this size strongly influences the traffic sign recognition results of the trained model.
In this embodiment, training is performed on a GPU server platform and testing on an embedded Jetson TX1 platform. R-FCN combined with a ResNet-50 network is used for model training; from the constructed intelligent-vehicle traffic sign data set, 5000 images are selected for training and 1000 for testing, iterating 500,000 times. Several resize sizes, including 600 × 600, 1000 × 1000 and 1111 × 889 (original-size training), are compared with the other parameter settings identical: the learning rate is set to 0.01 and decayed once every 100,000 iterations by a factor of ten.
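The schedule just described (base rate 0.01, divided by ten every 100,000 iterations over 500,000 total) is a standard step decay and can be written directly:

```python
# Step learning-rate decay as configured in this embodiment:
# lr = 0.01 * 0.1 ** (iteration // 100_000)

def learning_rate(iteration, base_lr=0.01, decay_every=100_000, factor=0.1):
    return base_lr * factor ** (iteration // decay_every)

lr_start = learning_rate(0)
lr_mid = learning_rate(250_000)    # two decays have occurred by this point
```

Integer division by `decay_every` counts how many decay boundaries the iteration has passed, so the rate is constant within each 100,000-iteration window.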
On top of the originally trained neural network model, a multi-view fusion feature layer is added and the multi-view traffic sign data are input. Feature fusion is treated as a neural-network layer with end-to-end weight learning, allowing the network to learn the weights freely from the training data.
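Learning the fusion weights from data can be illustrated with a toy gradient-descent update. The scalar responses, squared-error loss, and function name below are assumptions for illustration; the patent learns these weights inside the full detection network by back-propagation.

```python
# Toy end-to-end weight learning for the fusion layer: the weights w_{j,i}
# are free parameters updated by gradient descent so that the fused response
# matches a target (ground-truth) response.

def train_fusion_weights(x_u, line_responses, target, lr=0.1, steps=500):
    w = [0.0] * len(line_responses)          # initial fusion weights
    for _ in range(steps):
        fused = x_u + sum(wi * xi for wi, xi in zip(w, line_responses))
        grad = 2.0 * (fused - target)        # d(squared error)/d(fused)
        # chain rule: d(fused)/d(w_j) = x_v[j]
        w = [wi - lr * grad * xi for wi, xi in zip(w, line_responses)]
    return w

# Only index 1 on the epipolar line has a response, so only w[1] can move;
# the optimum adds that true point's feature to the u-plane response.
w = train_fusion_weights(x_u=0.6, line_responses=[0.0, 0.9, 0.0], target=1.5)
```

The learned weights end up positive only for the entry corresponding to the same 3D point, matching the ideal one-hot property stated earlier for $w_{j,i}$.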
Example two:
the embodiment discloses a multi-view fusion traffic sign detection and recognition system, which is connected with an auxiliary driving system and is characterized by comprising: the system comprises an image acquisition subsystem, a feature extraction subsystem and a mark identification subsystem;
the image acquisition subsystem is used for acquiring the traffic sign images with different visual angles in real time;
the characteristic extraction subsystem is used for training the neural network model, inputting the traffic sign images at different visual angles into the trained neural network model, and obtaining characteristic graphs at different visual angles;
and the mark identification subsystem is used for carrying out feature fusion on the extracted feature maps with different visual angles, carrying out traffic mark detection and information identification after obtaining a multi-view fusion result, and sending the identification result to the assistant driving system.
Preferably, the feature extraction subsystem comprises a model training module and a feature extraction module;
the model training module is used for acquiring training data and training the neural network model to obtain a trained neural network model;
and the characteristic extraction module is used for inputting the traffic sign images at different visual angles into the trained neural network model to obtain characteristic diagrams at different visual angles.
Preferably, the model training module comprises image feature extraction training and multi-view fusion training;
image feature extraction training, which is used for training the process of feature extraction;
and multi-view fusion training, which is used for training the multi-view fusion process.
Preferably, the mark identification subsystem comprises a corresponding relation establishing module, a feature fusion module, a mark detection module and an information identification module;
the corresponding relation establishing module is used for establishing corresponding relations among the feature graphs of different visual angles;
the characteristic fusion module is used for fusing characteristic graphs of different visual angles to obtain a multi-view fusion result;
the sign detection module is used for detecting the traffic sign according to the multi-view fusion result to obtain a detection result;
and the information identification module is used for identifying the information represented by the traffic sign according to the detection result and further sending the identification result to the driving assistance system.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. A multi-view fusion traffic sign detection and identification method, characterized in that the identification result is sent to a driving assistance system, the method comprising the following steps:
S1, collecting traffic sign images from different viewing angles in real time;
S2, inputting the traffic sign images of the different viewing angles into a trained neural network model to obtain feature maps for the different viewing angles;
and S3, establishing correspondences among the feature maps of the different viewing angles, performing feature fusion, performing traffic sign detection and information identification on the multi-view fusion result, and sending the identification result to the driving assistance system.
2. The multi-view fusion traffic sign detection and identification method according to claim 1, characterized in that the correspondence among the feature maps of different viewing angles in S3 is established as follows:

$\hat{x}_i^u = x_i^u + \sum_j w_{j,i}\, x_j^v$

where $x_i^u$ is any feature point on the viewing-angle plane $u$, $x_j^v$ is any feature point on the viewing-angle plane $v$, and $w_{j,i}$ is a scalar to be determined; for a particular point $i$, only one $w_{j,i}$ is positive and the rest are 0, and $w_{j,i}$ is positive only when $x_i^u$ on the $u$ plane and $x_j^v$ on the $v$ plane correspond to the same 3D point.
3. The multi-view fusion traffic sign detection and identification method according to claim 1, characterized in that the training of the neural network model includes feature-extraction training and multi-view fusion training based on training sample images.
4. The multi-view fusion traffic sign detection and identification method according to claim 3, characterized in that the multi-view fusion training of the neural network includes the following steps:
(1) inputting training sample pictures of different viewing angles into the neural network to obtain feature maps for the different viewing angles, and fusing the feature maps;
(2) comparing the feature-map ground truth with the feature maps of the different viewing angles and with the fused feature map to obtain comparison results, and adjusting the neural network according to the comparison results.
5. A multi-view fusion traffic sign detection and recognition system connected with a driving assistance system, characterized by comprising: an image acquisition subsystem, a feature extraction subsystem and a sign recognition subsystem;
the image acquisition subsystem is used for collecting traffic sign images from different viewing angles in real time;
the feature extraction subsystem is used for training the neural network model and inputting the traffic sign images of the different viewing angles into the trained neural network model to obtain feature maps for the different viewing angles;
and the sign recognition subsystem is used for fusing the extracted feature maps of the different viewing angles, performing traffic sign detection and information identification on the multi-view fusion result, and sending the identification result to the driving assistance system.
6. The multi-view fusion traffic sign detection and recognition system according to claim 5, characterized in that the feature extraction subsystem comprises a model training module and a feature extraction module;
the model training module is used for acquiring training data and training the neural network model to obtain a trained neural network model;
and the feature extraction module is used for inputting the traffic sign images of the different viewing angles into the trained neural network model to obtain feature maps for the different viewing angles.
7. The multi-view fusion traffic sign detection and recognition system according to claim 6, characterized in that the model training module performs image feature-extraction training and multi-view fusion training;
the image feature-extraction training trains the feature-extraction process;
the multi-view fusion training trains the multi-view fusion process.
8. The multi-view fusion traffic sign detection and recognition system according to claim 5, characterized in that the sign recognition subsystem comprises a correspondence establishing module, a feature fusion module, a sign detection module and an information identification module;
the correspondence establishing module is used for establishing correspondences among the feature maps of the different viewing angles;
the feature fusion module is used for fusing the feature maps of the different viewing angles to obtain a multi-view fusion result;
the sign detection module is used for detecting the traffic sign according to the multi-view fusion result to obtain a detection result;
and the information identification module is used for identifying the information represented by the traffic sign according to the detection result and sending the identification result to the driving assistance system.
CN201911193295.8A 2019-11-28 2019-11-28 Multi-view fusion traffic sign detection and identification method and system thereof Active CN110889378B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911193295.8A CN110889378B (en) 2019-11-28 2019-11-28 Multi-view fusion traffic sign detection and identification method and system thereof

Publications (2)

Publication Number Publication Date
CN110889378A 2020-03-17
CN110889378B 2023-06-09

Family

ID=69749287

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911193295.8A Active CN110889378B (en) 2019-11-28 2019-11-28 Multi-view fusion traffic sign detection and identification method and system thereof

Country Status (1)

Country Link
CN (1) CN110889378B (en)

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060133699A1 (en) * 2004-10-07 2006-06-22 Bernard Widrow Cognitive memory and auto-associative neural network based search engine for computer and network located images and photographs
CN104063877A (en) * 2014-07-16 2014-09-24 中电海康集团有限公司 Hybrid judgment identification method for candidate lane lines
CN104463933A (en) * 2014-11-05 2015-03-25 南京师范大学 Three-view-based automatic 2.5-dimensional cartoon animation generation method
CN104535070A (en) * 2014-12-26 2015-04-22 上海交通大学 High-precision map data structure, and high-precision map data acquiring and processing system and method
US20150206018A1 (en) * 2014-01-17 2015-07-23 Young Ha CHO System and method for recognizing speed limit sign using front camera
WO2016155371A1 (en) * 2015-03-31 2016-10-06 百度在线网络技术(北京)有限公司 Method and device for recognizing traffic signs
CN106980855A (en) * 2017-04-01 2017-07-25 公安部交通管理科学研究所 Traffic sign rapid recognition and positioning system and method
CN108154102A (en) * 2017-12-21 2018-06-12 安徽师范大学 Traffic sign recognition method
CN109374008A (en) * 2018-11-21 2019-02-22 深动科技(北京)有限公司 Image acquisition system and method based on a trinocular camera
CN109405824A (en) * 2018-09-05 2019-03-01 武汉契友科技股份有限公司 Multi-source perception positioning system for intelligent connected vehicles
CN109657584A (en) * 2018-12-10 2019-04-19 长安大学 Improved LeNet-5 fusion network traffic sign recognition method for assisted driving
US20190154439A1 (en) * 2016-03-04 2019-05-23 May Patents Ltd. A Method and Apparatus for Cooperative Usage of Multiple Distance Meters
CN110059691A (en) * 2019-03-29 2019-07-26 南京邮电大学 Geometric correction method for multi-view distorted document images based on a mobile terminal
CN110070139A (en) * 2019-04-28 2019-07-30 吉林大学 Small-sample in-the-loop learning system and method for autonomous driving environment perception
CN110119768A (en) * 2019-04-24 2019-08-13 苏州感测通信息科技有限公司 Visual information fusion system and method for vehicle positioning
CN110210362A (en) * 2019-05-27 2019-09-06 中国科学技术大学 Traffic sign detection method based on convolutional neural networks

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIPU ZHOU et al.: "LIDAR and Vision-Based Real-Time Traffic Sign Detection and Recognition Algorithm for Intelligent Vehicle", 2014 IEEE 17th International Conference on Intelligent Transportation Systems (ITSC) *
YU Yingying: "Detection and Recognition of Road Traffic Signs", China Masters' Theses Full-text Database, Information Science and Technology *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111832388A (en) * 2020-05-22 2020-10-27 南京邮电大学 Method and system for detecting and identifying traffic sign in vehicle running
CN111832388B (en) * 2020-05-22 2022-07-26 南京邮电大学 Method and system for detecting and identifying traffic sign in vehicle running

Also Published As

Publication number Publication date
CN110889378B (en) 2023-06-09

Similar Documents

Publication Publication Date Title
CN103208008B (en) Based on the quick adaptive method of traffic video monitoring target detection of machine vision
CN110619279B (en) Road traffic sign instance segmentation method based on tracking
EP3822852B1 (en) Method, apparatus, computer storage medium and program for training a trajectory planning model
CN112307921A (en) Vehicle-mounted end multi-target identification tracking prediction method
CN107992819B (en) Method and device for determining vehicle attribute structural features
CN105844624A (en) Dynamic calibration system, and combined optimization method and combined optimization device in dynamic calibration system
KR20210078530A (en) Lane property detection method, device, electronic device and readable storage medium
CN112307978B (en) Target detection method and device, electronic equipment and readable storage medium
KR20210080459A (en) Lane detection method, apparatus, electronic device and readable storage medium
CN111091023A (en) Vehicle detection method and device and electronic equipment
CN111931683B (en) Image recognition method, device and computer readable storage medium
CN104574993A (en) Road monitoring method and device
CN111079675A (en) Driving behavior analysis method based on target detection and target tracking
CN105809718A (en) Object tracking method with minimum trajectory entropy
CN112613434A (en) Road target detection method, device and storage medium
CN111046723B (en) Lane line detection method based on deep learning
CN114596548A (en) Target detection method, target detection device, computer equipment and computer-readable storage medium
CN112654998B (en) Lane line detection method and device
CN110889378A (en) Multi-view fusion traffic sign detection and identification method and system
CN111144361A (en) Road lane detection method based on binaryzation CGAN network
CN116343513A (en) Rural highway beyond-sight-distance risk point safety monitoring and early warning method and system thereof
CN112347962A (en) System and method for detecting convolutional neural network target based on receptive field
CN112712061B (en) Method, system and storage medium for recognizing multidirectional traffic police command gestures
CN112215042A (en) Parking space limiter identification method and system and computer equipment
CN112597917B (en) Vehicle parking detection method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant