CN110889378A - Multi-view fusion traffic sign detection and identification method and system - Google Patents
Multi-view fusion traffic sign detection and identification method and system
- Publication number
- CN110889378A (application number CN201911193295.8A)
- Authority
- CN
- China
- Prior art keywords
- training
- traffic sign
- fusion
- different visual
- visual angles
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/582—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of traffic signs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention discloses a multi-view fusion traffic sign detection and identification method and system for the field of unmanned driving. The method comprises the following steps: S1, collecting traffic sign images from different visual angles in real time; S2, inputting the traffic sign images from the different visual angles into a trained neural network model to obtain feature maps for each visual angle; and S3, establishing correspondences among the feature maps of the different visual angles, performing feature fusion, performing traffic sign detection and information identification on the multi-view fusion result, and sending the identification result to the driving assistance system. The method detects and identifies traffic signs with a neural network model combined with a multi-view data fusion algorithm, so the model can process traffic sign data from multiple angles. The resulting model is better suited to complex real driving environments, is robust to environmental occlusion and clutter interference, and achieves higher detection precision through the fusion of multi-view data.
Description
Technical Field
The invention relates to the technical field of unmanned intelligent vehicles, in particular to a multi-view fusion traffic sign detection and identification method and a multi-view fusion traffic sign detection and identification system.
Background
With the rise of unmanned intelligent vehicles, ensuring road safety during unmanned driving has become a problem in urgent need of a solution. Roadside traffic signs provide real, accurate, real-time traffic information that can assist in controlling the vehicle's speed, advancing, steering and other operations. Accurately detecting and identifying traffic signs to assist in controlling the intelligent vehicle has therefore become a key technology in the field of unmanned driving.
At present, target detection algorithms based on deep learning are the main approach to traffic sign identification and detection. A traffic sign recognition system typically obtains natural scene images through cameras, sensors and other equipment installed on the intelligent vehicle, detects and interprets the signs in the scene in real time through image processing, pattern recognition and related technologies, and finally feeds back prohibition, warning, indication and other recognition information in time so that the vehicle can be controlled effectively.
However, the prior art still processes a single image to obtain feedback during traffic sign recognition. Clutter and occlusion make the information in single-view data incomplete, so processing a single image in an unmanned vehicle clearly cannot meet safety requirements.
Therefore, providing a multi-view fusion traffic sign detection and identification method and system is a problem in urgent need of a solution.
Disclosure of Invention
In view of the above, the invention provides a multi-view fusion traffic sign detection and identification method and system. Multiple cameras installed on an intelligent vehicle acquire multi-view data of a traffic sign, providing richer traffic sign information. By fusing the multi-view data and combining it with a network model of simple structure and high detection precision, the method solves the problem of real-time identification and detection of traffic signs by intelligent vehicles in actual driving environments with occlusion interference.
In order to achieve the purpose, the invention adopts the following technical scheme:
A multi-view fusion traffic sign detection and identification method, in which the identification result is sent to a driving assistance system, comprises the following steps:
s1, collecting traffic sign images at different visual angles in real time;
s2, inputting the traffic sign images with different visual angles into a trained neural network model to obtain characteristic maps with different visual angles;
and S3, establishing corresponding relations among the feature graphs at different visual angles, performing feature fusion, performing traffic sign detection and information identification after obtaining a multi-view fusion result, and sending the identification result to the driving assisting system.
Preferably, the correspondence among the feature maps of different viewing angles in S3 is established as follows:

$$\hat{x}_i^u = x_i^u + \sum_{j} w_{j,i}\, x_j^v$$

where $x_i^u$ is any feature point on the viewing-angle plane u, $x_j^v$ is any feature point on the viewing-angle plane v, and $w_{j,i}$ is a scalar to be determined;

for a particular point i, only one $w_{j,i}$ is positive and the rest are 0; and $w_{j,i}$ is positive only when $x_i^u$ on the u plane and $x_j^v$ on the v plane correspond to the same 3D point.
Preferably, the training of the neural network model comprises feature extraction training and multi-view fusion training based on the training sample images.
Preferably, the multi-view fusion training of the neural network comprises the following steps:
(1) inputting training sample pictures from different visual angles into the neural network to obtain feature maps for each visual angle, and fusing the feature maps;
(2) comparing the ground-truth feature map with the feature maps of the different visual angles and with the fused feature map respectively, and adjusting the neural network according to the comparison results.
A multi-view fusion traffic sign detection and recognition system connected with a driving assistance system, comprising: the system comprises an image acquisition subsystem, a feature extraction subsystem and a mark identification subsystem;
the image acquisition subsystem is used for acquiring traffic sign images with different visual angles in real time;
the characteristic extraction subsystem is used for training the neural network model, inputting the traffic sign images at different visual angles into the trained neural network model, and obtaining characteristic diagrams at different visual angles;
and the mark identification subsystem is used for carrying out feature fusion on the extracted feature maps with different visual angles, carrying out traffic mark detection and information identification after obtaining a multi-view fusion result, and sending the identification result to the auxiliary driving system.
Preferably, the feature extraction subsystem comprises a model training module and a feature extraction module;
the model training module is used for acquiring training data and training the neural network model to obtain a trained neural network model;
and the characteristic extraction module is used for inputting the traffic sign images at different visual angles into the trained neural network model to obtain characteristic diagrams at different visual angles.
Preferably, the model training module comprises image feature extraction training and multi-view fusion training;
the image feature extraction training is used for training the process of feature extraction;
the multi-view fusion training is used for training a multi-view fusion process.
Preferably, the mark identification subsystem comprises a corresponding relation establishing module, a feature fusion module, a mark detection module and an information identification module;
the corresponding relation establishing module is used for establishing corresponding relations among the feature graphs of different visual angles;
the characteristic fusion module is used for fusing characteristic graphs of different visual angles to obtain a multi-view fusion result;
the sign detection module is used for detecting the traffic sign according to the multi-view fusion result to obtain a detection result;
and the information identification module is used for identifying the information represented by the traffic sign according to the detection result and further sending the identification result to an auxiliary driving system.
According to the technical scheme, compared with the prior art, the invention provides a multi-view fusion traffic sign detection and identification method and system in which multi-view data are fused through the feature maps obtained from the neural network model. The neural network model can thus process traffic sign data from multiple angles and obtain more accurate target detection and identification results; the resulting model is better suited to complex real driving environments, is robust to environmental occlusion and clutter interference, and achieves higher detection precision through the fusion of multi-view data.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
FIG. 1 is a flow chart of a multi-view fusion traffic sign detection and identification method provided by the invention;
FIG. 2 is a flow chart of a neural network two-view fusion training in the multi-view fusion traffic sign detection and recognition method provided by the invention;
FIG. 3 is a schematic diagram of a P-point spatial imaging structure according to a first embodiment of the present invention;
fig. 4 is a schematic diagram illustrating feature point fusion according to a first embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The first embodiment is as follows:
the embodiment of the invention discloses a multi-view fusion traffic sign detection and identification method, wherein an identified result is sent to an assistant driving system, and as shown in figure 1, the method comprises the following steps:
s1, collecting traffic sign images at different visual angles in real time;
s2, inputting the traffic sign images with different visual angles into a trained neural network model to obtain characteristic maps with different visual angles;
and S3, establishing corresponding relations among the feature graphs at different visual angles, performing feature fusion, performing traffic sign detection and information identification after obtaining a multi-view fusion result, and sending the identification result to an auxiliary driving system.
Preferably, the correspondence among the feature maps of different viewing angles in S3 is established as follows:

$$\hat{x}_i^u = x_i^u + \sum_{j} w_{j,i}\, x_j^v$$

where $x_i^u$ is any feature point on the viewing-angle plane u, $x_j^v$ is any feature point on the viewing-angle plane v, and $w_{j,i}$ is a scalar to be determined;

for a particular point i, only one $w_{j,i}$ is positive and the rest are 0; and $w_{j,i}$ is positive only when $x_i^u$ on the u plane and $x_j^v$ on the v plane correspond to the same 3D point.
Preferably, the training of the neural network model includes training of feature extraction and multi-view fusion training based on the training sample images.
Preferably, as shown in fig. 2, the multi-view fusion training of the neural network comprises the following steps:
(1) inputting training sample pictures from different visual angles into the neural network to obtain feature maps for each visual angle, and fusing the feature maps;
(2) comparing the ground-truth feature map with the feature maps of the different visual angles and with the fused feature map respectively, and adjusting the neural network according to the comparison results.
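The two training steps above can be sketched numerically as follows. This is a minimal illustrative sketch, not the patent's implementation: the array shapes and the use of a mean-squared-error comparison (the `mse` helper) are assumptions; the patent only states that the truth map is compared with the per-view and fused feature maps and the network adjusted accordingly.

```python
import numpy as np

def mse(a, b):
    # Mean-squared-error comparison between a feature map and the truth map.
    return float(np.mean((a - b) ** 2))

def fusion_training_loss(feat_u, feat_v, fused, truth):
    """Compare the ground-truth feature map with each view's feature map
    and with the fused map, and sum the three comparison results into one
    scalar that would drive the network adjustment."""
    return mse(feat_u, truth) + mse(feat_v, truth) + mse(fused, truth)

truth = np.zeros((4, 4)); truth[1, 2] = 1.0   # toy ground-truth heatmap
feat_u = 0.9 * truth                          # view-u prediction
feat_v = 0.8 * truth                          # view-v prediction
fused = (feat_u + feat_v) / 2                 # naive fused map
loss = fusion_training_loss(feat_u, feat_v, fused, truth)
```

A perfect network (all three maps equal to the truth) would give a zero loss, so minimizing this quantity pushes both the per-view feature maps and the fusion toward the ground truth.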
It needs to be further explained that:

As shown in FIG. 3, let three-dimensional space contain three points A, B and P, where the projections of P on the two different viewing planes u and v are $P^u$ and $P^v$ respectively. The projections of points A and B on the viewing plane u coincide with $P^u$, while their projections on the viewing plane v lie on the same straight line l as $P^v$.

The heat maps of the viewing planes u and v contain the feature points $x_i^u$ and $x_j^v$ respectively. The core of feature fusion between different views is to establish the correspondence between the views:

$$\hat{x}_i^u = x_i^u + \sum_{j} w_{j,i}\, x_j^v$$

where $w_{j,i}$ is a scalar to be determined. Ideally, for a particular point i there is only one positive $w_{j,i}$ and the rest are 0, and $w_{j,i}$ is positive only when $x_i^u$ on the u plane and $x_j^v$ on the v plane correspond to the same 3D point.

If only the position of $P^u$ on the u plane is known, the geometric relations merely constrain the point P to lie on a straight line through $P^u$; the projection of that straight line onto the plane v is the straight line l, so the projection $P^v$ of P on the plane v must lie on l. In the feature map, the position of the true point will have a large response value while the other points on the line l will have almost no response; therefore $x_i^u$ is fused with all the features on the line l, as shown in FIG. 4. Moreover, the fusion is implemented as a layer of the neural network, so that the network automatically learns the weights from the training data and obtains more accurate target detection information by fusing the multi-view information.
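The fusion just described can be sketched as a weighted sum over flattened feature maps. This is a hedged toy sketch under assumed shapes and names (`fuse_views`, the vector sizes, and the hand-set weight matrix are all illustrative, not from the patent): each position i on the u-plane map is augmented with a weighted sum of all positions j on the v-plane map, where a learned weight matrix w is ideally non-zero only for the corresponding point on the epipolar line l.

```python
import numpy as np

def fuse_views(x_u, x_v, w):
    """Fuse flattened feature maps from two views.

    x_u : (N,) feature points on viewing plane u
    x_v : (M,) feature points on viewing plane v
    w   : (M, N) scalar weights w[j, i] relating point j on v to point i on u
    Returns the fused u-plane features: x_u[i] + sum_j w[j, i] * x_v[j].
    """
    return x_u + w.T @ x_v

# Toy example: 4 feature points per view; point i=1 on u corresponds to
# point j=2 on v, so only w[2, 1] is positive and all other weights are 0.
x_u = np.array([0.0, 0.5, 0.0, 0.0])
x_v = np.array([0.0, 0.0, 0.9, 0.0])
w = np.zeros((4, 4)); w[2, 1] = 1.0
fused = fuse_views(x_u, x_v, w)   # only position 1 receives the v-view feature
```

In the patent's scheme the matrix w is not hand-set as here but is learned end-to-end as a layer of the network, since the true correspondences along the epipolar line are unknown in advance.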
Furthermore, taking the R-FCN neural network model as an example, training the neural network model specifically includes the following:
the method comprises the steps of obtaining driving video data containing traffic signs in different scenes, labeling the data by using an image labeling tool Labelling, and establishing a traffic sign data set based on an intelligent vehicle platform. After the equipment condition is complete, the actual road data is collected to complete the traffic sign data set. Because the data set comprises continuous traffic sign labeling images, the influences of undersize, occlusion, shape distortion and the like of the traffic signs in complex road scenes can be well eliminated, and a reliable data basis is provided for the detection and identification of the traffic signs based on the intelligent vehicle platform.
A deep convolutional neural network has strong feature extraction and expression capability, but it needs a large, even massive, amount of data to drive model training; otherwise the model will overfit. The traffic sign data set produced in this embodiment comprises 6,000 pictures, and the data is further expanded: effective data expansion not only increases the number and diversity of the training samples but also helps avoid overfitting and improves model performance. The R-FCN framework uses horizontal image flipping for data-set expansion, which doubles the original data set.
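The horizontal-flip expansion can be sketched as below; the (height, width) image-array layout and the helper name `expand_with_flips` are assumptions for illustration.

```python
import numpy as np

def expand_with_flips(images):
    """Double a data set by appending the horizontal mirror of each image,
    as the R-FCN framework does for data-set expansion."""
    flipped = [img[:, ::-1] for img in images]  # mirror along the width axis
    return images + flipped

dataset = [np.arange(6).reshape(2, 3) for _ in range(3)]  # 3 toy "images"
expanded = expand_with_flips(dataset)                     # now 6 samples
```

For real detection data the bounding-box annotations would have to be mirrored along with the pixels; that bookkeeping is omitted here.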
Besides adding training data and expanding the data to prevent the network from overfitting, a subset can be randomly split off from the training set before model training to serve as a validation set for evaluating the prediction performance of the model during the training stage. Generally, after each round or batch of training, a forward pass of the network is run on the training set and the validation set respectively, the sample labels of both sets are predicted, and a learning curve is drawn to check the generalization ability of the model.
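Randomly holding out a validation subset before training might look like the sketch below; the 10% split ratio, the fixed seed, and the function name are assumptions, since the patent does not specify them.

```python
import random

def split_train_val(samples, val_fraction=0.1, seed=0):
    """Randomly divide off a subset of the training data as a validation
    set, used to evaluate prediction performance during training."""
    rng = random.Random(seed)           # fixed seed for a reproducible split
    indices = list(range(len(samples)))
    rng.shuffle(indices)
    n_val = int(len(samples) * val_fraction)
    val = [samples[i] for i in indices[:n_val]]
    train = [samples[i] for i in indices[n_val:]]
    return train, val

train, val = split_train_val(list(range(6000)), val_fraction=0.1)
```

Every sample lands in exactly one of the two sets, so validation loss measured on `val` reflects generalization rather than memorization.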
Even after data expansion, the number of traffic sign samples for training is still small, while deep-learning target detection needs a large training set of labeled data; with too few samples, even a very good network structure cannot achieve a high detection effect. Fine-tuning a pre-trained model solves this problem well: a ResNet model trained on ImageNet is fine-tuned and then applied to the established intelligent vehicle traffic sign data set. In the R-FCN training process, the image is resized to a certain size before being input into the base network for feature extraction, and this size has a great influence on the trained model's traffic sign recognition results.
In this embodiment, a GPU server platform is used for training and a Jetson TX1 embedded platform for testing. R-FCN combined with the ResNet-50 network is used for model training; from the constructed intelligent vehicle traffic sign data set, 5,000 images are selected for training and 1,000 for testing. Training runs for 500,000 iterations with resize sizes of 600 × 600, 1000 × 1000 and 1111 × 889 (original-size training); the other parameter settings are the same, with the learning rate set to 0.01 and decayed to one tenth once every 100,000 iterations.
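The learning-rate schedule just described (initial rate 0.01, multiplied by one tenth every 100,000 iterations over the 500,000-iteration run) can be written as a step-decay function; the function name is illustrative.

```python
def learning_rate(iteration, base_lr=0.01, decay_every=100_000, factor=0.1):
    """Step-decay schedule: the rate starts at base_lr and is multiplied
    by `factor` once every `decay_every` iterations."""
    return base_lr * factor ** (iteration // decay_every)
```

So iterations 0-99,999 train at 0.01, iterations 100,000-199,999 at 0.001, and so on down to 1e-6 for the final 100,000 iterations.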
On the basis of the originally trained neural network model, a multi-view fusion feature layer is added and traffic sign multi-view data is input. The feature fusion is treated as a neural network layer with end-to-end weight learning, allowing the network to learn the weights freely from the training data.
Example two:
the embodiment discloses a multi-view fusion traffic sign detection and recognition system, which is connected with an auxiliary driving system and is characterized by comprising: the system comprises an image acquisition subsystem, a feature extraction subsystem and a mark identification subsystem;
the image acquisition subsystem is used for acquiring the traffic sign images with different visual angles in real time;
the characteristic extraction subsystem is used for training the neural network model, inputting the traffic sign images at different visual angles into the trained neural network model, and obtaining characteristic graphs at different visual angles;
and the mark identification subsystem is used for carrying out feature fusion on the extracted feature maps with different visual angles, carrying out traffic mark detection and information identification after obtaining a multi-view fusion result, and sending the identification result to the assistant driving system.
Preferably, the feature extraction subsystem comprises a model training module and a feature extraction module;
the model training module is used for acquiring training data and training the neural network model to obtain a trained neural network model;
and the characteristic extraction module is used for inputting the traffic sign images at different visual angles into the trained neural network model to obtain characteristic diagrams at different visual angles.
Preferably, the model training module comprises image feature extraction training and multi-view fusion training;
image feature extraction training, which is used for training the process of feature extraction;
and multi-view fusion training, which is used for training the multi-view fusion process.
Preferably, the mark identification subsystem comprises a corresponding relation establishing module, a feature fusion module, a mark detection module and an information identification module;
the corresponding relation establishing module is used for establishing corresponding relations among the feature graphs of different visual angles;
the characteristic fusion module is used for fusing characteristic graphs of different visual angles to obtain a multi-view fusion result;
the sign detection module is used for detecting the traffic sign according to the multi-view fusion result to obtain a detection result;
and the information identification module is used for identifying the information represented by the traffic sign according to the detection result and further sending the identification result to the driving assistance system.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (8)
1. A multi-view fusion traffic sign detection and identification method is characterized in that an identified result is sent to an assistant driving system, and the method comprises the following steps:
s1, collecting traffic sign images at different visual angles in real time;
s2, inputting the traffic sign images at different visual angles into a trained neural network model to obtain characteristic maps at different visual angles;
and S3, establishing corresponding relations among the feature maps with different viewing angles, carrying out feature fusion, carrying out traffic sign detection and information identification after obtaining a multi-view fusion result, and sending the identification result to the auxiliary driving system.
2. The multi-view fusion traffic sign detection and identification method according to claim 1, wherein the correspondence among the feature maps of different viewing angles in S3 is established as follows:

$$\hat{x}_i^u = x_i^u + \sum_{j} w_{j,i}\, x_j^v$$

where $x_i^u$ is any feature point on the viewing-angle plane u, $x_j^v$ is any feature point on the viewing-angle plane v, and $w_{j,i}$ is a scalar to be determined.
3. The method for detecting and identifying the multi-view fusion traffic sign according to claim 1, wherein the training of the neural network model comprises training of feature extraction and multi-view fusion training according to training sample images.
4. The method for detecting and identifying the multi-view fusion traffic sign according to claim 3, wherein the step of performing the multi-view fusion training by the neural network comprises the following steps:
(1) inputting training sample pictures of different visual angles into a neural network to obtain characteristic graphs of different visual angles, and fusing the characteristic graphs;
(2) and comparing the characteristic diagram truth value with the characteristic diagrams of different visual angles and the fused characteristic diagram respectively to obtain a comparison result, and further adjusting the neural network according to the comparison result.
5. A multi-view fusion traffic sign detection and recognition system connected with a driving assistance system, comprising: the system comprises an image acquisition subsystem, a feature extraction subsystem and a mark identification subsystem;
the image acquisition subsystem is used for acquiring traffic sign images with different visual angles in real time;
the characteristic extraction subsystem is used for training the neural network model, inputting the traffic sign images at different visual angles into the trained neural network model, and obtaining characteristic diagrams at different visual angles;
and the mark identification subsystem is used for carrying out feature fusion on the extracted feature maps with different visual angles, carrying out traffic mark detection and information identification after obtaining a multi-view fusion result, and sending the identification result to the auxiliary driving system.
6. The system for detecting and recognizing a multi-view fusion traffic sign according to claim 5, wherein the feature extraction subsystem comprises a model training module and a feature extraction module;
the model training module is used for acquiring training data and training the neural network model to obtain a trained neural network model;
and the characteristic extraction module is used for inputting the traffic sign images at different visual angles into the trained neural network model to obtain characteristic diagrams at different visual angles.
7. The system for detecting and recognizing the multi-view fusion traffic sign according to claim 6, wherein the model training module comprises an image feature extraction training and a multi-view fusion training;
the image feature extraction training is used for training the process of feature extraction;
the multi-view fusion training is used for training a multi-view fusion process.
8. The system according to claim 5, wherein the sign recognition subsystem comprises a correspondence establishing module, a feature fusion module, a sign detection module and an information recognition module;
the corresponding relation establishing module is used for establishing corresponding relations among the feature graphs of different visual angles;
the characteristic fusion module is used for fusing characteristic graphs of different visual angles to obtain a multi-view fusion result;
the sign detection module is used for detecting the traffic sign according to the multi-view fusion result to obtain a detection result;
and the information identification module is used for identifying the information represented by the traffic sign according to the detection result and further sending the identification result to an auxiliary driving system.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911193295.8A CN110889378B (en) | 2019-11-28 | 2019-11-28 | Multi-view fusion traffic sign detection and identification method and system thereof |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911193295.8A CN110889378B (en) | 2019-11-28 | 2019-11-28 | Multi-view fusion traffic sign detection and identification method and system thereof |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110889378A true CN110889378A (en) | 2020-03-17 |
CN110889378B CN110889378B (en) | 2023-06-09 |
Family
ID=69749287
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911193295.8A Active CN110889378B (en) | 2019-11-28 | 2019-11-28 | Multi-view fusion traffic sign detection and identification method and system thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110889378B (en) |
Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060133699A1 (en) * | 2004-10-07 | 2006-06-22 | Bernard Widrow | Cognitive memory and auto-associative neural network based search engine for computer and network located images and photographs |
CN104063877A (en) * | 2014-07-16 | 2014-09-24 | 中电海康集团有限公司 | Hybrid judgment identification method for candidate lane lines |
CN104463933A (en) * | 2014-11-05 | 2015-03-25 | 南京师范大学 | Three-view-based automatic 2.5-dimensional cartoon animation generation method |
CN104535070A (en) * | 2014-12-26 | 2015-04-22 | 上海交通大学 | High-precision map data structure, and high-precision map data acquiring and processing system and method |
US20150206018A1 (en) * | 2014-01-17 | 2015-07-23 | Young Ha CHO | System and method for recognizing speed limit sign using front camera |
WO2016155371A1 (en) * | 2015-03-31 | 2016-10-06 | 百度在线网络技术(北京)有限公司 | Method and device for recognizing traffic signs |
CN106980855A (en) * | 2017-04-01 | 2017-07-25 | 公安部交通管理科学研究所 | Rapid traffic sign recognition and positioning system and method |
CN108154102A (en) * | 2017-12-21 | 2018-06-12 | 安徽师范大学 | Traffic sign recognition method |
CN109374008A (en) * | 2018-11-21 | 2019-02-22 | 深动科技(北京)有限公司 | Image acquisition system and method based on a trinocular camera |
CN109405824A (en) * | 2018-09-05 | 2019-03-01 | 武汉契友科技股份有限公司 | Multi-source perception positioning system for intelligent connected vehicles |
CN109657584A (en) * | 2018-12-10 | 2019-04-19 | 长安大学 | Improved LeNet-5 fusion network traffic sign recognition method for assisted driving |
US20190154439A1 (en) * | 2016-03-04 | 2019-05-23 | May Patents Ltd. | A Method and Apparatus for Cooperative Usage of Multiple Distance Meters |
CN110059691A (en) * | 2019-03-29 | 2019-07-26 | 南京邮电大学 | Multi-view geometric correction method for distorted document images based on mobile terminal |
CN110070139A (en) * | 2019-04-28 | 2019-07-30 | 吉林大学 | Small-sample in-the-loop learning system and method for autonomous driving environment perception |
CN110119768A (en) * | 2019-04-24 | 2019-08-13 | 苏州感测通信息科技有限公司 | Visual information fusion system and method for vehicle localization |
CN110210362A (en) * | 2019-05-27 | 2019-09-06 | 中国科学技术大学 | Traffic sign detection method based on convolutional neural networks |
Non-Patent Citations (2)
Title |
---|
LIPU ZHOU et al.: "LIDAR and Vision-Based Real-Time Traffic Sign Detection and Recognition Algorithm for Intelligent Vehicle", 2014 IEEE 17th International Conference on Intelligent Transportation Systems (ITSC) * |
YU YINGYING: "Detection and Recognition of Road Traffic Signs", China Master's Theses Full-text Database (Information Science and Technology) * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111832388A (en) * | 2020-05-22 | 2020-10-27 | 南京邮电大学 | Method and system for detecting and identifying traffic sign in vehicle running |
CN111832388B (en) * | 2020-05-22 | 2022-07-26 | 南京邮电大学 | Method and system for detecting and identifying traffic sign in vehicle running |
Also Published As
Publication number | Publication date |
---|---|
CN110889378B (en) | 2023-06-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103208008B (en) | Fast adaptive method for traffic video surveillance target detection based on machine vision | |
CN110619279B (en) | Road traffic sign instance segmentation method based on tracking | |
EP3822852B1 (en) | Method, apparatus, computer storage medium and program for training a trajectory planning model | |
CN112307921A (en) | Vehicle-mounted end multi-target identification tracking prediction method | |
CN107992819B (en) | Method and device for determining vehicle attribute structural features | |
CN105844624A (en) | Dynamic calibration system, and combined optimization method and combined optimization device in dynamic calibration system | |
KR20210078530A (en) | Lane property detection method, device, electronic device and readable storage medium | |
CN112307978B (en) | Target detection method and device, electronic equipment and readable storage medium | |
KR20210080459A (en) | Lane detection method, apparatus, electronic device and readable storage medium | |
CN111091023A (en) | Vehicle detection method and device and electronic equipment | |
CN111931683B (en) | Image recognition method, device and computer readable storage medium | |
CN104574993A (en) | Road monitoring method and device | |
CN111079675A (en) | Driving behavior analysis method based on target detection and target tracking | |
CN105809718A (en) | Object tracking method with minimum trajectory entropy | |
CN112613434A (en) | Road target detection method, device and storage medium | |
CN111046723B (en) | Lane line detection method based on deep learning | |
CN114596548A (en) | Target detection method, target detection device, computer equipment and computer-readable storage medium | |
CN112654998B (en) | Lane line detection method and device | |
CN110889378A (en) | Multi-view fusion traffic sign detection and identification method and system | |
CN111144361A (en) | Road lane detection method based on binaryzation CGAN network | |
CN116343513A (en) | Safety monitoring and early warning method and system for beyond-sight-distance risk points on rural highways | |
CN112347962A (en) | System and method for detecting convolutional neural network target based on receptive field | |
CN112712061B (en) | Method, system and storage medium for recognizing multidirectional traffic police command gestures | |
CN112215042A (en) | Parking space limiter identification method and system and computer equipment | |
CN112597917B (en) | Vehicle parking detection method based on deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||