CN113052057A - Traffic sign identification method based on improved convolutional neural network - Google Patents

Traffic sign identification method based on improved convolutional neural network

Info

Publication number
CN113052057A
CN113052057A (application CN202110297712.4A)
Authority
CN
China
Prior art keywords
traffic sign
neural network
convolutional neural
training set
stn
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110297712.4A
Other languages
Chinese (zh)
Inventor
魏中华
李霞
张然
褚思南
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN202110297712.4A priority Critical patent/CN113052057A/en
Publication of CN113052057A publication Critical patent/CN113052057A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/582Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of traffic signs

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a traffic sign identification method based on an improved convolutional neural network, which comprises the following steps: acquiring a traffic sign data set, dividing it into an original training set and an original test set, and processing each of them to obtain a training set and a test set; constructing a convolutional neural network and introducing a spatial transformation network into it to obtain an STN-CNN model, then inputting the training set into the STN-CNN model for training to obtain a trained STN-CNN model; inputting the test set into the trained STN-CNN model for recognition and prediction to obtain an optimal model; and inputting the traffic sign data to be tested into the optimal model to obtain a prediction result. The method has few model parameters and strong robustness, requires little time for model training and inference, and can meet the real-time requirements of practical applications.

Description

Traffic sign identification method based on improved convolutional neural network
Technical Field
The invention relates to the field of image recognition, in particular to a traffic sign recognition method based on an improved convolutional neural network.
Background
With the development of intelligent transportation systems, Advanced Driver Assistance System (ADAS) technology has been proposed and is gradually being adopted in intelligent vehicle systems. A Traffic Sign Recognition (TSR) system is an important component of ADAS. The TSR system passes images of traffic signs on the road to an image processing module for sign detection and identification, and guides the driver or an autonomous vehicle to take reasonable measures according to the recognition result, thereby reducing driving workload, relieving urban traffic pressure, and improving road traffic safety. Because of factors such as multi-angle shooting, motion blur, occlusion, and varying illumination under natural conditions, developing a traffic sign recognition system that is both highly accurate and real-time remains a fundamental problem to be solved.
Existing traffic sign recognition methods can generally be divided into three categories: color-based image recognition methods, shape-based image recognition methods, and methods that use convolutional neural networks for feature extraction. Color- and shape-based methods rely heavily on the salient features of the traffic sign itself; if those features are affected by objective factors such as occlusion, weather, or lighting conditions, the algorithm cannot capture them accurately and the required recognition performance cannot be achieved. With the development of artificial intelligence, methods based on convolutional neural networks can adapt to the influence of different kinds of interference and improve accuracy. However, when accuracy is pursued by excessively increasing the depth and complexity of the neural network, the time required for model training and inference increases significantly, efficiency drops sharply, and the required computing configuration and cost become extremely high. Existing traffic sign identification methods therefore suffer from model defects, and their complex network structures, large numbers of parameters, and limited network performance cannot meet the requirement of real-time operation.
Disclosure of Invention
In order to overcome the problems in the prior art, the invention provides a traffic sign identification method based on an improved convolutional neural network. The invention improves the accuracy and real-time performance of the model by preprocessing the original data and using an improved convolutional neural network, and comprises the following specific steps:
Step 1, acquiring a traffic sign data set, dividing it into an original training set and an original test set, respectively preprocessing the original training set and the original test set to obtain a preprocessed training set and a preprocessed test set, and then performing data expansion on the preprocessed training set to obtain the training set;
Step 2, constructing a convolutional neural network and inserting spatial transformation networks into it to obtain an STN-CNN model, then inputting the training set into the STN-CNN model for training to obtain a trained STN-CNN model;
Step 3, inputting the preprocessed test set into the trained STN-CNN model for recognition and prediction to obtain an optimal STN-CNN model;
and Step 4, acquiring a traffic sign image to be detected and inputting it into the optimal STN-CNN model to obtain a prediction result.
Preferably, in step 1, preprocessing the original test set specifically comprises:
scaling the image data in the original test set to the same size through up-sampling or down-sampling to obtain the preprocessed test set.
Preferably, in step 1, preprocessing the original training set specifically comprises:
up-sampling or down-sampling the image data in the original training set to the same size as the test set to obtain a sampled training set;
applying local histogram equalization to the sampled image data;
converting the equalized image data to gray scale;
and performing image enhancement on the gray-scale image data to obtain the preprocessed training set.
Preferably, in step 1, the specific steps of data expansion include:
adding new traffic sign images so that the numbers of images of the various traffic sign classes are balanced, and preprocessing the newly added traffic sign image data according to the preprocessing method of the original training set to obtain a balanced training set;
and combining the balanced training set with the preprocessed training set to obtain the training set.
Preferably, the specific steps of image enhancement include image flipping, image rotation, projection, noise addition, and image blurring.
Preferably, in step 2, the convolutional neural network includes:
an input layer, three convolution modules, and three fully connected layers;
the three convolution modules are connected in sequence; the three fully connected layers are connected in sequence;
the input layer, the convolution modules, and the fully connected layers are connected in sequence;
each convolution module comprises a convolutional layer, a ReLU activation function, and a max pooling layer;
and the convolutional layer, the ReLU activation function, and the max pooling layer are connected in sequence.
Preferably, in step 2, inserting the spatial transformation network into the convolutional neural network specifically comprises:
inserting a spatial transformation network at the front end of each convolution module to obtain the STN-CNN model.
Preferably, in step 2, inputting the training set into the STN-CNN model for training specifically includes:
step 2.1, inputting the training set into the STN-CNN model for forward propagation to obtain an output result;
step 2.2, acquiring the actual result from the traffic sign data set, and updating the weights of the STN-CNN model by back propagation according to the error between the output result and the actual result, thereby completing one iteration of forward and back propagation;
and step 2.3, repeating the iteration process until the set iteration stop condition is reached, then stopping to obtain the trained STN-CNN model.
Preferably, in step 2.2, the back propagation adopts a stochastic gradient descent algorithm.
Preferably, the set iteration stop condition includes reaching the set number of iterations or the weights of the STN-CNN model becoming stable and no longer changing.
The invention has the following beneficial effects. A spatial transformation network is added to the convolutional neural network model, and the resulting STN-CNN model has a high recognition rate, a high recognition speed, and strong generalization ability. The added spatial transformation networks act as independent modules that transform the input images, so that the network model can maintain the spatial invariance of the input data in a computationally efficient way, which further improves the computing capability and speed of the network model. The traffic sign recognition method based on the improved convolutional neural network achieves a recognition accuracy of 99.36%, has few model parameters and strong robustness, requires little time for model training and inference, and can meet the real-time requirements of practical applications.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below. It is apparent that the drawings in the following description show only some embodiments of the present invention, and that other drawings can be obtained from them by those skilled in the art without inventive effort.
FIG. 1 is a schematic flow chart provided by an embodiment of the present invention;
FIG. 2 is a schematic diagram of an STN-CNN model according to an embodiment of the present invention;
FIG. 3 is a preprocessed image according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating a pre-and post-comparison of an input image through a spatial transformation network according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In order to overcome the problems of the prior art, namely the defects of the existing models, their complex network structures and parameters, and network performance that cannot meet the real-time requirement, the invention provides a traffic sign identification method based on an improved convolutional neural network. As shown in FIG. 1, the method comprises the following specific steps:
Step 1, acquiring a traffic sign data set and dividing it at a ratio of 1:9 to obtain an original training set and an original test set; respectively preprocessing the original training set and the original test set to obtain a preprocessed training set and a preprocessed test set; and performing data expansion on the preprocessed training set to obtain the training set.
in the step 1, the specific steps of the preprocessing of the original test set comprise,
scaling the image data in the original test set to the same size through up-sampling or down-sampling; and obtaining a test set after pretreatment.
The size is 32 x 32 pixel size,
in the step 1, the specific steps of the preprocessing of the original training set include,
the image data in the original training set is up-sampled or down-sampled to the same size as the test set to obtain a sampled training set;
processing the image data of the sampled training set by adopting local histogram equalization, and solving the problem that the contrast and brightness of the image have obvious difference;
carrying out gray level processing on the image data after the equalization processing in the step, and processing the color image into a gray image;
and carrying out image enhancement on the image data after the gray processing to improve the robustness of the model, wherein the data enhancement comprises turning, rotating, projecting, adding noise, blurring the image and the like, and obtaining a training set after preprocessing after the data enhancement.
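The following is a minimal sketch of this preprocessing pipeline in Python with OpenCV, assuming 8-bit BGR input images; the CLAHE clip limit and tile size, and the final normalization to [0, 1], are illustrative assumptions rather than values specified in this embodiment.

```python
import cv2
import numpy as np

def preprocess_image(img_bgr, size=(32, 32)):
    """Resize, locally equalize, and grayscale one traffic-sign image.

    Sketch of the preprocessing described above; the CLAHE clip limit,
    tile grid, and [0, 1] normalization are illustrative assumptions.
    """
    # Up- or down-sample every image to the common 32x32 resolution.
    img = cv2.resize(img_bgr, size, interpolation=cv2.INTER_AREA)

    # Local histogram equalization (CLAHE) on the luminance channel to
    # reduce the contrast and brightness differences between images.
    l, a, b = cv2.split(cv2.cvtColor(img, cv2.COLOR_BGR2LAB))
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(4, 4))
    img = cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)

    # Gray-level processing: collapse the color image to a single channel.
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Scale pixel values to [0, 1] for network input (assumed convention).
    return gray.astype(np.float32) / 255.0
```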
In step 1, the specific steps of data expansion include:
adding new traffic sign images so that the numbers of images of the various traffic sign classes are balanced, and preprocessing the newly added traffic sign image data according to the preprocessing method of the original training set to obtain a balanced training set;
and combining the balanced training set with the preprocessed training set to obtain the training set.
This data processing addresses the problems of vanishing gradients, exploding gradients, overfitting, underfitting, and data set imbalance; an augmentation-based balancing sketch follows.
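One way to realize the balancing step is to oversample under-represented classes with augmented copies drawn from the enhancement operations listed above (rotation, noise addition, blurring). The sketch below is an assumption about how this could be implemented; the rotation range, noise level, and target count per class are illustrative and not taken from the patent.

```python
import random
import cv2
import numpy as np

def augment(img):
    """Apply one random enhancement (rotation, added noise, or blur)."""
    choice = random.choice(["rotate", "noise", "blur"])
    if choice == "rotate":
        h, w = img.shape[:2]
        angle = random.uniform(-15.0, 15.0)          # assumed range
        m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        return cv2.warpAffine(img, m, (w, h), borderMode=cv2.BORDER_REPLICATE)
    if choice == "noise":
        noisy = img + np.random.normal(0.0, 0.03, img.shape).astype(np.float32)
        return np.clip(noisy, 0.0, 1.0)
    return cv2.GaussianBlur(img, (3, 3), 0)

def balance_classes(images_by_class, target_per_class=2000):
    """Oversample under-represented sign classes with augmented copies."""
    balanced = {}
    for cls, imgs in images_by_class.items():
        deficit = max(0, target_per_class - len(imgs))
        extra = [augment(random.choice(imgs)) for _ in range(deficit)]
        balanced[cls] = imgs + extra
    return balanced
```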
Step 2, constructing a convolutional neural network and inserting spatial transformation networks into it to obtain an STN-CNN model, then inputting the training set into the STN-CNN model for training to obtain the trained STN-CNN model.
The step 2 specifically comprises the following steps:
and 2.1, inputting the training set into the 9-layer STN-CNN model for forward propagation, and obtaining an output result of the model through an output layer after passing through 3 spatial transformation networks, 3 convolutional layers, 3 pooling layers and 3 full-connection layers as shown in FIG. 2.
The constructed convolutional neural network comprises an input layer, three convolution modules, and three fully connected layers. The three convolution modules are connected in sequence, the three fully connected layers are connected in sequence, and the input layer, the convolution modules, and the fully connected layers are connected in sequence. Each convolution module comprises a convolutional layer, a ReLU activation function, and a max pooling layer, connected in that order.
The convolution modules extract the traffic sign features. After the convolution modules, the three fully connected layers follow in sequence: the first fully connected layer connects to the preceding convolution module, the second combines the output of the previous fully connected layer into a one-dimensional feature vector, and the last serves as the classifier and output layer, classifying and outputting the traffic sign according to the combined one-dimensional feature vector.
The convolutional neural network model structure is shown in table 1.
TABLE 1
Layer | Layer type | Depth (feature maps) | Feature size | Convolution kernel size
0 | Input layer | — | 32×32 | —
1 | Convolutional layer | 32 | 32×32 | 5×5
2 | Max pooling layer | 32 | 16×16 | 2×2
3 | Convolutional layer | 64 | 16×16 | 5×5
4 | Max pooling layer | 64 | 8×8 | 2×2
5 | Convolutional layer | 128 | 8×8 | 5×5
6 | Max pooling layer | 128 | 4×4 | 2×2
7 | Fully connected layer | — | 3584 | —
8 | Fully connected layer | — | 1024 | —
9 | Output layer | — | 43 | —
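A PyTorch sketch of the backbone in Table 1 (three convolution modules followed by three fully connected layers) is given below. The single grayscale input channel follows the preprocessing step, while the padding of 2 for the 5 × 5 convolutions and the 2048-dimensional flattened input to the first fully connected layer are inferred from the feature sizes in the table rather than stated in the patent.

```python
import torch
import torch.nn as nn

class TrafficSignCNN(nn.Module):
    """Backbone following Table 1: three conv modules, then three FC layers.

    Sketch only: padding=2 keeps the 5x5 convolutions size-preserving so
    the feature sizes match the table, and the flattened dimension
    (128 x 4 x 4 = 2048) feeding the first FC layer is inferred.
    """
    def __init__(self, num_classes=43):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, padding=2),    # 32x32 -> 32x32
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                               # -> 16x16
            nn.Conv2d(32, 64, kernel_size=5, padding=2),   # -> 16x16
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                               # -> 8x8
            nn.Conv2d(64, 128, kernel_size=5, padding=2),  # -> 8x8
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                               # -> 4x4
        )
        self.classifier = nn.Sequential(
            nn.Linear(128 * 4 * 4, 3584),
            nn.ReLU(inplace=True),
            nn.Linear(3584, 1024),
            nn.ReLU(inplace=True),
            nn.Linear(1024, num_classes),                  # 43 sign classes
        )

    def forward(self, x):
        x = self.features(x)
        x = torch.flatten(x, 1)
        return self.classifier(x)
```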
A spatial transformation network is inserted at the front end of each convolution module, i.e., between the input layer and the first convolution module and between successive convolution modules. The output of the input layer or of the preceding convolution module is the input of the spatial transformation network, and the output of the spatial transformation network is the input of the following convolution module; this yields the STN-CNN model.
A localization network (localization net) must be designed for each spatial transformation network. The localization net consists of convolutional layers, max pooling layers, ReLU activation functions, and fully connected layers. The convolutional and pooling layers extract the spatial transformation feature information required by the localization net, and the fully connected layers output the learned affine transformation matrix, which produces optimized input image data: the detected sign is shifted as close as possible to the center of the image and rotated into a unified coordinate system. The dimensions of the localization net are determined by the size of the input sample and the size of the output parameter matrix, which is 6 × 1 in this embodiment. Table 2 shows the detailed structure and parameters of the localization nets in spatial transformation networks s1, s2, and s3; the convolution kernel size is 5 × 5, the max pooling kernel size is 2 × 2, and the kernel sizes and numbers of input/output feature maps are fixed.
TABLE 2
Layer / Layer type | Output size at s1 | Output size at s2 | Output size at s3
0 / Input | 32×32×1 | 16×16×32 | 8×8×64
1 / Max pooling layer | 16×16×1 | 8×8×32 | 4×4×64
2 / Convolutional layer | 16×16×250 | 8×8×250 | 4×4×250
3 / ReLU | 16×16×250 | 8×8×250 | 4×4×250
4 / Max pooling layer | 8×8×250 | 4×4×250 | 2×2×250
5 / Convolutional layer | 8×8×250 | 4×4×250 | 2×2×250
6 / ReLU | 8×8×250 | 4×4×250 | 2×2×250
7 / Max pooling layer | 4×4×250 | 2×2×250 | 1×1×250
8 / Fully connected layer | 250 | 250 | 250
9 / Fully connected layer | 6 | 6 | 6
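A sketch of one spatial transformer block that could sit in front of a convolution module is given below, following the layer order of Table 2 (a max pooling layer, two convolution + ReLU + pooling stages with 250 feature maps, and two fully connected layers ending in the 6 affine parameters). The padding of 2, the ReLU between the two fully connected layers, the identity initialization of the final layer, and the use of PyTorch's affine_grid/grid_sample for the sampling step are assumptions of this sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialTransformer(nn.Module):
    """Spatial transformer block placed in front of a convolution module.

    The localization net follows the layer order of Table 2; padding=2,
    the ReLU between the two FC layers, the identity initialization of the
    affine parameters, and affine_grid/grid_sample sampling are assumptions.
    """
    def __init__(self, in_channels, in_size):
        super().__init__()
        self.loc = nn.Sequential(
            nn.MaxPool2d(2),
            nn.Conv2d(in_channels, 250, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(250, 250, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        feat = in_size // 8                       # three 2x2 poolings
        self.fc1 = nn.Linear(250 * feat * feat, 250)
        self.fc2 = nn.Linear(250, 6)              # 6 affine parameters
        # Start from the identity transform so early training is stable.
        self.fc2.weight.data.zero_()
        self.fc2.bias.data.copy_(torch.tensor([1, 0, 0, 0, 1, 0],
                                              dtype=torch.float))

    def forward(self, x):
        h = torch.flatten(self.loc(x), 1)
        theta = self.fc2(F.relu(self.fc1(h))).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)
```

In the full STN-CNN model, three such blocks would be instantiated per Table 2, e.g. SpatialTransformer(1, 32), SpatialTransformer(32, 16), and SpatialTransformer(64, 8), placed in front of the first, second, and third convolution modules respectively.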
Step 2.2, acquiring the actual result from the traffic sign data set, and updating the weights of the STN-CNN model by back propagation with the stochastic gradient descent (SGD) algorithm according to the error between the output result and the actual result, thereby completing one iteration of forward and back propagation.
and 2.3, repeating the iterative training process until the set iteration times are reached or the weight of the STN-CNN model is stable and does not change, stopping the iterative training process, and terminating the training to obtain the trained STN-CNN model.
Step 3, inputting the preprocessed test set into the trained STN-CNN model for recognition and prediction to obtain the optimal STN-CNN model.
Step 4, acquiring a traffic sign image to be detected and inputting it into the optimal STN-CNN model to obtain the prediction result.
Table 3 compares the prediction accuracy of the convolutional neural network before and after introducing the spatial transformation network, as well as the performance of other models.
TABLE 3
[Table 3 is reproduced only as images in the original publication (accuracy, single-image recognition time, and parameter comparison of the models); the images are not included here.]
The accuracy of the s1_c_s2_c_s3_c model with spatial transformation networks reaches 99.36%, and the recognition time for a single image is 4.30 μs. Compared with other models, the number of model parameters is greatly reduced without any loss of accuracy, and the accuracy is higher than that of models with a comparable number of parameters, so the method can meet the requirements of real-time performance and high accuracy in practical applications.
In a traffic sign recognition task, a moving vehicle may observe a traffic sign from different angles. Although a convolutional neural network model has good translation, scale, and deformation invariance, it has almost no rotation or distortion invariance and is extremely sensitive to rotations of the image, which degrades its performance. The collected traffic signs can be transformed by data enhancement methods such as rotation, translation, zooming, tilting, and cropping to strengthen feature learning in this respect, but compared with letting the network learn rotation invariance implicitly, it is preferable to design an explicit processing module for the network that learns and handles such transformations. Spatial transformer networks (STNs) geometrically transform the input image so that the CNN can maintain the spatial invariance of the input data in a computationally efficient way. Because the transformation parameters applied to the feature map are learned by the back propagation algorithm, this differentiable module can be inserted directly into an existing CNN architecture.
The STN-CNN model obtained by adding spatial transformation networks to the convolutional neural network model has a high recognition rate, a high recognition speed, and strong generalization ability. Traffic sign classification and recognition experiments were carried out on the German traffic sign data set (GTSRB). The experimental results show that the recognition accuracy of the traffic sign recognition method based on the improved convolutional neural network reaches 99.36%; compared with other models, the model has fewer parameters and stronger robustness, requires less time for training and inference, and can meet the requirements of real-time performance and high accuracy in practical applications.
The foregoing illustrates and describes the principles, general features, and advantages of the present invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above; the embodiments and the description only illustrate the principles of the invention, and various changes and modifications may be made without departing from the spirit and scope of the invention, all of which fall within the scope of the invention as claimed. The scope of the invention is defined by the appended claims and their equivalents.

Claims (10)

1. A traffic sign identification method based on an improved convolutional neural network is characterized by comprising the following steps:
step 1, acquiring a traffic sign data set, dividing the traffic sign data set to obtain an original training set and an original test set, respectively preprocessing the original training set and the original test set to obtain a preprocessed training set and a preprocessed test set, and then performing data expansion on the preprocessed training set to obtain a training set;
step 2, constructing a convolutional neural network, introducing a spatial transformation network into the convolutional neural network to obtain an STN-CNN model, and inputting the training set into the STN-CNN model for training to obtain a trained STN-CNN model;
step 3, inputting the preprocessed test set into the trained STN-CNN model for recognition and prediction to obtain an optimal STN-CNN model;
and step 4, acquiring a traffic sign image to be detected, and inputting the traffic sign image to be detected into the optimal STN-CNN model to obtain a prediction result.
2. The traffic sign recognition method based on the improved convolutional neural network as claimed in claim 1, wherein:
in step 1, preprocessing the original test set specifically comprises:
scaling the image data in the original test set to the same size through up-sampling or down-sampling to obtain the preprocessed test set.
3. The traffic sign recognition method based on the improved convolutional neural network as claimed in claim 2, wherein:
in step 1, preprocessing the original training set specifically comprises:
up-sampling or down-sampling the image data in the original training set to the same size as the test set to obtain a sampled training set;
applying local histogram equalization to the sampled image data;
converting the equalized image data to gray scale;
and performing image enhancement on the gray-scale image data to obtain the preprocessed training set.
4. The traffic sign recognition method based on the improved convolutional neural network as claimed in claim 3, wherein:
in step 1, the specific steps of data expansion include:
adding new traffic sign images so that the numbers of images of the various traffic sign classes are balanced, and preprocessing the newly added traffic sign image data according to the preprocessing method of the original training set to obtain a balanced training set;
and combining the balanced training set with the preprocessed training set to obtain the training set.
5. The traffic sign recognition method based on the improved convolutional neural network as claimed in claim 3, wherein:
the specific steps of image enhancement include image flipping, image rotation, projection, noise addition, and image blurring.
6. The traffic sign recognition method based on the improved convolutional neural network as claimed in claim 1, wherein:
in step 2, the convolutional neural network includes:
an input layer, three convolution modules, and three fully connected layers;
the three convolution modules are connected in sequence; the three fully connected layers are connected in sequence;
the input layer, the convolution modules, and the fully connected layers are connected in sequence;
each convolution module comprises a convolutional layer, a ReLU activation function, and a max pooling layer;
and the convolutional layer, the ReLU activation function, and the max pooling layer are connected in sequence.
7. The traffic sign recognition method based on the improved convolutional neural network as claimed in claim 6, wherein:
in step 2, introducing the spatial transformation network into the convolutional neural network specifically comprises:
inserting a spatial transformation network at the front end of each convolution module to obtain the STN-CNN model.
8. The traffic sign recognition method based on the improved convolutional neural network as claimed in claim 1, wherein:
in step 2, inputting the training set into the STN-CNN model for training specifically comprises the following steps:
step 2.1, inputting the training set into the STN-CNN model for forward propagation to obtain an output result;
step 2.2, acquiring the actual result from the traffic sign data set, and updating the weights of the STN-CNN model by back propagation according to the error between the output result and the actual result, thereby completing one iteration of forward and back propagation;
and step 2.3, repeating the iteration process until the set iteration stop condition is reached, then stopping to obtain the trained STN-CNN model.
9. The traffic sign recognition method based on the improved convolutional neural network as claimed in claim 8, wherein:
in step 2.2, the back propagation adopts a stochastic gradient descent algorithm.
10. The traffic sign recognition method based on the improved convolutional neural network as claimed in claim 8, wherein:
the iteration stopping conditions set in the step comprise that the set iteration times or the weight of the STN-CNN model are stable and do not change.
CN202110297712.4A 2021-03-19 2021-03-19 Traffic sign identification method based on improved convolutional neural network Pending CN113052057A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110297712.4A CN113052057A (en) 2021-03-19 2021-03-19 Traffic sign identification method based on improved convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110297712.4A CN113052057A (en) 2021-03-19 2021-03-19 Traffic sign identification method based on improved convolutional neural network

Publications (1)

Publication Number Publication Date
CN113052057A true CN113052057A (en) 2021-06-29

Family

ID=76514279

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110297712.4A Pending CN113052057A (en) 2021-03-19 2021-03-19 Traffic sign identification method based on improved convolutional neural network

Country Status (1)

Country Link
CN (1) CN113052057A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113625144A (en) * 2021-08-11 2021-11-09 北京信息科技大学 IGBT fault prediction method and system
CN114544868A (en) * 2022-01-20 2022-05-27 上海工程技术大学 Gas detection method and system for eliminating influence of interference gas
CN115205637A (en) * 2022-09-19 2022-10-18 山东世纪矿山机电有限公司 Intelligent identification method for mine car materials
CN116453121A (en) * 2023-06-13 2023-07-18 合肥市正茂科技有限公司 Training method and device for lane line recognition model
CN116721403A (en) * 2023-06-19 2023-09-08 山东高速集团有限公司 Road traffic sign detection method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108985217A (en) * 2018-07-10 2018-12-11 常州大学 A kind of traffic sign recognition method and system based on deep space network
CN111274971A (en) * 2020-01-21 2020-06-12 南京航空航天大学 Traffic identification method based on color space fusion network and space transformation network
CN111325152A (en) * 2020-02-19 2020-06-23 北京工业大学 Deep learning-based traffic sign identification method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108985217A (en) * 2018-07-10 2018-12-11 常州大学 A kind of traffic sign recognition method and system based on deep space network
CN111274971A (en) * 2020-01-21 2020-06-12 南京航空航天大学 Traffic identification method based on color space fusion network and space transformation network
CN111325152A (en) * 2020-02-19 2020-06-23 北京工业大学 Deep learning-based traffic sign identification method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ÁLVARO ARCOS-GARCÍA ET AL: "Deep neural network for traffic sign recognition systems: An analysis of spatial transformers and stochastic optimisation methods", 《NEURAL NETWORKS:THE OFFICIAL JOURNAL OF THE INTERNATIONAL NEURAL NETWORK SOCIETY》 *
MRINAL HALOI ET AL: "Traffic Sign Classification Using Deep Inception Based Convolutional Networks", 《ARXIV:1511.02992V2》 *
高志强 et al.: "《深度学习从入门到实战》" (Deep Learning: From Introduction to Practice), 30 June 2018 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113625144A (en) * 2021-08-11 2021-11-09 北京信息科技大学 IGBT fault prediction method and system
CN114544868A (en) * 2022-01-20 2022-05-27 上海工程技术大学 Gas detection method and system for eliminating influence of interference gas
CN114544868B (en) * 2022-01-20 2024-03-26 上海工程技术大学 Gas detection method and system for eliminating influence of interference gas
CN115205637A (en) * 2022-09-19 2022-10-18 山东世纪矿山机电有限公司 Intelligent identification method for mine car materials
CN116453121A (en) * 2023-06-13 2023-07-18 合肥市正茂科技有限公司 Training method and device for lane line recognition model
CN116453121B (en) * 2023-06-13 2023-12-22 合肥市正茂科技有限公司 Training method and device for lane line recognition model
CN116721403A (en) * 2023-06-19 2023-09-08 山东高速集团有限公司 Road traffic sign detection method

Similar Documents

Publication Publication Date Title
CN113052210B (en) Rapid low-light target detection method based on convolutional neural network
CN113052057A (en) Traffic sign identification method based on improved convolutional neural network
CN108647585B (en) Traffic identifier detection method based on multi-scale circulation attention network
CN113313657B (en) Unsupervised learning method and system for low-illumination image enhancement
CN109753878B (en) Imaging identification method and system under severe weather
CN107506765B (en) License plate inclination correction method based on neural network
CN112183203A (en) Real-time traffic sign detection method based on multi-scale pixel feature fusion
CN116665176B (en) Multi-task network road target detection method for vehicle automatic driving
CN112132145B (en) Image classification method and system based on model extended convolutional neural network
CN112287941B (en) License plate recognition method based on automatic character region perception
CN110910413A (en) ISAR image segmentation method based on U-Net
CN111310766A (en) License plate identification method based on coding and decoding and two-dimensional attention mechanism
CN113723377A (en) Traffic sign detection method based on LD-SSD network
CN111709307B (en) Resolution enhancement-based remote sensing image small target detection method
CN112446292B (en) 2D image salient object detection method and system
CN117671509B (en) Remote sensing target detection method and device, electronic equipment and storage medium
CN114913498A (en) Parallel multi-scale feature aggregation lane line detection method based on key point estimation
CN115861756A (en) Earth background small target identification method based on cascade combination network
CN117557774A (en) Unmanned aerial vehicle image small target detection method based on improved YOLOv8
Cho et al. Modified perceptual cycle generative adversarial network-based image enhancement for improving accuracy of low light image segmentation
CN113393385B (en) Multi-scale fusion-based unsupervised rain removing method, system, device and medium
CN117994573A (en) Infrared dim target detection method based on superpixel and deformable convolution
CN117495718A (en) Multi-scale self-adaptive remote sensing image defogging method
CN117115616A (en) Real-time low-illumination image target detection method based on convolutional neural network
CN117115770A (en) Automatic driving method based on convolutional neural network and attention mechanism

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210629

RJ01 Rejection of invention patent application after publication