CN113505821A - Deep neural network image identification method and system based on sample reliability - Google Patents


Info

Publication number
CN113505821A
Authority
CN
China
Prior art keywords
recognition
sample
image
credibility
network
Prior art date
Legal status
Granted
Application number
CN202110726015.6A
Other languages
Chinese (zh)
Other versions
CN113505821B (en)
Inventor
戴大伟
唐晓宇
刘颖格
夏书银
朱宏飞
王国胤
Current Assignee
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date: 2021-06-29
Filing date: 2021-06-29
Publication date: 2021-10-15
Application filed by Chongqing University of Post and Telecommunications
Priority to CN202110726015.6A
Publication of CN113505821A
Application granted
Publication of CN113505821B
Legal status: Active

Classifications

    • G06F18/2415 — Pattern recognition; classification techniques relating to the classification model based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F18/214 — Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F18/253 — Pattern recognition; fusion techniques of extracted features
    • G06N3/045 — Neural networks; architecture, e.g. interconnection topology; combinations of networks
    • G06N3/047 — Neural networks; probabilistic or stochastic networks
    • G06N3/084 — Neural networks; learning methods; backpropagation, e.g. using gradient descent

Abstract

The invention belongs to the field of image recognition and relates to a deep neural network image recognition method and system based on sample credibility. The method obtains images to be processed and inputs them into a trained deep neural network model; each image is passed through a pre-recognition network, and the maximum value of the Softmax-processed output is taken as the image's credibility. A high-credibility image is recognized by a shallow convolution module; a low-credibility image is passed to the next layer of the network for renewed feature extraction and pre-recognition, and this is repeated until the image reaches high credibility or the deepest layer of the network. This sample-routing scheme reduces the computational cost of the network, enables layered, isolated training of credible and incredible samples, and improves recognition accuracy and robustness to interference.

Description

Deep neural network image identification method and system based on sample reliability
Technical Field
The invention belongs to the field of image recognition, and particularly relates to a deep neural network image recognition method and system based on sample credibility.
Background
With the rapid development of computer and information processing technology, society has entered the information age. The amount of knowledge available to people has grown explosively, which demands continuous improvement of information processing technology so that more convenient and diverse services can be provided. A computer cannot directly understand the content of a picture, because what it sees is only a matrix of numbers. To let computers understand image content, early work established the mapping between the matrix and its actual meaning through machine learning algorithms; in recent years, with the rapid development of computing, deep learning based on artificial neural networks has gradually become dominant. A digital image is fed into a neural network, the output is compared with the label, and the network parameters are adjusted so that the computer learns the mapping between the input image and its meaning. For simple tasks, a neural network can assign a label to an image, such as cat, dog, or elephant; for complex tasks, it can interpret the content of an image and return a human-readable sentence. Beyond image classification itself, such models also support tasks such as face recognition and object detection, and therefore have good application prospects.
The goal of image classification is to determine the category of an input picture through feature extraction and analysis. Most existing approaches feed every picture through the same deep neural network and predict all samples with the full model, which wastes considerable computing resources. Taking ResNet as an example: on the CIFAR-10 dataset, ResNet-56 achieves 93.03% accuracy with about 0.85 million parameters, while ResNet-110 achieves 93.57% accuracy with about 1.7 million parameters; the accuracy gain is small while the amount of computation roughly doubles.
In a deep neural network, most pictures can already be recognized well after passing through only the shallow layers; only a small fraction of pictures need the deep layers to determine their recognition result reliably.
Disclosure of Invention
The method comprises obtaining images to be processed and inputting them into a trained deep neural network model. Each image is passed through a pre-recognition network, and the maximum value of the Softmax-processed output is taken as the image's credibility. A high-credibility image obtains its recognition result from a shallow convolution module; a low-credibility image enters the next layer of the network for renewed feature extraction and pre-recognition, and this is repeated until the image reaches high credibility or the deepest layer of the network.
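As an illustration of the routing just described, the following is a minimal sketch in PyTorch-style Python. The module names (`backbone_blocks`, `pre_heads`, `trusted_heads`), the 0.9 threshold, and the single-image batch are assumptions introduced for readability, and the final-layer fusion with original-image features described later is omitted; this is a sketch, not the patented implementation itself.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def recognize(image, backbone_blocks, pre_heads, trusted_heads, threshold=0.9):
    """Confidence-gated inference for a single image (shape [1, C, H, W]).
    At each depth: extract features, pre-recognize, take the maximum of the
    Softmax output as the credibility, and exit through the lightweight
    trusted-sample head if the credibility is high enough; otherwise pass
    the feature map on to the next, deeper layer."""
    feat = image
    for depth, (block, pre_head, trusted_head) in enumerate(
            zip(backbone_blocks, pre_heads, trusted_heads)):
        feat = block(feat)                                   # backbone feature extraction
        logits = pre_head(feat)                              # pre-recognition network output
        credibility = F.softmax(logits, dim=1).max().item()  # maximum Softmax value
        deepest = depth == len(backbone_blocks) - 1
        if credibility >= threshold or deepest:
            return trusted_head(feat)                        # shallow-exit recognition result
```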
Further, the process of training the deep neural network model includes:
S1: preprocess the original images and divide the data into a training data set and a test data set;
S2: input the training data set and extract features with a convolution module to obtain sample feature maps;
S3: using the backbone features of the current neural network layer, pre-recognize the images with a pre-recognition network, divide the samples into low-credibility and high-credibility samples, compute the loss between the pre-recognition results and the labels, and back-propagate the loss to train the current layer's backbone network and pre-recognition network;
S4: recognize the high-credibility samples with a lightweight convolution module, output the results, compute the loss between the recognition results and the labels, back-propagate the loss, and train this layer's credible-sample network;
S5: for the low-credibility samples, iterate S2–S4 for feature extraction and pre-recognition, obtaining high-credibility feature maps for recognition and low-credibility feature maps for further iteration, until the last layer of the network is reached (a code sketch of S2–S5 is given after this list);
S6: after the last splitting operation, take the original images corresponding to the remaining low-credibility feature maps, pass them through a convolution module to obtain feature maps, and fuse these with the low-credibility feature maps from the last split;
S7: input the finally fused low-credibility feature maps into a convolution module and a fully connected layer for feature extraction and recognition, output the results, compute the loss between the recognition results and the labels, back-propagate the loss, and train this convolution module and the last layer of the backbone network;
S8: keep adjusting the model parameters on the training data set, input the test data set into the model, compute the recognition accuracy of the model from the recognition results, save the model with the highest recognition accuracy, and finish training.
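A hedged sketch of how steps S2–S5 might be realized in PyTorch-style Python. The patent back-propagates a cross-entropy loss for each pre-recognition network and each credible-sample branch but does not fix the optimizer, the loss weighting, or the trusted fraction; summing the per-layer losses into one optimizer step, the `trust_ratio` value, and the module names are assumptions here. Steps S6–S7 (the original-image feature fusion for the least credible samples) are sketched separately further below.

```python
import torch.nn.functional as F

def train_step(images, labels, backbone_blocks, pre_heads, trusted_heads,
               optimizer, trust_ratio=0.7):
    """One illustrative pass over S2-S5: every layer contributes a
    cross-entropy loss for its pre-recognition head and a second
    cross-entropy loss for the lightweight head that classifies the
    high-credibility samples it keeps; the rest continue to the next layer."""
    optimizer.zero_grad()
    feat, y, loss = images, labels, 0.0
    for block, pre_head, trusted_head in zip(backbone_blocks, pre_heads, trusted_heads):
        feat = block(feat)                                     # S2: feature extraction
        logits = pre_head(feat)                                # S3: pre-recognition
        loss = loss + F.cross_entropy(logits, y)               # S3: backbone + pre-recognition loss
        conf = F.softmax(logits, dim=1).max(dim=1).values      # credibility per sample
        order = conf.argsort(descending=True)
        k = max(1, int(trust_ratio * conf.numel()))
        trusted, untrusted = order[:k], order[k:]
        loss = loss + F.cross_entropy(trusted_head(feat[trusted]), y[trusted])  # S4
        feat, y = feat[untrusted], y[untrusted]                # S5: iterate on the rest
        if y.numel() == 0:
            break
    loss.backward()
    optimizer.step()
    return float(loss)
```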
Further, the backbone network is a first convolution module formed by stacking a plurality of convolution layers.
Furthermore, the pre-recognition network comprises a second convolution module and a fully connected layer, and the high-credibility samples are recognized by a second convolution module. Each second convolution module is formed by stacking several convolution layers, and its network scale and parameter count are smaller than those of the first convolution module.
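For concreteness, one plausible shape of the first (backbone) and second (pre-recognition/lightweight) convolution modules is sketched below; the depths, channel widths, and class count are placeholders, since the patent deliberately leaves the exact structure open.

```python
import torch.nn as nn

def conv_stack(in_ch, out_ch, n_layers):
    """A plain stack of 3x3 conv + BN + ReLU layers; depth and widths are
    placeholders, as the patent does not prescribe a specific structure."""
    layers = []
    for i in range(n_layers):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                   nn.BatchNorm2d(out_ch),
                   nn.ReLU(inplace=True)]
    return nn.Sequential(*layers)

# First convolution module (backbone block): the larger stack.
backbone_block = conv_stack(in_ch=3, out_ch=64, n_layers=4)

# Second convolution module (pre-recognition / lightweight branch): a smaller
# stack plus a fully connected classification layer.
pre_recognition_head = nn.Sequential(
    conv_stack(in_ch=64, out_ch=32, n_layers=1),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 10),   # 10 classes is an assumption (e.g. CIFAR-10)
)
```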
The invention also provides a deep neural network image recognition system based on sample credibility, comprising a credibility recognition module, a low-credibility feature extraction module and a high-credibility feature extraction module. The credibility recognition module recognizes the credibility of an image and, according to the threshold-based split inside the module, routes the image to the low-credibility or the high-credibility feature extraction module.
Furthermore, the credibility recognition module comprises a convolution layer, a fully connected layer, a Softmax layer and a threshold splitting layer. The maximum value output by the Softmax layer is taken as the picture's credibility; the threshold splitting layer compares the credibility with a set threshold, and the picture is input into the high-credibility feature extraction module if the credibility is greater than the threshold and into the low-credibility feature extraction module otherwise.
Further, the low-credibility recognition module feeds the low-credibility sample back into the credibility recognition network for feature extraction and credibility recognition. When the result after the last split of the credibility recognition module is still a low-credibility image, the feature map obtained by passing the corresponding original image through a convolution layer is fused with the feature map from the last split; the fusion result is input into a backbone network, and the output of that backbone network is taken as the recognition result of the image.
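A sketch of the fusion described above for samples that remain low-credibility after the final split. The patent does not specify the fusion operator, the channel sizes, or how spatial sizes are matched; channel concatenation, a single 3x3 convolution over the original image, and adaptive pooling are assumptions in this sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LowCredibilityFusion(nn.Module):
    """For samples still low-credibility after the final split: extract a
    feature map from the corresponding original image with one convolution
    layer and fuse it with the feature map from the last split, then classify
    with the final backbone. Concatenation is an illustrative choice."""
    def __init__(self, img_ch=3, feat_ch=64, num_classes=10):
        super().__init__()
        self.img_conv = nn.Conv2d(img_ch, feat_ch, 3, padding=1)
        self.final_backbone = nn.Sequential(
            nn.Conv2d(2 * feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(feat_ch, num_classes),
        )

    def forward(self, original_image, last_feature_map):
        img_feat = self.img_conv(original_image)
        # pool the image features down to the deep feature map's spatial size
        img_feat = F.adaptive_avg_pool2d(img_feat, last_feature_map.shape[-2:])
        fused = torch.cat([img_feat, last_feature_map], dim=1)
        return self.final_backbone(fused)
```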
The method determines the credibility of a sample by pre-computing the maximum Softmax value of the sample's prediction, so that credible samples are isolated from incredible samples and each layer of the network receives specialized training. In addition, a large number of credible samples obtain their recognition results through a shallow network structure, which reduces the computing cost of the network; and the credibility of incredible samples is raised by the deeper network, which improves the network's robustness to interference.
Drawings
FIG. 1 is a diagram of the high-credibility feature recognition module of the present invention;
FIG. 2 is a diagram of the iterative pre-recognition network module of the present invention;
FIG. 3 is a diagram of the image recognition method based on the sample-credibility deep neural network according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention provides a deep neural network image recognition method based on sample credibility: images to be processed are obtained and input into a trained deep neural network model; each image is passed through a pre-recognition network, and the maximum value of the Softmax-processed output is taken as the image's credibility; a high-credibility image obtains its recognition result from a shallow convolution module, while a low-credibility image enters the next layer of the network for renewed feature extraction and pre-recognition, and these operations are repeated until the image reaches high credibility or the deepest layer of the network.
Example 1
In this embodiment, the image to be processed is input into the trained improved deep neural network model to obtain the recognition result, and the process of training the improved deep neural network model includes:
S1: read the original dataset and preprocess the original images to obtain a training data set and a test data set;
S2: input the training data set and extract features with a convolution module (the backbone network) to obtain sample feature maps;
S3: pre-recognize the images with a lightweight convolution module and a fully connected layer, divide the samples into low-credibility and high-credibility samples, compute the loss between the pre-recognition results and the labels, and train this layer's backbone network and pre-recognition network;
S4: recognize the high-credibility samples with a lightweight convolution module and a fully connected layer, output the results, compute the loss between the recognition results and the labels, back-propagate the loss, and train this layer's credible-sample network;
S5: for the low-credibility samples, iterate the operations of S2, S3 and S4 for feature extraction and pre-recognition, obtaining high-credibility feature maps for recognition and low-credibility feature maps for further iteration, until the last layer of the network is reached;
S6: after the last splitting operation, take the original images corresponding to the remaining low-credibility feature maps, pass them through a convolution module to obtain feature maps, and fuse these with the low-credibility feature maps from the last split;
S7: input the finally fused low-credibility feature maps into a convolution module and a fully connected layer for feature extraction and recognition, output the results, compute the loss between the recognition results and the labels, back-propagate the loss, and train the last layer of the backbone network;
S8: keep adjusting the model parameters on the training data set, input the test data set into the model, compute the recognition accuracy of the model from the recognition results, save the model with the highest recognition accuracy, and finish training.
After an original image is input into the network, a backbone network composed of convolution modules performs initial feature extraction to obtain a feature map. As shown in FIG. 2, during training, the process of routing samples into different networks for layered, isolated training according to the pre-recognition result comprises the following steps:
(1) Pass the feature map into the pre-recognition network composed of convolution layers and a fully connected layer to obtain the pre-recognition result, and back-propagate the cross-entropy loss to train the pre-recognition network and the backbone network.
(2) Apply Softmax to the pre-recognition result and take the maximum predicted value of each sample as its credibility. Sort the samples by credibility; according to a preset threshold percentage, treat the most credible portion of the training batch as credible samples and the rest as incredible samples (a helper implementing this split is sketched after this list). Train the current layer's backbone network and pre-recognition network with the cross-entropy loss.
(3) Split the feature maps produced by the backbone according to this division. The feature maps of credible samples are passed into a credible-sample network composed of convolution layers and a fully connected layer, which is trained with the cross-entropy loss; the feature maps of incredible samples are passed into the next layer's backbone network and trained together with the next layer's pre-recognition network; the least credible samples remaining after several iterations are used to train the final convolution module and the last layer of the backbone.
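The credibility-based split referred to in step (2) can be written as a small helper. Hedged: the `trust_ratio` value and the tie-breaking by sort order are assumptions; the patent only states that the most credible fraction of the batch, by a preset threshold percentage, is treated as credible.

```python
import torch
import torch.nn.functional as F

def split_by_confidence(logits, trust_ratio=0.7):
    """Return (trusted_idx, untrusted_idx) by ranking samples on their
    maximum Softmax probability and keeping the top `trust_ratio` fraction
    of the batch as credible samples."""
    conf = F.softmax(logits, dim=1).max(dim=1).values
    order = torch.argsort(conf, descending=True)
    k = max(1, int(trust_ratio * conf.numel()))
    return order[:k], order[k:]
```

For example, `trusted_idx, untrusted_idx = split_by_confidence(pre_head(feat), trust_ratio=0.7)` would route 70% of a batch to the credible-sample branch and pass the remainder to the next layer.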
During training, the specific process by which passing incredible samples into the deeper network raises their credibility is as follows. After a low-credibility sample is passed into the deep network, the loss on its pre-recognition result is computed by first applying Softmax and then the cross-entropy. By the Softmax principle, the predicted probabilities of all recognition results sum to 1; when the value of the correct result grows, the cross-entropy loss decreases. Training therefore raises the predicted probability of the correct result and suppresses the predicted probabilities of the wrong results, which increases the sample's credibility, reduces the influence of perturbations on the network's result, and improves the network's robustness to interference.
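A short numerical illustration of this point (the logits and the single manual gradient step are made up, not from the patent): because the Softmax probabilities sum to 1, lowering the cross-entropy loss raises the correct class's probability and necessarily suppresses the others, i.e. raises the sample's credibility.

```python
import torch
import torch.nn.functional as F

logits = torch.tensor([[1.0, 0.8, 0.6]], requires_grad=True)  # weak, low-credibility prediction
label = torch.tensor([0])

probs = F.softmax(logits, dim=1)
print(probs.sum().item())   # 1.0 - the probabilities sum to one
print(probs.max().item())   # credibility before the update (about 0.40)

loss = F.cross_entropy(logits, label)
loss.backward()
with torch.no_grad():
    logits -= 1.0 * logits.grad                   # one illustrative gradient step
print(F.softmax(logits, dim=1).max().item())      # correct-class probability (credibility) rises
```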
During testing, the specific process of deciding how deep an input image travels through the network according to the pre-recognition result is as follows. After the input images pass through the backbone and pre-recognition networks, the credibility of each sample is computed, and a threshold is preset according to the routing data observed during training. Images whose credibility is greater than or equal to the threshold are considered credible and are recognized directly by the credible-sample network; images whose credibility is below the threshold are considered incredible and are passed into the deeper network for further judgment. This reduces the amount of network computation.
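One way the preset test-time threshold could be derived from the routing data observed during training is to take the quantile of the training credibilities that matches the trusted percentage. This is an assumption: the patent only says the threshold is preset from the training-time flow distribution.

```python
import torch

def calibrate_threshold(train_confidences, trust_ratio=0.7):
    """Pick a credibility threshold so that roughly the top `trust_ratio`
    fraction of training samples would have been routed to the shallow
    credible-sample branch; at test time, images whose credibility is
    greater than or equal to the returned value exit early."""
    return torch.quantile(train_confidences, 1.0 - trust_ratio).item()
```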
The backbone network and the lightweight convolution modules in this embodiment, i.e. the first and second convolution modules, are formed by stacking several convolution layers; no specific structure is prescribed, and the scheme can be applied to any common deep neural network model. Relative to the backbone, the lightweight convolution module is a network of smaller scale with fewer parameters. Those skilled in the art may set the actual number of convolution layers in the backbone and the lightweight module according to practical requirements, which the present invention does not limit.
Example 2
This embodiment provides a deep neural network image recognition system based on sample credibility, comprising a credibility recognition module, a low-credibility feature recognition module and a high-credibility feature recognition module. The credibility recognition module recognizes the credibility of an image and, according to the threshold-based split inside the module, routes the image to the low-credibility or the high-credibility feature recognition module. The system divides the data samples into credible and incredible samples: the feature maps of credible samples are passed into and trained by the current layer's credible-sample network, while the feature maps of incredible samples are passed into the next layer's backbone network for further iterative division. Different networks at different levels fit different samples, so credible and incredible samples receive layered, isolated training, which improves recognition accuracy and robustness to interference.
Further, as shown in FIG. 2, the credibility recognition module comprises a second convolution module, a fully connected layer, a Softmax layer and a threshold splitting layer. The maximum value output by the Softmax layer is taken as the picture's credibility; the threshold splitting layer compares the credibility with a set threshold, and the sample is input into the high-credibility feature extraction module if the credibility is greater than the threshold and into the low-credibility feature extraction module otherwise.
Further, the low-credibility recognition module feeds the low-credibility sample back into the credibility recognition network for feature extraction and credibility recognition. When the result after the last split of the credibility recognition module is still a low-credibility image, the feature map obtained by passing the corresponding original image through one convolution layer is fused with the feature map from the last split, realizing multi-level feature fusion and improving accuracy; the fusion result is input into a backbone network, and the output of that backbone network is taken as the recognition result of the image.
Further, as shown in FIG. 1, the high-credibility feature recognition module includes a convolution layer and a fully connected layer.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (9)

1. A deep neural network image recognition method based on sample credibility, characterized in that images to be processed are obtained and input into a trained deep neural network model; each image is passed through a pre-recognition network, and the maximum value of the Softmax-processed output is taken as the image's credibility; a high-credibility image obtains its recognition result from a shallow convolution module, while a low-credibility image enters the next layer of the network for renewed feature extraction and pre-recognition; these operations are repeated until the image reaches high credibility or the deepest layer of the network.
2. The method for deep neural network image recognition based on sample credibility as claimed in claim 1, wherein the process of training the deep neural network model comprises:
S1: preprocess the original images and divide the data into a training data set and a test data set;
S2: input the training data set and extract features with the backbone network to obtain sample feature maps;
S3: pre-recognize the images with a pre-recognition network, divide the samples into low-credibility and high-credibility samples, compute the loss between the pre-recognition results and the labels, back-propagate the loss, and train the current layer's backbone network and the pre-recognition network;
S4: recognize the high-credibility samples with a lightweight convolution module, output the results, compute the loss between the recognition results and the labels, back-propagate the loss, and train this layer's credible-sample network;
S5: for the low-credibility samples, iterate S2–S4 for feature extraction and pre-recognition, obtaining high-credibility feature maps for recognition and low-credibility feature maps for further iteration, until the last layer of the network is reached;
S6: after the last splitting operation, take the original images corresponding to the remaining low-credibility feature maps, pass them through a convolution module to obtain feature maps, and fuse these with the low-credibility feature maps from the last split;
S7: input the finally fused low-credibility feature maps into a convolution module and a fully connected layer for feature extraction and recognition, output the results, compute the loss between the recognition results and the labels, back-propagate the loss, and train this convolution module and the last layer of the backbone network;
S8: keep adjusting the model parameters on the training data set, input the test data set into the model, compute the recognition accuracy of the model from the recognition results, save the model with the highest recognition accuracy, and finish training.
3. The method of claim 2, wherein the backbone network is a first convolution module formed by stacking a plurality of convolution layers.
4. The method as claimed in claim 3, wherein the pre-recognition network comprises a second convolution module and a fully connected layer; the second convolution module is used for recognizing the high-credibility samples, is formed by stacking a plurality of convolution layers, and has a smaller network scale and fewer parameters than the first convolution module.
5. A deep neural network image recognition system based on sample credibility, characterized by comprising a credibility recognition module, a low-credibility feature recognition module and a high-credibility feature recognition module, wherein the credibility recognition module recognizes the credibility of an image and, according to the threshold-based split inside the module, routes the image to the low-credibility or the high-credibility feature recognition module.
6. The deep neural network image recognition system based on sample credibility as claimed in claim 5, wherein the credibility recognition module comprises a second convolution module, a fully connected layer, a Softmax layer and a threshold splitting layer; the maximum value output by the Softmax layer is taken as the credibility of the image, the threshold splitting layer compares the credibility with a set threshold, and the sample is input into the high-credibility feature recognition module if the credibility is greater than the threshold and into the low-credibility feature recognition module otherwise.
7. The system as claimed in claim 5, wherein the low-credibility recognition module feeds the low-credibility sample back into the credibility recognition network for feature extraction and credibility recognition; when the result after the last split of the credibility recognition module is still a low-credibility image, the feature map obtained by passing the corresponding original image through one convolution layer is fused with the feature map from the last split, the fusion result is input into a backbone network, and the output of the backbone network is taken as the recognition result of the image.
8. The system as claimed in claim 5, wherein the backbone network is a first convolution module formed by stacking a plurality of convolution layers.
9. The system as claimed in claim 8, wherein the high-credibility feature recognition module comprises a second convolution module and a fully connected layer; the second convolution module is formed by stacking a plurality of convolution layers, and the network scale and parameter count of the second convolution module are smaller than those of the first convolution module.
CN202110726015.6A (priority and filing date 2021-06-29) — Deep neural network image identification method and system based on sample reliability — Active; granted as CN113505821B

Priority Applications (1)

Application Number: CN202110726015.6A — Priority Date: 2021-06-29 — Filing Date: 2021-06-29 — Title: Deep neural network image identification method and system based on sample reliability


Publications (2)

Publication Number — Publication Date
CN113505821A — 2021-10-15
CN113505821B — 2022-09-27

Family

ID=78011080

Family Applications (1)

Application Number: CN202110726015.6A — Priority/Filing Date: 2021-06-29 — Title: Deep neural network image identification method and system based on sample reliability — Status: Active (granted as CN113505821B)

Country Status (1)

Country: CN — CN113505821B


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170177997A1 (en) * 2015-12-22 2017-06-22 Applied Materials Israel Ltd. Method of deep learning-based examination of a semiconductor specimen and system thereof
CN106709511A (en) * 2016-12-08 2017-05-24 华中师范大学 Urban rail transit panoramic monitoring video fault detection method based on depth learning
US20200125877A1 (en) * 2018-10-22 2020-04-23 Future Health Works Ltd. Computer based object detection within a video or image
CN110197205A (en) * 2019-05-09 2019-09-03 三峡大学 A kind of image-recognizing method of multiple features source residual error network
CN110689025A (en) * 2019-09-16 2020-01-14 腾讯医疗健康(深圳)有限公司 Image recognition method, device and system, and endoscope image recognition method and device
CN111414942A (en) * 2020-03-06 2020-07-14 重庆邮电大学 Remote sensing image classification method based on active learning and convolutional neural network
CN111965183A (en) * 2020-08-17 2020-11-20 沈阳飞机工业(集团)有限公司 Titanium alloy microstructure detection method based on deep learning
CN112115973A (en) * 2020-08-18 2020-12-22 吉林建筑大学 Convolutional neural network based image identification method
CN112818871A (en) * 2021-02-04 2021-05-18 南京师范大学 Target detection method of full-fusion neural network based on half-packet convolution

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
MERIN ANNIE VINCENT ET AL.: "Traffic Sign Classification Using Deep Neural Network", 2020 IEEE Recent Advances in Intelligent Computational Systems *
MA XIAO: "Image Semantic Segmentation Based on Deep Convolutional Neural Networks", China Masters' Theses Full-text Database (Information Science and Technology) *
WEI YANG: "Image Tampering Detection Based on Deep Convolutional Neural Networks", China Masters' Theses Full-text Database (Information Science and Technology) *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant