CN109816002B - Single sparse self-encoder weak and small target detection method based on feature self-migration - Google Patents
Abstract
The invention discloses a feature self-migration based method for detecting weak and small targets with a single sparse autoencoder (SAE), comprising the following steps: construct a training sample set, a test sample set, and an original data set of weak and small targets for training; input the training sample set into the SAE model for training to obtain the sparse features of the samples, namely the model parameters; train softmax with the sparse features, i.e. with the input features f(W^{m+1}x + b^{m+1}); after each round of softmax training, keep the positive samples and randomly select a number of negative samples close to the number of positive samples; use the resulting model parameters as the initial model parameters of the next training round, update the parameters of the SAE model, and repeat these steps; training ends when the value of the SAE training loss function equals its value in the previous round. Finally, input the test sample set into the softmax obtained in the last round for testing, producing the test result. The invention can accurately detect weak and small targets in an image.
Description
Technical Field
The invention relates to the technical field of computer vision processing, in particular to a method for detecting weak and small targets of a single sparse self-encoder based on feature self-migration.
Background
The detection of weak targets is a difficult problem in the field of image processing. It is especially hard in natural images and, above all, in medical images: weak targets have unclear edges and low contrast, and in most cases noise interference further increases the difficulty of detection. At present, both traditional methods and deep learning have certain limitations in weak-target detection. Feature extraction is a very important part of this task, and effective feature extraction can greatly improve the accuracy of subsequent detection.
Disclosure of Invention
The invention provides a method for detecting weak and small targets by a single sparse self-encoder based on feature self-migration, which aims to solve the problem that the weak and small targets cannot be detected with high precision in the prior art, and can accurately detect the weak and small targets in an image.
In order to realize the purpose of the invention, the technical scheme is as follows: a method for detecting weak and small targets of a single sparse encoder based on feature self-migration comprises the following steps:
s1: selecting a fraction a of the image data from an image database as the training sample set, used to construct the positive and negative samples of the training sample set; selecting the remaining fraction 1-a of the image data from the database as the test sample set, used to construct the positive and negative samples of the test sample set; a positive sample contains a microaneurysm, and a block of 21 × 21 pixels is constructed centered on the microaneurysm; a negative sample is a block of 21 × 21 pixels containing no microaneurysm pixels; at the same time, the green channel and the blue channel of the color image are selected from the positive and negative samples, and the contrast-enhancement result obtained through Gamma correction is taken as part of the original data set;
wherein: a represents the percentage of the training sample set in the image database, 0< a <1, a is manually set;
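As a sketch of how the sample construction in S1 could look, the snippet below cuts a 21 × 21 patch around a lesion center and applies Gamma correction to the green channel. All names, the gamma value, and the random stand-in image are hypothetical illustrations, not taken from the patent.

```python
import numpy as np

def gamma_correct(channel, gamma=0.8):
    """Contrast enhancement by Gamma correction on a [0, 255] channel."""
    norm = channel.astype(np.float64) / 255.0
    return np.clip(norm ** gamma * 255.0, 0, 255).astype(np.uint8)

def extract_patch(image, center, size=21):
    """Cut a size x size block centered on center = (row, col)."""
    half = size // 2
    r, c = center
    return image[r - half:r + half + 1, c - half:c + half + 1]

# Hypothetical data standing in for an annotated fundus image.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, (256, 256, 3), dtype=np.uint8)   # R, G, B
green, blue = img[:, :, 1], img[:, :, 2]
enhanced = gamma_correct(green)
positive_patch = extract_patch(enhanced, (128, 128))        # centered on a lesion
```

A real pipeline would read annotated database images instead of the random array; the patch-cutting and channel selection logic is the same.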
s2: training with the training sample set: inputting the training sample set into the SAE model to obtain the sparse features of the training sample set, namely the SAE model parameters W and b;
s3: training softmax with the sparse features, i.e. with the input features f(W^{m+1}x + b^{m+1}); after each round of softmax training, keeping the positive samples and randomly selecting a number of negative samples close to the number of positive samples;
wherein: f denotes the sigmoid activation function f(z) = 1/(1 + e^{-z}); m represents the mth training round; W^{m+1} and b^{m+1} respectively represent the weight and the bias of the SAE in the (m+1)th training round;
s4: using the SAE model parameters (W, b) as the initial model parameters of the next training round, thereby updating the parameters of the SAE model and completing the feature self-migration of the SAE model; returning to S2; executing S5 when the value of the loss function of the SAE model training equals its value in the previous round;
s5: and after the trained SAE model is obtained, inputting the test sample set into the final softmax to obtain a test result.
Preferably, the value of a is 0.75, that is, 75% of image data is selected from the image database as a training sample set, and 25% of image data is selected from the database as a testing sample set.
Preferably, in step S2, the expression of Softmax is as follows:

S_i = e^{V_i} / \sum_{j=1}^{C} e^{V_j}

wherein: V_i is the output of the preceding output unit of the classifier; i represents the category index, and the total number of categories is C; S_i represents the ratio of the exponential of the feature vector corresponding to the current training sample to the sum of the exponentials over all samples.
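A minimal NumPy implementation of the Softmax expression above; the max-shift is a standard numerical-stability trick added here, not part of the patent's formula:

```python
import numpy as np

def softmax(v):
    """S_i = exp(V_i) / sum_j exp(V_j), shifted by max(v) for stability."""
    v = np.asarray(v, dtype=np.float64)
    e = np.exp(v - v.max())      # subtracting the max does not change S_i
    return e / e.sum()

s = softmax([2.0, 1.0, 0.1])     # illustrative logits
```

The outputs sum to 1 and preserve the ordering of the logits, which is why they can be read as relative class probabilities.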
Preferably, the formula of the parameter update of the SAE model in step S4 is:

W^{m+1} = W^m - \alpha ( (1/n) \Delta W^m + \lambda W^m )
b^{m+1} = b^m - \alpha ( (1/n) \Delta b^m )

wherein: W^m represents the weight matrix of the SAE in the mth training round; \alpha is the learning rate; s_2 represents the number of hidden-layer units; \Delta W^m is the matrix of partial derivatives of the loss function with respect to the weights in the mth training round; \lambda is the regularization penalty factor; b^m represents the bias matrix of the SAE in the mth training round; \Delta W^m_{ij} is one element of the matrix \Delta W^m; h_{W,b}(x^{(i)}) is the output corresponding to the input x^{(i)}; \hat{\rho}_j is the average activation of the hidden layer of the sparse autoencoder; a_j represents the activation of the jth neuron of the hidden layer; \Delta b^m is the matrix of partial derivatives of the loss function with respect to the bias b in the mth training round.
Further, the formula of the loss function of S4 is:

L(W, b) = J(W, b) + \beta \sum_{j=1}^{s_2} KL(\rho \| \hat{\rho}_j),
where KL(\rho \| \hat{\rho}_j) = \rho \log(\rho / \hat{\rho}_j) + (1 - \rho) \log((1 - \rho) / (1 - \hat{\rho}_j))

wherein: \beta is the sparsity penalty factor; KL(\rho \| \hat{\rho}_j) is the KL divergence, used to measure the closeness of two probability distributions; \hat{\rho}_j is the mean activation of the jth neuron of the hidden layer; \rho is the sparsity parameter; J(W, b) is expressed by the following formula:

J(W, b) = (1/n) \sum_{i=1}^{n} (1/2) \| h_{W,b}(x^{(i)}) - x^{(i)} \|^2 + (\lambda / 2) \sum_{l} \sum_{i} \sum_{j} ( W^{(l)}_{ji} )^2

wherein: n is the number of samples; x^{(i)} represents the ith input sample; W^{(l)}_{ji} is the weight from the ith neuron in layer l to the jth neuron in the next layer.
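The loss just described, a reconstruction term J(W, b) with weight decay plus a β-weighted sum of KL terms, can be sketched as follows. The array shapes and hyperparameter values are illustrative assumptions:

```python
import numpy as np

def kl_divergence(rho, rho_hat):
    """KL(rho || rho_hat) between two Bernoulli distributions, element-wise."""
    return (rho * np.log(rho / rho_hat)
            + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))

def sae_loss(x, x_hat, rho_hat, weights, rho=0.05, beta=3.0, lam=1e-4):
    """J(W, b) = reconstruction + weight decay; plus beta * sum_j KL(rho || rho_hat_j)."""
    n = x.shape[0]
    recon = 0.5 * np.sum((x_hat - x) ** 2) / n
    decay = 0.5 * lam * sum(np.sum(W ** 2) for W in weights)
    sparsity = beta * np.sum(kl_divergence(rho, rho_hat))
    return recon + decay + sparsity

rng = np.random.default_rng(1)
x = rng.random((10, 441))          # ten flattened 21 x 21 patches (assumed shapes)
x_hat = rng.random((10, 441))      # hypothetical reconstructions
rho_hat = np.full(64, 0.05)        # hidden activations exactly at rho -> zero KL penalty
W_list = [rng.random((441, 64)), rng.random((64, 441))]
loss = sae_loss(x, x_hat, rho_hat, W_list)
```

Note that when every \hat{\rho}_j equals \rho, the KL term vanishes and only reconstruction error and weight decay remain.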
The invention has the following beneficial effects: a training sample set and a test sample set are constructed; the SAE model trained on the training sample set is used to repeatedly update both the training sample set and the SAE model, and training ends when the value of the SAE loss function equals its value in the previous round; the trained model is then tested on the test sample set to obtain the test result. The method can accurately detect weak and small targets in an image.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Fig. 2 is a schematic diagram of the training process of the present invention.
Fig. 3 is a schematic diagram of the structure of the sparse encoder.
FIG. 4 is a graph comparing the test results of the present example.
Detailed Description
The invention is described in detail below with reference to the drawings and the detailed description.
Example 1
As shown in fig. 1, a method for detecting weak and small targets of a single sparse encoder based on feature self-migration includes the following specific steps:
step S1: respectively selecting 75% of image data from databases of Retinopathy on line Challenge, DIARETDB1 and E-ophtha as training sample sets for constructing positive samples and negative samples in the training sample sets; selecting 25% of image data from databases of Retinopathy Online Change, DIARETDB1 and E-ophtha as a test sample set for constructing positive samples and negative samples in the test sample set; the positive sample is a block containing microaneurysms and 21 × 21 pixels are constructed by taking the microaneurysms as the center; the negative sample is a block containing no microaneurysms, pixels 21 x 21; meanwhile, a green channel and a blue channel in the color image are respectively selected from the positive sample and the negative sample, and a contrast enhancement result obtained through Gamma correction is used as an original data set; the color image shown is a true color image, and the color value of each pixel is determined by R, G, B.
In this embodiment, when the training sample set is constructed, the original data set composed of the green channel, the blue channel and the contrast-enhancement result obtained through Gamma correction is constructed at the same time. This embodiment requires that the selected databases conform to the characteristics of small targets; the samples in the Retinopathy Online Challenge, DIARETDB1 and E-ophtha databases all conform to these characteristics.
Step S2: training with the training sample set; as shown in fig. 2, the training sample set is input into the SAE model, and the sparse features of the samples, i.e. the model parameters W and b, are obtained through training.
Step S3: training softmax with the sparse features, i.e. with the input features f(W^{m+1}x + b^{m+1}); after each training round, the positive samples are kept, and a number of negative samples close to the number of positive samples is selected at random;
wherein: f denotes the sigmoid activation function f(z) = 1/(1 + e^{-z}); m represents the mth training round; W^{m+1} and b^{m+1} respectively represent the weight and the bias of the SAE in the (m+1)th training round.
the SAE model described in this embodiment is a deep neural network model composed of multiple sparse layers of self-encoders, in which the output of the self-encoder in the previous layer is used as the input of the self-encoder in the next layer, and the classifier (local classifier or softmax classifier) is arranged in the last layer
As shown in fig. 3, the sparse autoencoder is an unsupervised machine-learning algorithm. Sparsity means that a neuron is considered activated when its output is 1 and inhibited when its output is 0. Keeping the neurons in an inhibited state most of the time is called the sparsity constraint. In practical training, the machine is expected to learn the important features of the samples by itself; by imposing constraints, including the sparsity constraint, on the hidden layer, the machine learns the features that best express the samples under these restrictive conditions, which also effectively reduces the dimensionality of the samples. However, during actual operation it is impossible to judge directly which neurons need to be activated and which inhibited. Therefore the concept of average activation \hat{\rho}_j is introduced, expressed by the following formula:

\hat{\rho}_j = (1/n) \sum_{i=1}^{n} a_j(x^{(i)})

wherein: s_2 represents the number of hidden-layer neurons; a_j(x^{(i)}) represents the activation of the jth hidden unit when the network is given the specific input x^{(i)}. In the calculation, a parameter \rho, called the sparsity parameter, is introduced at the same time, and training drives each \hat{\rho}_j as close to \rho as possible.
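The average activation can be computed per hidden unit by averaging over the n training inputs, as in this sketch; a single sigmoid encoder layer and the array shapes are assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def average_activation(X, W, b):
    """rho_hat_j = (1/n) * sum_i a_j(x_i): mean activation of each hidden unit."""
    A = sigmoid(X @ W + b)   # shape (n, s2): activations of the s2 hidden units
    return A.mean(axis=0)    # shape (s2,)

# With zero weights every activation is sigmoid(0) = 0.5.
X = np.ones((8, 441))
rho_hat = average_activation(X, np.zeros((441, 64)), np.zeros(64))
```

Comparing each entry of `rho_hat` against the target \rho is what the KL term in the loss does.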
The Softmax described in this embodiment is widely used in machine learning; it is simple to compute and effective, especially for multi-class (C > 2) problems, where the last output unit of the classifier needs a Softmax function for numerical processing. The Softmax function is defined as follows:

S_i = e^{V_i} / \sum_{j=1}^{C} e^{V_j}

wherein: V_i is the output of the preceding output unit of the classifier; i represents the category index, and the total number of categories is C; S_i represents the ratio of the exponential of the current element to the sum of the exponentials of all elements.
Softmax translates the output values of multiple classes into relative probabilities, making the values easier to understand and compare.
Step S4: model parametersAs the initial model parameter of the next training, the parameter updating of the SAE model is realized, and the characteristic self-migration of the SAE model is completed; after the characteristics are transferred, updating each parameter of the SAE model; repeating the steps S2 and S3; and finishing training until the value of the loss function of the SAE model training is the same as the value of the previous time.
In this embodiment, the SAE model parameters are updated by back-propagation, which propagates the error backwards through the network; during back-propagation the weights and biases are gradually updated by taking partial derivatives with respect to them. The parameter update is obtained by the following formulas:

W^{m+1} = W^m - \alpha ( (1/n) \Delta W^m + \lambda W^m )
b^{m+1} = b^m - \alpha ( (1/n) \Delta b^m )

wherein: W^m represents the weight matrix of the SAE in the mth training round; \alpha is the learning rate; s_2 represents the number of hidden-layer units; \Delta W^m is the matrix of partial derivatives of the loss function with respect to the weights in the mth training round; \lambda is the regularization penalty factor; b^m represents the bias matrix of the SAE in the mth training round; \Delta W^m_{ij} is one element of the matrix \Delta W^m; h_{W,b}(x^{(i)}) is the output corresponding to the input x^{(i)}; \hat{\rho}_j is the average activation of the hidden layer of the sparse autoencoder; a_j represents the activation of the jth neuron of the hidden layer; \Delta b^m is the matrix of partial derivatives of the loss function with respect to the bias b in the mth training round.
The value of the loss function in this embodiment is calculated by the following loss function formula:

L(W, b) = J(W, b) + \beta \sum_{j=1}^{s_2} KL(\rho \| \hat{\rho}_j),
where KL(\rho \| \hat{\rho}_j) = \rho \log(\rho / \hat{\rho}_j) + (1 - \rho) \log((1 - \rho) / (1 - \hat{\rho}_j))

wherein: \beta is the sparsity penalty factor; KL(\rho \| \hat{\rho}_j) is the KL divergence, used to measure the closeness of two probability distributions; \hat{\rho}_j is the mean activation of the jth neuron of the hidden layer; \rho is the sparsity parameter; J(W, b) is expressed by the following formula:

J(W, b) = (1/n) \sum_{i=1}^{n} (1/2) \| h_{W,b}(x^{(i)}) - x^{(i)} \|^2 + (\lambda / 2) \sum_{l} \sum_{i} \sum_{j} ( W^{(l)}_{ji} )^2
wherein: n is the number of samples; x^{(i)} represents the ith input sample; W^{(l)}_{ji} is the weight from the ith neuron in layer l to the jth neuron in the next layer.
In this embodiment, the training sample set in each training round includes the original data set; its primary function during the training process is to improve the accuracy of the final model.
Step S5: after the SAE model is trained, testing is performed with the softmax obtained in the last training round; the test sample set is input into this softmax to obtain the test result.
The test results of this embodiment are mainly evaluated by two measures, sensitivity and accuracy. The test results obtained by the feature self-migration based single sparse autoencoder weak and small target detection method of this embodiment are shown in fig. 4, which gives the accuracy and sensitivity obtained on each database. The test results show that the weak and small target detection method based on feature self-migration of the sparse autoencoder and softmax extracts the sparse features of the samples well through the sparse autoencoder, further improves the classification ability of softmax through the progressive training scheme, and significantly improves the accuracy and sensitivity of target detection.
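The evaluation measures mentioned here can be computed from a confusion matrix as follows; specificity is included alongside sensitivity since the embodiment refers to both, and the counts in the usage line are made up for illustration:

```python
def detection_metrics(tp, fp, tn, fn):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP),
    accuracy = (TP+TN)/(TP+FP+TN+FN)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical counts from comparing predicted patches against annotations.
sens, spec, acc = detection_metrics(tp=90, fp=10, tn=80, fn=20)
```

Here `tp` counts patches correctly flagged as containing a microaneurysm and `tn` patches correctly flagged as background.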
It should be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating the present invention, and are not intended to limit the embodiments of the present invention. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the claims of the present invention.
Claims (2)
1. A method for detecting weak and small targets with a single sparse self-encoder based on feature self-migration, characterized by comprising the following steps:
s1: selecting a fraction a of the image data from an image database as the training sample set, used to construct the positive and negative samples of the training sample set; selecting the remaining fraction 1-a of the image data from the database as the test sample set, used to construct the positive and negative samples of the test sample set; a positive sample contains a microaneurysm, and a block of 21 × 21 pixels is constructed centered on the microaneurysm; a negative sample is a block of 21 × 21 pixels containing no microaneurysm pixels; at the same time, the green channel and the blue channel of the color image are selected from the positive and negative samples, and the contrast-enhancement result obtained through Gamma correction is taken as part of the original data set;
wherein: a represents the percentage of the training sample set in the image database, 0< a <1, and a is manually set;
s2: training with the training sample set: inputting the training sample set into the SAE model to obtain the sparse features of the training sample set, namely the SAE model parameters W and b;
s3: training softmax with the sparse features, i.e. with the input features f(W^{m+1}x + b^{m+1}); after each training round, keeping the positive samples and randomly selecting a number of negative samples close to the number of positive samples;
wherein: f denotes the sigmoid activation function f(z) = 1/(1 + e^{-z}); m represents the mth training round; W^{m+1} and b^{m+1} respectively represent the weight and the bias of the SAE in the (m+1)th training round;
s4: using the SAE model parameters (W, b) as the initial model parameters of the next training round, thereby updating the parameters of the SAE model and completing the feature self-migration of the SAE model; returning to S2; executing S5 when the value of the loss function of the SAE model training equals its value in the previous round;
s5: after the SAE model is trained, inputting the test sample set into the softmax obtained in the last training round to obtain the test result;
in step S2, the expression of Softmax is as follows:

S_i = e^{V_i} / \sum_{j=1}^{C} e^{V_j}

wherein: V_i is the output of the preceding output unit of the classifier; i represents the category index, and the total number of categories is C; S_i represents the ratio of the exponential of the feature vector corresponding to the current training sample to the sum of the exponentials over all samples;
the formula for updating parameters of the SAE model in step S4 is:
wherein: w m Representing the weight matrix of SAE in the mth training; alpha is the learning rate; s 2 Representing the number of hidden layer units; Δ W m A partial derivative matrix of the loss function with respect to the weight at the mth training; λ is a regularization penalty factor; b is a mixture of m Represents the bias matrix of SAE in the mth training;is a matrix Δ W m One element in the matrix; h is a total of W,b (x (i) ) When the input is x (i) A corresponding output;average activation degree of a shadow storage layer of the sparse encoder;represents activation of the jth neuron of the hidden layer; Δ b m A partial derivative matrix with respect to the weight b at the mth training; x is the number of (i) Represents the input of the ith neuron;
the formula of the loss function of S4 is:
wherein beta is a sparse term penalty factor;called KL divergence, for measuring the closeness of two probability distributionsDegree;mean activation of the jth neuron of the hidden layer; rho is a sparsity parameter; the J (W, b) is expressed by the following formula:
2. The method for detecting weak and small targets of a single sparse encoder based on feature self-migration according to claim 1, wherein: and the value of a is 0.75, namely 75% of image data is selected from the image database to serve as a training sample set, and 25% of image data is selected from the database to serve as a testing sample set.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910028640.6A CN109816002B (en) | 2019-01-11 | 2019-01-11 | Single sparse self-encoder weak and small target detection method based on feature self-migration |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109816002A CN109816002A (en) | 2019-05-28 |
CN109816002B true CN109816002B (en) | 2022-09-06 |
Family
ID=66603394
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910028640.6A Active CN109816002B (en) | 2019-01-11 | 2019-01-11 | Single sparse self-encoder weak and small target detection method based on feature self-migration |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109816002B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110472667B (en) * | 2019-07-19 | 2024-01-09 | 广东工业大学 | Small target classification method based on deconvolution neural network |
CN110930409B (en) * | 2019-10-18 | 2022-10-14 | 电子科技大学 | Salt body semantic segmentation method and semantic segmentation system based on deep learning |
CN110972174B (en) * | 2019-12-02 | 2022-12-30 | 东南大学 | Wireless network interruption detection method based on sparse self-encoder |
CN111462817B (en) * | 2020-03-25 | 2023-06-20 | 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) | Classification model construction method and device, classification model and classification method |
CN112465042B (en) * | 2020-12-02 | 2023-10-24 | 中国联合网络通信集团有限公司 | Method and device for generating classified network model |
Citations (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104156736A (en) * | 2014-09-05 | 2014-11-19 | 西安电子科技大学 | Polarized SAR image classification method on basis of SAE and IDL |
CN104166859A (en) * | 2014-08-13 | 2014-11-26 | 西安电子科技大学 | Polarization SAR image classification based on SSAE and FSALS-SVM |
CN105224943A (en) * | 2015-09-08 | 2016-01-06 | 西安交通大学 | Based on the image swift nature method for expressing of multi thread normalization non-negative sparse coding device |
AU2014271202A1 (en) * | 2013-05-19 | 2016-01-07 | Commonwealth Scientific And Industrial Research Organisation | A system and method for remote medical diagnosis |
CN105320965A (en) * | 2015-10-23 | 2016-02-10 | 西北工业大学 | Hyperspectral image classification method based on spectral-spatial cooperation of deep convolutional neural network |
CN105654117A (en) * | 2015-12-25 | 2016-06-08 | 西北工业大学 | Hyperspectral image spectral-spatial cooperative classification method based on SAE depth network |
CN105787517A (en) * | 2016-03-11 | 2016-07-20 | 西安电子科技大学 | Polarized SAR image classification method base on wavelet sparse auto encoder |
CN106096652A (en) * | 2016-06-12 | 2016-11-09 | 西安电子科技大学 | Based on sparse coding and the Classification of Polarimetric SAR Image method of small echo own coding device |
CN106651899A (en) * | 2016-12-09 | 2017-05-10 | 东北大学 | Fundus image micro-aneurysm detection system based on Adaboost |
CN106815601A (en) * | 2017-01-10 | 2017-06-09 | 西安电子科技大学 | Hyperspectral image classification method based on recurrent neural network |
CN107341511A (en) * | 2017-07-05 | 2017-11-10 | 西安电子科技大学 | Classification of Polarimetric SAR Image method based on super-pixel Yu sparse self-encoding encoder |
CN107590515A (en) * | 2017-09-14 | 2018-01-16 | 西安电子科技大学 | The hyperspectral image classification method of self-encoding encoder based on entropy rate super-pixel segmentation |
CN107798349A (en) * | 2017-11-03 | 2018-03-13 | 合肥工业大学 | A kind of transfer learning method based on the sparse self-editing ink recorder of depth |
CN108537233A (en) * | 2018-03-15 | 2018-09-14 | 南京师范大学 | A kind of pathology brain image sorting technique based on the sparse self-encoding encoder of depth stack |
CN108921233A (en) * | 2018-07-31 | 2018-11-30 | 武汉大学 | A kind of Raman spectrum data classification method based on autoencoder network |
CN109033952A (en) * | 2018-06-12 | 2018-12-18 | 杭州电子科技大学 | M-sequence recognition methods based on sparse self-encoding encoder |
CN109102019A (en) * | 2018-08-09 | 2018-12-28 | 成都信息工程大学 | Image classification method based on HP-Net convolutional neural networks |
CN109145832A (en) * | 2018-08-27 | 2019-01-04 | 大连理工大学 | Polarimetric SAR image semisupervised classification method based on DSFNN Yu non local decision |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8098907B2 (en) * | 2005-07-01 | 2012-01-17 | Siemens Corporation | Method and system for local adaptive detection of microaneurysms in digital fundus images |
US20150379708A1 (en) * | 2010-12-07 | 2015-12-31 | University Of Iowa Research Foundation | Methods and systems for vessel bifurcation detection |
US10115194B2 (en) * | 2015-04-06 | 2018-10-30 | IDx, LLC | Systems and methods for feature detection in retinal images |
Non-Patent Citations (1)
Title |
---|
A Deep Learning Method for Microaneurysm Detection in Fundus Images; Shan Juan et al.; 2016 IEEE First International Conference on Connected Health: Applications, Systems and Engineering Technologies; 2016-12-31; pp. 357-358 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||