CN108665005A - Method for improving CNN-based image recognition performance using DCGAN - Google Patents

Method for improving CNN-based image recognition performance using DCGAN

Info

Publication number
CN108665005A
CN108665005A (application CN201810467893.9A)
Authority
CN
China
Prior art keywords
dcgan
image recognition
learning rate
cnn
loss function
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810467893.9A
Other languages
Chinese (zh)
Other versions
CN108665005B (en)
Inventor
方巍
张飞鸿
丁叶文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology filed Critical Nanjing University of Information Science and Technology
Priority to CN201810467893.9A priority Critical patent/CN108665005B/en
Publication of CN108665005A publication Critical patent/CN108665005A/en
Application granted granted Critical
Publication of CN108665005B publication Critical patent/CN108665005B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G06V 20/13 Satellite images

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for improving CNN-based image recognition performance using DCGAN. The method combines the outstanding data-generation capability of DCGAN with a CNN-based image recognition framework in a two-fold (secondary) combination. DCGAN is an improved adversarial generative network built on GAN: by applying CNN in the prototype structure, the GAN gains the characteristics of deep convolution and a better feature representation for data generation. The invention effectively solves problems such as the difficulty of collecting training sample data and the excessive similarity between samples in image recognition, breaks through the limitations that sample quantity and quality impose on classification-model optimization, further strengthens the classification model, and improves the accuracy of image recognition.

Description

Method for improving CNN-based image recognition performance using DCGAN
Technical field
The invention belongs to the field of image recognition processing, and in particular relates to image recognition improvement based on deep machine learning; it especially relates to a method for improving CNN-based image recognition performance using DCGAN.
Background technology
With the development of deep learning, ever higher accuracy is being pursued in image recognition. Currently, many studies take convolutional neural networks (CNN) as the entry point for improving recognition precision. A CNN can use the raw pixels of an image directly as input, without extracting features first; moreover, a trained CNN model is invariant to distortions such as scaling, translation, and rotation, and has strong generalization ability. The local receptive fields and weight sharing of convolution greatly reduce the number of network parameters, preventing over-fitting while reducing the complexity of the network model, and leave large room for optimizing classification precision. The radar profile images studied in the present invention are mainly composed of low-level semantics such as echo spectra and color patches. Conventional methods such as texture detection and statistical approaches offer little advantage for this special kind of image, mainly for the following three reasons:
1. The features in radar profiles, including pixel distributions and gradients, cannot be learned effectively;
2. Radar profiles contain too much information; conventional processing is too slow and cannot handle the big-data problem;
3. Efficient learning strategies are lacking, making it difficult to improve recognition accuracy.
The upper layers of a CNN are more sensitive to semantics, while the middle layers are particularly sensitive to low-level patterns such as color and gradient, so using CNN to solve radar-profile recognition is a scientific and feasible practice. Most CNN image classification is based on supervised learning, a learning mode that requires a large amount of data as training samples to obtain accurate classification during training. In radar-profile recognition, weather conditions limit data collection, so gathering samples of hazardous weather such as thunderstorms and gales is extremely difficult. Moreover, excessive similarity between samples also harms the training effect, making features hard to learn effectively. To address the small sample size and excessive sample similarity, we use a deep convolutional generative adversarial network (DCGAN). The essence of DCGAN is an extension built on GAN: it retains GAN's outstanding data-generation ability while incorporating the advantages of CNN feature extraction, improving its image analysis and processing capability. The present invention uses Batch Normalization to normalize parts of the network, addressing problems such as gradient vanishing and gradient dispersion during training. Tests show that DCGAN trains with satisfactory results on real-world large-scale datasets such as celebA, LSUN, and Google ImageNet. The present invention performs sample generation based on the DCGAN network structure and combines it with a CNN-based image recognition system, effectively improving recognition accuracy, so that the secondary combination of DCGAN and CNN can better serve scientific research, production, and decision making.
Invention content
Purpose of the invention: In view of the above shortcomings of the prior art, the present invention provides a method for improving CNN-based image recognition performance using DCGAN. The method combines the outstanding data-generation capability of DCGAN with a CNN-based image recognition framework in a two-fold combination, effectively solving problems such as the difficulty of collecting training sample data and the excessive similarity between samples in image recognition, breaking through the limitations that sample quantity and quality impose on classification-model optimization, strengthening the classification model, and improving the accuracy of image recognition.
Technical solution: A method for improving CNN-based image recognition performance using DCGAN, with the following steps:
(1) defining the structure of the generative model and the discriminative model in DCGAN;
(2) establishing a learning-rate acceleration strategy;
(3) detecting the generated samples;
(4) building the CNN-based image recognition framework;
(5) optimizing performance.
Further, the generative model described in step (1) includes a data conversion layer and deconvolution layers, and the activation function between the data conversion layer and the deconvolution layers is the LeakyReLU function. The data conversion layer mainly converts the noise vector into an image-type tensor by a reshape operation; the deconvolution layers then convert the data dimensions into picture format, with LeakyReLU used as the activation function throughout.
Further, the discriminative model described in step (1) includes convolutional layers and a fully connected layer; the activation function between the convolutional layers and the fully connected layer is the ReLU function, and binary classification is done at the end of the fully connected layer using a Sigmoid or SoftMax function. Preferably, the discriminative model includes four convolutional layers and a fully connected layer.
Further, step (1) includes training a generator G that can produce data differing very little from real samples. Its role is to package a noise vector into a realistic sample so that the discriminator mistakes it for a real one. The discriminator D is a binary classifier used to judge whether a sample is real or fake, and it is the source from which the generator learns.
Further, step (1) includes establishing the network loss functions, which comprise the overall network loss function, the generative-model loss function, and the discriminative-model loss function, defined as follows:
The overall network loss function is:
min_G max_D V(D,G) = E_{x~p_data(x)}[log D(x)] + E_{z~p_z(z)}[log(1 - D(G(z)))];
The generative-model loss function is:
LOSS(G) = -log(D2(G(z)));
The discriminative-model loss function is:
LOSS(D) = -(log(D1(x)) + log(1 - D2(G(z))));
where D(x) is the discriminant function on data x and G(z) is the generating function on noise z; E_{x~p_data(x)} denotes that x is drawn from the data probability distribution, and similarly in E_{z~p_z(z)} z is drawn from the noise distribution; D1(x) and D2(x) are operationally equivalent.
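For concreteness, the two model losses above can be transcribed directly as code. The following is a minimal sketch assuming TensorFlow 2.x, a discriminator whose output is already a probability (final Sigmoid, as in step (1)), and an illustrative stability constant EPS that is not part of the patent:

```python
import tensorflow as tf

EPS = 1e-8  # small constant for numerical stability (illustrative, not from the patent)

def discriminator_loss(d_real, d_fake):
    # LOSS(D) = -( log(D1(x)) + log(1 - D2(G(z))) ), averaged over the mini-batch
    return -tf.reduce_mean(tf.math.log(d_real + EPS) + tf.math.log(1.0 - d_fake + EPS))

def generator_loss(d_fake):
    # LOSS(G) = -log(D2(G(z))), averaged over the mini-batch
    return -tf.reduce_mean(tf.math.log(d_fake + EPS))
```

Here d_real stands for D(x) evaluated on real samples and d_fake for D(G(z)) evaluated on generated samples.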
Further, step (2) includes optimizing the network parameters by mini-batch gradient descent. The network parameters include the batch size batch, the number of iterations epoch, the learning rate α, the weights and biases W and b to be adjusted, and the momentum factors m and v added when optimizing gradient descent.
Further, step (2) specifically includes the following steps:
(21) Set the learning rate, with an initial value in the range [0.9, 1.0]; back-propagation updates the weights and biases according to the following formula:
W = W - α * (∂LOSS/∂W),
where W denotes the weight or bias being updated and α is the learning rate;
(22) Gradually reduce the learning rate through the iterations: back-propagation is invoked in each cycle to adjust the weights and biases and thereby seek the minimum of the loss function, and the learning-rate decay amplitude is applied within the iterative operation; the learning-rate decay strategy follows a decay formula in which decay_rate lies in the range 0.1 to 1.0, epoch_i is the i-th training iteration, and α_0 is the initial learning rate with a value in the range 0.1 to 1.0.
Further, the recognition model in step (4) uses a neural network with 4 convolutional layers and 3 fully connected layers, and its output comprises four categories: rain with wind, rain without wind, wind without rain, and neither rain nor wind.
Advantageous effects: Compared with the prior art, the present invention has notable effects. First, the invention combines the outstanding data-generation capability of DCGAN with a CNN-based image recognition framework in a two-fold combination, effectively solving problems such as the difficulty of collecting training sample data and the excessive similarity between samples in image recognition. Second, hidden details in radar profiles are learned automatically, without manual extraction. Third, it copes with batch processing of big data. Fourth, it breaks through the limitations that sample quantity and quality impose on classification-model optimization; through effective algorithms and repeated training, the accuracy of image recognition is steadily improved.
Description of the drawings
Fig. 1 is the system flow chart of the method of the present invention;
Fig. 2 shows the structure of the self-defined DCGAN of the present invention;
Fig. 3 shows real images and generated images from the sample data of the present invention;
Fig. 4 is the framework diagram of the recognition model of the present invention;
Fig. 5 is a schematic diagram of the randomized output of the four classes of the present invention;
Fig. 6 compares the performance of the recognition network of the present invention with the original CNN;
Fig. 7 is a schematic diagram of the 4-class results after pre-training of the recognition framework of the present invention;
Fig. 8 is a schematic diagram of the recognition results after the model of the present invention is strengthened.
Specific embodiments
In order to describe the technical solution disclosed by the invention in detail, a further elaboration is given below with reference to the accompanying drawings and specific embodiments.
The present invention is mainly directed at the recognition of radar cross-section (profile) images. A radar profile differs from a general object image: it describes its class through area distributions and spectrum-like colors, so CNN can better extract features from semantics at this level. In a recognition system, specific classification is still needed after feature extraction. To normalize feature extraction and classification, the present invention does not use a traditional SVM as the classifier, but performs the classification through fully connected layers and Softmax.
The invention discloses a method for improving CNN-based image recognition performance using DCGAN. The system flow chart of the method is shown in Fig. 1, and the specific steps are as follows:
Step 1: Build the self-defined DCGAN
According to the scale of the training data, the structures of the generative model and the discriminative model in the self-defined DCGAN are specified, including parameter settings and depth settings. In the present invention, the fully connected layer of the discriminative model is removed, all of its activation functions are set to LeakyReLU, and binary classification, i.e. "real" versus "fake", is done by a Sigmoid or SoftMax function. The generative model is essentially a deconvolution process: the nonlinear activation functions between all convolutional layers use ReLU, while the output layer uses tanh. The goal in designing the DCGAN is to convert a noise vector z into sample data x so as to train a generator G with which the recognition model can later be strengthened. The training objective of the generator G is defined by the discriminator D, whose role is to distinguish real sample data p_data(x) from generated data p_z(z), while the generator G does its utmost to make the discriminator believe its output is real. Through repeated training, G and D eventually find the equilibrium of a non-convex game and can then generate data differing very little from real samples. We make no prior assumptions about the data distribution and require no model in advance, but optimize directly by gradient descent. The overall network loss function is defined as follows:
The convergence direction of the network is min_G max_D V(D,G), with
V(D,G) = E_{x~p_data(x)}[log D(x)] + E_{z~p_z(z)}[log(1 - D(G(z)))] (Formula 1).
We decompose the loss function in Formula 1 according to the two models, where Formula 2 is the discriminative-model loss function and Formula 3 is the generative-model loss function:
LOSS(D) = -(log(D1(x)) + log(1 - D2(G(z)))) (Formula 2)
LOSS(G) = -log(D2(G(z))) (Formula 3)
Using the machine-learning facilities of the TensorFlow framework, we make these loss functions converge to their minimum and obtain the optimal weight functions through back-propagation. By iterating the optimization and continually updating the weights and biases, an excellent generative model can be trained to generate the data we need. The same holds for the discriminative model: following the batch gradient-descent principle of BP neural networks, the loss functions of both models are minimized simultaneously. The structure of the self-defined DCGAN is shown in Fig. 2.
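As a concrete illustration of such a structure, the following is a minimal tf.keras sketch, not the exact network of Fig. 2: the layer counts, filter numbers, kernel sizes, noise dimension, and 64x64 output resolution are assumptions, and only the activation choices follow the text (ReLU and a tanh output in the generative model; LeakyReLU and a final Sigmoid in the discriminative model, whose hidden fully connected layer is removed).

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_generator(noise_dim=100):
    # noise vector -> reshape into an image-like tensor -> deconvolution up to picture format
    return tf.keras.Sequential([
        layers.Dense(4 * 4 * 256, input_shape=(noise_dim,)),
        layers.Reshape((4, 4, 256)),
        layers.Conv2DTranspose(128, 5, strides=2, padding="same"),
        layers.BatchNormalization(), layers.ReLU(),
        layers.Conv2DTranspose(64, 5, strides=2, padding="same"),
        layers.BatchNormalization(), layers.ReLU(),
        layers.Conv2DTranspose(32, 5, strides=2, padding="same"),
        layers.BatchNormalization(), layers.ReLU(),
        layers.Conv2DTranspose(3, 5, strides=2, padding="same", activation="tanh"),
    ])

def build_discriminator(image_shape=(64, 64, 3)):
    # convolutional discriminator without a hidden fully connected layer; single sigmoid output
    return tf.keras.Sequential([
        layers.Conv2D(64, 5, strides=2, padding="same", input_shape=image_shape),
        layers.LeakyReLU(0.2),
        layers.Conv2D(128, 5, strides=2, padding="same"),
        layers.BatchNormalization(), layers.LeakyReLU(0.2),
        layers.Conv2D(256, 5, strides=2, padding="same"),
        layers.BatchNormalization(), layers.LeakyReLU(0.2),
        layers.Flatten(),
        layers.Dense(1, activation="sigmoid"),
    ])
```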
Step 2: Introduce a learning-rate decay strategy to accelerate learning
To accelerate the training of DCGAN, we adopt a strategy in which the learning rate decays continuously. The self-defined DCGAN optimizes its network parameters by mini-batch gradient descent; although Batch Normalization between the convolutions and the activation functions protects the gradient, noise inevitably appears during iteration, so the descent process does not converge exactly to the minimum but oscillates around it. The reasons for introducing a learning-rate decay strategy are as follows: in the early stage, a larger learning rate achieves faster convergence; as the learning rate becomes smaller, the convergence steps also shrink, so that oscillating near the minimum no longer causes much error. Together with the preceding Normalization operations, which keep the gradient smoother, the training process is therefore fast and stable. A learning-rate decay is applied after every certain number of iterations; the concrete steps are:
1. first use a larger learning rate;
2. gradually reduce the learning rate through the iterations.
The learning-rate decay strategy follows Formula 4, in which the decay_rate can be preset to 0.95, epoch_i is the i-th training iteration, and α_0 is the initial learning rate. The learning-rate decay runs synchronously with back-propagation: each time back-propagation is performed in an iteration, the learning rate is updated along with it, ensuring that the learning rate differs at every step.
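Formula 4 itself is not reproduced in this text; the exponential form below is one plausible reading, consistent with a decay_rate of 0.95 and a per-iteration update, and should be taken as an assumption rather than the patent's exact formula.

```python
def decayed_learning_rate(alpha_0, decay_rate, epoch_i):
    # one plausible reading of Formula 4: exponential decay per iteration
    # alpha_0: initial learning rate, decay_rate in (0.1, 1.0], epoch_i: iteration index
    return alpha_0 * (decay_rate ** epoch_i)

# Example with alpha_0 = 1.0 and decay_rate = 0.95:
# iteration 0 -> 1.0, iteration 10 -> ~0.599, iteration 100 -> ~0.0059
```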
Plain stochastic gradient descent does not perform well on its own: a decayed learning rate can only improve its result, not its efficiency. To obtain an optimal solution quickly and make later training more stable, the decayed learning rate needs to be combined with an optimizer for convergence. Most optimizers, such as Momentum, act when updating parameters: taking the momentum factor into account makes the gradient steeper, and although it converges, the process can become very tortuous. Another kind of optimizer, such as AdaGrad, instead modifies the learning rate, which amounts to adding a penalty pattern so that each parameter has its own learning efficiency. We combine the two methods and use Adam to accelerate the training of the neural network; its mathematical form is shown below.
m_i = b1 * m_{i-1} + (1 - b1) * dx (Formula 5)
v_i = b2 * v_{i-1} + (1 - b2) * dx^2 (Formula 6)
The update of the weight parameters depends on the two variables m and v, where dx is the gradient change. In Formula 5, m carries the gradient (momentum) attribute of Momentum; in Formula 6, the calculation of v carries the resistance attribute of AdaGrad. Formula 7 takes both m and v into account to realize the weight-parameter update. In the experiments, we pass the loss function to the optimizer as the source of back-propagation and run it together with the iterative operation. After each round of training, we check the accuracy and error rate returned by the feed-forward network and use them to judge the robustness of the model.
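Formulas 5 and 6 are the two accumulators of Adam; Formula 7, which combines them into the weight update, is not reproduced here. The sketch below uses the standard Adam update, including the bias correction and the small epsilon term, neither of which is stated in the patent, so it should be read as an assumed form of Formula 7.

```python
import numpy as np

def adam_step(w, dx, m, v, t, alpha=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update. w: parameter, dx: gradient, m/v: accumulators, t: step count (1-based)."""
    m = b1 * m + (1 - b1) * dx          # Formula 5: momentum-style first moment
    v = b2 * v + (1 - b2) * dx ** 2     # Formula 6: AdaGrad-style second moment
    m_hat = m / (1 - b1 ** t)           # bias correction (assumed, standard Adam)
    v_hat = v / (1 - b2 ** t)
    w = w - alpha * m_hat / (np.sqrt(v_hat) + eps)  # assumed form of Formula 7
    return w, m, v
```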
Step 3: Detect the generated samples
Before sample detection, samples must first be generated. We train DCGAN separately with radar profiles of the two categories "rain with wind" and "rain without wind" as samples, because samples of these two types are comparatively difficult to collect. For more efficient training, and to avoid freezes caused by reading all pictures into memory at once, we train in mini-batches of 64 pictures per batch. Every 100 batches, a sample image is generated locally. Fully learning the image features requires many rounds of effective training; for the convenience of continued training and later sample generation, a model is saved after every 100 training rounds. After training, samples can be generated by loading the trained model. Fig. 3 shows real images and generated images. Although the samples generated by DCGAN look visually very close to the real samples, the human eye cannot serve as the standard for judging whether a generated sample is qualified. We need to test them and prove whether the generated samples carry the attributes of real data. We use the pre-trained CNN recognition framework of Fig. 4 as the detection tool, feed it randomly selected generated samples, and verify the quality of the generated samples from the classification results. If a generated sample is accurately classified into the corresponding category, we consider it qualified. In tests, the generated samples of the "rain with wind" class were accurately classified with a success rate of 90%, and those of the "rain without wind" class with 88%. These figures stay within a reasonable error range of the success rates approached by real data during pre-training, proving that the generated samples can be used for training together with real samples.
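One way to organize the mini-batch schedule described above (64 pictures per batch, a sample image and a saved model every 100 training rounds) is sketched below. It reuses the loss helpers from the earlier sketch; the optimizer settings, noise dimension, dataset object, and sample_fn callback are illustrative assumptions, not values taken from the patent.

```python
import tensorflow as tf

BATCH_SIZE = 64
NOISE_DIM = 100          # assumed noise-vector length
g_opt = tf.keras.optimizers.Adam(1e-4)   # assumed learning rates
d_opt = tf.keras.optimizers.Adam(1e-4)

@tf.function
def train_step(generator, discriminator, real_images):
    noise = tf.random.normal([BATCH_SIZE, NOISE_DIM])
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake_images = generator(noise, training=True)
        d_real = discriminator(real_images, training=True)
        d_fake = discriminator(fake_images, training=True)
        d_loss = discriminator_loss(d_real, d_fake)   # LOSS(D) from the earlier sketch
        g_loss = generator_loss(d_fake)               # LOSS(G) from the earlier sketch
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))

def train(generator, discriminator, dataset, steps, sample_fn, ckpt_dir="./ckpt"):
    # dataset: a tf.data.Dataset yielding batches of 64 real radar profiles (assumed)
    ckpt = tf.train.Checkpoint(generator=generator, discriminator=discriminator)
    for step, batch in enumerate(dataset.take(steps), start=1):
        train_step(generator, discriminator, batch)
        if step % 100 == 0:       # every 100 batches: write a sample image and save the model
            sample_fn(generator)
            ckpt.save(ckpt_dir + "/dcgan")
```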
With generated samples available, sample detection can be carried out. Since the training samples are of two types, the generated samples also come in two types. We run the four-class classification on the generated samples; if the two classes of samples are classified correctly, these generated samples are qualified. To avoid the influence of low-probability recognitions, the network used for detecting samples and the network that realizes image recognition below are the same. We first build the image recognition model and complete its pre-training, comparing it with the original CNN at the same time. After its superior performance is shown, all of its tensor values and structure, i.e. the ckpt files of the model, are copied to carry out the classification check of the samples.
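A simple way to run this check, restoring the pre-trained recognition model and measuring how many generated samples land in their intended class, is sketched below; the class index, variable names, and helper are illustrative only.

```python
import numpy as np

def check_generated_samples(classifier, generated_images, expected_class):
    """Fraction of generated samples that the pre-trained CNN assigns to the expected class."""
    probs = classifier.predict(generated_images, verbose=0)
    predicted = np.argmax(probs, axis=1)
    return float(np.mean(predicted == expected_class))

# Illustrative usage: class 0 = "rain with wind", images scaled as during pre-training
# acc = check_generated_samples(classifier, fake_rain_wind_images, expected_class=0)
# print(f"accurately classified: {acc:.0%}")   # the patent reports ~90% for this class
```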
Step 4: Build the CNN-based image recognition framework
In the present invention, the recognition model is built as a neural network with 4 convolutional layers and 3 fully connected layers, and its output is four categories. The depth of the network model is determined by the scale of the test data and the number of categories, and can later be expanded according to the actual situation. The model framework is shown in Fig. 4.
In the first convolutional layer, we define 32 convolution kernels of dimension 5x5; the weights are initialized with random values drawn from a normal distribution with standard deviation 0.01, and the biases are initialized to 0. The convolution stride is uniformly set to 1, and the border is handled by zero-padding beyond the boundary. The pooling stride is set to 2, and its boundary handling directly discards regions smaller than the kernel size. In the remaining convolutional layers, the initialization of weights, biases, convolution kernels, and pooling is kept consistent with the first layer. The second convolutional layer has 64 kernels of 5x5; the third has 128 kernels of 3x3; the fourth also has 128 kernels of 3x3. Since a CNN takes picture pixels directly as input, the data dimensions must be transformed to obtain the final one-dimensional classification result. We therefore define 1024 neurons in the first fully connected layer to convert the dimensions. Considering the activation rule of neurons, namely that the more pronounced the activation effect of the data, the more strongly a neuron is invoked, the nonlinear activation function ReLU is used. To prevent too many unnecessary neurons from taking part in the computation, a dropout mechanism is defined between the fully connected layers; it keeps part of the neurons dormant, avoiding the excessive computation caused by activating too many neurons, and is closer to the mechanism of human thinking. In the second fully connected layer we define 512 neurons, likewise with the ReLU activation function and an additional dropout mechanism. In the last fully connected layer we define 4 neurons for result output, representing the randomized results of the four classes, as shown in Fig. 5.
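The architecture described above can be expressed compactly in tf.keras as follows; the input resolution (the patent's images are 540x440 and would presumably be resized), the 2x2 pooling window, and the dropout rate are assumptions, while the kernel counts and sizes, the initialization, the strides, and the 1024/512/4 fully connected layers follow the text.

```python
import tensorflow as tf
from tensorflow.keras import layers, initializers

def build_classifier(input_shape=(128, 128, 3), num_classes=4, drop_rate=0.5):
    w_init = initializers.RandomNormal(stddev=0.01)   # normal init, std 0.01; biases start at 0

    def conv(filters, kernel):
        return layers.Conv2D(filters, kernel, strides=1, padding="same", activation="relu",
                             kernel_initializer=w_init, bias_initializer="zeros")

    def pool():
        # stride-2 pooling; "valid" discards regions smaller than the window
        return layers.MaxPooling2D(pool_size=2, strides=2, padding="valid")

    return tf.keras.Sequential([
        tf.keras.Input(shape=input_shape),
        conv(32, 5), pool(),      # first convolutional layer: 32 kernels of 5x5
        conv(64, 5), pool(),      # second: 64 kernels of 5x5
        conv(128, 3), pool(),     # third: 128 kernels of 3x3
        conv(128, 3), pool(),     # fourth: 128 kernels of 3x3
        layers.Flatten(),
        layers.Dense(1024, activation="relu"), layers.Dropout(drop_rate),
        layers.Dense(512, activation="relu"), layers.Dropout(drop_rate),
        layers.Dense(num_classes, activation="softmax"),   # four weather classes
    ])
```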
Step 5: Performance comparison and optimization
Our pre-training dataset has 10,000 radar profiles covering 4 categories: rain with wind, rain without wind, wind without rain, and neither rain nor wind. Each category has 2,500 images, and each image is 540x440 pixels. The images come from radar observation stations in Nanjing and Anhui Province in 2016 and 2017. The DCGAN data used for quality verification are of two classes: rain with wind and rain without wind, with 200 generated images each.
In the final combined training, the dataset generated by DCGAN is expanded to 1,000 images per class. After the real data are added, the quantities of the 4 classes of data are kept consistent. In the final test stage, we test the 4 classes with 200 radar images per class; these images are real radar data that never took part in training.
The DCGAN-generated data are put into the training set of the corresponding category and trained again together with the real data, and the overall accuracy of the combined training is found to improve. As shown in Fig. 6, compared with the situation before combined training, the training process is more stable and the accuracy is also improved. To verify whether the model after combined training has been strengthened, we compare one group of recognition results: Fig. 7 shows the 4-class results after pre-training of the recognition framework, and Fig. 8 shows the recognition results after the model is strengthened. By comparison, the recognition accuracy of the strengthened model is improved.
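The combined training set described above could be assembled as in the sketch below; the dictionary layout (class label mapped to image arrays) and the helper name are illustrative, not from the patent, and only the rule of adding up to 1,000 generated images per class follows the text.

```python
import numpy as np

def build_combined_training_set(real_by_class, generated_by_class, per_class_generated=1000):
    """Merge real radar profiles with DCGAN-generated ones per class, keeping class sizes consistent."""
    images, labels = [], []
    for label, real_images in real_by_class.items():
        fake_images = generated_by_class.get(label, [])[:per_class_generated]
        for img in list(real_images) + list(fake_images):
            images.append(img)
            labels.append(label)
    return np.array(images), np.array(labels)
```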

Claims (7)

1. A method for improving CNN-based image recognition performance using DCGAN, characterized by comprising the following steps:
(1) defining the structure of the generative model and the discriminative model in DCGAN;
(2) establishing a learning-rate acceleration strategy;
(3) detecting the generated samples;
(4) building the CNN-based image recognition framework;
(5) optimizing performance.
2. The method for improving CNN-based image recognition performance using DCGAN according to claim 1, characterized in that: the generative model described in step (1) includes a data conversion layer and deconvolution layers, and the activation function between the data conversion layer and the deconvolution layers is the LeakyReLU function.
3. The method for improving CNN-based image recognition performance using DCGAN according to claim 1, characterized in that: the discriminative model described in step (1) includes convolutional layers and a fully connected layer, the activation function between the convolutional layers and the fully connected layer is the ReLU function, and the binary-classification function at the end of the fully connected layer is a Sigmoid or SoftMax function.
4. The method for improving CNN-based image recognition performance using DCGAN according to claim 1, characterized in that: step (1) includes establishing the network loss functions, which comprise the overall network loss function, the generative-model loss function, and the discriminative-model loss function, defined as follows:
The overall network loss function is:
min_G max_D V(D,G) = E_{x~p_data(x)}[log D(x)] + E_{z~p_z(z)}[log(1 - D(G(z)))];
The generative-model loss function is:
LOSS(G) = -log(D2(G(z)));
The discriminative-model loss function is:
LOSS(D) = -(log(D1(x)) + log(1 - D2(G(z))));
where D(x) is the discriminant function on data x, G(z) is the generating function on noise z, E_{x~p_data(x)} denotes that x is drawn from the data probability distribution, similarly in E_{z~p_z(z)} z is drawn from the noise distribution, and D1(x) and D2(x) are operationally equivalent.
5. The method for improving CNN-based image recognition performance using DCGAN according to claim 1, characterized in that: step (2) includes optimizing the network parameters by mini-batch gradient descent, the network parameters including the batch size batch, the number of iterations epoch, the learning rate α, the weights and biases W and b to be adjusted, and the momentum factors m and v added when optimizing gradient descent.
6. The method for improving CNN-based image recognition performance using DCGAN according to claim 1, characterized in that: step (2) specifically includes the following steps:
(21) setting the learning rate, with an initial value in the range [0.9, 1.0], back-propagation updating the weights and biases according to the formula
W = W - α * (∂LOSS/∂W),
where W denotes the weight or bias being updated and α is the learning rate;
(22) gradually reducing the learning rate through the iterations, invoking back-propagation in each cycle to adjust the weights and biases and thereby seek the minimum of the loss function, the learning-rate decay amplitude being applied within the iterative operation, and the learning-rate decay strategy following a decay formula in which decay_rate lies in the range 0.1 to 1.0, epoch_i is the i-th training iteration, and α_0 is the initial learning rate with a value in the range 0.1 to 1.0.
7. The method for improving CNN-based image recognition performance using DCGAN according to claim 1, characterized in that: the recognition model in step (4) uses a neural network with 4 convolutional layers and 3 fully connected layers, and the output comprises four classes: rain with wind, rain without wind, wind without rain, and neither rain nor wind.
CN201810467893.9A 2018-05-16 2018-05-16 Method for improving CNN-based image recognition performance by using DCGAN Active CN108665005B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810467893.9A CN108665005B (en) 2018-05-16 2018-05-16 Method for improving CNN-based image recognition performance by using DCGAN

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810467893.9A CN108665005B (en) 2018-05-16 2018-05-16 Method for improving CNN-based image recognition performance by using DCGAN

Publications (2)

Publication Number Publication Date
CN108665005A true CN108665005A (en) 2018-10-16
CN108665005B CN108665005B (en) 2021-12-07

Family

ID=63779730

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810467893.9A Active CN108665005B (en) 2018-05-16 2018-05-16 Method for improving CNN-based image recognition performance by using DCGAN

Country Status (1)

Country Link
CN (1) CN108665005B (en)



Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107909082A (en) * 2017-10-30 2018-04-13 东南大学 Sonar image target identification method based on depth learning technology

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WEI FANG et al.: "A Method for Improving CNN-Based Image Recognition Using DCGAN", Computers, Materials & Continua *
ZHAO FEINI: "Research on SAR Image Target Recognition Based on Deep Learning Networks", China Masters' Theses Full-text Database, Information Science and Technology *

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109523478B (en) * 2018-11-09 2021-06-04 智慧眼科技股份有限公司 Image descreening method and storage medium
CN109523478A (en) * 2018-11-09 2019-03-26 北京智慧眼科技股份有限公司 Image removes grid method, storage medium
CN109711442A (en) * 2018-12-15 2019-05-03 中国人民解放军陆军工程大学 Unsupervised layer-by-layer generation fights character representation learning method
CN109829495A (en) * 2019-01-29 2019-05-31 南京信息工程大学 Timing image prediction method based on LSTM and DCGAN
CN110188774A (en) * 2019-05-27 2019-08-30 昆明理工大学 A kind of current vortex scan image classifying identification method based on deep learning
CN110188774B (en) * 2019-05-27 2022-12-02 昆明理工大学 Eddy current scanning image classification and identification method based on deep learning
CN110246506A (en) * 2019-05-29 2019-09-17 平安科技(深圳)有限公司 Voice intelligent detecting method, device and computer readable storage medium
CN110516561A (en) * 2019-08-05 2019-11-29 西安电子科技大学 SAR image target recognition method based on DCGAN and CNN
CN110516561B (en) * 2019-08-05 2022-12-06 西安电子科技大学 SAR image target identification method based on DCGAN and CNN
CN110717374A (en) * 2019-08-20 2020-01-21 河海大学 Hyperspectral remote sensing image classification method based on improved multilayer perceptron
CN110660045B (en) * 2019-08-30 2021-12-10 杭州电子科技大学 Lymph node identification semi-supervision method based on convolutional neural network
CN110660045A (en) * 2019-08-30 2020-01-07 杭州电子科技大学 Lymph node identification semi-supervision method based on convolutional neural network
CN110956255A (en) * 2019-11-26 2020-04-03 中国医学科学院肿瘤医院 Difficult sample mining method and device, electronic equipment and computer readable storage medium
CN110956255B (en) * 2019-11-26 2023-04-07 中国医学科学院肿瘤医院 Difficult sample mining method and device, electronic equipment and computer readable storage medium
CN110992334B (en) * 2019-11-29 2023-04-07 四川虹微技术有限公司 Quality evaluation method for DCGAN network generated image
CN110992334A (en) * 2019-11-29 2020-04-10 深圳易嘉恩科技有限公司 Quality evaluation method for DCGAN network generated image
CN111768325A (en) * 2020-04-03 2020-10-13 南京信息工程大学 Security improvement method based on generation of countermeasure sample in big data privacy protection
CN111768325B (en) * 2020-04-03 2023-07-25 南京信息工程大学 Security improvement method based on generation of countermeasure sample in big data privacy protection
CN111986142A (en) * 2020-05-23 2020-11-24 冶金自动化研究设计院 Unsupervised enhancement method for surface defect image data of hot-rolled plate coil
CN114169385A (en) * 2021-09-28 2022-03-11 北京工业大学 MSWI process combustion state identification method based on mixed data enhancement
CN114169385B (en) * 2021-09-28 2024-04-09 北京工业大学 MSWI process combustion state identification method based on mixed data enhancement

Also Published As

Publication number Publication date
CN108665005B (en) 2021-12-07

Similar Documents

Publication Publication Date Title
CN108665005A (en) A method of it is improved based on CNN image recognition performances using DCGAN
Zhao et al. A visual long-short-term memory based integrated CNN model for fabric defect image classification
CN108491765B (en) Vegetable image classification and identification method and system
CN105184312B (en) A kind of character detecting method and device based on deep learning
CN108510194A (en) Air control model training method, Risk Identification Method, device, equipment and medium
CN107016406A (en) The pest and disease damage image generating method of network is resisted based on production
CN110414601A (en) Photovoltaic module method for diagnosing faults, system and equipment based on depth convolution confrontation network
CN107529650A (en) The structure and closed loop detection method of network model, related device and computer equipment
CN108615010A (en) Facial expression recognizing method based on the fusion of parallel convolutional neural networks characteristic pattern
CN109409198A (en) AU detection model training method, AU detection method, device, equipment and medium
CN108229381A (en) Face image synthesis method, apparatus, storage medium and computer equipment
CN109255364A (en) A kind of scene recognition method generating confrontation network based on depth convolution
CN106980858A (en) The language text detection of a kind of language text detection with alignment system and the application system and localization method
CN107945153A (en) A kind of road surface crack detection method based on deep learning
CN110232280A (en) A kind of software security flaw detection method based on tree construction convolutional neural networks
CN111507884A (en) Self-adaptive image steganalysis method and system based on deep convolutional neural network
CN110532920A (en) Smallest number data set face identification method based on FaceNet method
CN107122375A (en) The recognition methods of image subject based on characteristics of image
CN109102014A (en) The image classification method of class imbalance based on depth convolutional neural networks
CN106372581A (en) Method for constructing and training human face identification feature extraction network
CN111339935B (en) Optical remote sensing picture classification method based on interpretable CNN image classification model
CN111860171A (en) Method and system for detecting irregular-shaped target in large-scale remote sensing image
CN110390347A (en) Conditions leading formula confrontation for deep neural network generates test method and system
CN109389171A (en) Medical image classification method based on more granularity convolution noise reduction autocoder technologies
Sarigül et al. Comparison of different deep structures for fish classification

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: 210044 No. 219 Ningliu Road, Jiangbei New District, Nanjing City, Jiangsu Province

Applicant after: Nanjing University of Information Science and Technology

Address before: 211500 Yuting Square, 59 Wangqiao Road, Liuhe District, Nanjing City, Jiangsu Province

Applicant before: Nanjing University of Information Science and Technology

GR01 Patent grant
GR01 Patent grant