CN109543745B - Feature learning method and image recognition method based on a conditional adversarial autoencoder network - Google Patents

Publication number: CN109543745B (grant); CN109543745A (application)
Application number: CN201811379900.6A
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 陈秀宏, 肖汉雄
Assignee (original and current): Jiangnan University
Filing and priority date: 2018-11-20
Legal status: Active

Classifications

    • G06F18/214 — Pattern recognition; analysing; design or setup of recognition systems or techniques; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F18/217 — Pattern recognition; analysing; design or setup of recognition systems or techniques; validation; performance evaluation; active pattern learning techniques

Abstract

The feature learning method based on a conditional adversarial autoencoder network controls the generation result through a label variable and generates labelled samples that can assist other learning tasks, improving image recognition efficiency. The invention also discloses an image recognition method based on this feature learning method, which achieves high-accuracy recognition in scenes with complex object layouts and multi-label images. The method comprises: S1, selecting a training set; S2, selecting a prior distribution function; S3, constructing a network model on the basis of the adversarial autoencoder model by introducing the label information of the sample data into the input data of the discriminator or the encoder; S4, training the network model to optimize its parameters; S5, extracting data features through the encoder of the trained network model, or generating a reconstruction of the original input with the trained generator.

Description

Feature learning method and image recognition method based on a conditional adversarial autoencoder network
Technical Field
The invention relates to the technical field of image recognition, and in particular to a feature learning method and an image recognition method based on a conditional adversarial autoencoder network.
Background
In recent years, Deep Learning has achieved many remarkable results in computer vision, and representative deep learning methods such as the Convolutional Neural Network (CNN), the AutoEncoder (AE), the Generative Adversarial Network (GAN) and the Adversarial AutoEncoder (AAE) are widely used in image recognition. However, as the technology develops, image recognition is increasingly applied to complex image layouts and multi-label image scenes, and it remains a very challenging task to effectively learn the data distribution from limited samples by exploiting the label information of the data, so as to improve the efficiency and accuracy of image recognition in such complex scenes.
As shown in fig. 1, the GAN model captures the data distribution through a min-max adversarial game between the generator 3 and the discriminator 4: during training the generator produces a pseudo training sample 5 from a given prior distribution 2, and the discriminator 4 learns to tell the pseudo training sample 5 from the real sample 1, thereby improving its ability to judge the source of a sample. Although the GAN model can capture the data distribution well, it still suffers from mode collapse during training and provides no mapping from the sample space to the latent space. As shown in fig. 2, in the AAE model an encoder 6 encodes the real sample 1 into a hidden layer 8, and a decoder 7 decodes the hidden layer 8 to obtain a reconstruction of the original input; the encoder 6 and the decoder 7 constitute the AE part. The AAE model performs adversarial training with the discriminator 4 by using the coding features of the real sample 1 extracted through the hidden layer 8 as negative samples and draws from the given prior distribution 2 as positive samples. AAE is an unsupervised learning method: although its generated samples fit the distribution of the training samples well and it alleviates the problem that images generated by the GAN model look unrealistic, AAE can only generate unlabelled samples and cannot effectively exploit label information.
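For reference, the min-max game between the generator G and the discriminator D is conventionally written as follows; this is the standard objective from the GAN literature, quoted for clarity rather than reproduced from this patent:

$$\min_{G}\,\max_{D}\ \mathbb{E}_{x\sim p_{data}(x)}\left[\log D(x)\right]+\mathbb{E}_{z\sim p(z)}\left[\log\left(1-D(G(z))\right)\right]$$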
Disclosure of Invention
To solve the problem that the label information of the data cannot be effectively exploited to learn data features from limited samples, and thereby to obtain higher image recognition efficiency and accuracy, the invention provides a feature learning method based on a conditional adversarial autoencoder network, in which the generation result is controlled through a label variable and labelled samples are generated to assist other learning tasks, improving image recognition efficiency. The invention also discloses an image recognition method based on this feature learning method, which achieves high-accuracy image recognition in scenes with complex object layouts and multi-label images.
The technical scheme of the invention is as follows. The feature learning method based on the conditional adversarial autoencoder network comprises the following steps:
S1: selecting a training set;
S2: selecting a prior distribution function;
S3: constructing a network model on the basis of the adversarial autoencoder network model;
S4: training the network model to optimize the parameters in the network model;
S5: extracting data features through the encoder of the trained network model, or generating a reconstruction of the original input with the trained generator;
the method is characterized in that:
the network model constructed in step S3 on the basis of the adversarial autoencoder network model is a conditional adversarial autoencoder network model, and the label information of the sample data is introduced into the input data of the discriminator or the encoder of the network model, so that the discrimination ability of the conditional adversarial autoencoder network model is improved.
It is further characterized in that:
the conditional adversarial autoencoder network model is a conditional adversarial autoencoder network based on hidden-variable distribution fitting, which fuses the label information of the sample data in the discriminator: the label information of the sample data is used as an input of the discriminator together with draws from the prior distribution function selected in step S2 as positive samples and the data coding features of real samples extracted through the hidden layer as negative samples;
the conditional adversarial autoencoder network model is a sample-based conditional adversarial autoencoder network, which fuses the label information of the sample data in the encoder: the label information of the sample data is used as an input of the encoder together with draws from the prior distribution function selected in step S2;
in the sample-based conditional adversarial autoencoder network, a pseudo training sample is generated through the hidden layer and used as a negative sample, while a real sample taken from the training data is used as a positive sample; both are input to the discriminator, which judges the true source of the input pseudo training samples and real samples; the decoder decodes the hidden layer and restores the pseudo training sample to the given prior distribution function and the label information of the sample data;
the training process in step S4 includes the following steps:
s4-1: determining the number of training iterations;
S4-2: sampling from the training set to obtain a minibatch of training samples $\{x^{(1)},\ldots,x^{(m)}\}$;
S4-3: updating the network parameters $\theta_{Ae}$ of the encoder and $\theta_{Ad}$ of the decoder by a gradient method according to Equation 1:

$$\nabla_{\theta_{Ae},\,\theta_{Ad}}\ \frac{1}{m}\sum_{i=1}^{m}\left\|x^{(i)}-\mathrm{Dec}\left(\mathrm{Enc}\left(x^{(i)}\right)\right)\right\|^{2}\qquad(1)$$
S4-4: updating the network parameter $\theta_{Ae}$ of the encoder by a gradient method according to Equation 2:

$$\nabla_{\theta_{Ae}}\ \frac{1}{m}\sum_{i=1}^{m}\log\left(1-D\left(\mathrm{Enc}\left(x^{(i)}\right)\right)\right)\qquad(2)$$
S4-5: updating the network parameter $\theta_{d}$ of the discriminator by a gradient method according to Equation 3:

$$\nabla_{\theta_{d}}\ \frac{1}{m}\sum_{i=1}^{m}\left[\log D\left(z^{(i)}\right)+\log\left(1-D\left(\mathrm{Enc}\left(x^{(i)}\right)\right)\right)\right]\qquad(3)$$

where $z^{(1)},\ldots,z^{(m)}$ are drawn from the prior $p(z)$;
S4-6: repeating steps S4-2 through S4-5 until the number of repetitions equals the number of training iterations;
in the above formulas,
E(·) denotes expectation,
p_data(·) denotes the distribution of the real samples,
p(z) denotes the prior distribution from which the random samples are drawn,
z is the input random variable,
x represents an input sample,
D represents the discriminator;
the encoder and the decoder in the conditional adversarial autoencoder network based on hidden-variable distribution fitting can take any of the following four combinations:
fully connected encoder with fully connected decoder; fully connected encoder with deconvolutional decoder; convolutional encoder with fully connected decoder; convolutional encoder with deconvolutional decoder;
the encoder in the conditional adversarial autoencoder network based on hidden-variable distribution fitting comprises, connected in sequence, convolutional layer structures, a vectorization layer that vectorizes the feature maps, and a fully connected layer, wherein each convolutional layer structure comprises a convolutional layer followed in order by a BatchNorm layer, an activation function and a downsampling layer; the decoder comprises, connected in sequence, a fully connected layer, an inverse vectorization layer and deconvolutional layer structures, wherein each deconvolutional layer structure comprises a deconvolution layer followed in order by a BatchNorm layer, an activation function and an upsampling layer; the discriminator comprises fully connected layers; each fully connected layer is followed by the activation function;
the encoder of the sample-based conditional adversarial autoencoder network comprises fully connected layers; the decoder comprises fully connected layers; the discriminator comprises, connected in sequence, convolutional layer structures, a vectorization layer and a fully connected layer, wherein each convolutional layer structure comprises a convolutional layer followed by a BatchNorm layer, an activation function and a downsampling layer; each fully connected layer is followed by the activation function;
the activation function is the Rectified Linear Unit (ReLU) function; the convolution kernels of the convolutional layers and the deconvolution layers are all of size 3 × 3.
The image recognition method based on the feature learning method of the conditional adversarial autoencoder network comprises the following steps:
step 1, constructing a network model;
step 2, selecting a data set and a prior distribution function, and training the network model;
step 3, inputting the image data to be recognized into the trained network model, and performing image classification and recognition with the discriminator;
the method is characterized in that:
in step 1, the network model is a conditional adversarial autoencoder network model, and the label information of the sample data is fused in the discriminator or the encoder of the network model, enabling more effective recognition of the image data.
In the feature learning method based on the conditional adversarial autoencoder network provided by the invention, the conditional adversarial autoencoder network is constructed by introducing data label information into the adversarial autoencoder model; introducing supervised label information into the otherwise unsupervised training process strengthens and stabilizes the training of the adversarial autoencoder model, so that the collapse phenomenon seen in generative adversarial networks does not occur. Controlling the generation result through the label variable and generating labelled samples further improves the model's ability to learn discriminative features; compared with the plain adversarial autoencoder model, the conditional adversarial autoencoder network makes more effective use of the data labels and learns the data distribution more effectively from limited samples. With the image recognition method based on this feature learning method, the features of image data are better extracted and images are recognized more efficiently; in scenes with complex object layouts and multi-label images, a high-accuracy image recognition effect is obtained by effectively exploiting the label information of the data in limited samples.
Drawings
FIG. 1 is a schematic diagram of a GAN network in the prior art;
FIG. 2 is a schematic diagram of an AAE network in the prior art;
FIG. 3 is a schematic structural diagram of the conditional adversarial autoencoder network based on hidden-variable distribution fitting;
FIG. 4 is a schematic diagram of the sample-based conditional adversarial autoencoder network;
FIG. 5 is a schematic diagram of the structure of the encoder, decoder and discriminator in the conditional adversarial autoencoder network based on hidden-variable distribution fitting;
FIG. 6 is a schematic diagram of the structure of the encoder, decoder and discriminator in the sample-based conditional adversarial autoencoder network.
Detailed Description
As shown in figs. 1 to 6, the feature learning method for the conditional adversarial autoencoder network includes the following steps:
S1: selecting a training set;
S2: selecting a prior distribution function;
S3: constructing an adversarial autoencoder network model; the model is a conditional adversarial autoencoder network model whose input includes the label information of the sample data: the label information is introduced into the input data of the discriminator or the encoder of the network model, so that the data generated by the conditional adversarial autoencoder network model carries label information;
S4: training the adversarial autoencoder network model to optimize the parameters in the network model;
S5: extracting data features through the encoder of the trained adversarial autoencoder network model, or generating a reconstruction of the original input with the trained generator.
According to how the fused data label information is incorporated, the conditional adversarial autoencoder network model comes in two types: the conditional adversarial autoencoder network based on hidden-variable distribution fitting, and the sample-based conditional adversarial autoencoder network.
As shown in fig. 3, the conditional adversarial autoencoder network based on hidden-variable distribution fitting (hereinafter L-CAAE) fuses the label information 9 of the sample data in the discriminator 4-1: the label information 9 is used as an input of the discriminator 4-1 together with draws from the prior distribution function 2 selected in step S2 as positive samples and the data coding features of the real sample extracted through the hidden layer 8 as negative samples.
The encoder 6 and the decoder 7 in the conditional adversarial autoencoder network based on hidden-variable distribution fitting can take any of the following four combinations:
fully connected encoder with fully connected decoder; fully connected encoder with deconvolutional decoder; convolutional encoder with fully connected decoder; convolutional encoder with deconvolutional decoder.
as shown in fig. 5, as an optimal structure, the encoder 6 in the conditional robust self-encoding network based on implicit variable distribution fitting includes a plurality of convolutional layer structures, vectorization layers, and full-link layers, which are connected in sequence, where each convolutional layer structure includes a convolutional layer followed by a BatchNorm layer (BN layer for short), an activation function, and a down-sampling layer, and the vectorization layer performs vectorization operation on the convolved feature map, converts the feature map into a common vector, and then performs full-link operation; the decoder 7 comprises a full connection layer, an inverse vectorization layer and a plurality of continuous deconvolution layer structures which are sequentially connected, each deconvolution layer structure comprises a deconvolution layer and a BN layer, an activation function and an upsampling which are sequentially followed by the deconvolution layer, and the inverse vectorization layer converts the structure of the full connection layer into an acceptable input of a common volume base layer; the arbiter 4-1 comprises a plurality of fully connected layers, each fully connected layer followed by an activation function; the activation function is a reduced Linear Units function (hereinafter referred to as ReLU); the sizes of convolution kernels of the convolution layer and the deconvolution layer are all set to 3 × 3.
The conditional adversarial autoencoder network based on hidden-variable distribution fitting is a network with an encoding-decoding structure; the label information of the data supplied to the discriminator indirectly strengthens the encoder's ability to learn discriminative features.
As shown in fig. 4, the sample-based conditional adversarial autoencoder network (hereinafter S-CAAE) fuses the label information 9 of the sample data in the encoder 6-1: the label information 9 is used as an input of the encoder 6-1 together with draws from the prior distribution function 2 selected in step S2. In the sample-based conditional adversarial autoencoder network, a pseudo training sample 5 is generated through the hidden layer and used as a negative sample, while a real sample 1 taken from the training data 10 is used as a positive sample; both are input to the discriminator 4, which judges the true source of the input pseudo training sample 5 and real sample 1. The decoder 7-1 decodes the hidden layer and restores the pseudo training sample 5 to the given prior distribution function and the label information of the sample data.
As shown in FIG. 6, the encoder 6-1 of the sample-based conditional adversarial autoencoder network comprises a plurality of fully connected layers; the decoder 7-1 comprises a plurality of fully connected layers; the discriminator 4 comprises, connected in sequence, a plurality of convolutional layer structures, a vectorization layer and fully connected layers, where each convolutional layer structure comprises a convolutional layer followed by a BN layer, an activation function and a downsampling layer. The activation function is the ReLU function; each fully connected layer is followed by an activation function; the convolution kernels of the convolutional and deconvolution layers are all of size 3 × 3.
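A matching sketch of the S-CAAE variant follows, again with illustrative dimensions (the patent fixes only that the encoder and decoder are fully connected, that the discriminator is convolutional with BN, ReLU and downsampling, and the 3 × 3 kernels). The pseudo sample produced from the hidden layer is what the convolutional discriminator compares against real images:

```python
import torch
import torch.nn as nn

class SEncoder(nn.Module):
    """Fully connected; its input fuses a draw from the prior with the label (S-CAAE)."""
    def __init__(self, latent_dim=2, n_classes=10, img_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + n_classes, 1024), nn.ReLU(),
            nn.Linear(1024, img_dim), nn.Sigmoid(),    # hidden layer -> pseudo training sample
        )

    def forward(self, z, y_onehot):
        return self.net(torch.cat([z, y_onehot], dim=1))

class SDecoder(nn.Module):
    """Fully connected; restores the pseudo sample to the prior draw and the label information."""
    def __init__(self, latent_dim=2, n_classes=10, img_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_dim, 1024), nn.ReLU(),
            nn.Linear(1024, latent_dim + n_classes),
        )

    def forward(self, pseudo):
        return self.net(pseudo)

class SDiscriminator(nn.Module):
    """Convolutional: judges real images (positive) against pseudo samples (negative)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.fc = nn.Sequential(nn.Linear(64 * 7 * 7, 1), nn.Sigmoid())

    def forward(self, img_flat):                       # img_flat: (m, 784)
        h = self.conv(img_flat.view(-1, 1, 28, 28))    # vectorization after the conv stack
        return self.fc(h.flatten(1))
```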
The sample-based conditional adversarial autoencoder network model is a network model with a decoding-encoding structure; using the label information of the samples in the encoder completes the mapping from generated images back to the input features and improves the feature learning ability of the model.
The conditional adversarial autoencoder network model is built on the adversarial autoencoder model, so its optimization model is:

$$\min_{E}\,\max_{D}\ \mathbb{E}_{z\sim p(z)}\left[\log D(z,\,y)\right]+\mathbb{E}_{x\sim p_{data}(x)}\left[\log\left(1-D\left(E(x),\,y\right)\right)\right]$$
wherein:
E denotes the encoder,
D denotes the discriminator, a binary classifier that estimates the probability that an input sample comes from the real training data set rather than the generated data set,
x represents an input sample,
z is the input random variable,
y denotes the label variable that conditions the model.
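For a fixed encoder E, the inner maximization over D in this objective has a well-known closed form; this is a standard result from the GAN literature, quoted here for clarity rather than stated in the patent. Writing $q(z)$ for the distribution of the codes $E(x)$ with $x\sim p_{data}$ (the aggregated posterior), the optimal discriminator is

$$D^{*}(z)=\frac{p(z)}{p(z)+q(z)},$$

so the adversarial game drives the code distribution $q(z)$ toward the chosen prior $p(z)$.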
The training process in step S4 includes the following steps:
s4-1: determining the number of training iterations;
S4-2: sampling from the training set to obtain a minibatch of training samples $\{x^{(1)},\ldots,x^{(m)}\}$;
S4-3: updating the network parameters $\theta_{Ae}$ of the encoder and $\theta_{Ad}$ of the decoder by a gradient method according to Equation 1:

$$\nabla_{\theta_{Ae},\,\theta_{Ad}}\ \frac{1}{m}\sum_{i=1}^{m}\left\|x^{(i)}-\mathrm{Dec}\left(\mathrm{Enc}\left(x^{(i)}\right)\right)\right\|^{2}\qquad(1)$$
S4-4: updating the network parameter $\theta_{Ae}$ of the encoder by a gradient method according to Equation 2:

$$\nabla_{\theta_{Ae}}\ \frac{1}{m}\sum_{i=1}^{m}\log\left(1-D\left(\mathrm{Enc}\left(x^{(i)}\right)\right)\right)\qquad(2)$$
S4-5: updating the network parameter $\theta_{d}$ of the discriminator by a gradient method according to Equation 3:

$$\nabla_{\theta_{d}}\ \frac{1}{m}\sum_{i=1}^{m}\left[\log D\left(z^{(i)}\right)+\log\left(1-D\left(\mathrm{Enc}\left(x^{(i)}\right)\right)\right)\right]\qquad(3)$$

where $z^{(1)},\ldots,z^{(m)}$ are drawn from the prior $p(z)$;
S4-6: repeating steps S4-2 to S4-5 until the number of repetitions equals the number of training iterations. The discriminator and the generator are trained repeatedly and alternately so that their performance improves continuously; when the discriminator's ability has improved to the point where it can no longer correctly tell the data sources apart, the generator can be considered to have learned the distribution of the real data, and the final network parameters are obtained;
wherein:
E(·) denotes expectation,
p_data(·) denotes the distribution of the real samples,
p(z) denotes the prior distribution from which the random samples are drawn; p(z) may be chosen as a mixture of Gaussians, or as a uniform distribution function,
z is the input random variable,
x represents an input sample,
D denotes the discriminator.
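The alternating procedure S4-1 to S4-6 maps directly onto code. Below is a minimal sketch for the L-CAAE variant, reusing the Encoder, Decoder and Discriminator modules sketched earlier; the Adam optimizers follow the setting mentioned in the experiments, while the learning rates, the one-hot label encoding and the prior_sample helper (a mixture-of-Gaussians version is sketched after Table 1) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def train_lcaae(encoder, decoder, disc, loader, prior_sample, n_iters=200, n_classes=10):
    """Alternating training, steps S4-1 .. S4-6, for the L-CAAE variant."""
    opt_ae  = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
    opt_enc = torch.optim.Adam(encoder.parameters(), lr=1e-4)
    opt_d   = torch.optim.Adam(disc.parameters(), lr=1e-4)
    eps, it = 1e-8, 0                                   # S4-1: iteration budget fixed by caller
    while it < n_iters:
        for x, y in loader:                             # S4-2: minibatch {x^(1), ..., x^(m)}
            y1h = F.one_hot(y, n_classes).float()

            # S4-3 (Eq. 1): reconstruction update of theta_Ae and theta_Ad
            loss_rec = F.mse_loss(decoder(encoder(x)), x)
            opt_ae.zero_grad(); loss_rec.backward(); opt_ae.step()

            # S4-4 (Eq. 2): adversarial update of the encoder, theta_Ae
            loss_g = torch.log(1 - disc(encoder(x), y1h) + eps).mean()
            opt_enc.zero_grad(); loss_g.backward(); opt_enc.step()

            # S4-5 (Eq. 3): discriminator update of theta_d -- prior draws plus the
            # label are positive samples, encoder codes of real samples are negative
            z_prior = prior_sample(x.size(0))
            z_fake  = encoder(x).detach()
            loss_d = -(torch.log(disc(z_prior, y1h) + eps).mean()
                       + torch.log(1 - disc(z_fake, y1h) + eps).mean())
            opt_d.zero_grad(); loss_d.backward(); opt_d.step()

            it += 1                                     # S4-6: repeat until the budget is spent
            if it >= n_iters:
                break
```

The eps constant only guards the logarithms numerically; an equivalent binary-cross-entropy formulation of the same losses would also serve.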
The image recognition method based on the feature learning method of the conditional adversarial autoencoder network comprises the following steps:
1. constructing a network model, which is a conditional adversarial autoencoder network model in which the label information of the sample data is fused in the discriminator or the encoder, enabling more effective recognition of the image data;
2. selecting a data set and a prior distribution function, and training a network model;
3. inputting the image data to be recognized into the trained network model, and performing image classification and recognition with the discriminator.
Taking the conditional adversarial autoencoder network based on hidden-variable distribution fitting as an example: the prior distribution function is a Gaussian mixture model; the number of hidden units is set to 2; the convolutional layer structure of the encoder is Conv-BN-ReLU-downsampling with sizes 784-1024-2, and the deconvolutional layer structure of the decoder is Deconv-BN-ReLU-upsampling with sizes 784-1024-2; the number of training iterations is set to 200; and the Adam optimization method is adopted for each part (a sketch of such a prior sampler is given after Table 1). Experiments were performed on three datasets, MNIST, ETH80 and CIFAR10, using the following prior network models for comparison; all algorithms use softmax-based fine-tuning for the final result, yielding the results of Table 1:
the comparative network model is as follows:
(1) AutoEncoder (AE)
(2) Sparse AutoEncoder (SAE)
(3) Denoising AutoEncoder (DAE)
(4) Contractive AutoEncoder (CAE)
(5) K-Sparse AutoEncoder (K-SAE)
(6) Stacked Convolutional AutoEncoder (S-CAE)
TABLE 1 — Classification error rate (%) of each model on the experimental data sets
(The numerical entries of Table 1 appear in the original publication only as an image and are not reproduced here.)
As Table 1 shows, although the classification accuracy of every model drops noticeably as the data become more complex, the conditional adversarial autoencoder network based on hidden-variable distribution fitting achieves the best classification results. Combined with the data label information, it fits the given prior distribution well, so that what the network learns is more discriminative; in scenes with complex object layouts and multi-label images, a high-accuracy image recognition effect is obtained by effectively exploiting the label information of the data in limited samples.
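The Gaussian-mixture prior used in this experiment can be realized, for example, as the common arrangement of components spaced on a circle in the 2-D latent space. The component layout and all constants below are illustrative assumptions; the patent specifies only a Gaussian mixture model with 2 hidden units.

```python
import math
import torch

def prior_sample(m, n_components=10, radius=4.0, std=0.5):
    """Draw m points from a mixture of n_components 2-D Gaussians placed on a circle."""
    k = torch.randint(n_components, (m,))                       # component index per sample
    angles = 2 * math.pi * k.float() / n_components
    means = torch.stack([radius * torch.cos(angles),
                         radius * torch.sin(angles)], dim=1)    # component centres, shape (m, 2)
    return means + std * torch.randn(m, 2)                      # add isotropic Gaussian noise
```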

Claims (1)

1. An image recognition method based on the feature learning method of the conditional adversarial autoencoder network, comprising the following steps:
step 1, constructing a network model;
step 2, selecting a data set and a prior distribution function, and training the network model;
step 3, inputting the image data to be recognized into the trained network model, and performing image classification and recognition with a discriminator;
the method is characterized in that:
in step 1, the network model is a conditional adversarial autoencoder network model, which is either a conditional adversarial autoencoder network based on hidden-variable distribution fitting or a sample-based conditional adversarial autoencoder network; the label information of the sample data is fused in the discriminator of the conditional adversarial autoencoder network based on hidden-variable distribution fitting, or in the encoder of the sample-based conditional adversarial autoencoder network, thereby improving the discrimination ability of the conditional adversarial autoencoder network model;
the conditional adversarial autoencoder network model is trained by the feature learning method of the conditional adversarial autoencoder network, which comprises the following steps:
S1: selecting a training set;
S2: selecting a prior distribution function;
S3: constructing a network model on the basis of the adversarial autoencoder network model;
the label information of the sample data is introduced into the input data of the discriminator or the encoder of the network model, thereby improving the discrimination ability of the conditional adversarial autoencoder network model;
S4: training the network model to optimize the parameters in the network model;
S5: extracting data features through the encoder of the trained network model;
the conditional adversarial autoencoder network model being a conditional adversarial autoencoder network based on hidden-variable distribution fitting, which fuses the label information of the sample data in the discriminator: the label information is used as an input of the discriminator together with draws from the prior distribution function selected in step S2 as positive samples and the data coding features of a real sample extracted through a hidden layer as negative samples;
the encoder and the decoder in the conditional adversarial autoencoder network based on hidden-variable distribution fitting can take any of the following four combinations:
fully connected encoder with fully connected decoder; fully connected encoder with deconvolutional decoder; convolutional encoder with fully connected decoder; convolutional encoder with deconvolutional decoder;
the encoder in the conditional adversarial autoencoder network based on hidden-variable distribution fitting comprises, connected in sequence, convolutional layer structures, a vectorization layer that vectorizes the feature maps, and a fully connected layer, wherein each convolutional layer structure comprises a convolutional layer followed in order by a BatchNorm layer, an activation function and a downsampling layer; the decoder comprises, connected in sequence, a fully connected layer, an inverse vectorization layer and deconvolutional layer structures, wherein each deconvolutional layer structure comprises a deconvolution layer followed in order by a BatchNorm layer, an activation function and an upsampling layer; the discriminator comprises fully connected layers; each fully connected layer is followed by the activation function;
or the conditional adversarial autoencoder network model being a sample-based conditional adversarial autoencoder network, which fuses the label information of the sample data in the encoder: the label information is used as an input of the encoder together with draws from the prior distribution function selected in step S2;
in the sample-based conditional adversarial autoencoder network, a pseudo training sample is generated through a hidden layer and used as a negative sample, while a real sample taken from the training data is used as a positive sample; both are input to the discriminator, which judges the true source of the input pseudo training samples and real samples; a decoder decodes the hidden layer and restores the pseudo training sample to the given prior distribution function and the label information of the sample data;
the encoder of the sample-based conditional adversarial autoencoder network comprises fully connected layers; the decoder comprises fully connected layers; the discriminator comprises, connected in sequence, convolutional layer structures, a vectorization layer and a fully connected layer, wherein each convolutional layer structure comprises a convolutional layer followed by a BatchNorm layer, an activation function and a downsampling layer; each fully connected layer is followed by the activation function;
the training process in step S4 includes the following steps:
s4-1: determining the number of training iterations;
S4-2: sampling from the training set to obtain a minibatch of training samples $\{x^{(1)},\ldots,x^{(m)}\}$, wherein m is the number of samples;
s4-3: updating network parameters theta of the encoder and the decoder using a gradient method according to the following equation 1AeAnd thetaAd
Figure 602780DEST_PATH_IMAGE001
S4-4: updating the network parameter $\theta_{Ae}$ of the encoder by a gradient method according to Equation 2:

$$\nabla_{\theta_{Ae}}\ \frac{1}{m}\sum_{i=1}^{m}\log\left(1-D\left(\mathrm{Enc}\left(x^{(i)}\right)\right)\right)\qquad(2)$$
S4-5: updating the network parameter $\theta_{d}$ of the discriminator by a gradient method according to Equation 3:

$$\nabla_{\theta_{d}}\ \frac{1}{m}\sum_{i=1}^{m}\left[\log D\left(z^{(i)}\right)+\log\left(1-D\left(\mathrm{Enc}\left(x^{(i)}\right)\right)\right)\right]\qquad(3)$$

where $z^{(1)},\ldots,z^{(m)}$ are drawn from the prior $p(z)$;
S4-6: repeating steps S4-2 through S4-5 until the number of repetitions equals the number of training iterations;
in the above formulas,
E(·) denotes expectation,
p_data(·) denotes the distribution of the real samples,
p(z) denotes the prior distribution from which the random samples are drawn,
z is the input random variable,
x represents an input sample,
D denotes the discriminator.
CN201811379900.6A 2018-11-20 2018-11-20 Feature learning method and image recognition method based on a conditional adversarial autoencoder network Active CN109543745B (en)

Publications (2)

CN109543745A (en) — published 2019-03-29
CN109543745B (en) — granted 2021-08-24

Family ID: 65848319



Family Cites Families (1)

US10733744B2 * 2017-05-11 2020-08-04 KLA-Tencor Corp. Learning based approach for aligning images acquired with different modalities

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106709997A (en) * 2016-04-29 2017-05-24 电子科技大学 Three-dimensional key point detection method based on deep neural network and sparse auto-encoder
CN107590515A (en) * 2017-09-14 2018-01-16 西安电子科技大学 The hyperspectral image classification method of self-encoding encoder based on entropy rate super-pixel segmentation
CN107977629A (en) * 2017-12-04 2018-05-01 电子科技大学 A kind of facial image aging synthetic method of feature based separation confrontation network
CN108334497A (en) * 2018-02-06 2018-07-27 北京航空航天大学 The method and apparatus for automatically generating text

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party

Title
Alireza Makhzani et al.; "Adversarial Autoencoders"; Machine Learning; 2016-05-25; pp. 1-16 *
Zhifei Zhang et al.; "Age Progression/Regression by Conditional Adversarial Autoencoder"; Computer Vision and Pattern Recognition; 2017-03-28; pp. 4-6, figs. 3 and 6 *
Mehdi Mirza et al.; "Conditional Generative Adversarial Nets"; Machine Learning; 2015-11-06; pp. 1-7 *
Xiao Hanxiong et al.; "Feature-clustering adaptive variable-group sparse autoencoder network and image recognition" (特征聚类自适应变组稀疏自编码网络及图像识别); Computer Engineering & Science (计算机工程与科学); October 2018; vol. 40, no. 10; pp. 1858-1866 *
Wang Wanliang et al.; "Research progress on generative adversarial networks" (生成式对抗网络研究进展); Journal on Communications (通信学报); February 2018; vol. 39, no. 2 *



Legal Events

PB01 — Publication
SE01 — Entry into force of request for substantive examination
GR01 — Patent grant