CN113222147A - Construction method of conditional dual-adversarial learning inference model - Google Patents

Construction method of conditional dual-adversarial learning inference model

Info

Publication number
CN113222147A
CN113222147A
Authority
CN
China
Prior art keywords
distribution
sample
inference
samples
inference engine
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110510707.7A
Other languages
Chinese (zh)
Other versions
CN113222147B (en)
Inventor
赵川
冯志鹏
王成龙
高海珍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
North China Institute of Aerospace Engineering
Original Assignee
North China Institute of Aerospace Engineering
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by North China Institute of Aerospace Engineering
Priority to CN202110510707.7A
Publication of CN113222147A
Application granted
Publication of CN113222147B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00: Computing arrangements using knowledge-based models
    • G06N5/04: Inference or reasoning models
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Complex Calculations (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for constructing a conditional dual-adversarial learning inference model, which comprises the following steps: in the inference engine part, latent-variable samples are output through an inference network; in the generator part, a linear-combination Gaussian distribution related to the label information is established, and a generation network outputs signal samples from prior samples; in the adversarial part, two discriminators are introduced to perform bidirectional adversarial training on the latent variables and data samples of the inference engine and the generator; a new objective function is established and made to converge through the bidirectional adversarial training; the whole model is trained with training data and the network parameters of each part are determined, then test data are input into the inference engine to obtain features, which the classifier identifies to give a result. The construction method of the conditional dual-adversarial learning inference model can strengthen the differences between features of different classes by controlling the prior distribution, improve computational efficiency, improve feature pattern-recognition performance and improve pattern-recognition accuracy.

Description

Construction method of conditional dual-adversarial learning inference model
Technical Field
The invention relates to the technical field of adversarial-learning inference models, in particular to a construction method of a conditional dual-adversarial learning inference model.
Background
Vincent et al proposed an Adaptive Learning Index (ALI) model in 2017, which has Learned generation network (generation) through countermeasure mechanismn networks) and inference networks (inference networks). The generation network maps the random hidden variable samples to the data space, and the corresponding samples are
Figure BDA0003060267220000011
The inference network maps the data space samples to the hidden variable space, and the corresponding samples are
Figure BDA0003060267220000012
Two networks play a game of confrontation, and a discrimination network is trained to distinguish joint samples
Figure BDA0003060267220000013
And
Figure BDA0003060267220000014
researchers verify the effectiveness of the inference engine and the generator trained by the countermeasure mechanism, and meanwhile, the hidden variables extracted by the inference engine can be used as effective representations of the original data. Thus, in pattern recognition, the ALI model may be trained to extract features through an inference engine. However, subsequent studies show that there is a certain limitation when the ALI is used to process multi-class samples for pattern recognition: 1) the multi-class features obey the same distribution, the difference between the different class features is not strong, and the performance of pattern recognition is poor, 2) the combined sample is matched through a countermeasure mechanism
Figure BDA0003060267220000015
And
Figure BDA0003060267220000016
the distribution of (3) the required calculation amount is large, the calculation time required for reaching convergence is long, the model operation is easy to collapse, namely Nash balance cannot be reached, and 3) the characteristic polymerization degree extracted by the inference engine is poor, and the pattern recognition accuracy rate is not high enough.
Disclosure of Invention
The invention aims to provide a construction method of a conditional dual-adversarial learning inference model, which can strengthen the differences between features of different classes by controlling the prior distribution, improve computational efficiency, improve feature pattern-recognition performance and improve pattern-recognition accuracy.
In order to achieve the purpose, the invention provides the following scheme:
a construction method of a conditional dual-confrontation learning inference model comprises the following steps:
s1, in the inference engine part, the sample x obeys the distribution q (x), passing through the inference network Gz(x) Outputting latent variable samples
Figure BDA0003060267220000017
Obeying a conditional distribution q (z | x);
s2, in the generator part, firstly, establishing a linear combination Gaussian distribution related to the label information to restrain K characteristics, and secondly, enabling the sample z to pass through the generation network Gx(z) output signal samples
Figure BDA0003060267220000021
Obeying a conditional distribution p (x | z);
s3, in the countermeasure part, two discriminators are introduced to carry out bidirectional countermeasure training on hidden variables and data samples of the inference engine and the generator, and the distribution of the samples is determined;
s4, establishing a new target function, converging the target function through bidirectional confrontation training, realizing applying exclusive distribution to hidden variables of the inference engine, approximating the output of the generator to the input of the inference engine, and obeying the same distribution;
s5, inputting the output of the inference engine, namely the extracted features, in the classifier part, training the whole model by using training data, determining the network parameters of each part, inputting the test data into the inference engine to obtain the features, and identifying by the classifier to give a result.
Optionally, in step S2, establishing the linear-combination Gaussian distribution related to the label information to constrain the K classes of features specifically includes:

Let a random variable z = [z_1, z_2, …, z_d] be given, where d is the dimension of the Gaussian distribution; the linear-combination Gaussian distribution model is shown in formula (1):

$$p(z)=\sum_{k=1}^{K}\pi_k\,N(z\mid\mu_k,H_k)\qquad(1)$$

where $N(z\mid\mu_k,H_k)$ is the k-th component of the combined distribution, K is the total number of components, $\mu_k$ is the d-dimensional mean vector, $H_k$ is the covariance matrix describing the degree of correlation between the variables of each dimension within each Gaussian distribution, and $\pi_k$ is the weight of each Gaussian distribution in the linear combination, satisfying formula (2):

$$\sum_{k=1}^{K}\pi_k=1\qquad(2)$$

The joint probability density function of the k-th component is then expressed as formula (3):

$$N(z\mid\mu_k,H_k)=\frac{1}{(2\pi)^{d/2}\,\lvert H_k\rvert^{1/2}}\exp\!\left(-\frac{1}{2}(z-\mu_k)^{\mathrm{T}}H_k^{-1}(z-\mu_k)\right)\qquad(3)$$

Setting d = 2, an angle variable θ is calculated according to formula (4) to constrain the K classes of samples, where label denotes the sample class and the corresponding K-dimensional class vector is y, in which the (label+1)-th element is 1 and the rest are 0. Let the unit vector be $u=[\cos\theta,\ \sin\theta]^{\mathrm{T}}$ and let r be a real number; the class-related mean $\mu_k$ can then be calculated by formula (5), while letting $H_k=[h_{11},0;0,h_{22}]$. Sampling is performed from this distribution, and a sample z together with its corresponding y constitutes the composite sample (z, y). Given K = 10, r = 6, $h_{11}=0.5$, $h_{22}=0.1$, 100 samples are drawn from each Gaussian distribution. Formulas (4) and (5) are expressed as follows:

$$\theta=\frac{2\pi\cdot\mathrm{label}}{K}\qquad(4)$$

$$\mu_k=r\,u=r\,[\cos\theta,\ \sin\theta]^{\mathrm{T}}\qquad(5)$$
Optionally, in step S3, in the adversarial part, introducing two discriminators to perform bidirectional adversarial training on the latent variables and data samples of the inference engine and the generator, and determining which distribution the samples come from, specifically includes:

the input of the first discriminator is a composite vector consisting of a latent-variable sample and the corresponding label-information vector y; the composite sample (z, y) on the generator side is taken as true, and the composite sample $(\hat{z}, y)$ of the inference engine is false;

the input of the second discriminator is a data sample; the input x of the inference engine is taken as true, and the output $\hat{x}$ of the generator is false. If the sample is (z, y) or x, it is considered a true sample and the output is 1; otherwise the output is 0.
Optionally, in step S4, the new objective function established is:

$$\min_{G_x,G_z}\ \max_{D_1,D_2}\ V(D,G)=\mathbb{E}_{p(z)}\!\left[\log D_1(z,y)\right]+\mathbb{E}_{q(x)}\!\left[\log\!\left(1-D_1(G_z(x),y)\right)\right]+\mathbb{E}_{q(x)}\!\left[\log D_2(x)\right]+\mathbb{E}_{p(z)}\!\left[\log\!\left(1-D_2(G_x(z))\right)\right]\qquad(6)$$

where E denotes expectation, q(x) and p(z) denote the probability distributions obeyed by x and z respectively, and D(·) is the output of a discriminator, used to judge the probability that its input is real. The discriminators are trained to maximize the objective, so that real data fed into a discriminator yields the largest output probability, while the optimal generation function G(·) is sought to minimize the objective value; through these two opposing processes, the two sides carry out adversarial learning. Finally, training is performed using an alternating stochastic gradient descent method until convergence.
According to the specific embodiments provided by the invention, the invention discloses the following technical effects: 1) exclusive distributions are imposed on the features extracted from samples of different classes, and the differences between the features are strengthened by controlling these distributions; 2) a new constraint mechanism is constructed so that the output of the generator reconstructs the input of the inference engine and obeys the same distribution; 3) a new objective function is established, improving the convergence speed of training. Compared with Adversarially Learned Inference in the prior art, the model provided by the application runs stably, can give explicit meaning to the extracted features, can strengthen the differences between features of different classes by controlling the prior distribution, and improves the calculation rate and state-recognition accuracy.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description are only some embodiments of the present invention, and for those of ordinary skill in the art, other drawings can be obtained from them without inventive effort.
FIG. 1 is a flow chart of the method for constructing a conditional dual-adversarial learning inference model according to the present invention;
FIG. 2 is a schematic diagram of the linear-combination Gaussian distribution (K = 10);
FIG. 3 shows part of the samples of the MNIST dataset;
FIG. 4(a) to FIG. 4(d) are result graphs on partial MNIST data (10000 training cycles).
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The invention aims to provide a construction method of a conditional dual-adversarial learning inference model, so as to strengthen the differences between features of different classes, improve computational efficiency, improve feature pattern-recognition performance and improve pattern-recognition accuracy.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
As shown in FIG. 1, the method for constructing a conditional dual-adversarial learning inference model provided by the present invention includes the following steps:
s1, in the inference engine part, the sample x obeys the distribution q (x), passing through the inference network Gz(x) Outputting latent variable samples
Figure BDA0003060267220000041
Obeying a conditional distribution q (z | x);
s2, in the generator part, firstly, establishing a linear combination Gaussian distribution related to the label information to restrain K characteristics, and secondly, enabling the sample z to pass through the generation network Gx(z) output signal samples
Figure BDA0003060267220000042
Obeying a conditional distribution p (x | z);
s3, in the countermeasure part, two discriminators are introduced to carry out bidirectional countermeasure training on hidden variables and data samples of the inference engine and the generator, and the distribution of the samples is determined;
s4, establishing a new target function, converging the target function through bidirectional confrontation training, realizing applying exclusive distribution to hidden variables of the inference engine, approximating the output of the generator to the input of the inference engine, and obeying the same distribution;
s5, inputting the output of the inference engine, namely the extracted features, in the classifier part, training the whole model by using training data, determining the network parameters of each part, inputting the test data into the inference engine to obtain the features, and identifying by the classifier to give a result.
As shown in FIG. 2, a schematic diagram of the linear-combination Gaussian distribution (K = 10), in step S2, establishing the linear-combination Gaussian distribution related to the label information to constrain the K classes of features specifically includes:
Let a random variable z = [z_1, z_2, …, z_d] be given, where d is the dimension of the Gaussian distribution; the linear-combination Gaussian distribution model is shown in formula (1):

$$p(z)=\sum_{k=1}^{K}\pi_k\,N(z\mid\mu_k,H_k)\qquad(1)$$

where $N(z\mid\mu_k,H_k)$ is the k-th component of the combined distribution, K is the total number of components, $\mu_k$ is the d-dimensional mean vector, $H_k$ is the covariance matrix describing the degree of correlation between the variables of each dimension within each Gaussian distribution, and $\pi_k$ is the weight of each Gaussian distribution in the linear combination, satisfying formula (2):

$$\sum_{k=1}^{K}\pi_k=1\qquad(2)$$

The joint probability density function of the k-th component is then expressed as formula (3):

$$N(z\mid\mu_k,H_k)=\frac{1}{(2\pi)^{d/2}\,\lvert H_k\rvert^{1/2}}\exp\!\left(-\frac{1}{2}(z-\mu_k)^{\mathrm{T}}H_k^{-1}(z-\mu_k)\right)\qquad(3)$$

Setting d = 2, an angle variable θ is calculated according to formula (4) to constrain the K classes of samples, where label denotes the sample class and the corresponding K-dimensional class vector is y, in which the (label+1)-th element is 1 and the rest are 0. Let the unit vector be $u=[\cos\theta,\ \sin\theta]^{\mathrm{T}}$ and let r be a real number; the class-related mean $\mu_k$ can then be calculated by formula (5), while letting $H_k=[h_{11},0;0,h_{22}]$. Sampling is performed from this distribution, and a sample z together with its corresponding y constitutes the composite sample (z, y). Given K = 10, r = 6, $h_{11}=0.5$, $h_{22}=0.1$, 100 samples are drawn from each Gaussian distribution. Formulas (4) and (5) are expressed as follows:

$$\theta=\frac{2\pi\cdot\mathrm{label}}{K}\qquad(4)$$

$$\mu_k=r\,u=r\,[\cos\theta,\ \sin\theta]^{\mathrm{T}}\qquad(5)$$
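As a worked illustration of formulas (1), (4) and (5) with the constants just given (K = 10, r = 6, h11 = 0.5, h22 = 0.1, 100 samples per class), here is a minimal NumPy sketch; the function name sample_prior and the equal component weights π_k = 1/K are our own assumptions:

```python
import numpy as np

def sample_prior(K=10, r=6.0, h11=0.5, h22=0.1, n_per_class=100, rng=None):
    """Draw composite samples (z, y) from the label-conditioned prior."""
    rng = np.random.default_rng() if rng is None else rng
    H = np.diag([h11, h22])  # H_k = [h11, 0; 0, h22], shared by all components
    zs, ys = [], []
    for label in range(K):
        theta = 2.0 * np.pi * label / K                    # formula (4)
        mu = r * np.array([np.cos(theta), np.sin(theta)])  # formula (5)
        zs.append(rng.multivariate_normal(mu, H, size=n_per_class))
        y = np.zeros(K)
        y[label] = 1.0  # K-dimensional one-hot class vector
        ys.append(np.tile(y, (n_per_class, 1)))
    return np.concatenate(zs), np.concatenate(ys)

z, y = sample_prior()  # z: (1000, 2), y: (1000, 10)
```

Plotting z colored by y reproduces the ring of ten clusters shown schematically in FIG. 2.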
in the step S3, in the countermeasure section, two discriminators are introduced to perform bidirectional countermeasure training on hidden variables and data samples of the inference engine and the generator, and determine which distribution the samples come from specifically includes:
the input of the first discriminator is a complex vector, which is composed of hidden variable samples
Figure BDA0003060267220000065
And the corresponding label information vector y, and taking the composite sample (z, y) of the generator as true, the composite sample of the inference engine
Figure BDA0003060267220000066
Is false;
the input of the second discriminator is the data sample, and the output of the generator is true with the input x of the inference engine
Figure BDA0003060267220000067
Is false; if the sample is (z, y) or x, it is considered to be a true sample and the output is 1, otherwise the output is zero.
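The true/false assignment just described can be written out directly; this short PyTorch sketch (the helper name composite is our own) shows how the composite vectors and the 1/0 targets are formed:

```python
import torch

def composite(z: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Composite vector for the first discriminator: a latent sample
    concatenated with its K-dimensional label vector y."""
    return torch.cat([z, y], dim=1)

batch = 64
real_target = torch.ones(batch, 1)   # output 1: (z, y) from the prior, or real x
fake_target = torch.zeros(batch, 1)  # output 0: (z_hat, y), or generated x_hat
```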
In step S4, the new objective function established is:

$$\min_{G_x,G_z}\ \max_{D_1,D_2}\ V(D,G)=\mathbb{E}_{p(z)}\!\left[\log D_1(z,y)\right]+\mathbb{E}_{q(x)}\!\left[\log\!\left(1-D_1(G_z(x),y)\right)\right]+\mathbb{E}_{q(x)}\!\left[\log D_2(x)\right]+\mathbb{E}_{p(z)}\!\left[\log\!\left(1-D_2(G_x(z))\right)\right]\qquad(6)$$

In formula (6), E denotes expectation; q(x) and p(z) denote the probability distributions obeyed by x and z respectively; D(·) is the output of a discriminator, used to judge the probability that its input (· denotes the corresponding input) is real. The discriminators are trained to maximize the objective, so that real data fed into a discriminator yields the largest output probability, while the optimal generation function G(·) is sought to minimize the objective value; through these two opposing processes, the two sides carry out adversarial learning. Finally, training is performed using an alternating stochastic gradient descent method until convergence. The mathematical expression of the algorithmic process is shown in Table 1.
TABLE 1 Conditional dual-adversarial learning inference algorithm
[Table 1 is presented as an image in the original publication.]
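Since Table 1 survives only as an image, the following is a minimal self-contained sketch of the alternating stochastic-gradient training implied by formula (6). The network sizes, Adam with learning rate 2e-4, the non-saturating generator loss and the batch handling are our own assumptions, not values taken from the original table:

```python
import numpy as np
import torch
import torch.nn as nn

D_X, D_Z, K = 784, 2, 10
bce = nn.BCELoss()

def mlp(d_in, d_out, final=None):
    layers = [nn.Linear(d_in, 256), nn.ReLU(), nn.Linear(256, d_out)]
    if final is not None:
        layers.append(final)
    return nn.Sequential(*layers)

G_z = mlp(D_X, D_Z)                  # inference network G_z(x)
G_x = mlp(D_Z, D_X, nn.Sigmoid())    # generation network G_x(z)
D_1 = mlp(D_Z + K, 1, nn.Sigmoid())  # discriminator over composite vectors (z, y)
D_2 = mlp(D_X, 1, nn.Sigmoid())      # discriminator over data samples

opt_d = torch.optim.Adam(list(D_1.parameters()) + list(D_2.parameters()), lr=2e-4)
opt_g = torch.optim.Adam(list(G_z.parameters()) + list(G_x.parameters()), lr=2e-4)

def sample_prior(n):
    """Composite samples (z, y) from the linear-combination Gaussian prior."""
    label = np.random.randint(0, K, size=n)
    theta = 2 * np.pi * label / K                           # formula (4)
    mu = 6.0 * np.stack([np.cos(theta), np.sin(theta)], 1)  # formula (5), r = 6
    z = mu + np.random.randn(n, D_Z) * np.sqrt([0.5, 0.1])  # h11, h22
    y = np.eye(K)[label]
    return (torch.tensor(z, dtype=torch.float32),
            torch.tensor(y, dtype=torch.float32))

def train_step(x, y_x):
    """One alternating step: x is a batch of real data, y_x its one-hot labels."""
    n = x.size(0)
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)
    z, y = sample_prior(n)

    # Discriminator step: maximizing objective (6) is equivalent to minimizing
    # binary cross-entropy with targets 1 for (z, y) and x, 0 for (z_hat, y_x)
    # and x_hat.
    opt_d.zero_grad()
    z_hat, x_hat = G_z(x).detach(), G_x(z).detach()
    loss_d = (bce(D_1(torch.cat([z, y], 1)), ones)
              + bce(D_1(torch.cat([z_hat, y_x], 1)), zeros)
              + bce(D_2(x), ones)
              + bce(D_2(x_hat), zeros))
    loss_d.backward()
    opt_d.step()

    # Inference/generation step: fool both discriminators (non-saturating form).
    opt_g.zero_grad()
    loss_g = (bce(D_1(torch.cat([G_z(x), y_x], 1)), ones)
              + bce(D_2(G_x(z)), ones))
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```

Used together with an MNIST loader such as the one sketched below, one call of train_step(x, torch.eye(10)[labels]) per batch implements the alternating descent.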
The present invention was validated using the MNIST dataset, a classic dataset in the field of machine learning originating from the National Institute of Standards and Technology (NIST) and consisting of 60000 training samples and 10000 test samples, each being a 28 × 28 pixel image of a handwritten digit. The training set was handwritten by 250 different people, 50% of whom were high-school students and 50% staff of the Census Bureau; the test set is handwritten digit data of the same kind. The dataset comprises 4 files in total: the training set, the training-set labels, the test set and the test-set labels. Compared with adversarially learned inference in the prior art, the model provided by the application strengthens the differences between features of different classes and improves the calculation rate and feature pattern-recognition performance.
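For reference, this is a minimal torchvision sketch of loading the dataset; the flattening transform matches the 784-dimensional input assumed in the earlier sketches, and the download path is illustrative:

```python
import torch
from torchvision import datasets, transforms

# Flatten each 28x28 handwritten-digit image to a 784-dimensional vector.
to_vec = transforms.Compose([transforms.ToTensor(),
                             transforms.Lambda(lambda t: t.view(-1))])

train_set = datasets.MNIST("./data", train=True, download=True, transform=to_vec)
test_set = datasets.MNIST("./data", train=False, download=True, transform=to_vec)

train_loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)
print(len(train_set), len(test_set))  # 60000 training and 10000 test samples
```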
This verification aims to observe the advantages of BiALI over ALI in terms of computational efficiency, sample reconstruction and feature extraction. FIG. 3 shows part of the samples used for training, and the test results are shown in FIG. 4(a) to FIG. 4(d), the result graphs on partial MNIST data (10000 training cycles): FIG. 4(a) and FIG. 4(c) are the digits reconstructed by ALI and the features it extracted, while FIG. 4(b) and FIG. 4(d) are the digits reconstructed by BiALI and the features it extracted. Taking into account the randomness of sample selection, the whole procedure was repeated 5 times, and the average running time is shown in Table 2.
TABLE 2 Comparison of computational efficiency
[Table 2 is presented as an image in the original publication.]
As can be seen from FIG. 4 and Table 2, under the same conditions and the same number of training cycles, the conditional dual-adversarial learning inference model runs more efficiently than ALI, reconstructs the samples better, and produces a more obvious clustering effect in the extracted features. In addition, in this research the ALI model was prone to instability when processing MNIST, with training collapse occurring because Nash equilibrium could not be reached, so accurate and ideal results could not be obtained; BiALI ran comparatively more stably, and no training collapse occurred. Therefore, the model constructed by the method can strengthen the differences between features of different classes by controlling the prior distribution, improve computational efficiency, and improve the feature pattern-recognition performance.
The construction method of the conditional dual-adversarial learning inference model provided by the invention imposes exclusive distributions on the features extracted from samples of different classes, strengthening the differences between the features; constructs a new constraint mechanism so that the output of the generator reconstructs the input of the inference engine and obeys the same distribution; and establishes a new objective function, improving the convergence speed of training.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and the core concept of the present invention. Meanwhile, for a person skilled in the art, the specific embodiments and the scope of application may be varied according to the idea of the present invention. In view of the above, the contents of this specification should not be construed as limiting the invention.

Claims (4)

1. A construction method of a conditional dual-adversarial learning inference model, characterized by comprising the following steps:
s1, in the inference engine part, the sample x obeys the distribution q (x), passing through the inference network Gz(x) Outputting latent variable samples
Figure FDA0003060267210000013
Obeying a conditional distribution q (z | x);
s2, in the generator part, firstly, establishing a linear combination Gaussian distribution related to the label information to restrain K characteristics, and secondly, enabling the sample z to pass through the generation network Gx(z) output signal samples
Figure FDA0003060267210000014
Obeying a conditional distribution p (x | z);
s3, in the countermeasure part, two discriminators are introduced to carry out bidirectional countermeasure training on hidden variables and data samples of the inference engine and the generator, and the distribution of the samples is determined;
s4, establishing a new target function, converging the target function through bidirectional confrontation training, realizing applying exclusive distribution to hidden variables of the inference engine, approximating the output of the generator to the input of the inference engine, and obeying the same distribution;
s5, inputting the output of the inference engine, namely the extracted features, in the classifier part, training the whole model by using training data, determining the network parameters of each part, inputting the test data into the inference engine to obtain the features, and identifying by the classifier to give a result.
2. The method for constructing a conditional dual-adversarial learning inference model according to claim 1, wherein in step S2, establishing a linear-combination Gaussian distribution related to the label information to constrain the K classes of features specifically comprises:

a random variable z = [z_1, z_2, …, z_d] is given, where d is the dimension of the Gaussian distribution; the linear-combination Gaussian distribution model is shown in formula (1):

$$p(z)=\sum_{k=1}^{K}\pi_k\,N(z\mid\mu_k,H_k)\qquad(1)$$

where $N(z\mid\mu_k,H_k)$ is the k-th component of the combined distribution, K is the total number of components, $\mu_k$ is the d-dimensional mean vector, $H_k$ is the covariance matrix describing the degree of correlation between the variables of each dimension within each Gaussian distribution, and $\pi_k$ is the weight of each Gaussian distribution in the linear combination, satisfying formula (2):

$$\sum_{k=1}^{K}\pi_k=1\qquad(2)$$

the joint probability density function of the k-th component is then expressed as formula (3):

$$N(z\mid\mu_k,H_k)=\frac{1}{(2\pi)^{d/2}\,\lvert H_k\rvert^{1/2}}\exp\!\left(-\frac{1}{2}(z-\mu_k)^{\mathrm{T}}H_k^{-1}(z-\mu_k)\right)\qquad(3)$$

setting d = 2, an angle variable θ is calculated according to formula (4) to constrain the K classes of samples, where label denotes the sample class and the corresponding K-dimensional class vector is y, in which the (label+1)-th element is 1 and the rest are 0; letting the unit vector be $u=[\cos\theta,\ \sin\theta]^{\mathrm{T}}$ and r be a real number, the class-related mean $\mu_k$ can be calculated by formula (5), while letting $H_k=[h_{11},0;0,h_{22}]$; sampling is performed from this distribution, and a sample z together with its corresponding y constitutes the composite sample (z, y); given K = 10, r = 6, $h_{11}=0.5$, $h_{22}=0.1$, 100 samples are drawn from each Gaussian distribution; formulas (4) and (5) are expressed as follows:

$$\theta=\frac{2\pi\cdot\mathrm{label}}{K}\qquad(4)$$

$$\mu_k=r\,u=r\,[\cos\theta,\ \sin\theta]^{\mathrm{T}}\qquad(5)$$
3. the method for constructing a conditional dual-countermeasure learning inference model according to claim 1, wherein in the countermeasure portion, two discriminators are introduced to perform bidirectional countermeasure training on hidden variables and data samples of the inference engine and the generator, and determining from which distribution the samples come, specifically comprising:
the input of the first discriminator is a complex vector, which is composed of hidden variable samples
Figure FDA0003060267210000027
And the corresponding label information vector y, and taking the composite sample (z, y) of the generator as true, the composite sample of the inference engine
Figure FDA0003060267210000028
Is false;
the input of the second discriminator is the data sample, and the output of the generator is true with the input x of the inference engine
Figure FDA0003060267210000029
Is false; if the sample is (z, y) or x, it is considered to be a true sample and the output is 1, otherwise the output is zero.
4. The method for constructing a conditional dual-adversarial learning inference model according to claim 1, wherein in step S4, the new objective function established is:

$$\min_{G_x,G_z}\ \max_{D_1,D_2}\ V(D,G)=\mathbb{E}_{p(z)}\!\left[\log D_1(z,y)\right]+\mathbb{E}_{q(x)}\!\left[\log\!\left(1-D_1(G_z(x),y)\right)\right]+\mathbb{E}_{q(x)}\!\left[\log D_2(x)\right]+\mathbb{E}_{p(z)}\!\left[\log\!\left(1-D_2(G_x(z))\right)\right]\qquad(6)$$

where E denotes expectation, q(x) and p(z) denote the probability distributions obeyed by x and z respectively, and D(·) is the output of a discriminator, used to judge the probability that its input is real; the discriminators are trained to maximize the objective, so that real data fed into a discriminator yields the largest output probability, while the optimal generation function G(·) is sought to minimize the objective value; through these two opposing processes, the two sides carry out adversarial learning; finally, training is performed using an alternating stochastic gradient descent method until convergence.
CN202110510707.7A 2021-05-11 2021-05-11 Construction method of conditional dual-adversarial learning inference model Active CN113222147B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110510707.7A CN113222147B (en) 2021-05-11 2021-05-11 Construction method of conditional dual-adversarial learning inference model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110510707.7A CN113222147B (en) 2021-05-11 2021-05-11 Construction method of conditional dual-adversarial learning inference model

Publications (2)

Publication Number Publication Date
CN113222147A (en) 2021-08-06
CN113222147B CN113222147B (en) 2024-02-13

Family

ID=77094808

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110510707.7A Active CN113222147B (en) 2021-05-11 2021-05-11 Construction method of conditional dual-adversarial learning inference model

Country Status (1)

Country Link
CN (1) CN113222147B (en)


Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019028839A (en) * 2017-08-01 2019-02-21 国立研究開発法人情報通信研究機構 Classifier, method for learning of classifier, and method for classification by classifier
US20190295302A1 (en) * 2018-03-22 2019-09-26 Northeastern University Segmentation Guided Image Generation With Adversarial Networks
JP2020030403A (en) * 2018-08-24 2020-02-27 ネイバー コーポレーションNAVER Corporation Method and system for generating interactive response by using deep-learning generation model and multi-modal distribution
CN109543745A (en) * 2018-11-20 2019-03-29 江南大学 Feature learning method and image-recognizing method based on condition confrontation autoencoder network
US20200410285A1 (en) * 2019-06-25 2020-12-31 The Board Of Trustees Of The Leland Stanford Junior University Anomaly Augmented Generative Adversarial Network
KR20210048100A (en) * 2019-10-23 2021-05-03 서울대학교산학협력단 Condition monitoring data generating apparatus and method using generative adversarial network
CN110926594A (en) * 2019-11-22 2020-03-27 北京科技大学 Method for extracting time-varying frequency characteristics of rotary machine signal
CN110889496A (en) * 2019-12-11 2020-03-17 北京工业大学 Human brain effect connection identification method based on confrontation generation network
CN111785329A (en) * 2020-07-24 2020-10-16 中国人民解放军国防科技大学 Single-cell RNA sequencing clustering method based on confrontation automatic encoder
CN112070209A (en) * 2020-08-13 2020-12-11 河北大学 Stable controllable image generation model training method based on W distance
CN112686894A (en) * 2021-03-10 2021-04-20 武汉大学 FPCB (flexible printed circuit board) defect detection method and device based on generative countermeasure network

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
BAIHONG LIN et al.: "Hyperspectral and Multispectral Image Fusion Based on Low Rank Constrained Gaussian Mixture Model", IEEE Access, vol. 6, pages 16901-16910, XP011680534, DOI: 10.1109/ACCESS.2018.2817071 *
VINCENT DUMOULIN et al.: "Adversarially learned inference", International Conference on Learning Representations (2017), pages 1-18 *
ZHANG, J H et al.: "Preliminary study on the application of abdominal aortic balloon occlusion in the treatment of cesarean scar pregnancy", Chinese Journal of Obstetrics and Gynecology, vol. 55, no. 8, pages 516-520 *
ZHANG, Jian: "Research on deep neural network algorithms based on RBM", China Doctoral Dissertations Full-text Database, Information Science and Technology, no. 2021, pages 140-75 *
LI, Bochao et al.: "A network anomalous behavior detection model with improved adversarially learned inference", Modern Electronics Technique (现代电子技术), vol. 43, no. 18, pages 14-18 *
LI, Chongxuan et al.: "Conditional probabilistic graph generative adversarial networks", Journal of Software (软件学报), vol. 31, no. 4, pages 1-7 *
ZHAO, Chuan: "Research on feature dimensionality reduction and adaptive feature extraction methods and their application in planetary gearbox fault diagnosis", China Doctoral Dissertations Full-text Database, Engineering Science and Technology II, no. 2018, pages 029-5 *

Also Published As

Publication number Publication date
CN113222147B (en) 2024-02-13

Similar Documents

Publication Publication Date Title
CN108717568B (en) A kind of image characteristics extraction and training method based on Three dimensional convolution neural network
Thai et al. Image classification using support vector machine and artificial neural network
Lee et al. Wasserstein introspective neural networks
CN106326288B (en) Image search method and device
Shao et al. Feature learning for image classification via multiobjective genetic programming
CN110826338B (en) Fine-grained semantic similarity recognition method for single-selection gate and inter-class measurement
CN112784929B (en) Small sample image classification method and device based on double-element group expansion
CN108960301B (en) Ancient Yi-nationality character recognition method based on convolutional neural network
Kalibhat et al. Winning lottery tickets in deep generative models
CN105184260A (en) Image characteristic extraction method, pedestrian detection method and device
CN107729311A (en) A kind of Chinese text feature extracting method of the fusing text tone
CN113642621A (en) Zero sample image classification method based on generation countermeasure network
CN109255339B (en) Classification method based on self-adaptive deep forest human gait energy map
Dong et al. A combined deep learning model for the scene classification of high-resolution remote sensing image
Kumar Verma et al. Generative model for zero-shot sketch-based image retrieval
Camacho et al. Convolutional neural network initialization approaches for image manipulation detection
Shan et al. A theory of neural tangent kernel alignment and its influence on training
CN113409157B (en) Cross-social network user alignment method and device
CN114332565A (en) Method for generating image by generating confrontation network text based on distribution estimation condition
CN109948589A (en) Facial expression recognizing method based on quantum deepness belief network
CN113420833A (en) Visual question-answering method and device based on question semantic mapping
CN116258504B (en) Bank customer relationship management system and method thereof
CN111767949A (en) Multi-task learning method and system based on feature and sample confrontation symbiosis
Shrivastava et al. CLIP-Lite: Information Efficient Visual Representation Learning with Language Supervision
CN111126617A (en) Method, device and equipment for selecting fusion model weight parameters

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant