CN109492662A - Zero-shot classification method based on an adversarial autoencoder model - Google Patents
Zero-shot classification method based on an adversarial autoencoder model
- Publication number
- CN109492662A CN109492662A CN201811134474.XA CN201811134474A CN109492662A CN 109492662 A CN109492662 A CN 109492662A CN 201811134474 A CN201811134474 A CN 201811134474A CN 109492662 A CN109492662 A CN 109492662A
- Authority
- CN
- China
- Prior art keywords
- classification
- visual feature
- decoder
- encoder
- parameter
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Software Systems (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Mathematical Physics (AREA)
- Computational Linguistics (AREA)
- Health & Medical Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Image Analysis (AREA)
Abstract
A zero-shot classification method based on an adversarial autoencoder model. An adversarial autoencoder network is trained on the seen classes, and the network parameters w and v that best approximate the distribution of visual features while associating visual features with class-semantic features are selected. The class-semantic feature a_t of an unseen class is then fed into the network, the decoder network G generates a visual feature from it, and the Euclidean distance between the generated visual feature and the true visual feature is computed. Finally, the class at minimum distance is taken as the predicted class, which accomplishes the zero-shot classification task. The present invention better matches the characteristics of real data: by aligning visual features with class-semantic features, it achieves better classification performance on zero-shot tasks.
Description
Technical field
The present invention relates to zero-shot classification methods, and more particularly to a zero-shot classification method based on an adversarial autoencoder model.
Background technique
Deep learning has greatly advanced computer vision tasks such as object classification, image retrieval, and action recognition. The performance of these tasks is usually assessed after training on large amounts of labeled data. However, some tasks have only a small amount of training data, or none at all, so traditional classification models perform poorly on them. To improve the performance of traditional classification models on classes with little or no data, zero-shot learning has attracted wide attention. The task of zero-shot learning (Zero-Shot Learning) is to classify classes for which no training data exist. Humans have the ability to reason: from a description of an object and prior knowledge, a person can successfully infer the class of an object never seen before. For example, given the description "a unicorn is shaped like a horse, except that a unicorn has a long horn on its head," a person can recognize a unicorn at once. Zero-shot learning identifies new classes by mimicking this human inference ability. In zero-shot learning the data are divided into two parts, training data (seen classes) and test data (unseen classes), whose class sets are disjoint. Recognition of the unseen classes is usually achieved by transferring knowledge from the seen classes to the unseen classes. To characterize the semantic associations between classes, this transfer relies on semantic features shared by the seen and unseen classes; the two commonly used kinds of class-semantic features are attribute features and text-vector features. Attribute features are annotated manually, while text-vector features are obtained from large text corpora with natural-language-processing techniques.
Images are usually represented by visual features. A semantic gap exists between visual features and semantic features, so the visual space cannot be connected to the semantic space directly. Most existing zero-shot learning methods consist of two steps: first learn a mapping function between the visual space and the semantic space; then use the learned mapping to compute the similarity between the visual feature of a test sample and the semantic features of the unseen classes, and take the most similar class as the label of the test sample.
Compared with human reasoning, these methods treat the semantic features of the seen classes as prior knowledge and the semantic features of the unseen classes as descriptions of objects. Humans, however, do not learn such a mapping function; instead, they imagine a rough outline of the unseen object in their mind and classify with it. We therefore believe that zero-shot learning can imitate this human behavior by generating visual features for the unseen classes.
A generative adversarial network (GAN) is a generative model that can learn a specific data distribution. GANs mainly address the generation problem: an image can be generated from an arbitrary sequence of random numbers. A GAN comprises two network models, a generative model G (Generator) and a discriminative model D (Discriminator). G takes random noise z as input and produces an image G(z); then G(z) and a real image x are fed into D, which performs a binary classification on G(z) and x to tell which is the real image and which is the generated fake. G and D continually improve themselves according to D's output: G tries to make G(z) as similar to x as possible so as to fool D, while D learns so as not to be fooled by G. When the generated images become indistinguishable from the real ones, that is, when D outputs 0.5, G has acquired the ability to generate images. When class information is fed into G together with the noise, images following a class-specific distribution can be generated, which is how GANs are used in zero-shot methods.
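The interplay of the two objectives just described can be sketched numerically. In the following toy example (an illustration only — the linear maps `Wg` and `Wd` and all dimensions are assumptions, not any real GAN), the discriminator loss and the generator loss are evaluated for one real and one generated sample:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    # squash a logit into a (0, 1) "probability of being real"
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear generator and discriminator (illustrative stand-ins).
Wg = rng.normal(size=(4, 2))   # maps 2-d noise z to a 4-d "image" G(z)
Wd = rng.normal(size=4)        # maps a 4-d sample to a real/fake logit

z = rng.normal(size=2)         # random noise input
x_real = rng.normal(size=4)    # a "true image"
x_fake = Wg @ z                # generated image G(z)

# Discriminator objective: maximize log D(x) + log(1 - D(G(z))).
d_loss = -(np.log(sigmoid(Wd @ x_real)) + np.log(1.0 - sigmoid(Wd @ x_fake)))
# Generator objective: fool D, i.e. maximize log D(G(z)).
g_loss = -np.log(sigmoid(Wd @ x_fake))
print(d_loss > 0, g_loss > 0)
```

At the equilibrium the text describes, D outputs 0.5 on both inputs and neither loss can be improved further.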
In zero-shot methods it is usually assumed that the training stage is given the data of the seen classes, defined by N triples {(x_i, a_i, y_i)}_{i=1}^N, where x_i ∈ R^p is the visual-feature representation of the i-th sample of the seen classes, a_i ∈ R^q is the class-semantic feature of the i-th sample, y_i ∈ Y^s is its class label, and p and q are the dimensions of the visual and semantic spaces, respectively. In the test stage, given the class-semantic features and class labels {a_t, y_t} of the unseen classes, the visual features x_t are to be classified, where y_t ∈ Y^u and Y^s ∩ Y^u = ∅. The zero-shot task is thus to train a model on the data of the seen classes and then use the trained model to predict the labels y_t of the unseen classes.
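The data layout just described can be made concrete with a small sketch (all sizes and arrays here are hypothetical placeholders, not real image features):

```python
import numpy as np

rng = np.random.default_rng(0)
p, q = 8, 4                      # visual and semantic dimensions

# Seen (training) classes Y^s and unseen (test) classes Y^u are disjoint.
seen_labels, unseen_labels = {0, 1, 2}, {3, 4}
assert seen_labels.isdisjoint(unseen_labels)

# Training triples {(x_i, a_i, y_i)}: visual feature, class-semantic feature, label.
N = 6
X = rng.normal(size=(N, p))                        # x_i in R^p
labels = rng.choice(sorted(seen_labels), size=N)   # y_i in Y^s
A_class = rng.normal(size=(5, q))                  # one semantic vector per class
A = A_class[labels]                                # a_i in R^q paired with each x_i
print(X.shape, A.shape, labels.shape)
```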
Existing generation-based methods mainly comprise the following steps:
1) Using the training samples, learn a mapping f from the class-semantic space A to the visual space X with a linear model or a deep model.
2) Using the mapping f learned on the training samples, map the true class-semantic features of the unseen classes into the visual space, obtaining a predicted visual feature for each unseen class.
3) Determine the class of an unseen-class sample from the similarity between the predicted visual features and its actual visual feature; the criterion usually used to decide the class is the nearest-neighbor rule.
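The three steps above can be sketched with a linear model. In this sketch (synthetic data only; ridge regression is one possible choice standing in for the learned mapping f), the mapping is fit in closed form and a test feature is classified by the nearest predicted prototype:

```python
import numpy as np

rng = np.random.default_rng(1)
p, q, N = 8, 4, 50

# Synthetic seen-class data generated from a ground-truth linear map W_true.
W_true = rng.normal(size=(q, p))
A_train = rng.normal(size=(N, q))
X_train = A_train @ W_true + 0.01 * rng.normal(size=(N, p))

# Step 1: learn f: A -> X by ridge regression (closed form).
lam = 1e-3
W = np.linalg.solve(A_train.T @ A_train + lam * np.eye(q), A_train.T @ X_train)

# Step 2: map each unseen class's semantic vector to a predicted visual prototype.
A_unseen = rng.normal(size=(3, q))       # semantic features of 3 unseen classes
X_proto = A_unseen @ W

# Step 3: classify a test visual feature by the nearest predicted prototype.
x_test = A_unseen[2] @ W_true            # a sample truly from unseen class 2
dists = np.linalg.norm(X_proto - x_test, axis=1)
pred = int(np.argmin(dists))
print(pred)  # expected 2
```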
However, generation-based methods suffer from the following problems. When a linear model is used to learn the mapping from the class-semantic space to the visual space, the linear model is likely to lose some discriminative information of the seen classes during training, even though that discriminative information may be contained in the unseen classes. When a deep model is used to learn the mapping, a generative adversarial network is usually employed: through adversarial learning between the generator G and the discriminator D, a generator G is trained that fits the distribution of the true visual features. But most adversarial networks focus only on making the generated distribution approach the true visual-feature distribution and ignore the correspondence between visual features and class-semantic features, so the generated visual features lack discriminative information to some extent.
Summary of the invention
The technical problem to be solved by the present invention is to provide a zero-shot classification method based on an adversarial autoencoder model that can be applied more conveniently and more accurately in image recognition and information retrieval.
The technical scheme adopted by the invention is a zero-shot classification method based on an adversarial autoencoder model, comprising the following steps:
1) Initialize the parameters r, w and v of the discriminator D, the encoder E and the decoder G.
2) Randomly select one batch of data from the visual features x and the class-semantic features a of the training samples, to serve as the inputs of the encoder E and the decoder G, respectively.
3) Train the encoder E and the decoder G according to the following adversarial autoencoder model, optimizing the model parameters with the Adam optimizer and retaining the parameters w and v that minimize the objective:

min_{w,v} ‖x − G(a)‖₂² + ‖a − E(G(a))‖₂² + λΩ(w, v)

where the first term represents the process of obtaining a visual feature from the input class-semantic feature a through the decoder G; the second term represents the process of reconstructing the class-semantic feature from the input class-semantic feature a through the decoder G followed by the encoder E; Ω(w, v) = ‖w‖₂² + ‖v‖₂² is the regularization term on the parameters of the adversarial autoencoder model; λ is the coefficient of the regularization term; and ‖·‖₂ denotes the 2-norm.
4) From the selected batch, use the trained encoder E and decoder G to obtain the three inputs x, x' and x̃ of the discriminator D, where x is the true visual feature; x' is the reconstructed visual feature, i.e. the feature obtained by passing x through the encoder E and then the decoder network G, which is also treated as a true visual feature; and x̃ is the generated visual feature, i.e. the feature obtained by passing the class-semantic feature a through the decoder network G, which is treated as a false visual feature.
5) Train the discriminator D according to the following discriminator model, optimizing the parameters with the Adam optimizer and retaining the parameter r that gives the best discriminator performance:

max_r Ε_x[log σ(D(x))] + Ε_x[log σ(D(x'))] + Ε_a[log(1 − σ(D(x̃)))]

where Ε_x and Ε_a denote expectations over the distributions of the visual features x and the class-semantic features a, respectively, log is the logarithm, and σ is the softmax function.
6) Train the decoder G against the model of the discriminator D, optimizing the parameters with the Adam optimizer and retaining the parameter v that gives the best decoder performance.
7) Repeat step 2) through step 6) a set number of times to obtain the final parameters r, w and v.
8) Feed the class-semantic features a_t of the unseen classes into the decoder G to obtain the generated visual features x̃_t of the unseen classes.
9) Following the minimum-Euclidean-distance principle, compare the distances between the generated visual features x̃_t of the unseen classes and the visual feature x_t of the test sample to obtain the predicted class label.
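The training objectives of steps 3) through 5) can be sketched with toy linear stand-ins for E, G and D (a single-sample, numpy-only illustration under assumed dimensions; the actual method uses neural networks and the Adam optimizer):

```python
import numpy as np

rng = np.random.default_rng(0)
p, q, lam = 8, 4, 1e-3          # assumed visual dim, semantic dim, regularizer weight

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Linear stand-ins for the three networks; r, w, v mirror the parameter names above.
w = 0.1 * rng.normal(size=(p, q))   # encoder E: visual -> semantic
v = 0.1 * rng.normal(size=(q, p))   # decoder G: semantic -> visual
r = 0.1 * rng.normal(size=p)        # discriminator D: visual -> logit

E = lambda x: x @ w
G = lambda a: a @ v
D = lambda x: sigmoid(x @ r)        # sigma(D(.)) folded into one map here

x = rng.normal(size=p)              # true visual feature of a training sample
a = rng.normal(size=q)              # its class-semantic feature

# Step 3 objective for E and G:
#   ||x - G(a)||^2 + ||a - E(G(a))||^2 + lam * (||w||^2 + ||v||^2)
L_EG = (np.sum((x - G(a)) ** 2)
        + np.sum((a - E(G(a))) ** 2)
        + lam * (np.sum(w ** 2) + np.sum(v ** 2)))

# Step 4: the discriminator's three inputs.
x_rec = G(E(x))                     # x': reconstruction, treated as real
x_gen = G(a)                        # x~: generated from semantics, treated as fake

# Step 5: discriminator objective (to be maximized over r).
L_D = np.log(D(x)) + np.log(D(x_rec)) + np.log(1.0 - D(x_gen))
print(L_EG > 0, np.isfinite(L_D))
```

In actual training, these two losses would be alternately optimized over many batches, as steps 6) and 7) describe.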
The zero-shot classification method based on an adversarial autoencoder model of the present invention uses the autoencoder approach to model both the process of generating visual features and the association between visual features and class-semantic features, and thereby explores the distribution of visual features more thoroughly. Its advantages are mainly reflected in the following:
(1) The present invention introduces the autoencoder into adversarial learning for the first time, constructs a network structure that generates features bidirectionally, completes the alignment between vision and semantics, and devises a zero-shot classification technique suited to the characteristics of image data.
(2) The present invention can synthesize visual features that are closer to the true distribution. The model contains an adversarial network that takes the true visual features, the visual features reconstructed from them, and the generated pseudo visual features as inputs to the discriminator, which makes the reconstructed visual features as similar to the true ones as possible. The model can therefore both associate visual features with class-semantic features and retain the vast majority of the semantic information, synthesizing more realistic visual features.
Detailed description of the invention
Fig. 1 is a flowchart of the zero-shot classification method based on an adversarial autoencoder model of the present invention.
Specific embodiment
The zero-shot classification method based on an adversarial autoencoder model of the present invention is described in detail below with reference to the embodiments and the accompanying drawing.
The zero-shot classification method based on an adversarial autoencoder model of the present invention assumes that, while visual features are generated from class-semantic features, the reverse process of generating class-semantic features from visual features should also be taken into account. Accordingly, on the basis of an adversarial network, an autoencoder is introduced; through its encoding and decoding processes the bidirectional generation is completed, achieving the goals of generating visual features and of associating visual features with class-semantic features.
An autoencoder is a kind of neural network that is trained to copy its input to its output. It consists of two parts, an encoder h = E(x) and a decoder x' = G(h), where h is the intermediate hidden layer and x and x' are the corresponding input and output. When the dimensions of x and x' equal the visual-feature dimension and the dimension of h equals the class-semantic-feature dimension, the goals of generating visual features and associating visual features with class-semantic features can be achieved.
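A minimal sketch of this encoder/decoder structure, with linear maps standing in for the networks and the dimension choices (visual dimension 8, latent/semantic dimension 4) assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
p, q = 8, 4                      # visual dim p, semantic dim q (assumed)

w = rng.normal(size=(p, q))      # encoder parameters
v = rng.normal(size=(q, p))      # decoder parameters

x = rng.normal(size=p)           # input visual feature
h = x @ w                        # h = E(x): latent code, sized like the semantic feature
x_out = h @ v                    # x' = G(h): reconstructed visual feature

print(h.shape == (q,) and x_out.shape == (p,))
```

Choosing the latent dimension equal to the semantic dimension is what lets the latent layer be supervised by the true class-semantic feature, as described below.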
The zero-shot image classification method based on the adversarial autoencoder model connects visual features and class-semantic features through a bidirectional generation process. Specifically, when the input x and output x' are visual features, the encoder E compresses the visual feature x into the latent space h, and the latent space h, supervised by the true class-semantic feature, in turn associates the visual feature with the class-semantic feature; the decoder G then reconstructs the visual feature x' from the latent-space feature, giving:

h = E(x; w),  x' = G(h; v)

where w and v are the parameters of the encoder E and the decoder G, respectively, and h is the latent-space feature.
Conversely, when the input and output are class-semantic features, the class-semantic feature is passed directly through its encoder to obtain a generated pseudo visual feature; this encoder is the network that serves as the decoder G when the input is a visual feature. The generated pseudo visual feature is then passed through its decoder to reconstruct the input class-semantic feature; this decoder corresponds to the network that serves as the encoder E when the input is a visual feature.
As shown in Fig. 1, the zero-shot classification method based on an adversarial autoencoder model of the present invention assumes that x is the visual feature of a training sample, a is the class-semantic feature of the training sample, x_t is the visual feature of an unseen class, and a_t is the class-semantic feature of an unseen class. The method comprises the following steps:
1) Initialize the parameters r, w and v of the discriminator D, the encoder E and the decoder G.
2) Randomly select one batch of data from the visual features x and the class-semantic features a of the training samples, to serve as the inputs of the encoder E and the decoder G, respectively.
3) Train the encoder E and the decoder G according to the following adversarial autoencoder model, optimizing the model parameters with the Adam optimizer and retaining the parameters w and v that minimize the objective:

min_{w,v} ‖x − G(a)‖₂² + ‖a − E(G(a))‖₂² + λΩ(w, v)

where the first term represents the process of obtaining a visual feature from the input class-semantic feature a through the decoder G; the second term represents the process of reconstructing the class-semantic feature from the input class-semantic feature a through the decoder G followed by the encoder E; Ω(w, v) = ‖w‖₂² + ‖v‖₂² is the regularization term on the parameters of the adversarial autoencoder model; λ is the coefficient of the regularization term; and ‖·‖₂ denotes the 2-norm.
4) To give the decoder G a better ability to generate visual features, the discriminator D is added. From the selected batch, use the trained encoder E and decoder G to obtain the three inputs x, x' and x̃ of the discriminator D, where x is the true visual feature; x' is the reconstructed visual feature, i.e. the feature obtained by passing x through the encoder E and then the decoder network G, which is also treated as a true visual feature; and x̃ is the generated visual feature, i.e. the feature obtained by passing the class-semantic feature a through the decoder network G, which is treated as a false visual feature.
5) Train the discriminator D according to the following discriminator model, optimizing the parameters with the Adam optimizer and retaining the parameter r that gives the best discriminator performance:

max_r Ε_x[log σ(D(x))] + Ε_x[log σ(D(x'))] + Ε_a[log(1 − σ(D(x̃)))]

where Ε_x and Ε_a denote expectations over the distributions of the visual features x and the class-semantic features a, respectively, log is the logarithm, and σ is the softmax function.
6) Train the decoder G against the model of the discriminator D, optimizing the parameters with the Adam optimizer and retaining the parameter v that gives the best decoder performance.
7) Repeat step 2) through step 6) a set number of times to obtain the final parameters r, w and v.
8) Feed the class-semantic features a_t of the unseen classes into the decoder G to obtain the generated visual features x̃_t of the unseen classes.
9) Following the minimum-Euclidean-distance principle, compare the distances between the generated visual features x̃_t of the unseen classes and the visual features x_t of the unseen classes to obtain the predicted class label.
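The test-time procedure of steps 8) and 9) can be sketched as follows (the decoder parameters here are a random placeholder for a trained decoder, and the test sample is synthesized near one generated feature purely so that the nearest-neighbor rule is visible):

```python
import numpy as np

rng = np.random.default_rng(2)
p, q = 8, 4

# Assume a trained decoder G (here a fixed illustrative linear map v).
v = rng.normal(size=(q, p))
G = lambda a: a @ v

# Step 8: generate a visual feature for each unseen class from its semantics.
A_unseen = rng.normal(size=(3, q))          # a_t for 3 unseen classes
X_gen = G(A_unseen)                         # generated visual features x~_t

# Step 9: predict by minimum Euclidean distance to the generated features.
x_test = G(A_unseen[1]) + 0.01 * rng.normal(size=p)   # sample near class 1
pred = int(np.argmin(np.linalg.norm(X_gen - x_test, axis=1)))
print(pred)  # expected 1
```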
For the zero-shot image classification task, given the visual feature x_t of an unseen class, the present invention uses the adversarial autoencoder model trained on the seen classes, selecting the parameters w and v of the encoder E and the decoder G that best approximate the distribution of visual features while associating visual features with class-semantic features. The class-semantic feature a_t of the unseen class is then fed into the decoder G to generate a visual feature, and the Euclidean distance between the generated visual feature and the true visual feature is computed. Finally, the class at minimum distance is taken as the predicted class, which accomplishes the zero-shot classification task. The method of the present invention better matches the characteristics of real data: by aligning visual features with class-semantic features, it achieves better classification performance on zero-shot tasks.
Claims (1)
1. A zero-shot classification method based on an adversarial autoencoder model, characterized by comprising the following steps:
1) initializing the parameters r, w and v of the discriminator D, the encoder E and the decoder G;
2) randomly selecting one batch of data from the visual features x and the class-semantic features a of the training samples, to serve as the inputs of the encoder E and the decoder G, respectively;
3) training the encoder E and the decoder G according to the following adversarial autoencoder model, optimizing the model parameters with the Adam optimizer and retaining the parameters w and v that minimize the objective:

min_{w,v} ‖x − G(a)‖₂² + ‖a − E(G(a))‖₂² + λΩ(w, v)

wherein the first term represents the process of obtaining a visual feature from the input class-semantic feature a through the decoder G; the second term represents the process of reconstructing the class-semantic feature from the input class-semantic feature a through the decoder G followed by the encoder E; Ω(w, v) = ‖w‖₂² + ‖v‖₂² is the regularization term on the parameters of the adversarial autoencoder model; λ is the coefficient of the regularization term; and ‖·‖₂ denotes the 2-norm;
4) from the selected batch, using the trained encoder E and decoder G to obtain the three inputs x, x' and x̃ of the discriminator D, wherein x is the true visual feature; x' is the reconstructed visual feature, i.e. the feature obtained by passing x through the encoder E and then the decoder network G, which also belongs to the true visual features; and x̃ is the generated visual feature, i.e. the feature obtained by passing the class-semantic feature a through the decoder network G, which belongs to the false visual features;
5) training the discriminator D according to the following discriminator model, optimizing the parameters with the Adam optimizer and retaining the parameter r that gives the best discriminator performance:

max_r Ε_x[log σ(D(x))] + Ε_x[log σ(D(x'))] + Ε_a[log(1 − σ(D(x̃)))]

wherein Ε_x and Ε_a denote expectations over the distributions of the visual features x and the class-semantic features a, respectively, log is the logarithm, and σ is the softmax function;
6) training the decoder G against the model of the discriminator D, optimizing the parameters with the Adam optimizer and retaining the parameter v that gives the best decoder performance;
7) repeating step 2) through step 6) a set number of times to obtain the final parameters r, w and v;
8) feeding the class-semantic features a_t of the unseen classes into the decoder G to obtain the generated visual features x̃_t of the unseen classes;
9) following the minimum-Euclidean-distance principle, comparing the distances between the generated visual features x̃_t of the unseen classes and the visual feature x_t of the test sample to obtain the predicted class label.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811134474.XA CN109492662B (en) | 2018-09-27 | 2018-09-27 | Zero-shot image classification method based on an adversarial autoencoder model
Publications (2)
Publication Number | Publication Date |
---|---|
CN109492662A true CN109492662A (en) | 2019-03-19 |
CN109492662B CN109492662B (en) | 2021-09-14 |
Family
ID=65690082
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811134474.XA Active CN109492662B (en) | 2018-09-27 | 2018-09-27 | Zero-shot image classification method based on an adversarial autoencoder model
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109492662B (en) |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110097095A (en) * | 2019-04-15 | 2019-08-06 | 天津大学 | A kind of zero sample classification method generating confrontation network based on multiple view |
CN110135459A (en) * | 2019-04-15 | 2019-08-16 | 天津大学 | A kind of zero sample classification method based on double triple depth measure learning networks |
CN110427967A (en) * | 2019-06-27 | 2019-11-08 | 中国矿业大学 | The zero sample image classification method based on embedded feature selecting semanteme self-encoding encoder |
CN110443293A (en) * | 2019-07-25 | 2019-11-12 | 天津大学 | Based on double zero sample image classification methods for differentiating and generating confrontation network text and reconstructing |
CN110580501A (en) * | 2019-08-20 | 2019-12-17 | 天津大学 | Zero sample image classification method based on variational self-coding countermeasure network |
CN110598759A (en) * | 2019-08-23 | 2019-12-20 | 天津大学 | Zero sample classification method for generating countermeasure network based on multi-mode fusion |
CN110795585A (en) * | 2019-11-12 | 2020-02-14 | 福州大学 | Zero sample image classification model based on generation countermeasure network and method thereof |
CN110826638A (en) * | 2019-11-12 | 2020-02-21 | 福州大学 | Zero sample image classification model based on repeated attention network and method thereof |
CN111914929A (en) * | 2020-07-30 | 2020-11-10 | 南京邮电大学 | Zero sample learning method |
CN112364851A (en) * | 2021-01-13 | 2021-02-12 | 北京邮电大学 | Automatic modulation recognition method and device, electronic equipment and storage medium |
CN112364894A (en) * | 2020-10-23 | 2021-02-12 | 天津大学 | Zero sample image classification method of countermeasure network based on meta-learning |
CN112487193A (en) * | 2020-12-18 | 2021-03-12 | 贵州大学 | Zero sample picture classification method based on self-encoder |
CN112733954A (en) * | 2021-01-20 | 2021-04-30 | 湖南大学 | Abnormal traffic detection method based on generation countermeasure network |
CN113111917A (en) * | 2021-03-16 | 2021-07-13 | 重庆邮电大学 | Zero sample image classification method and device based on dual self-encoders |
CN113191381A (en) * | 2020-12-04 | 2021-07-30 | 云南大学 | Image zero-order classification model based on cross knowledge and classification method thereof |
CN113269274A (en) * | 2021-06-18 | 2021-08-17 | 南昌航空大学 | Zero sample identification method and system based on cycle consistency |
CN113361611A (en) * | 2021-06-11 | 2021-09-07 | 南京大学 | Robust classifier training method under crowdsourcing task |
CN113657172A (en) * | 2021-07-20 | 2021-11-16 | 西安理工大学 | Cross-domain human body action recognition method based on semantic level domain invariant features |
WO2022110158A1 (en) * | 2020-11-30 | 2022-06-02 | Intel Corporation | Online learning method and system for action recongition |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106778804A (en) * | 2016-11-18 | 2017-05-31 | 天津大学 | The zero sample image sorting technique based on category attribute transfer learning |
CN107679556A (en) * | 2017-09-18 | 2018-02-09 | 天津大学 | The zero sample image sorting technique based on variation autocoder |
CN107977629A (en) * | 2017-12-04 | 2018-05-01 | 电子科技大学 | A kind of facial image aging synthetic method of feature based separation confrontation network |
CN108491874A (en) * | 2018-03-19 | 2018-09-04 | 天津大学 | A kind of image list sorting technique for fighting network based on production |
CN108537257A (en) * | 2018-03-26 | 2018-09-14 | 天津大学 | The zero sample classification method based on identification dictionary matrix pair |
- 2018-09-27: application CN201811134474.XA filed in China; granted as CN109492662B (status: Active)
Non-Patent Citations (5)
Title |
---|
ANDERS BOESEN LINDBO LARSEN: "Autoencoding beyond pixels using a learned similarity metric", arXiv:1512.09300v2 * |
WENLIN WANG, YUNCHEN PU, VINAY KUMAR VERMA: "Zero-Shot Learning via Class-Conditioned Deep Generative Models", arXiv:1711.05820v2 * |
YUNLONG YU, ZHONG JI: "Zero-Shot Learning via Latent Space Encoding", arXiv:1712.09300v2 * |
JI Zhong, LI Huihui, HE Yuqing: "Zero-shot multi-label image classification based on deep instance differentiation", Journal of Frontiers of Computer Science and Technology * |
PAN Xinghui: "Zero-shot image classification based on semantic attributes", China Master's Theses Full-text Database, Information Science and Technology * |
Cited By (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110135459A (en) * | 2019-04-15 | 2019-08-16 | 天津大学 | A kind of zero sample classification method based on double triple depth measure learning networks |
CN110097095A (en) * | 2019-04-15 | 2019-08-06 | 天津大学 | A kind of zero sample classification method generating confrontation network based on multiple view |
CN110097095B (en) * | 2019-04-15 | 2022-12-06 | 天津大学 | Zero sample classification method based on multi-view generation countermeasure network |
CN110135459B (en) * | 2019-04-15 | 2023-04-07 | 天津大学 | Zero sample classification method based on double-triple depth measurement learning network |
CN110427967A (en) * | 2019-06-27 | 2019-11-08 | 中国矿业大学 | The zero sample image classification method based on embedded feature selecting semanteme self-encoding encoder |
CN110443293B (en) * | 2019-07-25 | 2023-04-07 | 天津大学 | Zero sample image classification method for generating confrontation network text reconstruction based on double discrimination |
CN110443293A (en) * | 2019-07-25 | 2019-11-12 | 天津大学 | Based on double zero sample image classification methods for differentiating and generating confrontation network text and reconstructing |
CN110580501A (en) * | 2019-08-20 | 2019-12-17 | 天津大学 | Zero sample image classification method based on variational self-coding countermeasure network |
CN110580501B (en) * | 2019-08-20 | 2023-04-25 | 天津大学 | Zero sample image classification method based on variational self-coding countermeasure network |
CN110598759A (en) * | 2019-08-23 | 2019-12-20 | 天津大学 | Zero sample classification method for generating countermeasure network based on multi-mode fusion |
CN110795585A (en) * | 2019-11-12 | 2020-02-14 | 福州大学 | Zero sample image classification model based on generation countermeasure network and method thereof |
CN110826638B (en) * | 2019-11-12 | 2023-04-18 | 福州大学 | Zero sample image classification model based on repeated attention network and method thereof |
CN110826638A (en) * | 2019-11-12 | 2020-02-21 | 福州大学 | Zero sample image classification model based on repeated attention network and method thereof |
CN110795585B (en) * | 2019-11-12 | 2022-08-09 | 福州大学 | Zero sample image classification system and method based on generation countermeasure network |
CN111914929B (en) * | 2020-07-30 | 2022-08-23 | 南京邮电大学 | Zero sample learning method |
CN111914929A (en) * | 2020-07-30 | 2020-11-10 | 南京邮电大学 | Zero sample learning method |
CN112364894A (en) * | 2020-10-23 | 2021-02-12 | 天津大学 | Zero sample image classification method of countermeasure network based on meta-learning |
WO2022110158A1 (en) * | 2020-11-30 | 2022-06-02 | Intel Corporation | Online learning method and system for action recognition |
CN113191381A (en) * | 2020-12-04 | 2021-07-30 | 云南大学 | Image zero-order classification model based on cross knowledge and classification method thereof |
CN112487193A (en) * | 2020-12-18 | 2021-03-12 | 贵州大学 | Zero sample picture classification method based on self-encoder |
CN112487193B (en) * | 2020-12-18 | 2022-11-22 | 贵州大学 | Zero sample picture classification method based on self-encoder |
CN112364851B (en) * | 2021-01-13 | 2021-11-02 | 北京邮电大学 | Automatic modulation recognition method and device, electronic equipment and storage medium |
CN112364851A (en) * | 2021-01-13 | 2021-02-12 | 北京邮电大学 | Automatic modulation recognition method and device, electronic equipment and storage medium |
CN112733954A (en) * | 2021-01-20 | 2021-04-30 | 湖南大学 | Abnormal traffic detection method based on generation countermeasure network |
CN113111917A (en) * | 2021-03-16 | 2021-07-13 | 重庆邮电大学 | Zero sample image classification method and device based on dual self-encoders |
CN113361611A (en) * | 2021-06-11 | 2021-09-07 | 南京大学 | Robust classifier training method under crowdsourcing task |
CN113361611B (en) * | 2021-06-11 | 2023-12-12 | 南京大学 | Robust classifier training method under crowdsourcing task |
CN113269274A (en) * | 2021-06-18 | 2021-08-17 | 南昌航空大学 | Zero sample identification method and system based on cycle consistency |
CN113657172A (en) * | 2021-07-20 | 2021-11-16 | 西安理工大学 | Cross-domain human body action recognition method based on semantic level domain invariant features |
CN113657172B (en) * | 2021-07-20 | 2023-08-01 | 西安理工大学 | Cross-domain human body action recognition method based on constant characteristics of semantic level field |
Also Published As
Publication number | Publication date |
---|---|
CN109492662B (en) | 2021-09-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109492662A (en) | A kind of zero sample classification method based on confrontation self-encoding encoder model | |
Xu et al. | Imagereward: Learning and evaluating human preferences for text-to-image generation | |
CN107766447B (en) | Method for solving video question-answer by using multilayer attention network mechanism | |
CN110097094B (en) | Multiple semantic fusion few-sample classification method for character interaction | |
CN109189933A (en) | A kind of method and server of text information classification | |
Li et al. | Universal sketch perceptual grouping | |
CN106202352A (en) | The method that indoor furniture style based on Bayesian network designs with colour match | |
CN108416065A (en) | Image based on level neural network-sentence description generates system and method | |
CN110826638A (en) | Zero sample image classification model based on repeated attention network and method thereof | |
CN109902912B (en) | Personalized image aesthetic evaluation method based on character features | |
CN111209384A (en) | Question and answer data processing method and device based on artificial intelligence and electronic equipment | |
US20150147728A1 (en) | Self Organizing Maps (SOMS) for Organizing, Categorizing, Browsing and/or Grading Large Collections of Assignments for Massive Online Education Systems | |
CN110097095A (en) | A kind of zero sample classification method generating confrontation network based on multiple view | |
CN110321870A (en) | A kind of vena metacarpea recognition methods based on LSTM | |
CN113672720A (en) | Power audit question and answer method based on knowledge graph and semantic similarity | |
CN111598252B (en) | University computer basic knowledge problem solving method based on deep learning | |
CN116704085A (en) | Avatar generation method, apparatus, electronic device, and storage medium | |
CN111598153A (en) | Data clustering processing method and device, computer equipment and storage medium | |
CN108268629A (en) | Image Description Methods and device, equipment, medium, program based on keyword | |
Hu et al. | Leveraging sub-class discimination for compositional zero-shot learning | |
CN114186497B (en) | Intelligent analysis method, system, equipment and medium for value of art work | |
US11734389B2 (en) | Method for generating human-computer interactive abstract image | |
CN115690276A (en) | Video generation method and device of virtual image, computer equipment and storage medium | |
Borges et al. | Automated generation of synthetic in-car dataset for human body pose detection | |
CN112507879A (en) | Evaluation method, evaluation device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |