CN109582960A - Zero-shot learning method based on structured association semantic embedding - Google Patents
Zero-shot learning method based on structured association semantic embedding
- Publication number
- CN109582960A, CN201811423691.0A, CN201811423691A
- Authority
- CN
- China
- Prior art keywords
- classification
- semantic embedding
- indicate
- pseudo
- space
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Computational Biology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Machine Translation (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The present invention provides a zero-shot learning method based on structured association semantic embedding. The method constructs a semantic embedding space by extracting the attribute features of input images, mines semantic embedding association relations in that space to build a locally linear association graph structure, then applies structured constraints to the classification models of the known classes in the semantic embedding space, and finally synthesizes models for the unseen classes, enabling effective prediction on samples of untrained classes. While mining the semantic embedding information of similar classes, the present invention preserves the global structure of the semantic space, obtains effective semantic representations, and effectively improves the class-prediction accuracy for image samples of unseen classes.
Description
Technical field
The present invention relates to the technical field of computer vision, and in particular to a zero-shot learning method based on structured association semantic embedding.
Background technique
In the field of computer vision, zero-shot learning is an extremely important and challenging problem. Supervised learning methods have achieved remarkable results in many fields, yet supervised learning is highly dependent on large-scale annotated data. By contrast, zero-shot learning acquires knowledge from samples of trained classes, transfers it to samples of untrained classes, and thereby discriminates samples of untrained classes. To achieve this goal, knowledge transfer between trained-class samples and untrained-class samples, and an effective description of the untrained-class samples, are of paramount importance.
A literature search of the prior art reveals the following. C. H. Lampert, H. Nickisch and S. Harmeling, in "Attribute-based classification for zero-shot visual object categorization", published in IEEE Transactions on Pattern Analysis and Machine Intelligence in 2014, proposed a method for learning attribute classifiers: one classifier is learned per attribute, and samples of unseen classes are discriminated from the outputs of the attribute classifiers on the test data. However, biases are introduced when the classifiers are built, which harms the discrimination of unseen-class samples. To address this problem, researchers have proposed to measure the correlation between an image and a class embedding directly, learning a linear or nonlinear compatibility function that associates images with class embeddings. Methods of this kind, however, drift when the training and test data distributions are inconsistent, so researchers have attempted to solve the zero-shot learning problem with fusion-model-based methods, treating a class embedding as a combination of seen-class components. M. Norouzi et al., in "Zero-shot learning by convex combination of semantic embeddings", published at the International Conference on Learning Representations in 2014, proposed to synthesize the semantic embedding of an unseen class from a weighted combination of seen-class semantic embeddings, the combination weights being the probability outputs of pre-trained classifiers. This method, however, requires classifiers to be trained in advance, which is time-consuming and inefficient. In addition, S. Changpinyo et al., in "Synthesized classifiers for zero-shot learning", published at the IEEE Conference on Computer Vision and Pattern Recognition in 2016, proposed to realize knowledge transfer between semantic embeddings and classifiers via pseudo classes, and to synthesize new classifiers from convex combinations of the pseudo classes. This method ignores the local structure of the semantic embeddings, although that local structure could provide more information for synthesizing classifiers accurately.
Summary of the invention
In view of the defects in the prior art, the object of the present invention is to provide a zero-shot learning method based on structured association semantic embedding.
A zero-shot learning method based on structured association semantic embedding provided according to the present invention comprises:
Step 1: extracting the attribute features of known-class images;
Step 2: constructing a semantic embedding space from the attribute features, and constructing the model space formed by the classification models of the known-class samples;
Step 3: introducing R pseudo classes in the semantic embedding space and the model space, serving as carriers of relationship transfer between the semantic embedding space and the model space;
Step 4: computing the squared Euclidean distance between the true class of each known-class image and the pseudo classes, and deriving connection weights from the squared Euclidean distances;
Step 5: in the model space, rebuilding the graph structure of the semantic embedding space according to the connection weights;
Step 6: obtaining, for each known-class image, the R_n pseudo classes whose connection weights to its true class lie within a preset range;
Step 7: constructing an objective function so that each semantic embedding in the semantic embedding space is synthesized from its R_n pseudo classes;
Step 8: constructing classification models, wherein each classification model is synthesized from its R_n adjacent pseudo classes;
Step 9: according to the objective function, learning the R pseudo classes of the semantic embedding space and the model space and the R_n pseudo classes, so that the loss of the classification models is minimized;
Step 10: extracting the attribute features of an unknown-class image, representing them by R_n pseudo classes, and obtaining the graph structure of the unknown-class image;
Step 11: projecting the graph structure of the unknown-class image from the semantic embedding space to the model space to obtain the projected pseudo classes, and synthesizing the classification model of the unknown-class image from the projected pseudo classes.
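For illustration only (not part of the claimed method), Steps 3 through 6 amount to a nearest-neighbour computation between class semantic embeddings and pseudo classes. The following minimal, non-limiting sketch shows one plausible instantiation: the exponential normalisation of the squared Euclidean distances is an assumption, since the description only states that the squared Euclidean distance serves as the similarity measure, and all names are illustrative.

```python
import numpy as np

def connection_weights(S_emb, P, Rn):
    """For each semantic embedding s_i (rows of S_emb), find its Rn nearest
    pseudo classes (rows of P) under squared Euclidean distance and derive
    connection weights w_mi. The exp(-d^2)/normalisation form is an
    assumption; the patent only states that a larger w_mi means a greater
    similarity between pseudo class p_m and semantic embedding s_i."""
    d2 = ((S_emb[:, None, :] - P[None, :, :]) ** 2).sum(-1)  # (S, R) squared distances
    idx = np.argsort(d2, axis=1)[:, :Rn]                     # Rn nearest pseudo classes
    W = np.zeros_like(d2)
    for i, nbrs in enumerate(idx):
        sim = np.exp(-d2[i, nbrs])          # larger weight <=> more similar
        W[i, nbrs] = sim / sim.sum()        # normalise over the neighbourhood
    return W, idx
```

Each row of `W` is non-zero only on that embedding's R_n nearest pseudo classes, which is what makes the resulting association graph locally linear.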
Optionally, step 4 comprises:
Suppose that, among the training samples, the number of known classes is S, the number of unknown classes is U, and the number of pseudo classes is R. Define the coordinate of each class i as s_i (i = 1, 2, ..., S+U). Assume that all classes of the training samples lie on the same manifold in the semantic embedding space, that the introduced pseudo classes p_m (m = 1, 2, ..., R) preserve the correlations between the attributes of the training samples, and that each semantic embedding together with its R_n neighbouring pseudo classes lies on a locally linear patch of the semantic-embedding-space manifold. The squared Euclidean distance is taken as the similarity measure, and the connection weight between each s_i and a neighbouring pseudo class p_m is computed as follows:
wherein s_i denotes the semantic embedding of class i, which has R_n neighbouring pseudo classes; p_m denotes one of the R_n nearest pseudo classes of s_i; and w_mi denotes the connection weight between each s_i and its neighbouring pseudo class p_m, a larger value of w_mi indicating a greater similarity between pseudo class p_m and semantic embedding s_i.
According to the R_n neighbouring pseudo classes of each semantic embedding, the class-association graph of the semantic embedding space is constructed. The vertex set of the class-association graph comprises the attributes of the semantic embeddings and the introduced pseudo classes, and the edge set comprises the edges linking the attributes and the pseudo classes. The neighbouring semantic embedding representation of each pseudo class is as follows:
wherein μ_mi denotes the weight between a semantic embedding and a pseudo class.
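As a non-limiting illustration, the representation of a semantic embedding by its neighbouring pseudo classes can be sketched with locally-linear-embedding-style reconstruction weights. The least-squares form with a sum-to-one constraint is an assumption — the patent states only that each embedding lies on a locally linear patch spanned by its neighbours — and the regularisation constant is illustrative.

```python
import numpy as np

def reconstruction_weights(s_i, neighbors):
    """Weights mu expressing a semantic embedding s_i as a linear
    combination of its neighbouring pseudo classes (rows of `neighbors`),
    in the spirit of locally linear embedding. Assumed: least-squares
    barycentric weights constrained to sum to one."""
    Z = neighbors - s_i                               # shift neighbourhood to the origin
    G = Z @ Z.T                                       # local Gram matrix
    G += np.eye(len(neighbors)) * 1e-6 * np.trace(G)  # regularise for stability
    mu = np.linalg.solve(G, np.ones(len(neighbors)))
    return mu / mu.sum()                              # enforce sum(mu) == 1
```

When s_i lies in the affine hull of its neighbours, `mu @ neighbors` reconstructs s_i exactly, which is the locally linear property the graph structure is meant to preserve.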
Optionally, step 8 comprises:
Define the classification model corresponding to each class i as u_i (i = 1, 2, ..., S+U), and assume that all classification models lie on the same manifold. Analogously, R pseudo classes are introduced on this manifold, and each classification model u_i can then be represented by the weights w_mi of the semantic embedding space and the R_n adjacent pseudo classes t_m (m = 1, 2, ..., R):
wherein u_i denotes a classification model; t_m denotes a pseudo class; and w_mi denotes the connection weight between each s_i and its neighbouring pseudo class p_m.
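For illustration, reusing the semantic-space weights to combine the model-space pseudo classes is a one-line operation. A plain weighted sum is assumed here, since the description says each u_i is "represented by" the weights w_mi and the adjacent pseudo classes t_m without giving the exact formula.

```python
import numpy as np

def synthesize_classifier(w_row, T):
    """Synthesize the classifier u_i of class i as a weighted combination of
    the pseudo-class models t_m (rows of T), reusing the connection weights
    w_mi computed in the semantic embedding space. Assumed linear form:
    u_i = sum_m w_mi * t_m."""
    return w_row @ T
```

Because `w_row` is non-zero only on the R_n adjacent pseudo classes, the synthesized classifier depends only on the local neighbourhood, mirroring the graph structure of the semantic space.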
Optionally, the objective function is formulated as follows:
wherein the minimization denotes learning the pseudo classes t_m of the model space and the weights μ_mi of the semantic embedding space so that the multiclass classification loss is minimized; R denotes the total number of pseudo classes; S denotes the total number of seen classes; R_n denotes the number of pseudo classes neighbouring each class i; x_k denotes the input data; λ denotes the weighting coefficient of the classification models; and γ denotes the weighting coefficient of the weights.
The first term of the objective function is the squared hinge loss of a one-vs-rest multiclass strategy, l(x, y; u) = max(0, 1 − y uᵀx)², wherein u_i denotes the classifier of class i and x_k denotes the input data; it ensures that the classifiers focus on the overall classification error and drives the total classification error towards its minimum.
The indicator function I(y_k = i) indicates whether class y_k is identical to class i, taking the value 1 if they are identical and −1 otherwise.
The second term of the objective function is a constraint on the classification models u_i that prevents overfitting.
The third term of the objective function constrains the weights μ_mi of the semantic embedding space so that they tend to be sparse. |·| denotes the modulus operation and ‖·‖ denotes the norm operation.
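The first term of the objective can be sketched as follows, a non-limiting illustration of the one-vs-rest squared hinge loss with the ±1 indicator described above; array shapes and names are illustrative.

```python
import numpy as np

def ovr_squared_hinge(U, X, y):
    """One-vs-rest squared hinge loss, l(x, y; u) = max(0, 1 - y u^T x)^2,
    where the indicator takes +1 for the true class and -1 otherwise.
    U: (C, D) class classifiers; X: (N, D) inputs; y: (N,) integer labels."""
    scores = X @ U.T                           # (N, C) classifier responses
    signs = -np.ones_like(scores)
    signs[np.arange(len(y)), y] = 1.0          # indicator I(y_k = i)
    margins = np.maximum(0.0, 1.0 - signs * scores)
    return (margins ** 2).sum() / len(y)       # average total classification error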
Optionally, when class correlations are taken into account, the method instead adopts the Crammer-Singer multiclass support vector machine strategy in the experiments; that is, the first term of the objective function, l(x, y; u) = max(0, 1 − y uᵀx)², is replaced with a structured loss l_struct,
wherein i ∈ S − {y_k} indicates that class i belongs to the set of known classes S excluding class y_k; the maximization over i seeks the class whose difference from class y_k in the semantic embedding space and the model space is largest; s_g denotes the semantic embedding of class g in the semantic embedding space; u_g denotes the classifier corresponding to class g; x_k denotes the input data; u_i denotes the classifier of class i; u_{y_k} denotes the classifier of class y_k; l_struct denotes the Crammer-Singer multiclass loss that takes class correlations into account; s_i denotes the semantic embedding of class i; s_{y_k} denotes the semantic embedding of class y_k; and ‖s_i − s_{y_k}‖ denotes the norm of their difference.
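As a non-limiting illustration, a margin-rescaled structured loss matching the description — the required margin between class y_k and a wrong class i grows with the semantic distance ‖s_i − s_{y_k}‖ — can be sketched as follows. The exact margin function is an assumption, since the formula itself is not reproduced in this text.

```python
import numpy as np

def crammer_singer_struct(U, S_emb, x, y):
    """Crammer-Singer multiclass loss with a structured, semantics-aware
    margin (assumed margin-rescaled form): semantically distant classes
    must be separated from the true class y by a larger score gap.
    U: (C, D) classifiers; S_emb: (C, D_sem) semantic embeddings."""
    scores = U @ x                                      # (C,) classifier responses
    margins = np.linalg.norm(S_emb - S_emb[y], axis=1)  # Delta(i, y) = ||s_i - s_y||
    cand = margins + scores - scores[y]                 # margin-rescaled violations
    cand[y] = 0.0                                       # the true class incurs no loss
    return max(0.0, cand.max())                         # worst violating class i != y
```

Unlike the one-vs-rest term, this loss only penalizes the single worst-violating class, and the penalty threshold itself depends on the semantic embedding space, which is how class correlations enter the objective.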
Optionally, the objective function is solved by alternating optimization.
Compared with the prior art, the present invention has the following beneficial effects:
The zero-shot learning method based on structured association semantic embedding provided by the present invention builds on the assumptions of the zero-shot learning framework and exploits richer structured associations in the semantic embedding space. Under the assumption that the semantic representations of similar classes project to nearby positions in the embedding space, it helps predict the classifiers of unseen classes. While mining the semantic embedding information of similar classes, the present invention preserves the global structure of the semantic space, obtains effective semantic representations, and effectively improves the class-prediction accuracy for image samples of unseen classes.
In optional schemes, the present invention takes into account that the embeddings of similar classes are adjacent and associated, exploits structured associations, and proposes to construct locally linear connected graph structures for the unseen classes so as to predict new classifiers. While exploiting local association relations, the present invention retains the global structure and, by reinforcing the influence of adjacent embeddings, obtains more effective semantic information representations.
Brief description of the drawings
Other features, objects and advantages of the present invention will become more apparent upon reading the detailed description of the non-limiting embodiments with reference to the following drawings:
Fig. 1 is a schematic structural diagram of the principle of the zero-shot learning method based on structured association semantic embedding provided by an embodiment of the present invention;
Fig. 2 is a schematic diagram of the process of structured relationship transfer in an embodiment of the present invention.
Specific embodiments
The present invention is described in detail below with reference to specific embodiments. The following embodiments will help those skilled in the art to further understand the present invention, but do not limit the invention in any way. It should be pointed out that, for those of ordinary skill in the art, several changes and improvements can be made without departing from the inventive concept, and these all fall within the protection scope of the present invention.
Fig. 1 is a schematic structural diagram of the principle of the zero-shot learning method based on structured association semantic embedding provided by an embodiment of the present invention. As shown in Fig. 1, the present embodiment comprises class-relation construction in the manifold space, structure-constrained classification models, and unseen-class model synthesis, wherein: the class relations are constructed by mining the structured relations of the semantic embeddings of similar classes and considering the local adjacency of similar classes, so as to construct for the unseen-class samples a graph structure that retains the locally linear relations; the structure-constrained classification models introduce pseudo classes in the semantic space and, via the pseudo classes, project the graph structure of the semantic space into the model space formed by the seen-class classifiers; and the unseen-class model synthesis reconstructs the graph structure in the model space using the projected pseudo classes and the weights of the semantic space, and synthesizes the models of the unseen classes. The process of structured relationship transfer is shown in Fig. 2.
During training, the method first computes the R_n nearest neighbours of each semantic embedding and forms the locally linear association graph structure; the pseudo classes in the semantic space are then projected into the model space. In the prediction phase, the semantic embedding of each unseen class is represented by its R_n nearest pseudo classes; the pseudo classes are then projected from the semantic embedding space into the model space while the weights of the semantic embedding space are retained; and the graph model is reconstructed in the model space to synthesize new classifiers and discriminate the unseen classes. Since overlapping local structures provide global aggregate information, the method retains the global structure of the semantic space while exploiting the local distribution of the semantic embeddings. When the local semantic representations are projected into the model space, the global nonlinear structure is likewise recovered from the local connection relations. The optimization objective of the training process is as follows:
wherein the minimization denotes learning the pseudo classes t_m of the model space and the weights μ_mi of the semantic space so that the multiclass classification loss is minimized; R denotes the total number of pseudo classes; S denotes the total number of seen classes; R_n denotes the number of pseudo classes nearest to each class i; x_k denotes the input data; λ denotes the weighting coefficient of the classification models; and γ denotes the weighting coefficient of the weights. The first term of the objective function is the squared hinge loss of a one-vs-rest multiclass strategy, l(x, y; u) = max(0, 1 − y uᵀx)², wherein y denotes the class label, u denotes the classifier and x denotes the input data; it ensures that the classifiers focus on the overall classification error and drives the total classification error towards its minimum. The indicator function I(y_k = i) indicates whether class y_k is identical to class i, taking the value 1 if they are identical and −1 otherwise. The second term of the objective function is a constraint on the classification models u_i that prevents overfitting; the third term constrains the weights μ_mi of the semantic space so that they tend to be sparse. |·| denotes the modulus operation and ‖·‖ denotes the norm operation. Further, to take class correlations into account, the experiments also replaced the first term of the objective function with the Crammer-Singer multiclass support vector machine loss, namely
wherein i ∈ S − {y_k} indicates that class i belongs to the set of known classes S excluding class y_k; the maximization over i seeks the class whose difference from class y_k in the semantic space and the model space is largest; s_g denotes the semantic embedding of class g in the semantic space; u_g denotes the classifier corresponding to class g; and x_k denotes the input data.
The present invention obtains class-level attribute representations by averaging the attribute representations of images of the same class. All variables are randomly initialized, and the hyperparameters in the objective function are obtained by cross-validation on the training data of each dataset. The cross-validation strategy in this embodiment splits the training data by class, rather than splitting the training data directly in the conventional way. In addition, edges that are not connected in the constructed graph model are assigned a distance of ∞. Further, the number of neighbouring pseudo classes R_n is initially set to a value several times smaller than the total number of pseudo classes R, and its range is then gradually narrowed according to the variation of the classification accuracy to obtain a more accurate result.
In this embodiment, training and testing are carried out on the AWA dataset and the SUN dataset respectively. The AWA dataset concerns animal attributes; it contains 85 binary attributes and 30,475 pictures in 50 classes. The SUN dataset concerns scene attributes; it contains 102 attributes and 14,340 pictures in 717 scene classes. In this embodiment, the semantic-space attribute features used for the AWA dataset are extracted with AlexNet, and those used for the SUN dataset are extracted with GoogleNet. The effectiveness and generalization of the semantic representations extracted by the method are assessed by the multiclass classification accuracy on images of untrained classes. The zero-shot learning method based on structured association semantic embedding of the present invention is compared, in terms of classification accuracy, with the classifier-synthesis strategy proposed by S. Changpinyo et al. in "Synthesized classifiers for zero-shot learning", under both the one-vs-rest loss and the structured SVM loss.
Results: with the zero-shot learning method based on structured association semantic embedding, the classification accuracy on the test samples of the AWA dataset reaches 72.3% and 74.6% under the one-vs-rest and structured-SVM losses respectively, which is 2.6% and 2.3% higher than the classifier-synthesis method; on the SUN dataset, the test-sample classification accuracy reaches 62.4% and 62.7% under the two losses respectively, 0.3% and 0.3% higher than the classifier-synthesis method. Because the structured associations of the semantic embeddings are taken into account, the method of the present invention achieves higher classification accuracy under both loss conditions, and the experimental results confirm the effectiveness of the proposed method.
Specific embodiments of the present invention have been described above. It is to be understood that the present invention is not limited to the particular embodiments described above; those skilled in the art can make various changes or modifications within the scope of the claims without affecting the substantive content of the present invention. In the absence of conflict, the embodiments of the present application and the features in the embodiments may be combined with each other arbitrarily.
Claims (6)
1. A zero-shot learning method based on structured association semantic embedding, characterized by comprising:
Step 1: extracting the attribute features of known-class images;
Step 2: constructing a semantic embedding space from the attribute features, and constructing the model space formed by the classification models of the known-class samples;
Step 3: introducing R pseudo classes in the semantic embedding space and the model space, serving as carriers of relationship transfer between the semantic embedding space and the model space;
Step 4: computing the squared Euclidean distance between the true class of each known-class image and the pseudo classes, and deriving connection weights from the squared Euclidean distances;
Step 5: in the model space, rebuilding the graph structure of the semantic embedding space according to the connection weights;
Step 6: obtaining, for each known-class image, the R_n pseudo classes whose connection weights to its true class lie within a preset range;
Step 7: constructing an objective function so that each semantic embedding in the semantic embedding space is synthesized from its R_n pseudo classes;
Step 8: constructing classification models, wherein each classification model is synthesized from its R_n adjacent pseudo classes;
Step 9: according to the objective function, learning the R pseudo classes of the semantic embedding space and the model space and the R_n pseudo classes, so that the loss of the classification models is minimized;
Step 10: extracting the attribute features of an unknown-class image, representing them by R_n pseudo classes, and obtaining the graph structure of the unknown-class image;
Step 11: projecting the graph structure of the unknown-class image from the semantic embedding space to the model space to obtain the projected pseudo classes, and synthesizing the classification model of the unknown-class image from the projected pseudo classes.
2. The zero-shot learning method based on structured association semantic embedding according to claim 1, characterized in that step 4 comprises:
supposing that, among the training samples, the number of known classes is S, the number of unknown classes is U, and the number of pseudo classes is R; defining the coordinate of each class i as s_i (i = 1, 2, ..., S+U); assuming that all classes of the training samples lie on the same manifold in the semantic embedding space, that the introduced pseudo classes p_m (m = 1, 2, ..., R) preserve the correlations between the attributes of the training samples, and that each semantic embedding together with its R_n neighbouring pseudo classes lies on a locally linear patch of the semantic-embedding-space manifold; taking the squared Euclidean distance as the similarity measure; and computing the connection weight between each s_i and a neighbouring pseudo class p_m as follows:
wherein s_i denotes the semantic embedding of class i, which has R_n neighbouring pseudo classes; p_m denotes one of the R_n nearest pseudo classes of s_i; and w_mi denotes the connection weight between each s_i and its neighbouring pseudo class p_m, a larger value of w_mi indicating a greater similarity between pseudo class p_m and semantic embedding s_i;
constructing the class-association graph of the semantic embedding space according to the R_n neighbouring pseudo classes of each semantic embedding, wherein the vertex set of the class-association graph comprises the attributes of the semantic embeddings and the introduced pseudo classes, and the edge set comprises the edges linking the attributes and the pseudo classes; and representing the neighbouring semantic embeddings of each pseudo class as follows:
wherein μ_mi denotes the weight between a semantic embedding and a pseudo class.
3. The zero-shot learning method based on structured association semantic embedding according to claim 1, characterized in that step 8 comprises:
defining the classification model corresponding to each class i as u_i (i = 1, 2, ..., S+U) and assuming that all classification models lie on the same manifold; analogously, introducing R pseudo classes on this manifold, whereupon each classification model u_i can be represented by the weights w_mi of the semantic embedding space and the R_n adjacent pseudo classes t_m (m = 1, 2, ..., R):
wherein u_i denotes a classification model; t_m denotes a pseudo class; and w_mi denotes the connection weight between each s_i and its neighbouring pseudo class p_m.
4. The zero-shot learning method based on structured association semantic embedding according to any one of claims 1-3, characterized in that the objective function is formulated as follows:
wherein the minimization denotes learning the pseudo classes t_m of the model space and the weights μ_mi of the semantic embedding space so that the multiclass classification loss is minimized; R denotes the total number of pseudo classes; S denotes the total number of seen classes; R_n denotes the number of pseudo classes neighbouring each class i; x_k denotes the input data; λ denotes the weighting coefficient of the classification models; and γ denotes the weighting coefficient of the weights;
the first term of the objective function is the squared hinge loss of a one-vs-rest multiclass strategy, l(x, y; u) = max(0, 1 − y uᵀx)², wherein u_i denotes the classifier of class i and x_k denotes the input data; it ensures that the classifiers focus on the overall classification error and drives the total classification error towards its minimum;
the indicator function I(y_k = i) indicates whether class y_k is identical to class i, taking the value 1 if they are identical and −1 otherwise; the second term of the objective function is a constraint on the classification models u_i that prevents overfitting;
the third term of the objective function constrains the weights μ_mi of the semantic embedding space so that they tend to be sparse; |·| denotes the modulus operation and ‖·‖ denotes the norm operation.
5. The zero-shot learning method based on structured association semantic embedding according to claim 4, characterized in that, when class correlations are taken into account, the method instead adopts the Crammer-Singer multiclass support vector machine strategy in the experiments; that is, the first term of the objective function, l(x, y; u) = max(0, 1 − y uᵀx)², is replaced with a structured loss l_struct,
wherein i ∈ S − {y_k} indicates that class i belongs to the set of known classes S excluding class y_k; the maximization over i seeks the class whose difference from class y_k in the semantic embedding space and the model space is largest; s_g denotes the semantic embedding of class g in the semantic embedding space; u_g denotes the classifier corresponding to class g; x_k denotes the input data; u_i denotes the classifier of class i; u_{y_k} denotes the classifier of class y_k; l_struct denotes the Crammer-Singer multiclass loss that takes class correlations into account; s_i denotes the semantic embedding of class i; s_{y_k} denotes the semantic embedding of class y_k; and ‖s_i − s_{y_k}‖ denotes the norm of their difference.
6. The zero-shot learning method based on structured association semantic embedding according to claim 4, characterized in that the objective function is solved by alternating optimization.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811423691.0A CN109582960B (en) | 2018-11-27 | 2018-11-27 | Zero example learning method based on structured association semantic embedding |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811423691.0A CN109582960B (en) | 2018-11-27 | 2018-11-27 | Zero example learning method based on structured association semantic embedding |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109582960A true CN109582960A (en) | 2019-04-05 |
CN109582960B CN109582960B (en) | 2020-11-24 |
Family
ID=65924782
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811423691.0A Active CN109582960B (en) | 2018-11-27 | 2018-11-27 | Zero example learning method based on structured association semantic embedding |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109582960B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9278255B2 (en) * | 2012-12-09 | 2016-03-08 | Arris Enterprises, Inc. | System and method for activity recognition |
CN105701504A (en) * | 2016-01-08 | 2016-06-22 | Tianjin University | Multimodal manifold embedding method for zero-shot learning
CN107563444A (en) * | 2017-09-05 | 2018-01-09 | Zhejiang University | Zero-shot image classification method and system
US20180197050A1 (en) * | 2016-12-30 | 2018-07-12 | Hrl Laboratories, Llc | Zero-shot learning using multi-scale manifold alignment |
CN108399421A (en) * | 2018-01-31 | 2018-08-14 | Nanjing University of Posts and Telecommunications | Deep zero-shot classification method based on word embedding
WO2018161217A1 (en) * | 2017-03-06 | 2018-09-13 | Nokia Technologies Oy | A transductive and/or adaptive max margin zero-shot learning method and system |
- 2018-11-27: Application CN201811423691.0A filed in China (CN); granted as CN109582960B, legal status Active
Non-Patent Citations (3)
Title |
---|
HINTON G E, ET AL.: "Stochastic neighbor embedding", 《ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS》 * |
LIN G,ET AL.: "Structure propagation for zero-shot learning", 《ARXIV PREPRINT ARXIV》 * |
SORAVIT CHANGPINYO, ET AL.: "Synthesized Classifiers for Zero-Shot Learning", 《2016 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR)》 * |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020206876A1 (en) * | 2019-04-08 | 2020-10-15 | Tsinghua University | Method and device for constructing graph convolutional neural network for learning disentangled representation |
CN110309875A (en) * | 2019-06-28 | 2019-10-08 | Harbin Engineering University | Zero-shot object classification method based on pseudo-sample feature synthesis |
CN112418351A (en) * | 2020-12-11 | 2021-02-26 | Tianjin University | Zero-shot learning image classification method based on global and local context awareness |
CN112418351B (en) * | 2020-12-11 | 2023-04-07 | Tianjin University | Zero-shot learning image classification method based on global and local context awareness |
CN112529772A (en) * | 2020-12-18 | 2021-03-19 | Shenzhen Longgang Intelligent Audiovisual Research Institute | Unsupervised image translation method under the zero-shot setting |
CN112529772B (en) * | 2020-12-18 | 2024-05-28 | Shenzhen Longgang Intelligent Audiovisual Research Institute | Unsupervised image translation method under the zero-shot setting |
Also Published As
Publication number | Publication date |
---|---|
CN109582960B (en) | 2020-11-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109582960A (en) | Zero example learning method based on structured association semantic embedding | |
CN109086658B (en) | Sensor data generation method and system based on generative adversarial network | |
Xue et al. | Attention-based second-order pooling network for hyperspectral image classification | |
CN110826638B (en) | Zero-shot image classification model based on recurrent attention network and method thereof | |
CN110263705A (en) | Bi-temporal high-resolution remote sensing image change detection method for the remote sensing field | |
CN108875818A (en) | Zero-shot image classification method based on the fusion of a variational autoencoder and an adversarial network | |
CN108764308A (en) | Pedestrian re-identification method based on a convolutional recurrent network | |
CN110717553A (en) | Traffic contraband identification method based on self-attenuation weight and multiple local constraints | |
CN104504362A (en) | Face detection method based on convolutional neural network | |
Aditya et al. | Batik classification using neural network with gray level co-occurence matrix and statistical color feature extraction | |
CN110132263A (en) | Star map recognition method based on representation learning | |
CN109902202A (en) | Video classification method and device | |
CN108154133A (en) | Face sketch-photo synthesis method based on asymmetric joint learning | |
CN110008861A (en) | Pedestrian re-identification method based on global and local feature learning | |
CN108960201A (en) | Facial expression recognition method based on facial keypoint extraction and sparse representation classification | |
Soumya et al. | Emotion recognition from partially occluded facial images using prototypical networks | |
Li et al. | Adversarial domain adaptation via category transfer | |
Lin et al. | Multi-scale face restoration with sequential gating ensemble network | |
Zhang et al. | Facial component-landmark detection with weakly-supervised lr-cnn | |
CN110458215A (en) | Pedestrian attribute recognition method based on a multi-time-scale attention model | |
Xie et al. | Writer-independent online signature verification based on 2D representation of time series data using triplet supervised network | |
CN113657473A (en) | Web service classification method based on transfer learning | |
Liu et al. | Learning a similarity metric discriminatively with application to ancient character recognition | |
Ali et al. | Indoor scene recognition using ResNet-18 | |
Raihan et al. | CNN modeling for recognizing local fish |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||