CN106203472A - A zero-shot image classification method based on a mixed-attribute direct prediction model - Google Patents

A zero-shot image classification method based on a mixed-attribute direct prediction model

Info

Publication number
CN106203472A
CN106203472A (application CN201610483014.2A)
Authority
CN
China
Prior art keywords
image
mixed attributes
attribute
test image
low-level feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610483014.2A
Other languages
Chinese (zh)
Other versions
CN106203472B (en
Inventor
王雪松
乔雪
程玉虎
陈晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Mining and Technology CUMT
Original Assignee
China University of Mining and Technology CUMT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Mining and Technology (CUMT)
Priority claimed from CN201610483014.2A
Publication of CN106203472A
Application granted
Publication of CN106203472B
Legal status: Active
Anticipated expiration


Classifications

    • G: Physics
    • G06: Computing; Calculating or Counting
    • G06F: Electric Digital Data Processing
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; extraction of features in feature space; blind source separation
    • G06F18/214: Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411: Classification based on the proximity to a decision surface, e.g. support vector machines

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a zero-shot image classification method based on a mixed-attribute direct attribute prediction model. First, sparse coding is applied to the low-level features of the training images, and the resulting codes are used as non-semantic attributes that assist the semantic attributes. Next, the non-semantic attributes and the semantic attributes are combined into mixed attributes, which serve as the attribute layer of a direct attribute prediction model, and mixed-attribute classifiers are trained following the direct attribute prediction approach. Finally, the class label of each test sample is predicted from the relation between the predicted mixed attributes and the candidate classes. The invention makes classes whose attributes were originally similar easier to distinguish, thereby improving the recognition rate of zero-shot image classification.

Description

A zero-shot image classification method based on a mixed-attribute direct prediction model
Technical field
The present invention relates to a method that uses a mixed-attribute direct attribute prediction model to realize zero-shot image classification, and belongs to the field of zero-shot image classification.
Background technology
Using the semantic attributes of images to realize zero-shot image classification is a current research hotspot of attribute-based learning. Unlike the conventional image classification problem, in zero-shot image classification the samples that are classified and recognized at test time take no part in training the classification model. To transfer knowledge from seen classes to unseen classes, the classification model must build a bridge from low-level features to class labels via semantic attributes. Recent work has proposed many image classification methods based on semantic attributes; representative ones are the direct attribute prediction model (Direct Attribute Prediction, DAP) and the indirect attribute prediction model (Indirect Attribute Prediction, IAP). In semantic-attribute-based zero-shot classification, each semantic attribute indicates whether a sample possesses a certain property; the presence or absence of attributes determines the sample's position in attribute space and hence its class label. However, for classes whose semantic attributes are very similar (i.e., that lie close together in attribute space), semantic attributes alone can hardly distinguish them.
Summary of the invention
Object of the invention: to overcome the deficiencies of the prior art, the present invention provides a zero-shot image classification method based on a mixed-attribute direct prediction model, which makes classes with similar semantic attributes easier to distinguish.
Technical scheme: to achieve the above object, the technical solution adopted by the present invention is as follows.
A zero-shot image classification method based on a mixed-attribute direct prediction model comprises the following steps:
Step 1: learn a set of basis vectors from the low-level feature set of the training images by a sparse coding algorithm;
Step 2: linearly reconstruct the training and test low-level feature sets with the basis vectors to obtain a non-semantic attribute set, use it to supplement and extend the semantic attribute set, and thereby construct the mixed attribute set;
Step 3: using the training low-level features, train one mixed-attribute classifier for each attribute in the mixed attribute set;
Step 4: predict the mixed attributes of each test image with the mixed-attribute classifiers, obtaining the prediction probabilities of the test image's mixed attributes; the attribute with the largest predicted probability is the predicted mixed attribute of the test image;
Step 5: use the predicted mixed attributes to predict the class label of the test image, obtaining the posterior probability estimate from the test low-level features to the test class label;
Step 6: find the class label that maximizes the posterior probability estimate and assign it to the test image, thereby obtaining the test image's class label.
As a preferred version of the present invention, in step 1 the training low-level feature set is X = {x_1, x_2, ..., x_K} ∈ R^{d×K}, where K is the number of training classes, x_i (i = 1, 2, ..., K) is the low-level feature of the i-th training class, d is the dimension of the low-level features, and R^{d×K} denotes the space of d × K matrices. Let Φ = {φ_1, φ_2, ..., φ_N} denote the basis set, where N is the number of basis vectors and φ_j (j = 1, 2, ..., N) is the j-th basis vector. The basis set Φ is learned by the sparse coding algorithm as follows:
Step 1.1: randomly initialize the basis set Φ;
Step 1.2: substitute the training feature set X into the following constrained optimization problem:

$$\min_{\Phi,\,b}\ \sum_{i=1}^{K}\Big\|x_i-\sum_{j=1}^{N}b_{i,j}\phi_j\Big\|_2^2+\lambda\sum_{i=1}^{K}\big\|b_i\big\|_1,\qquad \text{s.t. } \big\|b_i\big\|_1\le c \tag{1}$$

Here b_{i,j} (i = 1, 2, ..., K; j = 1, 2, ..., N) is the activation, i.e., the j-th sparse code of the i-th training low-level feature x_i, and b_i = (b_{i,1}, ..., b_{i,N}); λ is the weight decay coefficient; the second term is the sparse regularizer, with ‖·‖₁ denoting the L1 norm; c is a constant that limits the degree of sparsity.
Step 1.3: first fix the initialized basis set Φ and solve for the activations b_{i,j} that minimize problem (1); then fix the activations b_{i,j} and solve for the basis set Φ that minimizes problem (1);
Step 1.4: repeat step 1.3 until convergence, yielding a basis set Φ̂ = {φ̂_1, ..., φ̂_N} that represents the image low-level features.
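The alternating procedure of steps 1.1 through 1.4 can be sketched as follows. This is a minimal illustration under assumptions the patent does not fix: the function name `learn_basis`, a coordinate-descent update with soft thresholding for the activations, a pseudo-inverse least-squares update for the basis, and unit-norm basis columns are all illustrative choices, not the patent's prescribed solver.

```python
import numpy as np

def learn_basis(X, N=32, lam=0.1, n_iter=20, seed=0):
    """Alternating minimization for the sparse-coding problem (1):
    fix Phi and update activations B, then fix B and update Phi.
    X: d x K matrix of low-level features (one column per class).
    Returns basis Phi (d x N) and activations B (N x K)."""
    rng = np.random.default_rng(seed)
    d, K = X.shape
    # step 1.1: random initialization with unit-norm basis columns
    Phi = rng.standard_normal((d, N))
    Phi /= np.linalg.norm(Phi, axis=0)
    B = np.zeros((N, K))
    for _ in range(n_iter):
        # step 1.3a: fix Phi, update each activation row by soft thresholding
        for j in range(N):
            r = X - Phi @ B + np.outer(Phi[:, j], B[j])   # residual excluding atom j
            c = Phi[:, j] @ r
            B[j] = np.sign(c) * np.maximum(np.abs(c) - lam / 2, 0) / (Phi[:, j] @ Phi[:, j])
        # step 1.3b: fix B, update basis by regularized least squares, renormalize
        Phi = X @ B.T @ np.linalg.pinv(B @ B.T + 1e-8 * np.eye(N))
        Phi /= np.maximum(np.linalg.norm(Phi, axis=0), 1e-8)
    return Phi, B
```

In practice the loop would run until the objective stops decreasing (step 1.4); a fixed iteration count is used here only to keep the sketch short.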
As a preferred version of the present invention, in step 2 the test low-level feature set is X' = {x'_1, x'_2, ..., x'_Z} ∈ R^{d×Z}, where Z is the number of test classes, x'_i (i = 1, 2, ..., Z) is the low-level feature of the i-th test class, d is the dimension of the low-level features, and R^{d×Z} denotes the space of d × Z matrices. With the basis set Φ̂ = {φ̂_1, ..., φ̂_N} learned in step 1, the training feature set X and the test feature set X' are linearly reconstructed:

$$x_i \approx \sum_{j=1}^{N} b_{i,j}\,\hat{\phi}_j,\ \ i=1,\ldots,K; \qquad x'_i \approx \sum_{j=1}^{N} b_{K+i,j}\,\hat{\phi}_j,\ \ i=1,\ldots,Z \tag{2}$$

The non-semantic attribute set B is obtained from the reconstruction as follows: the activations b_{i,j} are adjusted to minimize the reconstruction error (2); the resulting set B = {B_1, B_2, ..., B_N} ∈ {0, 1} is the non-semantic attribute set of the training and test images, where B_j = {b_{1,j}, b_{2,j}, ..., b_{K+Z,j}} (j = 1, 2, ..., N) is the j-th non-semantic attribute of the training and test images.
Given the semantic attribute set A = {A_1, A_2, ..., A_M} ∈ {0, 1}, where M is the number of semantic attributes and A_i (i = 1, 2, ..., M) is the i-th semantic attribute subset of the training and test images, the obtained non-semantic attribute set B supplements and extends A, constructing the mixed attribute set H = {A, B} = {A_1, A_2, ..., A_M, B_1, B_2, ..., B_N} = {h_1, ..., h_f, ..., h_{M+N}} ∈ {0, 1}, where h_f (f = 1, 2, ..., M+N) is the f-th mixed attribute.
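Constructing H amounts to stacking the semantic attributes on top of binarized activations. The patent states that B is binary ({0, 1}) but does not specify the binarization rule, so the magnitude threshold `tau` and the function name `mixed_attributes` below are assumptions for illustration only.

```python
import numpy as np

def mixed_attributes(A, B_act, tau=1e-3):
    """Build the mixed attribute matrix H of step 2.
    A: M x C binary semantic attribute matrix (C = K + Z classes).
    B_act: N x C real-valued sparse activations from the reconstruction.
    tau: assumed magnitude threshold used to binarize the activations.
    Returns H of shape (M+N) x C with entries in {0, 1}."""
    B_bin = (np.abs(B_act) > tau).astype(int)   # non-semantic attributes in {0,1}
    return np.vstack([A, B_bin])                # H = {A, B}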
As a preferred version of the present invention, in step 3 the training feature set X = {x_1, x_2, ..., x_K} ∈ R^{d×K} serves as the training samples of the mixed-attribute classifiers and the mixed attribute set H as their training labels; a multi-class support vector machine is used to train one classifier for each mixed attribute h_f in H. The classifiers are trained by solving:

$$W=\arg\min_{W}\ \sum_{i=1}^{K}\sum_{f=1}^{M+N}\big(w_f^{\mathrm T}x_i-h_f\big)^2+\gamma\sum_{f=1}^{M+N}\big\|w_f\big\|_{2}^{2} \tag{3}$$

Here w_f is the regression parameter of the classifier for the f-th mixed attribute h_f, ‖·‖₂ is the L2 norm, W = {w_1, w_2, ..., w_{M+N}} is the set of regression parameters of the mixed-attribute classifiers, and γ is a balance parameter regulating classifier complexity. Solving (3) yields the parameter set W = {w_1, w_2, ..., w_{M+N}} and hence the classifier of each mixed attribute h_f.
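Problem (3) is a regularized least-squares (ridge) objective that decouples across attributes, so it admits a closed-form solution. The sketch below solves it in that closed form; the function name `train_attribute_regressors` is hypothetical, and this is one way to solve (3), not necessarily the solver the inventors used.

```python
import numpy as np

def train_attribute_regressors(X, H, gamma=1.0):
    """Solve (3) in closed form.
    X: d x K training features, H: (M+N) x K mixed-attribute labels.
    For each attribute f the minimizer satisfies
        (X X^T + gamma I) w_f = X h_f,
    so all rows of W can be found with one linear solve.
    Returns W of shape (M+N) x d, with w_f^T x approximating h_f."""
    d, K = X.shape
    G = X @ X.T + gamma * np.eye(d)          # shared Gram matrix
    W = np.linalg.solve(G, X @ H.T).T        # solve for all attributes at once
    return W
```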
As a preferred version of the present invention, in step 4, given the Z test low-level feature sets {x'_1, x'_2, ..., x'_Z} ∈ R^{d×Z}, the mixed-attribute classifiers obtained in step 3 predict the mixed attributes of each test image; the prediction probability p(H | x'_i) for the i-th test class is:

$$p(H\mid x'_i)=\prod_{f=1}^{M+N}p(h_f\mid x'_i)$$

where x'_i is the i-th test low-level feature and p(h_f | x'_i) is the prediction probability of the f-th mixed attribute of the i-th test class; the attribute with the largest predicted probability is the predicted mixed attribute of the test image.
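The regressors of step 3 output real-valued scores, while step 4 needs probabilities p(h_f | x'_i). The patent does not specify a calibration, so the logistic squashing below is an assumed choice (as is the function name `predict_attribute_probs`); any monotone mapping of scores to [0, 1] would play the same role.

```python
import numpy as np

def predict_attribute_probs(W, x):
    """Per-attribute probabilities p(h_f | x) for one test feature x.
    W: (M+N) x d regression parameters, x: length-d feature vector.
    Applies an assumed sigmoid calibration to the raw scores W @ x."""
    s = W @ x
    return 1.0 / (1.0 + np.exp(-s))
```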
As a preferred version of the present invention, in step 5, by Bayes' theorem the probability from the mixed attribute set H to the test class label z is:

$$p(z\mid H)=\frac{p(z)}{p(H)}\,p(H\mid z)$$

where p(z) is the probability that a test image belongs to class label z, p(H) is the probability that a test image has the mixed attribute set H, and p(H | z) is the probability from class label z to the mixed attribute set H; p(H | z) is determined in a discriminant manner:

$$p(H\mid z)=\begin{cases}1, & H=H_z\\[2pt] 0, & \text{otherwise}\end{cases}$$

where H_z is the ground-truth mixed attribute set of test class z. The posterior probability estimate p(z | x'_i) from the test low-level feature x'_i to the class label z is then:

$$p(z\mid x'_i)=\sum_{H} p(z\mid H)\,p(H\mid x'_i)=p(z)\prod_{f=1}^{M+N}\frac{p\big(h_f^z\mid x'_i\big)}{p\big(h_f^z\big)}$$

where p(h_f^z | x'_i) is the probability of the f-th ground-truth attribute value h_f^z of class z given the test feature x'_i, and p(h_f^z) is the prior probability of that attribute value.
Assuming that the prior probability of each test class is equal, p(z) can be ignored during label prediction, and p(z | x'_i) simplifies to:

$$p(z\mid x'_i)\ \propto\ \prod_{f=1}^{M+N}\frac{p\big(h_f^z\mid x'_i\big)}{p\big(h_f^z\big)}.$$
As a preferred version of the present invention, in step 6, at the label assignment stage, the class label of the i-th test image is predicted by maximum a posteriori estimation:

$$F(x'_i)=\arg\max_{z=1,\ldots,Z}\ \prod_{f=1}^{M+N}\frac{p\big(h_f^z\mid x'_i\big)}{p\big(h_f^z\big)}$$

Among the Z test class labels, the label F(x'_i) that maximizes the posterior probability estimate p(z | x'_i) is found and assigned to the test image, thereby obtaining the test image's class label.
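The MAP assignment of steps 5 and 6 can be sketched as follows. Assumptions not fixed by the patent: the function name `dap_label`, a uniform attribute prior p(h_f) = 0.5 when none is supplied, and log-space accumulation of the product for numerical stability.

```python
import numpy as np

def dap_label(probs, H_proto, prior=None, eps=1e-12):
    """MAP label assignment of step 6.
    probs: length-(M+N) vector of p(h_f | x') from the attribute classifiers.
    H_proto: Z x (M+N) binary matrix whose row z is the prototype H_z.
    prior: length-(M+N) attribute priors p(h_f); uniform 0.5 is assumed
    when none is given.  Returns the index of the winning class."""
    if prior is None:
        prior = np.full(H_proto.shape[1], 0.5)
    # p(h_f^z | x') equals probs[f] where H_z[f] = 1 and 1 - probs[f] where H_z[f] = 0
    p_match = np.where(H_proto == 1, probs, 1.0 - probs)
    pr_match = np.where(H_proto == 1, prior, 1.0 - prior)
    # sum of logs realizes the product over f in the posterior ratio
    scores = np.sum(np.log(p_match + eps) - np.log(pr_match + eps), axis=1)
    return int(np.argmax(scores))
```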
Beneficial effects: the zero-shot image classification method based on the mixed-attribute direct prediction model provided by the present invention uses sparse coding to reconstruct the low-level features of images, obtaining an additional low-dimensional, compact representation. Since the reconstructed features carry no semantic information, they can be called non-semantic attributes. These extra non-semantic attributes supplement and assist the original semantic attributes, extending the attribute space into mixed attributes. Non-semantic attributes increase the diversity of the attribute space, so that classes with similar semantic attributes become easier to distinguish. Further, the constructed mixed attributes are applied in the DAP model, so that classes whose attributes were originally similar are more easily separated.
This method combines the advantages of the direct attribute prediction model and sparse coding: (1) the semantic attributes are supplemented and extended, with the extra non-semantic attributes assisting the original semantic attributes and extending the attribute space into mixed attributes; (2) extending with non-semantic attributes increases the diversity of the attribute space; (3) this increased diversity makes classes whose semantic attributes were originally very similar easier to distinguish, thereby improving the recognition rate of zero-shot image classification.
Brief description of the drawings
Fig. 1 is the flow chart of the method of the invention.
Detailed description
The present invention is further described below with reference to the accompanying drawings.
As shown in Fig. 1, the method comprises the following steps:
A zero-shot image classification method based on a mixed-attribute direct prediction model comprises the following steps.
Step 1: learn a set of basis vectors from the training low-level feature set by a sparse coding algorithm. Specifically:
The training low-level feature set is X = {x_1, x_2, ..., x_K} ∈ R^{d×K}, where K is the number of training classes, x_i (i = 1, 2, ..., K) is the low-level feature of the i-th training class, d is the dimension of the low-level features, and R^{d×K} denotes the space of d × K matrices. Let Φ = {φ_1, φ_2, ..., φ_N} denote the basis set, where N is the number of basis vectors and φ_j (j = 1, 2, ..., N) is the j-th basis vector. The basis set Φ is learned by the sparse coding algorithm as follows:
Step 1.1: randomly initialize the basis set Φ;
Step 1.2: substitute the training feature set X into the following constrained optimization problem:

$$\min_{\Phi,\,b}\ \sum_{i=1}^{K}\Big\|x_i-\sum_{j=1}^{N}b_{i,j}\phi_j\Big\|_2^2+\lambda\sum_{i=1}^{K}\big\|b_i\big\|_1,\qquad \text{s.t. } \big\|b_i\big\|_1\le c \tag{1}$$

Here b_{i,j} (i = 1, 2, ..., K; j = 1, 2, ..., N) is the activation, i.e., the j-th sparse code of the i-th training low-level feature x_i, and b_i = (b_{i,1}, ..., b_{i,N}); λ is the weight decay coefficient; the second term is the sparse regularizer, with ‖·‖₁ denoting the L1 norm; c is a constant that limits the degree of sparsity.
Step 1.3: first fix the initialized basis set Φ and solve for the activations b_{i,j} that minimize problem (1); then fix the activations b_{i,j} and solve for the basis set Φ that minimizes problem (1);
Step 1.4: repeat step 1.3 until convergence, yielding a basis set Φ̂ = {φ̂_1, ..., φ̂_N} that represents the image low-level features.
Step 2: linearly reconstruct the training and test low-level feature sets with the basis vectors to obtain the non-semantic attribute set, supplement and extend the semantic attribute set, and construct the mixed attribute set. The concrete steps are as follows:
The test low-level feature set is X' = {x'_1, x'_2, ..., x'_Z} ∈ R^{d×Z}, where Z is the number of test classes, x'_i (i = 1, 2, ..., Z) is the low-level feature of the i-th test class, d is the dimension of the low-level features, and R^{d×Z} denotes the space of d × Z matrices. With the basis set Φ̂ = {φ̂_1, ..., φ̂_N} learned in step 1, the training feature set X and the test feature set X' are linearly reconstructed:

$$x_i \approx \sum_{j=1}^{N} b_{i,j}\,\hat{\phi}_j,\ \ i=1,\ldots,K; \qquad x'_i \approx \sum_{j=1}^{N} b_{K+i,j}\,\hat{\phi}_j,\ \ i=1,\ldots,Z \tag{2}$$

The non-semantic attribute set B is obtained from the reconstruction as follows: the activations b_{i,j} are adjusted to minimize the reconstruction error (2); the resulting set B = {B_1, B_2, ..., B_N} ∈ {0, 1} is the non-semantic attribute set of the training and test images, where B_j = {b_{1,j}, b_{2,j}, ..., b_{K+Z,j}} (j = 1, 2, ..., N) is the j-th non-semantic attribute of the training and test images.
Given the semantic attribute set A = {A_1, A_2, ..., A_M} ∈ {0, 1}, where M is the number of semantic attributes and A_i (i = 1, 2, ..., M) is the i-th semantic attribute subset of the training and test images, the obtained non-semantic attribute set B supplements and extends A, constructing the mixed attribute set H = {A, B} = {A_1, A_2, ..., A_M, B_1, B_2, ..., B_N} = {h_1, ..., h_f, ..., h_{M+N}} ∈ {0, 1}, where h_f (f = 1, 2, ..., M+N) is the f-th mixed attribute.
Step 3: using the training low-level features, train one mixed-attribute classifier for each mixed attribute. The concrete steps are as follows:
The training feature set X = {x_1, x_2, ..., x_K} ∈ R^{d×K} serves as the training samples of the mixed-attribute classifiers and the mixed attribute set H as their training labels; a multi-class support vector machine is used to train one classifier for each mixed attribute h_f in H. The classifiers are trained by solving:

$$W=\arg\min_{W}\ \sum_{i=1}^{K}\sum_{f=1}^{M+N}\big(w_f^{\mathrm T}x_i-h_f\big)^2+\gamma\sum_{f=1}^{M+N}\big\|w_f\big\|_{2}^{2} \tag{3}$$

Here w_f is the regression parameter of the classifier for the f-th mixed attribute h_f, ‖·‖₂ is the L2 norm, W = {w_1, w_2, ..., w_{M+N}} is the set of regression parameters of the mixed-attribute classifiers, and γ is a balance parameter regulating classifier complexity. Solving (3) yields the parameter set W = {w_1, w_2, ..., w_{M+N}} and hence the classifier of each mixed attribute h_f.
In this step, the feature set X used as training samples of the mixed-attribute classifiers consists of the original low-level features; the reconstruction in step 2 serves only to obtain the activations, and hence the non-semantic attributes, and does not change the low-level features of the images.
Step 4: predict the mixed attributes of each test image with the mixed-attribute classifiers, obtaining the prediction probabilities of the test image's mixed attributes. The concrete steps are as follows:
Given the Z test low-level feature sets {x'_1, x'_2, ..., x'_Z} ∈ R^{d×Z}, the mixed-attribute classifiers obtained in step 3 predict the mixed attributes of each test image; the prediction probability p(H | x'_i) for the i-th test class is:

$$p(H\mid x'_i)=\prod_{f=1}^{M+N}p(h_f\mid x'_i)$$

where x'_i is the i-th test low-level feature and p(h_f | x'_i) is the prediction probability of the f-th mixed attribute of the i-th test class; the attribute with the largest predicted probability is the predicted mixed attribute of the test image.
Step 5: use the predicted mixed attributes to predict the class label of the test image, obtaining the posterior probability estimate from the test low-level features to the test class label. The concrete steps are as follows:
By Bayes' theorem, the probability from the mixed attribute set H to the test class label z is:

$$p(z\mid H)=\frac{p(z)}{p(H)}\,p(H\mid z)$$

where p(z) is the probability that a test image belongs to class label z, p(H) is the probability that a test image has the mixed attribute set H, and p(H | z) is the probability from class label z to the mixed attribute set H; p(H | z) is determined in a discriminant manner:

$$p(H\mid z)=\begin{cases}1, & H=H_z\\[2pt] 0, & \text{otherwise}\end{cases}$$

where H_z is the ground-truth mixed attribute set of test class z, given by prior knowledge. The posterior probability estimate p(z | x'_i) from the test low-level feature x'_i to the class label z is then:

$$p(z\mid x'_i)=\sum_{H} p(z\mid H)\,p(H\mid x'_i)=p(z)\prod_{f=1}^{M+N}\frac{p\big(h_f^z\mid x'_i\big)}{p\big(h_f^z\big)}$$

where p(h_f^z | x'_i) is the probability of the f-th ground-truth attribute value h_f^z of class z given the test feature x'_i, and p(h_f^z) is the prior probability of that attribute value.
Assuming that the prior probability of each test class is equal, p(z) can be ignored during label prediction, and p(z | x'_i) simplifies to:

$$p(z\mid x'_i)\ \propto\ \prod_{f=1}^{M+N}\frac{p\big(h_f^z\mid x'_i\big)}{p\big(h_f^z\big)}.$$
Step 6: find the class label that maximizes the posterior probability estimate and assign it to the test image, thereby obtaining the test image's class label. Specifically:
At the label assignment stage, the class label of the i-th test image is predicted by maximum a posteriori estimation:

$$F(x'_i)=\arg\max_{z=1,\ldots,Z}\ \prod_{f=1}^{M+N}\frac{p\big(h_f^z\mid x'_i\big)}{p\big(h_f^z\big)}$$

Among the Z test class labels, the label F(x'_i) that maximizes the posterior probability estimate p(z | x'_i) is found and assigned to the test image, thereby obtaining the test image's class label.
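Steps 1 through 6 can be tied together in one self-contained sketch on synthetic data. Everything below is illustrative: the random orthonormal basis stands in for the sparse-coding result of step 1, a fixed threshold binarizes the activations of step 2, equation (3) is solved in closed form for step 3, and a sigmoid calibration and uniform attribute prior are assumed for steps 4 through 6.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, Z, M, N = 16, 5, 3, 6, 8           # feature dim, train/test classes, attribute counts

# synthetic stand-ins for the quantities in steps 1-6
X = rng.standard_normal((d, K))                             # training low-level features
Phi = np.linalg.qr(rng.standard_normal((d, N)))[0]          # step 1 stand-in: orthonormal basis
B = (np.abs(Phi.T @ X) > 0.5).astype(int)                   # step 2: binarized activations
A = rng.integers(0, 2, (M, K))                              # given semantic attributes
H = np.vstack([A, B])                                       # mixed attributes, (M+N) x K

# step 3: one ridge regressor per mixed attribute (closed form of eq. (3))
gamma = 1.0
W = np.linalg.solve(X @ X.T + gamma * np.eye(d), X @ H.T).T

# steps 4-6 for one test image: attribute probabilities, DAP posterior, argmax
x_test = rng.standard_normal(d)
probs = 1 / (1 + np.exp(-(W @ x_test)))                     # assumed sigmoid calibration
H_proto = rng.integers(0, 2, (Z, M + N))                    # prototypes H_z of the test classes
p_match = np.where(H_proto == 1, probs, 1 - probs)          # p(h_f^z | x') per class and attribute
scores = np.log(p_match + 1e-12).sum(axis=1) - np.log(0.5) * (M + N)  # uniform prior p(h_f) = 0.5
label = int(np.argmax(scores))                              # step 6: MAP label
```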
In zero-shot image classification, image classes with very similar semantic attributes are hard to distinguish. The method of the invention addresses this problem by reconstructing the low-level features of the samples with sparse coding, using the reconstructed features as non-semantic attributes to supplement and assist the limited semantic attributes, and combining them with the original semantic attributes into mixed attributes. This increases the diversity of the attribute space, so that classes whose attributes were originally similar become easier to distinguish.
The above is only a preferred embodiment of the present invention. It should be pointed out that those of ordinary skill in the art can make further improvements and modifications without departing from the principle of the invention, and such improvements and modifications shall also be regarded as falling within the protection scope of the invention.

Claims (7)

1. A zero-shot image classification method based on a mixed-attribute direct prediction model, characterized by comprising the following steps:
Step 1: learn a set of basis vectors from the low-level feature set of the training images by a sparse coding algorithm;
Step 2: linearly reconstruct the training and test low-level feature sets with the basis vectors to obtain a non-semantic attribute set, use it to supplement and extend the semantic attribute set, and thereby construct the mixed attribute set;
Step 3: using the training low-level features, train one mixed-attribute classifier for each attribute in the mixed attribute set;
Step 4: predict the mixed attributes of each test image with the mixed-attribute classifiers, obtaining the prediction probabilities of the test image's mixed attributes; the attribute with the largest predicted probability is the predicted mixed attribute of the test image;
Step 5: use the predicted mixed attributes to predict the class label of the test image, obtaining the posterior probability estimate from the test low-level features to the test class label;
Step 6: find the class label that maximizes the posterior probability estimate and assign it to the test image, thereby obtaining the test image's class label.
2. The zero-shot image classification method based on a mixed-attribute direct prediction model, characterized in that: in said step 1, the training low-level feature set is X = {x_1, x_2, ..., x_K} ∈ R^{d×K}, where K is the number of training classes, x_i (i = 1, 2, ..., K) is the low-level feature of the i-th training class, d is the dimension of the low-level features, and R^{d×K} denotes the space of d × K matrices; Φ = {φ_1, φ_2, ..., φ_N} denotes the basis set, where N is the number of basis vectors and φ_j (j = 1, 2, ..., N) is the j-th basis vector; the basis set Φ is learned by the sparse coding algorithm as follows:
Step 1.1: randomly initialize the basis set Φ;
Step 1.2: substitute the training feature set X into the following constrained optimization problem:

$$\min_{\Phi,\,b}\ \sum_{i=1}^{K}\Big\|x_i-\sum_{j=1}^{N}b_{i,j}\phi_j\Big\|_2^2+\lambda\sum_{i=1}^{K}\big\|b_i\big\|_1,\qquad \text{s.t. } \big\|b_i\big\|_1\le c \tag{1}$$

Here b_{i,j} (i = 1, 2, ..., K; j = 1, 2, ..., N) is the activation, i.e., the j-th sparse code of the i-th training low-level feature x_i, and b_i = (b_{i,1}, ..., b_{i,N}); λ is the weight decay coefficient; the second term is the sparse regularizer, with ‖·‖₁ denoting the L1 norm; c is a constant that limits the degree of sparsity;
Step 1.3: first fix the initialized basis set Φ and solve for the activations b_{i,j} that minimize problem (1); then fix the activations b_{i,j} and solve for the basis set Φ that minimizes problem (1);
Step 1.4: repeat step 1.3 until convergence, yielding a basis set Φ̂ = {φ̂_1, ..., φ̂_N} that represents the image low-level features.
3. The zero-shot image classification method based on a mixed-attribute direct prediction model, characterized in that: in said step 2, the test low-level feature set is X' = {x'_1, x'_2, ..., x'_Z} ∈ R^{d×Z}, where Z is the number of test classes, x'_i (i = 1, 2, ..., Z) is the low-level feature of the i-th test class, d is the dimension of the low-level features, and R^{d×Z} denotes the space of d × Z matrices; with the basis set Φ̂ = {φ̂_1, ..., φ̂_N} learned in step 1, the training feature set X and the test feature set X' are linearly reconstructed:

$$x_i \approx \sum_{j=1}^{N} b_{i,j}\,\hat{\phi}_j,\ \ i=1,\ldots,K; \qquad x'_i \approx \sum_{j=1}^{N} b_{K+i,j}\,\hat{\phi}_j,\ \ i=1,\ldots,Z \tag{2}$$

The non-semantic attribute set B is obtained from the reconstruction as follows: the activations b_{i,j} are adjusted to minimize the reconstruction error (2); the resulting set B = {B_1, B_2, ..., B_N} ∈ {0, 1} is the non-semantic attribute set of the training and test images, where B_j = {b_{1,j}, b_{2,j}, ..., b_{K+Z,j}} (j = 1, 2, ..., N) is the j-th non-semantic attribute of the training and test images;
Given the semantic attribute set A = {A_1, A_2, ..., A_M} ∈ {0, 1}, where M is the number of semantic attributes and A_i (i = 1, 2, ..., M) is the i-th semantic attribute subset of the training and test images, the obtained non-semantic attribute set B supplements and extends A, constructing the mixed attribute set H = {A, B} = {A_1, A_2, ..., A_M, B_1, B_2, ..., B_N} = {h_1, ..., h_f, ..., h_{M+N}} ∈ {0, 1}, where h_f (f = 1, 2, ..., M+N) is the f-th mixed attribute.
A kind of zero sample image sorting technique based on the direct forecast model of mixed attributes, it is special Levy and be: in described step 3, by training image low-level image feature collection X={x1,x2,...,xΚ}∈Rd×KClassify as mixed attributes The training sample of device, using mixed attributes collection H as the training sample label of mixed attributes grader, and utilize many classification support to Amount machine is each mixed attributes h in mixed attributes collection HfTrain a mixed attributes grader;Training mixed attributes classification Device method particularly includes:
W = argmin_W Σ_{i=1}^{K} Σ_{f=1}^{M+N} (w_f^T x_i − h_f)^2 + γ Σ_{f=1}^{M+N} ||w_f||_{L2}^2    (3)
In the formula, w_f denotes the regression parameter of the classifier for the f-th mixed attribute h_f, ||·||_{L2}^2 denotes the squared L2 norm, W = {w_1, w_2, ..., w_{M+N}} denotes the regression parameter set of the mixed-attribute classifiers, and γ is a balance parameter regulating classifier complexity. Solving formula (3) yields the regression parameter set W = {w_1, w_2, ..., w_{M+N}}, i.e. the classifier for each mixed attribute h_f.
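Because formula (3) is a regularised least-squares objective, it admits a closed-form solution; a minimal sketch follows, assuming features are stored column-wise as in the claim. (The claim also names a multi-class support vector machine as the trainer; a hinge-loss solver would replace the closed form below in that reading.)

```python
import numpy as np

def train_mixed_attribute_classifiers(X, H, gamma=1.0):
    """Solve formula (3) in closed form:
    W = argmin sum_i sum_f (w_f^T x_i - h_{i,f})^2 + gamma * sum_f ||w_f||^2

    X     : (d, K) training low-level features, one image per column
    H     : (K, M+N) mixed-attribute labels of the K training images
    gamma : balance parameter regulating classifier complexity
    """
    d = X.shape[0]
    # Normal equations of the ridge objective: (X X^T + gamma I) W = X H,
    # so column f of W is the regression parameter w_f of attribute h_f.
    return np.linalg.solve(X @ X.T + gamma * np.eye(d), X @ H)
```

Setting the gradient of (3) to zero gives exactly the normal equations solved above, so the returned W is the minimiser.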
A zero-shot image classification method based on a mixed-attribute direct prediction model, characterized in that: in step 4, given the Z-class test image low-level feature set {x'_1, x'_2, ..., x'_Z} ∈ R^{d×Z}, the mixed-attribute classifiers obtained in step 3 are used to predict the mixed attributes of the test images; the prediction probability p(H | x'_i) of the mixed attributes of the i-th test image class is obtained as:
p(H | x'_i) = ∏_{f=1}^{M+N} p(h_f | x'_i)
In the formula, x'_i denotes the low-level feature of the i-th test image class, and p(h_f | x'_i) denotes the prediction probability of the f-th mixed attribute of the i-th test image class; the attribute with the maximum predicted probability value is the predicted mixed attribute of the test image.
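A sketch of this attribute-prediction step, assuming (since the claim does not specify the score-to-probability mapping) that a logistic sigmoid converts each classifier score w_f^T x' into p(h_f | x'), with the attribute-independence product giving p(H | x'):

```python
import numpy as np

def predict_attribute_probabilities(W, x):
    """Per-attribute probabilities p(h_f | x') from classifier scores.

    W : (d, M+N) regression parameters, column f is w_f
    x : (d,) low-level feature of one test image

    The logistic squashing is an assumption; the claim only requires
    a probability per mixed attribute.
    """
    scores = W.T @ x                       # one score w_f^T x' per attribute
    return 1.0 / (1.0 + np.exp(-scores))   # p(h_f = 1 | x')

def joint_attribute_probability(p_hf):
    # p(H | x') = prod_f p(h_f | x'), assuming attribute independence
    return float(np.prod(p_hf))
```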
A zero-shot image classification method based on a mixed-attribute direct prediction model, characterized in that: in step 5, by Bayes' theorem, the probability p(z | H) from the test image mixed attribute set H to the test image class label z is obtained as:
p(z | H) = (p(z) / p(H)) · p(H | z)
where p(z) denotes the probability that a test image belongs to class label z, p(H) denotes the probability that a test image has the mixed attribute set H, and p(H | z) denotes the probability from class label z to mixed attribute set H. The distribution p(H | z) from the test image class label z to the test image mixed attribute set H is determined in a discriminative manner:
p(H | z) = 1 if H = H_z, and p(H | z) = 0 otherwise
In the formula, H_z denotes the actual mixed attribute set of test class z. The posterior probability estimate p(z | x'_i) from the test image low-level feature x'_i to the test image class label z is then expressed as:
p(z | x'_i) = Σ_H p(z | H) p(H | x'_i) = p(z) ∏_{f=1}^{M+N} p(h_f^z | x'_i) / p(h_f^z)
In the formula, p(h_f^z | x'_i) denotes the probability from the low-level feature x'_i of the i-th test image to the f-th actual mixed attribute h_f^z of class z, and p(h_f^z) denotes the probability that a test image actually has the f-th mixed attribute h_f^z.
In the above formula, assuming the probability of a test image belonging to any test class is equal, the influence of p(z) can be ignored when predicting the test class label, and p(z | x'_i) can be expressed as:
p(z | x'_i) ∝ ∏_{f=1}^{M+N} p(h_f^z | x'_i) / p(h_f^z).
A zero-shot image classification method based on a mixed-attribute direct prediction model, characterized in that: in step 6, at the label assignment stage, the class label of the i-th test image class is predicted by maximum a-posteriori estimation:
F(x'_i) = argmax_{z=1,...,Z} ∏_{f=1}^{M+N} p(h_f^z | x'_i) / p(h_f^z)
That is, among the Z test class labels, the class label F(x'_i) that maximizes the posterior probability estimate p(z | x'_i) is found and assigned to the test image, thereby obtaining the test image class label.
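Steps 5 and 6 together form a direct-attribute-prediction scoring rule; a sketch under stated assumptions (per-attribute probabilities from the classifiers, attribute priors p(h_f) estimated empirically, and a log-domain product for numerical stability, the latter two being implementation choices not fixed by the claim):

```python
import numpy as np

def assign_label(p_hf_given_x, H_z, p_hf_prior, eps=1e-12):
    """Maximum a-posteriori label assignment of steps 5 and 6.

    p_hf_given_x : (M+N,) probabilities p(h_f = 1 | x') for one test image
    H_z          : (Z, M+N) binary mixed-attribute signatures of the Z classes
    p_hf_prior   : (M+N,) attribute priors p(h_f = 1)

    Returns the index z maximising prod_f p(h_f^z | x') / p(h_f^z).
    """
    # p(h_f^z | x'): take p(h_f | x') where the class signature has a 1,
    # and 1 - p(h_f | x') where it has a 0; same convention for the prior.
    p_match = np.where(H_z == 1, p_hf_given_x, 1.0 - p_hf_given_x)
    prior = np.where(H_z == 1, p_hf_prior, 1.0 - p_hf_prior)
    # log-domain product of p(h_f^z | x') / p(h_f^z) over all attributes
    log_post = np.sum(np.log(p_match + eps) - np.log(prior + eps), axis=1)
    return int(np.argmax(log_post))
```

The class whose attribute signature best matches the predicted attribute probabilities, relative to the priors, wins the argmax.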
CN201610483014.2A 2016-06-27 2016-06-27 A kind of zero sample image classification method based on the direct prediction model of mixed attributes Active CN106203472B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610483014.2A CN106203472B (en) 2016-06-27 2016-06-27 A kind of zero sample image classification method based on the direct prediction model of mixed attributes

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610483014.2A CN106203472B (en) 2016-06-27 2016-06-27 A kind of zero sample image classification method based on the direct prediction model of mixed attributes

Publications (2)

Publication Number Publication Date
CN106203472A true CN106203472A (en) 2016-12-07
CN106203472B CN106203472B (en) 2019-04-02

Family

ID=57461941

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610483014.2A Active CN106203472B (en) 2016-06-27 2016-06-27 A kind of zero sample image classification method based on the direct prediction model of mixed attributes

Country Status (1)

Country Link
CN (1) CN106203472B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106980876A (en) * 2017-03-13 2017-07-25 南京邮电大学 A kind of zero sample image recognition methods learnt based on distinctive sample attribute
WO2018161217A1 (en) * 2017-03-06 2018-09-13 Nokia Technologies Oy A transductive and/or adaptive max margin zero-shot learning method and system
CN108846406A (en) * 2018-04-18 2018-11-20 西安理工大学 A kind of zero sample image classification method based on structure-borne
CN109086710A (en) * 2018-07-27 2018-12-25 杭州电子科技大学 A kind of small sample target identification method based on mixed attributes study
CN109886289A (en) * 2019-01-08 2019-06-14 深圳禾思众成科技有限公司 A kind of deep learning method, equipment and computer readable storage medium
CN110073367A (en) * 2017-01-19 2019-07-30 赫尔实验室有限公司 The multiple view of compatible function of the utilization based on SOFT-MAX for zero sample learning is embedded in
CN110427967A (en) * 2019-06-27 2019-11-08 中国矿业大学 The zero sample image classification method based on embedded feature selecting semanteme self-encoding encoder
CN110582777A (en) * 2017-05-05 2019-12-17 赫尔实验室有限公司 Zero-sample machine vision system with joint sparse representation
CN111126049A (en) * 2019-12-14 2020-05-08 中国科学院深圳先进技术研究院 Object relation prediction method and device, terminal equipment and readable storage medium
CN112257765A (en) * 2020-10-16 2021-01-22 济南大学 Zero sample image classification method and system based on unknown similarity class set

Citations (1)

Publication number Priority date Publication date Assignee Title
CN105512679A (en) * 2015-12-02 2016-04-20 天津大学 Zero sample classification method based on extreme learning machine


Non-Patent Citations (2)

Title
SHARMANSKA V et al.: "Augmented attribute representations", Proceedings of the International Conference on Computer Vision *
GONG Ping et al.: "Zero-shot classification based on attribute relation graph regularized feature selection", Journal of China University of Mining and Technology *

Cited By (13)

Publication number Priority date Publication date Assignee Title
CN110073367A (en) * 2017-01-19 2019-07-30 赫尔实验室有限公司 The multiple view of compatible function of the utilization based on SOFT-MAX for zero sample learning is embedded in
WO2018161217A1 (en) * 2017-03-06 2018-09-13 Nokia Technologies Oy A transductive and/or adaptive max margin zero-shot learning method and system
CN106980876A (en) * 2017-03-13 2017-07-25 南京邮电大学 A kind of zero sample image recognition methods learnt based on distinctive sample attribute
CN110582777A (en) * 2017-05-05 2019-12-17 赫尔实验室有限公司 Zero-sample machine vision system with joint sparse representation
CN108846406B (en) * 2018-04-18 2022-04-22 西安理工大学 Zero sample image classification method based on structure propagation
CN108846406A (en) * 2018-04-18 2018-11-20 西安理工大学 A kind of zero sample image classification method based on structure-borne
CN109086710A (en) * 2018-07-27 2018-12-25 杭州电子科技大学 A kind of small sample target identification method based on mixed attributes study
CN109886289A (en) * 2019-01-08 2019-06-14 深圳禾思众成科技有限公司 A kind of deep learning method, equipment and computer readable storage medium
CN110427967A (en) * 2019-06-27 2019-11-08 中国矿业大学 The zero sample image classification method based on embedded feature selecting semanteme self-encoding encoder
CN111126049A (en) * 2019-12-14 2020-05-08 中国科学院深圳先进技术研究院 Object relation prediction method and device, terminal equipment and readable storage medium
CN111126049B (en) * 2019-12-14 2023-11-24 中国科学院深圳先进技术研究院 Object relation prediction method, device, terminal equipment and readable storage medium
CN112257765A (en) * 2020-10-16 2021-01-22 济南大学 Zero sample image classification method and system based on unknown similarity class set
CN112257765B (en) * 2020-10-16 2022-09-23 济南大学 Zero sample image classification method and system based on unknown similarity class set

Also Published As

Publication number Publication date
CN106203472B (en) 2019-04-02

Similar Documents

Publication Publication Date Title
CN106203472A (en) A kind of zero sample image sorting technique based on the direct forecast model of mixed attributes
Choi et al. A tree-based context model for object recognition
CN105975916A (en) Age estimation method based on multi-output convolution neural network and ordered regression
CN110348579A (en) A kind of domain-adaptive migration feature method and system
CN108875816A (en) Merge the Active Learning samples selection strategy of Reliability Code and diversity criterion
Zhao et al. Modeling Stated preference for mobility-on-demand transit: a comparison of Machine Learning and logit models
CN106295697A (en) A kind of based on semi-supervised transfer learning sorting technique
CN105740891B (en) Target detection based on multi level feature selection and context model
CN110991532B (en) Scene graph generation method based on relational visual attention mechanism
CN103020122A (en) Transfer learning method based on semi-supervised clustering
CN103745233B (en) The hyperspectral image classification method migrated based on spatial information
CN101794396A (en) System and method for recognizing remote sensing image target based on migration network learning
CN103400144B (en) Active learning method based on K-neighbor for support vector machine (SVM)
CN107563410A (en) The sorting technique and equipment with multi-task learning are unanimously clustered based on topic categories
CN103489033A (en) Incremental type learning method integrating self-organizing mapping and probability neural network
CN104966105A (en) Robust machine error retrieving method and system
CN108090499A (en) Data active mask method and system based on maximum information triple screening network
CN105005794A (en) Image pixel semantic annotation method with combination of multi-granularity context information
CN106096661A (en) Zero sample image sorting technique based on relative priority random forest
CN110119688A (en) A kind of Image emotional semantic classification method using visual attention contract network
CN109635668A (en) Facial expression recognizing method and system based on soft label integrated rolled product neural network
CN102646198B (en) Mode recognition method of mixed linear SVM (support vector machine) classifier with hierarchical structure
Wang et al. Linear twin svm for learning from label proportions
CN105975978A (en) Semi-supervised multi-tag feature selection and classification method based on tag correlation
CN110378405A (en) The Hyperspectral Remote Sensing Imagery Classification method of Adaboost algorithm based on transfer learning

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant