CN105335756A - Robust learning model and image classification system - Google Patents

Robust learning model and image classification system

Info

Publication number
CN105335756A
Authority
CN
China
Prior art keywords
matrix
training sample
sample
label
soft
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510726581.1A
Other languages
Chinese (zh)
Other versions
CN105335756B (en)
Inventor
张召
江威明
李凡长
张莉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou University
Original Assignee
Suzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou University filed Critical Suzhou University
Priority to CN201510726581.1A priority Critical patent/CN105335756B/en
Publication of CN105335756A publication Critical patent/CN105335756A/en
Application granted granted Critical
Publication of CN105335756B publication Critical patent/CN105335756B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24147 Distances to closest patterns, e.g. nearest neighbour classification

Abstract

The invention discloses a robust learning model and an image classification system. The robust learning model comprises the following steps: a training set is initialized to obtain an initial class label matrix, the training samples in the training set comprising samples whose classes are known and which are calibrated with the class labels corresponding to those classes, and samples whose classes are unknown and which carry no class labels; the training samples are processed by a construction method based on neighbor definition and reconstruction weights, a reconstruction coefficient matrix is built from the similarity between samples, and symmetrization and normalization are applied; the soft labels of the unlabeled samples are determined from the reconstruction coefficient matrix and the initial class label matrix, and l2,1-norm regularization is applied iteratively to the soft labels of the training samples to obtain a projection matrix and a soft label matrix; and samples under test, i.e. samples whose classes are unknown and which carry no class labels, are mapped with the projection matrix to obtain their soft labels. The model effectively reduces the influence of mixed signals in the original space and thereby improves classification accuracy.

Description

Robust learning model and image classification system
Technical field
The present invention relates to the technical fields of pattern recognition and data mining, and more particularly to a robust learning model and an image classification system.
Background technology
With the development of computer technology and intelligent systems, image classification has become one of the most important research topics in fields such as data mining and machine learning. Classification techniques, to which image classification belongs, are mainly used to determine the class of data whose class is unknown, and are of great importance in medical data analysis, text, web page and credit card rating, among other fields; putting accurate classification methods into use can therefore bring enormous social and economic benefits. Many studies have shown that supervised learning methods clearly outperform unsupervised ones, but in the real world the labeled data required by supervised learning are often hard to obtain, and acquiring the class information of such data by manual annotation consumes considerable time and manpower, which greatly reduces their practicality. Semi-supervised learning based on similarity graphs has consequently become one of the practical and widely used classification tools owing to its practicality and classification accuracy: it labels a small amount of data in each class of a large data set, propagates these labels through a similarity graph to the data of unknown class, and thereby predicts the class of the unlabeled data.
In recent years, label propagation methods built on label propagation theory have become one of the typical representatives of semi-supervised learning because they are simple, effective and fast. Label propagation learns the similarity between samples and propagates the supervised class information of labeled samples (samples whose classes are known) to unlabeled samples (samples whose classes are unknown), and thereby estimates the class information of the unlabeled samples. Most current label propagation methods use soft labels as the classification results of the unlabeled samples. The inventors have found, however, that in such methods the original space corresponding to the labeled samples often contains mixed signals, and these mixed signals adversely affect the classification of the unlabeled samples, making their classification results inaccurate.
In summary, the prior art suffers from low classification accuracy for unlabeled samples owing to the influence of mixed signals in the original space.
Summary of the invention
The object of the present invention is to provide a robust learning model and an image classification system, so as to solve the prior-art problem that the influence of mixed signals in the original space leads to low classification accuracy for unlabeled samples.
To achieve this object, the invention provides the following technical solution:
A robust learning model, comprising:
carrying out initialization on a training set obtained in advance to obtain an initial class label matrix, wherein the training set comprises a predetermined number of training samples, and the training samples comprise samples whose classes are known and which are calibrated with the class labels corresponding to those classes, and samples whose classes are unknown and which carry no class labels;
processing the training samples by a construction method based on neighbor definition and reconstruction weights to obtain a similarity measure matrix corresponding to the training samples, and carrying out preset processing on the similarity measure matrix to obtain a reconstruction coefficient matrix;
based on the reconstruction coefficient matrix and the initial class label matrix, determining the soft labels of the unlabeled training samples through an actively balanced manifold smoothness term and a label fitting term, and carrying out $\ell_{2,1}$-norm regularization on the soft labels of the training samples in an iterative manner to obtain a projection matrix and a soft label matrix corresponding to the soft labels of the training samples;
mapping a sample under test with the projection matrix to obtain the soft label of the sample under test, wherein the sample under test is a sample whose class is unknown and which carries no class label.
Preferably, processing the training samples by the construction method based on neighbor definition and reconstruction weights to obtain the similarity measure matrix corresponding to the training samples comprises:
processing each training sample with the K-nearest-neighbor algorithm to obtain the K nearest samples of each training sample, K being a positive integer;
using the LLE reconstruction weight construction method, together with the training samples and the K nearest samples of each training sample, to obtain the similarity measure matrix corresponding to the training samples.
Preferably, carrying out the preset processing on the similarity measure matrix to obtain the reconstruction coefficient matrix comprises:
normalizing and symmetrizing the similarity measure matrix to obtain the reconstruction coefficient matrix.
Preferably, mapping the sample under test with the projection matrix to obtain the soft label of the sample under test comprises:
obtaining a sample under test $x_{new}$;
embedding the sample under test $x_{new}$ into the projection matrix $P$ via $P^{T}x_{new}$ to obtain a predicted vector;
determining the soft label corresponding to the element of maximum probability in the predicted vector as the soft label of the sample under test $x_{new}$.
Preferably, carrying out initialization on the training set obtained in advance to obtain the initial class label matrix comprises:
obtaining an initialization matrix, denoted $Y^{0} = [y_1, y_2, \ldots, y_{l+u}]$, the training set being denoted $X = [X_l, X_u] \in \mathbb{R}^{n\times(l+u)}$, where $\mathbb{R}^{n\times(l+u)}$ denotes the space of $n \times (l+u)$ matrices, $X_l = [x_1, x_2, \ldots, x_l]$ are the samples calibrated with class labels, $X_u = [x_{l+1}, x_{l+2}, \ldots, x_{l+u}]$ are the samples carrying no class labels, and $n$, $l$ and $u$ are positive integers;
determining that any $y_i$ in the initialization matrix is a column vector, each column vector corresponding to the $i$-th training sample $x_i$ in the training set, $i = 1, 2, \ldots, l+u$;
for any training sample $x_j$ calibrated with a class label, setting $y_{i,j} = 1$ if $x_j$ belongs to the $i$-th class and $y_{i,j} = 0$ otherwise; for any training sample $x_j$ carrying no class label, setting $y_{i,j} = 0$; thereby obtaining the initial class label matrix $Y$, where $j = 1, 2, \ldots, l+u$.
Preferably, processing the training samples by the construction method based on neighbor definition and reconstruction weights to obtain the similarity measure matrix corresponding to the training samples comprises:
obtaining the similarity measure matrix with the following formula:

$$\min_{w_i}\ \Big\|x_i - \sum_{x_q \in N_K(x_i)} w_{i,q}\,x_q\Big\|^2 \quad \text{s.t.}\ \sum_{q} w_{i,q} = 1$$

where $N_K(x_i)$ denotes the $K$ nearest samples of the training sample $x_i$ ($q$ indexes these $K$ neighbors), $w_i$ is the reconstruction coefficient vector of the current training sample $x_i$, whose entries characterize the contribution of each neighbor when reconstructing $x_i$, and $W$ denotes the corresponding similarity (reconstruction coefficient) matrix.
Preferably, carrying out $\ell_{2,1}$-norm regularization on the soft labels of the training samples in an iterative manner to obtain the projection matrix and the soft label matrix corresponding to the soft labels of the training samples comprises:
obtaining the projection matrix and the soft label matrix with the following formula:
subject to $f_i \ge 0,\ e^{T}f_i = 1$ for $i = 1, 2, \ldots, l+u$,
where $F$ denotes the soft label matrix, $P$ the projection matrix, $W$ the reconstruction coefficient matrix, $D$ a diagonal matrix with $D_{ii} = \sum_j W_{i,j}$, $f_i$ the soft label vector of the $i$-th training sample, and $X^{T}P - F^{T}$ the regression residual, which measures the degree of difference between $X^{T}P$ and the soft labels $F^{T}$; $\mu_i$, $\alpha$, $\beta$ and $\gamma$ are the corresponding balance parameters;
or obtaining the projection matrix and the soft label matrix with the following formula:

$$\langle F, P\rangle = \arg\min_{F,P}\ \mathrm{tr}(F\Omega F^{T}) + \mathrm{tr}\big((F-Y)UD(F-Y)^{T}\big) + \alpha\|F^{T}\|_{2,1} + \beta\big(\|P\|_{2,1} + \gamma\|X^{T}P - F^{T}\|_{2,1}\big)$$

subject to $F \ge 0,\ e^{T}F = e^{T}$,
where $\Omega = I - W^{T} - W + W^{T}W$, $I$ is the identity matrix (only its diagonal elements are nonzero and equal to 1), and $U$ is a diagonal matrix whose $i$-th diagonal element is $\mu_i$; for training samples calibrated with labels, $\mu_i$ is set to $+\infty$, and for training samples carrying no class label, $\mu_i$ is set to 0.
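For concreteness, the following Python sketch evaluates the matrix-form objective above for given $F$ and $P$; it is purely illustrative (all names are assumptions, the $\ell_{2,1}$ norm is the sum of row-wise Euclidean norms, and $\mu_i = +\infty$ for labeled samples would be replaced by a large finite value in practice):

```python
import numpy as np

def l21(M):
    # l2,1 norm: sum of the Euclidean norms of the rows of M
    return np.sum(np.linalg.norm(M, axis=1))

def objective(F, P, X, Y, W, U, alpha, beta, gamma):
    """tr(F Omega F^T) + tr((F-Y) U D (F-Y)^T)
       + alpha*||F^T||_{2,1} + beta*(||P||_{2,1} + gamma*||X^T P - F^T||_{2,1})."""
    N = W.shape[0]
    Omega = np.eye(N) - W.T - W + W.T @ W
    D = np.diag(W.sum(axis=1))
    smooth = np.trace(F @ Omega @ F.T)
    fit = np.trace((F - Y) @ U @ D @ (F - Y).T)
    return smooth + fit + alpha * l21(F.T) + beta * (l21(P) + gamma * l21(X.T @ P - F.T))
```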
An image classification system, comprising:
a preprocessing module, configured to carry out initialization on a training set obtained in advance to obtain an initial class label matrix, wherein the training set comprises a predetermined number of training samples, and the training samples comprise samples whose classes are known and which are calibrated with the class labels corresponding to those classes, and samples whose classes are unknown and which carry no class labels; and to process the training samples by a construction method based on neighbor definition and reconstruction weights to obtain a similarity measure matrix corresponding to the training samples, and carry out preset processing on the similarity measure matrix to obtain a reconstruction coefficient matrix;
a training module, configured to determine, based on the reconstruction coefficient matrix and the initial class label matrix, the soft labels of the unlabeled training samples through an actively balanced manifold smoothness term and a label fitting term, and to carry out $\ell_{2,1}$-norm regularization on the soft labels of the training samples in an iterative manner to obtain a projection matrix and a soft label matrix corresponding to the soft labels of the training samples;
a prediction module, configured to map a sample under test with the projection matrix to obtain the soft label of the sample under test, wherein the sample under test is a sample carrying no class label.
Preferably, the preprocessing module comprises:
a K-nearest-neighbor unit, configured to process each training sample with the K-nearest-neighbor algorithm to obtain the K nearest samples of each training sample, K being a positive integer;
an LLE reconstruction weight unit, configured to use the LLE reconstruction weight construction method and the K nearest samples of each training sample to obtain the similarity measure matrix corresponding to the training samples.
Preferably, the preprocessing module comprises:
a preset processing unit, configured to normalize and symmetrize the similarity measure matrix to obtain the reconstruction coefficient matrix.
The robust learning model and the image classification system provided by the invention comprise: carrying out initialization on a training set obtained in advance to obtain an initial class label matrix, wherein the training set comprises a predetermined number of training samples, and the training samples comprise samples whose classes are known and which are calibrated with the class labels corresponding to those classes, and samples whose classes are unknown and which carry no class labels; processing the training samples by a construction method based on neighbor definition and reconstruction weights to obtain a similarity measure matrix corresponding to the training samples, and carrying out preset processing on the similarity measure matrix to obtain a reconstruction coefficient matrix; based on the reconstruction coefficient matrix and the initial class label matrix, determining the soft labels of the unlabeled training samples through an actively balanced manifold smoothness term and a label fitting term, and carrying out $\ell_{2,1}$-norm regularization on the soft labels of the training samples in an iterative manner to obtain a projection matrix and a soft label matrix corresponding to the soft labels of the training samples; and mapping a sample under test with the projection matrix to obtain the soft label of the sample under test, wherein the sample under test is a sample whose class is unknown and which carries no class label.
Compared with the prior art, the application first constructs the similarity measure matrix jointly from the training samples calibrated with class labels and the training samples carrying no class labels, further obtains the reconstruction coefficient matrix, and obtains the initial class label matrix by initialization; the initial class label matrix and the reconstruction coefficient matrix are then used to determine the soft labels of the unlabeled training samples. By applying $\ell_{2,1}$-norm regularization to the soft labels of the training samples, the influence of the mixed signals in the original space in which the soft labels are formed on the accuracy of the classification results is effectively reduced; the $\ell_{2,1}$-norm-based measure effectively strengthens the robustness of measuring the regression residual between the embedded features and the soft labels; and computing the mapping with the $\ell_{2,1}$-norm-based measure enhances the descriptive power of the extracted features. In short, by introducing the $\ell_{2,1}$-norm regularization technique, the robustness of the system against mixed signals such as noise and heterogeneous data in the training samples is effectively improved, and the situation mentioned in the background art, where the influence of mixed signals in the original space lowers the classification accuracy of unlabeled samples, is avoided; that is, the influence of the mixed signals in the original space is reduced, the accuracy of the classification results of the samples under test is improved, and the classification performance is enhanced. In addition, the application is an out-of-sample extension model that efficiently completes the induction and prediction of out-of-sample data through the embedding, without introducing an additional reconstruction process, and therefore has good extensibility.
Brief description of the drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative work.
Fig. 1 is a flow chart of a robust learning model provided by an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of an image classification system provided by an embodiment of the present invention;
Fig. 3 is a schematic diagram of the label prediction for obtaining the classes of face image test samples in a robust learning model and image classification system provided by an embodiment of the present invention.
Embodiment
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative work fall within the protection scope of the present invention.
Referring to Fig. 1, which shows a flow chart of a robust learning model provided by an embodiment of the present invention, the model may comprise the following steps:
S11: carry out initialization on a training set obtained in advance to obtain an initial class label matrix, wherein the training set comprises a predetermined number of training samples, and the training samples comprise samples whose classes are known and which are calibrated with the class labels corresponding to those classes, and samples whose classes are unknown and which carry no class labels.
The number of training samples in the training set may be determined according to actual needs. The class label of a training sample corresponds to the class of that training sample: a training sample whose class is unknown is not calibrated with any class label, and a training sample whose class is known is calibrated with the class label corresponding to its class.
S12: process the training samples by a construction method based on neighbor definition and reconstruction weights to obtain a similarity measure matrix corresponding to the training samples, and carry out preset processing on the similarity measure matrix to obtain a reconstruction coefficient matrix.
It should be noted that steps S11 and S12 may be completed simultaneously, or S11 may be completed before S12, or S12 before S11, as determined according to actual needs.
S13: based on the reconstruction coefficient matrix and the initial class label matrix, determine the soft labels of the unlabeled training samples through an actively balanced manifold smoothness term and a label fitting term, and carry out $\ell_{2,1}$-norm regularization on the soft labels of the training samples in an iterative manner to obtain a projection matrix and a soft label matrix corresponding to the soft labels of the training samples.
Specifically, the manifold smoothness term may be determined from the reconstruction coefficient matrix and the label fitting term from the initial class label matrix, and the soft labels are then determined from the manifold smoothness term and the label fitting term.
The soft label of each training sample also corresponds to its class, and the soft label matrix is determined from the soft labels of all training samples. The classes of all training samples can be read off from the obtained soft label matrix.
The $\ell_{2,1}$-norm regularization is an important tool in machine learning. In support vector machine learning it is in effect a process of optimally solving a cost function: by adding a norm to the cost function, the learned result is made to satisfy sparsity, which facilitates feature extraction.
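As a minimal illustration of the quantity being regularized, the following Python sketch computes the $\ell_{2,1}$ norm of a matrix, i.e. the sum of the Euclidean norms of its rows (the function and variable names are illustrative and not part of the invention):

```python
import numpy as np

def l21_norm(M):
    """Sum of the Euclidean (l2) norms of the rows of M.

    Penalizing this quantity drives entire rows of M towards zero,
    which is why it is used to obtain row-sparse matrices.
    """
    return np.sum(np.linalg.norm(M, axis=1))

# Example: an all-zero row contributes nothing to the norm.
M = np.array([[3.0, 4.0], [0.0, 0.0], [1.0, 0.0]])
print(l21_norm(M))  # 5.0 + 0.0 + 1.0 = 6.0
```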
S14: map a sample under test with the projection matrix to obtain the soft label of the sample under test, wherein the sample under test is a sample whose class is unknown and which carries no class label.
The sample under test is mapped with the projection matrix to obtain its soft label, from which its class is further determined.
Compared with the prior art, the application first constructs the similarity measure matrix jointly from the training samples calibrated with class labels and the training samples carrying no class labels, further obtains the reconstruction coefficient matrix, and obtains the initial class label matrix by initialization; the initial class label matrix and the reconstruction coefficient matrix are then used to determine the soft labels of the unlabeled training samples. By applying $\ell_{2,1}$-norm regularization to the soft labels of the training samples, the influence of the mixed signals in the original space in which the soft labels are formed on the accuracy of the classification results is effectively reduced; the $\ell_{2,1}$-norm-based measure effectively strengthens the robustness of measuring the regression residual between the embedded features and the soft labels; and computing the mapping with the $\ell_{2,1}$-norm-based measure enhances the descriptive power of the extracted features. In short, by introducing the $\ell_{2,1}$-norm regularization technique, the robustness of the system against mixed signals such as noise and heterogeneous data in the training samples is effectively improved, and the situation mentioned in the background art, where the influence of mixed signals in the original space lowers the classification accuracy of unlabeled samples, is avoided; that is, the influence of the mixed signals in the original space is reduced, the accuracy of the classification results of the samples under test is improved, and the classification performance is enhanced. In addition, the application is an out-of-sample extension model that efficiently completes the induction and prediction of out-of-sample data through the embedding, without introducing an additional reconstruction process, and therefore has good extensibility.
In the robust learning model provided by the above embodiment, carrying out initialization on the training set obtained in advance to obtain the initial class label matrix may comprise:
obtaining an initialization matrix, denoted $Y^{0} = [y_1, y_2, \ldots, y_{l+u}]$, the training set being denoted $X = [X_l, X_u] \in \mathbb{R}^{n\times(l+u)}$, where $\mathbb{R}^{n\times(l+u)}$ denotes the space of $n \times (l+u)$ matrices, $X_l = [x_1, x_2, \ldots, x_l]$ are the samples calibrated with class labels, $X_u = [x_{l+1}, x_{l+2}, \ldots, x_{l+u}]$ are the samples carrying no class labels, and $n$, $l$ and $u$ are positive integers;
determining that any $y_i$ in the initialization matrix is a column vector, each column vector corresponding to the $i$-th training sample $x_i$ in the training set, $i = 1, 2, \ldots, l+u$;
for any training sample $x_j$ calibrated with a class label, setting $y_{i,j} = 1$ if $x_j$ belongs to the $i$-th class and $y_{i,j} = 0$ otherwise; for any training sample $x_j$ carrying no class label, setting $y_{i,j} = 0$; thereby obtaining the initial class label matrix $Y$, where $j = 1, 2, \ldots, l+u$.
Thus, through the above steps, the training samples are converted, based on the class label of each training sample, into an initial class label matrix that indicates the class of each training sample, ensuring the smooth execution of the subsequent steps.
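A minimal Python sketch of this initialization, assuming the labeled samples come first and their classes are given as integers in {0, ..., c-1} (names are illustrative):

```python
import numpy as np

def build_initial_label_matrix(labels_l, u, c):
    """Build the c x (l+u) initial class label matrix Y.

    labels_l : list of class indices (0..c-1) for the l labeled samples
    u        : number of unlabeled samples (their columns stay all-zero)
    c        : number of classes
    """
    l = len(labels_l)
    Y = np.zeros((c, l + u))
    for j, cls in enumerate(labels_l):
        Y[cls, j] = 1.0  # y_{i,j} = 1 if sample j belongs to class i
    return Y

# Example: 3 labeled samples of classes 0, 2, 1 plus 2 unlabeled samples.
print(build_initial_label_matrix([0, 2, 1], u=2, c=3))
```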
In the robust learning model provided by the above embodiment, processing the training samples by the construction method based on neighbor definition and reconstruction weights to obtain the similarity measure matrix corresponding to the training samples may comprise:
for a training set comprising c classes of training samples, processing each training sample with the K-nearest-neighbor algorithm to obtain the K nearest samples of each training sample, K being a positive integer;
using the LLE reconstruction weight construction method, together with the training samples and the K nearest samples of each training sample, to obtain the similarity measure matrix corresponding to the training samples.
The idea of the K-nearest-neighbor algorithm is: if most of the K most similar samples of a given sample in feature space (i.e. its nearest neighbors in that space) belong to a certain class, then the sample also belongs to that class.
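A minimal sketch of finding the K nearest samples for each column of the data matrix, using plain Euclidean distances (illustrative only; the patent does not prescribe a particular implementation):

```python
import numpy as np

def k_nearest_neighbors(X, K):
    """X: n x N data matrix (one sample per column).
    Returns an N x K array of neighbor indices for each sample."""
    N = X.shape[1]
    # Pairwise squared Euclidean distances between columns.
    sq = np.sum(X**2, axis=0)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X.T @ X)
    np.fill_diagonal(d2, np.inf)          # exclude the sample itself
    return np.argsort(d2, axis=1)[:, :K]  # indices of the K closest samples
```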
The LLE reconstruction weight construction method is an optimization method for reducing the dimensionality of nonlinear signal feature vectors. This dimensionality optimization is not merely a quantitative reduction; rather, while keeping the character of the raw data unchanged, it maps signals from a high-dimensional space onto a low-dimensional space, i.e. it performs a second extraction of the eigen-features.
Specifically, processing the training samples by the construction method based on neighbor definition and reconstruction weights to obtain the similarity measure matrix corresponding to the training samples may comprise:
obtaining the similarity measure matrix with the following formula:

$$\min_{w_i}\ \Big\|x_i - \sum_{x_q \in N_K(x_i)} w_{i,q}\,x_q\Big\|^2 \quad \text{s.t.}\ \sum_{q} w_{i,q} = 1$$

where $N_K(x_i)$ denotes the $K$ nearest samples of the training sample $x_i$ ($q$ indexes these $K$ neighbors), $w_i$ is the reconstruction coefficient vector of the current training sample $x_i$, whose entries characterize the contribution of each neighbor when reconstructing $x_i$, and $W$ denotes the similarity measure matrix. The similarity measure matrix is obtained by repeating the above K-nearest-neighbor and LLE reconstruction weight construction steps for all training samples.
It should be noted that using the LLE reconstruction weight construction method together with the training samples and the K nearest samples of each training sample to obtain the similarity measure matrix specifically means: the LLE reconstruction weight construction method is used to compute and weigh the similarity between vertices and to construct the similarity measure matrix of the similar-neighborhood graph.
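The following Python sketch computes the LLE-style reconstruction weights of one sample from its K neighbors by solving the constrained least-squares problem above via the local Gram matrix; the small ridge term and the names are implementation assumptions for illustration:

```python
import numpy as np

def lle_weights(x_i, neighbors, reg=1e-3):
    """Reconstruction weights of sample x_i (n,) from its K neighbors (n x K).

    Solves min ||x_i - sum_q w_q * neighbor_q||^2  s.t.  sum_q w_q = 1
    using the local Gram matrix, with a small ridge term for stability.
    """
    Z = neighbors - x_i[:, None]                 # center the neighbors on x_i
    G = Z.T @ Z                                  # local K x K Gram matrix
    G += reg * np.trace(G) * np.eye(G.shape[0])  # regularize (assumption)
    w = np.linalg.solve(G, np.ones(G.shape[0]))
    return w / w.sum()                           # enforce the sum-to-one constraint
```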
In the robust learning model provided by the above embodiment, carrying out the preset processing on the similarity measure matrix to obtain the reconstruction coefficient matrix may comprise:
normalizing and symmetrizing the similarity measure matrix to obtain the reconstruction coefficient matrix.
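A minimal sketch of one common way to symmetrize and normalize such a weight matrix; the patent does not fix the exact formulas, so the choices below (symmetrization by averaging and row-sum normalization) are assumptions for illustration:

```python
import numpy as np

def symmetrize_and_normalize(W0):
    """Turn a raw neighbor-weight matrix W0 into a symmetric, normalized W."""
    W = 0.5 * (W0 + W0.T)                 # symmetrize (assumption: averaging)
    row_sums = W.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0         # avoid division by zero
    return W / row_sums                   # normalize each row to sum to 1
```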
Mapping the sample under test with the projection matrix to obtain the soft label of the sample under test may comprise:
obtaining a sample under test $x_{new}$;
embedding the sample under test $x_{new}$ into the projection matrix $P$ via $P^{T}x_{new}$ to obtain a predicted vector;
determining the soft label corresponding to the element of maximum probability in the predicted vector as the soft label of the sample under test $x_{new}$.
Here the projection matrix is the mapping classifier: by embedding the sample under test into the mapping classifier, the predicted vector, i.e. the soft label vector corresponding to the sample under test, is obtained; the soft label at the position of the element of maximum probability in this vector is the soft label of the sample under test, and the hard class label of the sample under test can be summarized as $\arg\max_{i \le c}(f_{new})_i$, where $(f_{new})_i$ denotes the $i$-th element of the predicted vector $f_{new}$.
Through the above steps, all classes to which the sample under test may belong can be obtained from the projection matrix, and the class of maximum probability is taken as the class of the sample under test, which ensures the accuracy of the classification result.
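A minimal sketch of this out-of-sample prediction step, assuming a learned n x c projection matrix P (names are illustrative):

```python
import numpy as np

def predict_class(P, x_new):
    """Predict the hard class label of a test sample x_new (n,).

    f_new = P^T x_new is the predicted soft label vector; the hard class
    label is the index of its largest element, argmax_{i <= c} (f_new)_i.
    """
    f_new = P.T @ x_new
    return int(np.argmax(f_new)), f_new
```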
Suppose the predicted soft label matrix (i.e. its initialization) is $F = [f_1, f_2, \ldots, f_{l+u}]$, where the class of training sample $x_j$ is associated with the position of the largest entry $f_{i,j}$ in each column $f_j$. In the robust learning model provided by the above embodiment, carrying out $\ell_{2,1}$-norm regularization on the soft labels in an iterative manner to obtain the projection matrix and the soft label matrix corresponding to the soft labels of the training samples may comprise:
obtaining the projection matrix and the soft label matrix with the following formula:
subject to $f_i \ge 0,\ e^{T}f_i = 1$ for $i = 1, 2, \ldots, l+u$,
where $F$ denotes the soft label matrix, $P$ the projection matrix, $W$ the reconstruction coefficient matrix, $D$ a diagonal matrix with $D_{ii} = \sum_j W_{i,j}$, $f_i$ the soft label vector of the $i$-th training sample, and $X^{T}P - F^{T}$ the regression residual, which measures the degree of difference between $X^{T}P$ and the soft labels $F^{T}$; $\mu_i$, $\alpha$, $\beta$ and $\gamma$ are the balance parameters of the corresponding terms. A sum-to-one constraint, $e^{T}f_i = 1$, and a non-negativity constraint, $f_i \ge 0$, can be added to ensure that in the output soft label matrix $F$ the class distribution of each training sample sums to 1 and every probability is non-negative ($e$ is a column vector whose elements are all 1). The projection matrix $P$, i.e. the mapping classifier, can then be expressed in the closed form derived below.
In the actual iterative process, the following formula is usually used to obtain the projection matrix and the soft label matrix:

$$\langle F, P\rangle = \arg\min_{F,P}\ \mathrm{tr}(F\Omega F^{T}) + \mathrm{tr}\big((F-Y)UD(F-Y)^{T}\big) + \alpha\|F^{T}\|_{2,1} + \beta\big(\|P\|_{2,1} + \gamma\|X^{T}P - F^{T}\|_{2,1}\big)$$

subject to $F \ge 0,\ e^{T}F = e^{T}$,
where $\Omega = I - W^{T} - W + W^{T}W$, $I$ is the identity matrix (only its diagonal elements are nonzero and equal to 1), and $U$ is a diagonal matrix whose $i$-th diagonal element is $\mu_i$; for training samples calibrated with labels, $\mu_i$ is set to $+\infty$, and for training samples carrying no class label, $\mu_i$ is set to 0. The $\ell_{2,1}$ norm of the projection matrix $P$, i.e. $\|P\|_{2,1}$, drives many rows of $P$ to zero, thereby reducing the influence of the mixed signals in the original space on the soft labels and guaranteeing that $P$ is sparse.
It should be noted that in the actual iterative process, on the basis of the above formula, the objective can be further rewritten as:

$$\langle F, P\rangle = \arg\min_{F,P}\ \mathrm{tr}(F\Omega F^{T}) + \mathrm{tr}\big((F-Y)UD(F-Y)^{T}\big) + \alpha\,\mathrm{tr}(FHF^{T}) + \beta\big(\mathrm{tr}(P^{T}GP) + \gamma\,\mathrm{tr}\big((X^{T}P-F^{T})^{T}V(X^{T}P-F^{T})\big)\big)$$

where $G$, $H$ and $V$ are diagonal matrices with

$$g_{ii} = \frac{1}{2\|p_i\|},\quad i = 1, 2, \ldots, l+u$$

$$h_{ii} = \frac{1}{2\|\hat{f}_i\|},\quad i = 1, 2, \ldots, l+u$$

$$v_{ii} = \frac{1}{2\|r_i\|},\quad i = 1, 2, \ldots, l+u$$

where $p_i$ and $\hat{f}_i$ denote the $i$-th row vectors of $P$ and $F^{T}$, respectively, and $r_i$ is the $i$-th row vector of $R = X^{T}P - F^{T}$.
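A minimal Python sketch of these reweighting matrices, with a small epsilon added to avoid division by zero (the epsilon is an implementation assumption, not part of the formulas above):

```python
import numpy as np

def reweight_diag(M, eps=1e-8):
    """Diagonal matrix with entries 1 / (2 * ||m_i||) for each row m_i of M.

    Used to build G (from P), H (from F^T) and V (from R = X^T P - F^T).
    """
    row_norms = np.linalg.norm(M, axis=1)
    return np.diag(1.0 / (2.0 * np.maximum(row_norms, eps)))

# G = reweight_diag(P); H = reweight_diag(F.T); V = reweight_diag(X.T @ P - F.T)
```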
By setting the partial derivative of the objective with respect to $P$ to 0, the following formula is obtained: $P = P_{VG}\,XVF^{T}$,
where $P_{VG} = \gamma(\gamma XVX^{T} + G)^{-1}$.
By setting the partial derivative of the objective with respect to $F$ to 0, the following equation is obtained:

$$F(\Omega + \alpha H + UD + \beta\gamma V) - YUD - \beta\gamma P^{T}XV = 0$$
After each iterative update, the initial label matrix $Y$, the diagonal matrix $D$, the soft label matrix $F$ and the diagonal matrices $H$ and $V$ are partitioned as:

$$F^{k+1} = [F_l^{k+1}, F_u^{k+1}],\quad Y = [Y_l, Y_u],\quad D = \begin{bmatrix} D_l & 0\\ 0 & D_u\end{bmatrix},\quad H^{k} = \begin{bmatrix} H_l^{k} & 0\\ 0 & H_u^{k}\end{bmatrix},\quad V^{k} = \begin{bmatrix} V_l^{k} & 0\\ 0 & V_u^{k}\end{bmatrix}$$

where $F_l^{k+1}$ and $Y_l$ are respectively the predicted soft labels and the initial label matrix of the training samples calibrated with class labels, and $F_u^{k+1}$ and $Y_u$ are respectively the predicted soft labels and the initial label matrix of the training samples carrying no class labels. The above equation can then be rewritten as:

$$[F_l^{k+1}, F_u^{k+1}]\begin{bmatrix}\Omega_{ll}+\alpha H_l^{k}+U_lD_l+\beta\gamma V_l^{k} & \Omega_{lu}\\ \Omega_{ul} & \Omega_{uu}+\alpha H_u^{k}+U_uD_u+\beta\gamma V_u^{k}\end{bmatrix} - [Y_l, Y_u]\begin{bmatrix}U_lD_l & 0\\ 0 & U_uD_u\end{bmatrix} - \beta\gamma\,[(P^{k})^{T}X_l, (P^{k})^{T}X_u]\begin{bmatrix}V_l^{k} & 0\\ 0 & V_u^{k}\end{bmatrix} = 0$$

where $U$ is partitioned into blocks ($U_l$ and $U_u$) according to the originally labeled and unlabeled training samples in the initial training set. The above formula can be converted into the following two equations:

$$F_u^{k+1}\big(\Omega_{uu}+\alpha H_u^{k}+U_uD_u+\beta\gamma V_u^{k}\big) + F_l^{k+1}\Omega_{lu} - Y_uU_uD_u - \beta\gamma(P^{k})^{T}X_uV_u^{k} = 0$$
$$F_l^{k+1}\big(\Omega_{ll}+\alpha H_l^{k}+U_lD_l+\beta\gamma V_l^{k}\big) + F_u^{k+1}\Omega_{ul} - Y_lU_lD_l - \beta\gamma(P^{k})^{T}X_lV_l^{k} = 0$$

From these two equations, the soft label updates at the $(k+1)$-th iteration for the labeled and unlabeled training samples in the original space are estimated as:

$$F_u^{k+1} = \big(Y_uU_uD_u + \beta\gamma(P^{k})^{T}X_uV_u^{k} - F_l^{k+1}\Omega_{lu}\big)\big(\Omega_{uu}+\alpha H_u^{k}+U_uD_u+\beta\gamma V_u^{k}\big)^{-1}$$
$$F_l^{k+1} = \big(Y_lU_lD_l + \beta\gamma(P^{k})^{T}X_lV_l^{k} - F_u^{k+1}\Omega_{ul}\big)\big(\Omega_{ll}+\alpha H_l^{k}+U_lD_l+\beta\gamma V_l^{k}\big)^{-1}$$
Setting $\Omega_L = \Omega_{ll}+\alpha H_l^{k}+U_lD_l+\beta\gamma V_l^{k}$ and $\Omega_U = \Omega_{uu}+\alpha H_u^{k}+U_uD_u+\beta\gamma V_u^{k}$, and substituting the second equation into the first, one obtains:

$$F_u^{k+1} = \big(Y_uU_uD_u + \beta\gamma(P^{k})^{T}X_uV_u^{k}\big)(\Omega_U)^{-1} - \big(Y_lU_lD_l + \beta\gamma(P^{k})^{T}X_lV_l^{k} - F_u^{k+1}\Omega_{ul}\big)(\Omega_L)^{-1}\Omega_{lu}(\Omega_U)^{-1}$$

Letting $\Omega_A = \big(Y_uU_uD_u + \beta\gamma(P^{k})^{T}X_uV_u^{k}\big)(\Omega_U)^{-1}$ and $\Omega_B = (\Omega_L)^{-1}\Omega_{lu}(\Omega_U)^{-1}$, the soft labels of the original training samples in the current iteration are obtained by the following formulas:

$$F_u^{k+1} = \big(\Omega_A - Y_lU_lD_l\,\Omega_B - \beta\gamma(P^{k})^{T}X_lV_l^{k}\,\Omega_B\big)\big(I - \Omega_{ul}\Omega_B\big)^{-1}$$
$$F_l^{k+1} = \big(Y_lU_lD_l + \beta\gamma(P^{k})^{T}X_lV_l^{k} - F_u^{k+1}\Omega_{ul}\big)(\Omega_L)^{-1}$$
The above iterative steps thus update $F_l^{k+1}$ and $F_u^{k+1}$; therefore, even when the original input contains noise or even erroneous labels, the robust learning model provided by the present invention can be updated through the iterative steps so that its recognition becomes more robust. From the updated $F_l^{k+1}$ and $F_u^{k+1}$, $P^{k+1}$ and $G^{k+1}$ can be trained, and $H$ and $V$ are updated to $H^{k+1}$ and $V^{k+1}$,
where $\hat{f}_j^{k+1}$ denotes the $j$-th row vector of $(F_l^{k+1})^{T}$ or $(F_u^{k+1})^{T}$, $(r_j)^{k+1}$ for $j = 1, 2, \ldots, l$ is the $j$-th row vector of $R_l^{k+1} = X_l^{T}P^{k+1} - (F_l^{k+1})^{T}$, and $(r_j)^{k+1}$ for $j = l+1, \ldots, l+u$ is a row vector of $R_u^{k+1} = X_u^{T}P^{k+1} - (F_u^{k+1})^{T}$.
In the actual iterative process, for convenience of solution, the robust learning model provided by the present invention simplifies the optimization in each iteration, so the solution can be obtained by solving the resulting subproblem.
If the normalization parameter $\psi_j$ of a labeled sample $x_j$ in the original input is set to $\psi_l$ and the normalization parameter $\psi_j$ of an unlabeled sample $x_j$ is set to $\psi_u$, then $\psi_l = 0$ means that the label prediction results for the labeled data are identical to their original labels. Let $I_\psi = (I+U)^{-1}$ and $I_\xi = I - I_\psi$; $I_\psi$ and $I_\xi$ are each partitioned into blocks,
where the labeled blocks are diagonal matrices with diagonal elements $\psi_l$ and $1-\psi_l$, respectively, and the unlabeled blocks are diagonal matrices with diagonal elements $\psi_u$ and $1-\psi_u$, respectively, from which the update of the $(k+1)$-th iteration is obtained.
The specific algorithm is as follows:
Input:
raw data matrix $X$, initial label matrix $Y$, control parameters $\psi$, $\alpha$, $\beta$, $\gamma$, and $K$.
Output: the optimal soft label matrix and the optimal projection matrix.
1). Initialize $G^{0}$, $H^{0}$, $V^{0}$ as identity matrices, $F^{0} = Y$, $P^{0} = 0$, $k = 0$;
2). Compute the weight matrix $W$ and construct the diagonal matrix $D$ with $D_{ii} = \sum_j W_{i,j}$;
3). Define the matrix $\Omega = I - W^{T} - W + W^{T}W = \begin{bmatrix}\Omega_{ll} & \Omega_{lu}\\ \Omega_{ul} & \Omega_{uu}\end{bmatrix}$.
While not converged:
Update the soft label matrix $F$ by fixing the other variables and solving the corresponding subproblem;
Update the projection matrix $P$ by fixing the other variables and solving the corresponding subproblem:

$$F^{k+1} = [Y_l, F_u^{k+1}],\quad P_{VG}^{k} = \gamma(\gamma XV^{k}X^{T} + G^{k})^{-1},\quad P^{k+1} = P_{VG}^{k}\,XV^{k}(F^{k+1})^{T}$$

Update the matrix $G$ to $G^{k+1}$ with diagonal entries $g_{ii}^{k+1} = 1/(2\|p_i^{k+1}\|)$;
Compute $R_l^{k+1} = X_l^{T}P^{k+1} - (F_l^{k+1})^{T}$ and $R_u^{k+1} = X_u^{T}P^{k+1} - (F_u^{k+1})^{T}$, and update $H$ and $V$ by the update rules above, with $H^{k+1} = \begin{bmatrix}H_l^{k+1} & 0\\ 0 & H_u^{k+1}\end{bmatrix}$ and $V^{k+1} = \begin{bmatrix}V_l^{k+1} & 0\\ 0 & V_u^{k+1}\end{bmatrix}$;
If $\|F^{k+1} - F^{k}\|_F \le \varepsilon$ and $\|P^{k+1} - P^{k}\|_F \le \varepsilon$, end the loop; otherwise set $k = k+1$ and continue.
Output:
the optimal solution $F^{*} = F^{k+1}$ and the optimal projection matrix $P^{*} = P^{k+1}$.
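The following Python sketch outlines this iterative training loop under the simplified matrix form above. It is an illustrative skeleton, not the authoritative implementation of the patent: the partitioned labeled/unlabeled updates and the hard clamping of the labeled block are omitted, $\mu_i = +\infty$ is replaced by a large finite value in $U$, and the epsilon guard and default parameter values are assumptions.

```python
import numpy as np

def reweight_diag(M, eps=1e-8):
    rn = np.linalg.norm(M, axis=1)
    return np.diag(1.0 / (2.0 * np.maximum(rn, eps)))

def train(X, Y, W, U, alpha=1.0, beta=1.0, gamma=1.0, max_iter=50, tol=1e-4):
    """X: n x N data, Y: c x N initial labels, W: N x N reconstruction
    coefficients, U: N x N diagonal (large values for labeled columns, 0 otherwise)."""
    N = X.shape[1]
    I = np.eye(N)
    Omega = I - W.T - W + W.T @ W
    D = np.diag(W.sum(axis=1))
    F, P = Y.copy(), np.zeros((X.shape[0], Y.shape[0]))
    H = V = np.eye(N)
    G = np.eye(X.shape[0])
    for _ in range(max_iter):
        # F-update from F(Omega + alpha*H + U D + beta*gamma*V) = Y U D + beta*gamma*P^T X V
        A = Omega + alpha * H + U @ D + beta * gamma * V
        F_new = (Y @ U @ D + beta * gamma * P.T @ X @ V) @ np.linalg.inv(A)
        # P-update: P = gamma * (gamma X V X^T + G)^{-1} X V F^T
        P_new = gamma * np.linalg.solve(gamma * X @ V @ X.T + G, X @ V @ F_new.T)
        # Reweighting matrices for the l2,1 terms
        G = reweight_diag(P_new)
        H = reweight_diag(F_new.T)
        V = reweight_diag(X.T @ P_new - F_new.T)
        converged = (np.linalg.norm(F_new - F) <= tol and np.linalg.norm(P_new - P) <= tol)
        F, P = F_new, P_new
        if converged:
            break
    return F, P
```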
Corresponding to the above method embodiment, an embodiment of the present invention further provides an image classification system which, as shown in Fig. 2, may comprise:
a preprocessing module 21, configured to carry out initialization on a training set obtained in advance to obtain an initial class label matrix, wherein the training set comprises a predetermined number of training samples, and the training samples comprise samples whose classes are known and which are calibrated with the class labels corresponding to those classes, and samples whose classes are unknown and which carry no class labels; and to process the training samples by a construction method based on neighbor definition and reconstruction weights to obtain a similarity measure matrix corresponding to the training samples, and carry out preset processing on the similarity measure matrix to obtain a reconstruction coefficient matrix;
a training module 22, configured to determine, based on the reconstruction coefficient matrix and the initial class label matrix, the soft labels of the unlabeled training samples through an actively balanced manifold smoothness term and a label fitting term, and to carry out $\ell_{2,1}$-norm regularization on the soft labels of the training samples in an iterative manner to obtain a projection matrix and a soft label matrix corresponding to the soft labels of the training samples;
a prediction module 23, configured to map a sample under test with the projection matrix to obtain the soft label of the sample under test, wherein the sample under test is a sample carrying no class label.
Compared with the prior art, the application first constructs the similarity measure matrix jointly from the training samples calibrated with class labels and the training samples carrying no class labels, further obtains the reconstruction coefficient matrix, and obtains the initial class label matrix by initialization; the initial class label matrix and the reconstruction coefficient matrix are then used to determine the soft labels of the unlabeled training samples. By applying $\ell_{2,1}$-norm regularization to the soft labels, the influence of the mixed signals in the original space in which the soft labels are formed on the accuracy of the classification results is effectively reduced; the $\ell_{2,1}$-norm-based measure effectively strengthens the robustness of measuring the regression residual between the embedded features and the soft labels; and computing the mapping with the $\ell_{2,1}$-norm-based measure enhances the descriptive power of the extracted features. In short, by introducing the $\ell_{2,1}$-norm regularization technique, the robustness of the system against mixed signals such as noise and heterogeneous data in the training samples is effectively improved, and the situation mentioned in the background art, where the influence of mixed signals in the original space lowers the classification accuracy of unlabeled samples, is avoided; that is, the influence of the mixed signals in the original space is reduced, the accuracy of the classification results of the samples under test is improved, and the classification performance is enhanced. In addition, the application is an out-of-sample extension model that efficiently completes the induction and prediction of out-of-sample data through the embedding, without introducing an additional reconstruction process, and therefore has good extensibility.
In the image classification system provided by the above embodiment, the preprocessing module may comprise:
a K-nearest-neighbor unit, configured to process each training sample with the K-nearest-neighbor algorithm to obtain the K nearest samples of each training sample, K being a positive integer;
an LLE reconstruction weight unit, configured to use the LLE reconstruction weight construction method and the K nearest samples of each training sample to obtain the similarity measure matrix corresponding to the training samples;
a preset processing unit, configured to normalize and symmetrize the similarity measure matrix to obtain the reconstruction coefficient matrix;
an initialization unit, configured to: obtain an initialization matrix, denoted $Y^{0} = [y_1, y_2, \ldots, y_{l+u}]$, the training set being denoted $X = [X_l, X_u] \in \mathbb{R}^{n\times(l+u)}$, where $\mathbb{R}^{n\times(l+u)}$ denotes the space of $n \times (l+u)$ matrices, $X_l = [x_1, x_2, \ldots, x_l]$ are the samples calibrated with class labels, $X_u = [x_{l+1}, x_{l+2}, \ldots, x_{l+u}]$ are the samples carrying no class labels, and $n$, $l$ and $u$ are positive integers; determine that any $y_i$ in the initialization matrix is a column vector, each column vector corresponding to the $i$-th training sample $x_i$ in the training set, $i = 1, 2, \ldots, l+u$; for any training sample $x_j$ calibrated with a class label, set $y_{i,j} = 1$ if $x_j$ belongs to the $i$-th class and $y_{i,j} = 0$ otherwise; for any training sample $x_j$ carrying no class label, set $y_{i,j} = 0$; and thereby obtain the initial class label matrix $Y$, where $j = 1, 2, \ldots, l+u$.
The LLE reconstruction weight unit may comprise:
an LLE reconstruction weight section, configured to obtain the similarity measure matrix with the following formula:

$$\min_{w_i}\ \Big\|x_i - \sum_{x_q \in N_K(x_i)} w_{i,q}\,x_q\Big\|^2 \quad \text{s.t.}\ \sum_{q} w_{i,q} = 1$$

where $N_K(x_i)$ denotes the $K$ nearest samples of the training sample $x_i$ ($q$ indexes these $K$ neighbors), $w_i$ is the reconstruction coefficient vector of the current training sample $x_i$, whose entries characterize the contribution of each neighbor when reconstructing $x_i$, and $W$ denotes the corresponding similarity (reconstruction coefficient) matrix.
In the image classification system provided by the above embodiment, the prediction module may comprise:
a prediction unit, configured to: obtain a sample under test $x_{new}$; embed the sample under test $x_{new}$ into the projection matrix $P$ via $P^{T}x_{new}$ to obtain a predicted vector; and determine the soft label corresponding to the element of maximum probability in the predicted vector as the soft label of the sample under test $x_{new}$.
In the image classification system provided by the above embodiment, the training module may comprise:
a training unit, configured to: obtain the projection matrix and the soft label matrix with the following formula:
subject to $f_i \ge 0,\ e^{T}f_i = 1$ for $i = 1, 2, \ldots, l+u$,
where $F$ denotes the soft label matrix, $P$ the projection matrix, $W$ the reconstruction coefficient matrix, $D$ a diagonal matrix with $D_{ii} = \sum_j W_{i,j}$, $f_i$ the soft label vector of the $i$-th training sample, and $X^{T}P - F^{T}$ the regression residual, which measures the degree of difference between $X^{T}P$ and the soft labels $F^{T}$; $\mu_i$, $\alpha$, $\beta$ and $\gamma$ are the corresponding balance parameters;
or obtain the projection matrix and the soft label matrix with the following formula:

$$\langle F, P\rangle = \arg\min_{F,P}\ \mathrm{tr}(F\Omega F^{T}) + \mathrm{tr}\big((F-Y)UD(F-Y)^{T}\big) + \alpha\|F^{T}\|_{2,1} + \beta\big(\|P\|_{2,1} + \gamma\|X^{T}P - F^{T}\|_{2,1}\big)$$

subject to $F \ge 0,\ e^{T}F = e^{T}$,
where $\Omega = I - W^{T} - W + W^{T}W$, $I$ is the identity matrix (only its diagonal elements are nonzero and equal to 1), and $U$ is a diagonal matrix whose $i$-th diagonal element is $\mu_i$; for training samples calibrated with labels, $\mu_i$ is set to $+\infty$, and for training samples carrying no class label, $\mu_i$ is set to 0.
Since the image classification system disclosed in the embodiment of the present invention corresponds to the robust learning model disclosed in the embodiment of the present invention, reference may be made to the related content of the above method embodiment for its description.
It should further be noted that the robust learning model provided by the embodiment of the present invention was tested on four real data sets: Extended Yale-B, AR face, CMU PIE face and CMU pose face. For computational efficiency, all real images were resized to 32x32, so in the experiments each image corresponds to a 1024-dimensional vector. To test the performance of the method, a number of samples were selected from each class to form the subset of labeled training samples in the training set, and an equal number of samples per class, without class labels, were selected as the subset of unlabeled training samples. These data sets were collected from many sources, so the test results are generally illustrative.
Table 1 compares the recognition results of the robust learning model provided by the present invention with the GFHF, LLGC, SLP, LNP, SDA, Lap-LDA, FME and ELP methods on the Extended Yale-B, AR face, CMU PIE and CMU pose face data sets, giving the average recognition rate and standard deviation of each method. In this experiment a number of samples from each class form the training set; to increase fairness, the experimental parameters of the compared methods were also carefully chosen. In the table, the number in brackets is the number of labeled training samples per class in the training set, and the unit of the recognition results is percent.
Table 1. Comparison of test recognition results.
In addition, when the images are face images, the face image test samples correspond to the samples under test in the present application and the face image training samples correspond to the training samples in the present application; the label prediction for obtaining the classes of the face image test samples is shown schematically in Fig. 3.
The experimental results show that the robust learning model proposed by the present invention is more effective than traditional label propagation algorithms and has higher applicability and robustness. The image classification system proposed by the present invention therefore has the same effect.
The above description of the disclosed embodiments enables those skilled in the art to implement or use the present invention. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A robust learning model, characterized by comprising:
carrying out initialization on a training set obtained in advance to obtain an initial class label matrix, wherein the training set comprises a predetermined number of training samples, and the training samples comprise samples whose classes are known and which are calibrated with the class labels corresponding to those classes, and samples whose classes are unknown and which carry no class labels;
processing the training samples by a construction method based on neighbor definition and reconstruction weights to obtain a similarity measure matrix corresponding to the training samples, and carrying out preset processing on the similarity measure matrix to obtain a reconstruction coefficient matrix;
based on the reconstruction coefficient matrix and the initial class label matrix, determining the soft labels of the unlabeled training samples through an actively balanced manifold smoothness term and a label fitting term, and carrying out $\ell_{2,1}$-norm regularization on the soft labels of the training samples in an iterative manner to obtain a projection matrix and a soft label matrix corresponding to the soft labels of the training samples;
mapping a sample under test with the projection matrix to obtain the soft label of the sample under test, wherein the sample under test is a sample whose class is unknown and which carries no class label.
2. The method according to claim 1, characterized in that processing the training samples by the construction method based on neighbor definition and reconstruction weights to obtain the similarity measure matrix corresponding to the training samples comprises:
processing each training sample with the K-nearest-neighbor algorithm to obtain the K nearest samples of each training sample, K being a positive integer;
using the LLE reconstruction weight construction method, together with the training samples and the K nearest samples of each training sample, to obtain the similarity measure matrix corresponding to the training samples.
3. The method according to claim 2, characterized in that carrying out the preset processing on the similarity measure matrix to obtain the reconstruction coefficient matrix comprises:
normalizing and symmetrizing the similarity measure matrix to obtain the reconstruction coefficient matrix.
4. The method according to claim 3, characterized in that mapping the sample under test with the projection matrix to obtain the soft label of the sample under test comprises:
obtaining a sample under test $x_{new}$;
embedding the sample under test $x_{new}$ into the projection matrix $P$ via $P^{T}x_{new}$ to obtain a predicted vector;
determining the soft label corresponding to the element of maximum probability in the predicted vector as the soft label of the sample under test $x_{new}$.
5. The method according to claim 2, characterized in that carrying out initialization on the training set obtained in advance to obtain the initial class label matrix comprises:
obtaining an initialization matrix, denoted $Y^{0} = [y_1, y_2, \ldots, y_{l+u}]$, the training set being denoted $X = [X_l, X_u] \in \mathbb{R}^{n\times(l+u)}$, where $\mathbb{R}^{n\times(l+u)}$ denotes the space of $n \times (l+u)$ matrices, $X_l = [x_1, x_2, \ldots, x_l]$ are the samples calibrated with class labels, $X_u = [x_{l+1}, x_{l+2}, \ldots, x_{l+u}]$ are the samples carrying no class labels, and $n$, $l$ and $u$ are positive integers;
determining that any $y_i$ in the initialization matrix is a column vector, each column vector corresponding to the $i$-th training sample $x_i$ in the training set, $i = 1, 2, \ldots, l+u$;
for any training sample $x_j$ calibrated with a class label, setting $y_{i,j} = 1$ if $x_j$ belongs to the $i$-th class and $y_{i,j} = 0$ otherwise; for any training sample $x_j$ carrying no class label, setting $y_{i,j} = 0$; thereby obtaining the initial class label matrix $Y$, where $j = 1, 2, \ldots, l+u$.
6. The method according to claim 5, characterized in that processing the training samples by the construction method based on neighbor definition and reconstruction weights to obtain the similarity measure matrix corresponding to the training samples comprises:
obtaining the similarity measure matrix with the following formula:

$$\min_{w_i}\ \Big\|x_i - \sum_{x_q \in N_K(x_i)} w_{i,q}\,x_q\Big\|^2 \quad \text{s.t.}\ \sum_{q} w_{i,q} = 1$$

where $N_K(x_i)$ denotes the $K$ nearest samples of the training sample $x_i$ ($q$ indexes these $K$ neighbors), $w_i$ is the reconstruction coefficient vector of the current training sample $x_i$, whose entries characterize the contribution of each neighbor when reconstructing $x_i$, and $W$ denotes the corresponding similarity (reconstruction coefficient) matrix.
7. The method according to claim 6, characterized in that carrying out $\ell_{2,1}$-norm regularization on the soft labels of the training samples in an iterative manner to obtain the projection matrix and the soft label matrix corresponding to the soft labels of the training samples comprises:
obtaining the projection matrix and the soft label matrix with the following formula:
subject to $f_i \ge 0,\ e^{T}f_i = 1$ for $i = 1, 2, \ldots, l+u$,
where $F$ denotes the soft label matrix, $P$ the projection matrix, $W$ the reconstruction coefficient matrix, $D$ a diagonal matrix with $D_{ii} = \sum_j W_{i,j}$, $f_i$ the soft label vector of the $i$-th training sample, and $X^{T}P - F^{T}$ the regression residual, which measures the degree of difference between $X^{T}P$ and the soft labels $F^{T}$; $\mu_i$, $\alpha$, $\beta$ and $\gamma$ are the corresponding balance parameters;
or obtaining the projection matrix and the soft label matrix with the following formula:

$$\langle F, P\rangle = \arg\min_{F,P}\ \mathrm{tr}(F\Omega F^{T}) + \mathrm{tr}\big((F-Y)UD(F-Y)^{T}\big) + \alpha\|F^{T}\|_{2,1} + \beta\big(\|P\|_{2,1} + \gamma\|X^{T}P - F^{T}\|_{2,1}\big)$$

subject to $F \ge 0,\ e^{T}F = e^{T}$,
where $\Omega = I - W^{T} - W + W^{T}W$, $I$ is the identity matrix (only its diagonal elements are nonzero and equal to 1), and $U$ is a diagonal matrix whose $i$-th diagonal element is $\mu_i$; for training samples calibrated with labels, $\mu_i$ is set to $+\infty$, and for training samples carrying no class label, $\mu_i$ is set to 0.
8. an image classification system, is characterized in that, comprising:
Pretreatment module, for carrying out initialization to the training set obtained in advance, obtain initial category label matrix, wherein, described training set comprises the training sample of predetermined amount, and described training sample comprises its classification known and demarcates has the sample of the class label corresponding with its classification and its classification unknown and the sample not demarcating class label; And for processing described training sample based on the building method of neighbour's definition and reconstruct power, obtain the similarity measure matrix corresponding with described training sample, carry out presetting process to described similarity measure matrix, obtain reconstruction coefficients matrix;
Training module, for based on described reconstruction coefficients matrix and described initial category label matrix, determined the soft label of the training sample not demarcating class label by the level and smooth item of active balance stream shape and label matching item, adopt the soft label of mode to described training sample of iteration to carry out l 2,1norm regularization, obtains projection matrix and the soft label matrix corresponding with the soft label of described training sample;
Prediction module, for utilizing described projection matrix to map sample to be tested, obtains the soft label of described sample to be tested; Wherein, described sample to be tested is the sample not demarcating class label.
9. The system according to claim 8, characterized in that the preprocessing module comprises:
a K-nearest-neighbor unit, configured to process each training sample using the K-nearest-neighbor algorithm to obtain the K nearest samples of each training sample, K being a positive integer; and
an LLE reconstruction weight unit, configured to obtain, using the construction method of LLE reconstruction weights, the similarity measure matrix corresponding to the training samples from the K nearest samples of each training sample.
10. The system according to claim 9, characterized in that the preprocessing module comprises:
a preset processing unit, configured to normalize and symmetrize the similarity measure matrix to obtain the reconstruction coefficient matrix (see the fourth sketch below).
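
A minimal sketch of the neighbor-reconstruction step named in claim 6, assuming NumPy and the standard LLE weight solution: every training sample is reconstructed from its K nearest samples under a sum-to-one constraint, and the coefficient vectors are stacked into the similarity measure matrix. The function names, the Gram-matrix regularization and the solver choice are illustrative assumptions, not the patented implementation.

```python
import numpy as np

def lle_reconstruction_weights(X, neighbor_idx, reg=1e-3):
    """Sketch of the neighbor-reconstruction step of claim 6.

    X            : (n, d) array, one training sample per row.
    neighbor_idx : (n, K) array, indices of the K nearest samples of each x_i.
    Returns S    : (n, n) similarity measure matrix; row i holds the
                   reconstruction coefficients of x_i over its K neighbors.
    """
    n, _ = X.shape
    K = neighbor_idx.shape[1]
    S = np.zeros((n, n))
    for i in range(n):
        idx = neighbor_idx[i]
        Z = X[idx] - X[i]                              # neighbors centered at x_i
        G = Z @ Z.T                                    # local K x K Gram matrix
        G += reg * (np.trace(G) + 1e-12) * np.eye(K)   # regularize for numerical stability
        w = np.linalg.solve(G, np.ones(K))             # unconstrained solution direction
        w /= w.sum()                                   # enforce the sum-to-one constraint
        S[i, idx] = w                                  # contribution of each neighbor to x_i
    return S
```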
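For claim 7, the sketch below only assembles the quantities the claim names — Ω = I − W^T − W + W^T W, the l2,1 norm, and the value of the matrix-form objective for given F and P — and deliberately stops short of the iterative solver, whose update rules are not given in this excerpt. Reading the fitting weight as the product of the diagonal matrices U and D, and replacing μ_i = +∞ with a large finite constant, are assumptions.

```python
import numpy as np

def l21_norm(M):
    """l2,1 norm: sum of the Euclidean norms of the rows of M."""
    return np.sum(np.linalg.norm(M, axis=1))

def omega(W):
    """Omega = I - W^T - W + W^T W from claim 7 (W: n x n reconstruction coefficients)."""
    I = np.eye(W.shape[0])
    return I - W.T - W + W.T @ W

def objective(F, P, X, Y, U, D, W, alpha, beta, gamma):
    """Value of the claim-7 matrix objective.

    F : (c, n) soft label matrix        P : (d, c) projection matrix
    X : (d, n) training samples         Y : (c, n) initial class label matrix
    U : (n, n) diagonal, U[i, i] = mu_i (the claim's +inf becomes a large constant here)
    D : (n, n) diagonal, D[i, i] = sum_j W[i, j]
    """
    smooth = np.trace(F @ omega(W) @ F.T)          # manifold-smoothness term
    fit = np.trace((F - Y) @ U @ D @ (F - Y).T)    # weighted label-fitting term
    return (smooth + fit
            + alpha * l21_norm(F.T)
            + beta * (l21_norm(P) + gamma * l21_norm(X.T @ P - F.T)))
```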
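For the prediction module of claim 8, the learned projection maps an uncalibrated test sample into label space; the claim only states that the projection yields the soft label, so taking the largest soft-label entry as the predicted class is an assumed, conventional decision rule.

```python
import numpy as np

def predict_soft_label(P, x_test):
    """Map a test sample x_test (length d) into label space with the learned
    projection P (d x c); the result is its soft label vector (length c)."""
    return P.T @ x_test

def predict_class(P, x_test):
    """Assumed decision rule: pick the class whose soft-label entry is largest."""
    return int(np.argmax(predict_soft_label(P, x_test)))
```

If the training pipeline centers or otherwise normalizes the training samples, the same transformation would have to be applied to x_test before projection.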
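For claims 9 and 10, which cover the preprocessing around the weight computation, the sketch below assumes Euclidean nearest neighbors, row normalization, and symmetrization by averaging with the transpose; the claims do not fix these particular choices.

```python
import numpy as np

def k_nearest_neighbors(X, K):
    """Indices of the K nearest samples of every training sample (claim 9)."""
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise Euclidean distances
    np.fill_diagonal(dists, np.inf)          # a sample is not its own neighbor
    return np.argsort(dists, axis=1)[:, :K]

def normalize_and_symmetrize(S, eps=1e-12):
    """Preset processing of claim 10, in an assumed form: row-normalize the
    similarity measure matrix, then symmetrize it by averaging with its transpose."""
    W = S / (S.sum(axis=1, keepdims=True) + eps)
    return 0.5 * (W + W.T)
```

Chained with the claim-6 sketch, W = normalize_and_symmetrize(lle_reconstruction_weights(X, k_nearest_neighbors(X, K))) reproduces, under the stated assumptions, the pipeline that produces the reconstruction coefficient matrix.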
CN201510726581.1A 2015-10-30 2015-10-30 A kind of image classification method and image classification system based on Robust Learning model Active CN105335756B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510726581.1A CN105335756B (en) 2015-10-30 2015-10-30 A kind of image classification method and image classification system based on Robust Learning model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510726581.1A CN105335756B (en) 2015-10-30 2015-10-30 A kind of image classification method and image classification system based on Robust Learning model

Publications (2)

Publication Number Publication Date
CN105335756A true CN105335756A (en) 2016-02-17
CN105335756B CN105335756B (en) 2019-06-11

Family

ID=55286271

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510726581.1A Active CN105335756B (en) 2015-10-30 2015-10-30 A kind of image classification method and image classification system based on Robust Learning model

Country Status (1)

Country Link
CN (1) CN105335756B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103714536A (en) * 2013-12-17 2014-04-09 深圳先进技术研究院 Sparse-representation-based multi-mode magnetic resonance image segmentation method and device
CN104794489A (en) * 2015-04-23 2015-07-22 苏州大学 Deep label prediction based inducing type image classification method and system
CN104933428A (en) * 2015-07-23 2015-09-23 苏州大学 Human face recognition method and device based on tensor description

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG ZHAO et al.: "Robust Bilinear Matrix Recovery by Tensor Low Rank Representation", 2014 International Conference on Neural Networks *

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105608478A (en) * 2016-03-30 2016-05-25 苏州大学 Combined method and system for extracting and classifying features of images
CN105608478B (en) * 2016-03-30 2019-12-06 苏州大学 image feature extraction and classification combined method and system
CN106529604A (en) * 2016-11-24 2017-03-22 苏州大学 Adaptive image tag robust prediction method and system
CN106529604B (en) * 2016-11-24 2019-09-27 苏州大学 A kind of adaptive image tag Robust Prediction method and system
CN106845358A (en) * 2016-12-26 2017-06-13 苏州大学 A kind of method and system of handwritten character characteristics of image identification
CN106845358B (en) * 2016-12-26 2020-11-10 苏州大学 Method and system for recognizing image features of handwritten characters
CN108108769A (en) * 2017-12-29 2018-06-01 咪咕文化科技有限公司 A kind of sorting technique of data, device and storage medium
CN108108769B (en) * 2017-12-29 2020-08-25 咪咕文化科技有限公司 Data classification method and device and storage medium
CN109993191B (en) * 2018-01-02 2021-07-06 中国移动通信有限公司研究院 Information processing method and device, electronic device and storage medium
CN109993191A (en) * 2018-01-02 2019-07-09 中国移动通信有限公司研究院 Information processing method and device, electronic equipment and storage medium
CN108848065A (en) * 2018-05-24 2018-11-20 中电运行(北京)信息技术有限公司 A kind of network inbreak detection method, system, medium and equipment
CN108848065B (en) * 2018-05-24 2020-12-11 中电运行(北京)信息技术有限公司 Network intrusion detection method, system, medium and equipment
CN108932539A (en) * 2018-05-31 2018-12-04 重庆微标科技股份有限公司 Medical inspection sample and medical tubes batch identification device and method
CN108932539B (en) * 2018-05-31 2021-10-08 重庆微标科技股份有限公司 Medical examination sample and medical test tube batch identification device and method
CN111008637A (en) * 2018-10-08 2020-04-14 北京京东尚科信息技术有限公司 Image classification method and system
CN110365583B (en) * 2019-07-17 2020-05-22 南京航空航天大学 Symbol prediction method and system based on bridge domain transfer learning
CN110365583A (en) * 2019-07-17 2019-10-22 南京航空航天大学 A kind of sign prediction method and system based on bridged domain transfer learning
CN110781926A (en) * 2019-09-29 2020-02-11 武汉大学 Support vector machine multi-spectral-band image analysis method based on robust auxiliary information reconstruction
CN110781926B (en) * 2019-09-29 2023-09-19 武汉大学 Multi-spectral band image analysis method of support vector machine based on robust auxiliary information reconstruction
CN111046933A (en) * 2019-12-03 2020-04-21 东软集团股份有限公司 Image classification method and device, storage medium and electronic equipment
CN111046933B (en) * 2019-12-03 2024-03-05 东软集团股份有限公司 Image classification method, device, storage medium and electronic equipment
CN111461345A (en) * 2020-03-31 2020-07-28 北京百度网讯科技有限公司 Deep learning model training method and device
CN111461345B (en) * 2020-03-31 2023-08-11 北京百度网讯科技有限公司 Deep learning model training method and device
CN113537389B (en) * 2021-08-05 2023-11-07 京东科技信息技术有限公司 Robust image classification method and device based on model embedding
CN113537389A (en) * 2021-08-05 2021-10-22 京东科技信息技术有限公司 Robust image classification method and device based on model embedding
CN114049567A (en) * 2021-11-22 2022-02-15 齐鲁工业大学 Self-adaptive soft label generation method and application in hyperspectral image classification
CN114049567B (en) * 2021-11-22 2024-02-23 齐鲁工业大学 Adaptive soft label generation method and application in hyperspectral image classification
CN115511012A (en) * 2022-11-22 2022-12-23 南京码极客科技有限公司 Class soft label recognition training method for maximum entropy constraint
CN115511012B (en) * 2022-11-22 2023-04-07 南京码极客科技有限公司 Class soft label identification training method with maximum entropy constraint

Also Published As

Publication number Publication date
CN105335756B (en) 2019-06-11

Similar Documents

Publication Publication Date Title
CN105335756A (en) Robust learning model and image classification system
CN109960800B (en) Weak supervision text classification method and device based on active learning
CN105354595A (en) Robust visual image classification method and system
CN108399163A (en) Bluebeard compound polymerize the text similarity measure with word combination semantic feature
CN110209823A (en) A kind of multi-tag file classification method and system
CN104794489A (en) Deep label prediction based inducing type image classification method and system
CN104966105A (en) Robust machine error retrieving method and system
US20120253792A1 (en) Sentiment Classification Based on Supervised Latent N-Gram Analysis
CN104463202A (en) Multi-class image semi-supervised classifying method and system
CN106649853A (en) Short text clustering method based on deep learning
CN109190120A (en) Neural network training method and device and name entity recognition method and device
CN114169442B (en) Remote sensing image small sample scene classification method based on double prototype network
CN112487805B (en) Small sample Web service classification method based on meta-learning framework
CN104933428A (en) Human face recognition method and device based on tensor description
CN109492625A (en) A kind of human face identification work-attendance checking method based on width study
CN109948735A (en) A kind of multi-tag classification method, system, device and storage medium
CN111881671B (en) Attribute word extraction method
CN104778482A (en) Hyperspectral image classifying method based on tensor semi-supervised scale cutting dimension reduction
CN110019822B (en) Few-sample relation classification method and system
CN109710725A (en) A kind of Chinese table column label restoration methods and system based on text classification
CN108920446A (en) A kind of processing method of Engineering document
CN111680131A (en) Document clustering method and system based on semantics and computer equipment
CN112905793B (en) Case recommendation method and system based on bilstm+attention text classification
CN109886315A (en) A kind of Measurement of Similarity between Two Images method kept based on core
CN113177417A (en) Trigger word recognition method based on hybrid neural network and multi-stage attention mechanism

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant