CN107169531B - Image classification dictionary learning method and device based on Laplacian embedding - Google Patents
Image classification dictionary learning method and device based on Laplacian embedding
- Publication number
- CN107169531B CN107169531B CN201710447133.7A CN201710447133A CN107169531B CN 107169531 B CN107169531 B CN 107169531B CN 201710447133 A CN201710447133 A CN 201710447133A CN 107169531 B CN107169531 B CN 107169531B
- Authority
- CN
- China
- Prior art keywords
- matrix
- laplacian
- dictionary
- sample
- optimal solution
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
- G06F18/2136—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on sparsity criteria, e.g. with an overcomplete basis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/28—Determining representative reference patterns, e.g. by averaging or distorting; Generating dictionaries
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
- Other Investigation Or Analysis Of Materials By Electrical Means (AREA)
Abstract
The invention discloses an image classification dictionary learning method and device based on Laplacian embedding, belonging to the technical field of image processing. By introducing a dictionary weight matrix into the traditional Laplacian constraint condition, different weights are assigned to the atoms of the image classification dictionary, so that atoms that contribute more to image classification accuracy can be given larger weights, improving the classification performance of the image classification dictionary of the embodiments of the present invention. At the same time, introducing the dictionary weight matrix into the traditional Laplacian constraint condition raises the dimensionality of the image classification dictionary matrix, which further improves the classification performance. Moreover, the embodiments of the present invention compute the neighborhood degree between two samples using a multi-distance weighted graph structure model, improving the accuracy with which the neighborhood degree between two samples is measured.
Description
Technical field
The present invention relates to the field of image processing, and more particularly to an image classification dictionary learning method and device based on Laplacian embedding.
Background technology
Image classification is an image processing method that separates target regions of different categories according to the different features reflected in image information. Image classification uses a computer to perform quantitative analysis of an image, assigning the image, or each pixel or region in the image, to one of several categories in place of human visual interpretation.
Image classification mainly includes index classification based on color features, image classification based on texture, image classification based on shape, and image classification based on spatial relationships. The most common image classification algorithms in the prior art are those based on sparse representation, which generally include image feature extraction, dictionary learning, image coding, and image classification.
Dictionary learning in the prior art uses the traditional dictionary learning method based on sparse representation: the dictionary is constructed directly from training samples, and every atom in the dictionary has the same representation weight for a sample, so the constructed dictionary is ultimately unfavorable for image classification. Moreover, because the traditional sparse-representation dictionary learning method constructs the dictionary directly from training samples, when there are few image samples in practical applications the number of training samples is insufficient, which is also unfavorable for the sparse representation of samples.
Summary of the invention
In order to solve the prior-art problem that every atom in the dictionary has the same representation weight for a sample, which ultimately makes the constructed dictionary unfavorable for image classification, embodiments of the present invention provide an image classification dictionary learning method and device based on Laplacian embedding, which can also avoid the problem that, in practical applications, few image samples lead to insufficient training samples and thus hinder the sparse representation of samples. The technical solution is as follows:
In a first aspect, an image classification dictionary learning method based on Laplacian embedding is provided. The method includes:
Step 100: obtaining a training sample feature set from a training sample database, wherein the training sample feature set includes at least 2 classes of training samples;
Step 110: training a sparse representation dictionary for the C-th class of training samples in the training sample feature set using a Laplacian constraint condition, wherein C is a positive integer greater than 0 and the Laplacian constraint condition is defined in terms of the following quantities:
φ(Xc) is the image feature matrix of the C-th class of training samples mapped into kernel space, Wc is the dictionary weight matrix, Sc is the sparse representation matrix of the C-th class of training samples, K is the number of columns of Wc, s_i^c denotes the i-th column of Sc, p_ij is a weight coefficient representing the degree of proximity between training samples x_i^c and x_j^c, x_i^c denotes the i-th sample of the C-th class of training samples, x_j^c denotes the j-th sample of the C-th class of training samples, and α and β are constants known as regularization factors;
Step 120: obtaining a neighbor relationship graph of the C-th class of training samples based on a graph structure model with multi-distance weighted measurement, wherein the graph structure model is a Laplacian embedded structure defined in terms of the following quantities: dis_1(x_i^c, x_j^c) denotes the 1st method of computing the distance between the i-th sample and the j-th sample of the C-th class of training samples, dis_k(x_i^c, x_j^c) denotes the k-th such method, t is a constant, and μ_k is the weight coefficient corresponding to the k-th method of computing the distance between the i-th sample and the j-th sample;
Step 130: seeking optimal solutions of the dictionary weight matrix and the sparse representation matrix in the Laplacian constraint condition based on an iterative update method;
Step 140: repeating steps 110 to 130 for each class of training samples in the training sample feature set until every class of training samples has been processed, and then outputting the trained image classification dictionary based on Laplacian embedding.
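Taken together, steps 100 to 140 amount to a per-class training loop over the training sample feature set. The sketch below (Python with NumPy) illustrates only that control flow; the body of `train_class_dictionary` is a deliberately simplified stand-in (a truncated SVD) for the Laplacian-constrained optimization of steps 110 to 130, and every name and shape in it is an assumption rather than the patent's reference implementation.

```python
import numpy as np

def train_class_dictionary(Xc, K):
    """Stand-in for steps 110-130: a truncated SVD supplies K atoms here.
    The actual method trains Wc and Sc under the Laplacian constraint."""
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    k = min(K, U.shape[1])
    atoms = U[:, :k]                      # D x k dictionary atoms
    codes = np.diag(s[:k]) @ Vt[:k, :]    # k x Nc representation of the class
    return atoms, codes

def learn_image_classification_dictionary(X, labels, K):
    """Step 140: repeat the per-class training for every class in the
    training sample feature set and collect the per-class results."""
    dictionaries = {}
    for c in np.unique(labels):           # one pass of steps 110-130 per class
        Xc = X[:, labels == c]            # feature columns of class c
        dictionaries[c] = train_class_dictionary(Xc, K)
    return dictionaries
```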
Optionally, obtaining the neighbor relationship graph of the C-th class of training samples based on the graph structure model with multi-distance weighted measurement is specifically:
determining the neighbor relationship graph of the C-th class of training samples based on at least two of the Euclidean distance, the Hamming distance, the cosine distance, and the Chebyshev distance together with their corresponding weight coefficients.
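As a sketch of this optional step, the proximity p_ij below mixes two of the four named distances (Euclidean and Chebyshev); the weights `mu1`, `mu2` and the scale `t`, as well as the exp(-dis/t) weighting itself, are illustrative assumptions rather than values taken from the patent.

```python
import numpy as np

def neighbor_relationship_graph(Xc, mu1=0.6, mu2=0.4, t=2.0):
    """p_ij mixes Euclidean and Chebyshev distances (two of the four
    distances named in the claim), each mapped through exp(-dis/t)
    and weighted by mu1, mu2. Samples are the columns of Xc."""
    n = Xc.shape[1]
    P = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            diff = Xc[:, i] - Xc[:, j]
            d_euc = np.linalg.norm(diff)      # Euclidean distance
            d_che = np.max(np.abs(diff))      # Chebyshev distance
            P[i, j] = mu1 * np.exp(-d_euc / t) + mu2 * np.exp(-d_che / t)
    return P
```

Larger p_ij means the two samples are closer neighbors; the diagonal equals mu1 + mu2 since both distances vanish there.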
Optionally, the step of seeking the optimal solutions of the dictionary weight matrix and the sparse representation matrix in the Laplacian constraint condition based on the iterative update method specifically includes:
Step 1301: setting the dictionary weight matrix to a fixed value, and seeking a first optimal solution of the sparse representation matrix in the Laplacian constraint condition based on the iterative update method, wherein the fixed value is a random number matrix of the same dimensions as the dictionary weight matrix;
Step 1302: fixing the sparse representation matrix at the first optimal solution, and seeking a second optimal solution of the dictionary weight matrix in the Laplacian constraint condition based on the iterative update method;
Step 1303: if the Laplacian constraint condition composed of the first optimal solution of the sparse representation matrix and the second optimal solution of the dictionary weight matrix does not converge, executing step 1301 and step 1302 in a loop;
Step 1304: if the Laplacian constraint condition composed of the first optimal solution of the sparse representation matrix and the second optimal solution of the dictionary weight matrix converges, determining the first optimal solution as the optimal solution of the sparse representation matrix, and determining the second optimal solution as the optimal solution of the dictionary weight matrix.
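Steps 1301 to 1304 describe a standard alternating scheme. The sketch below keeps only the data-fidelity term ||phi(Xc) - phi(Xc) Wc Sc||_F^2 (the Laplacian and sparsity terms are dropped for brevity), so each subproblem reduces to a least-squares fit; the random initialization of the weight matrix and the convergence test on the change of the objective mirror the steps above, while the tolerance and solver choice are assumptions.

```python
import numpy as np

def alternate_optimize(phi_X, K, tol=1e-8, max_iter=200, seed=0):
    """Steps 1301-1304 (sketch): fix Wc and solve for Sc, fix Sc and solve
    for Wc, until the objective ||phi_X - phi_X @ Wc @ Sc||_F^2 stops changing."""
    rng = np.random.default_rng(seed)
    Wc = rng.random((phi_X.shape[1], K))      # step 1301: random fixed start
    prev = np.inf
    for _ in range(max_iter):
        # step 1301: first optimal solution of Sc with Wc held fixed
        Sc, *_ = np.linalg.lstsq(phi_X @ Wc, phi_X, rcond=None)
        # step 1302: second optimal solution of Wc with Sc held fixed
        Wc = np.linalg.pinv(Sc)
        obj = np.linalg.norm(phi_X - phi_X @ Wc @ Sc) ** 2
        if abs(prev - obj) < tol:             # step 1304: converged, stop
            break
        prev = obj                            # step 1303: otherwise loop again
    return Wc, Sc, obj
```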
Optionally, setting the dictionary weight matrix to a fixed value and seeking the first optimal solution of the sparse representation matrix in the Laplacian constraint condition based on the iterative update method is specifically:
setting the dictionary weight matrix to a fixed value and, based on the iterative update method, updating each element of the sparse representation matrix according to an element-wise iterative update formula to seek the first optimal solution of the sparse representation matrix in the Laplacian constraint condition, wherein Sc is the sparse representation matrix of the C-th class of training samples, S_kn^c denotes the element in row k and column n of Sc, κ(Xc, Xc) = φ(Xc)^T φ(Xc), φ(Xc) is the image feature matrix of the C-th class of training samples mapped into kernel space, S_n^c denotes the n-th column of Sc, S_k^c denotes the k-th row of Sc, Wc is the dictionary weight matrix, K is the number of columns of Wc, s_i^c denotes the i-th column of Sc, and α and β are constants known as regularization factors.
Optionally, fixing the sparse representation matrix at the first optimal solution and seeking the second optimal solution of the dictionary weight matrix in the Laplacian constraint condition based on the iterative update method is specifically:
fixing the sparse representation matrix at the first optimal solution and, based on the iterative update method, updating the dictionary weight matrix according to an iterative update formula to seek the second optimal solution of the dictionary weight matrix in the Laplacian constraint condition, wherein φ(Xc) is the image feature matrix of the C-th class of training samples mapped into kernel space, Wc is the dictionary weight matrix, Sc is the sparse representation matrix of the C-th class of training samples, K is the number of columns of Wc, and w_k denotes the k-th column of the matrix.
In a second aspect, an image classification dictionary learning device based on Laplacian embedding is provided. The device includes:
a first acquisition module, configured to obtain a training sample feature set from a training sample database, wherein the training sample feature set includes at least 2 classes of training samples;
a first processing module, configured to train a sparse representation dictionary for the C-th class of training samples in the training sample feature set using a Laplacian constraint condition, wherein C is a positive integer greater than 0 and the Laplacian constraint condition is defined in terms of the following quantities: φ(Xc) is the image feature matrix of the C-th class of training samples mapped into kernel space, Wc is the dictionary weight matrix, Sc is the sparse representation matrix of the C-th class of training samples, K is the number of columns of Wc, s_i^c denotes the i-th column of Sc, p_ij is a weight coefficient representing the degree of proximity between training samples x_i^c and x_j^c, x_i^c denotes the i-th sample of the C-th class of training samples, x_j^c denotes the j-th sample of the C-th class of training samples, and α and β are constants known as regularization factors;
a second acquisition module, configured to obtain a neighbor relationship graph of the C-th class of training samples based on a graph structure model with multi-distance weighted measurement, wherein the graph structure model is a Laplacian embedded structure defined in terms of the following quantities: dis_1(x_i^c, x_j^c) denotes the 1st method of computing the distance between the i-th sample and the j-th sample of the C-th class of training samples, dis_k(x_i^c, x_j^c) denotes the k-th such method, t is a constant, and μ_k is the weight coefficient corresponding to the k-th method;
a second processing module, configured to seek optimal solutions of the dictionary weight matrix and the sparse representation matrix in the Laplacian constraint condition based on an iterative update method; and
a loop module, configured to repeat the execution steps of the first processing module, the second acquisition module, and the second processing module for each class of training samples in the training sample feature set until every class of training samples in the training sample feature set has been processed, and then output the trained image classification dictionary based on Laplacian embedding.
Optionally, the second acquisition module is specifically configured to:
determine the neighbor relationship graph of the C-th class of training samples based on at least two of the Euclidean distance, the Hamming distance, the cosine distance, and the Chebyshev distance together with their corresponding weight coefficients.
Optionally, the second processing module specifically includes:
a first solving submodule, configured to set the dictionary weight matrix to a fixed value and seek a first optimal solution of the sparse representation matrix in the Laplacian constraint condition based on the iterative update method, wherein the fixed value is a random number matrix of the same dimensions as the dictionary weight matrix;
a second solving submodule, configured to fix the sparse representation matrix at the first optimal solution and seek a second optimal solution of the dictionary weight matrix in the Laplacian constraint condition based on the iterative update method;
a first judging submodule, configured to, if the Laplacian constraint condition composed of the first optimal solution of the sparse representation matrix and the second optimal solution of the dictionary weight matrix does not converge, execute the execution steps of the first solving submodule and the second solving submodule in a loop; and
a second judging submodule, configured to, if the Laplacian constraint condition composed of the first optimal solution of the sparse representation matrix and the second optimal solution of the dictionary weight matrix converges, determine the first optimal solution as the optimal solution of the sparse representation matrix and the second optimal solution as the optimal solution of the dictionary weight matrix.
Optionally, the first solving submodule is specifically configured to:
set the dictionary weight matrix to a fixed value and, based on the iterative update method, update each element of the sparse representation matrix according to an element-wise iterative update formula to seek the first optimal solution of the sparse representation matrix in the Laplacian constraint condition, wherein Sc is the sparse representation matrix of the C-th class of training samples, S_kn^c denotes the element in row k and column n of Sc, κ(Xc, Xc) = φ(Xc)^T φ(Xc), φ(Xc) is the image feature matrix of the C-th class of training samples mapped into kernel space, S_n^c denotes the n-th column of Sc, S_k^c denotes the k-th row of Sc, Wc is the dictionary weight matrix, K is the number of columns of Wc, s_i^c denotes the i-th column of Sc, and α and β are constants known as regularization factors.
Optionally, the second solving submodule is specifically configured to:
fix the sparse representation matrix at the first optimal solution and, based on the iterative update method, update the dictionary weight matrix according to an iterative update formula to seek the second optimal solution of the dictionary weight matrix in the Laplacian constraint condition, wherein φ(Xc) is the image feature matrix of the C-th class of training samples mapped into kernel space, Wc is the dictionary weight matrix, Sc is the sparse representation matrix of the C-th class of training samples, K is the number of columns of Wc, and w_k denotes the k-th column of the matrix.
In a third aspect, a computer-readable storage medium is provided, on which a computer program is stored. When the computer program is executed by a processor, the following steps are implemented:
Step 100: obtaining a training sample feature set from a training sample database, wherein the training sample feature set includes at least 2 classes of training samples;
Step 110: training a sparse representation dictionary for the C-th class of training samples in the training sample feature set using a Laplacian constraint condition, wherein C is a positive integer greater than 0 and the Laplacian constraint condition is defined in terms of the following quantities: φ(Xc) is the image feature matrix of the C-th class of training samples mapped into kernel space, Wc is the dictionary weight matrix, Sc is the sparse representation matrix of the C-th class of training samples, K is the number of columns of Wc, s_i^c denotes the i-th column of Sc, p_ij is a weight coefficient representing the degree of proximity between training samples x_i^c and x_j^c, x_i^c denotes the i-th sample of the C-th class of training samples, x_j^c denotes the j-th sample of the C-th class of training samples, and α and β are constants known as regularization factors;
Step 120: obtaining a neighbor relationship graph of the C-th class of training samples based on a graph structure model with multi-distance weighted measurement, wherein the graph structure model is a Laplacian embedded structure defined in terms of the following quantities: dis_1(x_i^c, x_j^c) denotes the 1st method of computing the distance between the i-th sample and the j-th sample of the C-th class of training samples, dis_k(x_i^c, x_j^c) denotes the k-th such method, t is a constant, and μ_k is the weight coefficient corresponding to the k-th method;
Step 130: seeking optimal solutions of the dictionary weight matrix and the sparse representation matrix in the Laplacian constraint condition based on an iterative update method;
Step 140: repeating steps 110 to 130 for each class of training samples in the training sample feature set until every class of training samples has been processed, and then outputting the trained image classification dictionary based on Laplacian embedding.
The technical solutions provided by the embodiments of the present invention bring the following beneficial effects:
In the image classification dictionary learning method and device based on Laplacian embedding provided by the embodiments of the present invention, a dictionary weight matrix is introduced into the traditional Laplacian constraint condition and different weights are assigned to the atoms of the image classification dictionary, so that atoms that contribute more to image classification accuracy can be given larger weights, improving the classification performance of the image classification dictionary of the embodiments of the present invention. At the same time, introducing the dictionary weight matrix into the traditional Laplacian constraint condition raises the dimensionality of the image classification dictionary matrix, which further improves the classification performance. Moreover, the embodiments of the present invention compute the neighborhood degree between two samples using a multi-distance weighted graph structure model, improving the accuracy with which the neighborhood degree between two samples is measured.
Description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, make required in being described below to embodiment
Attached drawing is briefly described, it should be apparent that, drawings in the following description are only some embodiments of the invention, for
For those of ordinary skill in the art, without creative efforts, other are can also be obtained according to these attached drawings
Attached drawing.
Fig. 1 is a schematic flowchart of an image classification dictionary learning method based on Laplacian embedding provided by an embodiment of the present invention;
Fig. 2 is a schematic flowchart of the execution of step 130 in Fig. 1;
Fig. 3 is a structural block diagram of an image classification dictionary learning device based on Laplacian embedding provided by an embodiment of the present invention;
Fig. 4 is a structural block diagram of the second processing module 304 in Fig. 3.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
In order to solve the prior-art problem that every atom in the dictionary has the same representation weight for a sample, which ultimately makes the constructed dictionary unfavorable for image classification, embodiments of the present invention provide an image classification dictionary learning method and device based on Laplacian embedding. By introducing a dictionary weight matrix into the traditional Laplacian constraint condition, different weights are assigned to the atoms of the image classification dictionary, so that atoms that contribute more to image classification accuracy can be given larger weights, improving the classification performance of the image classification dictionary of the embodiments of the present invention. At the same time, introducing the dictionary weight matrix into the traditional Laplacian constraint condition raises the dimensionality of the image classification dictionary matrix, which further improves the classification performance. Moreover, the embodiments of the present invention compute the neighborhood degree between two samples using a multi-distance weighted graph structure model, improving the accuracy with which the neighborhood degree between two samples is measured.
The image classification dictionary learning method based on Laplacian embedding of the embodiments of the present invention is described in detail below with reference to Fig. 1 and Fig. 2.
Referring to Fig. 1, the image classification dictionary learning method based on Laplacian embedding of the embodiment of the present invention includes:
Step 100: obtaining a training sample feature set from a training sample database, wherein the training sample feature set includes at least 2 classes of training samples.
The embodiments of the present invention do not specifically limit the concrete steps of obtaining the training sample feature set from the training sample database; those skilled in the art can refer to the prior art. The step of obtaining the training sample feature set from the training sample database may include extracting image features and image tags from the images in the training sample database.
For example, taking the training sample database of the embodiment of the present invention as a face image sample database, the image classification dictionary of the embodiment of the present invention is used for the classification of face images. The step of obtaining the training sample feature set from the face image sample database is as follows: obtain a face image from the face image sample database along with its face image information, then convert the face image information from a two-dimensional image matrix into a one-dimensional column vector. This one-dimensional column vector is the face image feature obtained from the face image sample database (the face image feature may be one or a combination of HOG features, LBP features, and Haar features), and the person to whom the face image belongs is the image category, i.e., the image tag obtained from the face image sample database.
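A minimal sketch of the conversion just described, assuming a 2D grayscale face crop as input; raw flattened pixels stand in here for the HOG, LBP, or Haar features mentioned above, which would replace the flattening step in practice.

```python
import numpy as np

def face_image_to_feature(img):
    """Convert a 2D grayscale face image matrix into the one-dimensional
    column feature vector described above (raw pixels; HOG, LBP, or Haar
    extraction would replace this flattening in a real pipeline)."""
    v = img.astype(np.float64).reshape(-1, 1)    # 2D matrix -> column vector
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v           # unit-scale for stability

def build_feature_set(images, labels):
    """Stack per-image column vectors into the training sample feature set
    and keep the image tags (the person each face belongs to) alongside."""
    X = np.hstack([face_image_to_feature(im) for im in images])
    return X, np.asarray(labels)
```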
It should be noted that the training sample feature set obtained from the training sample database includes at least 2 classes of training samples; that is, the training samples in the training sample feature set belong to at least 2 image categories, i.e., carry at least 2 image tags. For example, a face image training sample feature set includes the face image training samples of at least 2 people.
Step 110: training a sparse representation dictionary for the C-th class of training samples in the training sample feature set using a Laplacian constraint condition.
Assume that the training sample set contains N classes of training samples, where N is a positive integer greater than 1. For example, if the training sample set contains 3 classes of training samples, the C-th class of training samples may be the 2nd class of training samples in the training sample database, where C is a positive integer greater than 0. Taking a face image training sample set as an example, the C-th class of training samples may be the face image training samples of the 2nd person in the set.
Further, the Laplacian constraint condition used by the image classification dictionary learning method of the embodiments of the present invention is defined in terms of the following quantities: φ(Xc) is the image feature matrix of the C-th class of training samples mapped into kernel space, Wc is the dictionary weight matrix, Sc is the sparse representation matrix of the C-th class of training samples, K is the number of columns of Wc, s_i^c denotes the i-th column of Sc, p_ij is a weight coefficient representing the degree of proximity between training samples x_i^c and x_j^c, x_i^c denotes the i-th sample of the C-th class of training samples, x_j^c denotes the j-th sample of the C-th class of training samples, and α and β are constants known as regularization factors.
It should be noted that, assuming the training samples in the training sample set comprise N classes in total, where N is a positive integer greater than 1, X = [X1, X2, ..., XC, ..., XN] ∈ R^(D×N) denotes the training samples, where D is the dimension of the image features, N is the total number of training samples in the training sample set, X1, X2, ..., XC, ..., XN respectively denote the classes of training samples, and N1, N2, ..., NC, ..., NN respectively denote the number of training samples per class, so that N = N1 + N2 + ... + NC + ... + NN. During the execution of step 110, the step needs to be repeated for each class of training samples in the training sample set; that is, for each class of training samples, its corresponding sparse representation dictionary is trained using the Laplacian constraint condition.
Secondly, it should be noted that the embodiments of the present invention neither specifically limit nor elaborate the process of mapping the C-th class of training samples to the image feature matrix φ(Xc) in kernel space; those skilled in the art can refer to the prior art. For example, the image feature matrix φ(Xc) of the C-th class of training samples mapped into kernel space can be obtained by a mapping from a low-dimensional space to a high-dimensional space.
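In practice the mapping φ is never formed explicitly; as is standard with the kernel trick, the later update steps only need the inner-product matrix κ(Xc, Xc) = φ(Xc)^T φ(Xc). A sketch using an RBF kernel, where the kernel choice and the bandwidth `gamma` are assumptions:

```python
import numpy as np

def kernel_matrix(Xc, gamma=0.5):
    """kappa(Xc, Xc) for an RBF kernel: entry (i, j) equals
    exp(-gamma * ||x_i - x_j||^2), i.e. the inner product of samples
    i and j after the implicit map phi into a high-dimensional space."""
    sq = np.sum(Xc ** 2, axis=0)                     # per-column squared norms
    d2 = sq[:, None] + sq[None, :] - 2 * Xc.T @ Xc   # pairwise squared distances
    return np.exp(-gamma * np.maximum(d2, 0.0))      # clamp tiny negatives
```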
Step 120: obtaining a neighbor relationship graph of the C-th class of training samples based on a graph structure model with multi-distance weighted measurement.
The graph structure model used in the embodiment of the present invention is a Laplacian embedded structure defined in terms of the following quantities: dis_1(x_i^c, x_j^c) denotes the 1st method of computing the distance between the i-th sample and the j-th sample of the C-th class of training samples of the embodiment of the present invention, dis_k(x_i^c, x_j^c) denotes the k-th such method, t is a constant, and μ_k is the weight coefficient corresponding to the k-th method of computing the distance between the i-th sample and the j-th sample.
It should be noted that the Laplacian embedding structure uses a graph embedding algorithm. A graph embedding algorithm aims to find an optimality criterion that lies in a low-dimensional space and can describe the essential characteristics of the original high-dimensional space. Graph embedding methods view the data as nodes in a mapping: each node represents a sample in the data, an undirected weighted graph describes the relationships between the data points, and the weight assigned to the edge between two nodes indicates the neighbor relationship between them.
Secondly, it should be noted that besides using a graph to represent the relationships between the data, graph embedding methods regard the data space as points on a manifold: it is first assumed that the data points lie on a high-dimensional manifold, and then the neighbor relationships of the data points in the graph are used to find a reasonable description method, or in other words an objective function, that finds a graph in a low-dimensional space approximating the graph of the original space, so that the data after dimensionality reduction keeps the neighbor relationships it had before dimensionality reduction. The quality of the graph model affects the result of graph-based dimensionality reduction; through extensive trials and summarization, the inventors found that describing the relationships between sample nodes with the graph structure model of the Laplacian embedding structure preserves these relationships very well after dimensionality reduction.
To accurately describe the degree of closeness between the i-th sample and the j-th sample in the class-C training samples, the embodiment of the present invention uses a Laplacian embedded graph structure model based on multi-distance weighted measurement. The methods used to compute the distance between the i-th sample and the j-th sample in the class-C training samples include at least two of Euclidean distance, Hamming distance, cosine distance and Chebyshev distance, together with their corresponding weight coefficients; that is, the embodiment of the present invention determines the neighbor relationship graph of the class-C training samples based on at least two of Euclidean distance, Hamming distance, cosine distance and Chebyshev distance and their corresponding weight coefficients.
Exemplarily, the first method in the formula for computing the distance between the i-th sample and the j-th sample in the class-C training samples may be the Euclidean distance between the i-th sample and the j-th sample; the second method may be the Hamming distance between the i-th sample and the j-th sample; the third method may be the cosine distance between the i-th sample and the j-th sample; and the fourth method may be the Chebyshev distance between the i-th sample and the j-th sample. Of course, this is merely illustrative; it does not mean that the methods of computing the distance between the i-th sample and the j-th sample in the class-C training samples covered by the formula are limited to these.
Exemplarily, the Euclidean distance, also known as the Euclidean metric, is a common way of expressing the distance between two points (or among several points). In Euclidean space, the distance between two points xi and xj is defined as d(xi, xj) = ||xi − xj||2, i.e. the square root of the sum of the squared coordinate differences.
Exemplarily, the Hamming distance is named after Richard Wesley Hamming. In information theory, the Hamming distance between two equal-length strings is the number of positions at which the corresponding characters of the two strings differ.
Exemplarily, the cosine distance is computed from the cosine of the angle between the two vectors, cos(xi, xj) = (xi)Txj / (||xi||·||xj||), wherein xi and xj are high-dimensional vectors.
Exemplarily, the Chebyshev distance is a metric on a vector space; the distance between two points is defined as the maximum of the absolute values of the differences of their coordinates. If xi and xj represent two image samples, xi and xj being high-dimensional vectors, the Chebyshev distance between the two is computed as dChebyshev(xi, xj) = max(|xi − xj|), wherein max(|xi − xj|) denotes subtracting the corresponding elements of the two vectors and then taking the maximum among the absolute values obtained.
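The four distance measures can be sketched directly. Note the combination into a single affinity pij below uses a heat-kernel weighting exp(−d/t), which is an assumed form added here for illustration; the patent's exact combining formula is the one given by its own equation:

```python
import numpy as np

def euclidean(xi, xj):
    return np.linalg.norm(xi - xj)

def hamming(xi, xj):
    # number of positions where the (binarized) features differ
    return int(np.sum((xi > 0) != (xj > 0)))

def cosine_distance(xi, xj):
    cos = (xi @ xj) / (np.linalg.norm(xi) * np.linalg.norm(xj))
    return 1.0 - cos

def chebyshev(xi, xj):
    return np.max(np.abs(xi - xj))

def affinity(xi, xj, mus, t=1.0):
    """Weighted combination of the four distances (assumed heat-kernel form)."""
    ds = [euclidean(xi, xj), hamming(xi, xj), cosine_distance(xi, xj), chebyshev(xi, xj)]
    return sum(mu * np.exp(-d / t) for mu, d in zip(mus, ds))

xi = np.array([1.0, 0.0, 2.0])
xj = np.array([1.0, 1.0, 0.0])
print(chebyshev(xi, xj))  # 2.0
```

Each μk in `mus` plays the role of the per-method weight coefficient described above.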
Step 130: Based on an iterative-update method, seek the optimal solutions of the dictionary weight matrix and the sparse representation matrix in the Laplacian constraint condition.
Based on the iterative-update method, the optimal solutions of the dictionary weight matrix and the sparse representation matrix in the Laplacian constraint condition are solved, the expression of the Laplacian constraint condition being as given above. Specifically, as shown in Fig. 2, the process of seeking the optimal solutions of the dictionary weight matrix and the sparse representation matrix in the Laplacian constraint condition based on the iterative-update method includes the following steps:
Step 1301: Set the dictionary weight matrix to a fixed value, and seek the first optimal solution of the sparse representation matrix in the Laplacian constraint condition based on the iterative-update method.
First, the dictionary weight matrix is set to a fixed value. The fixed value may be a random-number matrix corresponding to the dictionary weight matrix, i.e. in the initialization procedure the entries of the dictionary weight matrix are set to random numbers. The fixed value of the dictionary weight matrix is then substituted into the Laplacian constraint condition, which is thereby simplified.
Then, based on the simplified Laplacian constraint condition, the first optimal solution of the sparse representation matrix is sought using the iterative-update method. In the simplified Laplacian constraint condition, Sc is the sparse representation matrix of the class-C training samples; the corresponding symbols in the formula denote, respectively, the element in row k, column n of Sc, all elements of column n of Sc, all elements of row k of Sc, and the i-th row of Sc; κ(Xc,Xc) = φ(Xc)Tφ(Xc), where φ(Xc) is the image feature matrix of the class-C training samples mapped into the kernel space; Wc is the dictionary weight matrix; K is the number of columns of Wc; α and β are constants known as regularization factors.
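The kernel matrix κ(Xc, Xc) = φ(Xc)Tφ(Xc) never requires forming φ explicitly: a Gram matrix can be computed directly with a kernel function. The sketch below uses an RBF kernel purely as an illustrative choice of mapping; the patent does not fix a particular kernel:

```python
import numpy as np

def rbf_kernel_gram(X, gamma=0.5):
    """Gram matrix with entries k(x_i, x_j) = phi(x_i)^T phi(x_j) for an RBF kernel.
    Columns of X are samples, matching the phi(Xc)^T phi(Xc) convention."""
    sq = np.sum(X ** 2, axis=0)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X.T @ X)  # pairwise squared distances
    return np.exp(-gamma * d2)

X = np.random.default_rng(0).normal(size=(4, 6))  # 4-dim features, 6 samples
K = rbf_kernel_gram(X)
print(K.shape)  # (6, 6)
```

The resulting matrix is symmetric with unit diagonal, as any Gram matrix of normalized kernel evaluations must be.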
The expression of the first optimal solution of the sparse representation matrix sought with the iterative-update method is as given above, wherein ((Wc)Tκ(Xc,Xc)Wc)kk = 1 and E = (Wc)Tκ(Xc,Xc)Wc.
Step 1302: With the sparse representation matrix set to the first optimal solution, seek the second optimal solution of the dictionary weight matrix in the Laplacian constraint condition based on the iterative-update method.
After step 1301 finishes, the sparse representation matrix is set to the first optimal solution sought by the iterative-update method and substituted into the Laplacian constraint condition, which is thereby simplified, wherein φ(Xc) is the image feature matrix of the class-C training samples mapped into the kernel space, Wc is the dictionary weight matrix, Sc is the sparse representation matrix of the class-C training samples, K is the number of columns of Wc, and the subscripted term denotes the k-th column of the matrix.
Then, using the iterative-update method and based on the simplified Laplacian constraint condition, the second optimal solution of the dictionary weight matrix is sought. The expression of the second optimal solution of the dictionary weight matrix is as given above, with F = Sc(Sc)T and the subscripted term denoting the k-th column of the matrix Wc.
Step 1303: If the Laplacian constraint condition formed by the first optimal solution of the sparse representation matrix and the second optimal solution of the dictionary weight matrix does not converge, loop through the above step 1301 and step 1302 until the Laplacian constraint condition formed by the computed first optimal solution of the sparse representation matrix and second optimal solution of the dictionary weight matrix converges, and then execute step 1304.
Step 1304: If the Laplacian constraint condition formed by the first optimal solution of the sparse representation matrix and the second optimal solution of the dictionary weight matrix converges, determine the first optimal solution as the optimal solution of the sparse representation matrix, and determine the second optimal solution as the optimal solution of the dictionary weight matrix.
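The alternation of steps 1301–1304 can be sketched as follows. This is a schematic stand-in: the patent's closed-form iterative updates and its Laplacian and sparsity terms are replaced by plain least-squares sub-steps, purely to show the fix-W-solve-S / fix-S-solve-W loop with a convergence check:

```python
import numpy as np

def alternating_learning(X, n_atoms, n_iter=30, tol=1e-8, seed=0):
    """Alternately update codes S (W fixed) and weights W (S fixed) for the
    model X ~ X @ W @ S, stopping when the objective stops changing."""
    rng = np.random.default_rng(seed)
    W = rng.random((X.shape[1], n_atoms))    # step 1301: random initial W
    history = []
    for _ in range(n_iter):
        D = X @ W                             # dictionary atoms built from samples
        S = np.linalg.lstsq(D, X, rcond=None)[0]        # codes, W fixed
        W = np.linalg.pinv(X) @ X @ np.linalg.pinv(S)   # weights, S fixed
        history.append(np.linalg.norm(X - X @ W @ S))   # objective value
        if len(history) > 1 and abs(history[-2] - history[-1]) < tol:
            break                             # step 1304: converged
    return W, S, history

X = np.random.default_rng(1).normal(size=(5, 6))
W, S, hist = alternating_learning(X, n_atoms=6)
print(hist[-1] < 1e-6)  # with as many atoms as samples the fit is exact
```

The loop structure, a convergence test on the objective, and the random initialization of W mirror steps 1301–1304; everything inside each sub-step is an illustrative substitute.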
Step 140: For each class of training samples in the training sample feature set, repeat the above steps 110 to 130 until every class of training samples in the training sample feature set has been processed, and then output the trained image classification dictionary based on Laplacian embedding.
For each class of training samples in the training sample feature set, step 110, step 120 and step 130 above are repeated in turn, obtaining, for each class of training samples, the optimal solution of the sparse representation matrix and the optimal solution of the dictionary weight matrix in the corresponding Laplacian constraint condition; once every class of training samples in the training sample feature set has been processed, the trained image classification dictionary based on Laplacian embedding is output.
Exemplarily, the expression of the trained image classification dictionary based on Laplacian embedding is as given above, wherein φ(y) is the image sample to be classified, φ(Xc) is the image feature matrix of the class-C training samples mapped into the kernel space, Wc is the dictionary weight matrix, Sc is the sparse representation matrix of the class-C training samples, and α is known as a regularization factor.
The process of carrying out image classification with the image classification dictionary based on Laplacian embedding of the embodiment of the present invention is not repeated here. Exemplarily, using the dictionary weight matrix Wc and φ(Xc) obtained by training for each class, the sample φ(y) is sparsely coded according to the formula and then solved, yielding the expression for sc, wherein the relevant term denotes the k-th element of the vector sc. Then the fitting error of φ(y) with respect to the subspace constituted by each class of samples is calculated, denoted r(c), with the calculation formula of r(c) as given above. Comparing the fitting errors of φ(y) for each class of samples, the image to be classified belongs to the class with the smallest fitting error.
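The classification rule — code the query against each class dictionary and pick the class with the smallest fitting error r(c) — can be sketched as below. Least squares stands in for the patent's sparse-coding formula, and the tiny one-atom dictionaries are hypothetical:

```python
import numpy as np

def classify(y, dictionaries):
    """Return the index c of the class dictionary that best fits y
    (smallest residual r(c) = ||y - D_c s_c||)."""
    errors = []
    for Dc in dictionaries:                        # Dc plays the role of phi(Xc) Wc
        sc = np.linalg.lstsq(Dc, y, rcond=None)[0]
        errors.append(np.linalg.norm(y - Dc @ sc))  # fitting error r(c)
    return int(np.argmin(errors))

D0 = np.array([[1.0], [0.0], [0.0]])  # class 0: spans the first axis
D1 = np.array([[0.0], [1.0], [0.0]])  # class 1: spans the second axis
y = np.array([0.9, 0.1, 0.0])
print(classify(y, [D0, D1]))  # 0
```

A query near the first axis fits class 0's subspace with a small residual and class 1's with a large one, so the argmin over r(c) recovers the intended label.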
In the image classification dictionary learning method and device based on Laplacian embedding provided by the embodiments of the present invention, a dictionary weight matrix is introduced into the traditional Laplacian constraint condition, assigning a different weight to each atom in the image classification dictionary. This makes it possible to assign larger weights to the atoms in the image classification dictionary that contribute to classification accuracy, improving the classification performance of the image classification dictionary of the embodiment of the present invention when used for image classification. Meanwhile, by introducing the dictionary weight matrix into the traditional Laplacian constraint condition, the embodiment of the present invention raises the dimension of the image classification dictionary matrix, which can further improve the classification performance of the image classification dictionary of the embodiment of the present invention when used for image classification. Moreover, the embodiment of the present invention calculates the degree of closeness between two samples using a multi-distance weighted graph structure model, improving the accuracy with which the degree of closeness between two samples is measured.
As shown in Fig. 3, an embodiment of the present invention provides an image classification dictionary learning device based on Laplacian embedding. The device includes a first acquisition module 301, a first processing module 302, a second acquisition module 303, a second processing module 304 and a loop module 305.
The first acquisition module 301 is configured to acquire a training sample feature set from a training sample database, wherein the training sample feature set includes at least 2 classes of training samples;
the first processing module 302 is configured to train, according to the class-C training samples in the training sample feature set, the sparse representation dictionary of the class-C training samples using a Laplacian constraint condition, wherein C is a positive integer greater than 0 and the Laplacian constraint condition is as given above;
φ(Xc) is the image feature matrix of the class-C training samples mapped into the kernel space, Wc is the dictionary weight matrix, Sc is the sparse representation matrix of the class-C training samples, K is the number of columns of Wc, the relevant term denotes the i-th row of Sc, pij is a weight coefficient representing the degree of closeness between the i-th training sample and the j-th training sample of the class-C training samples, and α and β are constants known as regularization factors;
the second acquisition module 303 is configured to obtain the neighbor relationship graph of the class-C training samples based on a graph structure model of multi-distance weighted measurement, wherein the graph structure model is the Laplacian embedding structure given above, in which the first term represents the first method of computing the distance between the i-th sample and the j-th sample in the class-C training samples, the k-th term represents the k-th such method, t is a constant, and μk is the weight coefficient corresponding to the k-th method of computing the distance between the i-th sample and the j-th sample in the class-C training samples;
the second processing module 304 is configured to seek, based on an iterative-update method, the optimal solutions of the dictionary weight matrix and the sparse representation matrix in the Laplacian constraint condition;
the loop module 305 is configured to, for each class of training samples in the training sample feature set, repeat the execution steps of the first processing module 302, the second acquisition module 303 and the second processing module 304 until every class of training samples in the training sample feature set has been processed, and then output the trained image classification dictionary based on Laplacian embedding.
Preferably, the second acquisition module 303 is specifically configured to:
determine the neighbor relationship graph of the class-C training samples based on at least two of Euclidean distance, Hamming distance, cosine distance and Chebyshev distance together with their corresponding weight coefficients.
Optionally, as shown in Fig. 4, the second processing module 304 specifically includes:
a first solving submodule 3041, configured to set the dictionary weight matrix to a fixed value and seek, based on the iterative-update method, the first optimal solution of the sparse representation matrix in the Laplacian constraint condition, wherein the fixed value is a random-number matrix corresponding to the dictionary weight matrix;
a second solving submodule 3042, configured to set the sparse representation matrix to the first optimal solution and seek, based on the iterative-update method, the second optimal solution of the dictionary weight matrix in the Laplacian constraint condition;
a first judging submodule 3043, configured to, if the Laplacian constraint condition formed by the first optimal solution of the sparse representation matrix and the second optimal solution of the dictionary weight matrix does not converge, loop through the execution steps of the first solving submodule 3041 and the second solving submodule 3042;
a second judging submodule 3044, configured to, if the Laplacian constraint condition formed by the first optimal solution of the sparse representation matrix and the second optimal solution of the dictionary weight matrix converges, determine the first optimal solution as the optimal solution of the sparse representation matrix and determine the second optimal solution as the optimal solution of the dictionary weight matrix.
Optionally, the first solving submodule 3041 is specifically configured to:
set the dictionary weight matrix to the fixed value and, based on the iterative-update method, seek according to the formula given above the first optimal solution of the sparse representation matrix in the Laplacian constraint condition, wherein Sc is the sparse representation matrix of the class-C training samples; the corresponding symbols in the formula denote, respectively, the element in row k, column n of Sc, all elements of column n of Sc, all elements of row k of Sc, and the i-th row of Sc; κ(Xc,Xc) = φ(Xc)Tφ(Xc), where φ(Xc) is the image feature matrix of the class-C training samples mapped into the kernel space; Wc is the dictionary weight matrix; K is the number of columns of Wc; α and β are constants known as regularization factors.
Optionally, the second solving submodule 3042 is specifically configured to:
set the sparse representation matrix to the first optimal solution and, based on the iterative-update method, seek according to the formula given above the second optimal solution of the dictionary weight matrix in the Laplacian constraint condition, wherein φ(Xc) is the image feature matrix of the class-C training samples mapped into the kernel space, Wc is the dictionary weight matrix, Sc is the sparse representation matrix of the class-C training samples, K is the number of columns of Wc, and the subscripted term denotes the k-th column of the matrix.
It should be noted that when the image classification dictionary learning device based on Laplacian embedding provided by the above embodiment trains and generates the image classification dictionary for each class of training samples, the division into the above functional modules is merely an example; in practical applications the above functions may be allocated to different functional modules as needed, i.e. the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the image classification dictionary learning device based on Laplacian embedding provided by the above embodiment and the embodiments of the image classification dictionary learning method based on Laplacian embedding belong to the same concept; for the specific implementation process, refer to the method embodiments, which are not repeated here.
Based on the same inventive concept, an embodiment of the present invention further provides a computer-readable storage medium. The computer-readable storage medium may be a computer-readable storage medium included in a memory, or a computer-readable storage medium that exists independently and is not assembled into a terminal. The computer-readable storage medium stores one or more computer programs, which are executed by one or more processors to perform the image classification dictionary learning method based on Laplacian embedding shown in Fig. 1 and Fig. 2. In addition, the computer-readable storage medium provided by the above embodiment and the above embodiments of the image classification dictionary learning method based on Laplacian embedding belong to the same concept; for the specific implementation process, refer to the method embodiments, which are not repeated here.
A person of ordinary skill in the art will appreciate that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
The foregoing are merely preferred embodiments of the present invention and are not intended to limit the invention; any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.
Claims (10)
1. An image classification dictionary learning method based on Laplacian embedding, characterized in that the method includes:
Step 100: acquiring a training sample feature set from a training sample database, wherein the training sample feature set includes at least 2 classes of training samples;
Step 110: training, according to the class-C training samples in the training sample feature set, the sparse representation dictionary of the class-C training samples using a Laplacian constraint condition, wherein C is a positive integer greater than 0 and the Laplacian constraint condition is:
φ(Xc) is the image feature matrix of the class-C training samples mapped into the kernel space, Wc is the dictionary weight matrix, Sc is the sparse representation matrix of the class-C training samples, K is the number of columns of Wc, the relevant term denotes the i-th row of Sc, pij is a weight coefficient representing the degree of closeness between the i-th training sample and the j-th training sample of the class-C training samples, and α and β are constants known as regularization factors;
Step 120: obtaining the neighbor relationship graph of the class-C training samples based on a graph structure model of multi-distance weighted measurement, wherein the graph structure model is the Laplacian embedding structure, in which the first term represents the first method of computing the distance between the i-th sample and the j-th sample in the class-C training samples, the k-th term represents the k-th such method, t is a constant, and μk is the weight coefficient corresponding to the k-th method of computing the distance between the i-th sample and the j-th sample in the class-C training samples;
Step 130: seeking, based on an iterative-update method, the optimal solutions of the dictionary weight matrix and the sparse representation matrix in the Laplacian constraint condition;
Step 140: for each class of training samples in the training sample feature set, repeating the above steps 110 to 130 until every class of training samples in the training sample feature set has been processed, and then outputting the trained image classification dictionary based on Laplacian embedding.
2. The method according to claim 1, characterized in that obtaining the neighbor relationship graph of the class-C training samples based on the graph structure model of multi-distance weighted measurement is specifically:
determining the neighbor relationship graph of the class-C training samples based on at least two of Euclidean distance, Hamming distance, cosine distance and Chebyshev distance together with their corresponding weight coefficients.
3. The method according to claim 1 or 2, characterized in that the step of seeking, based on the iterative-update method, the optimal solutions of the dictionary weight matrix and the sparse representation matrix in the Laplacian constraint condition specifically includes:
Step 1301: setting the dictionary weight matrix to a fixed value and seeking, based on the iterative-update method, the first optimal solution of the sparse representation matrix in the Laplacian constraint condition, wherein the fixed value is a random-number matrix corresponding to the dictionary weight matrix;
Step 1302: setting the sparse representation matrix to the first optimal solution and seeking, based on the iterative-update method, the second optimal solution of the dictionary weight matrix in the Laplacian constraint condition;
Step 1303: if the Laplacian constraint condition formed by the first optimal solution of the sparse representation matrix and the second optimal solution of the dictionary weight matrix does not converge, looping through step 1301 and step 1302;
Step 1304: if the Laplacian constraint condition formed by the first optimal solution of the sparse representation matrix and the second optimal solution of the dictionary weight matrix converges, determining the first optimal solution as the optimal solution of the sparse representation matrix, and determining the second optimal solution as the optimal solution of the dictionary weight matrix.
4. The method according to claim 3, characterized in that setting the dictionary weight matrix to the fixed value and seeking, based on the iterative-update method, the first optimal solution of the sparse representation matrix in the Laplacian constraint condition is specifically:
setting the dictionary weight matrix to the fixed value and, based on the iterative-update method, seeking according to the formula the first optimal solution of the sparse representation matrix in the Laplacian constraint condition, wherein Sc is the sparse representation matrix of the class-C training samples; the corresponding symbols in the formula denote, respectively, the element in row k, column n of Sc, all elements of column n of Sc, all elements of row k of Sc, and the i-th row of Sc; κ(Xc,Xc) = φ(Xc)Tφ(Xc), where φ(Xc) is the image feature matrix of the class-C training samples mapped into the kernel space; Wc is the dictionary weight matrix; K is the number of columns of Wc; α and β are constants known as regularization factors.
5. The method according to claim 4, characterized in that setting the sparse representation matrix to the first optimal solution and seeking, based on the iterative-update method, the second optimal solution of the dictionary weight matrix in the Laplacian constraint condition is specifically:
setting the sparse representation matrix to the first optimal solution and, based on the iterative-update method, seeking according to the formula the second optimal solution of the dictionary weight matrix in the Laplacian constraint condition, wherein φ(Xc) is the image feature matrix of the class-C training samples mapped into the kernel space, Wc is the dictionary weight matrix, Sc is the sparse representation matrix of the class-C training samples, K is the number of columns of Wc, and the subscripted term denotes the k-th column of the matrix.
6. An image classification dictionary learning device based on Laplacian embedding, characterized in that the device includes:
a first acquisition module, configured to acquire a training sample feature set from a training sample database, wherein the training sample feature set includes at least 2 classes of training samples;
a first processing module, configured to train, according to the class-C training samples in the training sample feature set, the sparse representation dictionary of the class-C training samples using a Laplacian constraint condition, wherein C is a positive integer greater than 0 and the Laplacian constraint condition is:
φ(Xc) is the image feature matrix of the class-C training samples mapped into the kernel space, Wc is the dictionary weight matrix, Sc is the sparse representation matrix of the class-C training samples, K is the number of columns of Wc, the relevant term denotes the i-th row of Sc, pij is a weight coefficient representing the degree of closeness between the i-th training sample and the j-th training sample of the class-C training samples, and α and β are constants known as regularization factors;
a second acquisition module, configured to obtain the neighbor relationship graph of the class-C training samples based on a graph structure model of multi-distance weighted measurement, wherein the graph structure model is the Laplacian embedding structure, in which the first term represents the first method of computing the distance between the i-th sample and the j-th sample in the class-C training samples, the k-th term represents the k-th such method, t is a constant, and μk is the weight coefficient corresponding to the k-th method of computing the distance between the i-th sample and the j-th sample in the class-C training samples;
a second processing module, configured to seek, based on an iterative-update method, the optimal solutions of the dictionary weight matrix and the sparse representation matrix in the Laplacian constraint condition;
a loop module, configured to, for each class of training samples in the training sample feature set, repeat the execution steps of the first processing module, the second acquisition module and the second processing module until every class of training samples in the training sample feature set has been processed, and then output the trained image classification dictionary based on Laplacian embedding.
7. The device according to claim 6, characterized in that the second acquisition module is specifically configured to:
determine the neighbor relationship graph of the class-C training samples based on at least two of Euclidean distance, Hamming distance, cosine distance and Chebyshev distance together with their corresponding weight coefficients.
8. The device according to claim 6 or 7, characterized in that the second processing module specifically includes:
a first solving submodule, configured to set the dictionary weight matrix to a fixed value and seek, based on the iterative-update method, the first optimal solution of the sparse representation matrix in the Laplacian constraint condition, wherein the fixed value is a random-number matrix corresponding to the dictionary weight matrix;
a second solving submodule, configured to set the sparse representation matrix to the first optimal solution and seek, based on the iterative-update method, the second optimal solution of the dictionary weight matrix in the Laplacian constraint condition;
a first judging submodule, configured to, if the Laplacian constraint condition formed by the first optimal solution of the sparse representation matrix and the second optimal solution of the dictionary weight matrix does not converge, loop through the execution steps of the first solving submodule and the second solving submodule;
a second judging submodule, configured to, if the Laplacian constraint condition formed by the first optimal solution of the sparse representation matrix and the second optimal solution of the dictionary weight matrix converges, determine the first optimal solution as the optimal solution of the sparse representation matrix and determine the second optimal solution as the optimal solution of the dictionary weight matrix.
9. The device according to claim 8, characterized in that the first solving submodule is specifically configured to:
set the dictionary weight matrix to the fixed value and, based on the iterative-update method, seek according to the formula the first optimal solution of the sparse representation matrix in the Laplacian constraint condition, wherein Sc is the sparse representation matrix of the class-C training samples; the corresponding symbols in the formula denote, respectively, the element in row k, column n of Sc, all elements of column n of Sc, all elements of row k of Sc, and the i-th row of Sc; κ(Xc,Xc) = φ(Xc)Tφ(Xc), where φ(Xc) is the image feature matrix of the class-C training samples mapped into the kernel space; Wc is the dictionary weight matrix; K is the number of columns of Wc; α and β are constants known as regularization factors.
10. The device according to claim 9, characterized in that the second solving submodule is specifically configured to:
set the sparse representation matrix to the first optimal solution and, based on the iterative-update method, seek according to the formula the second optimal solution of the dictionary weight matrix in the Laplacian constraint condition, wherein φ(Xc) is the image feature matrix of the class-C training samples mapped into the kernel space, Wc is the dictionary weight matrix, Sc is the sparse representation matrix of the class-C training samples, K is the number of columns of Wc, and the subscripted term denotes the k-th column of the matrix.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710447133.7A CN107169531B (en) | 2017-06-14 | 2017-06-14 | A kind of image classification dictionary learning method and device based on Laplce's insertion |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710447133.7A CN107169531B (en) | 2017-06-14 | 2017-06-14 | A kind of image classification dictionary learning method and device based on Laplce's insertion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107169531A CN107169531A (en) | 2017-09-15 |
CN107169531B true CN107169531B (en) | 2018-08-17 |
Family
ID=59818480
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710447133.7A Expired - Fee Related CN107169531B (en) | 2017-06-14 | 2017-06-14 | A kind of image classification dictionary learning method and device based on Laplce's insertion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107169531B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009048641A (en) * | 2007-08-20 | 2009-03-05 | Fujitsu Ltd | Character recognition method and character recognition device |
US8478005B2 (en) * | 2011-04-11 | 2013-07-02 | King Fahd University Of Petroleum And Minerals | Method of performing facial recognition using genetically modified fuzzy linear discriminant analysis |
CN104392251A (en) * | 2014-11-28 | 2015-03-04 | 西安电子科技大学 | Hyperspectral image classification method based on semi-supervised dictionary learning |
CN106557782A (en) * | 2016-11-22 | 2017-04-05 | 青岛理工大学 | Hyperspectral image classification method and device based on category dictionary |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1755067A1 (en) * | 2005-08-15 | 2007-02-21 | Mitsubishi Electric Information Technology Centre Europe B.V. | Mutual-rank similarity-space for navigating, visualising and clustering in image databases |
JP5506272B2 (en) * | 2009-07-31 | 2014-05-28 | 富士フイルム株式会社 | Image processing apparatus and method, data processing apparatus and method, and program |
CN104318261B (en) * | 2014-11-03 | 2016-04-27 | 河南大学 | A kind of sparse representation face identification method representing recovery based on figure embedding low-rank sparse |
CN105574548B (en) * | 2015-12-23 | 2019-04-26 | 北京化工大学 | It is a kind of based on sparse and low-rank representation figure high-spectral data dimension reduction method |
CN105868796B (en) * | 2016-04-26 | 2019-03-01 | 中国石油大学(华东) | The design method of linear discriminant rarefaction representation classifier based on nuclear space |
- 2017-06-14: CN application CN201710447133.7A granted as patent CN107169531B (en), not active: Expired - Fee Related
Non-Patent Citations (1)
Title |
---|
Research on Semi-Supervised Image Classification Based on Sparse Coding; Chen Hanying; China Master's Theses Full-text Database; 2014-10-31 (No. 10); full text * |
Also Published As
Publication number | Publication date |
---|---|
CN107169531A (en) | 2017-09-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110659665B (en) | Model construction method for features of different dimensions, and image recognition method and device | |
CN110135267A (en) | Detection method for small and weak objects in large-scene SAR images | |
Liu et al. | An improved evaluation framework for generative adversarial networks | |
CN104200240B (en) | Sketch retrieval method based on content-adaptive hash coding | |
CN110163258A (en) | Zero-shot learning method and system based on a semantic-attribute attention reassignment mechanism | |
CN109615014A (en) | Data classification system and method based on KL-divergence optimization | |
CN112464865A (en) | Facial expression recognition method based on mixed pixel and geometric features | |
CN110516095A (en) | Weakly supervised deep-hashing social image retrieval method and system based on semantic transfer | |
CN105243139A (en) | Deep-learning-based three-dimensional model retrieval method and retrieval device | |
CN108960299A (en) | Recognition method for multi-class motor-imagery EEG signals | |
CN106897669A (en) | Pedestrian re-identification method based on consistent iterative multi-view transfer learning | |
CN109192298A (en) | Brain disease diagnosis algorithm based on brain networks | |
CN104881852B (en) | Image segmentation method based on immune clone and fuzzy kernel clustering | |
CN107871107A (en) | Face authentication method and device | |
CN106991355A (en) | Face recognition method based on a topology-preserving analytical dictionary learning model | |
CN109255289A (en) | Cross-age face recognition method based on a unified-formulation generative model | |
Huang et al. | MultiSpectralNet: Spectral clustering using deep neural network for multi-view data | |
CN107066964B (en) | Fast face classification method based on collaborative representation | |
CN113032613B (en) | Three-dimensional model retrieval method based on an interactive attention convolutional neural network | |
Li et al. | Dating ancient paintings of Mogao Grottoes using deeply learnt visual codes | |
CN113011243A (en) | Facial expression analysis method based on a capsule network | |
CN106250918A (en) | Gaussian mixture model matching method based on an improved earth mover's distance | |
CN109325513A (en) | Image classification network training method based on massive single-class single images | |
CN110399814B (en) | Face recognition method based on local linear representation and domain-adaptive metric | |
CN110007764A (en) | Gesture skeleton recognition method, device, system and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 20180817 |