CN105184320B - The image classification method of non-negative sparse coding based on structural similarity - Google Patents

The image classification method of non-negative sparse coding based on structural similarity Download PDF

Info

Publication number
CN105184320B
CN105184320B CN201510566662.XA CN201510566662A
Authority
CN
China
Prior art keywords
code book
matrix
image
sparse coding
structural similarity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510566662.XA
Other languages
Chinese (zh)
Other versions
CN105184320A (en)
Inventor
石伟伟
王进军
龚怡宏
张世周
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Jiaotong University filed Critical Xian Jiaotong University
Priority to CN201510566662.XA priority Critical patent/CN105184320B/en
Publication of CN105184320A publication Critical patent/CN105184320A/en
Application granted granted Critical
Publication of CN105184320B publication Critical patent/CN105184320B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 - Salient features, e.g. scale invariant feature transforms [SIFT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides an image classification method of non-negative sparse coding based on structural similarity, comprising the following steps: densely extracting SIFT features from every image in the image data set to be processed; randomly selecting a number of SIFT features for learning the codebook of the data set; establishing a non-negative sparse coding model based on structural similarity; solving for the codebook of the image data set with the randomly selected SIFT features; with the codebook fixed, encoding all SIFT features; aggregating the codes of every image of the data set by the spatial pyramid max pooling method to obtain the feature vector of every image; dividing the image data set into a training set and a test set, and training a classifier with the spatial-pyramid-max-pooled feature vectors of the training images and the corresponding image labels; for any image, inputting its spatial-pyramid-max-pooled feature vector into the trained classifier to obtain the predicted class of the image.

Description

The image classification method of non-negative sparse coding based on structural similarity
Technical field:
The invention belongs to the field of computer vision image classification, and in particular relates to an image classification method of non-negative sparse coding based on structural similarity.
Background art:
A critical function of the early stages of the biological visual system is to eliminate, as far as possible, the statistical redundancy of input stimuli. The response of the primary visual cortex to environmental stimuli satisfies sparsity: only a small number of neurons are activated, and the corresponding code is a sparse code. Sparse coding, in general, expresses a signal as a combination of a set of bases while requiring that only a few of the bases are needed to reconstruct the signal. Sparse coding has been widely applied in computer vision, image and signal processing and other fields, for example in signal reconstruction, signal denoising, image feature extraction and classification.
The structural similarity index defines structural information as the attributes that reflect the structure of a signal independently of luminance and contrast, and models distortion as a combination of three different factors: luminance, contrast and structure. The mean is used as the estimate of luminance, the standard deviation as the estimate of contrast, and the covariance as the measure of the degree of structural similarity.
The traditional sparse coding method is based on reconstruction in the minimum mean-square-error sense: it makes the sum of squared reconstruction errors as small as possible while making the corresponding code as sparse as possible, sparsity meaning that as many elements as possible of the code vector are zero. Most current sparse-coding-based image classification methods are built on models that minimize the sum of squared reconstruction errors, but the sum of squared errors, as a criterion of distortion, does not match the visual characteristics of the human eye. Research has shown that the main function of the human visual system is to extract structured information from the images and video in the visual field; since the sum of squared errors does not fully account for the visual characteristics of the human eye, traditional sparse coding cannot properly evaluate the structural similarity between the reconstructed image and the original image.
Summary of the invention:
In view of the above deficiencies of the prior art, the object of the present invention is to provide an image classification method of non-negative sparse coding based on structural similarity.
In order to achieve the above objectives, the present invention adopts the following technical scheme:
The image classification method of non-negative sparse coding based on structural similarity comprises the following steps:
1) densely extracting SIFT features from every image in the image data set to be processed;
2) after SIFT features have been extracted from all images of the data set, randomly selecting 50,000 to 500,000 SIFT features for learning the codebook of the data set to be processed;
3) establishing the non-negative sparse coding model based on structural similarity;
4) according to steps 2) and 3), solving for the codebook of the image data set with the randomly selected SIFT features;
5) after the codebook of the image data set has been solved, fixing the codebook and encoding all SIFT features;
6) aggregating the codes of every image of the data set by the spatial pyramid max pooling method to obtain the feature vector of every image;
7) dividing the image data set into a training set and a test set, and training a classifier with the spatial-pyramid-max-pooled feature vectors of the training images and the corresponding image labels;
8) for any image, inputting its spatial-pyramid-max-pooled feature vector into the trained classifier to obtain the predicted class of the image.
A further improvement of the present invention is that, in step 1), the SIFT features of each image of the data set are extracted densely using pixel blocks of 16 to 32 pixels and a sliding step of 6 to 10 pixels.
A further improvement of the present invention is that, in step 3), the codebook is A = [a_1, a_2, …, a_k], each column of A being a basis vector and k being the number of columns of the codebook; the sparse code of the SIFT feature vector x_i over the codebook A is s_i; the coding matrix is defined as S = [s_1, s_2, …, s_n], i.e. each column of S is the structural similarity sparse code of the corresponding SIFT feature. The objective function of the non-negative sparse coding model is as follows:

$$\min_{A,S}\ \sum_{i=1}^{n}\bigl(1-\mathrm{SSIM}(x_i, A s_i)\bigr) + \lambda\sum_{i=1}^{n}\sum_{j=1}^{k} s_{ji}, \qquad \text{s.t. } s_{ji}\ge 0,\ \|a_j\|=1,$$

where i = 1, 2, …, n, n being the number of randomly selected SIFT features; j = 1, 2, …, k, k being the number of columns of the codebook A; and s_{ji} is the j-th component of the sparse code s_i.

Written in matrix form:

$$\min_{A,S}\ \sum_{i=1}^{n}\bigl(1-\mathrm{SSIM}(x_i, A s_i)\bigr) + \lambda\|S\|_{m_1}, \qquad \text{s.t. } S\ge 0,\ \|a_i\|=1,$$

where ||a_i|| = 1 means that the modulus, i.e. the L2 norm, of each column of the codebook is 1; ||S||_{m1} denotes the m1 norm of the matrix S, equal to the sum of the absolute values of all elements of the matrix;
S ≥ 0 means that every element of the coding matrix S is non-negative;
λ is the weight coefficient that adjusts the degree of structural distortion against the sparsity of the code; the larger λ is, the sparser the corresponding code, with 0.05 ≤ λ ≤ 0.5;
SSIM(·) is the structural similarity function.
A further improvement of the present invention is that step 4) is implemented as follows:
401) according to the 50,000 to 500,000 SIFT features randomly selected in step 2), the value of the weight coefficient λ is fixed;
402) the codebook A is initialized: the codebook A^(0) is initialized at random, each column of A is normalized to unit modulus by a_i ← a_i/||a_i||, and t is set to 1; A^(t) denotes the t-th iterate of the codebook and A^(0) its initial value;
403) the corresponding coding matrix S^(0) is initialized at random, every element of the matrix being initialized to a random number between 0 and 1; S^(t) denotes the t-th iterate of the coding matrix S;
404) the coding matrix S is updated, where: the symbol ← denotes assignment, the value of the right-hand side being assigned to the variable on the left; the symbols ⊙ and ⊘ denote the Hadamard product and Hadamard division, i.e. the elementwise product and elementwise division of matrices; the square-root function sqrt(·) applied to a matrix takes the square root of each element of the matrix; B⁺ = max(B, 0) and B⁻ = −min(B, 0), where, as with sqrt(·), max(·) and min(·) compare each element of the matrix with 0 and keep the larger or the smaller element respectively; H is a matrix of the same order as S with all elements equal to 1;
405) the codebook A is updated as follows:

A^(t) ← A^(t−1) − σ∇_A E

where σ is the gradient descent step length for optimizing the codebook, σ = h · max_{ij}|A_{ij}| / avg{|∇_A E|}, the range of h being 0.01 to 0.1; the numerator max|A_{ij}| is the largest absolute value among the elements of the codebook A; the denominator avg{|∇_A E|} is the average absolute value of the elements of the gradient matrix ∇_A E of E with respect to the codebook A; E = F + ρ Σ_{j=1}^{k}(a_j^T a_j − 1)², where the superscript T denotes the transpose of a matrix or vector, ρ denotes the penalty coefficient and F is the objective function of the non-negative sparse coding model;
406) if the relative change of the objective function value between two successive iterations is less than 10⁻⁶, or the predetermined number of iterations has been reached, the procedure stops and the codebook matrix A is obtained; otherwise t = t + 1 is set and the procedure goes to step 403).
A further improvement of the present invention is that, in the gradient ∇_A E, D is a diagonal matrix whose diagonal elements are given by the model, and I is the identity matrix of the same order as D.
A further improvement of the present invention is that, in step 5), the codebook is fixed in the objective function, and the structural-similarity-based non-negative sparse code s of a SIFT feature x is computed according to

$$s = \arg\min_{s\ge 0}\ \bigl(1-\mathrm{SSIM}(x, As)\bigr) + \lambda\sum_{j=1}^{k} s_j.$$
Compared with the prior art, the present invention has the following advantages:
By introducing the structural similarity index into the sparse coding method, the present invention changes the traditional coding scheme and improves the quality of the codes, so that the coding better matches the coding behaviour of the human visual system. The resulting codes are then aggregated by spatial pyramid max pooling to obtain the feature vector of an image, and the pooled feature vectors are applied to image classification.
The present invention eliminates the minimum-mean-square-error reconstruction distortion metric of traditional sparse coding and uses structural similarity as the measure of reconstruction distortion, proposing a non-negative sparse coding model based on structural similarity. The method can encode any vectorized feature and is a coding method that matches the visual characteristics of the human eye.
The present invention densely extracts SIFT features from an image, applies structural-similarity-based non-negative sparse coding to the SIFT features, and obtains a single feature vector for the whole image by spatial pyramid max pooling of the codes; the resulting feature vector is used for image classification.
Detailed description of the invention:
Fig. 1 is a flow chart of the image classification method of non-negative sparse coding based on structural similarity according to the present invention.
Fig. 2 is spatial pyramid maximum pond schematic diagram.
Specific embodiment:
The present invention is described in further detail below with reference to the drawings and embodiments.
The present invention seeks the corresponding sparse code from the viewpoint of structural similarity: structural similarity is introduced as the key measure of information preservation, a non-negative sparsity constraint is added, and the non-negative sparse coding model based on structural similarity is obtained. Non-negativity is required of the codes because non-negative codes have better stability in applications.
Given signals x and y, x, y ∈ R^N, the structural similarity is defined as

$$\mathrm{SSIM}(x,y)=\frac{(2\mu_x\mu_y+C_1)(2\sigma_{x,y}+C_2)}{(\mu_x^2+\mu_y^2+C_1)(\sigma_x^2+\sigma_y^2+C_2)},$$

where x_i is the i-th component of the signal x and y_i the i-th component of the signal y; μ_x and μ_y denote the means of x and y; σ_x and σ_y denote the standard deviations of x and y; σ_{x,y} denotes the covariance of x and y; and 0 < C_1, C_2 ≪ 1 are two small positive constants chosen to keep the denominators away from zero. The closer the structural similarity is to 1, the more similar the two features are. Therefore 1 − SSIM(x, y) can be used as a measure of feature distortion.
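The definition above can be sketched in a few lines of Python. The function name and the concrete values of the stabilizers C1 and C2 are editorial assumptions, since the text only requires 0 < C1, C2 ≪ 1:

```python
import numpy as np

def ssim(x, y, c1=1e-4, c2=1e-4):
    """Structural similarity of two feature vectors, following the definition
    above: mean as the luminance estimate, standard deviation as the contrast
    estimate, covariance as the structure measure.  c1 = c2 = 1e-4 are assumed
    values for the small stabilizers C1, C2."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()           # sigma_x^2, sigma_y^2
    cov = ((x - mx) * (y - my)).mean()  # sigma_{x,y}
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

For identical inputs the value is exactly 1, so 1 − SSIM is zero distortion; the measure is also symmetric in its two arguments.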
As shown in Fig. 1, the image classification method of non-negative sparse coding based on structural similarity according to the present invention comprises the following steps:
1): For each picture of the data set, SIFT features are extracted densely, using pixel blocks of a certain size (for example 16 × 16 pixels) and a predetermined horizontal and vertical sliding step (for example a step of 6 pixels).
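The dense sampling of step 1) can be sketched as follows. Only the patch grid is enumerated here; computing the actual 128-dimensional SIFT descriptor of each patch (gradient orientation histograms) is outside the scope of this sketch, and the function name is an editorial choice:

```python
def dense_patch_grid(height, width, patch=16, step=6):
    """Return the (row, col) top-left corners of every patch x patch block
    that fits inside a height x width image when slid with the given step,
    matching the 16 x 16 block / 6 pixel step example of step 1)."""
    return [(r, c)
            for r in range(0, height - patch + 1, step)
            for c in range(0, width - patch + 1, step)]
```

Each returned corner would then be handed to a SIFT descriptor routine; the per-image descriptors are the inputs of the coding stage.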
2): From all the extracted SIFT features, n SIFT features (for example 200,000) are randomly chosen; the selected features form a matrix, denoted X = [x_1, x_2, …, x_n], where each column x_i ∈ R^{p×1} (i = 1, 2, …, n) is one SIFT feature vector and p is the dimension of the extracted SIFT features (for 128-dimensional SIFT features, p = 128). The selected SIFT features are used to learn the codebook of the data set.
3): Let the codebook be A = [a_1, a_2, …, a_k]; each column of A is a basis vector and the number of columns of the codebook is k. The sparse code of the SIFT feature vector x_i (i = 1, 2, …, n) over the codebook A is s_i (i = 1, 2, …, n); define the coding matrix S = [s_1, s_2, …, s_n]. The objective function of the non-negative sparse coding model is:

$$\min_{A,S}\ \sum_{i=1}^{n}\bigl(1-\mathrm{SSIM}(x_i, A s_i)\bigr) + \lambda\sum_{i=1}^{n}\sum_{j=1}^{k} s_{ji}, \qquad \text{s.t. } s_{ji}\ge 0,\ \|a_j\|=1,$$

where i = 1, 2, …, n, n being the number of randomly selected SIFT features; j = 1, 2, …, k, k being the number of columns of the codebook; and s_{ji} is the j-th component of the sparse code s_i.

Written in matrix form:

$$\min_{A,S}\ \sum_{i=1}^{n}\bigl(1-\mathrm{SSIM}(x_i, A s_i)\bigr) + \lambda\|S\|_{m_1}, \qquad \text{s.t. } S\ge 0,\ \|a_i\|=1,$$

where ||a_i|| = 1 means the modulus, i.e. the L2 norm, of each column of the codebook is 1 (the L2 norm of a vector equals the square root of the sum of the squares of its elements); requiring each basis of the codebook to have unit modulus prevents trivial solutions. ||S||_{m1} denotes the m1 norm of the matrix S, equal to the sum of the absolute values of all its elements. S ≥ 0 means every element of S is non-negative. In the objective function, the first term ensures that the reconstructed feature is structurally as similar as possible to the original feature: it is a measure of reconstruction distortion. In the traditional sparse coding model, reconstruction distortion is measured by the sum of squared errors; in the present invention it is measured by the structural similarity index. The second term of the objective ensures the sparsity of the code. λ is the weight coefficient that balances the two terms: the larger λ is, the sparser the corresponding code. In implementations λ is a constant greater than zero whose value can be tuned for different data sets. The present invention seeks the optimal codebook and the corresponding structural similarity sparse codes that minimize the objective function.
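As a concrete illustration, the matrix-form objective can be evaluated directly. The sketch below bundles a vector SSIM (with assumed stabilizers C1 = C2 = 1e-4) and the objective F(A, S); the names `ssim` and `objective` are editorial:

```python
import numpy as np

def ssim(x, y, c1=1e-4, c2=1e-4):
    """Vector SSIM; c1, c2 are assumed values for the stabilizers C1, C2."""
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (x.var() + y.var() + c2))

def objective(X, A, S, lam):
    """F(A, S) = sum_i (1 - SSIM(x_i, A s_i)) + lam * ||S||_{m1},
    where columns of X are features and columns of S their codes (S >= 0)."""
    R = A @ S                                         # reconstructions A s_i
    fit = sum(1.0 - ssim(X[:, i], R[:, i]) for i in range(X.shape[1]))
    return fit + lam * np.abs(S).sum()                # m1 norm of S
```

With a codebook whose columns are the unit-normalized features themselves and a diagonal non-negative code matrix, the reconstruction is exact and the distortion term vanishes, leaving only the sparsity penalty.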
4): Using the SIFT features selected in step 2) and the objective function of step 3), the optimal codebook of the image data set to be processed is solved. The objective of the structural-similarity-based non-negative sparse coding model is optimized with an alternating strategy, as follows:
Step 1: form the matrix X from the selected SIFT features; fix the value of the weight coefficient λ.
Step 2: initialize the codebook A.
Step 3: fix A and optimize S, using a convex optimization method for non-negative matrices.
Step 4: fix S and optimize A, using gradient descent.
Repeat the third and fourth steps until the algorithm converges.
When the objective function converges, the resulting codebook is the codebook of the data set. Convergence can be judged by either of the following two criteria: (a) the relative change of the objective function value between two successive iterations is less than 10⁻⁶; (b) the relative change of the F-norm of the difference between two successive codebooks is less than 10⁻⁶. At convergence, A is the required codebook. The F-norm (Frobenius norm) of a matrix equals the square root of the sum of the squares of all elements of the matrix.
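Criterion (a) above amounts to a one-line relative-change check; the tolerance 10⁻⁶ comes from the text, while the function name is an editorial choice:

```python
def converged(f_prev, f_curr, tol=1e-6):
    """Criterion (a): relative change of the objective value between two
    successive iterations below tol (10**-6 in the text)."""
    return abs(f_prev - f_curr) < tol * abs(f_prev)
```

Criterion (b) is analogous, with the Frobenius norm of the codebook difference in place of the objective value.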
The detailed analysis and solution procedure are as follows:
Differentiating the objective yields the gradient expressions used below, where H is a matrix of the same order as S (i.e. with the same numbers of rows and columns) whose elements are all 1.
Since the constraint of the objective function requires each column of the codebook to have unit modulus, the constrained problem in the codebook is converted into an unconstrained one by the penalty function method.
That is, let E = F + ρ Σ_{j=1}^{k}(a_j^T a_j − 1)², where the superscript T denotes the transpose of a matrix or vector and ρ denotes the penalty coefficient, ρ = 1 to 1000.
In the resulting gradient ∇_A E, D is a diagonal matrix whose diagonal elements are given by the model, and I is the identity matrix of the same order as D.
Let σ = h · max_{ij}|A_{ij}| / avg{|∇_A E|} be the gradient descent step length for optimizing the codebook, with h in the range 0.01 to 0.1 (typically h = 0.05); the numerator max|A_{ij}| is the largest absolute value among the elements of the codebook, and the denominator is the average absolute value of the elements of the gradient matrix ∇_A E of E with respect to the codebook A.
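The adaptive step-length rule and the resulting codebook update can be sketched as follows; `codebook_step` is a hypothetical name, and h = 0.05 follows the typical value given above:

```python
import numpy as np

def codebook_step(A, grad_E, h=0.05):
    """One gradient descent step A <- A - sigma * grad_E with
    sigma = h * max_{ij}|A_ij| / avg|grad_E|, h in [0.01, 0.1]."""
    sigma = h * np.abs(A).max() / np.abs(grad_E).mean()
    return A - sigma * grad_E
```

Scaling the step by the ratio of the largest codebook entry to the average gradient magnitude keeps the update size commensurate with the scale of the codebook, regardless of the raw gradient magnitude.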
Optimization algorithm:
Step 1: initialize the codebook A^(0) at random and normalize each column of the codebook to unit modulus by a_i ← a_i/||a_i||; set t = 1. A^(t) denotes the t-th iterate of the codebook and A^(0) its initial value.
Step 2: initialize the corresponding coding matrix S^(0) at random (every element of the matrix is initialized to a random number between 0 and 1). S^(t) denotes the t-th iterate of S.
Step 3: update the coding matrix S.
Step 4: update the codebook A.
Step 5: if the algorithm has converged, or the predetermined number of iterations has been reached, stop and output A and S; otherwise set t = t + 1 and go to Step 3.
Note: the symbol ← denotes assignment, the value of the right-hand side being assigned to the variable on the left; the symbols ⊙ and ⊘ denote the Hadamard product and Hadamard division, i.e. the elementwise product and elementwise division of matrices (each pair of corresponding elements is operated on separately); the square-root function sqrt(·) applied to a matrix takes the square root of each element of the matrix; for any matrix M, M⁺ = max(M, 0) and M⁻ = −min(M, 0), where, as with sqrt(·), max(·) and min(·) compare each element of the matrix with 0 and keep the larger or the smaller element respectively.
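The M⁺/M⁻ decomposition used in the update rules can be illustrated in a few lines; the function name is editorial:

```python
import numpy as np

def pos_neg_split(M):
    """Split M into its elementwise positive and negative parts:
    M_plus = max(M, 0), M_minus = -min(M, 0); both are nonnegative
    and M = M_plus - M_minus, as used in the multiplicative updates."""
    return np.maximum(M, 0), -np.minimum(M, 0)
```

Keeping the positive and negative parts of a gradient separate is what allows a multiplicative (Hadamard) update to preserve the non-negativity of the coding matrix S.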
5): The codebook is then fixed, and the sparse code of every SIFT feature of each picture in the data set is computed from the structural-similarity-based non-negative sparse coding model. With the codebook fixed in the objective function, the structural-similarity-based non-negative sparse code s of a SIFT feature x is computed according to s = argmin_{s ≥ 0} (1 − SSIM(x, As)) + λ Σ_j s_j; the method used to compute the sparse code of x is the same as the method used in step 4) to optimize the coding matrix S with the codebook A fixed.
6): For every image of the image data set to be processed, the sparse codes of all its SIFT features are aggregated by spatial pyramid max pooling (SPM max pooling), producing one high-dimensional feature vector; this high-dimensional vector is the feature vector of the image used for classification and other visual tasks.
As shown in Fig. 2, spatial pyramid max pooling divides the original picture into grids, typically 1 × 1, 2 × 2 and 4 × 4; each grid cell can be regarded as a larger image block. Within each image block, the sparse codes of all SIFT features falling in the block are max-pooled dimension by dimension: in each dimension, the pooled value is the maximum absolute value of that dimension over all the codes. Pooling each cell yields a feature vector for that image block; the pooled vectors of all cells or image blocks are concatenated, and the resulting high-dimensional feature is the feature vector of the whole picture. This procedure is called spatial pyramid max pooling; its schematic diagram is shown in Fig. 2 of the drawings.
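The pooling just described can be sketched as follows. The function name and argument layout are editorial; plain max is used in place of max-of-absolute-value because the codes are non-negative, so the two coincide:

```python
import numpy as np

def spm_max_pool(codes, rows, cols, height, width, levels=(1, 2, 4)):
    """Spatial pyramid max pooling.  `codes` is the (k, n) matrix of the
    nonnegative sparse codes of an image's n local features; rows/cols give
    each feature's pixel location.  For the 1x1, 2x2 and 4x4 grids, each
    cell keeps the dimension-wise maximum of the codes falling inside it;
    all cells are concatenated, giving a vector of length k*(1+4+16) = 21k."""
    k, n = codes.shape
    parts = []
    for g in levels:
        pooled = np.zeros((g, g, k))
        for i in range(n):
            r = min(rows[i] * g // height, g - 1)  # grid cell of feature i
            c = min(cols[i] * g // width, g - 1)
            pooled[r, c] = np.maximum(pooled[r, c], codes[:, i])
        parts.append(pooled.reshape(-1))
    return np.concatenate(parts)
```

The output length depends only on the codebook size k and the pyramid levels, not on the number of local features, which is what makes the pooled vector usable as a fixed-length image descriptor.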
7): From every class of the image data set to be processed, several pictures are selected at random, or predetermined images are used, as the training set. A classifier is trained with the spatial-pyramid-max-pooled features of the training images and the corresponding labels; once training is complete, the parameters of the classifier are determined. An SVM classifier is generally used.
8): The spatial-pyramid-max-pooled feature vector of a test picture is input to the classifier, which outputs the corresponding predicted class label.
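The train/predict split of steps 7) and 8) can be illustrated with a self-contained stand-in classifier. The text generally uses an SVM; to keep this sketch dependency-free, a nearest-centroid classifier (an editorial substitution, not the patent's method) shows the same interface on pooled feature vectors:

```python
import numpy as np

class NearestCentroidClassifier:
    """Stand-in for the SVM of step 7): any classifier mapping the pooled
    feature vector of an image to a class label fits the pipeline.
    Training stores one mean vector per class; prediction returns the
    label of the closest class mean."""
    def fit(self, X, y):
        # X: (m, d) pooled feature vectors of training images; y: labels.
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        self.classes_ = sorted(set(y.tolist()))
        self.centroids_ = {c: X[y == c].mean(axis=0) for c in self.classes_}
        return self
    def predict(self, X):
        # Step 8): label of the nearest class mean for each pooled vector.
        return [min(self.classes_,
                    key=lambda c: np.linalg.norm(x - self.centroids_[c]))
                for x in np.asarray(X, dtype=float)]
```

In practice the same `fit`/`predict` pattern applies with a linear SVM trained on the pooled features of the training set.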

Claims (5)

1. An image classification method of non-negative sparse coding based on structural similarity, characterized by comprising the following steps:
1) densely extracting SIFT features from every image in the image data set to be processed;
2) after SIFT features have been extracted from all images of the data set, randomly selecting 50,000 to 500,000 SIFT features for learning the codebook of the image data set to be processed;
3) establishing the non-negative sparse coding model based on structural similarity, specifically as follows:
let the codebook be A = [a_1, a_2, …, a_k], each column of A being a basis vector and k being the number of columns of the codebook; the sparse code of the SIFT feature vector x_i over the codebook A is s_i; define the coding matrix S = [s_1, s_2, …, s_n], i.e. each column of the coding matrix S is the structural similarity sparse code of the corresponding SIFT feature; the objective function of the non-negative sparse coding model is as follows:

$$\min_{A,S}\ \sum_{i=1}^{n}\bigl(1-\mathrm{SSIM}(x_i, A s_i)\bigr) + \lambda\sum_{i=1}^{n}\sum_{j=1}^{k} s_{ji}, \qquad \text{s.t. } s_{ji}\ge 0,\ \|a_j\|=1,$$

where i = 1, 2, …, n, n being the number of randomly selected SIFT features; j = 1, 2, …, k, k being the number of columns of the codebook A; and s_{ji} is the j-th component of the sparse code s_i;
written in matrix form:

$$\min_{A,S}\ \sum_{i=1}^{n}\bigl(1-\mathrm{SSIM}(x_i, A s_i)\bigr) + \lambda\|S\|_{m_1}, \qquad \text{s.t. } S\ge 0,\ \|a_i\|=1,$$

where ||a_i|| = 1 means that the modulus, i.e. the L2 norm, of each column of the codebook is 1; ||S||_{m1} denotes the m1 norm of the matrix S, equal to the sum of the absolute values of all elements of the matrix;
S ≥ 0 means that every element of the coding matrix S is non-negative;
λ is the weight coefficient adjusting the degree of structural distortion against the sparsity of the code; the larger λ is, the sparser the corresponding code, with 0.05 ≤ λ ≤ 0.5;
SSIM(·) is the structural similarity function;
4) according to steps 2) and 3), solving for the codebook of the image data set with the randomly selected SIFT features;
5) after the codebook of the image data set has been solved, fixing the codebook and encoding all SIFT features;
6) aggregating the codes of every image of the data set by the spatial pyramid max pooling method to obtain the feature vector of every image;
7) dividing the image data set into a training set and a test set, and training a classifier with the spatial-pyramid-max-pooled feature vectors of the training images and the corresponding image labels;
8) for any image, inputting its spatial-pyramid-max-pooled feature vector into the trained classifier to obtain the predicted class of the image.
2. The image classification method of non-negative sparse coding based on structural similarity according to claim 1, characterized in that, in step 1), the SIFT features of each image of the data set are extracted densely using pixel blocks of 16 to 32 pixels and a sliding step of 6 to 10 pixels.
3. The image classification method of non-negative sparse coding based on structural similarity according to claim 1, characterized in that step 4) is implemented as follows:
401) according to the 50,000 to 500,000 SIFT features randomly selected in step 2), fixing the value of the weight coefficient λ;
402) initializing the codebook A: the codebook A^(0) is initialized at random, each column of A is normalized to unit modulus by a_i ← a_i/||a_i||, and t is set to 1; A^(t) denotes the t-th iterate of the codebook and A^(0) its initial value;
403) initializing the corresponding coding matrix S^(0) at random, every element of the matrix being initialized to a random number between 0 and 1; S^(t) denotes the t-th iterate of the coding matrix S;
404) updating the coding matrix S, where: the symbol ← denotes assignment, the value of the right-hand side being assigned to the variable on the left; the symbols ⊙ and ⊘ denote the Hadamard product and Hadamard division, i.e. the elementwise product and elementwise division of matrices; the square-root function sqrt(·) applied to a matrix takes the square root of each element of the matrix; B⁺ = max(B, 0) and B⁻ = −min(B, 0), where, as with sqrt(·), max(·) and min(·) compare each element of the matrix with 0 and keep the larger or the smaller element respectively; B = [b_1, b_2, …, b_n]; H is a matrix of the same order as S with all elements equal to 1;
405) updating the codebook A as follows:

A^(t) ← A^(t−1) − σ∇_A E

where σ is the gradient descent step length for optimizing the codebook, σ = h · max_{ij}|A_{ij}| / avg{|∇_A E|}, the range of h being 0.01 to 0.1; the numerator max|A_{ij}| is the largest absolute value among the elements of the codebook A; the denominator avg{|∇_A E|} is the average absolute value of the elements of the gradient matrix ∇_A E of E with respect to the codebook A; E = F + ρ Σ_{j=1}^{k}(a_j^T a_j − 1)², where the superscript T denotes the transpose of a matrix or vector, ρ denotes the penalty coefficient and F is the objective function of the non-negative sparse coding model;
406) if the relative change of the objective function value between two successive iterations is less than 10⁻⁶, or the predetermined number of iterations has been reached, stopping and obtaining the codebook matrix A; otherwise setting t = t + 1 and going to step 403).
4. The image classification method of non-negative sparse coding based on structural similarity according to claim 3, characterized in that, in the gradient ∇_A E, D is a diagonal matrix whose diagonal elements are given by the model, and I is the identity matrix of the same order as D.
5. The image classification method of non-negative sparse coding based on structural similarity according to claim 3, characterized in that, in step 5), the codebook is fixed in the objective function and the structural-similarity-based non-negative sparse code s of a SIFT feature x is computed according to

$$s = \arg\min_{s\ge 0}\ \bigl(1-\mathrm{SSIM}(x, As)\bigr) + \lambda\sum_{j=1}^{k} s_j.$$
CN201510566662.XA 2015-09-08 2015-09-08 The image classification method of non-negative sparse coding based on structural similarity Active CN105184320B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510566662.XA CN105184320B (en) 2015-09-08 2015-09-08 The image classification method of non-negative sparse coding based on structural similarity

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510566662.XA CN105184320B (en) 2015-09-08 2015-09-08 The image classification method of non-negative sparse coding based on structural similarity

Publications (2)

Publication Number Publication Date
CN105184320A CN105184320A (en) 2015-12-23
CN105184320B true CN105184320B (en) 2019-01-15

Family

ID=54906384

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510566662.XA Active CN105184320B (en) 2015-09-08 2015-09-08 The image classification method of non-negative sparse coding based on structural similarity

Country Status (1)

Country Link
CN (1) CN105184320B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106096658B (en) * 2016-06-16 2019-05-24 华北理工大学 Aerial Images classification method based on unsupervised deep space feature coding
CN106408018B (en) * 2016-09-13 2019-05-14 大连理工大学 A kind of image classification method based on amplitude-frequency characteristic sparseness filtering
CN116842030B (en) * 2023-09-01 2023-11-17 广州尚航信息科技股份有限公司 Data synchronous updating method and system of server

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103020647A (en) * 2013-01-08 2013-04-03 西安电子科技大学 Image classification method based on hierarchical SIFT (scale-invariant feature transform) features and sparse coding

Also Published As

Publication number Publication date
CN105184320A (en) 2015-12-23

Similar Documents

Publication Publication Date Title
CN110414377B (en) Remote sensing image scene classification method based on scale attention network
Yokota et al. Smooth PARAFAC decomposition for tensor completion
JP6244059B2 (en) Face image verification method and face image verification system based on reference image
Wang et al. Modality and component aware feature fusion for RGB-D scene classification
Su et al. Order-preserving wasserstein distance for sequence matching
US8233711B2 (en) Locality-constrained linear coding systems and methods for image classification
JP6192010B2 (en) Weight setting apparatus and method
CN107067367A (en) A kind of Image Super-resolution Reconstruction processing method
CN108596138A (en) A kind of face identification method based on migration hierarchical network
CN102982165A (en) Large-scale human face image searching method
Kingma et al. Regularized estimation of image statistics by score matching
CN105046272B (en) A kind of image classification method based on succinct non-supervisory formula convolutional network
CN109376787B (en) Manifold learning network and computer vision image set classification method based on manifold learning network
CN106529586A (en) Image classification method based on supplemented text characteristic
CN105184320B (en) The image classification method of non-negative sparse coding based on structural similarity
CN108460400A (en) A kind of hyperspectral image classification method of combination various features information
CN105868711A (en) Method for identifying human body behaviors based on sparse and low rank
Kekre et al. CBIR feature vector dimension reduction with eigenvectors of covariance matrix using row, column and diagonal mean sequences
CN105260736A (en) Fast image feature representing method based on normalized nonnegative sparse encoder
Zhang et al. Kernel dictionary learning based discriminant analysis
Zheng et al. Extracting non-negative basis images using pixel dispersion penalty
CN104573726B (en) Facial image recognition method based on the quartering and each ingredient reconstructed error optimum combination
Georgy et al. Data dimensionality reduction for face recognition
CN105224943A (en) Based on the image swift nature method for expressing of multi thread normalization non-negative sparse coding device
Rafati et al. Trust-region minimization algorithm for training responses (TRMinATR): The rise of machine learning techniques

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant