CN107894967A - Sparse coding method based on local and global regularization - Google Patents

Sparse coding method based on local and global regularization

Info

Publication number
CN107894967A
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711202173.1A
Other languages
Chinese (zh)
Inventor
舒振球
朱琪
范洪辉
张�杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Jiangsu University of Technology
Original Assignee
Nanjing University of Science and Technology
Jiangsu University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology and Jiangsu University of Technology
Priority to CN201711202173.1A priority Critical patent/CN107894967A/en
Publication of CN107894967A publication Critical patent/CN107894967A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/15Correlation function computation including computation of convolution operations


Abstract

The present invention discloses a sparse coding method based on local and global regularization, belonging to the technical field of sparse coding (SC). The invention uses local regression regularization to discover the latent geometric structure of the data: specifically, the whole data space is divided into many local regions, and each sample is linearly represented by the samples of the local region it belongs to, which is called the local learning assumption. On this basis, a kernel global regression method is simultaneously used to capture the global geometric structure of the data. The intrinsic geometric structure of the data is thus obtained through local and global regularization.

Description

Sparse coding method based on local and global regularization
Technical field
The invention belongs to the technical field of sparse coding (SC), and in particular relates to a sparse coding method based on local and global regularization.
Background technology
In many classification and clustering problems, the processing of high-dimensional data is a major challenge. To address it, data representation techniques are commonly used to find the low-dimensional representations hidden in high-dimensional data, thereby improving computational efficiency and reducing storage space. Data representation techniques have attracted the attention of numerous scholars owing to their excellent performance in computer vision, information retrieval, machine learning, and related fields.
Principal component analysis (PCA) and linear discriminant analysis (LDA) are two popular linear representation methods. The former is an unsupervised learning method that seeks the projection directions preserving the most variance; the latter is a fully supervised learning method that seeks the projection directions carrying the best discriminative information. Both, however, share the shortcoming that they cannot effectively discover the latent geometric manifold structure of the data.
In recent years, sparse coding (SC) has achieved great success in applications such as image processing, object classification, and visual analysis. Its principle is to represent a test sample by a linear combination of a small number of atoms in a dictionary, so that the representation coefficients are sparse. At present, many traditional methods achieve sparsity by adding sparsity constraints, for example sparse principal component analysis, sparse non-negative matrix factorization, and sparse low-rank representation, and these have found wide application.
Assume a given data set X = [x_1, x_2, ..., x_n] ∈ R^{m×n}, let D ∈ R^{m×k} be an overcomplete dictionary, and let A ∈ R^{k×n} be the coding coefficient matrix. To make the coefficients sparse, the l_0 norm is used to constrain the coding coefficients, so the objective function of sparse coding can be expressed as the minimization problem:

$$\min_{D,A}\ \|X - DA\|_F^2 + \alpha\sum_{i=1}^{n}\|a_i\|_0 \qquad \text{s.t.}\ \|d_i\|^2 \le c,\ i = 1, \dots, k \tag{1}$$
where ||·||_F is the Frobenius norm, ||·||_0 is the l_0 norm, c is a given bound, and α is a non-negative parameter. The l_0-norm minimization problem is NP-hard and extremely difficult to solve. When the representation coefficients in formula (1) are sparse enough, however, it can be converted into an l_1-norm minimization problem, so the minimization problem in formula (1) can be written as the following optimization problem:

$$\min_{D,A}\ \|X - DA\|_F^2 + \alpha\sum_{i=1}^{n}\|a_i\|_1 \qquad \text{s.t.}\ \|d_i\|^2 \le c,\ i = 1, \dots, k \tag{2}$$
where ||·||_1 is the l_1 norm. The l_1-norm minimization problem in formula (2) is a convex optimization problem and can be solved with existing software packages (such as l1-magic, PDCO-LSQR, and PDCO-CHOL).
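As a concrete illustration, the l_1 coding step of formula (2) with the dictionary held fixed can be sketched with scikit-learn's sparse_encode as a stand-in for the packages named above (an illustrative assumption only, not the solver used in the original work; note that scikit-learn stores samples and atoms as rows, the transpose of the X = DA convention used here):

```python
# Minimal sketch of the l1 sparse coding step in formula (2), dictionary fixed.
import numpy as np
from sklearn.decomposition import sparse_encode

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))             # n = 100 samples, m = 20 features (rows)
D = rng.standard_normal((50, 20))              # k = 50 dictionary atoms (rows)
D /= np.linalg.norm(D, axis=1, keepdims=True)  # enforce ||d_i||^2 <= c with c = 1

# Solves min_A ||X - A D||_F^2 + alpha * sum_i ||a_i||_1 for the code matrix A.
A = sparse_encode(X, D, algorithm='lasso_lars', alpha=0.1)
print(A.shape, float(np.mean(A != 0)))         # code matrix shape and its density
```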
Past research shows that manifold learning plays an extremely important role in data representation. A natural assumption is that two data samples that are adjacent in the high-dimensional space remain adjacent in the low-dimensional feature space; this is the manifold assumption. Recently, in order to effectively exploit the geometric manifold structure of the data, Cai et al. proposed a graph-regularized sparse coding (GSC) method. By applying the graph regularization technique, it makes the low-dimensional representation preserve the geometric manifold structure of the high-dimensional data.
Assume a data set X = [x_1, x_2, ..., x_n] ∈ R^{m×n}. We construct a k-nearest-neighbor graph G = {X, W} whose vertex set is X and whose neighborhood (adjacency) matrix is W: if x_i is among the nearest neighbors of x_j, or x_j is among the nearest neighbors of x_i, then W_ij = 1; otherwise W_ij = 0. The graph regularization term can then be expressed as:

$$\frac{1}{2}\sum_{i,j=1}^{n}\|a_i - a_j\|^2 W_{ij} = \mathrm{Tr}(ALA^T) \tag{3}$$

where Tr(·) denotes the trace of a matrix, A = [a_1, ..., a_n] is the sparse coefficient matrix, L = D − W is the graph Laplacian matrix, and D is the diagonal degree matrix with D_ii = Σ_j W_ij.
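A minimal sketch of this construction (assuming symmetrized 0/1 nearest-neighbor weights, with samples stored as the columns of X):

```python
# Build the k-NN graph Laplacian L = D - W used by the GSC regularizer (3).
import numpy as np
from sklearn.neighbors import kneighbors_graph

def graph_laplacian(X, k=5):
    """X: (m, n) data matrix, samples as columns. Returns the n x n Laplacian."""
    W = kneighbors_graph(X.T, n_neighbors=k, mode='connectivity').toarray()
    W = np.maximum(W, W.T)          # W_ij = 1 if i is a neighbor of j or vice versa
    D = np.diag(W.sum(axis=1))      # degree matrix, D_ii = sum_j W_ij
    return D - W

X = np.random.default_rng(0).standard_normal((20, 100))
A = np.random.default_rng(1).standard_normal((50, 100))   # toy sparse codes
L = graph_laplacian(X)
reg = np.trace(A @ L @ A.T)         # the graph regularization term Tr(A L A^T)
```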
Adding the graph Laplacian regularization term to the sparse coding model yields the GSC objective function:

$$\min_{D,A}\ \|X - DA\|_F^2 + \alpha\,\mathrm{Tr}(ALA^T) + \beta\sum_{i=1}^{n}\|a_i\|_1 \qquad \text{s.t.}\ \|d_i\|^2 \le c,\ i = 1, \dots, k \tag{4}$$
where α and β are regularization coefficients. Formula (4) can be solved with the feature-sign search algorithm.
However, GSC exploits only the local geometric manifold structure of the data and ignores its global geometric relations. A new method is therefore needed whose low-dimensional representation preserves both the local and the global geometric structure of the data.
The content of the invention
To overcome the defect of the prior art that the low-dimensional representation cannot simultaneously preserve the local and the global geometric structure of the data, the present invention provides a sparse coding method based on local and global regularization. Compared with traditional sparse coding methods, this method preserves the local structure information of the data by constructing a local regression regularizer and captures the global geometric structure of the data by kernel regression.
To achieve the above object, the present invention adopts the following technical solution:
A sparse coding method based on local and global regularization comprises the following steps:
Step 1: input a set of data X = [x_1, x_2, ..., x_n] ∈ R^{m×n}, where x_i denotes the i-th sample; represent each sample using the local learning idea and derive the local regularization term:

$$J^{local} = \sum_{i=1}^{n}\sum_{x_j \in N(x_i)}\left(\|W_i^T x_j + b_i - a_j\|^2 + \gamma\|W_i\|^2\right) \tag{7}$$
Step 2: represent each sample using the kernel global regression method and derive the global regularization term:

$$J^{global} = \sum_{i=1}^{n}\|\phi(W)^T\phi(x_i) + b - a_i\|^2 + \gamma\|\phi(W)\|_F^2 \tag{8}$$

where φ(·) is the kernel mapping function and b is the bias term;
Step 3: combine formula (7) and formula (8) to obtain the local and global regularization term:

$$J = J^{local} + \mu J^{global} = \sum_{i=1}^{n}\sum_{x_j \in N_i}\left(\|W_i^T x_j + b_i - a_j\|^2 + \gamma\|W_i\|^2\right) + \mu\left[\sum_{i=1}^{n}\|\phi(W)^T\phi(x_i) + b - a_i\|^2 + \gamma\|\phi(W)\|_F^2\right] \tag{9}$$

where μ is a balance parameter that controls the proportion of the local and the global regularization terms;
Step 4: let X_i = [x_i, x_{i1}, x_{i2}, ..., x_{i(k-1)}] ∈ R^{m×k} denote the data matrix of the neighborhood N_i of sample x_i, and let A_i = [a_i, a_{i1}, ..., a_{i(k-1)}]^T ∈ R^{m×k} be the low-dimensional representation of N_i; from formula (7) derive:

$$J^{local} = \mathrm{Tr}(A^T L^{local} A) \tag{18}$$

where L^{local} = Σ_{i=1}^{n} Q_i F_i Q_i^T; from formula (8) derive:

$$J^{global} = \mathrm{Tr}(A^T L^{global} A) \tag{22}$$

where

$$L^{global} = \gamma H(HKH + \gamma I)^{-1}H \tag{25}$$
Step 5: combine the local and the global regularization term to obtain the final local and global regularization term:

$$J^{local-global} = J^{local} + \mu J^{global} = \mathrm{Tr}\!\left(A^T\left(\sum_{i=1}^{n} Q_i F_i Q_i^T + \mu\gamma H(HKH + \gamma I)^{-1}H\right)A\right) \tag{26}$$

where L^{local-global} = L^{local} + μL^{global};
Step 6: using the regularization technique, embed the local and global regularization term into the sparse coding model to obtain the objective function of the method:

$$\min_{S,A}\ \|X - SA\|_F^2 + \alpha\,\mathrm{Tr}(AL^{local-global}A^T) + \beta\sum_{i=1}^{n}\|a_i\|_1 \qquad \text{s.t.}\ \|s_i\|^2 \le c,\ i = 1, \dots, k \tag{27}$$
Formula (27) is solved by an alternating iterative algorithm, and the dictionary S and the coding coefficients A are output.
Further, in step 1, the local learning idea expresses the linear regression function of each sample as:

$$f_i(x_j) = W_i^T x_j + b_i \tag{5}$$

where x_j ∈ N(x_i), W_i is the weight vector, and b_i is the bias of f_i; from formula (5), the loss function of each sample is expressed as:

$$J_i^{local} = \sum_{x_j \in N(x_i)}\|W_i^T x_j + b_i - a_j\|^2 + \gamma\|W_i\|^2 \tag{6}$$

where a_j is the low-dimensional representation of x_j, and the regularization term γ||W_i||^2 measures the smoothness of W_i.

Further, in step 4, let X_i = [x_i, x_{i1}, x_{i2}, ..., x_{i(k-1)}] ∈ R^{m×k} denote the data matrix of N_i and A_i = [a_i, a_{i1}, ..., a_{i(k-1)}]^T the low-dimensional representation of N_i; from formula (9) derive:

$$J = \sum_{i=1}^{n}\left(\|X_i^T W_i + 1_k b_i^T - A_i\|^2 + \gamma\|W_i\|^2\right) + \mu\left(\|\phi(X)^T\phi(W) + 1_n b^T - A\|_F^2 + \gamma\|\phi(W)\|_F^2\right) \tag{10}$$

where 1_k ∈ R^k and 1_n ∈ R^n are all-ones vectors; using the matrix property ||B||_F^2 = tr(B^T B), formula (10) gives:

$$J^{local} = \sum_{i=1}^{n}\left\{\mathrm{tr}\!\left[(X_i^T W_i + 1_k b_i^T - A_i)^T(X_i^T W_i + 1_k b_i^T - A_i)\right] + \gamma\,\mathrm{tr}(W_i^T W_i)\right\} \tag{11}$$

Taking the partial derivatives of formula (11) with respect to W_i and b_i and setting them to zero gives:

$$b_i = \frac{1}{k}\left(A_i^T 1_k - W_i^T X_i 1_k\right) \tag{14}$$

$$W_i = \left(X_i H_k X_i^T + \gamma I\right)^{-1} X_i H_k A_i \tag{15}$$

where H_k = I − (1/k)1_k 1_k^T is the local centering matrix; substituting (14) and (15) into formula (6) gives:

$$\sum_{i=1}^{n}\mathrm{tr}(A_i^T F_i A_i) \tag{16}$$

where F_i = H_k − H_k X_i^T (X_i H_k X_i^T + γI)^{-1} X_i H_k. A selection matrix Q_i is defined with (Q_i)_{pj} = 1 if x_p is the j-th element of N_i and (Q_i)_{pj} = 0 otherwise, so that A_i = Q_i^T A; formula (16) then becomes:

$$\sum_{i=1}^{n}\mathrm{tr}(A^T Q_i F_i Q_i^T A) = \mathrm{tr}\!\left(A^T\left(\sum_{i=1}^{n} Q_i F_i Q_i^T\right)A\right) \tag{17}$$
Further, in step 4, with X_i and A_i as above, the second term J^{global} in formula (9) is derived as:

$$J^{global} = \mathrm{tr}\!\left\{\left[\phi(X)^T\phi(W) + 1_n b^T - A\right]^T\left[\phi(X)^T\phi(W) + 1_n b^T - A\right]\right\} + \gamma\,\mathrm{tr}\!\left[\phi(W)^T\phi(W)\right] \tag{19}$$

Taking the partial derivatives of formula (19) with respect to φ(W) and b and setting them to zero gives:

$$\phi(W) = \left(\phi(X)H\phi(X)^T + \gamma I\right)^{-1}\phi(X)HA = \phi(X)H\left(H\phi(X)^T\phi(X)H + \gamma I\right)^{-1}A \tag{20}$$

$$b = \frac{1}{n}A^T 1_n - \frac{1}{n}\phi(W)^T\phi(X)1_n \tag{21}$$

Defining H = I − (1/n)1_n 1_n^T as the global centering matrix, formula (22) is obtained, with:

$$L^{global} = H - H\phi(X)^T\left[\phi(X)H\phi(X)^T + \gamma I\right]^{-1}\phi(X)H = \gamma H\left(H\phi(X)^T\phi(X)H + \gamma I\right)^{-1}H \tag{23}$$

where φ(X)^Tφ(X) is computed through the kernel function K, which yields formula (25).
Further, the kernel function K is expressed as:

$$K_{x_i,x_j} = \langle\phi(x_i),\ \phi(x_j)\rangle = \phi(x_i)^T\phi(x_j) \tag{24}$$

where the kernel function K must satisfy the Mercer condition, and K = φ(X)^Tφ(X).
Further, formula (27) is solved as follows:

Step 7: fix the coding coefficients A, converting the optimization problem in formula (27) into a least-squares problem with quadratic constraints:

$$\min_{S}\ \|X - SA\|_F^2 \qquad \text{s.t.}\ \|s_i\|^2 \le c,\ i = 1, \dots, k \tag{28}$$

Formula (28) is solved by Lagrange duality;

Step 8: fix the dictionary S, converting the optimization problem in formula (27) into the following problem:

$$\min_{A}\ \|X - SA\|_F^2 + \alpha\,\mathrm{Tr}(AL^{local-global}A^T) + \beta\sum_{i=1}^{n}\|a_i\|_1 \tag{30}$$

Formula (30) is solved by coordinate-wise optimization, computing the coding coefficients of each sample one by one.
Further, in step 7, λ = [λ_1, λ_2, ..., λ_k] is introduced as the Lagrange multiplier vector, where λ_i is the Lagrange multiplier of the i-th inequality ||s_i||^2 ≤ c; from formula (28) one derives:

$$S^* = XA^T\left(AA^T + \mathrm{diag}(\lambda^*)\right)^{-1} \tag{29}$$

where λ* is the optimal solution for λ.
Further, in step 8, when the i-th coefficient a_i in the coding coefficients A is optimized, the other coefficients are fixed, and formula (30) is expressed as:

$$\min_{a_i}\ \|x_i - Sa_i\|^2 + \alpha\left[L_{ii}^{local-global}\,a_i^T a_i + 2a_i^T\sum_{j \ne i}L_{ij}^{local-global}\,a_j\right] + \beta\|a_i\|_1 \tag{31}$$

Formula (31) can be solved with the feature-sign search algorithm.
Beneficial effects:
1. Compared with traditional data representation techniques such as PCA, GNMF, and CF, the LGSC method of the invention captures the geometric manifold structure of the data better through local and global regularization, making the data representation more accurate.
2. The LGSC method of the invention not only preserves the global geometric information of the data through kernel regression, but also preserves the manifold and discriminative structure information of the data through local regression.
3. The convergence speed of the LGSC method of the invention is comparable to that of traditional sparse coding methods; in terms of computational efficiency, the LGSC method is nearly identical in computational complexity to the traditional GSC method, which fully demonstrates the efficiency of the LGSC method of the invention.
Brief description of the drawings
Fig. 1 is a flow chart of the method according to one embodiment of the invention.
Embodiment
The present invention is further described below with reference to the accompanying drawings and embodiments.
The present embodiment proposes a sparse coding method based on local and global regularization; the objective function of the method and its solution are as follows:
(1) local and global regularization
Traditional sparse coding methods fail to capture the geometry intrinsic to the data because they exploit only the local structure or only the global structure of the data, each in isolation. A reasonable approach is to make full use of both the local and the global structure information of the data during representation.
Numerous studies show that, in data representation, making full use of the local geometric structure of the data improves the performance of the algorithm. The present invention therefore uses local regression regularization to discover the latent geometric structure of the data. Specifically, the whole data space is divided into many local regions, and each sample is linearly represented by the samples of the local region it belongs to; this is called the local learning assumption. The drawback of this assumption, however, is that a local region may lack enough data points to construct a reliable local model. On this basis, the present invention therefore also uses a kernel global regression method to simultaneously capture the global geometric structure of the data.
The present invention thus obtains the intrinsic geometric structure of the data through local and global regularization; the specific procedure is as follows:
Given a set of data X = [x_1, x_2, ..., x_n] ∈ R^{m×n}, where x_i denotes the i-th sample, the local learning idea is to linearly represent each sample by its neighborhood. The linear regression function of each sample is therefore expressed as:

$$f_i(x_j) = W_i^T x_j + b_i \tag{5}$$

where x_j ∈ N(x_i), W_i is the weight vector, and b_i is the bias of f_i. According to formula (5), the loss function of each sample can be expressed as:

$$J_i^{local} = \sum_{x_j \in N(x_i)}\|W_i^T x_j + b_i - a_j\|^2 + \gamma\|W_i\|^2 \tag{6}$$

where a_j is the low-dimensional representation of x_j, and the regularization term γ||W_i||^2 measures the smoothness of W_i. The prediction error over the sample set can therefore be expressed as:

$$J^{local} = \sum_{i=1}^{n}\sum_{x_j \in N(x_i)}\left(\|W_i^T x_j + b_i - a_j\|^2 + \gamma\|W_i\|^2\right) \tag{7}$$

Formula (7) is called the local regularization term.
To obtain the global geometric structure of the data, the present invention represents each sample using kernel regression. The global loss function J^{global} of the sample set can be expressed as:

$$J^{global} = \sum_{i=1}^{n}\|\phi(W)^T\phi(x_i) + b - a_i\|^2 + \gamma\|\phi(W)\|_F^2 \tag{8}$$

where φ(·) is the kernel mapping function and b is the bias term. Formula (8) is called the global regularization term. Combining formula (7) with formula (8), the local and global regularization term can be expressed as:

$$J = J^{local} + \mu J^{global} \tag{9}$$

where μ is a balance parameter that mainly controls the proportion of the local and the global regularization terms. Let X_i = [x_i, x_{i1}, x_{i2}, ..., x_{i(k-1)}] ∈ R^{m×k} denote the data matrix of the neighborhood N_i and A_i = [a_i, a_{i1}, ..., a_{i(k-1)}]^T the low-dimensional representation of N_i; formula (9) can then be rewritten as:

$$J = \sum_{i=1}^{n}\left(\|X_i^T W_i + 1_k b_i^T - A_i\|^2 + \gamma\|W_i\|^2\right) + \mu\left(\|\phi(X)^T\phi(W) + 1_n b^T - A\|_F^2 + \gamma\|\phi(W)\|_F^2\right) \tag{10}$$

where 1_k ∈ R^k and 1_n ∈ R^n are all-ones vectors. Using the matrix property ||B||_F^2 = tr(B^T B), formula (10) can be expressed as:

$$J^{local} = \sum_{i=1}^{n}\left\{\mathrm{tr}\!\left[(X_i^T W_i + 1_k b_i^T - A_i)^T(X_i^T W_i + 1_k b_i^T - A_i)\right] + \gamma\,\mathrm{tr}(W_i^T W_i)\right\} \tag{11}$$
Taking the partial derivatives of formula (11) with respect to W_i and b_i and setting them to zero gives:

$$b_i = \frac{1}{k}\left(A_i^T 1_k - W_i^T X_i 1_k\right) \tag{14}$$

$$W_i = \left(X_i H_k X_i^T + \gamma I\right)^{-1} X_i H_k A_i \tag{15}$$

where H_k = I − (1/k)1_k 1_k^T is the local centering matrix. Substituting (14) and (15) into formula (6) gives:

$$\sum_{i=1}^{n}\mathrm{tr}(A_i^T F_i A_i) \tag{16}$$

where F_i = H_k − H_k X_i^T (X_i H_k X_i^T + γI)^{-1} X_i H_k. A selection matrix Q_i is defined: (Q_i)_{pj} = 1 if x_p is the j-th element of N_i, and (Q_i)_{pj} = 0 otherwise, so that A_i = Q_i^T A. Formula (16) can therefore be rewritten as:

$$\sum_{i=1}^{n}\mathrm{tr}(A^T Q_i F_i Q_i^T A) = \mathrm{tr}\!\left(A^T\left(\sum_{i=1}^{n} Q_i F_i Q_i^T\right)A\right) \tag{17}$$
Meanwhile, the local regularization term in formula (7) can be rewritten as:

$$J^{local} = \mathrm{Tr}(A^T L^{local} A) \tag{18}$$

where L^{local} = Σ_{i=1}^{n} Q_i F_i Q_i^T.
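A minimal numerical sketch of this local construction (under the F_i expression stated above; samples are the columns of X, and each sample's neighborhood N_i is taken to include the sample itself):

```python
# Build L^local = sum_i Q_i F_i Q_i^T of formula (18) by accumulating each
# neighborhood's F_i block in place (the Q_i selection matrices are implicit
# in the fancy indexing).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def local_laplacian(X, k=5, gamma=0.1):
    """X: (m, n) data matrix, samples as columns. Returns the n x n L^local."""
    m, n = X.shape
    Hk = np.eye(k) - np.ones((k, k)) / k            # local centering matrix H_k
    _, idx = NearestNeighbors(n_neighbors=k).fit(X.T).kneighbors(X.T)
    L = np.zeros((n, n))
    for i in range(n):
        Xi = X[:, idx[i]]                           # m x k neighborhood matrix X_i
        Fi = Hk - Hk @ Xi.T @ np.linalg.solve(
            Xi @ Hk @ Xi.T + gamma * np.eye(m), Xi @ Hk)
        L[np.ix_(idx[i], idx[i])] += Fi             # adds Q_i F_i Q_i^T
    return L
```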
Similarly, the second term J^{global} in formula (9) can be rewritten as:

$$J^{global} = \mathrm{tr}\!\left\{\left[\phi(X)^T\phi(W) + 1_n b^T - A\right]^T\left[\phi(X)^T\phi(W) + 1_n b^T - A\right]\right\} + \gamma\,\mathrm{tr}\!\left[\phi(W)^T\phi(W)\right] \tag{19}$$

Taking the partial derivatives of formula (19) with respect to φ(W) and b and setting them to zero gives:

$$\phi(W) = \left(\phi(X)H\phi(X)^T + \gamma I\right)^{-1}\phi(X)HA = \phi(X)H\left(H\phi(X)^T\phi(X)H + \gamma I\right)^{-1}A \tag{20}$$

$$b = \frac{1}{n}A^T 1_n - \frac{1}{n}\phi(W)^T\phi(X)1_n \tag{21}$$

Defining H = I − (1/n)1_n 1_n^T as the global centering matrix, the global regularization term can be expressed as:

$$J^{global} = \mathrm{Tr}(A^T L^{global} A) \tag{22}$$

and therefore:

$$L^{global} = H - H\phi(X)^T\left[\phi(X)H\phi(X)^T + \gamma I\right]^{-1}\phi(X)H = \gamma H\left(H\phi(X)^T\phi(X)H + \gamma I\right)^{-1}H \tag{23}$$

where φ(X)^Tφ(X) can be computed through a kernel function. Suppose the dot product of x_i and x_j is given by the kernel function:

$$K_{x_i,x_j} = \langle\phi(x_i),\ \phi(x_j)\rangle = \phi(x_i)^T\phi(x_j) \tag{24}$$

The kernel function K must satisfy the Mercer condition, and K = φ(X)^Tφ(X). Therefore, L^{global} can be expressed as:

$$L^{global} = \gamma H(HKH + \gamma I)^{-1}H \tag{25}$$
Combining the local and the global regularization term gives the local and global regularization term proposed in the method of the invention:

$$J^{local-global} = J^{local} + \mu J^{global} = \mathrm{Tr}\!\left(A^T\left(\sum_{i=1}^{n} Q_i F_i Q_i^T + \mu\gamma H(HKH + \gamma I)^{-1}H\right)A\right) \tag{26}$$

Formula (26) is called local and global regularization, with L^{local-global} = L^{local} + μL^{global}.
(2) Objective function of the LGSC method

Through the regularization technique, the present invention embeds the local and global regularization term of the data into the sparse coding model. The objective function of the LGSC method of the invention can therefore be expressed as:

$$\min_{S,A}\ \|X - SA\|_F^2 + \alpha\,\mathrm{Tr}(AL^{local-global}A^T) + \beta\sum_{i=1}^{n}\|a_i\|_1 \qquad \text{s.t.}\ \|s_i\|^2 \le c,\ i = 1, \dots, k \tag{27}$$

The objective function of the LGSC method is non-convex in the product of S and A, so optimization problem (27) is solved by an alternating iterative algorithm; the specific solution is as follows:
(a) Iterative update of the dictionary S

Fixing the coding coefficients A, the optimization problem in formula (27) can be converted into a least-squares problem with quadratic constraints:

$$\min_{S}\ \|X - SA\|_F^2 \qquad \text{s.t.}\ \|s_i\|^2 \le c,\ i = 1, \dots, k \tag{28}$$

Optimization problem (28) can be solved by Lagrange duality. Let λ = [λ_1, λ_2, ..., λ_k] be the Lagrange multiplier vector, where λ_i is the Lagrange multiplier of the i-th inequality ||s_i||^2 ≤ c; from formula (28) one can derive:

$$S^* = XA^T\left(AA^T + \mathrm{diag}(\lambda^*)\right)^{-1} \tag{29}$$

where λ* is the optimal solution for λ.
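A minimal sketch of the closed-form update (29); obtaining the dual-optimal λ* (for example by Newton's method on the Lagrange dual) is omitted here, and a caller-supplied multiplier vector is assumed:

```python
# Dictionary update of formula (29) given the (assumed known) dual variables.
import numpy as np

def update_dictionary(X, A, lam):
    """X: (m, n) data, A: (k, n) codes, lam: (k,) dual variables. Returns S: (m, k)."""
    return X @ A.T @ np.linalg.inv(A @ A.T + np.diag(lam))
```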
(b) Iterative update of the coding coefficients A

Fixing the dictionary S, the optimization problem in formula (27) is converted into the following problem:

$$\min_{A}\ \|X - SA\|_F^2 + \alpha\,\mathrm{Tr}(AL^{local-global}A^T) + \beta\sum_{i=1}^{n}\|a_i\|_1 \tag{30}$$

Formula (30) can be solved by coordinate-wise optimization, computing the coding coefficients of each sample one by one; that is, when the i-th coefficient vector a_i in the coefficient matrix A is optimized, the other coefficients are fixed. Optimization problem (30) can therefore be rewritten as:

$$\min_{a_i}\ \|x_i - Sa_i\|^2 + \alpha\left[L_{ii}^{local-global}\,a_i^T a_i + 2a_i^T\sum_{j \ne i}L_{ij}^{local-global}\,a_j\right] + \beta\|a_i\|_1 \tag{31}$$

Similarly, optimization problem (31) can be solved with the feature-sign search algorithm.
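Subproblem (31) is an l_1-regularized quadratic program in a_i. As a simpler, hedged stand-in for the feature-sign search step, a minimal ISTA (proximal gradient) sketch is:

```python
# ISTA sketch for subproblem (31): min_a ||x_i - S a||^2
#   + alpha * (L_ii a^T a + 2 a^T c_i) + beta * ||a||_1,
# where c_i = sum_{j != i} L_ij a_j. Feature-sign search would solve it exactly.
import numpy as np

def update_code_i(x_i, S, A, L, i, alpha, beta, n_iter=100):
    a = A[:, i].copy()
    c_i = A @ L[:, i] - L[i, i] * a               # sum over j != i of L_ij a_j
    G = S.T @ S + alpha * L[i, i] * np.eye(S.shape[1])
    h = S.T @ x_i - alpha * c_i                   # smooth part is a^T G a - 2 a^T h
    t = 0.5 / np.linalg.norm(G, 2)                # step size below 1/Lipschitz
    for _ in range(n_iter):
        z = a - t * 2.0 * (G @ a - h)             # gradient step on the smooth part
        a = np.sign(z) * np.maximum(np.abs(z) - t * beta, 0.0)   # soft-thresholding
    return a
```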
As shown in Fig. 1, the specific procedure of the invention is: (1) input a set of image data X = [x_1, x_2, ..., x_n], the number of iterations T, and the parameters α, β, μ, γ; (2) compute the local Laplacian matrix L^{local} of formula (18) and the global Laplacian matrix L^{global} of formula (25); (3) compute the local and global Laplacian matrix L^{local-global} by formula (26); (4) loop i from 1 to T; (5) update the dictionary S iteratively according to formula (29); (6) solve the coding coefficients A with the feature-sign search method; (7) output the dictionary S and the coding coefficients A.
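Wiring the pieces together, an end-to-end sketch of the Fig. 1 flow might look as follows (local_laplacian, global_laplacian, update_dictionary, and update_code_i are the hypothetical helpers sketched above; the fixed dual variables and the final column rescaling of S are simplifying assumptions, not the patent's exact dual optimization):

```python
# End-to-end LGSC sketch following the Fig. 1 flow.
import numpy as np

def lgsc(X, k_atoms=50, T=20, alpha=1.0, beta=0.1, mu=0.5, gamma=0.1):
    m, n = X.shape
    # Steps (2)-(3): local, global, and combined Laplacians, formula (26).
    L = local_laplacian(X, gamma=gamma) + mu * global_laplacian(X, gamma=gamma)
    rng = np.random.default_rng(0)
    S = rng.standard_normal((m, k_atoms))
    S /= np.linalg.norm(S, axis=0, keepdims=True)        # ||s_i||^2 <= c with c = 1
    A = 0.01 * rng.standard_normal((k_atoms, n))
    lam = np.full(k_atoms, 1e-3)                         # assumed dual variables
    for _ in range(T):                                   # step (4): T iterations
        S = update_dictionary(X, A, lam)                 # step (5): formula (29)
        S /= np.maximum(np.linalg.norm(S, axis=0, keepdims=True), 1.0)
        for i in range(n):                               # step (6): formula (31)
            A[:, i] = update_code_i(X[:, i], S, A, L, i, alpha, beta)
    return S, A                                          # step (7)
```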
The above description is not intended to limit the scope of the present invention. It should be understood by those of ordinary skill in the art that various modifications or variations that can be made without creative effort on the basis of the technical solution of the present invention still fall within its protection scope.

Claims (8)

1. one kind is based on local and global regularization sparse coding method, it is characterised in that methods described comprises the following steps:
Step 1: input a set of data X = [x_1, x_2, ..., x_n] ∈ R^{m×n}, where x_i denotes the i-th sample; represent each sample using the local learning idea and derive the local regularization term:
$$J^{local} = \sum_{i=1}^{n}\sum_{x_j \in N(x_i)}\|W_i^T x_j + b_i - a_j\|^2 + \gamma\|W_i\|^2 \tag{7}$$
Step 2: represent each sample using the kernel global regression method and derive the global regularization term:
$$J^{global} = \sum_{i=1}^{n}\|\phi(W)^T\phi(x_i) + b - a_i\|^2 + \gamma\|\phi(W)\|_F^2 \tag{8}$$
where φ(·) is the kernel mapping function and b is the bias term;
Step 3: combine formula (7) and formula (8) to obtain the local and global regularization term:
$$J = J^{local} + \mu J^{global} = \sum_{i=1}^{n}\sum_{x_j \in N_i}\left(\|W_i^T x_j + b_i - a_j\|^2 + \gamma\|W_i\|^2\right) + \mu\left[\sum_{i=1}^{n}\|\phi(W)^T\phi(x_i) + b - a_i\|^2 + \gamma\|\phi(W)\|_F^2\right] \tag{9}$$
where μ is a balance parameter that controls the proportion of the local and the global regularization terms;
Step 4: let X_i = [x_i, x_{i1}, x_{i2}, ..., x_{i(k-1)}] ∈ R^{m×k} denote the data matrix of the neighborhood N_i of sample x_i, and let A_i = [a_i, a_{i1}, ..., a_{i(k-1)}]^T ∈ R^{m×k} be the low-dimensional representation of N_i; from formula (7) derive:

$$J^{local} = \mathrm{Tr}(A^T L^{local} A) \tag{18}$$

where L^{local} = Σ_{i=1}^{n} Q_i F_i Q_i^T; from formula (8) derive:

$$J^{global} = \mathrm{Tr}(A^T L^{global} A) \tag{22}$$

where

$$L^{global} = \gamma H(HKH + \gamma I)^{-1}H \tag{25}$$
Step 5: combine the local and the global regularization term to obtain the final local and global regularization term:
$$J^{local-global} = J^{local} + \mu J^{global} = \mathrm{Tr}\!\left(A^T\left(\sum_{i=1}^{n} Q_i F_i Q_i^T + \mu\gamma H(HKH + \gamma I)^{-1}H\right)A\right) \tag{26}$$

where L^{local-global} = L^{local} + μL^{global};
Step 6: using the regularization technique, embed the local and global regularization term into the sparse coding model to obtain the objective function of the method:
$$\min_{S,A}\ \|X - SA\|_F^2 + \alpha\,\mathrm{Tr}(AL^{local-global}A^T) + \beta\sum_{i=1}^{n}\|a_i\|_1 \qquad \text{s.t.}\ \|s_i\|^2 \le c,\ i = 1, \dots, k \tag{27}$$
Formula (27) is solved by an alternating iterative algorithm, and the dictionary S and the coding coefficients A are output.
2. The sparse coding method based on local and global regularization according to claim 1, characterized in that in step 1, the local learning idea expresses the linear regression function of each sample as:
$$f_i(x_j) = W_i^T x_j + b_i \tag{5}$$
where x_j ∈ N(x_i), W_i is the weight vector, and b_i is the bias of f_i; from formula (5), the loss function of each sample is expressed as:
$$J_i^{local} = \sum_{x_j \in N(x_i)}\|W_i^T x_j + b_i - a_j\|^2 + \gamma\|W_i\|^2 \tag{6}$$
where a_j is the low-dimensional representation of x_j, and the regularization term γ||W_i||^2 measures the smoothness of W_i.
3. The sparse coding method based on local and global regularization according to claim 1, characterized in that in step 4, X_i = [x_i, x_{i1}, x_{i2}, ..., x_{i(k-1)}] ∈ R^{m×k} denotes the data matrix of the neighborhood N_i of sample x_i, A_i = [a_i, a_{i1}, ..., a_{i(k-1)}]^T ∈ R^{m×k} is the low-dimensional representation of N_i, and the following is derived from formula (9):
$$J = \sum_{i=1}^{n}\left(\|X_i^T W_i + 1_k b_i^T - A_i\|^2 + \gamma\|W_i\|^2\right) + \mu\left(\|\phi(X)^T\phi(W) + 1_n b^T - A\|_F^2 + \gamma\|\phi(W)\|_F^2\right) \tag{10}$$
where 1_k ∈ R^k and 1_n ∈ R^n are all-ones vectors; using the matrix property ||B||_F^2 = tr(B^T B), formula (10) gives:
$$J^{local} = \sum_{i=1}^{n}\left\{\mathrm{tr}\!\left[(X_i^T W_i + 1_k b_i^T - A_i)^T(X_i^T W_i + 1_k b_i^T - A_i)\right] + \gamma\,\mathrm{tr}(W_i^T W_i)\right\} \tag{11}$$
Taking the partial derivatives of formula (11) with respect to W_i and b_i and setting them to zero gives:
$$b_i = \frac{1}{k}\left(A_i^T 1_k - W_i^T X_i 1_k\right) \tag{14}$$

$$W_i = \left(X_i H_k X_i^T + \gamma I\right)^{-1} X_i H_k A_i \tag{15}$$
where H_k = I − (1/k)1_k 1_k^T is the local centering matrix; substituting (14) and (15) into formula (6) gives:
$$\sum_{i=1}^{n}\mathrm{tr}(A_i^T F_i A_i) \tag{16}$$
where F_i = H_k − H_k X_i^T (X_i H_k X_i^T + γI)^{-1} X_i H_k; a selection matrix Q_i is defined with (Q_i)_{pj} = 1 if x_p is the j-th element of N_i and (Q_i)_{pj} = 0 otherwise, so that A_i = Q_i^T A; formula (16) then gives:
$$\sum_{i=1}^{n}\mathrm{tr}(A^T Q_i F_i Q_i^T A) = \mathrm{tr}\!\left(A^T\left(\sum_{i=1}^{n} Q_i F_i Q_i^T\right)A\right) \tag{17}$$
4. The sparse coding method based on local and global regularization according to claim 1, characterized in that in step 4, with X_i ∈ R^{m×k} the data matrix of the neighborhood N_i of sample x_i and A_i = [a_i, a_{i1}, ..., a_{i(k-1)}]^T ∈ R^{m×k} the low-dimensional representation of N_i, the second term J^{global} of formula (9) is derived as:
$$J^{global} = \mathrm{tr}\!\left\{\left[\phi(X)^T\phi(W) + 1_n b^T - A\right]^T\left[\phi(X)^T\phi(W) + 1_n b^T - A\right]\right\} + \gamma\,\mathrm{tr}\!\left[\phi(W)^T\phi(W)\right] \tag{19}$$
Taking the partial derivatives of formula (19) with respect to φ(W) and b and setting them to zero gives:
$$\phi(W) = \left(\phi(X)H\phi(X)^T + \gamma I\right)^{-1}\phi(X)HA = \phi(X)H\left(H\phi(X)^T\phi(X)H + \gamma I\right)^{-1}A \tag{20}$$

$$b = \frac{1}{n}A^T 1_n - \frac{1}{n}\phi(W)^T\phi(X)1_n = \frac{1}{n}A^T 1_n - \frac{1}{n}A^T\left(H\phi(X)^T\phi(X)H + \gamma I\right)^{-1}H\phi(X)^T\phi(X)1_n \tag{21}$$
Defining H = I − (1/n)1_n 1_n^T as the global centering matrix, formula (22) is obtained, with:
$$L^{global} = H - H\phi(X)^T\left[\phi(X)H\phi(X)^T + \gamma I\right]^{-1}\phi(X)H = \gamma H\left(H\phi(X)^T\phi(X)H + \gamma I\right)^{-1}H \tag{23}$$
where φ(X)^Tφ(X) is computed through the kernel function K, which yields formula (25).
5. The sparse coding method based on local and global regularization according to claim 4, characterized in that the kernel function K is expressed as:
$$K_{x_i,x_j} = \langle\phi(x_i),\ \phi(x_j)\rangle = \phi(x_i)^T\phi(x_j) \tag{24}$$
where the kernel function K must satisfy the Mercer condition, and K = φ(X)^Tφ(X).
6. The sparse coding method based on local and global regularization according to claim 1, characterized in that formula (27) is solved as follows:
Step 7: fix the coding coefficients A, converting the optimization problem in formula (27) into a least-squares problem with quadratic constraints:
$$\min_{S}\ \|X - SA\|_F^2 \qquad \text{s.t.}\ \|s_i\|^2 \le c,\ i = 1, \dots, k \tag{28}$$
Formula (28) is solved by Lagrange duality;
Step 8: fix the dictionary S, converting the optimization problem in formula (27) into the following problem:
$$\min_{A}\ \|X - SA\|_F^2 + \alpha\,\mathrm{Tr}(AL^{local-global}A^T) + \beta\sum_{i=1}^{n}\|a_i\|_1 \tag{30}$$
Formula (30) is solved by coordinate-wise optimization, computing the coding coefficients of each sample one by one.
7. The sparse coding method based on local and global regularization according to claim 6, characterized in that in step 7, λ = [λ_1, λ_2, ..., λ_k] is introduced as the Lagrange multiplier vector, where λ_i is the Lagrange multiplier of the i-th inequality ||s_i||^2 ≤ c, and the following is derived from formula (28):
$$S^* = XA^T\left(AA^T + \mathrm{diag}(\lambda^*)\right)^{-1} \tag{29}$$
where λ* is the optimal solution for λ.
8. The sparse coding method based on local and global regularization according to claim 6, characterized in that in step 8, when the i-th coefficient a_i in the coding coefficients A is optimized, the other coefficients are fixed, and formula (30) is expressed as:
$$\min_{a_i}\ \|x_i - Sa_i\|^2 + \alpha\left[L_{ii}^{local-global}\,a_i^T a_i + 2a_i^T\sum_{j \ne i}L_{ij}^{local-global}\,a_j\right] + \beta\|a_i\|_1 \tag{31}$$
Formula (31) can be solved with the feature-sign search algorithm.
CN201711202173.1A 2017-11-27 2017-11-27 Sparse coding method based on local and global regularization Pending CN107894967A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711202173.1A CN107894967A (en) Sparse coding method based on local and global regularization

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711202173.1A CN107894967A (en) Sparse coding method based on local and global regularization

Publications (1)

Publication Number Publication Date
CN107894967A true CN107894967A (en) 2018-04-10

Family

ID=61804714

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711202173.1A Pending CN107894967A (en) Sparse coding method based on local and global regularization

Country Status (1)

Country Link
CN (1) CN107894967A (en)


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109063725A (en) * 2018-06-13 2018-12-21 江苏理工学院 Multi-view clustering-oriented multi-graph regularization depth matrix decomposition method
CN109063725B (en) * 2018-06-13 2021-09-28 江苏理工学院 Multi-view clustering-oriented multi-graph regularization depth matrix decomposition method
CN109325515A (en) * 2018-08-10 2019-02-12 江苏理工学院 Depth matrix decomposition method and image clustering method based on local learning regularization
CN109325515B (en) * 2018-08-10 2021-09-28 江苏理工学院 Depth matrix decomposition method and image clustering method based on local learning regularization
CN112304419A (en) * 2020-10-25 2021-02-02 广东石油化工学院 Vibration and sound detection signal reconstruction method and system by using generalized sparse coding

Similar Documents

Publication Publication Date Title
Li et al. Multiview clustering: A scalable and parameter-free bipartite graph fusion method
Wen et al. Adaptive graph completion based incomplete multi-view clustering
Xie et al. Hyper-Laplacian regularized multilinear multiview self-representations for clustering and semisupervised learning
Wang et al. Deep CNNs meet global covariance pooling: Better representation and generalization
Wang et al. A study of graph-based system for multi-view clustering
CN103488662A (en) Clustering method and system of parallelized self-organizing mapping neural network based on graphic processing unit
CN112836672A (en) Unsupervised data dimension reduction method based on self-adaptive neighbor graph embedding
Guo et al. Sparse deep nonnegative matrix factorization
Li et al. An efficient manifold regularized sparse non-negative matrix factorization model for large-scale recommender systems on GPUs
CN106203470A Multi-modal feature selection and classification method based on hypergraphs
CN107894967A Sparse coding method based on local and global regularization
CN113468227A (en) Information recommendation method, system, device and storage medium based on graph neural network
CN103605985A (en) A data dimension reduction method based on a tensor global-local preserving projection
CN108830301A Semi-supervised data classification method with dual Laplacian regularization based on anchor graph structure
CN105868796A Design method for a linear discriminant sparse representation classifier based on kernel space
Liu et al. Auto-weighted collective matrix factorization with graph dual regularization for multi-view clustering
CN103473308B High-dimensional multimedia data classifying method based on maximum margin tensor learning
CN106021402A (en) Multi-modal multi-class Boosting frame construction method and device for cross-modal retrieval
Hao et al. Tensor-based multi-view clustering with consistency exploration and diversity regularization
He et al. Community detection method based on robust semi-supervised nonnegative matrix factorization
CN105335499B Document clustering method based on a distribution-convergence model
Wu et al. Computation of heterogeneous object co-embeddings from relational measurements
Zhou et al. Structural regularization based discriminative multi-view unsupervised feature selection
Cheung et al. Unsupervised feature selection with feature clustering
CN110135499A Clustering method based on manifold-space adaptive neighborhood graph learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180410