CN108845974A - Supervised linear dimensionality reduction method using the separation probability of the minimax probability machine - Google Patents

Supervised linear dimensionality reduction method using the separation probability of the minimax probability machine Download PDF

Info

Publication number
CN108845974A
Authority
CN
China
Prior art keywords
projection vector
projection
sample
vector
probability
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810371801.7A
Other languages
Chinese (zh)
Inventor
宋士吉 (Song Shiji)
巩延上 (Gong Yanshang)
张玉利 (Zhang Yuli)
黄高 (Huang Gao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN201810371801.7A priority Critical patent/CN108845974A/en
Publication of CN108845974A publication Critical patent/CN108845974A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/15Correlation function computation including computation of convolution operations


Abstract

The present invention proposes a supervised linear dimensionality reduction method using the separation probability of the minimax probability machine, belonging to the technical fields of computer machine learning and statistical learning. The method first establishes a supervised linear dimensionality reduction model based on the separation probability of the minimax probability machine; the input of the model is a sample set with multiple dimensions and class labels, and the output is a projection matrix. When reducing to 1 dimension, the problem is a single-projection-vector objective; when reducing to multiple dimensions, it is a multiple-projection-vector objective. The invention uses the separation probability between samples as the distance measure between classes and applies the conjugate gradient method for optimization, finally obtaining a projection matrix that, as far as possible, guarantees a maximum separation probability for every pair of classes. The invention improves the discriminability of the data and the accuracy and efficiency of subsequent classification, and achieves good results on multi-class dimensionality reduction problems.

Description

Supervised linear dimensionality reduction method using the separation probability of the minimax probability machine
Technical field
The invention belongs to the technical fields of computer machine learning and statistical learning, and in particular relates to a supervised linear dimensionality reduction method using the separation probability of the minimax probability machine.
Background technique
In machine learning and metric learning, dimensionality reduction plays a crucial role. Dimensionality reduction methods map high-dimensional data into a low-dimensional subspace while preserving as much as possible of the separation information between samples (unsupervised learning) or between classes (supervised learning). They are commonly used as a preprocessing step to improve subsequent data analysis such as classification, data visualization, and regression.
Linear discriminant analysis (LDA) is a classical supervised, distance-metric-based feature extraction and dimensionality reduction method. LDA was originally proposed by Fisher and others for two-class problems and later extended to multi-class problems by Rao and others. It obtains an optimal projection matrix by maximizing the total between-class scatter while minimizing the total within-class scatter. Over the past few decades, many researchers have improved LDA to raise its accuracy on specific data sets, for example with penalty functions, recursive linear discrimination, and discriminative learning analysis. However, these methods usually analyze the scatter of all classes from a global perspective and do not consider each pair of classes separately, which can be inappropriate for some class pairs in multi-class scenarios. For example, although LDA can be used for multi-class dimensionality reduction, it has the following shortcoming: the projection matrix it finds depends on the total within-class scatter and the total between-class scatter, a form of distance measure that we call the "sum of squares" form. The objective function of LDA simply adds up the squared between-class and within-class distances of all pairs, and the shape of the quadratic curve can cause class pairs that need the most optimization to be ignored in favor of pairs that are already well separated. For instance, when LDA is applied to a scenario where the distances between different classes vary greatly, it produces unsatisfactory results.
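To make the "sum of squares" criticism concrete, the following minimal NumPy sketch (an illustration, not part of the patent) computes the total between-class and within-class scatter matrices and the classical LDA criterion for a candidate projection vector:

```python
import numpy as np

def lda_objective(X, y, w):
    """Classical LDA criterion J(w) = (w^T Sb w) / (w^T Sw w).

    Sb pools the squared between-class distances of ALL class pairs
    (weighted by class size) and Sw pools all within-class deviations:
    the "sum of squares" form criticized above, in which already
    well-separated class pairs can dominate poorly separated ones.
    """
    mu = X.mean(axis=0)
    d = X.shape[1]
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        diff = (Xc.mean(axis=0) - mu).reshape(-1, 1)
        Sb += len(Xc) * diff @ diff.T                      # between-class scatter
        Sw += (Xc - Xc.mean(axis=0)).T @ (Xc - Xc.mean(axis=0))  # within-class scatter
    return float(w @ Sb @ w) / float(w @ Sw @ w)
```

Because every class pair enters Sb additively, maximizing J(w) over w says nothing about the worst-separated pair, which is the gap DR-MPM targets.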
To overcome this shortcoming, many improved LDA-style methods for multi-class dimensionality reduction have appeared in recent years. These improvements usually share a common feature: they consider a "pairwise" setting, such as sample pairs or class pairs. The advantage of the pairwise setting is that different sample or class pairs can be treated differently, so that the result achieves not only high global separation but also high local separation in certain regions, giving the poorly handled cases deeper optimization. Although these methods mitigate LDA's weakness in multi-class scenarios to some extent, they are usually heuristic, and their objective functions lack a precise practical meaning.
Summary of the invention
The purpose of the present invention is to overcome the shortcomings of the prior art by proposing a supervised linear dimensionality reduction method using the separation probability of the minimax probability machine. The invention is the first to use the separation probability between samples as the distance measure between classes; it improves the discriminability of the data and the accuracy and efficiency of subsequent classification, and achieves good results on multi-class dimensionality reduction problems.
The present invention proposes a supervised linear dimensionality reduction method using the separation probability of the minimax probability machine, characterized in that the method comprises the following steps:
1) Establish the supervised linear dimensionality reduction model (DR-MPM) based on the separation probability of the minimax probability machine.
Let the input of the model be a sample set X; the class label of the i-th sample xi in the sample set is denoted ci, i = 1, 2, ..., n, and the total number of classes is K. The output of the model is a projection matrix W, where wi is the i-th projection vector of the projection matrix, n is the number of samples in the input sample set, d is the original dimension of the samples, p is the target dimension with p < d, and R denotes the set of real numbers.
2) Determine the value of p: if p = 1, the single-projection-vector case applies; go to step 3). If p > 1, the multiple-projection-vector case applies; go to step 4).
3) Dimensionality reduction with a single projection vector, with the following specific steps:
3-1) Determine the objective function of the DR-MPM model for a single projection vector.
When p = 1, the projection matrix W reduces to a single projection vector, denoted w; the DR-MPM model is then an unconstrained maximization problem whose objective function is given by formula (1):
Wherein,
In these formulas, the sets of the i-th class samples and of the k-th class samples appear, with 1 ≤ k ≤ K; Σij denotes the between-class covariance of the i-th and j-th class samples, Σi the within-class covariance of the i-th class samples, and the remaining symbol the mean of the k-th class samples;
3-2) Solve the DR-MPM model.
Maximizing objective function (1) is equivalent to minimizing the following objective function:
The objective in formula (5) is an unconstrained non-convex minimization problem, which is optimized with the conjugate gradient method as follows:
3-2-1) Set the iteration index t = 0 and the tolerance ε > 0, and initialize the projection vector w to a random d × 1 vector w(0).
3-2-2) Compute the derivative of objective function (5) at w(t):
Wherein,
The conjugate direction is then computed as follows:
Wherein,
3-2-3) Update the projection vector w:
w(t+1) = w(t) + α(t)d(t) (10)
where α(t) is obtained by line search so that f(w(t) + α(t)d(t)) = minα f(w(t) + αd(t));
3-2-4) Check convergence: if the convergence condition ||d(t)|| < ε or ||w(t+1) − w(t)|| < ε is met, the iteration terminates, the optimal solution of the DR-MPM model is w* = w(t+1), and the dimensionality reduction is complete; otherwise set t := t + 1, return to step 3-2-2), and continue the iteration.
4) Dimensionality reduction with multiple projection vectors, with the following specific steps:
4-1) Determine the objective function of the DR-MPM model for multiple projection vectors.
When p > 1, each projection vector of the projection matrix has the same objective function as formula (1); for the projection matrix, the constraint WTStW = I is imposed, i.e.:
where St = Sw + Sb, and:
The first projection vector of the projection matrix, i.e., its first column, is obtained by repeating step 3) to solve the single-projection-vector model; from it, the next new projection vector is then obtained. After the first r projection vectors w1, w2, ..., wr (r < p) have been solved, the (r+1)-th projection vector wr+1 lies in the subspace spanned linearly by the columns of a matrix, i.e.:
Wherein,
Wr = (w1, w2, ..., wr)
Accordingly, there exists a vector vr satisfying:
wr+1 = Arvr (14)
Substituting formula (14) into objective function (1) yields the unconstrained optimization problem of formula (15):
Wherein,
Solving formula (15) yields wr+1.
wr+1 is then normalized:
The (r+1)-th projection vector has thus been obtained from the first r projection vectors.
Let Wr+1 = (Wr, wr+1) and continue the iteration until all p projection vectors are obtained; the final projection matrix is:
W* = Wp
The dimensionality reduction is then complete.
The features and beneficial effects of the present invention:
The present invention is applicable to data dimensionality reduction and classification tasks in fields such as fault diagnosis, medical image recognition, and artificial intelligence, and improves the discriminability of the data and the accuracy and efficiency of subsequent classification.
The present invention both considers class pairs and gives the optimization objective a precise practical meaning, namely the maximization of classification accuracy. It effectively reduces the dimension of complex high-dimensional source data, extracts the most critical features in the samples, and discards redundant features, thereby improving the accuracy and efficiency of subsequent processes such as classification, clustering, and regression prediction.
The present invention uses the separation probability between two classes as the distance measure between classes and applies the conjugate gradient method for optimization; it makes no prior assumptions about the data distribution, so it is widely applicable, highly robust, and highly effective.
(1) The objective function of DR-MPM seeks a projection vector such that, in the new projected subspace, every class pair is guaranteed as far as possible to have a maximum separation probability. Moreover, since the method optimizes the classification accuracy of each class pair, the maximum of the objective function directly corresponds to the maximum classification accuracy of the subsequent one-vs-one classifier, giving it a direct practical meaning.
(2) The method focuses its optimization on the class pairs with the smallest separation, so its quality is high.
Detailed description of the invention
Fig. 1 compares the objective function curves of the DR-MPM method and the traditional LDA method in an embodiment of the present invention.
Fig. 2 shows the dimensionality reduction results of the DR-MPM method and the traditional LDA method on three real public data sets, USPS, PIE, and COIL20, in an embodiment of the present invention.
Fig. 3 compares the results of the DR-MPM method and mainstream dimensionality reduction methods on two real public data sets, COIL20 and YaleB, in an embodiment of the present invention.
Fig. 4 shows the results of the DR-MPM method and mainstream dimensionality reduction methods applied to fault diagnosis of the "Jiaolong" manned submersible in an embodiment of the present invention.
Specific embodiment
A supervised linear dimensionality reduction method using the separation probability of the minimax probability machine, as proposed by the present invention, is described in further detail below with reference to the drawings and specific embodiments.
The proposed method, hereinafter called DR-MPM, distinguishes the single-projection-vector case from the multiple-projection-vector case (reduction to 1 dimension is the single-projection-vector case; reduction to multiple dimensions is the multiple-projection-vector case) and comprises the following steps:
1) Establish the DR-MPM model.
Let the input of the model be a sample set X; the class label of the i-th sample xi in the sample set is denoted ci, i = 1, 2, ..., n, and the total number of classes is K, where each class may contain multiple samples, i.e., the class labels of all samples may contain duplicates. The output of the model is a projection matrix W, where wi is the i-th projection vector of the projection matrix; n is the number of samples in the input sample set (n > 1), d is the original dimension of the samples, p is the target dimension to be reached (p < d), and R denotes the set of real numbers.
In the present invention, all matrices are denoted by uppercase letters and all vectors by lowercase letters.
2) Determine the value of p: if p = 1, the single-projection-vector case applies; go to step 3). If p > 1, the multiple-projection-vector case applies; go to step 4).
3) Dimensionality reduction with a single projection vector, with the following specific steps:
3-1) Determine the objective function of the DR-MPM model for a single projection vector.
The single-projection-vector case requires reducing the input sample set X to 1 dimension, i.e., p = 1, so the projection matrix W reduces to a single projection vector, denoted w.
At this point, the DR-MPM model is an unconstrained maximization problem whose objective function is given by formula (1):
Wherein,
In these formulas, the sets of the i-th class samples and of the k-th class samples appear, with 1 ≤ k ≤ K; Σij denotes the between-class covariance of the i-th and j-th class samples, Σi the within-class covariance of the i-th class samples, and the remaining symbol the mean of the k-th class samples.
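The formula images for (1)-(4) did not survive in this text. As a hedged reconstruction (an assumption based on standard minimax-probability-machine theory in the sense of Lanckriet et al., not the patent's verbatim formulas): the separation between classes i and j along a projection w is measured by a margin κij(w), and the distribution-free worst-case separation probability is αij = κij²/(1 + κij²), which formula (1) plausibly aggregates over class pairs:

```latex
% Hedged reconstruction; the patent's formula images are not reproduced here.
\kappa_{ij}(w) = \frac{\bigl|\,w^{\top}(\mu_i - \mu_j)\,\bigr|}
                      {\sqrt{w^{\top}\Sigma_i w} + \sqrt{w^{\top}\Sigma_j w}},
\qquad
\alpha_{ij}(w) = \frac{\kappa_{ij}(w)^{2}}{1 + \kappa_{ij}(w)^{2}}
```

Under the MPM bound, αij lower-bounds the probability that classes i and j are correctly separated along w for any distributions with the given means and covariances, which is consistent with the patent's statement that no prior assumption on the data distribution is required.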
3-2) Solve the DR-MPM model.
Before solving the optimization problem of formula (1), the objective function (1) of the DR-MPM model is first transformed into the equivalent problem of minimizing the following objective function:
The objective in formula (5) is an unconstrained non-convex minimization problem; the present invention optimizes formula (5) with the classical conjugate gradient method, as follows:
3-2-1) Set the iteration index t = 0 and the tolerance ε > 0. The tolerance is a small number close to 0; the smaller its value, the more accurate the final result (in this embodiment it is taken as 0.01). Initialize the projection vector w to a random d × 1 vector w(0), where d is the original sample dimension.
3-2-2) Compute the derivative of objective function (5) at the point w(t):
Wherein,
The conjugate direction is then computed as follows:
Wherein,
3-2-3) Update the projection vector w:
w(t+1) = w(t) + α(t)d(t) (10)
where α(t) is obtained by line search so that f(w(t) + α(t)d(t)) = minα f(w(t) + αd(t));
3-2-4) Check convergence: if the convergence condition ||d(t)|| < ε or ||w(t+1) − w(t)|| < ε is met, the iteration terminates, the optimal solution of the DR-MPM model is w* = w(t+1), and the dimensionality reduction is complete; otherwise set t := t + 1, return to step 3-2-2), and continue the iteration.
The conjugate gradient method converges to a local optimum, which is not necessarily the global optimum. In theory, different initial values yield a series of different local optima, so the method is run multiple times with different initializations, and the local optimum with the smallest objective function value is selected as the final result.
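Steps 3-2-1) through 3-2-4) can be sketched in Python as follows. Since the formula images for (5)-(9) are not reproduced in this text, the sketch takes the objective f and its gradient as arguments; the Polak-Ribière formula for the conjugate coefficient and the coarse grid line search are assumptions, not the patent's exact choices.

```python
import numpy as np

def conjugate_gradient_min(f, grad, d_dim, eps=0.01, max_iter=500, seed=0):
    """Minimize f by nonlinear conjugate gradient (steps 3-2-1..3-2-4).

    f, grad : the objective of formula (5) and its derivative, passed in
              because the patent's formula images are not reproduced here.
    The Polak-Ribiere(+) beta and the grid line search are assumptions.
    """
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(d_dim)               # 3-2-1) random d x 1 init
    g = grad(w)
    d = -g                                       # first direction: steepest descent
    for _ in range(max_iter):
        # 3-2-3) line search: pick alpha minimizing f(w + alpha * d) on a grid
        alpha, best = 0.0, f(w)
        for a in np.geomspace(1e-4, 1.0, 40):
            fa = f(w + a * d)
            if fa < best:
                alpha, best = a, fa
        w_new = w + alpha * d
        # 3-2-4) convergence test: small direction or small step
        if np.linalg.norm(d) < eps or np.linalg.norm(w_new - w) < eps:
            return w_new
        g_new = grad(w_new)
        beta = max(0.0, float(g_new @ (g_new - g)) / float(g @ g))
        d = -g_new + beta * d                    # 3-2-2) new conjugate direction
        w, g = w_new, g_new
    return w
```

For a production run one would restart this from several random seeds and keep the solution with the smallest objective value, matching the multi-start strategy described above.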
4) Dimensionality reduction with multiple projection vectors, with the following specific steps:
4-1) Determine the objective function of the DR-MPM model for multiple projection vectors.
When multiple projection vectors are required, i.e., p > 1, a projection matrix must be found; each projection vector of the projection matrix has the same objective function as formula (1). For the projection matrix, a constraint must be imposed: WTStW = I, i.e.:
where St = Sw + Sb, and:
The first projection vector of the projection matrix (the first column of the projection matrix) is obtained by repeating step 3) to solve the single-projection-vector model; from this first projection vector, the next new projection vector is then obtained. After the first r (r < p) projection vectors w1, w2, ..., wr have been solved, the (r+1)-th projection vector wr+1 lies in the subspace spanned linearly by the columns of a matrix, i.e.:
Wherein,
Wr=(w1, w2..., wr)
Accordingly, there exists a vector vr satisfying:
wr+1 = Arvr (14)
Substituting formula (14) into objective function (1) yields the unconstrained optimization problem of formula (15):
Wherein,
The problem in formula (15) has the same form as formula (1), so vr can be obtained by solving it in the same way, which yields wr+1; it is finally normalized:
The (r+1)-th projection vector has thus been obtained from the first r projection vectors. Let Wr+1 = (Wr, wr+1) and continue the iteration until all p required projection vectors are obtained; the final projection matrix is:
W* = Wp
The dimensionality reduction is then complete.
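The iterative construction of step 4) can be sketched as follows. Since the formula images for Ar and (15)-(16) are not reproduced in this text, two details are assumptions: that wr+1 is constrained to the St-orthogonal complement of the columns of Wr (so WTStW = I holds by construction), and that the normalization makes each vector satisfy wTStw = 1.

```python
import numpy as np
from scipy.linalg import null_space

def next_projection_basis(W_r, St):
    """Basis A_r of the subspace admissible for w_{r+1} (step 4-1).

    Assumption: w_{r+1} must satisfy W_r^T St w_{r+1} = 0 so that the
    constraint W^T St W = I is preserved; A_r spans that null space.
    """
    return null_space(W_r.T @ St)     # d x (d - r), orthonormal columns

def st_normalize(w, St):
    """Assumed normalization after formula (15): scale w so w^T St w = 1."""
    return w / np.sqrt(float(w @ St @ w))
```

Each new vector is then found as wr+1 = st_normalize(A_r @ v_r, St), with v_r the solution of the reduced problem (15); by construction wr+1 is St-orthogonal to all earlier columns.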
Fig. 1 compares the objective function curves of the DR-MPM method of the invention and the traditional LDA method; Fig. 1(a) shows the objective function curve of DR-MPM, and Fig. 1(b) that of traditional LDA. In both, the horizontal axis is the degree of separation between two classes and the vertical axis is the objective function value. It can be seen that the steadily increasing slope of LDA's objective function can cause class pairs that need the most optimization to be ignored in favor of pairs that are already sufficiently separated. In contrast, the objective function of DR-MPM ignores class pairs whose separation probability is already large enough and focuses its optimization on class pairs that are hard to distinguish, so that they become separable after dimensionality reduction.
As shown in Fig. 2, when the DR-MPM method and the traditional LDA method of the invention reduce three real public data sets, USPS, PIE, and COIL20, to 2 dimensions, the results of LDA usually contain local clusters of many classes that are difficult to separate, whereas DR-MPM separates these classes well one by one. DR-MPM therefore has a clear advantage over LDA.
As shown in Fig. 3, the DR-MPM method and mainstream dimensionality reduction methods are compared on two real public data sets, COIL20 and YaleB, with the target dimension on the abscissa and the post-reduction classification accuracy of a nearest-neighbor classifier on the ordinate. DR-MPM attains the highest classification accuracy in most cases, especially when the target dimension is very small. As the target dimension increases, the advantage of DR-MPM becomes less pronounced, but it still maintains classification accuracy higher than, or on par with, the other methods.
An embodiment of the present invention has been applied to fault diagnosis of the "Jiaolong", the first operational deep-sea manned submersible independently designed by China.
The sample data are four days of integrated sea-trial data from the "Jiaolong", containing seven fault types: low cabin oxygen concentration, low pressure, high oxygen concentration, backup battery tank level compensation fault, hydraulic system water leakage, main battery voltage fault, and operating-system junction box leakage, plus normal data, for a total of eight classes.
The sample sources include "propeller control", "digital output", "Doppler log", "navigation control box", "manipulator", "angular rate gyroscope", "control quantities", "compass", "inclinometer", "depth gauge", "life-support system", "acoustics computer", "conductivity-temperature-depth", "hydraulic system", and "motion sensors", with a total of 384 features.
The sample data comprise four data sets, each with 384 features per sample, as shown in Table 1:
Table 1: Data samples for the application of the invention to "Jiaolong" fault diagnosis
The results of dimensionality reduction and feature extraction on these data sets with the method of the invention and other mainstream methods are shown in Fig. 4, where the abscissa is the target dimension and the ordinate is the post-reduction classification accuracy of a nearest-neighbor classifier. The analysis is as follows:
(1) On all four data sets, the accuracy curve of the DR-MPM method of the invention is usually higher than, or on par with, those of other current mainstream methods. This shows that the dimensionality reduction of DR-MPM makes different classes easier to separate, so that its effect on subsequent classification problems is generally better than that of other methods, especially when the target dimension is small.
(2) The quality of dimensionality reduction also lies in whether, while reducing complexity, it can maintain or even exceed the accuracy of classifying the original data directly. Comparing classification accuracy before and after reduction shows that when the target dimension is very low (e.g., p = 1, 2), the accuracy after reduction by the method of the invention can be slightly below that of the original data; but as the target dimension increases, the classification accuracy also rises, usually reaching parity with the original accuracy and sometimes exceeding it (e.g., the case p ≥ 2 on data set 2). This shows that the reduction process of the method extracts the important information of the data set; the smaller number of features does not harm the accuracy of subsequent classification and may even increase it.
In summary, for the "Jiaolong" fault diagnosis data, the DR-MPM method of the invention generally performs better than mainstream dimensionality reduction methods. After dimensionality reduction, the number of features per sample is greatly decreased, which substantially speeds up subsequent classification; at the same time, the method retains the main information of the data and filters out noise, so that post-reduction classification accuracy matches or even exceeds that of the original data.

Claims (1)

1. A supervised linear dimensionality reduction method using the separation probability of the minimax probability machine, characterized in that the method comprises the following steps:
1) establishing the supervised linear dimensionality reduction model (DR-MPM) based on the separation probability of the minimax probability machine;
letting the input of the model be a sample set X, where the class label of the i-th sample xi in the sample set is denoted ci, i = 1, 2, ..., n, and the total number of classes is K; the output of the model is a projection matrix W, where wi is the i-th projection vector of the projection matrix, n is the number of samples in the input sample set, d is the original dimension of the samples, p is the target dimension with p < d, and R denotes the set of real numbers;
2) determining the value of p: if p = 1, the single-projection-vector case applies, and the method proceeds to step 3); if p > 1, the multiple-projection-vector case applies, and the method proceeds to step 4);
3) performing dimensionality reduction with a single projection vector, with the following specific steps:
3-1) determining the objective function of the DR-MPM model for a single projection vector;
when p = 1, the projection matrix W reduces to a single projection vector, denoted w; the DR-MPM model is then an unconstrained maximization problem whose objective function is given by formula (1):
Wherein,
in these formulas, the sets of the i-th class samples and of the k-th class samples appear, with 1 ≤ k ≤ K; Σij denotes the between-class covariance of the i-th and j-th class samples, Σi the within-class covariance of the i-th class samples, and the remaining symbol the mean of the k-th class samples;
3-2) solving the DR-MPM model;
maximizing objective function (1) is equivalent to minimizing the following objective function:
the objective in formula (5) is an unconstrained non-convex minimization problem, which is optimized with the conjugate gradient method as follows:
3-2-1) setting the iteration index t = 0 and the tolerance ε > 0, and initializing the projection vector w to a random d × 1 vector w(0);
3-2-2) computing the derivative of objective function (5) at w(t):
Wherein,
the conjugate direction is then computed as follows:
Wherein,
3-2-3) updating the projection vector w:
w(t+1) = w(t) + α(t)d(t) (10)
where α(t) is obtained by line search so that f(w(t) + α(t)d(t)) = minα f(w(t) + αd(t));
3-2-4) checking convergence: if the convergence condition ||d(t)|| < ε or ||w(t+1) − w(t)|| < ε is met, the iteration terminates, the optimal solution of the DR-MPM model is w* = w(t+1), and the dimensionality reduction is complete; otherwise, setting t := t + 1, returning to step 3-2-2), and continuing the iteration;
4) performing dimensionality reduction with multiple projection vectors, with the following specific steps:
4-1) determining the objective function of the DR-MPM model for multiple projection vectors;
when p > 1, each projection vector of the projection matrix has the same objective function as formula (1); for the projection matrix, the constraint WTStW = I is imposed, i.e.:
where St = Sw + Sb, and:
the first projection vector of the projection matrix, i.e., its first column, is obtained by repeating step 3) to solve the single-projection-vector model, after which the next new projection vector is obtained from it; after the first r projection vectors w1, w2, ..., wr (r < p) have been solved, the (r+1)-th projection vector wr+1 lies in the subspace spanned linearly by the columns of a matrix, i.e.:
Wherein,
Wr = (w1, w2, ..., wr)
accordingly, there exists a vector vr satisfying:
wr+1 = Arvr (14)
substituting formula (14) into objective function (1) yields the unconstrained optimization problem of formula (15):
Wherein,
solving formula (15) yields wr+1;
normalizing wr+1:
the (r+1)-th projection vector has thus been obtained from the first r projection vectors;
letting Wr+1 = (Wr, wr+1) and continuing the iteration until all p projection vectors are obtained, the final projection matrix is:
W* = Wp
and the dimensionality reduction is complete.
CN201810371801.7A 2018-04-24 2018-04-24 Supervised linear dimensionality reduction method using the separation probability of the minimax probability machine Pending CN108845974A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810371801.7A CN108845974A (en) 2018-04-24 2018-04-24 Supervised linear dimensionality reduction method using the separation probability of the minimax probability machine


Publications (1)

Publication Number Publication Date
CN108845974A true CN108845974A (en) 2018-11-20

Family

ID=64212227

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810371801.7A Pending CN108845974A (en) Supervised linear dimensionality reduction method using the separation probability of the minimax probability machine

Country Status (1)

Country Link
CN (1) CN108845974A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110020674A (en) * 2019-03-13 2019-07-16 清华大学 A kind of cross-cutting adapting to image classification method promoting topic categories discrimination
CN110347825A (en) * 2019-06-14 2019-10-18 北京物资学院 The short English film review classification method of one kind and device
CN110473140A (en) * 2019-07-18 2019-11-19 清华大学 A kind of image dimension reduction method of the extreme learning machine based on figure insertion
CN112836671A (en) * 2021-02-26 2021-05-25 西北工业大学 Data dimension reduction method based on maximization ratio and linear discriminant analysis
US11816127B2 (en) 2021-02-26 2023-11-14 International Business Machines Corporation Quality assessment of extracted features from high-dimensional machine learning datasets
CN112836671B (en) * 2021-02-26 2024-03-08 西北工业大学 Data dimension reduction method based on maximized ratio and linear discriminant analysis
CN113836757A (en) * 2021-11-30 2021-12-24 滨州学院 Supervised feature selection method and device and electronic equipment
CN116386830A (en) * 2023-04-10 2023-07-04 山东博鹏信息科技有限公司 Hospital management system based on big data
CN116386830B (en) * 2023-04-10 2023-09-22 山东博鹏信息科技有限公司 Hospital management system based on big data


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20181120