CN108763156A - A fast approximation method based on concave-convex programming - Google Patents

A fast approximation method based on concave-convex programming

Info

Publication number
CN108763156A
CN108763156A CN201810526436.2A
Authority
CN
China
Prior art keywords
convex
function
programming
positive definite
concave-convex
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810526436.2A
Other languages
Chinese (zh)
Inventor
Jie Yang (杨杰)
Fanghui Liu (刘方辉)
Xiaolin Huang (黄晓霖)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN201810526436.2A priority Critical patent/CN108763156A/en
Publication of CN108763156A publication Critical patent/CN108763156A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 - Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 - Complex mathematical operations
    • G06F17/15 - Correlation function computation including computation of convolution operations
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 - Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 - Complex mathematical operations
    • G06F17/16 - Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Computational Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The present invention provides a fast approximation method based on concave-convex programming, applied to logistic regression models with an indefinite (non-positive-definite) kernel. The method comprises: decomposing the indefinite-kernel logistic regression model into the form of the difference of two convex functions; performing a first-order Taylor expansion on one of the convex functions to obtain a convex optimization problem; iteratively solving that convex optimization problem until a corresponding solution is obtained; performing another first-order Taylor expansion on the convex function at that solution; and alternating the above steps until convergence. This realizes a fast solution procedure for concave-convex programming, achieves good classification accuracy and convergence speed on high-dimensional data at large data scale, and is simple to implement and easy to operate.

Description

A fast approximation method based on concave-convex programming
Technical field
The present invention relates to the technical fields of non-convex optimization and data processing, and in particular to a fast approximation method based on concave-convex programming.
Background technology
Kernel methods are widely used in machine learning, computer vision, bioinformatics, and many other fields. Kernel methods require the kernel matrix to be positive semidefinite, which ensures that the corresponding model is a convex optimization problem and that the candidate function lies in a reproducing Hilbert space. However, in fields such as image processing, manifold learning, and robust learning, kernel functions that do not satisfy the positive-definiteness condition are encountered more and more often. For example, similarity measures defined via the KL divergence do not satisfy positive definiteness, and measurement inaccuracies can contaminate an acquired kernel matrix with noise, turning it into a non-positive-definite (indefinite) kernel matrix. Because the kernel function is indefinite, the classical reproducing Hilbert space theory no longer holds, and the problem to be solved turns from a convex optimization problem into a non-convex one. How to handle indefinite kernels, both in theory and in practical algorithms, is precisely a central topic of kernel-method research.
Among existing non-convex optimization methods, concave-convex programming (also known as the convex-concave procedure, CCCP) is very common. Its idea is to split the objective function to be solved into the form of the difference of two convex functions and to perform a Taylor expansion of one of the convex functions at the current point, thereby obtaining a convex approximation of the original problem; the optimal solution of this convex approximate problem is computed, a new Taylor expansion is performed at that optimum, and the process repeats until the overall objective function converges. However, traditional concave-convex programming requires a double, alternating iteration: the outer loop builds a convex approximation of the original problem at the current point, while the inner loop solves a convex optimization problem exactly. The computational complexity of the algorithm grows linearly with the number of outer-loop iterations. When the data scale is large, this method is computationally inefficient and therefore severely limited.
In the prior art, such concave-convex programming techniques have been used to solve non-convex optimization problems, for example:
[1] Francois Bertrand Akoa, "Combining DC algorithms (DCAs) and decomposition techniques for the training of nonpositive-semidefinite kernels," IEEE Transactions on Neural Networks, vol. 19, no. 11, pp. 1854–1872, 2008.
[2] Haiming Xu, Hui Xue, Xiaohong Chen, and Yunyun Wang, "Solving indefinite kernel support vector machine with difference of convex functions programming," in Proceedings of the AAAI Conference on Artificial Intelligence, 2017, pp. 1610–1616.
However, the above techniques all use the traditional concave-convex programming method for the solution, whose computational efficiency is low. Therefore, as experimental data in every field keep accumulating and data volume and dimensionality keep growing, a method is urgently needed that can solve concave-convex programming problems quickly without a significant loss in solution quality.
Summary of the invention
In view of the defects in the prior art, the object of the present invention is to provide a fast approximate solution method based on concave-convex programming, which can effectively accelerate the solution procedure, is simple to implement and easy to operate, and is well suited to processing high-dimensional data at large data scale.
The present invention provides a fast approximation method based on concave-convex programming, comprising:
decomposing the logistic regression model of an indefinite (non-positive-definite) kernel into the form of the difference of two convex functions;
performing a first-order Taylor expansion on one of the two convex functions to obtain a first convex optimization problem, and iteratively solving the first convex optimization problem until a corresponding first solution is obtained;
performing a first-order Taylor expansion on the convex function again at the first solution to obtain a second convex optimization problem, and iteratively solving the second convex optimization problem until the resulting second solution converges.
Optionally, before decomposing the logistic regression model of the indefinite kernel into the form of the difference of two convex functions, the method further comprises:
building the logistic regression model of the indefinite kernel.
Optionally, building the logistic regression model of the indefinite kernel comprises:
given a sample space $\mathcal{X}$ and an output space $\mathcal{Y}=\{-1,+1\}$, obtaining a discriminant function $f$ based on the training sample set $\{(x_i,y_i)\}_{i=1}^{N}$; wherein the discriminant function $f$ lies in a reproducing Krein space, $x_i$ is the $i$-th training sample, $y_i$ is the label of the $i$-th training sample, and $N$ is the number of training samples;
building the following initial model based on the discriminant function $f$:
$$\operatorname{stab}_{f\in\mathcal{H}_{\mathcal{K}}}\;\sum_{i=1}^{N}\log\bigl(1+e^{-y_i f(x_i)}\bigr)+\lambda\langle f,f\rangle_{\mathcal{H}_{\mathcal{K}}}$$
where $\lambda$ is the regularization coefficient, $f$ is the discriminant function, $\mathcal{H}_{\mathcal{K}}$ is the reproducing Krein space, $\langle f,f\rangle_{\mathcal{H}_{\mathcal{K}}}$ is the regularization term of $f$ in the reproducing Krein space, $f(x_i)$ is the prediction of the discriminant function $f$ on training sample $x_i$, and stab denotes stabilize, i.e., solving the stability problem of this objective function;
based on the representer theorem of the reproducing Krein space, transforming the initial model to obtain the logistic regression model of the indefinite kernel as follows:
$$\operatorname{stab}_{\beta}\;\mathbf{1}^{T}\log\bigl(\mathbf{1}+e^{-YK\beta}\bigr)+\lambda\beta^{T}K\beta$$
where $K$ is the kernel matrix, $Y$ is the label matrix, $\beta$ is the coefficient vector to be solved, $\beta^{T}$ is the transpose of $\beta$, $\mathbf{1}$ is the all-ones column vector of dimension $N$, and $\mathbf{1}^{T}$ is the all-ones row vector of dimension $N$.
Optionally, decomposing the logistic regression model of the indefinite kernel into the form of the difference of two convex functions comprises:
transforming the logistic regression model of the indefinite kernel into the following form:
$$\operatorname{stab}_{\beta}\;f(\beta),\qquad f(\beta)=\mathbf{1}^{T}\log\bigl(\mathbf{1}+e^{-YK\beta}\bigr)+\lambda\beta^{T}K_{+}\beta-\lambda\beta^{T}K_{-}\beta$$
where $\operatorname{stab}_{\beta}f(\beta)$ denotes solving the stability problem of $f(\beta)$, $f(\beta)$ is the objective function in $\beta$, and $K_{+}$ and $K_{-}$ are obtained by an eigenvalue decomposition of the kernel matrix $K$: $K_{+}$ consists of the part of $K$ with eigenvalues greater than 0, and $K_{-}$ consists of the part of $K$ with eigenvalues less than 0;
by the positive decomposition property of the reproducing Krein space, on a given set, decomposing the logistic regression model of the indefinite kernel to obtain the form of the difference of the following two convex functions:
$$f(\beta)=g(\beta)-h(\beta)$$
where:
$$g(\beta)=\mathbf{1}^{T}\log\bigl(\mathbf{1}+e^{-YK\beta}\bigr)+\lambda\beta^{T}K_{+}\beta,\qquad h(\beta)=\lambda\beta^{T}K_{-}\beta$$
Optionally, performing the first-order Taylor expansion on one of the convex functions to obtain the first convex optimization problem comprises:
Taylor-expanding $h(\beta)$ at the current point $\beta_k$ to obtain
$$\hat h(\beta)=h(\beta_k)+\nabla h(\beta_k)^{T}(\beta-\beta_k)$$
where $\beta_k$ is the result of the $k$-th outer iteration and $\hat h(\beta)$ is the function value after the Taylor expansion at $\beta_k$;
applying this approximation to $f(\beta)$ to obtain the approximate function $\hat f_k(\beta)$ as follows:
$$\hat f_k(\beta)=g(\beta)-\hat h(\beta)$$
i.e.:
$$\hat f_k(\beta)=g(\beta)-h(\beta_k)-\nabla h(\beta_k)^{T}(\beta-\beta_k)$$
where $h(\beta_k)$ is the function value of $h$ at $\beta_k$, $\nabla$ is the gradient symbol, and $\nabla h(\beta_k)$ is the gradient of $h$ evaluated at $\beta_k$;
Optionally, iteratively solving the first convex optimization problem until the corresponding first solution is obtained comprises:
obtaining the gradient of the approximate function $\hat f_k(\beta)$, computed as follows:
$$\nabla\hat f_k(\beta)=-KYq+2\lambda K_{+}\beta-2\lambda K_{-}\beta_k$$
where $q=[q_1,q_2,\dots,q_N]^{T}$ is defined as:
$$q_i=\frac{1}{1+\exp\bigl(y_i\sum_{j=1}^{N}K_{ij}\beta_j\bigr)}$$
where $\beta_j$ is the $j$-th component of the coefficient vector $\beta$ and $K_{ij}$ is the $(i,j)$ element of the kernel matrix; given an initial value $\beta_k^{(0)}$, computing the gradient $\nabla\hat f_k(\beta_k^{(t)})$ according to the above formula and then using the following iteration:
$$\beta_k^{(t+1)}=\beta_k^{(t)}-\eta_t\nabla\hat f_k\bigl(\beta_k^{(t)}\bigr)$$
where $\beta_k^{(t+1)}$ is the result of the $(t+1)$-th inner iteration in the solution of $\beta_k$, $\beta_k^{(t)}$ is the result of the $t$-th inner iteration, and $\eta_t$ is the step size of the $t$-th iteration;
setting an inner-loop convergence criterion and iterating until the first solution, denoted $\tilde\beta_k$, is obtained, wherein the inner-loop convergence criterion is:
$$\bigl|\hat f_k\bigl(\beta_k^{(t+1)}\bigr)-\hat f_k\bigl(\beta_k^{(t)}\bigr)\bigr|\le\varepsilon$$
where $\varepsilon$ is the absolute termination residual, $\hat f_k(\beta_k^{(t+1)})$ is the value of the $k$-th outer-iteration function $\hat f_k$ at the $(t+1)$-th inner iterate, and $\hat f_k(\beta_k^{(t)})$ is its value at the $t$-th inner iterate.
Optionally, performing the first-order Taylor expansion on the convex function again at the first solution to obtain the second convex optimization problem comprises:
Taylor-expanding $h(\beta)$ at the first solution $\tilde\beta_k$ to obtain the second convex optimization problem.
Optionally, iteratively solving the second convex optimization problem until the resulting second solution converges comprises:
setting a maximum number of iterations;
iterating over each second convex optimization problem, incrementing the iteration count by 1 each time, and taking the value obtained when the iteration count reaches the maximum number of iterations as the second solution.
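Taken together, the steps above form a double loop in which the inner convex problem is solved only inexactly before the Taylor expansion is refreshed. The following skeleton is one illustrative reading of that flow, not literal code of the invention; the helper names grad_h and solve_inexact are assumptions:
```python
def fast_concave_convex(g, grad_h, solve_inexact, beta0, max_outer=50):
    """Outline of minimizing f = g - h (g, h convex) with inexact inner solves."""
    beta = beta0
    for _ in range(max_outer):
        lin = grad_h(beta)                    # first-order Taylor term of h at beta_k

        def surrogate(b, lin=lin):            # convex approximation g(b) - <grad_h(beta_k), b>
            return g(b) - lin @ b

        beta = solve_inexact(surrogate, beta) # loosely stopped inner-loop solve
    return beta
```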
Compared with the prior art, the present invention has the following advantageous effects:
The fast approximate solution method based on concave-convex programming provided by the invention decomposes the logistic regression model of an indefinite kernel into the form of the difference of two convex functions, performs a first-order Taylor expansion on one of the convex functions to obtain a convex optimization problem, iteratively solves that problem until a corresponding solution is obtained, performs another first-order Taylor expansion on the convex function at that solution, and alternates these steps until convergence. This realizes a fast solution procedure for concave-convex programming, achieves good classification accuracy and convergence speed on high-dimensional data at large data scale, and is simple to implement and easy to operate.
Description of the drawings
Other features, objects, and advantages of the present invention will become more apparent upon reading the detailed description of non-limiting embodiments with reference to the following drawings:
Fig. 1 is a flowchart of the method of the present invention.
Fig. 2 is a schematic comparison of the convergence trends of the present invention and other methods on the monks1 data set.
Detailed description of the embodiments
The present invention is described in detail below with reference to specific embodiments. The following embodiments will help those skilled in the art to further understand the present invention, but do not limit the invention in any way. It should be pointed out that those of ordinary skill in the art can make various modifications and improvements without departing from the inventive concept; all of these belong to the protection scope of the present invention.
The present invention provides a fast approximation method based on concave-convex programming. The method first builds a logistic regression model based on an indefinite kernel and decomposes it into the form of the difference of two convex functions; it then performs a Taylor expansion on one of the convex functions to obtain a convex approximation of the original problem, and solves it with fast iterations to obtain an approximate solution. At that approximate solution the convex function is Taylor-expanded again, and the above steps alternate until convergence. The fast approximate solution algorithm proposed by the invention obtains only an approximate solution in the inner loop, which is intended to accelerate the inner iterative process. A specific embodiment is as follows:
Step 1: build the logistic regression model based on the indefinite kernel.
In this embodiment, according to existing indefinite-kernel design techniques, a specific indefinite kernel is generated from the training data samples and embedded into kernel logistic regression, and the problem is converted into a non-convex problem of solving for stability.
Step 2: decompose the logistic regression model of the indefinite kernel.
In this embodiment, the logistic regression model of the indefinite kernel from Step 1 is split into the form of the difference of two convex function models, and a first-order Taylor expansion is performed on one of the convex functions to obtain a convex approximate model.
Step 3: iteratively solve the inner loop of the convex approximate model until the inner-loop solution procedure converges.
In this embodiment, the convex approximate model obtained in Step 2 is solved iteratively with an existing gradient descent or stochastic gradient descent algorithm under a loose iteration stopping criterion, yielding an inexact solution, i.e., an approximate solution, of the convex approximate model.
Step 4: on the basis of the approximate inner-loop solution obtained in Step 3, perform a first-order Taylor expansion on the convex function at the current point; the iteration scheme is the same as in Step 3, and the process repeats until the outer-loop solution procedure converges.
Specifically, given a sample space $\mathcal{X}$ and an output space $\mathcal{Y}=\{-1,+1\}$, based on the training sample set $\{(x_i,y_i)\}_{i=1}^{N}$ we wish to learn a discriminant function $f$ that can predict the label of a new sample. In particular, when building the logistic regression model based on an indefinite kernel, the indefiniteness of the kernel function causes the candidate function $f$ to lie in a reproducing Krein space, the space induced by the indefinite kernel. The optimization problem has then become non-convex, and the minimization problem of solving for an extreme point of the model degenerates into the stability (stab) problem of finding a saddle point, specifically:
$$\operatorname{stab}_{f\in\mathcal{H}_{\mathcal{K}}}\;\sum_{i=1}^{N}\log\bigl(1+e^{-y_i f(x_i)}\bigr)+\lambda\langle f,f\rangle_{\mathcal{H}_{\mathcal{K}}}$$
Using the representer theorem of the reproducing Krein space, $f(\cdot)=\sum_{i=1}^{N}\beta_i\,k(x_i,\cdot)$, the above model can be converted into:
$$\operatorname{stab}_{\beta}\;\mathbf{1}^{T}\log\bigl(\mathbf{1}+e^{-YK\beta}\bigr)+\lambda\beta^{T}K\beta$$
where $K$ is the kernel matrix and $Y$ is the label matrix; since $K$ is no longer guaranteed to be positive semidefinite, this problem is non-convex.
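For concreteness, this objective can be evaluated as in the following minimal sketch, assuming $Y=\operatorname{diag}(y_1,\dots,y_N)$; the function and variable names are illustrative, not taken from the patent:
```python
import numpy as np

def iklr_objective(beta, K, y, lam):
    """Evaluate f(beta) = 1^T log(1 + exp(-Y K beta)) + lam * beta^T K beta."""
    margins = y * (K @ beta)                   # y_i * (K beta)_i, with Y = diag(y)
    loss = np.sum(np.log1p(np.exp(-margins)))  # logistic loss term
    reg = lam * (beta @ K @ beta)              # indefinite quadratic regularizer
    return loss + reg
```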
Using the positive decomposition property of the reproducing Krein space, on a given set, an indefinite kernel function $k(\cdot,\cdot)$ can be decomposed into the form of the difference of two positive-definite kernel functions:
$$k(\cdot,\cdot)=k_{+}(\cdot,\cdot)-k_{-}(\cdot,\cdot)$$
Correspondingly, the non-convex model of the first step can be converted into:
$$\operatorname{stab}_{\beta}\;\mathbf{1}^{T}\log\bigl(\mathbf{1}+e^{-YK\beta}\bigr)+\lambda\beta^{T}K_{+}\beta-\lambda\beta^{T}K_{-}\beta$$
where the kernel matrices $K_{+}$ and $K_{-}$ can be obtained by an eigenvalue decomposition of $K$: $K_{+}$ consists of the part of $K$ with eigenvalues greater than 0, and $K_{-}$ of the part with eigenvalues less than 0. From this formula it can be seen that the first and second terms, $\mathbf{1}^{T}\log(\mathbf{1}+e^{-YK\beta})+\lambda\beta^{T}K_{+}\beta$, form a convex function of $\beta$, and that the third term, $\lambda\beta^{T}K_{-}\beta$, is also a convex function of $\beta$. By this positive decomposition, the non-convex model can therefore be split into the difference of two convex functions, i.e., $f(\beta)=g(\beta)-h(\beta)$. In each iteration, $h(\beta)$ is Taylor-expanded at the current point $\beta_k$, giving $\hat h(\beta)=h(\beta_k)+\nabla h(\beta_k)^{T}(\beta-\beta_k)$; $f(\beta)$ can then be represented by the following approximate function:
Specifically, for the logistic regression model of the indefinite kernel, the convex approximate model is as follows:
$$\hat f_k(\beta)=\mathbf{1}^{T}\log\bigl(\mathbf{1}+e^{-YK\beta}\bigr)+\lambda\beta^{T}K_{+}\beta-2\lambda\beta_k^{T}K_{-}\beta+\lambda\beta_k^{T}K_{-}\beta_k$$
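The eigenvalue split used in this decomposition can be realized as in the following minimal sketch (plain numpy, assuming a symmetric $K$; illustrative, not the patent's own code):
```python
import numpy as np

def positive_decomposition(K):
    """Split a symmetric indefinite matrix K into K_plus - K_minus,
    both positive semidefinite, via an eigenvalue decomposition."""
    eigvals, eigvecs = np.linalg.eigh(K)
    K_plus = (eigvecs * np.maximum(eigvals, 0.0)) @ eigvecs.T    # eigenvalues > 0
    K_minus = (eigvecs * np.maximum(-eigvals, 0.0)) @ eigvecs.T  # eigenvalues < 0
    return K_plus, K_minus
```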
Many off-the-shelf algorithms, such as gradient descent and stochastic gradient descent, can solve the convex approximate model of the original problem obtained in the second step. However, traditional concave-convex programming has to solve this convex approximate model exactly in every outer iteration, whereas the fast approximation method of the present invention only needs to obtain an inexact solution of the inner loop. Taking gradient descent as an example, the gradient is:
$$\nabla\hat f_k(\beta)=-KYq+2\lambda K_{+}\beta-2\lambda K_{-}\beta_k$$
where $q=[q_1,q_2,\dots,q_N]^{T}$ is defined as:
$$q_i=\frac{1}{1+\exp\bigl(y_i\sum_{j=1}^{N}K_{ij}\beta_j\bigr)}$$
Given an initial value $\beta_k^{(0)}$, the gradient $\nabla\hat f_k(\beta_k^{(t)})$ is computed according to the above formula, and the following iteration is then used:
$$\beta_k^{(t+1)}=\beta_k^{(t)}-\eta_t\nabla\hat f_k\bigl(\beta_k^{(t)}\bigr)$$
The inner-loop convergence criterion is set as:
$$\bigl|\hat f_k\bigl(\beta_k^{(t+1)}\bigr)-\hat f_k\bigl(\beta_k^{(t)}\bigr)\bigr|\le\varepsilon$$
where $\varepsilon$ is the absolute termination residual. Traditional concave-convex programming generally has to set it on the order of $10^{-8}$ or $10^{-6}$, whereas in the present invention it is set to 1. In particular, the present invention also provides an upper-bound condition that $\varepsilon$ must satisfy to guarantee that the whole iterative process still converges; this condition is stated in terms of the relative termination residual $\bar\varepsilon$.
A Taylor expansion is then performed at the current result $\tilde\beta_k$, and the third step is repeated until the solution procedure reaches the maximum number of iterations.
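Putting the pieces together, the inner and outer loops described above can be rendered as in the following sketch, reusing the positive_decomposition helper sketched earlier; it follows the formulas of this description, while the step size eta, the loose tolerance eps, and the iteration caps are illustrative assumptions:
```python
import numpy as np

def ccicp(K, y, lam, eps=1.0, eta=1e-3, max_outer=50, max_inner=100):
    """Fast concave-convex solver with an inexact (loosely stopped) inner loop."""
    K_plus, K_minus = positive_decomposition(K)    # K = K_plus - K_minus
    beta = np.zeros(K.shape[0])
    for _ in range(max_outer):                     # outer loop: re-expand h
        grad_h = 2.0 * lam * (K_minus @ beta)      # gradient of h at current beta_k
        prev = None
        for _ in range(max_inner):                 # inner loop: gradient descent
            q = 1.0 / (1.0 + np.exp(y * (K @ beta)))
            grad = -K @ (y * q) + 2.0 * lam * (K_plus @ beta) - grad_h
            beta = beta - eta * grad
            margins = y * (K @ beta)               # surrogate objective (h linearized)
            obj = (np.sum(np.log1p(np.exp(-margins)))
                   + lam * (beta @ K_plus @ beta) - grad_h @ beta)
            if prev is not None and abs(obj - prev) <= eps:
                break                              # loose inner stopping criterion
            prev = obj
    return beta
```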
The present invention addresses the fast approximate solution of concave-convex programming at big-data scale, and therefore converges considerably faster than conventional methods when processing high-dimensional complex data such as images, text, and video, greatly saving computation. Since such data play an important role in industrial production and daily life, the invention has broad application prospects in practice. The present invention also gives the model of logistic regression based on an indefinite kernel, which previous similar logistic regression techniques did not have: it not only successfully applies indefinite kernels to logistic regression, but also gives a description of the model in the reproducing Krein space. This technique is useful in practice, since more flexible kernel functions can be applied to characterize data from different angles.
The method of the present invention is explained in more detail below with reference to a specific embodiment.
This embodiment selects several image, text, and medical data sets and processes them according to the following detailed steps:
S1: the selected data comprise the EEG data set, with feature dimension 14 and 14980 samples; the Ijcnn1-tr data set, with feature dimension 26 and 35000 samples; the Madelon data set, with feature dimension 500 and 2000 samples; and the monks1 data set, with feature dimension 6 and 124 samples.
S2: the TL1 kernel function $k(u,v)=\max\{\rho-\lVert u-v\rVert_{1},\,0\}$ is chosen. This indefinite kernel function characterizes the neighborhood relationships of the data samples well and accurately reflects the similarity between samples. Based on this kernel function, the corresponding kernel matrix $K$ is obtained from the data of step S1.
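A minimal sketch of building this kernel matrix is given below; the default choice rho = 0.7·d (with d the feature dimension) is an illustrative assumption rather than a value prescribed by the embodiment:
```python
import numpy as np

def tl1_kernel_matrix(X, rho=None):
    """TL1 kernel k(u, v) = max(rho - ||u - v||_1, 0) over all row pairs of X."""
    if rho is None:
        rho = 0.7 * X.shape[1]                                   # assumed default scale
    dists = np.abs(X[:, None, :] - X[None, :, :]).sum(axis=2)    # pairwise L1 distances
    return np.maximum(rho - dists, 0.0)
```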
S3: the logistic regression model based on the TL1 kernel function is built, with the following objective:
$$\operatorname{stab}_{\beta}\;\mathbf{1}^{T}\log\bigl(\mathbf{1}+e^{-YK\beta}\bigr)+\lambda\beta^{T}K\beta$$
where $K$ is the TL1 kernel matrix and $Y$ is the label matrix; since $K$ is no longer guaranteed to be positive semidefinite, the problem is non-convex.
S4: using the positive decomposition, the model of step S3 is split into the form of the difference of two convex functions, i.e.:
$$\operatorname{stab}_{\beta}\;\mathbf{1}^{T}\log\bigl(\mathbf{1}+e^{-YK\beta}\bigr)+\lambda\beta^{T}K_{+}\beta-\lambda\beta^{T}K_{-}\beta$$
where the kernel matrices $K_{+}$ and $K_{-}$ can be obtained by an eigenvalue decomposition of the TL1 kernel matrix $K$: $K_{+}$ consists of the part of $K$ with eigenvalues greater than 0, and $K_{-}$ of the part with eigenvalues less than 0. The first and second terms, $\mathbf{1}^{T}\log(\mathbf{1}+e^{-YK\beta})+\lambda\beta^{T}K_{+}\beta$, form a convex function of $\beta$, and the third term, $\lambda\beta^{T}K_{-}\beta$, is also a convex function of $\beta$. By this positive decomposition, the model can be split into the difference of two convex functions, $f(\beta)=g(\beta)-h(\beta)$. In each iteration, $h(\beta)$ is Taylor-expanded at the current point $\beta_k$, giving $\hat h(\beta)=h(\beta_k)+\nabla h(\beta_k)^{T}(\beta-\beta_k)$; $f(\beta)$ can then be represented by the following approximate function:
$$\hat f_k(\beta)=\mathbf{1}^{T}\log\bigl(\mathbf{1}+e^{-YK\beta}\bigr)+\lambda\beta^{T}K_{+}\beta-2\lambda\beta_k^{T}K_{-}\beta+\lambda\beta_k^{T}K_{-}\beta_k$$
S5: for the convex approximate model of the original problem obtained in step S4, the fast approximation method of the present invention only needs to obtain an inexact solution of the inner loop. Taking gradient descent as an example, the gradient is:
$$\nabla\hat f_k(\beta)=-KYq+2\lambda K_{+}\beta-2\lambda K_{-}\beta_k$$
where $q=[q_1,q_2,\dots,q_N]^{T}$ is defined as:
$$q_i=\frac{1}{1+\exp\bigl(y_i\sum_{j=1}^{N}K_{ij}\beta_j\bigr)}$$
Given an initial value $\beta_k^{(0)}$, the gradient $\nabla\hat f_k(\beta_k^{(t)})$ is computed according to the above formula, and the following iteration is then used:
$$\beta_k^{(t+1)}=\beta_k^{(t)}-\eta_t\nabla\hat f_k\bigl(\beta_k^{(t)}\bigr)$$
The inner-loop convergence criterion is set as:
$$\bigl|\hat f_k\bigl(\beta_k^{(t+1)}\bigr)-\hat f_k\bigl(\beta_k^{(t)}\bigr)\bigr|\le\varepsilon$$
where $\varepsilon$ is the absolute termination residual. Traditional concave-convex programming generally has to set it on the order of $10^{-8}$ or $10^{-6}$, whereas in the present invention it can be set to 1. In particular, the present invention also provides an upper-bound condition that $\varepsilon$ must satisfy to guarantee that the whole iterative process still converges; this condition is stated in terms of the relative termination residual $\bar\varepsilon$.
S6: a Taylor expansion is performed at the current result $\tilde\beta_k$ obtained in step S5, and step S5 is repeated until the solution procedure reaches the maximum number of iterations.
Further, to intuitively illustrate the superiority of the proposed method over existing ones, this embodiment is first tested on the small data set monks1; the result is shown in Fig. 2. The solid line shows the convergence trend of traditional concave-convex programming (CCCP), and the dotted line shows the fast iterative method (CCICP) proposed by the invention. As can be seen from the figure, the CCCP method needs 16 iterations to reach a stable state, whereas the proposed CCICP method converges in only 4 iterations, far fewer than traditional CCCP, which greatly reduces computation. This convergence trend qualitatively, even semi-quantitatively, demonstrates the effectiveness of the proposed inner-loop acceleration mechanism: it not only speeds up the solution but still maintains computational accuracy.
In particular, to better illustrate the acceleration effect of the proposed fast approximation method on large-scale data sets, we measured the accuracy, training time, and test time of the proposed CCICP method on three large data sets and compared them with traditional concave-convex programming (CCCP). The statistics are shown in Table 1. As can be seen, on these three large data sets the proposed method is slightly less accurate than the CCCP method, but saves substantially on training time, from several times to hundreds of times. Compared with this large improvement in training efficiency, the small loss in accuracy appears negligible. It should be noted that since the test process involves no solution procedure, CCCP and CCICP do not differ essentially in the time required for testing.
Table 1
Specific embodiments of the present invention have been described above. It is to be understood that the invention is not limited to the above particular implementations; those skilled in the art can make various variations or modifications within the scope of the claims, and this does not affect the substantive content of the present invention.

Claims (8)

1. A fast approximation method based on concave-convex programming, characterized by comprising:
decomposing a logistic regression model of an indefinite kernel into the form of the difference of two convex functions;
performing a first-order Taylor expansion on one of the two convex functions to obtain a first convex optimization problem, and iteratively solving the first convex optimization problem until a corresponding first solution is obtained;
performing a first-order Taylor expansion on the convex function again at the first solution to obtain a second convex optimization problem, and iteratively solving the second convex optimization problem until the resulting second solution converges.
2. The fast approximation method based on concave-convex programming according to claim 1, characterized in that before decomposing the logistic regression model of the indefinite kernel into the form of the difference of two convex functions, the method further comprises:
building the logistic regression model of the indefinite kernel.
3. The fast approximation method based on concave-convex programming according to claim 2, characterized in that building the logistic regression model of the indefinite kernel comprises:
given a sample space $\mathcal{X}$ and an output space $\mathcal{Y}=\{-1,+1\}$, obtaining a discriminant function $f$ based on the training sample set $\{(x_i,y_i)\}_{i=1}^{N}$; wherein the discriminant function $f$ lies in a reproducing Krein space, $x_i$ is the $i$-th training sample, $y_i$ is the label of the $i$-th training sample, and $N$ is the number of training samples;
building the following initial model based on the discriminant function $f$:
$$\operatorname{stab}_{f\in\mathcal{H}_{\mathcal{K}}}\;\sum_{i=1}^{N}\log\bigl(1+e^{-y_i f(x_i)}\bigr)+\lambda\langle f,f\rangle_{\mathcal{H}_{\mathcal{K}}}$$
where $\lambda$ is the regularization coefficient, $f$ is the discriminant function, $\mathcal{H}_{\mathcal{K}}$ is the reproducing Krein space, $\langle f,f\rangle_{\mathcal{H}_{\mathcal{K}}}$ is the regularization term of $f$ in the reproducing Krein space, $f(x_i)$ is the prediction of the discriminant function $f$ on training sample $x_i$, and stab denotes stabilize, i.e., solving the stability problem of this objective function;
based on the representer theorem of the reproducing Krein space, transforming the initial model to obtain the logistic regression model of the indefinite kernel as follows:
$$\operatorname{stab}_{\beta}\;\mathbf{1}^{T}\log\bigl(\mathbf{1}+e^{-YK\beta}\bigr)+\lambda\beta^{T}K\beta$$
where $K$ is the kernel matrix, $Y$ is the label matrix, $\beta$ is the coefficient vector to be solved, $\beta^{T}$ is the transpose of $\beta$, $\mathbf{1}$ is the all-ones column vector of dimension $N$, and $\mathbf{1}^{T}$ is the all-ones row vector of dimension $N$.
4. The fast approximation method based on concave-convex programming according to claim 3, characterized in that decomposing the logistic regression model of the indefinite kernel into the form of the difference of two convex functions comprises:
transforming the logistic regression model of the indefinite kernel into the following form:
$$\operatorname{stab}_{\beta}\;f(\beta),\qquad f(\beta)=\mathbf{1}^{T}\log\bigl(\mathbf{1}+e^{-YK\beta}\bigr)+\lambda\beta^{T}K_{+}\beta-\lambda\beta^{T}K_{-}\beta$$
where $\operatorname{stab}_{\beta}f(\beta)$ denotes solving the stability problem of $f(\beta)$, $f(\beta)$ is the objective function in $\beta$, and $K_{+}$ and $K_{-}$ are obtained by an eigenvalue decomposition of the kernel matrix $K$: $K_{+}$ consists of the part of $K$ with eigenvalues greater than 0, and $K_{-}$ consists of the part of $K$ with eigenvalues less than 0;
by the positive decomposition property of the reproducing Krein space, on a given set, decomposing the logistic regression model of the indefinite kernel to obtain the form of the difference of the following two convex functions:
$$f(\beta)=g(\beta)-h(\beta)$$
where:
$$g(\beta)=\mathbf{1}^{T}\log\bigl(\mathbf{1}+e^{-YK\beta}\bigr)+\lambda\beta^{T}K_{+}\beta,\qquad h(\beta)=\lambda\beta^{T}K_{-}\beta$$
5. The fast approximation method based on concave-convex programming according to claim 4, characterized in that performing the first-order Taylor expansion on one of the convex functions to obtain the first convex optimization problem comprises:
Taylor-expanding $h(\beta)$ at the current point $\beta_k$ to obtain
$$\hat h(\beta)=h(\beta_k)+\nabla h(\beta_k)^{T}(\beta-\beta_k)$$
where $\beta_k$ is the result of the $k$-th outer iteration and $\hat h(\beta)$ is the function value after the Taylor expansion at $\beta_k$;
applying this approximation to $f(\beta)$ to obtain the approximate function $\hat f_k(\beta)$ as follows:
$$\hat f_k(\beta)=g(\beta)-\hat h(\beta)$$
i.e.:
$$\hat f_k(\beta)=g(\beta)-h(\beta_k)-\nabla h(\beta_k)^{T}(\beta-\beta_k)$$
where $h(\beta_k)$ is the function value of $h$ at $\beta_k$, $\nabla$ is the gradient symbol, and $\nabla h(\beta_k)$ is the gradient of $h$ evaluated at $\beta_k$.
6. The fast approximation method based on concave-convex programming according to claim 5, characterized in that iteratively solving the first convex optimization problem until the corresponding first solution is obtained comprises:
obtaining the gradient of the approximate function $\hat f_k(\beta)$, computed as follows:
$$\nabla\hat f_k(\beta)=-KYq+2\lambda K_{+}\beta-2\lambda K_{-}\beta_k$$
where $q=[q_1,q_2,\dots,q_N]^{T}$ is defined as:
$$q_i=\frac{1}{1+\exp\bigl(y_i\sum_{j=1}^{N}K_{ij}\beta_j\bigr)}$$
where $\beta_j$ is the $j$-th component of the coefficient vector $\beta$ and $K_{ij}$ is the $(i,j)$ element of the kernel matrix; given an initial value $\beta_k^{(0)}$, computing the gradient $\nabla\hat f_k(\beta_k^{(t)})$ according to the above formula and then using the following iteration:
$$\beta_k^{(t+1)}=\beta_k^{(t)}-\eta_t\nabla\hat f_k\bigl(\beta_k^{(t)}\bigr)$$
where $\beta_k^{(t+1)}$ is the result of the $(t+1)$-th inner iteration in the solution of $\beta_k$, $\beta_k^{(t)}$ is the result of the $t$-th inner iteration, and $\eta_t$ is the step size of the $t$-th iteration;
setting an inner-loop convergence criterion and iterating until the first solution, denoted $\tilde\beta_k$, is obtained, wherein the inner-loop convergence criterion is:
$$\bigl|\hat f_k\bigl(\beta_k^{(t+1)}\bigr)-\hat f_k\bigl(\beta_k^{(t)}\bigr)\bigr|\le\varepsilon$$
where $\varepsilon$ is the absolute termination residual, $\hat f_k(\beta_k^{(t+1)})$ is the value of the $k$-th outer-iteration function $\hat f_k$ at the $(t+1)$-th inner iterate, and $\hat f_k(\beta_k^{(t)})$ is its value at the $t$-th inner iterate.
7. The fast approximation method based on concave-convex programming according to claim 6, characterized in that performing the first-order Taylor expansion on the convex function again at the first solution to obtain the second convex optimization problem comprises:
Taylor-expanding $h(\beta)$ at the first solution $\tilde\beta_k$ to obtain the second convex optimization problem.
8. The fast approximation method based on concave-convex programming according to claim 7, characterized in that iteratively solving the second convex optimization problem until the resulting second solution converges comprises:
setting a maximum number of iterations;
iterating over each second convex optimization problem, incrementing the iteration count by 1 each time, and taking the value obtained when the iteration count reaches the maximum number of iterations as the second solution.
CN201810526436.2A 2018-05-29 2018-05-29 A fast approximation method based on concave-convex programming Pending CN108763156A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810526436.2A CN108763156A (en) 2018-05-29 2018-05-29 A fast approximation method based on concave-convex programming

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810526436.2A CN108763156A (en) 2018-05-29 2018-05-29 A fast approximation method based on concave-convex programming

Publications (1)

Publication Number Publication Date
CN108763156A true CN108763156A (en) 2018-11-06

Family

ID=64003105

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810526436.2A Pending CN108763156A (en) 2018-05-29 2018-05-29 A fast approximation method based on concave-convex programming

Country Status (1)

Country Link
CN (1) CN108763156A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111242867A (en) * 2020-01-14 2020-06-05 桂林电子科技大学 Graph signal distributed online reconstruction method based on truncated Taylor series approximation

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060276996A1 (en) * 2005-06-01 2006-12-07 Keerthi Sathiya S Fast tracking system and method for generalized LARS/LASSO
CN102186072A (en) * 2011-04-20 2011-09-14 上海交通大学 Optimized transmission method of multi-rate multicast communication for scalable video stream
CN106844295A (en) * 2017-02-13 2017-06-13 中国科学技术大学 A kind of reconstruction of quantum states method and system based on compression sensing

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060276996A1 (en) * 2005-06-01 2006-12-07 Keerthi Sathiya S Fast tracking system and method for generalized LARS/LASSO
CN102186072A (en) * 2011-04-20 2011-09-14 上海交通大学 Optimized transmission method of multi-rate multicast communication for scalable video stream
CN106844295A (en) * 2017-02-13 2017-06-13 中国科学技术大学 A kind of reconstruction of quantum states method and system based on compression sensing

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Fanghui Liu et al., "Indefinite Kernel Logistic Regression," MM '17: Proceedings of the 25th ACM International Conference on Multimedia *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111242867A (en) * 2020-01-14 2020-06-05 桂林电子科技大学 Graph signal distributed online reconstruction method based on truncated Taylor series approximation
CN111242867B (en) * 2020-01-14 2023-06-13 桂林电子科技大学 Graph signal distributed on-line reconstruction method based on truncated taylor series approximation

Similar Documents

Publication Publication Date Title
Zheng et al. Migo-nas: Towards fast and generalizable neural architecture search
Zhang et al. Uncovering fuzzy community structure in complex networks
Gong et al. Robust multi-task feature learning
Wu et al. Multi-label active learning for image classification
CN114493014B (en) Multi-element time sequence prediction method, system, computer product and storage medium
Nolet et al. Bringing UMAP closer to the speed of light with GPU acceleration
Yang et al. LFTF: A framework for efficient tensor analytics at scale
WO2022267954A1 (en) Spectral clustering method and system based on unified anchor and subspace learning
CN102799627B (en) Data association method based on first-order logic and nerve network
Vorona et al. DeepSPACE: Approximate geospatial query processing with deep learning
Zhang et al. Learning all-in collaborative multiview binary representation for clustering
CN110347754B (en) Data query method and device
Pu et al. Stochastic mirror descent for low-rank tensor decomposition under non-euclidean losses
CN108763156A (en) A kind of quick approximation method based on bumps planning
Ranadive et al. An all–at–once CP decomposition method for count tensors
Kviman et al. Cooperation in the latent space: The benefits of adding mixture components in variational autoencoders
Wan et al. Shift-BNN: Highly-efficient probabilistic Bayesian neural network training via memory-friendly pattern retrieving
Li et al. Exploiting inductive bias in transformer for point cloud classification and segmentation
Zhao et al. Hardware-software co-design enabling static and dynamic sparse attention mechanisms
Ali et al. fairDMS: Rapid model training by data and model reuse
CN109325585A Long short-term memory network partial-connection method based on tensor ring decomposition
Zheng et al. Disentangled neural architecture search
CN108664616A (en) ROWID-based Oracle data batch acquisition method
CN109919200B (en) Image classification method based on tensor decomposition and domain adaptation
Jin et al. Efficient action recognition with introducing r (2+ 1) d convolution to improved transformer

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20181106