CN107590505A - Learning method combining low-rank representation and sparse regression - Google Patents

Learning method combining low-rank representation and sparse regression

Info

Publication number
CN107590505A
Authority
CN
China
Prior art keywords
low
rank
image
feature
regression
Prior art date
Legal status
Granted
Application number
CN201710648066.5A
Other languages
Chinese (zh)
Other versions
CN107590505B (en)
Inventor
刘安安
史英迪
苏育挺
Current Assignee
Tianjin University
Original Assignee
Tianjin University
Priority date
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN201710648066.5A priority Critical patent/CN107590505B/en
Publication of CN107590505A publication Critical patent/CN107590505A/en
Application granted granted Critical
Publication of CN107590505B publication Critical patent/CN107590505B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a learning method combining low-rank representation and sparse regression. The method comprises the following steps: performing feature extraction on the SUN dataset labeled with image memorability scores; placing the low-rank representation and the sparse regression model as two parts of a single whole under the same framework to build a joint low-rank representation and sparse regression model; solving the problem of automatically predicting image memorability with a multi-view adaptive regression algorithm, obtaining the relation between image features and image memorability under the optimal parameters; and using the relation obtained under the optimal parameters to predict the memorability of the database test-set images and verify the prediction results against the relevant evaluation criteria. The joint low-rank representation and sparse regression learning framework of the invention accurately predicts the memorability of image regions.

Description

Learning method combining low-rank representation and sparse regression
Technical field
The present invention relates to the field of low-rank representation and sparse regression, in which the memorability of images is predicted, and in particular to a learning method combining low-rank representation and sparse regression.
Background art
Humans have the ability to remember thousands of images, but not all images are stored in the brain in the same way. Some representative pictures are remembered at a glance, while other images easily fade from memory. Image memorability measures the degree to which an image is remembered or forgotten after a specific amount of time. Previous research has shown that the memorability of a picture is an intrinsic attribute of the image, i.e., it is consistent across different time intervals and across different observers. Accordingly, just as for many other high-level visual attributes (such as popularity, interestingness, emotion, and aesthetics), research work has begun to explore the potential correlation between image content representation and image memorability.
The analysis of image memorability can be applied in several fields, such as user-interface design, video summarization, scene understanding, and advertisement design. For example, memorability can serve as a guiding criterion for summarizing image collections or videos by selecting salient images. Unforgettable advertisements can be designed that improve consumers' memory of a target brand and help merchants expand their influence.
Recently, low-rank representation (LRR) has been successfully applied in the multimedia and computer vision fields. To better handle the feature representation problem, LRR decomposes the raw data matrix into a low-rank representation matrix, eliminating irrelevant details while revealing the underlying low-rank subspace structure embedded in the data. However, conventional methods are typically unable to handle outliers adequately; to address this problem, some recent research has also focused on sparse regression learning.
However, a major defect of these methods is that feature representation and memorability prediction are carried out in two separate stages. That is, when determining the feature combination pattern for image memorability prediction, the final performance of the separate regression step is mainly determined by the features that have already been produced. Reference [1] proposes a feature coding algorithm with joint low-rank and sparse regression to handle outliers, and reference [2] develops a joint graph embedding and sparse regression framework; however, both are designed for visual classification problems rather than the image memorability prediction task.
Content of the invention
The invention provides a learning method combining low-rank representation and sparse regression; the joint low-rank representation and sparse regression learning framework of the invention accurately predicts the memorability of image regions, as described below:
A learning method combining low-rank representation and sparse regression, the method comprising the following steps:
performing feature extraction on the SUN dataset labeled with image memorability scores;
placing the low-rank representation and the sparse regression model as two parts of a single whole under the same framework, and building a joint low-rank and sparse regression model;
solving the problem of automatically predicting image memorability with a multi-view adaptive regression algorithm, and obtaining the relation between image features and image memorability under the optimal parameters;
combining the extracted image features and using the relation obtained under the optimal parameters to predict the memorability of the database test-set images, and verifying the prediction results against the relevant evaluation criteria.
The method further comprises: obtaining an image memorability dataset.
The features include: scale-invariant feature transform (SIFT) features, GIST scene descriptor features, histogram of oriented gradients (HOG) features, and structural similarity (SSIM) features.
The joint low-rank and sparse regression model is specifically:

$$\min_{A,E,w}\ \|A\|_* + \alpha\|E\|_1 + \beta\|Aw\|_1 + \lambda\|XAw - y\|_F^2 + \phi\,\mathrm{tr}(A^T X^T L X A), \quad \mathrm{s.t.}\ X = XA + E$$

where X is the input feature matrix; $A \in R^{D \times D}$ is the low-rank projection matrix of the N samples, capturing the underlying low-rank structure shared between samples; $E \in R^{N \times D}$ models random errors via the $L_1$ norm; $w \in R^{D \times 1}$ is the transformation matrix that associates the transformed samples with their memorability scores; y is the label vector of the training samples; $\lambda\|XAw - y\|_F^2$ is the defined error function; and λ > 0 is a balance parameter.
The beneficial effects of the technical scheme provided by the invention are:
1. Low-rank representation and sparse regression are combined for image memorability prediction, where the low-rank constraint reveals the intrinsic structure embedded in the original data and the sparse constraint removes outliers and redundancy; when low-rank representation and sparse regression are performed jointly, the low-rank representation shared by all features captures the internal structure of the features, improving the prediction accuracy;
2. The invention is based on the multi-view adaptive regression (MAR) algorithm, which solves the optimization of the objective function with fast convergence.
Brief description of the drawings
Fig. 1 is a flow chart of the learning method combining low-rank representation and sparse regression;
Fig. 2 shows sample database images annotated with image memorability scores;
Fig. 3 is the algorithm convergence plot;
Fig. 4 compares the prediction results of single-class image attribute features with the prediction results of all attribute features under the framework of this method.
Embodiment
To make the objects, technical solutions, and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
Embodiment 1
To study image features and predict image memorability, the embodiment of the present invention proposes a learning method combining low-rank representation and sparse regression. Referring to Fig. 1, the method comprises the following steps:
101: Obtain an image memorability dataset;
The image memorability dataset[1] contains 2,222 images from the SUN dataset[11]. The memory scores of the images were obtained through the Amazon Mechanical Turk Visual Memory Game, and image memorability is a continuous value from 0 to 1: the higher the value, the more memorable the image. Sample images with various memory scores are shown in Fig. 2.
102: Perform feature extraction on the SUN dataset labeled with image memorability scores;
The extracted features include SIFT (scale-invariant feature transform), GIST (global scene descriptor), HOG (histogram of oriented gradients), and SSIM (structural similarity) features; the four kinds of features together constitute the feature database.
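As an illustration of this step, the following is a minimal Python sketch of per-image descriptor extraction. It assumes scikit-image for HOG and OpenCV (4.4+) for SIFT; GIST and the SSIM self-similarity descriptor have no single standard library, so they are omitted here. The function names and parameter settings are illustrative, not taken from the patent:

```python
import cv2
from skimage.feature import hog

def extract_hog(gray):
    # 9-bin gradient-orientation histograms over 8x8-pixel cells,
    # a common HOG configuration (illustrative settings)
    return hog(gray, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

def extract_sift(gray):
    # raw SIFT keypoint descriptors; in practice these would be quantized
    # (e.g., into a bag-of-words histogram) so that every image yields a
    # fixed-length feature vector
    sift = cv2.SIFT_create()
    _, descriptors = sift.detectAndCompute(gray, None)
    return descriptors
```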
103: Place the low-rank representation and the sparse regression model as two parts of a single whole under the same framework, and build the JLRSR (joint low-rank and sparse regression) model;
104: Use the multi-view adaptive regression (MAR) algorithm to solve the problem of automatically predicting image memorability, and obtain the relation between image features and image memorability under the optimal parameters;
105: Combine the image features and use the relation obtained under the optimal parameters to predict the memorability of the database test-set images, and verify the prediction results against the relevant evaluation criteria.
In summary, through the above steps 101-105, the embodiment of the present invention uses the low-rank constraint to reveal the intrinsic structure of the original data and the sparse constraint to remove feature outliers and redundancy. When low-rank representation and sparse regression are performed jointly, the low-rank representation shared by all features not only captures the global structure of all modalities but also satisfies the requirements of the regression. Because the formulated objective function is non-smooth and difficult to solve, the multi-view adaptive regression (MAR) algorithm is used to solve the automatic image memorability prediction problem, solving the optimization with fast convergence.
Embodiment 2
The scheme of embodiment 1 is further described below with reference to specific calculation formulas:
201: The image memorability dataset[1] contains 2,222 images from the SUN dataset[11];
The dataset is known to those skilled in the art and is not described further in the embodiment of the present invention.
202: Perform feature extraction on the pictures of the SUN dataset labeled with image memorability scores; the extracted SIFT, GIST, HOG, and SSIM features constitute the feature library.
The database contains 2,222 pictures taken in various environments, and every picture is annotated with an image memorability score; Fig. 2 shows samples of pictures annotated with memorability scores in the database. Each class of features is represented as $B_i \in R^{N \times D_i}$, where $D_i$ denotes the dimension of that class of features and N denotes the number of images contained in the database (2,222). These features constitute the feature library $B = \{B_1, \ldots, B_M\}$.
203: Establish the JLRSR (Joint Low-Rank and Sparse Regression) model, combining low-rank representation and sparse regression on the basis of the extracted features to establish a more robust feature representation and an accurate regression model. The general framework defined by the JLRSR model is as follows:

$$\min_{A,E,w}\ F(A,w) + L(A,E) + G(A)$$

where F(A, w) is the loss function for the prediction error; L(A, E) denotes the feature encoder based on low-rank representation; G(A) is the graph regularization used to address the over-fitting problem; A is the mapping matrix of the low-rank representation; w captures the linear dependence between the low-rank feature representation and the output memorability scores; and E is the sparse error constraint part.
The image memorability dataset[1] contains 2,222 images from the SUN dataset[11], whose memory scores were obtained through the Amazon Mechanical Turk Visual Memory Game. Combined with the regression training of adaptive transfer learning, the extracted feature library is trained using the linear regression method. The image memorability score prediction has two aspects: on the one hand, the feature representation is used directly to predict image memorability, yielding for each class of image features the mapping matrix $w_i$ to image memorability and, combined with low-rank learning, the relation between each class of image attributes and image memorability. Given the initial image feature matrix $X \in R^{N \times D}$, the goal of the JLRSR model is to combine low-rank representation and sparse regression on the basis of the extracted visual cues to strengthen the robustness of the feature representation and establish an accurate regression model.
Each part is introduced in detail below:
Because the low-rank constraint can remove noise and redundancy, it helps reveal the essential structure of the data, and these low-rank attributes can be integrated into feature learning to handle such problems. LRR assumes that the original feature matrix consists of a potential lowest-rank structure component shared by all samples plus a unique error matrix:

$$\min_{A,E}\ \mathrm{rank}(A) + \lambda\|E\|_1, \quad \mathrm{s.t.}\ X = XA + E$$

where $A \in R^{D \times D}$ is the low-rank projection matrix of the N samples, $E \in R^{N \times D}$ is the unique sparse error part constrained with the $L_1$ norm to handle random errors, λ > 0 is a balance parameter, X is the input feature matrix, D is the feature dimension after the low-rank constraint, and rank(·) denotes the rank of the low-rank representation.
Because the above equation is difficult to optimize, the nuclear norm $\|A\|_*$ (the nuclear norm is the sum of the singular values of a matrix) is used to approximate the rank of A, so L(A, E) can be defined as follows:

$$L(A,E) = \|A\|_* + \lambda\|E\|_1, \quad \mathrm{s.t.}\ X = XA + E$$
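Since the nuclear norm enters the optimization only through its proximal operator, the following minimal NumPy sketch of singular value thresholding (SVT), the standard closed-form solution of $\min_A \tau\|A\|_* + \frac{1}{2}\|A - M\|_F^2$, may be helpful; the function name is illustrative:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: shrink the singular values of M by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```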
In the framework proposed by the embodiment of the present invention, the image memorability prediction problem is treated as a standard regression problem. The Lasso[5] regression method solves the prediction problem by establishing a linear relationship v between the input feature matrix X and the memorability score vector y and minimizing the mean squared error $\|Xv - y\|_2^2$. After adding ridge regularization to the mean squared error part, the typical least squares problem of ridge regression[6] is obtained:

$$\min_v\ \|Xv - y\|_2^2 + \alpha\|v\|_2^2$$

where α is the balance parameter between the prediction error part and the regularization part.
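For reference, the ridge problem above has the well-known closed-form minimizer $v = (X^TX + \alpha I)^{-1}X^Ty$; a short NumPy sketch (illustrative names, not the patent's solver):

```python
import numpy as np

def ridge_fit(X, y, alpha=1.0):
    # closed-form ridge regression: v = (X^T X + alpha * I)^{-1} X^T y
    D = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(D), X.T @ y)
```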
From the perspective of matrix decomposition, the transformation vector v can be decomposed into the product of two components: the low-rank projection matrix A captures the low-rank structure shared between samples, and the coefficient vector w associates the transformed samples with their memorability scores. Introducing v = Aw, the loss function F(A, w) is defined as:

$$F(A,w) = \lambda\|XAw - y\|_F^2 + \beta\|Aw\|_1$$
Based on the idea of manifold learning, graph regularization is used to preserve the consistency of the geometric structure. The core idea of graph regularization is that if samples are close in the feature representation, their memorability scores should also be close, and vice versa. The geometric consistency between features and memorability scores is achieved by minimizing the graph regularizer G(A):

$$G(A) = \frac{1}{2}\sum_{i,j}\|x_iA - x_jA\|_2^2\,S_{ij} = \mathrm{tr}(A^T X^T L X A)$$

where L = B − S is the graph Laplacian, B is the diagonal degree matrix with $B_{ii} = \sum_j S_{ij}$, and S is the weight matrix computed by the Gaussian similarity function:

$$S_{ij} = \exp\!\left(-\frac{(y_i - y_j)^2}{2\sigma^2}\right) \ \text{if}\ x_j \in N_K(x_i), \quad S_{ij} = 0 \ \text{otherwise}$$

where $y_i$ and $y_j$ are the memorability scores of the i-th and j-th samples, $N_K(x_i)$ denotes the K nearest neighbors of $x_i$, and σ is a radius parameter, simply set to the median of the pairwise Euclidean distances over all pictures.
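A short sketch of the graph construction is given below, under the stated reading that the K-nearest-neighbor relation is computed on the feature vectors $x_i$, the edge weights use the Gaussian similarity of the memorability scores $y_i$, and σ is the median pairwise Euclidean distance; the function name and the choice of k are illustrative assumptions:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def build_laplacian(X, y, k=5):
    D2 = squareform(pdist(X, "sqeuclidean"))   # pairwise squared distances
    sigma = np.median(pdist(X))                # radius parameter sigma
    idx = np.argsort(D2, axis=1)[:, 1:k + 1]   # k nearest neighbors per sample
    S = np.zeros_like(D2)
    rows = np.arange(X.shape[0])[:, None]
    S[rows, idx] = np.exp(-(y[rows] - y[idx]) ** 2 / (2.0 * sigma ** 2))
    S = np.maximum(S, S.T)                     # symmetrize the k-NN graph
    B = np.diag(S.sum(axis=1))                 # degree matrix, B_ii = sum_j S_ij
    return B - S                               # graph Laplacian L = B - S
```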
Therefore, the JLRSR model is defined as:

$$\min_{A,E,w}\ \|A\|_* + \alpha\|E\|_1 + \beta\|Aw\|_1 + \lambda\|XAw - y\|_F^2 + \phi\,\mathrm{tr}(A^T X^T L X A), \quad \mathrm{s.t.}\ X = XA + E$$

where $A \in R^{D \times D}$ is the low-rank projection matrix of the N samples, capturing the underlying low-rank structure shared between samples; $E \in R^{N \times D}$ models random errors via the $L_1$ norm; X is the feature matrix; w captures the linear dependence between the low-rank representation of the features and the output memorability scores; y is the label vector of the training samples; and the term $\phi\,\mathrm{tr}(A^T X^T L X A)$ ensures that samples with similar features have close memorability scores.
The parameters α, β, λ, and φ in the JLRSR objective function are initialized; A, E, w, and Q are each fixed in turn while the derivative is taken with respect to the remaining variable, and this derivation process is repeated until the error reaches the set minimum.
The solution procedure is introduced in detail below. The multi-view adaptive regression (MAR) algorithm[7] is used to solve the automatic image memorability prediction problem. First, a slack variable Q is introduced to transform the above problem equivalently:

$$\min_{A,E,w,Q}\ \|A\|_* + \alpha\|E\|_1 + \beta\|Q\|_1 + \lambda\|XQ - y\|_F^2 + \phi\,\mathrm{tr}(A^T X^T L X A)$$
$$\mathrm{s.t.}\ X = XA + E, \quad Q = Aw$$
Then, two Lagrange multipliers $Y_1$ and $Y_2$ are introduced to obtain the augmented Lagrangian:

$$\mathcal{L} = \|A\|_* + \alpha\|E\|_1 + \beta\|Q\|_1 + \lambda\|XQ - y\|_F^2 + \phi\,\mathrm{tr}(A^T X^T L X A) + \langle Y_1, X - XA - E\rangle + \langle Y_2, Q - Aw\rangle + \frac{\mu}{2}\left(\|X - XA - E\|_F^2 + \|Q - Aw\|_F^2\right)$$

where ⟨·,·⟩ denotes the matrix inner product, $Y_1$ and $Y_2$ are the Lagrangian multiplier matrices, and μ > 0 is a positive penalty parameter. Merging the linear and quadratic terms, the above becomes:

$$\mathcal{L} = \|A\|_* + \alpha\|E\|_1 + \beta\|Q\|_1 + \lambda\|XQ - y\|_F^2 + \phi\,\mathrm{tr}(A^T X^T L X A) + h(A,Q,E,w,Y_1,Y_2,\mu) - \frac{1}{2\mu}\left(\|Y_1\|_F^2 + \|Y_2\|_F^2\right)$$

where

$$h(A,Q,E,w,Y_1,Y_2,\mu) = \frac{\mu}{2}\left(\left\|X - XA - E + \frac{Y_1}{\mu}\right\|_F^2 + \left\|Q - Aw + \frac{Y_2}{\mu}\right\|_F^2\right)$$
The problem is solved by the method of alternating iteration: each subproblem is handled by approximating the quadratic term $h(A,Q,E,w,Y_1,Y_2,\mu)$ with its second-order Taylor expansion. To better understand this process, a variable t is introduced, and $A_t, E_t, Q_t, w_t, Y_{1,t}, Y_{2,t}$, and $\mu_t$ are defined as the results of the t-th iteration; the results of the (t+1)-th iteration are then obtained as follows.
Iteration result for A: fixing E, Q, and w, $A_{t+1}$ minimizes $\|A\|_*$ plus the second-order approximation of h around $A_t$, which is solved in closed form by singular value thresholding. Then, fixing w, A, and Q, the optimization of E is:

$$E_{t+1} = \arg\min_E\ \alpha\|E\|_1 + \frac{\mu_t}{2}\left\|E - \left(X - XA_{t+1} + \frac{Y_{1,t}}{\mu_t}\right)\right\|_F^2$$

which is solved by elementwise soft-thresholding with threshold $\alpha/\mu_t$.
Then w is optimized by fixed E, A, Q, it is as a result as follows:
Above mentioned problem is actually well-known ridge regression problem, and its optimal solution is
Finally, E, w, and A are fixed and Q is optimized, which gives:

$$Q_{t+1} = \arg\min_Q\ \beta\|Q\|_1 + \lambda\|XQ - y\|_F^2 + \frac{\mu_t}{2}\left\|Q - A_{t+1}w_{t+1} + \frac{Y_{2,t}}{\mu_t}\right\|_F^2$$

where the smooth part is again approximated by its second-order Taylor expansion and the result is obtained by soft-thresholding.
In addition, the Lagrange multipliers $Y_1$ and $Y_2$ are updated by the following scheme:

$$Y_{1,t+1} = Y_{1,t} + \mu_t\left(X - XA_{t+1} - E_{t+1}\right)$$
$$Y_{2,t+1} = Y_{2,t} + \mu_t\left(Q_{t+1} - A_{t+1}w_{t+1}\right)$$
where ∇ denotes the operator for taking partial derivatives in the Taylor expansions above.
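Piecing the updates together, the following NumPy loop is a minimal sketch of the whole alternating scheme, under several assumptions: the A- and Q-subproblems linearize their smooth terms and then apply singular value thresholding and soft-thresholding respectively, the w-step uses a small εI for invertibility, and the step scale η and the increasing penalty schedule $\mu_{t+1} = \rho\mu_t$ are illustrative choices that the text does not specify:

```python
import numpy as np

def soft_threshold(M, tau):
    """Elementwise shrinkage: the proximal operator of tau * ||.||_1."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def svt(M, tau):
    """Singular value thresholding: the proximal operator of tau * ||.||_*."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def jlrsr_solve(X, y, L, alpha=0.1, beta=0.1, lam=1.0, phi=0.01,
                mu=1e-2, rho=1.1, mu_max=1e6, n_iter=200, tol=1e-6):
    N, D = X.shape
    A = np.zeros((D, D)); E = np.zeros((N, D))
    w = np.zeros(D); Q = np.zeros(D)
    Y1 = np.zeros((N, D)); Y2 = np.zeros(D)
    eta = np.linalg.norm(X, 2) ** 2 + 1.0          # Lipschitz-style step scale
    for t in range(n_iter):
        # A-step: linearize the smooth terms around the current A, then SVT
        grad = (-X.T @ (X - X @ A - E + Y1 / mu)              # fit term
                - np.outer(Q - A @ w + Y2 / mu, w)            # Q = Aw term
                + (2.0 * phi / mu) * (X.T @ (L @ (X @ A))))   # graph term
        A = svt(A - grad / eta, 1.0 / (mu * eta))
        # E-step: soft-thresholding with threshold alpha / mu
        E = soft_threshold(X - X @ A + Y1 / mu, alpha / mu)
        # w-step: least squares with a small ridge for invertibility
        w = np.linalg.solve(A.T @ A + 1e-6 * np.eye(D),
                            A.T @ (Q + Y2 / mu))
        # Q-step: linearize the quadratic terms, then soft-threshold
        gq = 2.0 * lam * (X.T @ (X @ Q - y)) / mu + (Q - A @ w + Y2 / mu)
        Q = soft_threshold(Q - gq / eta, beta / (mu * eta))
        # multiplier updates, as in the scheme above
        R1 = X - X @ A - E
        R2 = Q - A @ w
        Y1 = Y1 + mu * R1
        Y2 = Y2 + mu * R2
        mu = min(rho * mu, mu_max)                 # increasing penalty (assumed)
        if max(np.abs(R1).max(), np.abs(R2).max()) < tol:
            break
    return A, E, w
```

Given the learned A and w, the memorability of a test feature vector x is then predicted as xAw, matching the regression term of the objective.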
The relation between the predicted scores and the true scores is studied under the selected evaluation criteria to obtain the algorithm performance results.
The embodiment of the present invention randomly divides the database into 10 groups, carries out the above steps for each group to obtain 10 groups of correlation coefficients, and takes their average to evaluate the algorithm performance. The evaluation criteria selected by this method are ranking correlation (Ranking Correlation) and R-value, which are discussed in detail in embodiment 3.
Embodiment 3
The feasibility of the schemes in embodiments 1 and 2 is verified below with specific experimental data and Figs. 3 to 4, as described in detail below:
The image memorability dataset contains 2,222 images from the SUN dataset. The memory scores of the images were obtained through the Amazon Mechanical Turk Visual Memory Game, and image memorability is a continuous value from 0 to 1: the higher the value, the more memorable the image. Sample images with various memory scores are shown in Fig. 2.
This method adopts two assessment measures:
Ranking correlation (RC): the orderings of the true memorability scores and the predicted memorability scores are obtained, and the Spearman rank correlation coefficient is used as the standard to measure the correlation between the two orderings. Its value range is [-1, 1], and the higher the value, the closer the two orderings:

$$RC = 1 - \frac{6\sum_{i=1}^{N}(r_{1i} - r_{2i})^2}{N(N^2 - 1)}$$

where N is the number of test-set images, the element $r_{1i}$ of $r_1$ is the rank of the i-th picture in the ground-truth ordering, and the element $r_{2i}$ of $r_2$ is the rank of the i-th picture in the predicted ordering.
R-value: the correlation coefficient between the predicted scores and the true scores, which is convenient for assessing the regression model. The value range of the R-value is [-1, 1], where 1 indicates positive correlation and -1 indicates negative correlation:

$$R = \frac{\sum_{i=1}^{N}(s_i - \bar{s})(v_i - \bar{v})}{\sqrt{\sum_{i=1}^{N}(s_i - \bar{s})^2\,\sum_{i=1}^{N}(v_i - \bar{v})^2}}$$

where N is the number of test-set images, $s_i$ is the true memorability score of an image, $\bar{s}$ is the mean of all true memorability scores, $v_i$ is the predicted memorability score, and $\bar{v}$ is the mean of all predicted memorability scores.
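Both measures are available in SciPy; a short sketch of the evaluation, including the averaging over the 10 random splits described next (function names and the split bookkeeping are illustrative):

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def evaluate(s_true, v_pred):
    rc, _ = spearmanr(s_true, v_pred)   # ranking correlation (RC), in [-1, 1]
    r, _ = pearsonr(s_true, v_pred)     # R-value, in [-1, 1]
    return rc, r

def evaluate_splits(scores_by_split):
    # scores_by_split: a list of (true_scores, predicted_scores) pairs,
    # one per random split; returns the mean RC and mean R-value
    results = np.array([evaluate(s, v) for s, v in scores_by_split])
    return results.mean(axis=0)
```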
The following four methods are compared with this method in the experiments:
LR (Linear Regression): trains the relation between low-level features and memorability scores using a linear prediction function;
SVR (Support Vector Regression): concatenates the low-level features and learns a nonlinear function with an RBF kernel to predict image memorability;
MRR[9] (Multiple Rank Regression): establishes a regression model using multiple left and right projection vectors;
MLHR[10] (Multi-Level Hierarchical Regression): multimedia information analysis based on hierarchical multiple regression.
Fig. 3 demonstrates the convergence; Fig. 4 shows the performance comparison between this method and the other methods, from which it can be seen that this method outperforms the others. The compared methods only explore the relation between low-level features and memorability prediction, whereas this method incorporates low-level features and image attribute features under the same framework to predict image memorability. This method also uses transfer learning to train image attribute detectors from an external database, obtaining a relatively stable model. The experiments verify the feasibility and superiority of the method.
References:
[1] Zhang Z, Li F, Zhao M, et al. Joint low-rank and sparse principal feature coding for enhanced robust representation and visual classification [J]. IEEE Transactions on Image Processing, 2016, 25(6): 2429-2443.
[2] Shi X, Guo Z, Lai Z, et al. A framework of joint graph embedding and sparse regression for dimensionality reduction [J]. IEEE Transactions on Image Processing, 2015, 24(4): 1341-1355.
[3] P. Isola, J. Xiao, A. Torralba, and A. Oliva, "What makes an image memorable?" in Proc. Int. Conf. Comput. Vis. Pattern Recognit., 2011, pp. 145-152.
[4] P. Isola, D. Parikh, A. Torralba, and A. Oliva, "Understanding the intrinsic memorability of images," in Proc. Adv. Conf. Neural Inf. Process. Syst., 2011, pp. 2429-2437.
[5] Tibshirani R. Regression shrinkage and selection via the lasso [J]. Journal of the Royal Statistical Society, Series B (Methodological), 1996: 267-288.
[6] Hoerl A E, Kennard R W. Ridge regression: Biased estimation for nonorthogonal problems. Technometrics, 1970, 12(1): 55-67.
[7] Purcell S, Neale B, Todd-Brown K, et al. PLINK: a tool set for whole-genome association and population-based linkage analyses. The American Journal of Human Genetics, 2007, 81(3): 559-575.
[8] Q. You, H. Jin, and J. Luo, "Visual sentiment analysis by attending on local image regions," in Thirty-First AAAI Conference on Artificial Intelligence, 2017.
[9] Hou C, Nie F, Yi D, et al. Efficient image classification via multiple rank regression. IEEE Transactions on Image Processing, 2013, 22(1): 340-352.
[10] Sundt B. A multi-level hierarchical credibility regression model [J]. Scandinavian Actuarial Journal, 1980, 1980(1): 25-32.
[11] J. Xiao, J. Hays, K. Ehinger, A. Oliva, A. Torralba et al., "Sun database: Large-scale scene recognition from abbey to zoo," in Proc. Int. Conf. Comput. Vis. Pattern Recognit., 2010, pp. 3485-3492.
It will be appreciated by those skilled in the art that the accompanying drawings are schematic diagrams of a preferred embodiment, and the serial numbers of the embodiments of the present invention are for description only and do not represent the merits of the embodiments.
The foregoing are only preferred embodiments of the present invention and are not intended to limit the invention; any modification, equivalent substitution, improvement, etc. made within the spirit and principle of the present invention shall be included within the protection scope of the present invention.

Claims (4)

1. A learning method combining low-rank representation and sparse regression, characterized in that the method comprises the following steps:
performing feature extraction on the SUN dataset labeled with image memorability scores;
placing the low-rank representation and the sparse regression model as two parts of a single whole under the same framework, and building a joint low-rank and sparse regression model;
solving the problem of automatically predicting image memorability with a multi-view adaptive regression algorithm, and obtaining the relation between image features and image memorability under the optimal parameters;
combining the extracted image features and using the relation obtained under the optimal parameters to predict the memorability of the database test-set images, and verifying the prediction results against the relevant evaluation criteria.
2. The learning method combining low-rank representation and sparse regression according to claim 1, characterized in that the method further comprises: obtaining an image memorability dataset.
3. The learning method combining low-rank representation and sparse regression according to claim 1, characterized in that the features include: scale-invariant feature transform features, GIST scene descriptor features, histogram of oriented gradients features, and structural similarity features.
4. The learning method combining low-rank representation and sparse regression according to claim 1, characterized in that the joint low-rank and sparse regression model is specifically:

$$\min_{A,E,w}\ \|A\|_* + \alpha\|E\|_1 + \beta\|Aw\|_1 + \lambda\|XAw - y\|_F^2 + \phi\,\mathrm{tr}(A^T X^T L X A), \quad \mathrm{s.t.}\ X = XA + E$$

where X is the input feature matrix; $A \in R^{D \times D}$ is the low-rank projection matrix of the N samples, capturing the underlying low-rank structure shared between samples; $E \in R^{N \times D}$ models random errors via the $L_1$ norm; $w \in R^{D \times 1}$ is the transformation matrix associating the transformed samples with their memorability scores; y is the label vector of the training samples; $\lambda\|XAw - y\|_F^2$ is the defined error function; and λ > 0 is a balance parameter.
CN201710648066.5A 2017-08-01 2017-08-01 Learning method combining low-rank representation and sparse regression Active CN107590505B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710648066.5A CN107590505B (en) 2017-08-01 2017-08-01 Learning method combining low-rank representation and sparse regression

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710648066.5A CN107590505B (en) 2017-08-01 2017-08-01 Learning method combining low-rank representation and sparse regression

Publications (2)

Publication Number Publication Date
CN107590505A true CN107590505A (en) 2018-01-16
CN107590505B CN107590505B (en) 2021-08-27

Family

ID=61043166

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710648066.5A Active CN107590505B (en) 2017-08-01 2017-08-01 Learning method combining low-rank representation and sparse regression

Country Status (1)

Country Link
CN (1) CN107590505B (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103632138A (en) * 2013-11-20 2014-03-12 南京信息工程大学 Face recognition method based on low-rank block sparse representation
CN106971200A (en) * 2017-03-13 2017-07-21 天津大学 Image memorability prediction method based on adaptive transfer learning

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
YANWEI PANG et al.: "Ranking Graph Embedding for Learning to Rerank", IEEE Transactions on Neural Networks and Learning Systems *
ZHAO ZHANG et al.: "Joint Low-Rank and Sparse Principal Feature Coding for Enhanced Robust Representation and Visual Classification", IEEE Transactions on Image Processing *
刘建伟 et al.: "Structured sparse models" (结构稀疏模型), Chinese Journal of Computers (《计算机学报》) *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110032704A (en) * 2018-05-15 2019-07-19 腾讯科技(深圳)有限公司 Data processing method, device, terminal and storage medium
CN110032704B (en) * 2018-05-15 2023-06-09 腾讯科技(深圳)有限公司 Data processing method, device, terminal and storage medium
CN109558882A (en) * 2018-11-30 2019-04-02 苏州大学 Image classification method and device based on robust local low-rank sparse CNN features
CN109558882B (en) * 2018-11-30 2023-05-05 苏州大学 Image classification method and device based on robust local low-rank sparse CNN features
CN109885728A (en) * 2019-01-16 2019-06-14 西北工业大学 Video summarization method based on meta learning
CN109885728B (en) * 2019-01-16 2022-06-07 西北工业大学 Video abstraction method based on meta-learning
CN109858543A (en) * 2019-01-25 2019-06-07 天津大学 Image memorability prediction method based on low-rank sparse representation and relationship inference
CN109858543B (en) * 2019-01-25 2023-03-21 天津大学 Image memorability prediction method based on low-rank sparse representation and relationship inference
CN110457672A (en) * 2019-06-25 2019-11-15 平安科技(深圳)有限公司 Keyword determination method and apparatus, electronic device, and storage medium
CN112990242A (en) * 2019-12-16 2021-06-18 京东数字科技控股有限公司 Training method and training device for image classification model

Also Published As

Publication number Publication date
CN107590505B (en) 2021-08-27

Similar Documents

Publication Publication Date Title
CN107480261B (en) Fine-grained face image fast retrieval method based on deep learning
CN107545276A (en) Multi-view learning method combining low-rank representation and sparse regression
CN110059198B (en) Discrete hash retrieval method of cross-modal data based on similarity maintenance
Lu et al. Co-attending free-form regions and detections with multi-modal multiplicative feature embedding for visual question answering
CN107590505A (en) Learning method combining low-rank representation and sparse regression
CN108920720B (en) Large-scale image retrieval method based on depth hash and GPU acceleration
US20180341862A1 (en) Integrating a memory layer in a neural network for one-shot learning
CN113487629B (en) Image attribute editing method based on structured scene and text description
Bawa et al. Emotional sentiment analysis for a group of people based on transfer learning with a multi-modal system
CN105320764A (en) 3D model retrieval method and 3D model retrieval apparatus based on slow increment features
Simran et al. Content based image retrieval using deep learning convolutional neural network
Seddati et al. DeepSketch 3: Analyzing deep neural networks features for better sketch recognition and sketch-based image retrieval
Xing et al. Few-shot single-view 3d reconstruction with memory prior contrastive network
Setyono et al. Betawi traditional food image detection using ResNet and DenseNet
CN115204301A (en) Video text matching model training method and device and video text matching method and device
CN111079011A (en) Deep learning-based information recommendation method
Fu et al. Video summarization with a dual attention capsule network
López-Cifuentes et al. Attention-based knowledge distillation in scene recognition: the impact of a dct-driven loss
Arulmozhi et al. DSHPoolF: deep supervised hashing based on selective pool feature map for image retrieval
Huang et al. Remote sensing object counting through regression ensembles and learning to rank
CN117523271A (en) Large-scale home textile image retrieval method, device, equipment and medium based on metric learning
Sufikarimi et al. Speed up biological inspired object recognition, HMAX
Kobs et al. Indirect: Language-guided zero-shot deep metric learning for images
CN117688390A (en) Content matching method, apparatus, computer device, storage medium, and program product
CN107909091A (en) Image memorability prediction method based on a sparse low-rank regression model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant