CN110222213A - Image classification method based on heterogeneous tensor decomposition - Google Patents
- Publication number: CN110222213A (application CN201910453844.4A)
- Authority
- CN
- China
- Prior art keywords
- tensor
- matrix
- rank
- image classification
- heterogeneous
- Prior art date
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion)
Classifications
- G06F16/55: Information retrieval of still image data; Clustering; Classification
- G06F18/213: Pattern recognition; Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
- G06F18/241: Pattern recognition; Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
Abstract
The invention discloses an image classification method based on heterogeneous tensor decomposition, comprising the following steps: construct an original tensor from the ORL data set; decompose the original tensor into factor matrices corresponding to each order of the tensor and a sample feature representation matrix; add a column-wise orthogonality constraint to the factor matrices; apply a low-rank constraint to the sample feature representation matrix, obtaining the lowest-rank representation by identifying the global low-rank structure of the samples; constrain the loss function and regularization term with the ℓ2,1 norm to achieve discriminative robust feature selection; assemble the complete objective function; optimize the objective function iteratively with the alternating direction method of multipliers to obtain the optimal solution, and thereby the classifier; input the ORL test-set images into the trained classifier to complete image classification. The invention addresses the loss of structural information and the destruction of dependencies between elements that traditional vector or matrix methods tend to cause, as well as the heterogeneity arising during tensor decomposition.
Description
Technical field
The present invention relates to the field of image classification, and in particular to an image classification method based on heterogeneous tensor decomposition.
Background technique
With the rapid development of information acquisition and storage technology, massive multi-view data are now generated from heterogeneous sources and channels, while also suffering from noise corruption, missing entries and high dimensionality. Multidimensional data are produced in large quantities in many fields, including remote sensing, medicine, biology, computer vision and image processing. The complex composite structure of such data highlights the limitations of conventional vector or matrix methods: those schemes flatten the data and partially destroy the structural information that spans multiple dimensions.
Traditional small-sample image classification and detection generally proceed in two steps: feature extraction and classifier training. In the feature extraction stage, the designer tries various generic or self-designed features. Taking face detection as an example, common generic features include the histogram of oriented gradients (HOG), speeded-up robust features (SURF) and local binary patterns (LBP), while Haar features work comparatively well for faces. Classical small-sample image classification methods include the support vector machine (SVM) and principal component analysis (PCA).
Since deep learning swept the top places of image processing competitions in 2015, most existing image classification methods have been deep learning methods, i.e., neural networks. A neural network stacks many neurons into a layered network whose activation layers give the model a strong nonlinear fitting ability. The designer only needs to feed in images and tell the model the desired result; the model then automatically extracts features and learns the mapping to that result. With deep learning, the designer's effort is concentrated on network design rather than hand-crafted image feature extraction: the better the features the network extracts automatically, the better the effect and the higher the accuracy. Deep learning is highly useful for image processing, but it also has drawbacks: it needs massive data, its parameters are tuned by experience, it demands very high computing power, and it is suited to processing very complex real-life scenarios.
Summary of the invention
The present invention provides an image classification method based on heterogeneous tensor decomposition. It addresses the loss of structural information and the destruction of dependencies between elements caused by traditional vector or matrix methods, as well as the heterogeneity arising during tensor decomposition, as described below:
An image classification method based on heterogeneous tensor decomposition comprises the following steps:
constructing an original tensor from the ORL data set; decomposing the original tensor into factor matrices corresponding to each order of the tensor and a sample feature representation matrix;
adding a column-wise orthogonality constraint to the factor matrices; applying a low-rank constraint to the sample feature representation matrix, obtaining the lowest-rank representation by identifying the global low-rank structure of the samples;
constraining the loss function and regularization term with the ℓ2,1 norm to achieve discriminative robust feature selection;
assembling the complete objective function; optimizing the objective function iteratively with the alternating direction method of multipliers, obtaining the optimal solution and thereby the classifier;
inputting the ORL test-set images into the trained classifier to complete image classification.
Specifically, decomposing the original tensor into factor matrices corresponding to each order of the tensor and a sample feature representation matrix comprises:
1) for the first M orders of the tensor, learning M orthogonal factor matrices that realize a low-dimensional embedding of the original tensor, and for the last order of the tensor, learning the feature representation matrix corresponding to the samples;
2) learning a linear projection matrix that builds the mapping between the low-dimensional representation and the corresponding label space.
Further, constraining the loss function and regularization term with the ℓ2,1 norm is specifically:

min ‖χ − G ×₁ A₁ ×₂ A₂ ⋯ ×_M A_M ×_{M+1} Z‖²_F + α‖Z‖_*  s.t. A_mᵀA_m = I, m = 1, …, M

where α is a tradeoff parameter; A₁, …, A_M are the factor matrices; Z is the feature representation matrix; ‖·‖_F denotes the Frobenius norm; G denotes the core tensor; and I₁×I₂×…×I_M is the size of the sample space.
The complete objective function is specifically:

min ‖χ − G ×₁ A₁ ⋯ ×_M A_M ×_{M+1} Z‖²_F + α‖Z‖_* + β‖ZW − H‖_{2,1} + λ‖W‖_{2,1} + η‖H − F‖²_F  s.t. A_mᵀA_m = I

where χ is the M-th order target tensor; β, λ and η are tradeoff parameters; F is the binary label matrix; and H is the slack variable matrix.
The beneficial effects of the technical scheme provided by the present invention are:
1. The present invention proposes joint heterogeneous tensor decomposition and discriminative robust feature selection. The proposed model learns a lower-rank intrinsic feature representation through heterogeneous tensor decomposition and obtains more discriminative features through robust feature selection, improving classification accuracy;
2. To better handle the heterogeneity embedded in tensor data, the present invention minimizes the nuclear norm, applying a low-rank constraint to the matrix generated for the last mode of the tensor decomposition and an orthogonalization operation to the generated factor matrices, thereby obtaining an optimized classifier;
3. To better guide the decomposition process and improve the discriminative power of the algorithm, the present invention strengthens the joint ℓ2,1-norm minimization of the loss function and the regularization;
4. Based on heterogeneous tensor decomposition, the present invention proposes an efficient image classification method that merges low-rank and orthogonality ideas, significantly improving image classification accuracy while reducing the amount of computation.
Brief description of the drawings
Fig. 1 is a flowchart of the image classification method based on heterogeneous tensor decomposition.
Specific embodiment
To make the objectives, technical solutions and advantages of the present invention clearer, embodiments of the present invention are described in further detail below.
A tensor is a multidimensional array that provides a simple and natural representation of the structural attributes of multichannel data. For example, in computer vision a grayscale video can be modeled as a third-order tensor indexed by two spatial variables, corresponding to width and height, and one temporal variable. In electroencephalography, each trial can be characterized as a "channel × time" matrix with two modes, and many trials form a third-order "channel × time × trial" tensor. In social tagging systems, the relationship triples among users, topics and items can likewise be represented by a third-order tensor, "user × topic × item". In fact, tensors are the higher-order generalization of vectors and matrices: a scalar is a zeroth-order tensor, a vector is a first-order tensor, and a matrix is a second-order tensor.
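This order hierarchy can be illustrated with numpy multiway arrays (the shapes below are arbitrary examples for illustration, not values from the patent):

```python
import numpy as np

# Orders of tensors, from a scalar up to a third-order "width x height x time" video.
scalar = np.float64(3.0)            # zeroth-order tensor
vector = np.zeros(5)                # first-order tensor
matrix = np.zeros((5, 4))           # second-order tensor
video = np.zeros((64, 48, 30))      # third-order tensor: two spatial modes plus time
print(scalar.ndim, vector.ndim, matrix.ndim, video.ndim)  # 0 1 2 3
```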
In the modeling and analysis of multidimensional data, strong correlations along each possible dimension mean that the constructed tensor data inevitably contain a large amount of redundancy. A key problem is therefore to reduce this redundancy and obtain a more intrinsic representation for real-world applications. Tensor decomposition has emerged as a promising data mining technique, widely used to reduce this redundancy and uncover the inherent low-dimensional structure embedded in the data. The most important motivation of tensor decomposition is to produce a compact representation from large amounts of redundant data. Among the various tensor decomposition models, the Tucker decomposition [1] and the CANDECOMP/PARAFAC (CP) decomposition [2] are the two most popular frameworks; they naturally give rise to two different definitions of tensor rank, namely the multilinear rank and the CP rank. The Tucker decomposition factorizes a tensor into a low-dimensional core tensor multiplied by a group of factor matrices along its different modes. The CP decomposition factorizes a tensor into a sum of rank-1 tensors. In some cases, the CP decomposition can be regarded as a special type of Tucker decomposition with a superdiagonal core tensor.
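The Tucker form just described, a core tensor multiplied by one factor matrix per mode, can be sketched in plain numpy; the mode-n product below is a standard construction, and the shapes are illustrative assumptions rather than values from the patent:

```python
import numpy as np

def mode_n_product(T, A, n):
    """Mode-n product: multiply tensor T by matrix A (shape J x I_n) along mode n."""
    Tn = np.moveaxis(T, n, 0)               # bring mode n to the front
    out = np.tensordot(A, Tn, axes=(1, 0))  # contract A's columns with mode n
    return np.moveaxis(out, 0, n)           # move the new mode back into place

# Tucker-style reconstruction: a small core expanded by one factor matrix per mode.
rng = np.random.default_rng(0)
G = rng.standard_normal((2, 3, 4))          # core tensor
A1 = rng.standard_normal((5, 2))            # factor matrix for mode 1
A2 = rng.standard_normal((6, 3))            # factor matrix for mode 2
A3 = rng.standard_normal((7, 4))            # factor matrix for mode 3
X = mode_n_product(mode_n_product(mode_n_product(G, A1, 0), A2, 1), A3, 2)
print(X.shape)  # (5, 6, 7)
```

A CP decomposition would correspond, in this same construction, to a superdiagonal core G.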
Embodiment 1
An embodiment of the present invention provides an image classification method based on heterogeneous tensor decomposition. Referring to Fig. 1, the method includes the following steps:
101: construct an original tensor from the ORL data set;
The ORL data set is a face image database established by the University of Cambridge; it is well known to those skilled in the art and is not described further here.
102: decompose the original tensor into factor matrices corresponding to each order of the tensor and a sample feature representation matrix;
103: add a column-wise orthogonality constraint to the factor matrices;
104: apply a low-rank constraint to the sample feature representation matrix Z, obtaining the lowest-rank representation by identifying the global low-rank structure of the samples;
105: constrain the loss function and regularization term with the ℓ2,1 norm to achieve discriminative robust feature selection;
106: assemble the complete objective function;
107: optimize the objective function iteratively with the alternating direction method of multipliers, obtaining the optimal solution and thereby the classifier;
108: input the ORL test-set images into the trained classifier to complete image classification.
In conclusion the embodiment of the present invention proposes a kind of image classification method based on isomery tensor resolution, with higher
Accuracy solve the problems, such as image classification.
Embodiment 2
The scheme of Embodiment 1 is further described below with reference to specific calculation formulas and examples, as detailed in the following description.
1, problem definition
Assume a training set of M-th order target tensors χᵢ ∈ R^{I₁×I₂×…×I_M}, i = 1, …, N, where N is the number of samples, I₁×I₂×…×I_M is the size of the sample space, and R denotes the real field. The N target tensors are jointly stacked into an (M+1)-th order tensor χ ∈ R^{I₁×I₂×…×I_M×N}. Meanwhile, F = [f₁, f₂, …, f_N]ᵀ ∈ {0,1}^{N×C} denotes the binary label matrix, where C is the number of classes and fᵢ is the label vector of the i-th sample χᵢ. The element f_{i,j} is 1 if and only if χᵢ belongs to the j-th class (j = 1, 2, …, C), and 0 otherwise.
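The binary label matrix F defined above can be built directly from integer class labels; a minimal sketch (the labels here are made-up examples, not patent data):

```python
import numpy as np

labels = np.array([0, 2, 1, 2])      # hypothetical integer labels for N = 4 samples
N, C = labels.size, labels.max() + 1
F = np.zeros((N, C), dtype=int)      # F in {0,1}^{N x C}
F[np.arange(N), labels] = 1          # f_{i,j} = 1 iff sample i belongs to class j
print(F)
```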
The goal of the proposed algorithm is to learn jointly:
1) for the first M orders of the tensor, M orthogonal factor matrices Aᵢ ∈ R^{Iᵢ×Jᵢ} that realize a low-dimensional embedding of the original tensor, and for the last order of the tensor, the feature representation matrix Z ∈ R^{N×D} corresponding to the samples, where Aᵢ is an orthogonal factor matrix and D is the feature dimension;
2) a linear projection matrix W ∈ R^{D×C} that builds the mapping between the low-dimensional representation and the corresponding label space, where Jᵢ < Iᵢ and D < N.
2. Low-rank regularized heterogeneous tensor decomposition:
The Tucker decomposition factorizes the original tensor into a relatively low-rank core tensor and one factor matrix per mode. The Tucker decomposition of the joint tensor χ can be expressed as:

χ = G ×₁ A₁ ×₂ A₂ ⋯ ×_M A_M ×_{M+1} Z        (1)

where ×ᵢ denotes the mode-i product along the i-th order of the tensor and G denotes the core tensor. Formula (1) corresponds to the original tensor and the label matrix associated with the samples in step 101 of Embodiment 1.
The latent factor matrices A₁, A₂, …, A_M and Z, i.e., the factor matrices and the sample feature representation matrix of step 102 in Embodiment 1, can be obtained by minimizing the following reconstruction loss:

min ‖χ − G ×₁ A₁ ⋯ ×_M A_M ×_{M+1} Z‖²_F        (2)

where the factor matrices are A₁, …, A_M, the feature representation matrix is Z, and ‖·‖_F denotes the Frobenius norm. In general, the solution of an unconstrained Tucker decomposition is not unique. A direct strategy to overcome this problem is to impose additional constraints on the factor matrices, so that a unique and physically meaningful interpretation can be obtained.
Adding a column-wise orthogonality constraint on the factor matrices yields:

min ‖χ − G ×₁ A₁ ⋯ ×_M A_M ×_{M+1} Z‖²_F  s.t. AᵢᵀAᵢ = I, i = 1, …, M        (3)

Applying orthogonality to the factor matrices avoids any arbitrary scaling of the intermediate representation and retains as much information as possible.
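The patent does not spell out how the column-wise orthogonality constraint AᵢᵀAᵢ = I is maintained during optimization; one common choice, shown here purely as an assumption, is to project a factor matrix onto the set of column-orthogonal matrices via a QR decomposition:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 3))   # an arbitrary factor matrix of size I_i x J_i
Q, _ = np.linalg.qr(A)            # Q spans the same column space, with Q^T Q = I
print(np.allclose(Q.T @ Q, np.eye(3)))  # True
```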
The tensor reconstruction loss can therefore be rewritten in the expanded form (4). Since the first term of equation (4) is constant, the minimization problem reduces to minimizing the remaining term, giving (5), where L denotes the resulting objective function.
In fact, G can be regarded as a latent projection tensor that compresses χ into the compact dictionary representation Z. Inspired by recently successful low-rank methods for handling outliers [3] and missing content [4], the embodiment of the present invention applies a low-rank constraint to Z, so that the lowest-rank representation is obtained by identifying the global low-rank structure of the samples. Following common practice for rank minimization problems, the embodiment replaces the rank function in the low-rank constraint with the nuclear norm ‖Z‖_*, i.e., the sum of the singular values of Z, which yields a convex optimization problem. The objective function comprising the orthogonality and low-rank constraints is therefore:

min ‖χ − G ×₁ A₁ ⋯ ×_M A_M ×_{M+1} Z‖²_F + α‖Z‖_*  s.t. AᵢᵀAᵢ = I        (6)

where α is a tradeoff parameter. Formula (6) gives the lowest-rank representation of step 104 in Embodiment 1.
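The nuclear norm ‖Z‖_* used in formula (6), together with singular value thresholding, the operator that typically serves as its proximal step in ADMM-style solvers, can be sketched as follows (a standard building block shown for illustration, not code from the patent):

```python
import numpy as np

def nuclear_norm(Z):
    """||Z||_*: the sum of the singular values of Z."""
    return float(np.linalg.svd(Z, compute_uv=False).sum())

def svt(Z, tau):
    """Singular value thresholding: the proximal operator of tau * ||.||_*."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

Z = np.diag([3.0, 1.0, 0.2])
print(nuclear_norm(Z))                      # 3.0 + 1.0 + 0.2 = 4.2
print(np.linalg.matrix_rank(svt(Z, 0.5)))   # thresholding removes the small singular value: 2
```

Shrinking small singular values to zero is what drives Z toward a low-rank representation.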
3. Discriminative robust feature selection with ℓ2,1-norm regularization:
ℓ2,1-norm regularization is widely used in various classification tasks for selecting discriminative features, owing to its efficiency and robustness. Least squares regression is one of the most popular methods for modeling the connection between input elements and output labels. To simplify model (6), label prediction is solved using least squares regression:

min β‖ZW − F‖²_F + λ‖W‖²_F        (7)

where W ∈ R^{D×C} is the linear projection matrix, and β and λ are tradeoff parameters that control the balance between the loss function and the regularization term.
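For the plain Frobenius-regularized variant of this least-squares step there is a closed form via the normal equations; the sketch below uses synthetic Z and F with hypothetical sizes (an illustration under those assumptions, not the patent's solver):

```python
import numpy as np

rng = np.random.default_rng(2)
N, D, C = 20, 8, 3                      # hypothetical sizes: samples, features, classes
Z = rng.standard_normal((N, D))         # stand-in for the learned feature matrix
F = np.eye(C)[rng.integers(0, C, N)]    # one-hot binary label matrix
lam = 0.1
# Normal equations for min ||ZW - F||_F^2 + lam * ||W||_F^2:
W = np.linalg.solve(Z.T @ Z + lam * np.eye(D), Z.T @ F)
print(W.shape)  # (8, 3)
```

Replacing the Frobenius norms with ℓ2,1 norms, as the patent does next, removes this closed form and requires an iterative solver.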
To enhance the robustness of the model, a robust loss function is used, and the above equation is converted into:

min β‖ZW − F‖_{2,1} + λ‖W‖_{2,1}        (9)

where ‖W‖_{2,1} = Σᵢ √(Σⱼ w²ᵢⱼ) and wᵢⱼ is the element in the i-th row and j-th column of the matrix W. Formula (9) is the loss function and regularization term of step 105.
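The ℓ2,1 norm just defined is simply the sum of the row-wise ℓ2 norms; penalizing it drives entire rows of W to zero, which is what realizes feature selection:

```python
import numpy as np

def l21_norm(W):
    """l_{2,1} norm: sum over rows of the l_2 norm of each row."""
    return float(np.sqrt((W ** 2).sum(axis=1)).sum())

W = np.array([[3.0, 4.0],
              [0.0, 0.0],    # an entirely zero row: this feature is "deselected"
              [0.0, 1.0]])
print(l21_norm(W))  # 5 + 0 + 1 = 6
```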
It is expected that relaxing the strict binary constraint on H into a soft constraint gives the proposed model greater freedom to fit the labels and avoids the overfitting problem. To this end, the original label matrix F is replaced by a slack variable matrix H. The improved discriminative learning model is therefore restated as:

min β‖ZW − H‖_{2,1} + λ‖W‖_{2,1} + η‖H − F‖²_F        (10)

where η is a tradeoff parameter.
Combining the objective function (10) with formula (6) gives the complete objective function (HTDRC):

min ‖χ − G ×₁ A₁ ⋯ ×_M A_M ×_{M+1} Z‖²_F + α‖Z‖_* + β‖ZW − H‖_{2,1} + λ‖W‖_{2,1} + η‖H − F‖²_F  s.t. AᵢᵀAᵢ = I        (11)
4. Optimization solution
The optimization of the objective function in equation (11) can be solved by the alternating direction method of multipliers (ADMM), which splits a complex problem into subproblems, each of which is easier to handle in an iterative process. First, two auxiliary variables P and E are introduced to relax objective (11) into form (12). Then, Lagrange multipliers Y₁ and Y₂ are introduced and the augmented Lagrangian function L_μ is formed as in (13), where μ denotes a positive penalty parameter; formula (13) yields the classifier that is further obtained from the optimal solution. To explain the process better, a variable k is introduced, and the core tensor together with Z_k, E_k, W_k, P_k, F_k and μ_k are defined as the variables updated in the k-th iteration.
The alternating direction method of multipliers (ADMM) [5] is a technique well known in the art; its specific iterative process is well known to those skilled in the art and is not repeated in the embodiment of the present invention.
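The full HTDRC solver is beyond a short sketch, but the ADMM pattern the patent relies on (split the objective, solve easy subproblems, update the multipliers) can be shown on a toy problem; everything below is an illustration, not the patent's algorithm. The problem is: minimize 0.5‖x − b‖² + τ‖z‖₁ subject to x = z, whose known solution is the soft-thresholding of b.

```python
import numpy as np

def soft(v, t):
    """Entrywise soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Toy ADMM: minimize 0.5*||x - b||^2 + tau*||z||_1  subject to  x = z.
b = np.array([3.0, -0.2, 1.5])
tau, mu = 1.0, 1.0                      # mu is the positive penalty parameter
x = np.zeros_like(b); z = np.zeros_like(b); y = np.zeros_like(b)
for _ in range(100):
    x = (b + mu * z - y) / (1.0 + mu)   # x-subproblem (smooth quadratic, closed form)
    z = soft(x + y / mu, tau / mu)      # z-subproblem (prox of the l1 term)
    y = y + mu * (x - z)                # dual (multiplier) update
print(z)  # converges to soft(b, tau) = [2.0, 0.0, 0.5]
```

In the patent's objective, the same pattern appears with the auxiliary variables P and E and the multipliers Y₁ and Y₂ in place of z and y here.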
Embodiment 3
The schemes of Embodiments 1 and 2 are verified for feasibility below with reference to a specific implementation process and the implementation results, as detailed in the following description:
The method is compared with state-of-the-art classification algorithms to verify the effectiveness, for image classification, of heterogeneous tensor decomposition that jointly uses the core tensor and the ℓ2,1 norm. The data set used is ORL.
The proposed image classification based on heterogeneous tensor decomposition is evaluated on the widely used ORL data set, which has been used by a variety of image classification methods; classification performance is assessed with a nearest-neighbor classifier. The ORL data set contains 40 distinct subjects with 10 images each. The images of the same subject were taken under large variations, with changing illumination, facial expressions (open/closed eyes, smiling/not smiling) and facial details (glasses/no glasses). In the experiments, the face images are cropped to 32 × 32 pixels and the gray values are used as input elements. Using the nearest-neighbor classifier, each experiment is repeated five times and the average result is reported. For each class, {2, 3, 4, 5} of its images are randomly selected as training samples and the remaining images serve as test samples.
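The evaluation protocol above (random per-class training split, 1-nearest-neighbor classification, averaging over repeated runs) can be sketched as follows; the synthetic Gaussian "features" are a hypothetical stand-in for the decomposed ORL representations, with made-up sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, per_class, dim = 5, 10, 16   # toy stand-in for ORL's 40 subjects x 10 images

# Hypothetical class-separated features standing in for the learned matrix Z.
means = 3.0 * rng.standard_normal((n_classes, dim))
X = np.repeat(means, per_class, axis=0) + rng.standard_normal((n_classes * per_class, dim))
y = np.repeat(np.arange(n_classes), per_class)

def nn_accuracy(n_train, n_repeats=5):
    """Average 1-NN accuracy over random per-class train/test splits."""
    accs = []
    for _ in range(n_repeats):
        tr = np.concatenate([rng.choice(np.where(y == c)[0], n_train, replace=False)
                             for c in range(n_classes)])
        te = np.setdiff1d(np.arange(len(y)), tr)
        dists = np.linalg.norm(X[te][:, None, :] - X[tr][None, :, :], axis=2)
        pred = y[tr][dists.argmin(axis=1)]          # label of the nearest training sample
        accs.append(float((pred == y[te]).mean()))
    return float(np.mean(accs))

print(nn_accuracy(3))
```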
The benchmark algorithm proposed by the embodiment of the present invention is evaluated in terms of convergence, parameter sensitivity and feature dimension, and compared with several state-of-the-art methods. The factor matrices A₁, A₂, A₃, …, A_M and Z are initialized for the proposed algorithm. The adaptive parameters of the four data sets are empirically set to α = 10 and λ = 0.1. The tradeoff parameters β and η of the proposed image classification method based on heterogeneous tensor decomposition are set by grid search.
In the embodiment of the present invention, the optimal classification accuracy of each method is tuned and recorded for a fair comparison. Table 1 shows the recognition accuracy obtained by the different methods on the ORL data set for training samples of various sizes. As can be seen from the table, the proposed small-sample image classification method based on heterogeneous tensor decomposition (HTDRC) outperforms the other image classification algorithms in most cases.
Table 1: Classification accuracy of different methods on the ORL data set
The compared methods include: linear regression classification (LRC), sparse representation classification (SRC), rotational-invariant-norm-based regression for classification (RRC), optimal feature selection classification (OFSC), higher-order singular value decomposition (HOSVD), simultaneous tensor decomposition and completion (STDC), and outlier-robust tensor PCA (OR-TPCA).
The embodiment of the present invention proposes an image classification algorithm based on heterogeneous tensor decomposition that jointly uses nuclear-norm and ℓ2,1-norm regularization. To better handle the heterogeneity embedded in tensor data, the lowest-rank representation matrix of the last mode is obtained by nuclear-norm minimization, and a group of orthogonal factor matrices is found. To better guide the tensor decomposition process and improve the discriminative power of the algorithm, a robust discriminative feature selection part is introduced, emphasizing joint ℓ2,1-norm minimization of the loss function and the regularization. Meanwhile, the latent label matrix part is further relaxed, making the framework more stable and flexible. The implementation results show that the method not only converges within a small number of iterations, but also achieves more accurate image classification results than state-of-the-art methods.
Bibliography
[1] T. G. Kolda and B. W. Bader, "Tensor decompositions and applications," SIAM Review, vol. 51, no. 3, pp. 455–500, 2009.
[2] H. A. L. Kiers, "Towards a standardized notation and terminology in multiway analysis," Journal of Chemometrics, vol. 14, no. 3, pp. 105–122, 2000.
[3] C. Jia and Y. Fu, "Low-rank tensor subspace learning for RGB-D action recognition," IEEE Transactions on Image Processing, vol. 25, no. 10, pp. 4641–4652, 2016.
[4] P. Rai, Y. Wang, S. Guo, G. Chen, D. Dunson, and L. Carin, "Scalable Bayesian low-rank decomposition of incomplete multiway tensors," in Proceedings of the International Conference on Machine Learning, 2014, pp. 1800–1808.
[5] W. Shi, Q. Ling, K. Yuan, G. Wu, and W. Yin, "On the linear convergence of the ADMM in decentralized consensus optimization," IEEE Transactions on Signal Processing, vol. 62, no. 7, pp. 1750–1761, 2014.
Except where otherwise specified, the embodiment of the present invention places no restriction on the models of the devices involved, as long as a device can perform the functions described above.
Those skilled in the art will appreciate that the drawing is a schematic diagram of a preferred embodiment, and that the serial numbers of the embodiments are for description only and do not represent their relative merits.
The foregoing are merely preferred embodiments of the present invention and are not intended to limit the invention; any modification, equivalent replacement or improvement made within the spirit and principles of the invention shall be included in the protection scope of the present invention.
Claims (4)
1. An image classification method based on heterogeneous tensor decomposition, characterized in that the method comprises the following steps:
constructing an original tensor from the ORL data set; decomposing the original tensor into factor matrices corresponding to each order of the tensor and a sample feature representation matrix;
adding a column-wise orthogonality constraint to the factor matrices; applying a low-rank constraint to the sample feature representation matrix, obtaining the lowest-rank representation by identifying the global low-rank structure of the samples;
constraining the loss function and regularization term with the ℓ2,1 norm to achieve discriminative robust feature selection;
assembling the complete objective function; optimizing the objective function iteratively with the alternating direction method of multipliers, obtaining the optimal solution and thereby the classifier;
inputting the ORL test-set images into the trained classifier to complete image classification.
2. The image classification method based on heterogeneous tensor decomposition according to claim 1, characterized in that decomposing the original tensor into factor matrices corresponding to each order of the tensor and a sample feature representation matrix is specifically:
1) for the first M orders of the tensor, learning M orthogonal factor matrices that realize a low-dimensional embedding of the original tensor, and for the last order of the tensor, learning the feature representation matrix corresponding to the samples;
2) learning a linear projection matrix that builds the mapping between the low-dimensional representation and the corresponding label space.
3. The image classification method based on heterogeneous tensor decomposition according to claim 1, characterized in that constraining the loss function and regularization term with the ℓ2,1 norm is specifically:

min ‖χ − G ×₁ A₁ ×₂ A₂ ⋯ ×_M A_M ×_{M+1} Z‖²_F + α‖Z‖_*  s.t. A_mᵀA_m = I, m = 1, …, M

where α is a tradeoff parameter; A₁, …, A_M are the factor matrices; Z is the feature representation matrix; ‖·‖_F denotes the Frobenius norm; G denotes the core tensor; and I₁×I₂×…×I_M is the size of the sample space.
4. The image classification method based on heterogeneous tensor decomposition according to claim 1, characterized in that the complete objective function is specifically:

min ‖χ − G ×₁ A₁ ⋯ ×_M A_M ×_{M+1} Z‖²_F + α‖Z‖_* + β‖ZW − H‖_{2,1} + λ‖W‖_{2,1} + η‖H − F‖²_F  s.t. A_mᵀA_m = I

where χ is the M-th order target tensor; β, λ and η are tradeoff parameters; F is the binary label matrix; and H is the slack variable matrix.
Priority Applications (1)
- CN201910453844.4A, filed 2019-05-28, priority date 2019-05-28: CN110222213B, "Image classification method based on heterogeneous tensor decomposition"
Publications (2)
- CN110222213A, published 2019-09-10
- CN110222213B (granted), published 2021-07-16
Family
- ID: 67818419
- Family application: CN201910453844.4A (filed 2019-05-28), granted as CN110222213B, status: Active
- Country: CN
Citations (4)
- CN105608478A, priority 2016-03-30, published 2016-05-25: Combined method and system for extracting and classifying features of images
- CN107392107A, priority 2017-06-24, published 2017-11-24: Face feature extraction method based on heterogeneous tensor decomposition
- CN107563444A, priority 2017-09-05, published 2018-01-09: Zero-shot image classification method and system
- CN109522956A, priority 2018-11-16, published 2019-03-26: Low-rank discriminative feature subspace learning method
Non-Patent Citations (1)
Title |
---|
JING ZHANG et al.: "Low-Rank Regularized Heterogeneous Tensor Decomposition for Subspace Clustering", IEEE * |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110717538A (en) * | 2019-10-08 | 2020-01-21 | Guangdong University of Technology | Color image clustering method based on non-negative tensor ring decomposition |
CN110717538B (en) * | 2019-10-08 | 2022-06-24 | Guangdong University of Technology | Color image clustering method based on non-negative tensor ring decomposition |
CN110766066A (en) * | 2019-10-18 | 2020-02-07 | Tianjin University of Technology | FNN-based heterogeneous tensor integration method for estimating missing Internet of Vehicles data |
CN111274525A (en) * | 2020-01-19 | 2020-06-12 | Southeast University | Tensor data recovery method based on the multi-linear augmented Lagrange multiplier method |
CN112085704A (en) * | 2020-08-07 | 2020-12-15 | Shenzhen Institutes of Advanced Technology | Medical image classification method and device, terminal equipment and storage medium |
CN112329604A (en) * | 2020-11-03 | 2021-02-05 | Zhejiang University | Multimodal sentiment analysis method based on multi-dimensional low-rank decomposition |
CN112600697A (en) * | 2020-12-07 | 2021-04-02 | Sun Yat-sen University | QoS prediction method and system based on federated learning, client and server |
CN113349795A (en) * | 2021-06-15 | 2021-09-07 | Hangzhou Dianzi University | Depression electroencephalogram analysis method based on sparse low-rank tensor decomposition |
CN113349795B (en) * | 2021-06-15 | 2022-04-08 | Hangzhou Dianzi University | Depression electroencephalogram analysis method based on sparse low-rank tensor decomposition |
WO2023065525A1 (en) * | 2021-10-22 | 2023-04-27 | Xi'an Wingtech Information Technology Co., Ltd. | Object feature matrix determination method and apparatus, device, and storage medium |
CN117784940A (en) * | 2024-02-23 | 2024-03-29 | Harbin Institute of Technology (Shenzhen) | High-dimensional tensor analysis method and device for Chinese character stroke writing imagery |
Also Published As
Publication number | Publication date |
---|---|
CN110222213B (en) | 2021-07-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110222213A (en) | An image classification method based on heterogeneous tensor decomposition | |
Xie et al. | Hyper-Laplacian regularized multilinear multiview self-representations for clustering and semisupervised learning | |
Zhang et al. | Joint low-rank and sparse principal feature coding for enhanced robust representation and visual classification | |
Nie et al. | Towards Robust Discriminative Projections Learning via Non-Greedy ℓ2,1-Norm MinMax |
Zhuang et al. | Label information guided graph construction for semi-supervised learning | |
Yang et al. | Supervised translation-invariant sparse coding | |
Li et al. | Linestofacephoto: Face photo generation from lines with conditional self-attention generative adversarial networks | |
Yan et al. | Graph embedding and extensions: A general framework for dimensionality reduction | |
Kapoor et al. | Active learning with gaussian processes for object categorization | |
Chen et al. | Ambiguously labeled learning using dictionaries | |
Lin et al. | Construction of dependent Dirichlet processes based on Poisson processes | |
Xing et al. | Towards robust and accurate multi-view and partially-occluded face alignment | |
Yang et al. | Incomplete-data oriented multiview dimension reduction via sparse low-rank representation | |
Fang et al. | Graph-based learning via auto-grouped sparse regularization and kernelized extension | |
Wen et al. | Discriminative dictionary learning with two-level low rank and group sparse decomposition for image classification | |
Du et al. | Low-rank graph preserving discriminative dictionary learning for image recognition | |
Jia et al. | Semi-supervised cross-modality action recognition by latent tensor transfer learning | |
Jia et al. | Adaptive neighborhood propagation by joint ℓ2,1-norm regularized sparse coding for representation and classification |
Nguyen et al. | Discriminative low-rank dictionary learning for face recognition | |
CN109063555B (en) | Multi-pose face recognition method based on low-rank decomposition and sparse representation residual error comparison | |
Yuan et al. | Generative modeling of infinite occluded objects for compositional scene representation | |
Wang et al. | Generative partial multi-view clustering | |
Du et al. | Discriminative low-rank graph preserving dictionary learning with Schatten-p quasi-norm regularization for image recognition | |
Zhang et al. | Multi-view dimensionality reduction via canonical random correlation analysis | |
Wang et al. | Locality-constrained discriminative learning and coding |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| CB03 | Change of inventor or designer information | Inventors after: Jing Peiguang; Cui Kai; Wu Yuting; Su Yuting. Inventors before: Jing Peiguang; Wu Yuting; Su Yuting. |
| GR01 | Patent grant | |