CN111695011B - Tensor expression-based dynamic hypergraph structure learning classification method and system - Google Patents

Tensor expression-based dynamic hypergraph structure learning classification method and system Download PDF

Info

Publication number
CN111695011B
CN111695011B CN202010548497.6A CN202010548497A CN111695011B CN 111695011 B CN111695011 B CN 111695011B CN 202010548497 A CN202010548497 A CN 202010548497A CN 111695011 B CN111695011 B CN 111695011B
Authority
CN
China
Prior art keywords
hypergraph structure
label
tensor
data
hypergraph
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010548497.6A
Other languages
Chinese (zh)
Other versions
CN111695011A (en
Inventor
高跃
张子昭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN202010548497.6A priority Critical patent/CN111695011B/en
Publication of CN111695011A publication Critical patent/CN111695011A/en
Application granted granted Critical
Publication of CN111695011B publication Critical patent/CN111695011B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/906Clustering; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/901Indexing; Data structures therefor; Storage structures
    • G06F16/9024Graphs; Linked lists
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses a tensor expression-based dynamic hypergraph structure learning classification method and system, wherein the method comprises the following steps: step 1, extracting a characteristic vector of sample data in a database, constructing a hypergraph structure according to the characteristic vector, and expressing the connection strength between any point set in the hypergraph structure by utilizing a tensor; and 2, introducing potential energy loss functions and empirical loss functions into the potential energy of the label vector set, the hypergraph structure represented by the tensor and the point set in the database to generate a dynamic hypergraph structure learning model, performing optimization solution on the dynamic hypergraph structure learning model by using an alternative optimization method, and performing optimal solution of the label vector set after the model solution for data classification. According to the technical scheme, tensor is introduced as a representation form of the dynamic hypergraph structure and a dynamic hypergraph structure learning method, the hypergraph structure and the label vectors of data are optimized alternately, and finally data classification is achieved according to the optimal solution of the label vectors of the data.

Description

Tensor expression-based dynamic hypergraph structure learning classification method and system
Technical Field
The application relates to the technical field of data label processing, in particular to a dynamic hypergraph structure learning classification method based on tensor expression and a dynamic hypergraph structure learning classification system based on tensor expression.
Background
In practice, a small portion of the data is usually tagged, and a large portion of the data is untagged. In such a case, the semi-supervised learning method can simultaneously utilize tagged data and untagged data, and exhibits excellent performance.
The hypergraph is a semi-supervised classification method, each vertex of the hypergraph represents sample data, and a hypergraph represents the association between the sample data.
The learning method based on the hypergraph structure can be regarded as a process of label propagation on the hypergraph structure, points which are connected more closely on the hypergraph should have similar labels, and points which are farther away should have different labels. A good hypergraph structure can accurately model the relevance between data, and therefore better classification performance is obtained.
In the prior art, methods for establishing a hypergraph structure comprise methods for establishing a hypergraph structure based on k-nearest neighbor and methods for establishing a hypergraph structure based on sparse representation, and the methods are used for establishing a static hypergraph structure according to characteristic representation or sparse representation of data, and the hypergraph structure is kept unchanged in the subsequent hypergraph learning process. Obviously, this static hypergraph structure is not guaranteed to be optimal.
In addition, there is also a way to adjust the weights of the hyperedges during hypergraph learning, since different connections may have different importance. However, such adjustments do not completely repair improper or even erroneous connections, and thus the performance of the hypergraph structure is improved only marginally.
Disclosure of Invention
The purpose of this application lies in: and a dynamic hypergraph structure is provided, a tensor is introduced as a representation form of the dynamic hypergraph structure and a dynamic hypergraph structure learning method, the hypergraph structure and the label vector of the data are alternately optimized, a stable and more excellent hypergraph structure and label vector of the data are finally obtained, label-free data are classified according to the label vector, the defect that the traditional static hypergraph structure cannot accurately represent data association is overcome, and the accuracy of data classification is effectively improved.
The technical scheme of the first aspect of the application is as follows: a tensor representation-based dynamic hypergraph structure learning classification method is provided, and comprises the following steps:
step 1, extracting a characteristic vector of sample data in a database, constructing a hypergraph structure according to the characteristic vector, and expressing the connection strength between any point set in the hypergraph structure by utilizing a tensor, wherein the sample data comprises label data and label-free data;
step 2, introducing potential energy loss functions and empirical loss functions into potential energy of a label vector set, a hypergraph structure represented by tensor and a point set in a database to generate a dynamic hypergraph structure learning model, performing optimization solution on the dynamic hypergraph structure learning model by using an alternative optimization method, recording the model after optimization solution as a label classification model, and performing data classification by using the label classification model to solve the optimal solution of the label vector set,
the calculation formula of the dynamic hypergraph structure learning model is as follows:
Figure BDA0002541597740000021
in the formula (I), the compound is shown in the specification,
Figure BDA0002541597740000022
for the joint strength of the ω -th set of points,
Figure BDA0002541597740000023
for all joint strengths
Figure BDA0002541597740000024
Where row vectors are arranged as a row indexed by omega,
Figure BDA0002541597740000025
as a row vector
Figure BDA0002541597740000026
Initial value of f ω Is potential energy of point set, lambda and beta are weight coefficients, Y is label vector set 0 Is the initial set of values for the tag vector.
In any of the above technical solutions, further, in step 1, tensor representation is performed on the connection strength between any point set in the hypergraph structure by using a tensor, which specifically includes: initial values of the row vectors in the tensor representation
Figure BDA0002541597740000027
Corresponding element with middle index of omega
Figure BDA0002541597740000028
The calculation formula of (c) is:
Figure BDA0002541597740000029
ω∈Ψ N
Figure BDA0002541597740000031
in the formula (I), the compound is shown in the specification,
Figure BDA0002541597740000032
is any point set { v i I belongs to the center of ω, v i As the point of the i-th spot,
Figure BDA0002541597740000033
is a power set of the set Ω = {1,2, \8230;, N }, N being the number of vertices in the hypergraph structure,
Figure BDA0002541597740000034
represents the ith point v i To the center
Figure BDA0002541597740000035
The distance of (a) to (b),
Figure BDA0002541597740000036
is the set of points { v i I belongs to omega, and epsilon is a super-edge set of the hypergraph structure.
In any of the above technical solutions, further, in step 2, a calculation formula of potential energy of the point set is:
Figure BDA0002541597740000037
where α is the balance parameter, δ (ω) is the number of points in the set of points, y i Is a label vector, x, of the ith sample data i Is the feature vector of the ith sample data.
In any one of the above technical solutions, further, in step 2, performing optimization solution on the dynamic hypergraph structure learning model by using an alternating optimization method, specifically including:
step 21Fixing label vector set Y, utilizing projection gradient and first objective function to iteratively update tensor expression of hypergraph structure
Figure BDA0002541597740000038
The calculation formula of the hypergraph structure iterative updating is as follows:
Figure BDA0002541597740000039
Figure BDA00025415977400000310
Figure BDA00025415977400000311
Figure BDA00025415977400000312
Figure BDA00025415977400000313
in the formula (I), the compound is shown in the specification,
Figure BDA00025415977400000314
in order to be a gradient of the magnetic field,
Figure BDA00025415977400000315
is the tensor representation of the hypergraph structure after k iterations, h is the step length of each optimization, P is the projection of the feasible set, and f is all the potential energies f ω Row vectors arranged in a row by taking omega as an index;
step 22, fixing the tensor representation of the hypergraph structure
Figure BDA00025415977400000316
Calculating a label transfer function by using a second objective function;
and 23, repeating the step 21 and the step 22 until the dynamic hypergraph structure learning model is converged, and completing optimization solution.
In any one of the above technical solutions, further, in step 1, extracting a feature vector of sample data in the database specifically includes: judging the data type of the sample data; determining a feature extraction method according to the data type; and extracting the feature vector of the sample data by using the determined feature extraction method.
In any one of the above technical solutions, further, the label vector set is determined by a class to which the labeled data and the unlabeled data belong.
In any of the above technical solutions, further, the method is suitable for gesture recognition and three-dimensional object recognition.
The technical scheme of the second aspect of the application is as follows: a tensor representation-based dynamic hypergraph structure learning classification system is provided, which comprises: a tensor expression unit and a label classification model generation unit;
the tensor expression unit is used for extracting the characteristic vector of sample data in the database, constructing a hypergraph structure according to the characteristic vector, and expressing the connection strength between any point set in the hypergraph structure by utilizing the tensor, wherein the sample data comprises label data and label-free data;
the label classification model generation unit is used for introducing potential energy loss functions and empirical loss functions into potential energy of a label vector set, a hypergraph structure represented by tensor and a point set in a database to generate a dynamic hypergraph structure learning model, carrying out optimization solution on the dynamic hypergraph structure learning model by using an alternative optimization method, recording the model after optimization solution as a label classification model, wherein the label classification model is used for solving the optimal solution of the label vector set to carry out data classification,
the calculation formula of the dynamic hypergraph structure learning model is as follows:
Figure BDA0002541597740000041
in the formula (I), the compound is shown in the specification,
Figure BDA0002541597740000042
for the joint strength of the ω -th set of points,
Figure BDA0002541597740000043
for all joint strengths
Figure BDA0002541597740000044
Where row vectors are arranged as a row indexed by omega,
Figure BDA0002541597740000045
is a row vector
Figure BDA0002541597740000046
Initial value of (a), f ω Is potential energy of point set, lambda and beta are weight coefficients, Y is label vector set 0 Is the initial set of values for the tag vector.
In any one of the above technical solutions, further, when the tensor representing unit tensor represents the connection strength between any one point set in the hypergraph structure by using a tensor, the tensor representing unit specifically includes: initial values of the row vectors in the tensor representation
Figure BDA0002541597740000047
Corresponding element with middle index of omega
Figure BDA0002541597740000048
The calculation formula of (2) is as follows:
Figure BDA0002541597740000051
ω∈Ψ N
Figure BDA0002541597740000052
in the formula (I), the compound is shown in the specification,
Figure BDA0002541597740000053
is any point set { v i I ∈ ω } center, v i As the point of the i-th spot,
Figure BDA0002541597740000054
a power set of the set Ω = {1,2, \8230;, N }, N being the number of vertices in the hypergraph structure,
Figure BDA0002541597740000055
represents the ith point v i To the center
Figure BDA0002541597740000056
The distance of (a) to (b),
Figure BDA0002541597740000057
for the set of points { v i I belongs to omega, and epsilon is a super-edge set of the hypergraph structure.
In any one of the above technical solutions, further, the tag classification model generating unit is further configured to: calculating potential energy of the point set, wherein the potential energy is calculated according to the formula:
Figure BDA0002541597740000058
where α is the balance parameter, δ (ω) is the number of points in the set of points, y i A label vector, x, for the ith sample data i Is the feature vector of the ith sample data.
In any of the above technical solutions, further, when the tag classification model generating unit performs optimization solution on the dynamic hypergraph structure learning model by using an alternating optimization method, the method specifically includes:
fixing a label vector set Y, and iteratively updating tensor representation of the hypergraph structure by using projection gradient and a first objective function
Figure BDA0002541597740000059
The calculation formula of the hypergraph structure iterative updating is as follows:
Figure BDA00025415977400000510
Figure BDA00025415977400000511
Figure BDA00025415977400000512
Figure BDA00025415977400000513
Figure BDA00025415977400000514
in the formula (I), the compound is shown in the specification,
Figure BDA00025415977400000515
in order to be a gradient of the magnetic field,
Figure BDA00025415977400000516
is the tensor representation of the hypergraph structure after k iterations, h is the step length of each optimization, P is the projection of the feasible set, and f is all the potential energies f ω Row vectors arranged in a row by taking omega as an index;
tensor representation of a fixed hypergraph structure
Figure BDA00025415977400000517
Calculating a label transfer function by using a second objective function;
refastening the labelstock set Y and tensor representation of the hypergraph structure, respectively
Figure BDA00025415977400000518
And (5) completing the optimization solution until the dynamic hypergraph structure learning model is converged.
In any of the above technical solutions, further, the extracting, by the tensor expression unit, the eigenvector of the sample data in the database specifically includes: judging the data type of the sample data; determining a feature extraction method according to the data type; and extracting the feature vector of the sample data by using the determined feature extraction method.
In any one of the above technical solutions, further, the label vector set is determined by a class to which the labeled data and the unlabeled data belong.
In any of the above technical solutions, further, the system is suitable for gesture recognition and three-dimensional object recognition.
The beneficial effect of this application is:
according to the technical scheme, the connection strength among the point sets in the hypergraph structure is expressed by adopting the tensor, sample data in the database are correlated on each order, the hypergraph structure is optimized alternately to form a dynamic hypergraph structure, and then label-free data are classified according to label vectors, so that the accuracy of data classification is effectively improved.
In the application, in the optimization updating process of the dynamic hypergraph structure, the prior information (labeled data) of the existing labels and characteristics is fully combined, the smoothness of the hypergraph structure in each characteristic space of a label space is kept, the hypergraph structure in the label classification process is greatly optimized, and the complex high-order association of data can be expressed more intuitively.
Drawings
The advantages of the above and/or additional aspects of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic flow diagram of a tensor representation-based dynamic hypergraph structure learning classification method according to one embodiment of the present application;
FIG. 2 is a schematic diagram of a label classification model according to an embodiment of the present application.
Detailed Description
In order that the above objects, features and advantages of the present application can be more clearly understood, the present application will be described in further detail with reference to the accompanying drawings and detailed description. It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application, however, the present application may be practiced in other ways than those described herein, and therefore the scope of the present application is not limited by the specific embodiments disclosed below.
The first embodiment is as follows:
the traditional static hypergraph structure is directly constructed by prior information, and the structure of the hypergraph structure is fixed and invariable in the hypergraph learning process and is usually represented by an adjacency matrix. In the embodiment, a dynamic hypergraph structure different from a traditional static hypergraph structure is provided, dynamic updating is performed in the hypergraph learning process, particularly, in the classification of label-free data, tensors are introduced to express the connection strength between point sets in the hypergraph structure, label vectors of the hypergraph structure and the data are optimized alternately, and the data classification is achieved.
As shown in fig. 1, the embodiment provides a dynamic hypergraph structure learning and classification method based on tensor expression, which is applicable to gesture recognition and three-dimensional object recognition. The method comprises the following steps:
step 1, extracting characteristic vectors of sample data in a database, constructing a hypergraph structure according to the characteristic vectors, and expressing the connection strength between any point set in the initial hypergraph structure by using a tensor, wherein the sample data comprises label data and label-free data;
specifically, in the present embodiment, the three-dimensional object feature description data is classified as an example, and the database uses a three-dimensional model data set (NTU) including 67 classes of 2020 objects, such as bombs, bottles, cars, chairs, cups, doors, maps, airplanes, swords, watches, tanks, trucks, etc., wherein 400 three-dimensional objects have tag information and 1620 three-dimensional objects have no tag information. Two parameters: the weight coefficient beta of the empirical loss term of the hypergraph structure is set to be 10, because if the beta is larger, the difference between the finally optimized tag classification model and the hypergraph structure is smaller, and the beta is too large, the hypergraph structure is difficult to optimize, and the smaller the beta is, the larger the difference between the finally optimized tag classification model and the hypergraph structure is, the too small beta causes overfitting, and therefore an intermediate value is selected; the weight coefficient λ of the data tag experience loss term is set to 1, because if λ is too small, the contribution of information with tag data is smaller, and therefore λ ≧ 0 is generally set.
Further, in step 1, extracting a feature vector of sample data in the database specifically includes: judging the data type of the sample data; determining a feature extraction method according to the data type; and extracting the feature vector of the sample data by using the determined feature extraction method.
It should be noted that one data corresponds to one feature vector, and the feature vector represents information of the data in the feature space. The feature extraction method is determined by the data type and characteristics of the sample data, and therefore, the data type of the sample data needs to be judged. When the sample data is judged to be three-dimensional object feature description data, extracting feature vectors of the sample data through a multi-view convolution neural network; when the sample data is judged to be text feature description data, feature vectors of the sample data are extracted through a Bag-of-Words model.
Therefore, in this embodiment, a multi-view convolutional neural network is used to extract feature vectors of sample data, and after the sample data is input to the multi-view convolutional neural network, a 4096-dimensional MVCNN feature vector can be output.
The hypergraph structure constructed in this embodiment is
Figure BDA0002541597740000081
Figure BDA0002541597740000082
Is the set of all points in the hypergraph, each point representing a datum, and ε is the set of all hyper-edges in the hypergraph, each hyper-edge representing a higher order association between those points connected by that hyper-edge. The method for constructing the hypergraph structure is a k-nearest neighbor method.
First, let k =2, for each three-dimensional object, calculate the euclidean distance between its feature vector and the feature vectors of other three-dimensional objects, find the 2 three-dimensional objects closest to it, and establish a super edge between these three objects. In the embodiment, there are 2020 three-dimensional objects in total, so 2020 super edges are established, each super edge connects 3 objects, which we call express 3-degree association between the objects. Then let k =4, for each three-dimensional object, find the 4 three-dimensional objects closest to it and establish a super edge between these 5 objects. Thus we have further added 2020 super edges, each connecting 5 objects, which we call express 5 th order associations between objects. We further let k =8 and 16, establish the super-edge in the same way. Finally we obtained a hypergraph structure with 8080 hyper-edges.
Further, the label vector set in step 1 is determined by the classes to which the labeled data and the unlabeled data belong;
specifically, for a hypergraph structure comprising N vertices
Figure BDA0002541597740000083
The dimension of its tensor is 2 N -1. Order to
Figure BDA0002541597740000091
Represents a power set of the set omega = {1,2, \8230;, N }, and uses
Figure BDA0002541597740000092
To index each element in its tensor. For arbitrary
Figure BDA0002541597740000093
Is defined as v i I ∈ ω } the strength of the connection between the set of points. If there is one hyper-edge connection and only connect { v } i All vertices in | i ∈ ω }, then
Figure BDA0002541597740000094
Otherwise
Figure BDA0002541597740000095
For all the connection strengths
Figure BDA0002541597740000096
(where ω ∈ Ψ) N ) In a row vector arranged in a row with omega as an index
Figure BDA0002541597740000097
Since the vector is a special case of tensor, the row vector
Figure BDA0002541597740000098
Is a tensor, and its corresponding formula is:
Figure BDA0002541597740000099
using tensors
Figure BDA00025415977400000910
The connection strength of all possible higher-order associations between N points, i.e. the tensor representation between the sets of points in the hypergraph structure, can be expressed as a tensor
Figure BDA00025415977400000911
Tensor if the hypergraph structure changes, i.e. the connection strength of some hyperedges changes
Figure BDA00025415977400000912
The corresponding elements in (a) may also change.
Further, based on the above setting, it can be found that in step 1, the connection strength between any point set in the initial hypergraph structure is expressed by a tensor, in which the initial value of a line vector is expressed by a tensor
Figure BDA00025415977400000913
Corresponding elements of the middle index ω
Figure BDA00025415977400000914
The calculation formula of (2) is as follows:
Figure BDA00025415977400000915
ω∈Ψ N
Figure BDA00025415977400000916
in the formula (I), the compound is shown in the specification,
Figure BDA00025415977400000917
is any point set { v i I belongs to the center of ω, v i As the point (i) is a point (i),
Figure BDA00025415977400000918
is a set of powers of the set Ω = {1,2, \8230;, N }, N being the number of vertices in the hypergraph structure,
Figure BDA00025415977400000919
represents the ith point v i To the center
Figure BDA00025415977400000920
May be a euclidean distance,
Figure BDA00025415977400000921
is the set of points { v i I belongs to the average distance between every two points in omega, exp () is an exponential function with a natural constant e as the base, and epsilon is a super-edge set of the hypergraph structure.
And 2, introducing a potential energy loss function and an empirical loss function into a label vector set in the database, expressing the potential energy of the hypergraph structure and the point set by a tensor, generating a dynamic hypergraph structure learning model, performing optimization solution on the dynamic hypergraph structure learning model by using an alternative optimization method, and recording the model after the optimization solution as a label classification model.
Specifically, the purpose of this embodiment is to obtain a label of unlabeled data in sample data, and set x i Denotes the firsti eigenvectors of sample data, and feature vector set X = { X } of all sample data 1 ,x 2 ,…,x N },y i A tag vector representing the ith sample data, a set of tag vectors of all sample data Y = { Y = 1 ,y 2 ,…,y N Where for the label vector y i The initial values of (a) are defined as follows:
for tagged data, if it belongs to class j, we have y ij =1, the remaining elements are equal to 0, if the data is unlabeled, all elements are set to 0.5, and the set of initial values of the label vectors of all sample data is denoted as Y 0
Therefore, assuming that δ (ω) represents the number of points in the point set, the potential energy of the point set is calculated as:
Figure BDA0002541597740000101
where α is the balance parameter, δ (ω) is the number of points in the set of points, y i Is a label vector, x, of the ith sample data i Is the feature vector of the ith sample data.
If the characteristics and labels of the point in the point set are similar, then its potential energy is smaller, otherwise the potential energy is larger.
And (3) expressing a label vector set, potential energy of the point set and tensor of the hypergraph structure in the database, introducing a potential energy loss function and an empirical loss function, and obtaining a calculation formula of a dynamic hypergraph structure learning model, wherein the calculation formula is as follows:
Figure BDA0002541597740000102
in the formula (I), the compound is shown in the specification,
Figure BDA0002541597740000103
for the joint strength of the ω -th set of points,
Figure BDA0002541597740000104
for all joint strengths
Figure BDA0002541597740000105
Where row vectors, i.e. tensor representations of the hypergraph structure,
Figure BDA0002541597740000106
vector for rows (hypergraph structure)
Figure BDA0002541597740000107
Initial value of f ω Is potential energy of point set, lambda and beta are weight coefficients, Y is label vector set 0 Is the initial set of values for the tag vector.
First term in the above formula
Figure BDA0002541597740000111
Is a potential energy loss term defined as: the joint strength of the point set is multiplied by the sum of the potential energy, and the term represents that for the point set with larger potential energy, the super edge is not needed to connect the group of points, and if the super edge is connected, the joint strength is also as small as possible; for a point set with a relatively large potential energy, it is preferable to have a super edge connecting the points and to have the highest possible connecting strength.
Second term in the above formula
Figure BDA0002541597740000112
The empirical loss term of the hypergraph structure is set to 10 to reduce the difference between the optimized dynamic hypergraph structure and the initially constructed hypergraph structure, and the empirical loss term can accelerate convergence in the learning process.
The third term λ R in the above formula emp And (Y) setting the weight coefficient lambda of the experience loss term of the label to be 1, so that the label obtained by learning the dynamic hypergraph structure learning model has small difference with the initial label vector of the sample data.
Further, in step 2, the optimization solution is performed on the dynamic hypergraph structure learning model by using an alternating optimization method, which specifically includes:
step 21, fixing the label vector set Y, and iteratively updating tensor expression of the hypergraph structure by using the projection gradient and the first objective function
Figure BDA0002541597740000113
The calculation formula for the hypergraph structure iterative update is as follows:
Figure BDA0002541597740000114
Figure BDA0002541597740000115
Figure BDA0002541597740000116
Figure BDA0002541597740000117
Figure BDA0002541597740000118
in the formula (I), the compound is shown in the specification,
Figure BDA0002541597740000119
in order to be a gradient of the magnetic field,
Figure BDA00025415977400001110
is the tensor representation of the hypergraph structure after k iterations, h is the step length of each optimization, P is the projection of the feasible set, and f is all the potential energies f ω (where ω ∈ Ψ) N ) A row vector which is arranged into a row by taking omega as an index, namely representing the potential energy of all point sets in a vector form;
specifically, the tensor representation of the hypergraph structure is updated iteratively while the label vector set Y is fixed
Figure BDA00025415977400001111
The first objective function is:
Figure BDA0002541597740000121
the constraint optimization of the function can be solved by utilizing the projection gradient, and the tensor expression of the hypergraph structure is set
Figure BDA0002541597740000122
The gradient of (c) is calculated as:
Figure BDA0002541597740000123
wherein f is all of f ω (where ω ∈ Ψ) N ) The potential energy of all point sets is represented in a vector form by a row vector which is arranged in a row by taking omega as an index. The tensor representation of the hypergraph structure is then updated by iteration
Figure BDA0002541597740000124
Namely:
Figure BDA0002541597740000125
Figure BDA0002541597740000126
Figure BDA00025415977400001214
Figure BDA0002541597740000127
in the formula (I), the compound is shown in the specification,
Figure BDA0002541597740000128
in order to be a gradient of the magnetic field,
Figure BDA0002541597740000129
for the hypergraph structure after k iterations, h is the step length of each optimization, and P is the feasible set
Figure BDA00025415977400001210
Projection of (2);
step 22, fixing the tensor representation of the hypergraph structure
Figure BDA00025415977400001211
Calculating a tag transfer function by using the second objective function;
specifically, the second objective function is set as follows:
Figure BDA00025415977400001212
Figure BDA00025415977400001213
Figure BDA0002541597740000131
in the formula, Ψ ij For all the contained points v i And v j A is a balance parameter, S ij Is an element of the ith row and jth column of the matrix S, D S Is a diagonal matrix whose diagonal elements are the sum of S per row.
Thus, the optimal solution for the tag transfer function is:
Figure BDA0002541597740000132
and 23, repeating the step 21 and the step 22 until the dynamic hypergraph structure learning model converges, and finishing the optimization solution.
Specifically, after solving, the optimal label vector set Y and the optimal tensor representation of the hypergraph structure are obtained
Figure BDA0002541597740000133
Then the tag vector of the ith sample data is y i Find y i Subscript of the element having the highest value in (a) if y ij Is y i The ith sample data is classified into the jth class. In this way, 1620 unlabeled data without labeled information in the sample data can be classified.
Example two:
the embodiment also provides a dynamic hypergraph structure learning classification system based on tensor expression, and the system can be applied to gesture recognition and three-dimensional object recognition. The system comprises: a tensor expression unit and a label classification model generation unit; the tensor expression unit is used for extracting the eigenvector of sample data in a database, constructing a hypergraph structure according to the eigenvector, and expressing the connection strength between any point set in the hypergraph structure by utilizing tensor, wherein the sample data comprises labeled data and unlabeled data, and a label vector set is determined by the class to which the labeled data and the unlabeled data belong.
Specifically, in the present embodiment, the three-dimensional object feature description data is classified as an example, and the database uses a three-dimensional model data set (NTU) including 67 classes of 2020 objects, such as bombs, bottles, cars, chairs, cups, doors, maps, airplanes, swords, watches, tanks, trucks, etc., wherein 400 three-dimensional objects have tag information and 1620 three-dimensional objects have no tag information. Two parameters: the weight coefficient beta of the empirical loss term of the hypergraph structure is set to be 10, because if beta is larger, the difference between the finally optimized tag classification model and the hypergraph structure is smaller, and beta is too large, the hypergraph structure is difficult to optimize, and if beta is smaller, the difference between the finally optimized tag classification model and the hypergraph structure is larger, and beta is too small, overfitting is caused, so that an intermediate value is selected; the weighting factor λ of the data tag experience loss term is set to 1, because λ is too small, the contribution of information with tag data is smaller, and therefore λ ≧ 0 is generally set.
Further, the tensor expression unit extracts the eigenvector of the sample data in the database, and specifically includes: judging the data type of the sample data; determining a feature extraction method according to the data type; and extracting the feature vector of the sample data by using the determined feature extraction method.
It should be noted that one data corresponds to one feature vector, and the feature vector represents information of the data in the feature space. The feature extraction method is determined by the data type and characteristics of the sample data, and therefore, the data type of the sample data needs to be judged. When the sample data is judged to be three-dimensional object feature description data, extracting feature vectors of the sample data through a multi-view convolution neural network; when the sample data is judged to be text feature description data, feature vectors of the sample data are extracted through a Bag-of-Words model.
Therefore, in this embodiment, the multi-view convolutional neural network is used to extract the feature vector of the sample data, and after the sample data is input into the multi-view convolutional neural network, a 4096-dimensional MVCNN feature vector can be output.
The hypergraph structure constructed in this embodiment is
Figure BDA0002541597740000141
Figure BDA0002541597740000142
Is the set of all points in the hypergraph, each point representing a datum, and ε is the set of all hyper-edges in the hypergraph, each hyper-edge representing a high order association between those points connected by that hyper-edge. The method for constructing the hypergraph structure is a k-nearest neighbor method.
First, let k =2, for each three-dimensional object, calculate the euclidean distance between its feature vector and the feature vectors of other three-dimensional objects, find the 2 three-dimensional objects closest to it, and establish a super edge between these three objects. In the embodiment, there are 2020 three-dimensional objects in total, so 2020 super edges are established, each super edge connects 3 objects, which we call express 3-degree association between the objects. Then let k =4, for each three-dimensional object, find the 4 three-dimensional objects closest to it and establish a super edge between these 5 objects. Thus we have further added 2020 super edges, each connecting 5 objects, which we call express 5 th order associations between objects. We further let k =8 and 16, establish the super-edge in the same way. Finally we obtained a hypergraph structure with 8080 hyper-edges.
For a hypergraph structure containing N vertices
Figure BDA0002541597740000151
The dimension of its tensor is 2 N -1. Order to
Figure BDA0002541597740000152
Represents a power set of the set omega = {1,2, \8230;, N }, and uses
Figure BDA0002541597740000153
To index each element in its tensor. For arbitrary ω ∈ Ψ N
Figure BDA0002541597740000154
Is defined as v i I ∈ ω } the strength of the connection between the set of points. If there is one hyper-edge connection and only connect { v } i All vertices in | i ∈ ω }, then
Figure BDA0002541597740000155
Otherwise
Figure BDA0002541597740000156
For all the connection strengths
Figure BDA0002541597740000157
(where ω ∈ Ψ) N ) In a row vector arranged in a row with omega as an index
Figure BDA0002541597740000158
Since the vector is a special case of tensor, the row vector
Figure BDA0002541597740000159
Is a tensor, and its corresponding formula is:
Figure BDA00025415977400001510
using tensors
Figure BDA00025415977400001511
The connection strength of all possible higher-order associations between N points, i.e. the tensor representation between the sets of points in the hypergraph structure, can be expressed as a tensor
Figure BDA00025415977400001512
Tensor if the hypergraph structure changes, i.e. the connection strength of some hyperedges changes
Figure BDA00025415977400001513
The corresponding elements in (a) may also change.
Further, based on the setting, the tensor expression unit tensorially expresses the connection strength between any one point set in the hypergraph structure by using the tensor specifically includes: initial values of the row vectors in the tensor representation
Figure BDA00025415977400001514
Corresponding element with middle index of omega
Figure BDA00025415977400001515
The calculation formula of (c) is:
Figure BDA00025415977400001516
ω∈Ψ N
Figure BDA00025415977400001517
in the formula (I), the compound is shown in the specification,
Figure BDA00025415977400001518
is any point set { v i I ∈ ω } center, v i As the point of the i-th spot,
Figure BDA00025415977400001519
is a power set of the set Ω = {1,2, \8230;, N }, N being the number of vertices in the hypergraph structure,
Figure BDA00025415977400001520
represents the ith point v i To the center
Figure BDA00025415977400001521
The distance of (a) to (b),
Figure BDA00025415977400001522
is the set of points { v i I belongs to omega, and epsilon is a super-edge set of the hypergraph structure.
The label classification model generation unit is used for introducing potential energy loss functions and experience loss functions into potential energy of a label vector set, a hypergraph structure represented by a tensor and a point set in a database to generate a dynamic hypergraph structure learning model, optimizing and solving the dynamic hypergraph structure learning model by using an alternative optimization method, recording the optimized and solved model as a label classification model, and the label classification model is used for solving the optimal solution of the label vector set to perform data classification.
Specifically, the purpose of this embodiment is to obtain a label of unlabeled data in sample data, and set x i Representing the eigenvectors of the ith sample data, the set of eigenvectors of all sample data X = { X = 1 ,x 2 ,…,x N },y i A label vector representing the ith sample data, a set of label vectors of all sample data Y = { Y = { Y } 1 ,y 2 ,…,y N H, where for the label vector y i The initial values of (a) are defined as follows:
for tagged data, if it belongs to class j, we have y ij =1, the remaining elements are equal to 0, if the data is unlabeled, all elements are set to 0.5, and the set of initial values of the label vectors of all sample data is denoted as Y 0
Therefore, it is assumed that δ (ω) represents the number of points in the point set, and the label classification model generation unit calculates the potential energy of the point set using the following calculation formula:
Figure BDA0002541597740000161
where α is the balance parameter, δ (ω) is the number of points in the set of points, y i Is a label vector, x, of the ith sample data i Is the feature vector of the ith sample data.
If the characteristics and labels of the point in the point set are similar, then its potential energy is smaller, otherwise the potential energy is larger.
And (3) expressing a label vector set, the potential energy of the point set and the tensor of the hypergraph structure in the database, introducing a potential energy loss function and an empirical loss function, and obtaining a calculation formula of a learning model of the dynamic hypergraph structure, wherein the calculation formula comprises the following steps:
Figure BDA0002541597740000162
in the formula (I), the compound is shown in the specification,
Figure BDA0002541597740000163
for the joint strength of the ω -th set of points,
Figure BDA0002541597740000164
for all joint strengths
Figure BDA0002541597740000165
Where row vectors, i.e. tensor representations of the hypergraph structure,
Figure BDA0002541597740000171
vector of execution (hypergraph structure)
Figure BDA0002541597740000172
Initial value of f ω Is potential energy of point set, lambda and beta are weight coefficients, Y is label vector set 0 Is the initial set of values for the tag vector.
First term in the above formula
Figure BDA0002541597740000173
Is a potential energy loss term defined as: the joint strength of the point set is multiplied by the sum of the potential energy, and the term represents that for the point set with larger potential energy, the super edge is not needed to connect the group of points, and if the super edge is connected, the joint strength is also as small as possible; for a set of points with a relatively large potential energy, it is preferable to have a super edge connecting the set of points and to have the greatest possible strength.
Second term in the above equation
Figure BDA0002541597740000174
The empirical loss term for the hypergraph structure can be used to speed up convergence in the learning process by setting the weighting coefficient beta to 10 to reduce the difference between the optimized hypergraph structure and the initial hypergraph structure.
The third term λ R in the above equation emp And (Y) setting the weight coefficient lambda of the experience loss term of the label to be 1, so that the label vector obtained by learning of the dynamic hypergraph structure learning model has small difference with the initial label vector of the sample data.
Further, when the label classification model generation unit performs optimization solution on the dynamic hypergraph structure learning model by using an alternative optimization method, the label classification model generation unit specifically includes:
firstly, fixing a label vector set Y, and iteratively updating tensor expression of a hypergraph structure by using projection gradient and a first objective function
Figure BDA0002541597740000175
Wherein, the hypergraph structure is updated by iterationThe formula is as follows:
Figure BDA0002541597740000176
Figure BDA0002541597740000177
Figure BDA0002541597740000178
Figure BDA0002541597740000179
Figure BDA00025415977400001710
in the formula (I), the compound is shown in the specification,
Figure BDA00025415977400001711
in order to be a gradient of the magnetic field,
Figure BDA00025415977400001712
is tensor representation of the hypergraph structure after k iterations, h is the step length of each optimization, P is projection of the feasible set, and f is all potential energy f ω Row vectors arranged in a row by taking omega as an index;
specifically, the tensor representation of the hypergraph structure is updated iteratively while the label vector set Y is fixed
Figure BDA00025415977400001713
The first objective function is:
Figure BDA0002541597740000181
the constraint optimization of the function can be solved by using projection gradient and set a hypergraph nodeTensor representation of a structure
Figure BDA0002541597740000182
The gradient of (c) is calculated as:
Figure BDA0002541597740000183
wherein f is all of ω (where ω ∈ Ψ) N ) The potential energy of all point sets is represented in a vector form by a row vector arranged in a row with omega as an index. The tensor representation of the hypergraph structure is then updated by iteration
Figure BDA0002541597740000184
Namely:
Figure BDA0002541597740000185
Figure BDA0002541597740000186
Figure BDA0002541597740000187
Figure BDA0002541597740000188
in the formula (I), the compound is shown in the specification,
Figure BDA0002541597740000189
in order to be a gradient of the magnetic field,
Figure BDA00025415977400001810
for the hypergraph structure after k iterations, h is the step length of each optimization, and P is the feasible set
Figure BDA00025415977400001811
Projection of (2);
second, the tensor representation of the fixed hypergraph structure
Figure BDA00025415977400001812
Calculating a tag transfer function by using the second objective function;
specifically, the second objective function is set as follows:
Figure BDA00025415977400001813
Figure BDA00025415977400001814
Figure BDA0002541597740000191
in the formula, psi ij For all contained points v i And v j A is a balance parameter, S ij As elements of the ith row and jth column of the matrix S, D S Is a diagonal matrix whose diagonal elements are the sum of S per row.
Thus, the optimal solution for the tag transfer function is:
Figure BDA0002541597740000192
finally, repeating the two processes, and respectively fixing the label vector set Y and the tensor expression of the hypergraph structure again
Figure BDA0002541597740000193
And completing the optimization solution until the dynamic hypergraph structure learning model converges.
Specifically, after solving, the optimal label vector set Y and the optimal tensor expression of the hypergraph structure are obtained
Figure BDA0002541597740000194
Then the tag vector of the ith sample data is y i Find y i Subscript of the element having the highest value in (a) if y ij Is y i The ith sample data is classified into the jth class. In this way, 1620 unlabeled data without label information in the sample data can be classified.
As shown in fig. 2, a label classification model is built by using the classification method in the present embodiment, and a support vector machine method and a conventional hypergraph learning method are used as comparison methods to classify a three-dimensional model data set (NTU) used in the present embodiment, where the classification accuracy is shown in table 1, the classification accuracy of the method in the present embodiment is 80.36%, the classification accuracy of the support vector machine method to the data set is 71.6%, and the classification accuracy of the conventional hypergraph learning method to the data set is 74.07%.
TABLE 1
Comparison method Accuracy of classification
Support vector machine method 71.6%
Traditional hypergraph learning method 74.07%
Dynamic hypergraph structure learning method 80.36%
The technical scheme of the present application is described in detail above with reference to the accompanying drawings, and the present application provides a tensor representation-based dynamic hypergraph structure learning classification method and system, wherein the method includes: step 1, extracting a characteristic vector of sample data in a database, constructing a hypergraph structure according to the characteristic vector, and expressing the connection strength between any point set in the hypergraph structure by utilizing a tensor, wherein the sample data comprises label data and label-free data; and 2, introducing potential energy loss functions and empirical loss functions into potential energy of the label vector set, the hypergraph structure represented by the tensor and the point set in the database to generate a dynamic hypergraph structure learning model, performing optimization solution on the dynamic hypergraph structure learning model by using an alternative optimization method, and performing optimal solution of the label vector set after the model solution for data classification. According to the technical scheme, tensor is introduced as a representation form of the dynamic hypergraph structure and a dynamic hypergraph structure learning method, the hypergraph structure and the label vectors of data are optimized alternately, and finally data classification is achieved according to the optimal solution of the label vectors of the data.
The steps in the present application may be sequentially adjusted, combined, and subtracted according to actual requirements.
The units in the device can be merged, divided and deleted according to actual requirements.
Although the present application has been disclosed in detail with reference to the accompanying drawings, it is to be understood that such description is merely illustrative and is not intended to limit the application of the present application. The scope of the present application is defined by the appended claims and may include various modifications, adaptations, and equivalents of the invention without departing from the scope and spirit of the application.

Claims (10)

1. A dynamic hypergraph structure learning classification method based on tensor expression is characterized by comprising the following steps:
step 1, extracting a characteristic vector of sample data in a database, constructing a hypergraph structure according to the characteristic vector, and expressing the connection strength between any point set in the hypergraph structure by utilizing a tensor, wherein the sample data comprises label data and label-free data;
step 2, introducing potential energy loss functions and empirical loss functions into the potential energy of the label vector set, the hypergraph structure represented by tensor and the point set in the database to generate a dynamic hypergraph structure learning model, performing optimization solution on the dynamic hypergraph structure learning model by using an alternative optimization method, recording the model after optimization solution as a label classification model, wherein the label classification model is used for solving the optimal solution of the label vector set to perform data classification,
the calculation formula of the dynamic hypergraph structure learning model is as follows:
Figure FDA0002541597730000011
in the formula (I), the compound is shown in the specification,
Figure FDA0002541597730000012
for the connection strength of the w-th set of points,
Figure FDA0002541597730000013
for all joint strengths
Figure FDA0002541597730000014
Where row vectors are arranged as a row indexed by omega,
Figure FDA0002541597730000015
is a row vector
Figure FDA0002541597730000016
Initial value of f ω Is potential energy of point set, lambda and beta are weight coefficients, Y is label vector set 0 Is the initial set of values for the tag vector.
2. The tensor representation-based dynamic hypergraph structure learning classification method of claim 1, characterized by the steps ofIn 1, tensor representation of the connection strength between any point sets in the hypergraph structure by using tensors specifically includes: said initial values of the row vectors in the tensor representation
Figure FDA0002541597730000017
Corresponding element with middle index of omega
Figure FDA0002541597730000018
The calculation formula of (c) is:
Figure FDA0002541597730000019
ω∈Ψ N
Figure FDA00025415977300000110
in the formula (I), the compound is shown in the specification,
Figure FDA00025415977300000111
is any point set { v i I belongs to the center of ω, v i As the point of the i-th spot,
Figure FDA00025415977300000112
is a set of powers of the set Ω = {1,2, \8230;, N }, N being the number of vertices in the hypergraph structure,
Figure FDA00025415977300000113
represents the ith point v i To the center
Figure FDA00025415977300000114
The distance of (a) to (b),
Figure FDA00025415977300000115
for the set of points { v i I belongs to omega, and epsilon is the average distance between every two points in the hypergraph structureAnd (4) super edge collection.
3. The tensor representation-based dynamic hypergraph structure learning classification method as recited in claim 1, wherein in the step 2, the potential energy of the point set is calculated by the formula:
Figure FDA0002541597730000021
where α is the balance parameter, δ (ω) is the number of points in the set of points, y i Is a label vector, x, of the ith sample data i Is the feature vector of the ith sample data.
4. The tensor representation-based dynamic hypergraph structure learning classification method as claimed in any one of claims 1 to 3, wherein in the step 2, the optimization solution is performed on the dynamic hypergraph structure learning model by using an alternative optimization method, which specifically includes:
step 21, fixing the label vector set Y, and iteratively updating the tensor expression of the hypergraph structure by using the projection gradient and the first objective function
Figure FDA0002541597730000022
The calculation formula of the hypergraph structure iterative update is as follows:
Figure FDA0002541597730000023
Figure FDA0002541597730000024
Figure FDA0002541597730000025
in the formula (I), the compound is shown in the specification,
Figure FDA0002541597730000026
in order to be a gradient of the magnetic field,
Figure FDA0002541597730000027
is the tensor representation of the hypergraph structure after k iterations, h is the step length of each optimization, P is the projection of the feasible set, and f is all the potential energies f ω Row vectors arranged in a row by taking omega as an index;
step 22, fixing the tensor representation of the hypergraph structure
Figure FDA0002541597730000028
Calculating a label transfer function by using a second objective function;
and 23, repeating the step 21 and the step 22 until the dynamic hypergraph structure learning model converges, and finishing the optimization solution.
5. The tensor representation-based dynamic hypergraph structure learning classification method as claimed in claim 1, wherein in step 1, extracting the feature vector of the sample data in the database specifically comprises:
judging the data type of the sample data;
determining a feature extraction method according to the data type;
and extracting the feature vector of the sample data by using the determined feature extraction method.
6. The tensor representation-based dynamic hypergraph structure learning classification method of claim 1, wherein the set of label vectors is determined by the class to which the labeled data and the unlabeled data belong.
7. The tensor representation-based dynamic hypergraph structure learning classification method as claimed in claim 1, wherein the method is suitable for gesture recognition and three-dimensional object recognition.
8. A tensor representation-based dynamic hypergraph structure learning classification system, comprising: a tensor expression unit and a label classification model generation unit;
the tensor expression unit is used for extracting the eigenvector of sample data in a database, constructing a hypergraph structure according to the eigenvector, and expressing the connection strength between any point set in the hypergraph structure by using a tensor, wherein the sample data comprises label data and label-free data;
the label classification model generation unit is used for introducing a potential energy loss function and an empirical loss function into potential energy of a label vector set, the hypergraph structure represented by tensor and the point set in the database to generate a dynamic hypergraph structure learning model, performing optimization solution on the dynamic hypergraph structure learning model by using an alternative optimization method, recording the model after optimization solution as a label classification model, and performing data classification by using the label classification model to solve the optimal solution of the label vector set,
the calculation formula of the dynamic hypergraph structure learning model is as follows:
Figure FDA0002541597730000031
in the formula (I), the compound is shown in the specification,
Figure FDA0002541597730000032
for the joint strength of the ω -th set of points,
Figure FDA0002541597730000033
for all joint strengths
Figure FDA0002541597730000034
Where row vectors are arranged as a row indexed by omega,
Figure FDA0002541597730000035
as a row vector
Figure FDA0002541597730000036
Initial value of f ω Is the potential energy of the point set, lambda and beta are weight coefficients, Y is a set of label vectors, Y is 0 Is the initial set of values for the tag vector.
9. The tensor representation-based dynamic hypergraph structure learning classification system of claim 8, wherein the label classification model generation unit is further configured to: calculating potential energy of the point set, wherein the potential energy is calculated according to the formula:
Figure FDA0002541597730000041
where α is the balance parameter, δ (ω) is the number of points in the set of points, y i Is a label vector, x, of the ith sample data i Is the feature vector of the ith sample data.
10. The system as claimed in claim 9, wherein the label classification model generating unit, when performing optimization solution on the dynamic hypergraph structure learning model by using an alternating optimization method, specifically includes:
fixing the label vector set Y, and iteratively updating tensor representation of the hypergraph structure by using projection gradient and a first objective function
Figure FDA0002541597730000042
The calculation formula of the hypergraph structure iterative update is as follows:
Figure FDA0002541597730000043
Figure FDA0002541597730000044
Figure FDA0002541597730000045
in the formula (I), the compound is shown in the specification,
Figure FDA0002541597730000046
in order to be a gradient of the magnetic field,
Figure FDA0002541597730000047
is tensor representation of the hypergraph structure after k iterations, h is the step length of each optimization, P is projection of the feasible set, and f is all potential energy f ω Row vectors arranged in a row by taking omega as an index;
tensor representation of a fixed hypergraph structure
Figure FDA0002541597730000048
Calculating a tag transfer function by using the second objective function;
re-fixing the labelstock set Y and tensor representation of the hypergraph structure, respectively
Figure FDA0002541597730000049
And completing optimization solution until the dynamic hypergraph structure learning model converges.
CN202010548497.6A 2020-06-16 2020-06-16 Tensor expression-based dynamic hypergraph structure learning classification method and system Active CN111695011B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010548497.6A CN111695011B (en) 2020-06-16 2020-06-16 Tensor expression-based dynamic hypergraph structure learning classification method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010548497.6A CN111695011B (en) 2020-06-16 2020-06-16 Tensor expression-based dynamic hypergraph structure learning classification method and system

Publications (2)

Publication Number Publication Date
CN111695011A CN111695011A (en) 2020-09-22
CN111695011B true CN111695011B (en) 2022-10-28

Family

ID=72481172

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010548497.6A Active CN111695011B (en) 2020-06-16 2020-06-16 Tensor expression-based dynamic hypergraph structure learning classification method and system

Country Status (1)

Country Link
CN (1) CN111695011B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113129267A (en) * 2021-03-22 2021-07-16 杭州电子科技大学 OCT image detection method and system based on retina hierarchical data
CN113254729B (en) * 2021-06-29 2021-10-08 中国科学院自动化研究所 Multi-modal evolution characteristic automatic conformal representation method based on dynamic hypergraph network
CN116030535B (en) * 2023-03-24 2023-06-20 深圳时识科技有限公司 Gesture recognition method and device, chip and electronic equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108334455A (en) * 2018-03-05 2018-07-27 清华大学 The Software Defects Predict Methods and system of cost-sensitive hypergraph study based on search
CN109766935A (en) * 2018-12-27 2019-05-17 中国石油大学(华东) A kind of semisupervised classification method based on hypergraph p-Laplacian figure convolutional neural networks
CN110298392A (en) * 2019-06-13 2019-10-01 北京工业大学 A kind of semisupervised classification method that label constraint learns from the more hypergraphs of weight

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108334455A (en) * 2018-03-05 2018-07-27 清华大学 The Software Defects Predict Methods and system of cost-sensitive hypergraph study based on search
CN109766935A (en) * 2018-12-27 2019-05-17 中国石油大学(华东) A kind of semisupervised classification method based on hypergraph p-Laplacian figure convolutional neural networks
CN110298392A (en) * 2019-06-13 2019-10-01 北京工业大学 A kind of semisupervised classification method that label constraint learns from the more hypergraphs of weight

Also Published As

Publication number Publication date
CN111695011A (en) 2020-09-22

Similar Documents

Publication Publication Date Title
CN111695011B (en) Tensor expression-based dynamic hypergraph structure learning classification method and system
CN110188228B (en) Cross-modal retrieval method based on sketch retrieval three-dimensional model
CN114841257B (en) Small sample target detection method based on self-supervision comparison constraint
CN112699247A (en) Knowledge representation learning framework based on multi-class cross entropy contrast completion coding
CN107729290B (en) Representation learning method of super-large scale graph by using locality sensitive hash optimization
CN104991974A (en) Particle swarm algorithm-based multi-label classification method
CN114329232A (en) User portrait construction method and system based on scientific research network
CN111368076B (en) Bernoulli naive Bayesian text classification method based on random forest
CN110264372B (en) Topic community discovery method based on node representation
CN113378913A (en) Semi-supervised node classification method based on self-supervised learning
CN111723914A (en) Neural network architecture searching method based on convolution kernel prediction
CN110796233A (en) Self-adaptive compression method of deep residual convolution neural network based on transfer learning
CN108647206B (en) Chinese junk mail identification method based on chaos particle swarm optimization CNN network
CN111026887B (en) Cross-media retrieval method and system
CN109558898B (en) Multi-choice learning method with high confidence based on deep neural network
CN113361283A (en) Web table-oriented paired entity joint disambiguation method
CN117152459A (en) Image detection method, device, computer readable medium and electronic equipment
Long et al. Graph-based active learning based on label propagation
CN114912458A (en) Emotion analysis method and device and computer readable medium
CN116797850A (en) Class increment image classification method based on knowledge distillation and consistency regularization
CN116883746A (en) Graph node classification method based on partition pooling hypergraph neural network
CN114861065B (en) Personalized recommendation method of cascade residual error graph convolution network based on multiple behaviors
CN110866838A (en) Network representation learning algorithm based on transition probability preprocessing
Gu et al. A cross domain feature extraction method for bearing fault diagnosis based on balanced distribution adaptation
CN113792551A (en) Named entity identification method based on hybrid transfer learning

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant