CN116257659A - Dynamic graph embedding method and system for an intelligent tutoring system - Google Patents

Dynamic graph embedding method and system for an intelligent tutoring system

Info

Publication number
CN116257659A
CN116257659A (application CN202310342163.7A)
Authority
CN
China
Prior art keywords
nodes
matrix
dynamic
embedding
node
Prior art date
Legal status
Pending
Application number
CN202310342163.7A
Other languages
Chinese (zh)
Inventor
刘三女牙
孙建文
沈筱譞
刘盛英杰
Current Assignee
Central China Normal University
Original Assignee
Central China Normal University
Priority date
Filing date
Publication date
Application filed by Central China Normal University filed Critical Central China Normal University
Priority to CN202310342163.7A priority Critical patent/CN116257659A/en
Publication of CN116257659A publication Critical patent/CN116257659A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/901 Indexing; Data structures therefor; Storage structures
    • G06F 16/9024 Graphs; Linked lists
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10 Complex mathematical operations
    • G06F 17/15 Correlation function computation including computation of convolution operations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10 Complex mathematical operations
    • G06F 17/16 Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 Administration; Management
    • G06Q 10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/10 Services
    • G06Q 50/20 Education
    • G06Q 50/205 Education administration or guidance
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Strategic Management (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Computational Mathematics (AREA)
  • Tourism & Hospitality (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Algebra (AREA)
  • Educational Technology (AREA)
  • Computing Systems (AREA)
  • Educational Administration (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Development Economics (AREA)
  • Quality & Reliability (AREA)
  • Operations Research (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Game Theory and Decision Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention relates to the technical field of intelligent tutoring, in particular to a dynamic graph embedding method and system for an intelligent tutoring system, comprising the following steps: performing time-sequence expansion on the dynamic nodes to generate time sequences of the dynamic nodes; generating a first embedding matrix based on the sequence of static nodes and the time sequences of the dynamic nodes; pooling the first embedding matrix through a transformation matrix and outputting a second embedding matrix; for each node in the second embedding matrix, acquiring the aggregated features of all nodes based on node-level and relation-level attention features and generating a final embedding matrix; constructing an objective function based on the nodes in the final embedding matrix, and predicting the learner's exercise outcomes through the objective function. On the basis of guaranteeing prediction accuracy, the invention reduces the computational scale of the model; random time-sequence pooling effectively simulates the continuous change of learners in real situations, so that a learner's performance evolves more smoothly over time.

Description

Dynamic graph embedding method and system for an intelligent tutoring system
Technical Field
The invention relates to the technical field of intelligent tutoring, in particular to a dynamic graph embedding method and system for an intelligent tutoring system.
Background
In recent years, with the continuous fusion of artificial intelligence and online education, intelligent tutoring systems (ITS) have developed rapidly. An ITS plays the role of a virtual mentor that designs a personalized learning path for each learner. A large amount of learning behavior data is generated by ITS every day, and on this basis developers build many intelligent learning services, including cognitive diagnosis, knowledge tracing, and resource recommendation, to help learners improve their learning efficiency.
Graph modeling of educational data can greatly enhance the performance of educational services. As a common data structure, the graph is widely used in many real scenarios, such as biological protein networks, information networks, and knowledge graphs. Real problems are modeled from a graph perspective, and the representation vectors of the nodes in the graph are then learned by Graph Embedding (GE) and fed to downstream machine learning tasks, thereby improving final efficiency and performance.
In the prior art, the graph model constructed from ITS data is referred to as an intelligent tutoring graph (ITG). Notably, the ITG is a heterogeneous graph, in that it contains various entities, such as resources, exercises, learners, classes, and knowledge concepts, as well as various relationships between them. Thus, the ITG is generally handled using methods applicable to heterogeneous graphs.
However, as learning progresses, a learner's knowledge state also changes and affects their interactive decisions; that is, the ITG is dynamic and cannot be adequately modeled by traditional Dynamic Graph Embedding (DGE) methods. The prior art typically models a dynamic graph as a sequence of static snapshot graphs or as a sequence of neighborhood formations, sampling random walks in temporal order. These methods simply capture the sequential evolution of static structures across the snapshot sequence, whereas the evolution of the ITG differs from these processes in that it has the characteristics of locality, independence, and smoothness.
Locality: an ITG contains not only dynamic nodes, such as learners, but also many static nodes that do not change over time; to cope with changing knowledge states, it is necessary to coordinate the invariance of the static nodes with the local evolution of the dynamic nodes, which conventional dynamic graph embedding methods do not address. Independence: the evolution processes of the nodes in an ITG are mutually independent; that is, each evolving learner has a unique timeline describing the entire learning process from start to finish. Smoothness: the evolution of each learner in the ITG is stable and smooth rather than abrupt, consistent with a learner's progressive acquisition of knowledge, whereas the objects of traditional dynamic graph modeling may be node types that do not change smoothly over time.
In summary, conventional dynamic graph embedding methods cannot accommodate the locality, independence, and smoothness of such dynamic heterogeneous graphs, so existing methods cannot model the ITG in an educational scenario.
Disclosure of Invention
The invention provides a dynamic graph embedding method and system for an intelligent tutoring system, which are used to remedy the above defects in the prior art.
The dynamic graph data embedding method of the intelligent tutoring system provided by the invention comprises the following steps:
performing time sequence expansion on dynamic nodes in a heterogeneous evolution network of the intelligent guide system to obtain an expanded graph data structure, wherein the expanded graph data structure comprises an expanded node set and an expanded edge set, and generating a time sequence of the dynamic nodes;
randomly projecting all nodes, and generating a first embedding matrix based on the sequence of the static nodes and the time sequence of the dynamic nodes;
dividing the time sequence into a plurality of continuous intervals of equal length, taking the average of the node embeddings in each interval as the embedding of that interval, and generating a transformation matrix; pooling the first embedding matrix through the transformation matrix to reduce the number of nodes, and outputting a second embedding matrix;
for each node in the second embedding matrix, acquiring the convergence characteristics of all nodes based on the node-level attention characteristics and the relationship-level attention characteristics and generating a final embedding matrix;
and performing edge reconstruction and weight regression based on the nodes in the final embedding matrix, constructing an objective function, and predicting the learner's exercise outcomes through the objective function.
According to the dynamic graph data embedding method of the intelligent tutoring system provided by the invention, performing time-sequence expansion on the dynamic nodes in the heterogeneous evolution network of the intelligent tutoring system comprises the following steps:
performing time-sequence expansion on the graph data structure $\mathcal{G}=(\mathcal{V},\varepsilon)$ of the heterogeneous evolution network:
$TE(\mathcal{G})=\mathcal{G}^{+}=(\mathcal{V}^{+},\varepsilon^{+})$
acquiring the set of expanded nodes $\mathcal{V}^{+}$ and the set of expanded edges $\varepsilon^{+}$;
creating an expansion node for each dynamic edge, changing the dynamic node connected to the dynamic edge into the corresponding expansion node, and creating expansion edges to link adjacent expansion nodes, generating the time sequence of a dynamic node $v$:
$S_{v}=\{v^{0},v^{1},\dots,v^{m}\}$
wherein $\mathcal{V}$ represents the set of initial nodes, $\varepsilon$ represents the set of initial edges, $v^{0}$ is the dynamic node at the initial moment, and $v^{1},\dots,v^{m}$ are the expansion nodes that evolve over time.
According to the dynamic graph data embedding method of the intelligent tutoring system provided by the invention, randomly projecting all nodes and generating a first embedding matrix based on the sequence of static nodes and the time sequences of dynamic nodes comprises the following steps:
the node set $\mathcal{V}$ comprises the dynamic node set $\mathcal{V}_{d}$ and the static node set $\mathcal{V}_{s}$; based on the static embedding $E_{s}$ of the static node set $\mathcal{V}_{s}$ and the dynamic embedding $E_{d}$ of the dynamic node set $\mathcal{V}_{d}$, the first embedding matrix is generated:
$E=[E_{s};E_{d}]$
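As an illustrative sketch of this step (the embedding dimension, node names, and Gaussian initializer below are our own assumptions, not fixed by the patent): the first embedding matrix holds one trainable row per static node and one per expansion node of each dynamic time sequence.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 8                                         # embedding dimension (illustrative)
static_nodes = ["e1", "e2", "c1"]             # static node set V_s
sequences = {"u1": ["u1@1", "u1@2", "u1@3"]}  # time sequence S_v per dynamic node

# one row per static node plus one row per expansion node
rows = static_nodes + [v for seq in sequences.values() for v in seq]
index = {name: i for i, name in enumerate(rows)}
# first embedding matrix E = [E_s; E_d], a trainable model parameter
E = rng.normal(scale=0.1, size=(len(rows), k))
```

In a real implementation `E` would be registered as a learnable parameter and updated by gradient descent, as the description states.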
According to the dynamic graph data embedding method of the intelligent tutoring system provided by the invention, for a given time sequence $S_{v}=\{v^{0},v^{1},\dots,v^{m}\}$, $s$ is the number of nodes in each interval, with $s>1$; the size of the first interval is preset as $s_{1}$, where $s_{1}$ is a discrete random variable obeying a uniform distribution, $s_{1}\in\{1,\dots,s\}$;
when $s_{1}$ is determined, the total number of intervals is:
$d=\left\lceil \frac{m-s_{1}}{s}\right\rceil+1$
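The interval-count formula on the page is an image, so the expression below is our reconstruction and should be treated as an assumption: the first interval holds s1 nodes and every later interval holds up to s nodes.

```python
import math

def num_intervals(m, s, s1):
    """Total number of intervals for a length-m time sequence with interval
    size s and a random first-interval size s1 in {1, ..., s}."""
    assert s > 1 and 1 <= s1 <= s
    # first interval of size s1, then ceil((m - s1) / s) intervals of size <= s
    return math.ceil((m - s1) / s) + 1 if m > s1 else 1
```

For example, a 10-node sequence with s = 4 yields 3 intervals when s1 = 4 (sizes 4, 4, 2) and 4 intervals when s1 = 1 (sizes 1, 4, 4, 1).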
According to the dynamic graph data embedding method of the intelligent tutoring system provided by the invention, outputting the second embedding matrix comprises the following steps:
generating a transformation matrix $P$, where $p_{ij}$ is the element in row $i$, column $j$ of $P$:
$p_{ij}=\begin{cases}1/|I_{i}|, & v^{j}\in I_{i}\\ 0, & \text{otherwise}\end{cases}$
where $I_{i}$ denotes the $i$-th interval; pooling the first embedding matrix through the transformation matrix to reduce the number of nodes and outputting the second embedding matrix $D$:
$D_{(s_{1}=n)}=P_{(s_{1}=n)}E$
wherein $P_{(s_{1}=n)}$ represents the transformation matrix when $s_{1}=n$ and $D_{(s_{1}=n)}$ represents the second embedding matrix when $s_{1}=n$.
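Under the same reconstructed partition rule (first interval of size s1, then intervals of size s; our reading of the imaged formulas), the pooling step can be sketched in numpy: each row of the transformation matrix averages the embeddings of one interval, so D = PE has one row per interval.

```python
import numpy as np

def transform_matrix(m, s, s1):
    """Build P with p_ij = 1/|I_i| if node j falls in interval I_i, else 0."""
    sizes = [s1] + [s] * ((m - s1 + s - 1) // s if m > s1 else 0)
    bounds, start = [], 0
    for size in sizes:
        stop = min(start + size, m)
        if start < stop:
            bounds.append((start, stop))
        start = stop
    P = np.zeros((len(bounds), m))
    for i, (a, b) in enumerate(bounds):
        P[i, a:b] = 1.0 / (b - a)   # row i averages interval I_i
    return P

rng = np.random.default_rng(0)
E = rng.normal(size=(10, 8))       # first embedding matrix for one sequence
P = transform_matrix(10, 4, 2)     # intervals of sizes 2, 4, 4
D = P @ E                          # second (pooled) embedding matrix
```

Each row of `P` sums to 1, so every pooled row is exactly the mean state of its interval.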
According to the dynamic graph data embedding method of the intelligent tutoring system provided by the invention, for each node in the second embedding matrix, node-level and relation-level attention features are calculated respectively, and the aggregated features of all nodes are acquired to generate the final embedding matrix, comprising the following steps:
let the feature of node $v_{i}$ at layer $l$ be $h_{i}^{l}$, satisfying:
$h_{i}^{l}=\sigma\Big(\sum_{r\in\mathcal{R}_{i}}\alpha_{i}^{r,l}\,h_{i}^{r,l}+b^{r,l}\Big)$
where $\alpha_{i}^{r,l}$ is the attention of $v_{i}$ at layer $l$ under relation $r$, $h_{i}^{r,l}$ is the aggregated feature at the node level, $b^{r,l}$ is a model parameter, and $\mathcal{R}_{i}$ is the set of relations of $v_{i}$; $\alpha_{i}^{r,l}$ is obtained by a softmax over the relation set $\mathcal{R}_{i}$;
$h_{i}^{r,l}$ is the weighted summation of neighbor features under relation $r$:
$h_{i}^{r,l}=\sum_{j\in\mathcal{N}_{i}^{r}}\alpha_{ij}^{r,l}\,h_{j}^{l-1}$
where $h_{j}^{l-1}$ is the feature of $v_{j}$ at the previous layer and $\mathcal{N}_{i}^{r}$ is the set of neighbors under relation $r$;
$\alpha_{ij}^{r,l}$ is the attention between node $v_{i}$ and node $v_{j}$ in the corresponding layer:
$\alpha_{ij}^{r,l}=\operatorname{softmax}_{j\in\mathcal{N}_{i}^{r}}\big(\rho(a_{r,l}^{\top}[h_{i}^{l-1}\,\|\,h_{j}^{l-1}])\big)$
where $\rho$ is a leaky rectified linear unit, the symbol $\|$ represents vector concatenation, and $a_{r,l}$ is an attention vector belonging to the model parameters;
$H^{l}=A^{l}H^{l-1}$
wherein $H^{l}$ is the aggregated feature of layer $l$, $A^{l}$ is the aggregation matrix of layer $l$, and $A_{ij}^{l}$, the element in row $i$, column $j$ of $A^{l}$, satisfies $A_{ij}^{l}=\alpha_{i}^{r,l}\,\alpha_{ij}^{r,l}$ for $v_{j}\in\mathcal{N}_{i}^{r}$; all other entries are 0;
the aggregated features of all layers are concatenated to obtain the final embedding matrix $Z$:
$Z=[H^{0},H^{1},\dots,H^{L}]$
wherein $H^{0}=D$ and $z_{i}$ is the $i$-th row of $Z$.
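The layer recursion H^l = A^l H^(l-1) and the final concatenation Z = [H^0, ..., H^L] can be sketched as below. Note the caveat: we pass a fixed row-normalized matrix in place of the learned attention entries, so this shows only the aggregation wiring, not the attention computation itself.

```python
import numpy as np

def aggregate(D, A_layers):
    """Stack aggregation layers and concatenate their features.

    D:        pooled node features, H^0 = D, shape (n, k)
    A_layers: one aggregation matrix per layer, each shape (n, n);
              in the patent these entries come from node- and
              relation-level attention, here they are given directly.
    Returns Z = [H^0, H^1, ..., H^L], shape (n, (L + 1) * k).
    """
    H = D
    features = [H]
    for A in A_layers:
        H = A @ H            # H^l = A^l H^{l-1}
        features.append(H)
    return np.concatenate(features, axis=1)

rng = np.random.default_rng(0)
D = rng.normal(size=(4, 3))
A = np.ones((4, 4)) / 4.0    # stand-in for a learned attention matrix
Z = aggregate(D, [A, A])     # L = 2 aggregation layers
```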
According to the dynamic graph data embedding method of the intelligent tutoring system provided by the invention, performing edge reconstruction and weight regression based on the nodes in the final embedding matrix, constructing an objective function, and predicting the learner's exercise outcomes through the objective function comprises the following steps:
the correlation between nodes is measured by a similarity metric: the more correlated two nodes are, the larger the inner product of their corresponding vectors; the inner product is mapped to the range [0,1] by the sigmoid function $\delta$, and the first objective function $\mathcal{L}_{r}$ is acquired:
$\mathcal{L}_{r}=-\sum_{\langle v_{i},r,v_{j}\rangle\in\varepsilon^{+}}\Big[\log\delta(z_{i}^{\top}z_{j})+M_{e}\,\mathbb{E}_{v_{n}\sim P_{n}(v)}\log\delta(-z_{i}^{\top}z_{n})\Big]$
where $P_{n}(v)$ is the noise distribution and $M_{e}$ is the negative sampling rate;
the weight regression of the edges uses a triplet $o_{ij}=\langle v_{i},v_{j},y_{ij}\rangle$ to describe the relation between nodes and predicted outcomes: $v_{i}$ represents a learner, $v_{j}$ represents an exercise, and $y_{ij}$ represents the result, where $y_{ij}=1$ indicates that the learner answered correctly and $y_{ij}=0$ indicates that the learner answered incorrectly;
$\mathcal{O}$ is the set of all triplets $o_{ij}$ in the given dataset, and the second objective function $\mathcal{L}_{w}$ is acquired:
$\mathcal{L}_{w}=-\sum_{o_{ij}\in\mathcal{O}}\big[y_{ij}\log p_{ij}+(1-y_{ij})\log(1-p_{ij})\big]$
wherein $p_{ij}$ is the probability that the learner correctly completes the exercise.
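A sketch of the two objectives under common reconstructions (the exact expressions in the source are images, so these forms are assumptions): an edge-reconstruction loss with negative sampling over inner products of the final embeddings, and binary cross-entropy over the answer triplets.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def edge_loss(Z, pos_pairs, neg_pairs):
    """Negative-sampling reconstruction loss over node-pair inner products."""
    loss = 0.0
    for i, j in pos_pairs:                    # observed edges
        loss -= np.log(sigmoid(Z[i] @ Z[j]))
    for i, j in neg_pairs:                    # pairs sampled from noise P_n(v)
        loss -= np.log(sigmoid(-(Z[i] @ Z[j])))
    return loss

def weight_loss(p, y):
    """Binary cross-entropy over triplets <v_i, v_j, y_ij>."""
    p, y = np.asarray(p, dtype=float), np.asarray(y, dtype=float)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

rng = np.random.default_rng(0)
Z = rng.normal(size=(5, 4))
L_r = edge_loss(Z, pos_pairs=[(0, 1), (1, 2)], neg_pairs=[(0, 3), (2, 4)])
L_w = weight_loss(p=[0.9, 0.2], y=[1, 0])
```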
According to the dynamic graph data embedding method of the intelligent tutoring system provided by the invention, calculating the prediction probability $p_{ij}$ comprises:
$p_{ij}=\mathrm{sigmoid}\big(W_{2}\cdot\sigma(W_{1}\cdot[z_{i}\,\|\,z_{j}]+b_{1})+b_{2}\big)$
wherein $W_{1}$, $W_{2}$, $b_{1}$, $b_{2}$ are all model parameters and $\sigma$ is a nonlinear activation function; the final objective function is obtained by:
$\mathcal{L}=\mathcal{L}_{r}+\lambda\mathcal{L}_{w}$
where $\lambda$ is a balance coefficient.
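The prediction head follows directly from the formula above; the weight shapes and the choice of ReLU for the unnamed nonlinearity σ are our own assumptions for the sketch.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict(z_i, z_j, W1, b1, W2, b2):
    """p_ij = sigmoid(W2 · sigma(W1 · [z_i || z_j] + b1) + b2)."""
    h = np.maximum(0.0, W1 @ np.concatenate([z_i, z_j]) + b1)  # sigma = ReLU here
    return sigmoid(W2 @ h + b2)

rng = np.random.default_rng(0)
k, hidden = 4, 6
W1, b1 = rng.normal(size=(hidden, 2 * k)), np.zeros(hidden)
W2, b2 = rng.normal(size=hidden), 0.0
p_ij = predict(rng.normal(size=k), rng.normal(size=k), W1, b1, W2, b2)

# final objective L = L_r + lambda * L_w, with toy loss values
lam = 0.5
L = 1.3 + lam * 0.7
```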
On the other hand, the invention also provides a dynamic graph embedding system of the intelligent tutoring system, comprising:
a time-sequence expansion module, used for performing time-sequence expansion on the dynamic nodes in the heterogeneous evolution network of the intelligent tutoring system, acquiring an expanded graph data structure comprising an expanded node set and an expanded edge set, and generating the time sequences of the dynamic nodes;
the time-sequence expansion module is also used for randomly projecting all nodes and generating a first embedding matrix based on the sequence of static nodes and the time sequences of dynamic nodes;
a random time-sequence pooling module, used for dividing the time sequence into a plurality of continuous intervals of equal length, taking the average of the node embeddings in each interval as the embedding of that interval, and generating a transformation matrix; pooling the first embedding matrix through the transformation matrix to reduce the number of nodes and outputting a second embedding matrix;
a heterogeneous aggregation network module, used for acquiring, for each node in the second embedding matrix, the aggregated features of all nodes based on node-level and relation-level attention features, and generating a final embedding matrix;
and a loss calculation module, used for performing edge reconstruction and weight regression based on the nodes in the final embedding matrix, constructing an objective function, and predicting the learner's exercise outcomes through the objective function.
The present invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the dynamic graph embedding method of the intelligent tutoring system as described in any of the above.
The dynamic graph embedding method and system of the intelligent tutoring system provided by the invention have at least the following technical effects:
(1) On the task of automatic knowledge-point tagging of test questions, the representation vectors obtained by the proposed embedding method far outperform other methods in recall and precision; the time-sequence expansion of nodes better adapts to learners' dynamic evolution over time, and dynamic connections between nodes that change smoothly over time are captured more effectively.
(2) On the task of learner performance prediction, the proposed method obtains representations with stronger predictive power; through time-sequence expansion, the performance of learners at each stage is modeled more effectively, and the heterogeneous aggregation network takes into account the links between different nodes and their influence on each node.
(3) On the basis of guaranteeing prediction accuracy, the computational scale of the model is greatly reduced; random time-sequence pooling effectively simulates the continuous change of learners in real situations, so that a learner's performance evolves more smoothly over time. The number of nodes is effectively reduced, time and memory overhead are saved, and a better experience can be provided for users.
Drawings
In order to more clearly illustrate the invention or the technical solutions of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a first schematic flow chart of the dynamic graph embedding method of the intelligent tutoring system provided by the invention;
FIG. 2 is a second schematic flow chart of the dynamic graph embedding method of the intelligent tutoring system provided by the invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that current graph data embedding methods can be classified into three types by graph type: conventional (homogeneous) graph embedding, heterogeneous graph embedding, and dynamic graph embedding.
The most intuitive conventional graph embedding method is matrix factorization, which converts a graph into a vector space in an explicit manner using the adjacency matrix. It can derive a deterministic solution, but it is difficult to apply to large-scale graphs because its complexity is cubic in the number of nodes. Random-walk-based methods, in which paths are sampled on a given graph according to certain criteria, achieve good performance in some cases; examples include DeepWalk and Node2vec. With the further fusion of deep learning and graph models, Graph Neural Networks (GNNs) have become an increasingly important means of graph embedding, generating higher-level features by aggregating neighbor information; examples include graph convolutional networks (GCNs), graph attention networks (GATs), and GraphSAGE.
heterogeneous graphs, also known as Heterogeneous Information Networks (HIN), contain many types of nodes whose characteristics may not match due to problems with different dimensions or different representations. Therefore, the GE method of the homogeneity map cannot be widely used in HIN. To solve this problem, researchers have proposed a meta-path based approach using MP2 vec.
Because of the sequential nature of dynamic graphs, DGE focuses on preserving both structural and temporal semantic information in the network, providing a new perspective for dynamic data analysis and making graph embedding more practical in the real world. Early approaches often relied on Recurrent Neural Networks (RNNs) for embedding, ignoring structural attributes. Factorization-based methods find a low-rank decomposition of a time-varying similarity measure to generate time-varying node embeddings; GNN-based methods optimize GCN parameters by capturing dynamic changes in graph sequences with RNNs.
None of the methods in the prior art can model the intelligent tutoring graph (ITG) in an educational scenario, because the ITG has locality, independence, and smoothness, and neither traditional heterogeneous methods nor traditional dynamic methods can effectively learn the dynamic evolution process of a learner in the ITG.
In one embodiment, as shown in fig. 1, the present invention provides a dynamic graph embedding method of an intelligent tutoring system, comprising the following steps:
performing time sequence expansion on dynamic nodes in a heterogeneous evolution network of the intelligent guide system to obtain an expanded graph data structure, wherein the expanded graph data structure comprises an expanded node set and an expanded edge set, and generating a time sequence of the dynamic nodes;
randomly projecting all nodes, and generating a first embedding matrix based on the sequence of the static nodes and the time sequence of the dynamic nodes;
dividing the time sequence into a plurality of continuous intervals of equal length, taking the average of the node embeddings in each interval as the embedding of that interval, and generating a transformation matrix; pooling the first embedding matrix through the transformation matrix to reduce the number of nodes, and outputting a second embedding matrix;
for each node in the second embedding matrix, acquiring the convergence characteristics of all nodes based on the node-level attention characteristics and the relationship-level attention characteristics and generating a final embedding matrix;
and performing edge reconstruction and weight regression based on the nodes in the final embedding matrix, constructing an objective function, and predicting the learner's exercise outcomes through the objective function.
It should be noted that a Heterogeneous Evolution Network (HEN) is defined as a heterogeneous information network $\mathcal{G}=(\mathcal{V},\varepsilon)$, where $\mathcal{V}$ represents the set of nodes and $\varepsilon$ represents the set of edges;
the information network $\mathcal{G}$ has a mapping from nodes to node types, $\phi:\mathcal{V}\to\mathcal{A}$, and a mapping from edges to edge types, $\psi:\varepsilon\to\mathcal{R}$, where $\mathcal{A}$ and $\mathcal{R}$ respectively represent the set of node types and the set of edge types, satisfying $|\mathcal{A}|+|\mathcal{R}|>2$;
any edge $e\in\varepsilon$ satisfies $e=\langle v_{i},r,v_{j}\rangle$; the node set can be divided into two parts, the dynamic node set $\mathcal{V}_{d}$ and the static node set $\mathcal{V}_{s}$, with $\mathcal{V}=\mathcal{V}_{d}\cup\mathcal{V}_{s}$ and $\mathcal{V}_{d}\cap\mathcal{V}_{s}=\emptyset$; if $v_{i}\in\mathcal{V}_{d}$ or $v_{j}\in\mathcal{V}_{d}$, a mapping function $\tau:\varepsilon\to\mathcal{T}$ is defined, where $\mathcal{T}$ is the set of times;
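The HEN definition above can be captured in a small data structure. The sketch below is illustrative only (class name, fields, and the toy ITG fragment are our own, not from the patent): typed nodes, typed edges, a dynamic-node subset, and a timestamp map on dynamic edges.

```python
from dataclasses import dataclass, field

@dataclass
class HEN:
    """Heterogeneous evolution network G = (V, E) with type mappings."""
    node_type: dict = field(default_factory=dict)  # phi: V -> A (node type set)
    edges: list = field(default_factory=list)      # [(v_i, r, v_j), ...]; r in R
    dynamic: set = field(default_factory=set)      # V_d; static nodes V_s = V \ V_d
    tau: dict = field(default_factory=dict)        # (v_i, r, v_j) -> t in T

    def is_heterogeneous(self):
        # the definition requires |A| + |R| > 2
        types = set(self.node_type.values())
        rels = {r for (_, r, _) in self.edges}
        return len(types) + len(rels) > 2

# toy ITG fragment: one learner (dynamic), one exercise and one concept (static)
g = HEN()
g.node_type = {"u1": "learner", "e1": "exercise", "c1": "concept"}
g.dynamic = {"u1"}
g.edges = [("u1", "answers", "e1"), ("e1", "covers", "c1")]
g.tau = {("u1", "answers", "e1"): 3}   # only the learner's edge carries a time
```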
the dynamic graph embedding method of the intelligent tutoring system is essentially a time-sequence-expanded graph neural network and mainly comprises three steps: time-sequence expansion, random time-sequence pooling, and heterogeneous aggregation;
in one embodiment, time-sequence expansion is performed on the original graph data structure, so that the nodes and edges of the original graph are expanded, comprising the following steps:
for a given heterogeneous information network $\mathcal{G}=(\mathcal{V},\varepsilon)$, TE is defined as $TE(\mathcal{G})=\mathcal{G}^{+}=(\mathcal{V}^{+},\varepsilon^{+})$, where $\mathcal{G}^{+}$ represents the expanded graph, and $\mathcal{V}^{+}$ and $\varepsilon^{+}$ respectively represent the set of expanded nodes and the set of expanded edges;
an expansion node is created for each dynamic edge, representing a transition state in the evolution; the connection of the dynamic edge is changed from the dynamic node to its expansion node, and expansion edges are constructed to connect adjacent expansion nodes to each other, indicating the continuity of nodes and edges over the evolution process, i.e., satisfying the following conditions:
for any dynamic node $v\in\mathcal{V}_{d}$ there are expansion nodes $v^{1},\dots,v^{m}\in\mathcal{V}^{+}$; if $\langle v_{i},r,v_{j}\rangle\in\varepsilon$ and $v_{j}\in\mathcal{V}_{d}$ with $t=\tau(\langle v_{i},r,v_{j}\rangle)$, then $\langle v_{i},r,v_{j}^{t}\rangle\in\varepsilon^{+}$; if $v_{i},v_{j}\in\mathcal{V}_{s}$, then $\langle v_{i},r,v_{j}\rangle\in\varepsilon^{+}$;
for any $v\in\mathcal{V}_{d}$ there is a time sequence corresponding to node $v$, represented as $S_{v}=\{v^{0},v^{1},\dots,v^{m}\}$;
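As a concrete illustration of the TE operator, the sketch below expands each dynamic edge into a timestamped copy of its dynamic endpoint and chains adjacent copies with expansion edges. The function name, the `"v@t"` naming scheme, and the `"next"` relation are our own conventions; the patent does not fix an API.

```python
def expand(edges, dynamic, tau):
    """Time-sequence expansion TE(G) = G+ = (V+, E+).

    edges:   list of (v_i, r, v_j) triples
    dynamic: set of dynamic nodes V_d
    tau:     dict mapping each edge touching a dynamic node to a timestamp
    Returns (nodes_plus, edges_plus, sequences), where sequences[v] is the
    time sequence S_v of expansion nodes for dynamic node v.
    """
    edges_plus, sequences = [], {}
    for (vi, r, vj) in edges:
        t = tau.get((vi, r, vj))
        # rename each dynamic endpoint to its expansion node "v@t"
        vi2 = f"{vi}@{t}" if vi in dynamic else vi
        vj2 = f"{vj}@{t}" if vj in dynamic else vj
        for v, v2 in ((vi, vi2), (vj, vj2)):
            if v in dynamic:
                seq = sequences.setdefault(v, [])
                if v2 not in seq:
                    seq.append(v2)
        edges_plus.append((vi2, r, vj2))
    # chain adjacent expansion nodes so the evolution stays continuous
    for v, seq in sequences.items():
        seq.sort(key=lambda name: int(name.split("@")[1]))
        for a, b in zip(seq, seq[1:]):
            edges_plus.append((a, "next", b))
    nodes_plus = {v for (a, _, b) in edges_plus for v in (a, b)}
    return nodes_plus, edges_plus, sequences

# learner u1 answers two exercises at t=1 and t=2; the exercises are static
nodes, eplus, seqs = expand(
    edges=[("u1", "answers", "e1"), ("u1", "answers", "e2")],
    dynamic={"u1"},
    tau={("u1", "answers", "e1"): 1, ("u1", "answers", "e2"): 2},
)
```

After expansion the original dynamic node no longer appears directly; only its timestamped expansion nodes do, linked in temporal order.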
based on the static embedding $E_{s}$ of the static node set $\mathcal{V}_{s}$ and the dynamic embedding $E_{d}$ of the dynamic node set $\mathcal{V}_{d}$, the first embedding matrix is generated:
$E=[E_{s};E_{d}]$
where $E_{s}$ represents the embedding of the static nodes and $E_{d}$ represents the embedding of the time sequences of the dynamic nodes; $E$ is a parameter of the model and is gradually optimized by gradient descent;
because a large number of new nodes are created after time-sequence expansion, and each node needs to be represented by a unique vector, the model acquires a large number of parameters; training these large-scale parameters entails excessive computation, adds extra time overhead to model training, places higher requirements on hardware, and requires extra memory to store a large number of variables in the heterogeneous aggregation network;
in one embodiment, in order to reduce the number of nodes, cut down computation, avoid overfitting and enlarge the receptive field, the method further comprises a downsampling method based on random pooling:
the time sequence S_v is divided into a plurality of consecutive intervals of equal length, and the average value of the nodes in each interval is taken as the embedding of that interval, so that each interval represents the average state of its time period;
through the random timing pooling operation, the training scale is greatly reduced and the sequence is smoothed, so that it remains continuous while the main information is preserved:

D = PE

the acquired first embedding matrix E is transformed into the second embedding matrix D through the transformation matrix P, where N represents the total number of nodes after expansion and P ∈ R^(N'×N), N' being the reduced number of nodes; P can be expressed in the block-diagonal form

P = diag(I, P̃)

where I is the identity matrix, leaving the static nodes unchanged, and P̃ is the transformation matrix acting on the time sequence of a dynamic node, satisfying P̃ ∈ R^(d×m), wherein d represents the number of intervals and m the length of the sequence;
For a given time sequence S_v = (v^0, v^1, …, v^m), s is the number of nodes in each interval, and s > 1;

Preferably, since m/s cannot be guaranteed to be an integer, there may be an interval containing fewer than s nodes; the size of the first interval is preset as s_1, where s_1 is a discrete random variable obeying the uniform distribution, s_1 ∈ {1, …, s};

once s_1 is determined, the total number of all intervals is determined:

d = ⌈(m − s_1)/s⌉ + 1

p_ij is the value in the i-th row and j-th column of P̃:

p_ij = 1/n_i if the j-th element of the sequence falls into the i-th interval, n_i being the number of elements of that interval, and p_ij = 0 otherwise;
because s_1 is a random variable, it is regenerated for every time sequence before each gradient descent step during training; different values of s_1 are therefore taken over the course of training, so that in the end each node can learn a unique representation. Conversely, if s_1 were always fixed to one specific value, the time sequence expansion would become meaningless, and the nodes of each interval would collapse to the same representation;
wherein P̃^(s_1=n) denotes the transformation matrix when s_1 = n, and D^(s_1=n) denotes the corresponding second embedding matrix, i.e. D^(s_1=n) = P̃^(s_1=n)·E.

Combining all D^(s_1=n), the following formula applies:

[D^(s_1=1); D^(s_1=2); …; D^(s_1=s)] = [P̃^(s_1=1); P̃^(s_1=2); …; P̃^(s_1=s)]·E

As long as the stacked matrix [P̃^(s_1=1); …; P̃^(s_1=s)] has full column rank, each node can learn a unique representation; obviously, its rows can be reordered into an upper triangular form, i.e. it is full rank.
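The interval construction and the full-rank argument can be checked numerically. The sketch below (illustrative sizes; uniform average pooling within each interval, as described above) builds P̃ for every s_1 and stacks them:

```python
import numpy as np

def pooling_matrix(m, s, s1):
    """Random timing pooling matrix P~ of shape (d, m): the first interval
    holds s1 sequence positions, later intervals hold s each (the last may
    be shorter); each row averages the positions of one interval."""
    bounds = [0, s1]
    while bounds[-1] < m:
        bounds.append(min(bounds[-1] + s, m))
    P = np.zeros((len(bounds) - 1, m))
    for i in range(len(bounds) - 1):
        lo, hi = bounds[i], bounds[i + 1]
        P[i, lo:hi] = 1.0 / (hi - lo)   # average of the nodes in interval i
    return P

# Stacking the matrices for every s1 in {1, ..., s}: if the stack has full
# column rank, each sequence position keeps a distinguishable image.
m, s = 7, 3
stacked = np.vstack([pooling_matrix(m, s, s1) for s1 in range(1, s + 1)])
```

For m = 7, s = 3 each P̃ has d = ⌈(7 − s_1)/3⌉ + 1 = 3 rows, and the 9×7 stack indeed has rank 7, matching the full-rank claim.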
The random timing pooling operation described above is applied only to the extension nodes among the dynamic nodes, while the static nodes remain unchanged; since extension nodes are typically far more numerous than static nodes, random timing pooling can cut down the total number of nodes several-fold;
in one embodiment, after the random timing pooling operation, a heterogeneous aggregation operation is performed, comprising:

it should be noted that the attributes of a node consist of two parts, the node's own features and the features of the environment in which the node is located; the latter can be constructed by feature aggregation. In a heterogeneous graph, the diversity of nodes and edges makes the aggregation process challenging: any node may be connected to several different types of edges, giving rise to a variety of relationships;
thus, attention at both the node level and the relationship level is considered when performing the aggregation operation. Let node v_i's feature at layer l be h_i^l; it satisfies the following formula:

h_i^l = Σ_{r∈R_i} α_{i,r}^l · ĥ_{i,r}^l

where α_{i,r}^l is v_i's attention at layer l under relationship r, and ĥ_{i,r}^l is the aggregated feature at the node level; α_{i,r}^l is calculated from b_{r,l}, a parameter of the model:

α_{i,r}^l = exp(b_{r,l}) / Σ_{k∈R_i} exp(b_{k,l})

where R_i is v_i's set of relationships;
thus, if a relationship does not belong to this set, the corresponding attention is taken to be 0. b_{k,l} corresponds to only one relationship, not to a specific node, and relationships are directional; for example, the attention knowledge pays to exercises may differ from the attention exercises pay to knowledge;
further:

ĥ_{i,r}^l = Σ_{j∈N_i^r} β_{ij}^l · h_j^{l-1}

where ĥ_{i,r}^l is the weighted summation of neighbor features under relationship r, h_j^{l-1} is v_j's feature at the previous layer, and N_i^r is the set of neighbors under relationship r; β_{ij}^l is the attention between node v_i and node v_j at the corresponding layer:

β_{ij}^l = exp(ρ(a_{r,l}^T [h_i^{l-1} || h_j^{l-1}])) / Σ_{k∈N_i^r} exp(ρ(a_{r,l}^T [h_i^{l-1} || h_k^{l-1}]))

where ρ is a leaky rectified linear unit, i.e. LeakyReLU; the symbol || denotes concatenation of vectors, and a_{r,l} is an attention vector belonging to the model parameters.
In matrix form:

H^l = A^l · H^(l-1)

wherein H^l is the aggregated feature matrix of layer l and A^l is the aggregation matrix of layer l, whose entry A_{ij}^l in the i-th row and j-th column satisfies A_{ij}^l = Σ_{r: j∈N_i^r} α_{i,r}^l · β_{ij}^l, all other, non-existent, entries being 0;
the aggregated features of all layers are concatenated to derive the final representation matrix, i.e. the final embedding matrix Z, so that information at every level is contained:

Z = [H^0, H^1, …, H^L];

wherein H^0 = D and z_i is the i-th row of Z;
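A runnable sketch of one such aggregation layer is given below. This is a NumPy toy, not the patented implementation: the softmax forms for the relation-level attention α (over b_{r,l}) and the GAT-style node-level attention β are the standard ones implied above, and all data shapes are illustrative.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def leaky_relu(x, slope=0.01):
    return np.where(x > 0, x, slope * x)

def aggregate_layer(H_prev, neighbors, b_rel, a_rel):
    """One heterogeneous aggregation layer.

    neighbors[r][i] lists v_i's neighbors under relation r; b_rel[r] is the
    relation-level logit b_{r,l}; a_rel[r] is the node-level attention
    vector a_{r,l} of length 2h."""
    H = np.zeros_like(H_prev)
    for i in range(H_prev.shape[0]):
        rels = [r for r in neighbors if i in neighbors[r]]
        if not rels:
            continue                      # no incident edges: feature stays 0
        alpha = softmax(np.array([b_rel[r] for r in rels]))  # relation level
        for w_r, r in zip(alpha, rels):
            nbrs = neighbors[r][i]
            scores = np.array([
                leaky_relu(a_rel[r] @ np.concatenate([H_prev[i], H_prev[j]]))
                for j in nbrs])
            beta = softmax(scores)        # node level (GAT-style)
            H[i] += w_r * (beta[:, None] * H_prev[nbrs]).sum(axis=0)
    return H
```

Because β sums to one within each relation, the output for a node with a single relation is a convex combination of its neighbors' previous-layer features.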
in one embodiment, after heterogeneous aggregation, outputting the objective function comprises:

an objective function for reconstructing edges during graph learning, i.e. a first objective function L_e for edge reconstruction, and an objective function for fitting the weights of the edges, i.e. a second objective function L_w for weight regression.

The correlation between nodes is measured by a similarity measure: the larger the inner product of two vectors, the higher the probability that an edge exists between them. Since the range of the inner product is all real numbers, the inner product is mapped to [0,1] by a sigmoid function δ. The obtained objective function L_e is:

L_e = − Σ_{⟨v_i,r,v_j⟩∈ε̃} [ log δ(z_i·z_j) + Σ_{k=1}^{M_e} E_{v_k∼P_n(v)} log δ(−z_i·z_k) ]

wherein P_n(v) is the noise distribution and M_e is the negative-example sampling rate.
Performing edge weight regression to obtain the objective function L_w for fitting the weights of the edges comprises:

a triplet o_ij = ⟨v_i, v_j, y_ij⟩ is used to describe the relationship between nodes and the predicted outcome: v_i represents a learner, v_j represents an exercise, and y_ij represents the result, where y_ij = 1 indicates that the learner answered correctly and y_ij = 0 indicates that the learner answered incorrectly;

letting O be the set of all triplets o_ij in the given dataset, the second objective function is acquired:

L_w = −(1/|O|) Σ_{o_ij∈O} [ y_ij·log p_ij + (1 − y_ij)·log(1 − p_ij) ]

wherein p_ij is the probability of the learner correctly completing the exercise; this second objective function is the cross-entropy loss of the binary prediction problem.
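The cross-entropy above is a one-liner in practice; the sketch below is an illustrative stand-in with hypothetical names.

```python
import numpy as np

def weight_loss(p, y):
    """Second objective L_w: cross-entropy of the binary prediction problem
    over all triplets o_ij (p: predicted probabilities, y: 0/1 labels)."""
    p = np.asarray(p, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))
```

For two maximally uncertain predictions (p = 0.5 against labels 1 and 0) the loss equals log 2, and it falls as predictions sharpen toward the labels.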
preferably, an intuitive way to generate the prediction probability p_ij is as follows: the vectors of the two nodes are concatenated, and the prediction probability is derived directly using a multi-layer perceptron (MLP) predictor:

p_ij = sigmoid(W_2·σ(W_1·[z_i || z_j] + b_1) + b_2);

wherein W_1, W_2, b_1, b_2 are parameters of the model and σ is a nonlinear activation function (tanh is used here);
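The MLP predictor maps directly onto a few lines of NumPy. Random parameters stand in for the trained W_1, W_2, b_1, b_2; the sizes are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict_p(z_i, z_j, W1, b1, W2, b2):
    """p_ij = sigmoid(W2 . sigma(W1 . [z_i || z_j] + b1) + b2), sigma = tanh."""
    hidden = np.tanh(W1 @ np.concatenate([z_i, z_j]) + b1)
    return float(sigmoid(W2 @ hidden + b2))

rng = np.random.default_rng(0)
h, k = 4, 6   # embedding and hidden sizes (illustrative)
p = predict_p(rng.standard_normal(h), rng.standard_normal(h),
              rng.standard_normal((k, 2 * h)), rng.standard_normal(k),
              rng.standard_normal(k), rng.standard_normal())
```

Whatever the parameters, the final sigmoid keeps p_ij inside (0, 1), as a probability requires.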
the final objective function is obtained as follows:

L = L_e + λ·L_w

where λ is a balance coefficient;
in order to minimize the variance of parameter updates and stabilize the convergence process while reducing storage overhead, batch training is performed: the objective is built by drawing a specified number of edges and triplets per step, which ensures that all edges and triplets take part in the training process after several rounds of gradient descent.
On the other hand, the invention also provides a dynamic graph embedding system of the intelligent learning guidance system; the dynamic graph embedding system described below and the dynamic graph embedding method described above may be referred to in correspondence with each other. The system specifically comprises a timing expansion module, a random timing pooling module, a heterogeneous aggregation network module and a loss calculation module:
the timing sequence expansion module is used for performing timing sequence expansion on dynamic nodes in a heterogeneous evolution network of the intelligent guide system, acquiring an expanded graph data structure, comprising an expanded node set and an expanded edge set, and generating a timing sequence of the dynamic nodes;
the time sequence expansion module is also used for carrying out random projection on all the nodes and generating a first embedded matrix based on the sequence of the static nodes and the time sequence of the dynamic nodes;
the random time sequence pooling module is used for dividing the time sequence into a plurality of consecutive intervals of equal length and taking the average value of the nodes in each interval as the embedding of that interval, so as to generate a transformation matrix; the first embedding matrix is pooled through the transformation matrix to reduce the number of nodes, and a second embedding matrix is output;
the heterogeneous convergence network module is used for acquiring convergence characteristics of all nodes based on the node-level attention characteristics and the relationship-level attention characteristics for each node in the second embedding matrix and generating a final embedding matrix;
the loss calculation module is used for carrying out edge reconstruction and weight regression based on the nodes in the final embedding matrix, constructing an objective function, and predicting the learner's training result through the objective function.
In another aspect, the present invention also provides a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions which, when executed by a computer, are capable of performing the steps of the method for embedding a dynamic diagram of an intelligent learning guide system provided by the above methods.
In yet another aspect, the present invention further provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, is implemented to perform the steps of the method for dynamic graph embedding of the intelligent guide system provided above.
The apparatus embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement this without undue burden.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course may be implemented by means of hardware. Based on this understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A dynamic graph data embedding method of an intelligent learning guidance system, characterized by comprising the following steps:
performing time sequence expansion on dynamic nodes in a heterogeneous evolution network of the intelligent guide system to obtain an expanded graph data structure, wherein the expanded graph data structure comprises an expanded node set and an expanded edge set, and generating a time sequence of the dynamic nodes;
randomly projecting all nodes, and generating a first embedding matrix based on the sequence of the static nodes and the time sequence of the dynamic nodes;
dividing the time sequence into a plurality of consecutive intervals of equal length, and taking the average value of the nodes in each interval as the embedding of that interval to generate a transformation matrix; pooling the first embedding matrix through the transformation matrix to reduce the number of nodes, and outputting a second embedding matrix;
for each node in the second embedding matrix, acquiring the convergence characteristics of all nodes based on the node-level attention characteristics and the relationship-level attention characteristics and generating a final embedding matrix;
and carrying out edge reconstruction and weight regression based on the nodes in the final embedding matrix, constructing an objective function, and predicting the learner's training result through the objective function.
2. The dynamic graph data embedding method of the intelligent learning guidance system according to claim 1, wherein performing time sequence expansion on the dynamic nodes in the heterogeneous evolution network of the intelligent learning guidance system comprises:

performing time sequence expansion on the graph data structure G = (V, ε) of the heterogeneous evolution network:

G̃ = (Ṽ, ε̃)

acquiring the set of expansion nodes Ṽ and the set of expanded edges ε̃;

creating an expansion node for each dynamic edge, changing the dynamic node connected to the dynamic edge into the corresponding expansion node, creating expansion edges to link adjacent expansion nodes, and generating the time sequence of each dynamic node:

S_v = (v^0, v^1, …, v^m)

wherein V represents the set of initial nodes, ε represents the set of initial edges, v^0 is the dynamic node at the initial moment, and v^t (t ≥ 1) is an expansion node that evolves over time.
3. The dynamic graph data embedding method of the intelligent learning guidance system according to claim 2, wherein randomly projecting all nodes and generating a first embedding matrix based on the sequence of the static nodes and the time sequences of the dynamic nodes comprises:

the node set Ṽ comprises the dynamic node set V_d and the static node set V_s;

a static embedding E_s is generated based on the static node set V_s, and a dynamic embedding E_d is generated based on the dynamic node set V_d; the first embedding matrix is formed based on the static embedding and the dynamic embedding:

E = [E_s; E_d].
4. The dynamic graph data embedding method of the intelligent learning guidance system according to claim 3, wherein, for a given time sequence S_v, s is the number of nodes in each interval, with s > 1; the size of the first interval is preset as s_1, where s_1 is a discrete random variable obeying the uniform distribution, s_1 ∈ {1, …, s};

when s_1 is determined, the total number of all intervals is:

d = ⌈(m − s_1)/s⌉ + 1.
5. The dynamic graph data embedding method of the intelligent learning guidance system according to claim 4, wherein outputting the second embedding matrix comprises:

generating a transformation matrix P̃ ∈ R^(d×m), where p_ij is the value in the i-th row and j-th column of P̃:

p_ij = 1/n_i if the j-th element of the sequence falls into the i-th interval, n_i being the number of elements of that interval, and p_ij = 0 otherwise;

pooling the first embedding matrix through the transformation matrix to reduce the number of nodes, and outputting the second embedding matrix:

D^(s_1=n) = P̃^(s_1=n)·E

wherein P̃^(s_1=n) denotes the transformation matrix when s_1 = n, and D^(s_1=n) denotes the second embedding matrix when s_1 = n.
6. The dynamic graph data embedding method of the intelligent learning guidance system according to claim 5, wherein, for each node in the second embedding matrix, calculating the node-level attention feature and the relationship-level attention feature and acquiring the aggregated features of all nodes to generate the final embedding matrix comprises:

let node v_i's feature at layer l be h_i^l; it satisfies the following formula:

h_i^l = Σ_{r∈R_i} α_{i,r}^l · ĥ_{i,r}^l

where α_{i,r}^l is v_i's attention at layer l under relationship r, ĥ_{i,r}^l is the aggregated feature at the node level, and b_{r,l} is a parameter of the model; α_{i,r}^l is:

α_{i,r}^l = exp(b_{r,l}) / Σ_{k∈R_i} exp(b_{k,l})

where R_i is v_i's set of relationships;

ĥ_{i,r}^l = Σ_{j∈N_i^r} β_{ij}^l · h_j^{l-1}

where ĥ_{i,r}^l is the weighted summation of neighbor features under relationship r, h_j^{l-1} is v_j's feature at the previous layer, and N_i^r is the set of neighbors under relationship r; β_{ij}^l is the attention between node v_i and node v_j at the corresponding layer:

β_{ij}^l = exp(ρ(a_{r,l}^T [h_i^{l-1} || h_j^{l-1}])) / Σ_{k∈N_i^r} exp(ρ(a_{r,l}^T [h_i^{l-1} || h_k^{l-1}]))

where ρ is a leaky rectified linear unit, the symbol || denotes concatenation of vectors, and a_{r,l} is an attention vector belonging to the model parameters;

H^l = A^l · H^(l-1)

wherein H^l is the aggregated feature matrix of layer l, A^l is the aggregation matrix of layer l, whose entry A_{ij}^l in the i-th row and j-th column satisfies A_{ij}^l = Σ_{r: j∈N_i^r} α_{i,r}^l · β_{ij}^l, all other, non-existent, entries being 0;

the aggregated features of all layers are concatenated to obtain the final embedding matrix Z:

Z = [H^0, H^1, …, H^L];

wherein H^0 = D and z_i is the i-th row of Z.
7. The dynamic graph data embedding method of the intelligent learning guidance system according to claim 6, wherein performing edge reconstruction and weight regression based on the nodes in the final embedding matrix to construct the objective function, and predicting the learner's training result through the objective function, comprises:

the correlation between nodes is measured by a similarity measure, the larger the inner product of the corresponding vectors the higher the probability that an edge exists, and the inner product is mapped to the range [0,1] by a sigmoid function δ, acquiring the first objective function L_e:

L_e = − Σ_{⟨v_i,r,v_j⟩∈ε̃} [ log δ(z_i·z_j) + Σ_{k=1}^{M_e} E_{v_k∼P_n(v)} log δ(−z_i·z_k) ]

where P_n(v) is the noise distribution and M_e is the negative sampling rate;

performing weight regression of the edges by using a triplet o_ij = ⟨v_i, v_j, y_ij⟩ to describe the relationship between nodes and the predicted outcome:

v_i represents a learner, v_j represents an exercise, and y_ij represents the predicted result; y_ij = 1 indicates that the learner answered correctly, and y_ij = 0 indicates that the learner answered incorrectly;

letting O be the set of all triplets o_ij in the given dataset, the second objective function is acquired:

L_w = −(1/|O|) Σ_{o_ij∈O} [ y_ij·log p_ij + (1 − y_ij)·log(1 − p_ij) ]

wherein p_ij is the probability of the learner correctly completing the exercise.
8. The dynamic graph data embedding method of the intelligent learning guidance system according to claim 7, wherein, in performing edge reconstruction and weight regression based on the nodes in the final embedding matrix to construct the objective function and predicting the learner's result through the objective function, calculating the prediction probability p_ij comprises:

p_ij = sigmoid(W_2·σ(W_1·[z_i || z_j] + b_1) + b_2);

wherein W_1, W_2, b_1, b_2 are parameters of the model and σ is a nonlinear activation function; the final objective function is obtained as follows:

L = L_e + λ·L_w

where λ is the balance coefficient.
9. A dynamic graph embedding system of an intelligent learning guidance system, comprising:
the time sequence expansion module is used for performing time sequence expansion on the dynamic nodes in the heterogeneous evolution network of the intelligent guide system, acquiring an expanded graph data structure, comprising an expanded node set and an expanded edge set, and generating a time sequence of the dynamic nodes;
the time sequence expansion module is also used for carrying out random projection on all the nodes and generating a first embedded matrix based on the sequence of the static nodes and the time sequence of the dynamic nodes;
the random time sequence pooling module is used for dividing the time sequence into a plurality of consecutive intervals of equal length and taking the average value of the nodes in each interval as the embedding of that interval, so as to generate a transformation matrix; the first embedding matrix is pooled through the transformation matrix to reduce the number of nodes, and a second embedding matrix is output;
the heterogeneous convergence network module is used for acquiring convergence characteristics of all nodes based on the node-level attention characteristics and the relationship-level attention characteristics for each node in the second embedding matrix and generating a final embedding matrix;
and the loss calculation module is used for carrying out edge reconstruction and weight regression based on the nodes in the final embedding matrix, constructing an objective function, and predicting the learner's training result through the objective function.
10. A non-transitory computer readable storage medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the steps of the dynamic graph data embedding method of the intelligent learning guidance system according to any one of claims 1 to 8.
CN202310342163.7A 2023-03-31 2023-03-31 Dynamic diagram embedding method and system of intelligent learning guiding system Pending CN116257659A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310342163.7A CN116257659A (en) 2023-03-31 2023-03-31 Dynamic diagram embedding method and system of intelligent learning guiding system


Publications (1)

Publication Number Publication Date
CN116257659A true CN116257659A (en) 2023-06-13

Family

ID=86682639

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310342163.7A Pending CN116257659A (en) 2023-03-31 2023-03-31 Dynamic diagram embedding method and system of intelligent learning guiding system

Country Status (1)

Country Link
CN (1) CN116257659A (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112215337A (en) * 2020-09-30 2021-01-12 江苏大学 Vehicle trajectory prediction method based on environment attention neural network model
CN112395466A (en) * 2020-11-27 2021-02-23 上海交通大学 Fraud node identification method based on graph embedded representation and recurrent neural network
CN112732932A (en) * 2021-01-08 2021-04-30 西安烽火软件科技有限公司 User entity group recommendation method based on knowledge graph embedding
CN112801355A (en) * 2021-01-20 2021-05-14 南京航空航天大学 Data prediction method based on multi-graph fusion space-time attention of long-short-term space-time data
CN113095439A (en) * 2021-04-30 2021-07-09 东南大学 Heterogeneous graph embedding learning method based on attention mechanism
CN113312498A (en) * 2021-06-09 2021-08-27 上海交通大学 Text information extraction method for embedding knowledge graph by undirected graph
US20220038341A1 (en) * 2015-09-11 2022-02-03 Ayasdi Ai Llc Network representation for evolution of clusters and groups
US20220058221A1 (en) * 2015-11-25 2022-02-24 Steven Ganz Methods to Support Version Control and Conflict Resolution in a Database using a Hierarchical Log
CN114553718A (en) * 2022-02-20 2022-05-27 武汉大学 Network traffic matrix prediction method based on self-attention mechanism
CN114626618A (en) * 2022-03-17 2022-06-14 南开大学 Student class withdrawal behavior interpretable prediction method based on self-attention mechanism
CN115082147A (en) * 2022-06-14 2022-09-20 华南理工大学 Sequence recommendation method and device based on hypergraph neural network
CN115082896A (en) * 2022-06-28 2022-09-20 东南大学 Pedestrian trajectory prediction method based on topological graph structure and depth self-attention network
CN115329959A (en) * 2022-07-19 2022-11-11 华中师范大学 Learning target recommendation method based on double-flow knowledge embedded network
CN115545155A (en) * 2022-09-21 2022-12-30 华中师范大学 Multi-level intelligent cognitive tracking method and system, storage medium and terminal


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
孙建文 et al., "Deep Cognitive Tracing Method Based on a Multi-Order Fitting Mechanism", Modern Educational Technology (《现代教育技术》), pages 103-109
李智杰 et al., "An Improved Graph Attention Mechanism Model for Graph Embedding", Computer Engineering and Applications (《计算机工程与应用》), pages 152-158

Similar Documents

Publication Publication Date Title
US20230153622A1 (en) Method, Apparatus, and Computing Device for Updating AI Model, and Storage Medium
CN112116090B (en) Neural network structure searching method and device, computer equipment and storage medium
CN111104595A (en) Deep reinforcement learning interactive recommendation method and system based on text information
CN109120462A (en) Prediction technique, device and the readable storage medium storing program for executing of opportunistic network link
CN112967088A (en) Marketing activity prediction model structure and prediction method based on knowledge distillation
CN110889450B (en) Super-parameter tuning and model construction method and device
CN114639483A (en) Electronic medical record retrieval method and device based on graph neural network
CN111967271A (en) Analysis result generation method, device, equipment and readable storage medium
CN111695046A (en) User portrait inference method and device based on spatio-temporal mobile data representation learning
CN114358250A (en) Data processing method, data processing apparatus, computer device, medium, and program product
CN114637911A (en) Next interest point recommendation method of attention fusion perception network
US20230206054A1 (en) Expedited Assessment and Ranking of Model Quality in Machine Learning
CN113065321B (en) User behavior prediction method and system based on LSTM model and hypergraph
CN114861917A (en) Knowledge graph inference model, system and inference method for Bayesian small sample learning
CN114493674A (en) Advertisement click rate prediction model and method
CN110866866B (en) Image color imitation processing method and device, electronic equipment and storage medium
CN113408721A (en) Neural network structure searching method, apparatus, computer device and storage medium
CN116975686A (en) Method for training student model, behavior prediction method and device
CN117834852A (en) Space-time video quality evaluation method based on cross-attention multi-scale visual transformer
CN116257659A (en) Dynamic diagram embedding method and system of intelligent learning guiding system
WO2022127603A1 (en) Model processing method and related device
CN116976402A (en) Training method, device, equipment and storage medium of hypergraph convolutional neural network
CN117010480A (en) Model training method, device, equipment, storage medium and program product
CN115564532A (en) Training method and device of sequence recommendation model
CN115908600A (en) Massive image reconstruction method based on prior regularization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination