CN114819070A - Timing sequence data self-adaptive credible sampling method based on graph neural network - Google Patents

Timing sequence data self-adaptive credible sampling method based on graph neural network

Info

Publication number
CN114819070A
Authority
CN
China
Prior art keywords
node
nodes
neural network
graph neural
leaf
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210378197.7A
Other languages
Chinese (zh)
Inventor
李天泉
史晓雨
尚明生
陈浩
熊飞
罗元平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Kangzhou Big Data Group Co ltd
Original Assignee
Chongqing Kangzhou Big Data Group Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Kangzhou Big Data Group Co ltd filed Critical Chongqing Kangzhou Big Data Group Co ltd
Priority to CN202210378197.7A priority Critical patent/CN114819070A/en
Publication of CN114819070A publication Critical patent/CN114819070A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a time-series data adaptive credible sampling method based on a graph neural network, which comprises the following steps: constructing a graph neural network G based on the time-series data; extracting the two-layer tree topology connection structure of each node; calculating the similarity between all nodes of the same type; calculating the credibility of the connected leaf nodes in the two-layer tree topology connection structure of all root nodes; raising the credibility of all root nodes above a threshold; randomly sampling the data of the root node's leaf nodes in proportion to their credibility and taking the sampling result as state information; inputting the state information into the graph neural network to update and aggregate the states and output the final node representations; and concatenating the node representations generated by the graph neural network G, performing score prediction with an attention mechanism, and computing the MSE loss against the real scores. The credibility of the neighbor nodes of the graph neural network is analyzed through the normalized timestamp information of the time series, and the topology of the sampled nodes is adaptively adjusted according to this credibility, so that the time series can be predicted accurately.

Description

Timing sequence data self-adaptive credible sampling method based on graph neural network
Technical Field
The invention belongs to the field of deep learning, and in particular relates to an adaptive credible sampling method for time-series data based on a graph neural network.
Background
Graph neural networks (GNNs) are deep learning models for graph-structured data. They perform well on many graph-based machine learning tasks, offer good interpretability, and have become a widely used graph representation learning method in recent years. For example, a GNN can learn effective user and item representations from a user-item interaction network for use in a recommendation system. The core idea of a GNN is to learn a mapping from the features of a node and its neighboring nodes in the graph structure to a low-dimensional hidden feature representation of that node, which involves sampling the nodes.
Many existing methods use uniform sampling, which may omit important neighbor nodes, so that some important structural features of the graph are discarded during sampling and the data source of the graph neural network cannot be guaranteed. In addition, most methods neither consider the temporal characteristics of the structure nor exploit the shallow structural features output by the intermediate layers of the graph neural network, which limits the practical effectiveness of graph neural networks to a certain extent.
In recent years, much work has studied applying graph neural networks to recommendation systems, but many approaches still have the following problems:
1. uniform sampling of neighbor nodes may discard the information provided by important neighbors and the important information of other nodes such as sub-neighbor nodes;
2. no suitable learnable parameters are provided for score prediction, which limits the performance of the graph neural network;
3. there is no adaptive neighbor structure;
4. the shallow structural features output by the intermediate layers of the graph neural network are ignored.
Disclosure of Invention
In view of the above-mentioned shortcomings of the prior art, it is an object of the present invention to provide an adaptive credible sampling method for time series data based on graph neural network, so as to solve at least one of the shortcomings in the prior art.
In order to achieve the above and other related objects, the present invention provides a method for adaptively and reliably sampling time series data based on a graph neural network, including:
Step 1: constructing a graph neural network G based on the time-series data, wherein in the graph neural network G there is no edge between nodes of the same type and edges between nodes of different types have weights;
Step 2: extracting the two-layer tree topology connection structure of each node;
Step 3: calculating the similarity between all nodes of the same type;
Step 4: calculating, using the normalized timestamp information of the time-series data, the credibility of the connected leaf nodes in the two-layer tree topology connection structure of every root node;
Step 5: if the total credibility of the leaf nodes of a root node is less than or equal to the threshold, selecting leaf nodes of nodes similar to that root node according to the similarity, merging them into that root node's leaf nodes, and returning to Step 4, until the credibility of every root node is above the threshold;
Step 6: randomly sampling the data of the root node's leaf nodes in proportion to their credibility, and taking the sampling result as state information, wherein the credibility proportion is the ratio of a leaf node's credibility to the credibility of the root node it belongs to, and the credibility of a root node equals the sum of the credibilities of all its leaf nodes;
Step 7: inputting the state information into the graph neural network to update and aggregate the states and output the final node representations;
Step 8: concatenating the root-node and leaf-node representations generated by the graph neural network G, performing score prediction using an attention mechanism, and computing the MSE loss against the real scores.
Optionally, the similarity p_{i,j} between all nodes of the same type is calculated as the cosine similarity

p_{i,j} = \frac{I_i \cdot I_j}{\|I_i\| \, \|I_j\|}

where I_i and I_j are the interaction history vectors of the same-type nodes i and j.
Optionally, the normalized timestamp information t_{ij} of the time-series data is obtained by normalizing the time at which leaf node j occurs on the time scale of root node i.
Optionally, the credibility is calculated as follows: the credibility c_{i,j} of the score connecting root node i to its leaf node j is given by a formula [given only as an image in the original publication] in which d_i is the degree of root node i and \lambda is an adjustment factor with value range [0, 1].
Optionally, the sampling probability p(i|j) of the data of leaf node j of root node i is:

p(i|j) = \frac{c_{i,j}}{\sum_{k \in N_i} c_{i,k}}

where N_i is the set of all leaf nodes j of root node i and c_{i,j} is the credibility.
Optionally, the state updating and aggregation update and aggregate the states of the different node types separately;
the state representations of all leaf nodes j of an l-th layer node i are aggregated by an aggregation formula [given only as an image in the original publication], where l-1 denotes the (l-1)-th layer and k \in N_i ranges over the set of all leaf nodes j of root node i;
the l-th layer node i is then updated with the aggregated state representation to obtain the final node representation by an update formula [given only as an image in the original publication], where || is the concatenation (splicing) operation, W^{(l)} and W'^{(l)} are parameter matrices to be learned, \sigma(\cdot) is the sigmoid activation function, the remaining symbol [shown as an image] denotes the state-information input, l \in [0, L], and L is a parameter to be determined.
Optionally, the score between node i and its leaf node j is predicted with an attention mechanism by a formula [given only as an image in the original publication], where SOFTMAX is the normalized exponential function and W_1, W_2, W_r are parameter matrices to be learned.
Optionally, the MSE loss is:

\mathrm{MSE} = \frac{1}{|E_{\mathrm{train}}|} \sum_{(i,j) \in E_{\mathrm{train}}} \left( \hat{r}_{i,j} - r_{i,j} \right)^2

where |E_{\mathrm{train}}| is the size of the training set E_{\mathrm{train}}, r_{i,j} is the real score between root node i and its leaf node j, and \hat{r}_{i,j} is the predicted score.
As described above, the time series data adaptive credible sampling method based on the graph neural network of the present invention has the following beneficial effects:
the reliability of the neighbor nodes of the neural network of the graph is analyzed through the normalized timestamp information of the time sequence, and meanwhile, the reliability of the nodes is ensured by utilizing the self-adaptive adjustment of the topological structure of the sampling nodes according to the reliability, so that the data characteristics can be accurately sampled better, and the time sequence can be accurately predicted.
Drawings
FIG. 1 is a flowchart of a method for adaptive trusted sampling of time series data based on a graph neural network according to an embodiment of the present invention;
FIG. 2 is a flowchart of a time series data adaptive credible sampling method based on a graph neural network according to another embodiment of the present invention.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention, and the components related to the present invention are only shown in the drawings rather than drawn according to the number, shape and size of the components in actual implementation, and the type, quantity and proportion of the components in actual implementation may be changed freely, and the layout of the components may be more complicated.
As shown in FIG. 1, an embodiment of the present application provides a time-series data adaptive credible sampling method based on a graph neural network, including the following steps:
Step 1: constructing a graph neural network G based on the time-series data, wherein in the graph neural network G there is no edge between nodes of the same type and edges between nodes of different types have weights;
Step 2: analyzing the topological structure of the graph neural network G and extracting the two-layer tree topology connection structure of each node;
Step 3: calculating the similarity between all nodes of the same type through similarity analysis;
Step 4: calculating, using the normalized timestamp information of the time-series data, the credibility of the connected leaf nodes in the two-layer tree topology connection structure of every root node;
Step 5: if the total credibility of the leaf nodes of a root node is less than or equal to the threshold, selecting leaf nodes of nodes similar to that root node according to the similarity, merging them into that root node's leaf nodes, and returning to Step 4, until the credibility of every root node is above the threshold;
Step 6: randomly sampling the data of the root node's leaf nodes in proportion to their credibility, and taking the sampling result as state information, wherein the credibility proportion is the ratio of a leaf node's credibility to the credibility of the root node it belongs to, and the credibility of a root node equals the sum of the credibilities of all its leaf nodes;
Step 7: inputting the state information into the graph neural network to update and aggregate the states and output the final node representations;
Step 8: concatenating the root-node and leaf-node representations generated by the graph neural network G, performing score prediction using an attention mechanism, and computing the MSE loss against the real scores.
In the graph neural network G there is no edge between nodes of the same type; edges between nodes of different types (user nodes and item nodes) may carry a weight, and the weight is the user's score.
In Step 1, the data structure contains the time, the type (user or item), the connection relation, the numerical relation, and the like.
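As an illustration only, the following minimal Python sketch shows one possible in-memory representation of such a graph; the class name InteractionGraph, its methods, and the tuple layout are assumptions of this sketch, not structures defined by the patent.

```python
from collections import defaultdict

class InteractionGraph:
    """Bipartite user-item graph built from time-series interaction records.

    Nodes of the same type (user-user, item-item) are never connected;
    each user-item edge carries the score as its weight plus a timestamp.
    """

    def __init__(self):
        # adjacency: node -> list of (neighbor, score, timestamp)
        self.adj = defaultdict(list)
        self.node_type = {}          # node id -> "user" or "item"

    def add_interaction(self, user, item, score, timestamp):
        self.node_type[user] = "user"
        self.node_type[item] = "item"
        self.adj[user].append((item, score, timestamp))
        self.adj[item].append((user, score, timestamp))

    def two_layer_tree(self, root):
        """Step 2 view: the two-layer tree rooted at `root` is the root
        together with its directly connected leaf nodes (opposite type)."""
        return root, [nbr for nbr, _, _ in self.adj[root]]
```

For example, g = InteractionGraph(); g.add_interaction("u1", "i7", 4.0, 1650000000.0) records one scored, timestamped user-item interaction, and g.two_layer_tree("u1") returns the root together with its directly connected leaves.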
In an embodiment, the similarity analysis uses the cosine similarity formula:

p_{i,j} = \frac{I_i \cdot I_j}{\|I_i\| \, \|I_j\|}

where I_i and I_j are the interaction history vectors of the same-type nodes i and j.
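A minimal sketch of this cosine similarity, assuming the interaction histories have already been encoded as fixed-length numeric vectors (how those vectors are built is not specified here):

```python
import numpy as np

def cosine_similarity(hist_i: np.ndarray, hist_j: np.ndarray) -> float:
    """Cosine similarity p_ij between two interaction-history vectors."""
    denom = np.linalg.norm(hist_i) * np.linalg.norm(hist_j)
    return float(hist_i @ hist_j) / denom if denom > 0 else 0.0
```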
In one embodiment, the normalized timestamp information of the time-series data is as follows: the normalized timestamp t_{ij} between a user node and an item node is obtained by normalizing the occurrence time of leaf node j on the time scale of root node i.
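One plausible reading of this normalization is min-max scaling of each interaction time over the root node's own time range; the patent does not spell the formula out, so the sketch below is an assumption.

```python
def normalized_timestamps(interactions):
    """Min-max normalize interaction times onto the root node's own time scale.

    `interactions` is a list of (leaf_node, score, timestamp) tuples for one
    root node i; returns {leaf_node: t_ij in [0, 1]}.  The min-max form is an
    assumption -- the patent only states that leaf j's time is normalized on
    root i's time scale.
    """
    times = [t for _, _, t in interactions]
    t_min, t_max = min(times), max(times)
    span = (t_max - t_min) or 1.0          # avoid division by zero
    return {leaf: (t - t_min) / span for leaf, _, t in interactions}
```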
in one embodiment, the confidence level is calculated as follows: for a root node i, the confidence of the score connected to its leaf node j is, i.e., the confidence c of the score connected to the leaf node j of the root node i i,j Is composed of
Figure BDA0003591669600000051
Wherein d is i Degree of root node i; lambda is a regulating factor, the value range is [0,1 ], and the lambda is obtained by training a neural network of the graph.
In an embodiment, the threshold is adjusted according to the MSE loss calculated in Step 8 and may be given randomly at initialization; the score of a newly merged leaf node remains consistent with its original score between the similar node and that leaf node.
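The following sketch illustrates the adaptive loop of Steps 4-5 under stated assumptions: credibility is a caller-supplied stand-in for the patent's c_{i,j} formula (published only as an image), and similar_nodes is assumed to be pre-sorted by descending similarity.

```python
def ensure_credibility(root, leaves, similar_nodes, credibility, threshold):
    """Steps 4-5 sketch: grow the leaf set of `root` until its total
    credibility exceeds `threshold`.

    `credibility(root, leaf)` stands in for the patent's c_ij formula;
    `similar_nodes` is a list of (other_root, its_leaves) pairs sorted by
    descending similarity to `root`.  Newly merged leaves keep the score
    they had with the similar node.
    """
    leaves = dict(leaves)                       # leaf -> (score, timestamp)
    candidates = iter(similar_nodes)
    while sum(credibility(root, lf) for lf in leaves) <= threshold:
        try:
            _, extra_leaves = next(candidates)  # most similar node first
        except StopIteration:
            break                               # nothing left to merge
        leaves.update(extra_leaves)             # merge its leaves into root's
    return leaves
```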
In Step 6, the sampling probability for root node i is:

p(i|j) = \frac{c_{i,j}}{\sum_{k \in N_i} c_{i,k}}

where N_i is the set of all leaf nodes j of root node i.
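A short sketch of the Step 6 sampling, drawing leaves with probability proportional to their credibility as in p(i|j) above; sampling with replacement and the num_samples parameter are assumptions of this sketch.

```python
import random

def sample_leaves(leaf_credibility, num_samples):
    """Randomly sample leaf nodes with probability proportional to credibility.

    `leaf_credibility` maps each leaf node j of root i to c_ij; the sampled
    leaves (drawn with replacement, an assumption) form the root's state
    information.
    """
    leaves = list(leaf_credibility)
    weights = [leaf_credibility[lf] for lf in leaves]
    return random.choices(leaves, weights=weights, k=num_samples)
```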
In Step 7, the state updating and aggregation update and aggregate the states of the different node types separately.
The state representations of all leaf nodes j of an l-th layer node i are aggregated by an aggregation formula [given only as an image in the original publication].
For the l-th layer node i, the aggregated state representation is then used to update and obtain the final node representation by an update formula [given only as an image in the original publication], where || is the concatenation (splicing) operation, W^{(l)} and W'^{(l)} are parameter matrices to be learned, \sigma(\cdot) is the sigmoid activation function, the remaining symbol [shown as an image] denotes the state-information input, l \in [0, L], and L is a parameter to be determined.
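Because the aggregation and update formulas are published only as images, the layer below uses a standard form as an assumption: sampled leaf states are mean-aggregated, concatenated with the root's previous state, and passed through learned linear maps with a sigmoid. The two nn.Linear maps merely play the roles of the W^{(l)} and W'^{(l)} matrices named in the text; the exact wiring is not taken from the patent.

```python
import torch
import torch.nn as nn

class CredibleSamplingLayer(nn.Module):
    """One layer of state aggregation and update (assumed standard form)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.w_agg = nn.Linear(in_dim, in_dim)        # plays the role of W'^(l)
        self.w_upd = nn.Linear(2 * in_dim, out_dim)   # plays the role of W^(l)

    def forward(self, h_root, h_leaves):
        # h_root: (in_dim,) state of node i at layer l-1
        # h_leaves: (num_sampled_leaves, in_dim) states of the sampled leaves
        h_nbr = torch.sigmoid(self.w_agg(h_leaves.mean(dim=0)))   # aggregation
        h_cat = torch.cat([h_root, h_nbr], dim=-1)                # || splicing
        return torch.sigmoid(self.w_upd(h_cat))                   # update
```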
In Step 8, the score prediction between node i and its leaf node j is realized with an attention mechanism by a formula [given only as an image in the original publication], where SOFTMAX is the normalized exponential function and W_1, W_2, W_r are parameter matrices to be learned.
The MSE loss is:

\mathrm{MSE} = \frac{1}{|E_{\mathrm{train}}|} \sum_{(i,j) \in E_{\mathrm{train}}} \left( \hat{r}_{i,j} - r_{i,j} \right)^2

where |E_{\mathrm{train}}| is the size of the training set E_{\mathrm{train}}, r_{i,j} is the real score between node i and its leaf node j, and \hat{r}_{i,j} is the predicted score.
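The attention formula is likewise published only as an image, so the scoring head below is a generic attention-weighted sketch under stated assumptions; only the names W_1, W_2, W_r, the SOFTMAX, and the standard MSE definition come from the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionScorer(nn.Module):
    """Score prediction from node representations (assumed wiring).

    The patent names W_1, W_2, W_r and a SOFTMAX; the exact combination below
    (attention weights over the transformed representations, then a scoring
    projection over the concatenation) is an assumption of this sketch.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.w1 = nn.Linear(dim, dim)          # transforms the root representation
        self.w2 = nn.Linear(dim, dim)          # transforms the leaf representation
        self.w_r = nn.Linear(2 * dim, 1)       # final scoring projection

    def forward(self, h_root, h_leaf):
        att = F.softmax(self.w1(h_root) + self.w2(h_leaf), dim=-1)  # attention weights
        h = torch.cat([att * h_root, h_leaf], dim=-1)               # splice representations
        return self.w_r(h).squeeze(-1)                              # predicted score r_hat

def mse_loss(pred_scores, real_scores):
    """Standard MSE over the training edges E_train."""
    return F.mse_loss(pred_scores, real_scores)
```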
In a specific embodiment, the following six public data sets provided by Amazon are used:
1. BOOK: an interaction network of users and online books; the selected subgraph contains 6788854 user nodes, 33734 item nodes and 1000000 scores.
2. CD: an interaction network of users and online music albums; the selected subgraph contains 77172 user nodes, 11402 item nodes and 500000 scores.
3. FOOD: an interaction network of users and food products, containing 768438 user nodes, 166049 item nodes and 1297156 scores.
4. MOVIE: an interaction network of users and movies, containing 2088619 user nodes, 201298 item nodes and 4607047 scores.
5. MUSIC: an interaction network of users and individual music tracks, containing 478235 user nodes, 266414 item nodes and 836006 scores.
6. TOY: an interaction network of users and toys, containing 1342911 user nodes, 327705 item nodes and 2252771 scores.
Each data set is split into an 80% training set and a 20% test set, and the following results are obtained:
sampling strategy BOOK CD FOOD MOVIE MUSIC TOY
Uniform sampling 1.2553 1.0921 1.5654 0.8922 0.9145 1.3857
Time-based sampling 1.2258 1.0790 1.5274 0.8602 0.9241 1.3915
Degree-based sampling 1.2373 1.0919 1.5141 0.8736 0.9317 1.3062
Confidence level sampling based on scoring 1.1824 1.0250 1.4874 0.8452 0.9048 1.3274
Therefore, sampling based on score credibility performs better.
Furthermore, the adaptive scheme can be applied on top of it to further improve the results.
In an embodiment, the present application further provides a time-series data adaptive credible sampling apparatus based on a graph neural network, including the following modules (a composition sketch follows the list):
a graph construction module, used for constructing a graph neural network G based on the time-series data, wherein in the graph neural network G there is no edge between nodes of the same type and edges between nodes of different types have weights;
an extraction module, used for analyzing the topological structure of the graph neural network G and extracting the two-layer tree topology connection structure of each node;
a similarity calculation module, used for calculating the similarity between all nodes of the same type through similarity analysis;
a credibility calculation module, used for calculating, using the normalized timestamp information of the time-series data, the credibility of the connected leaf nodes in the two-layer tree topology connection structure of every root node;
a node merging module, used for selecting, when the total credibility of the leaf nodes of a root node is less than or equal to the threshold, leaf nodes of nodes similar to that root node according to the similarity and merging them into that root node's leaf nodes, until the credibility of every root node is above the threshold;
a sampling module, used for randomly sampling the data of the root node's leaf nodes in proportion to their credibility and taking the sampling result as state information, wherein the credibility proportion is the ratio of a leaf node's credibility to the credibility of the root node it belongs to, and the credibility of a root node equals the sum of the credibilities of all its leaf nodes;
an updating and aggregating module, used for inputting the state information into the graph neural network to update and aggregate the states and output the final node representations; and
a loss calculation module, used for concatenating the node representations generated by the graph neural network G, performing score prediction using an attention mechanism, and computing the MSE loss against the real scores.
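To show how these modules could fit together, the skeleton below chains hypothetical callables in the order of Steps 1-8; every parameter name and signature is an assumption of this sketch rather than part of the disclosed apparatus.

```python
def adaptive_credible_sampling_pipeline(records, build_graph, extract_trees,
                                        similarity, credibility, merge_leaves,
                                        sample, gnn_update, predict_and_loss,
                                        threshold):
    """Chain the modules of the apparatus in the order of Steps 1-8.

    Every argument is a callable standing in for one module; their exact
    signatures are illustrative assumptions.
    """
    graph = build_graph(records)                         # graph construction module
    trees = extract_trees(graph)                         # extraction module
    sims = similarity(graph)                             # similarity calculation module
    creds = credibility(graph, trees)                    # credibility calculation module
    trees = merge_leaves(trees, sims, creds, threshold)  # node merging module
    state = sample(trees, creds)                         # sampling module
    reps = gnn_update(graph, state)                      # updating and aggregating module
    return predict_and_loss(reps, graph)                 # loss calculation module
```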
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one type of logical function division, and other division manners may be available in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments may be implemented. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form. The computer-readable medium may comprise any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like.
The foregoing embodiments merely illustrate the principles and effects of the present invention and are not intended to limit the invention. Any person skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the present invention. Accordingly, all equivalent modifications or changes made by those skilled in the art without departing from the spirit and technical concept of the present invention shall be covered by the claims of the present invention.

Claims (8)

1. A time sequence data self-adaptive credible sampling method based on a graph neural network is characterized by comprising the following steps:
Step 1: constructing a graph neural network G based on the time-series data, wherein in the graph neural network G there is no edge between nodes of the same type and edges between nodes of different types have weights;
Step 2: extracting the two-layer tree topology connection structure of each node;
Step 3: calculating the similarity between all nodes of the same type;
Step 4: calculating, using the normalized timestamp information of the time-series data, the credibility of the connected leaf nodes in the two-layer tree topology connection structure of every root node;
Step 5: if the total credibility of the leaf nodes of a root node is less than or equal to the threshold, selecting leaf nodes of nodes similar to that root node according to the similarity, merging them into that root node's leaf nodes, and returning to Step 4, until the credibility of every root node is above the threshold;
Step 6: randomly sampling the data of the root node's leaf nodes in proportion to their credibility, and taking the sampling result as state information, wherein the credibility proportion is the ratio of a leaf node's credibility to the credibility of the root node it belongs to, and the credibility of a root node equals the sum of the credibilities of all its leaf nodes;
Step 7: inputting the state information into the graph neural network to update and aggregate the states and output the final node representations;
Step 8: concatenating the root-node and leaf-node representations generated by the graph neural network G, performing score prediction using an attention mechanism, and computing the MSE loss against the real scores.
2. The graph neural network-based time-series data adaptive credible sampling method according to claim 1, wherein the similarity p_{i,j} between all nodes of the same type is calculated as:

p_{i,j} = \frac{I_i \cdot I_j}{\|I_i\| \, \|I_j\|}

where I_i and I_j are the interaction history vectors of the same-type nodes i and j.
3. The graph neural network-based time-series data adaptive credible sampling method according to claim 1, wherein the normalized timestamp information t_{ij} of the time-series data is obtained by normalizing the time at which leaf node j occurs on the time scale of root node i.
4. The graph neural network-based time-series data adaptive credible sampling method according to claim 3, wherein the credibility is calculated as follows: the credibility c_{i,j} of the score connecting root node i to its leaf node j is given by a formula [given only as an image in the original publication] in which d_i is the degree of root node i and \lambda is an adjustment factor with value range [0, 1].
5. The time-series data adaptive credible sampling method based on the graph neural network, characterized in that the sampling probability p(i|j) of the data of leaf node j of root node i is:

p(i|j) = \frac{c_{i,j}}{\sum_{k \in N_i} c_{i,k}}

where N_i is the set of all leaf nodes j of root node i and c_{i,j} is the credibility.
6. The graph neural network-based time-series data adaptive credible sampling method according to claim 1, wherein the state updating and aggregation update and aggregate the states of the different node types separately;
the state representations of all leaf nodes j of an l-th layer node i are aggregated by an aggregation formula [given only as an image in the original publication], where l-1 denotes the (l-1)-th layer and k \in N_i ranges over the set of all leaf nodes j of root node i;
the l-th layer node i is then updated with the aggregated state representation to obtain the final node representation by an update formula [given only as an image in the original publication], where || is the concatenation (splicing) operation, W^{(l)} and W'^{(l)} are parameter matrices to be learned, \sigma(\cdot) is the sigmoid activation function, the remaining symbol [shown as an image] denotes the state-information input, l \in [0, L], and L is a parameter to be determined.
7. The time-series data adaptive credible sampling method based on the graph neural network according to claim 6, wherein the score between node i and its leaf node j is predicted with an attention mechanism by a formula [given only as an image in the original publication], where SOFTMAX is the normalized exponential function and W_1, W_2, W_r are parameter matrices to be learned.
8. The graph neural network-based time-series data adaptive credible sampling method according to claim 7, wherein the MSE loss is:

\mathrm{MSE} = \frac{1}{|E_{\mathrm{train}}|} \sum_{(i,j) \in E_{\mathrm{train}}} \left( \hat{r}_{i,j} - r_{i,j} \right)^2

where |E_{\mathrm{train}}| is the size of the training set E_{\mathrm{train}}, r_{i,j} is the real score between root node i and its leaf node j, and \hat{r}_{i,j} is the predicted score.
CN202210378197.7A 2022-04-12 2022-04-12 Timing sequence data self-adaptive credible sampling method based on graph neural network Pending CN114819070A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210378197.7A CN114819070A (en) 2022-04-12 2022-04-12 Timing sequence data self-adaptive credible sampling method based on graph neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210378197.7A CN114819070A (en) 2022-04-12 2022-04-12 Timing sequence data self-adaptive credible sampling method based on graph neural network

Publications (1)

Publication Number Publication Date
CN114819070A true CN114819070A (en) 2022-07-29

Family

ID=82533847

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210378197.7A Pending CN114819070A (en) 2022-04-12 2022-04-12 Timing sequence data self-adaptive credible sampling method based on graph neural network

Country Status (1)

Country Link
CN (1) CN114819070A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112541639A (en) * 2020-12-22 2021-03-23 宜宾电子科技大学研究院 Recommendation system scoring prediction method based on graph neural network and attention mechanism
CN113298634A (en) * 2021-04-26 2021-08-24 上海淇玥信息技术有限公司 User risk prediction method and device based on time sequence characteristics and graph neural network
US20220044791A1 (en) * 2020-08-10 2022-02-10 Kunnskap Medical, LLC Systems and devices for endoscopic procedure analysis based on state data
CN114186799A (en) * 2021-11-21 2022-03-15 南京理工大学 Enterprise valuation method and system based on heterogeneous graph neural network

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220044791A1 (en) * 2020-08-10 2022-02-10 Kunnskap Medical, LLC Systems and devices for endoscopic procedure analysis based on state data
CN112541639A (en) * 2020-12-22 2021-03-23 宜宾电子科技大学研究院 Recommendation system scoring prediction method based on graph neural network and attention mechanism
CN113298634A (en) * 2021-04-26 2021-08-24 上海淇玥信息技术有限公司 User risk prediction method and device based on time sequence characteristics and graph neural network
CN114186799A (en) * 2021-11-21 2022-03-15 南京理工大学 Enterprise valuation method and system based on heterogeneous graph neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
刘鑫宇: "Research on recommendation algorithms based on graph neural networks and their applications" (基于图神经网络的推荐算法研究及其应用), Wanfang Data (万方数据), 1 December 2021 (2021-12-01), pages 22-38 *

Similar Documents

Publication Publication Date Title
US20210256403A1 (en) Recommendation method and apparatus
WO2023065859A1 (en) Item recommendation method and apparatus, and storage medium
CN110929164A (en) Interest point recommendation method based on user dynamic preference and attention mechanism
CN113468227B (en) Information recommendation method, system, equipment and storage medium based on graph neural network
CN111737426B (en) Method for training question-answering model, computer equipment and readable storage medium
CN105894372A (en) Method and device for predicting group credit
CN113435509B (en) Small sample scene classification and identification method and system based on meta-learning
CN105022754A (en) Social network based object classification method and apparatus
CN108960574A (en) Quality determination method, device, server and the storage medium of question and answer
CN112861936A (en) Graph node classification method and device based on graph neural network knowledge distillation
US20230297617A1 (en) Video retrieval method and apparatus, device, and storage medium
CN112163161B (en) Recommendation method and system for college library, readable storage medium and electronic equipment
CN109670927A (en) The method of adjustment and its device of credit line, equipment, storage medium
CN110222838A (en) Deep neural network and its training method, device, electronic equipment and storage medium
JP2020113044A (en) Data expansion program, data expansion method, and data expansion device
CN115456043A (en) Classification model processing method, intent recognition method, device and computer equipment
WO2023024408A1 (en) Method for determining feature vector of user, and related device and medium
CN113987236B (en) Unsupervised training method and unsupervised training device for visual retrieval model based on graph convolution network
CN111292197A (en) Community discovery method based on convolutional neural network and self-encoder
CN113887698B (en) Integral knowledge distillation method and system based on graph neural network
CN106096653B (en) Ascribed characteristics of population estimating method based on cross-platform user social contact multimedia behavior
CN112435034A (en) Marketing arbitrage black product identification method based on multi-network graph aggregation
CN114819070A (en) Timing sequence data self-adaptive credible sampling method based on graph neural network
Zhang et al. Data clustering using multivariant optimization algorithm
Zhou et al. Online nonparametric Bayesian analysis of parsimonious Gaussian mixture models and scenes clustering

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination