CN116561229A - Data synchronization method, device and storage medium based on graphic neural network - Google Patents

Data synchronization method, device and storage medium based on graphic neural network

Info

Publication number
CN116561229A
Authority
CN
China
Prior art keywords
data
graph
nodes
server
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310801625.7A
Other languages
Chinese (zh)
Other versions
CN116561229B (en)
Inventor
洪跃宗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen Fanzhuo Information Technology Co ltd
Original Assignee
Xiamen Fanzhuo Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen Fanzhuo Information Technology Co ltd filed Critical Xiamen Fanzhuo Information Technology Co ltd
Priority to CN202310801625.7A priority Critical patent/CN116561229B/en
Publication of CN116561229A publication Critical patent/CN116561229A/en
Application granted granted Critical
Publication of CN116561229B publication Critical patent/CN116561229B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/21 - Design, administration or maintenance of databases
    • G06F16/219 - Managing data history or versioning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/23 - Updating
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/044 - Recurrent networks, e.g. Hopfield networks
    • G06N3/0442 - Recurrent networks, e.g. Hopfield networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention provides a data synchronization method, device and storage medium based on a graph neural network. The method comprises the following steps: using N servers and M clients as nodes in a graph and the transmission channels between the N servers and the M clients as edges of the graph, wherein the data in each server has a data version number Snum_i, the data in each client has a data version number Cnum_j, and the graph is used for a graph neural network; optimizing a loss function based on the data version numbers in the training samples and training the graph neural network with the optimized loss function to obtain a trained graph neural network; and determining a synchronization policy for each of the M clients by using the trained graph neural network, and synchronizing the data in the clients with the data in the server based on the synchronization policy. The invention can formulate the synchronization policy in a personalized manner, so that a corresponding synchronization policy is executed according to the condition of each client, which improves the efficiency of data synchronization.

Description

Data synchronization method, device and storage medium based on graph neural network
Technical Field
The invention relates to the technical field of artificial intelligence and data processing, in particular to a data synchronization method, device and storage medium based on a graph neural network.
Background
Data synchronization between services is a very common problem in software development. In practice, the data provider (server) and the data receiver (client) face a number of problems. In the prior art, data synchronization between a client and a server is generally performed using strategies such as consistency protocols or timed synchronization, which makes it difficult to customize the data synchronization scheme.
In the prior art, some technical schemes perform data synchronization based on artificial intelligence, generally using a neural network. However, with these methods it is difficult to determine to which client a server needs to transmit data, to quickly select a data transmission channel, and to perform personalized data synchronization on the server side when the data synchronization starting points (time points) of different clients are inconsistent.
Disclosure of Invention
The present invention proposes the following technical solution to one or more of the above technical drawbacks of the prior art.
A data synchronization method based on a graph neural network, the method comprising:
a graph construction step of using N servers and M clients as nodes in the graph and using the transmission channels between the N servers and the M clients as edges of the graph, wherein the data in each server has a data version number Snum_i, the data in each client has a data version number Cnum_j, and the graph is used for a graph neural network;
a training step, optimizing a loss function based on a data version number in a training sample, and training the graph neural network by using the optimized loss function to obtain a trained graph neural network;
a synchronization step of determining a synchronization policy of each of the M clients by using the trained graph neural network, and synchronizing data in the clients with data in the server based on the synchronization policy;
wherein N ≥ 2, M ≥ 2, 1 ≤ i ≤ N, and 1 ≤ j ≤ M;
M, N, i, j are natural numbers.
Further, the data version numbers Snum_i and Cnum_j are the time values ST_i and CT_j of the most recent data update, respectively.
Further, the characteristic value of each server node in the graph is: CS_i = max(ST_1, ST_2, …, ST_N) - ST_i;
the characteristic value of each client node in the graph is: CC_j = max(ST_1, ST_2, …, ST_N) - CT_j.
In the graph, the weights of the edges between the nodes are set as follows: the weight of the edges between server nodes is set to 1, the weight of the edges between client nodes is set to 0, and the weight of the edge between a server node and a client node is determined by the number of other nodes that the transmission channel between server node i and client node j passes through, the historical failure rate of the transmission channel between server node i and client node j, and two constants.
Still further, data synchronization between the servers is performed periodically, based on the data version numbers Snum_i.
Still further, the synchronization policy specifies, for each client, the time at which data synchronization is performed, the server from which the synchronized data originates, and the transmission channel over which the synchronized data is transmitted.
Still further, the loss function optimized based on the data version numbers in the training samples is defined over two subsets of the training sample set: the training sample set is randomly divided into two training sample subsets, a first training sample subset having L training samples and a second training sample subset having Q training samples, with L not equal to Q; for the k-th and p-th training samples, the loss function uses the time values of the data versions of the server, the time values of the data versions of the clients, the training samples themselves, the output values of each training of the graph neural network, and the corresponding label values.
The invention also provides a data synchronization device based on the graph neural network, which comprises:
a graph construction unit, which uses N servers and M clients as nodes in the graph and the transmission channels between the N servers and the M clients as edges of the graph, wherein the data in each server has a data version number Snum_i, the data in each client has a data version number Cnum_j, and the graph is used for a graph neural network;
the training unit optimizes the loss function based on the data version number in the training sample, and trains the graph neural network by using the optimized loss function to obtain a trained graph neural network;
the synchronization unit is used for determining a synchronization strategy of each client in the M clients by using the trained graph neural network, and synchronizing the data in the clients with the data in the server based on the synchronization strategy;
wherein N ≥ 2, M ≥ 2, 1 ≤ i ≤ N, and 1 ≤ j ≤ M;
M, N, i, j are natural numbers.
Further, the data version numbers Snum_i and Cnum_j are the time values ST_i and CT_j of the most recent data update, respectively.
Further, the characteristic value of each server node in the graph is: CS_i = max(ST_1, ST_2, …, ST_N) - ST_i;
the characteristic value of each client node in the graph is: CC_j = max(ST_1, ST_2, …, ST_N) - CT_j.
In the graph, the weights of the edges between the nodes are set as follows: the weight of the edges between server nodes is set to 1, the weight of the edges between client nodes is set to 0, and the weight of the edge between a server node and a client node is determined by the number of other nodes that the transmission channel between server node i and client node j passes through, the historical failure rate of the transmission channel between server node i and client node j, and two constants.
Still further, data synchronization between the servers is performed periodically, based on the data version numbers Snum_i.
Still further, the synchronization policy specifies, for each client, the time at which data synchronization is performed, the server from which the synchronized data originates, and the transmission channel over which the synchronized data is transmitted.
Still further, the loss function optimized based on the data version numbers in the training samples is defined over two subsets of the training sample set: the training sample set is randomly divided into two training sample subsets, a first training sample subset having L training samples and a second training sample subset having Q training samples, with L not equal to Q; for the k-th and p-th training samples, the loss function uses the time values of the data versions of the server, the time values of the data versions of the clients, the training samples themselves, the output values of each training of the graph neural network, and the corresponding label values.
The invention also proposes a computer readable storage medium having stored thereon computer program code which, when executed by a computer, performs any of the methods described above.
The invention has the following technical effects. The invention discloses a data synchronization method, device and storage medium based on a graph neural network, wherein the method comprises: a graph construction step S101, in which N servers and M clients are used as nodes in the graph, the transmission channels between the N servers and the M clients are used as edges of the graph, the data in each server has a data version number Snum_i, the data in each client has a data version number Cnum_j, and the graph is used for a graph neural network; a training step S102, in which a loss function is optimized based on the data version numbers in the training samples and the graph neural network is trained with the optimized loss function to obtain a trained graph neural network; and a synchronization step S103, in which a synchronization policy is determined for each of the M clients by using the trained graph neural network, and the data in the clients is synchronized with the data in the server based on the synchronization policy.
The prior art leaves open to which client a server should transmit data, how to quickly select a data transmission channel, and how to guarantee personalized pushing by the server when the data synchronization starting points (time points) of different clients are inconsistent. To solve these problems, the invention synchronizes data based on a graph neural network: it first uses N servers and M clients as nodes in the graph and the transmission channels between the N servers and the M clients as edges of the graph, where the data in each server has a data version number Snum_i and the data in each client has a data version number Cnum_j; it then trains the graph neural network, determines the synchronization policy of each of the M clients with the trained graph neural network, and synchronizes the data in the clients with the data in the server based on that policy. The source and transmission channel of each client's synchronized data can thus be determined quickly, and because the prediction is based on the data versions, a personalized synchronization policy can be formulated, so that a corresponding synchronization policy is executed according to the condition of each client and the efficiency of data synchronization is improved.
When constructing the graph neural network, the invention uses the time difference between each client or server and the most recently updated server as the characteristic value of each node, based on how the data is updated, instead of deriving the characteristic value from the computing resources of the nodes as in the prior art. With today's computing power every node is very capable, so using computing capability as the characteristic value makes the generated synchronization policy inaccurate, increases the computation of the graph neural network, and reduces operating efficiency. The weight of each edge is determined by the number of nodes the transmission channel passes through and the historical failure rate of the channel, which makes the edge weights more objective. This way of setting the node characteristic values and the edge weights improves the accuracy of the generated synchronization policy, so that the policy better matches the condition of each node and the efficiency of data synchronization is improved.
To improve the training effect of the neural network, the training samples are divided into two subsets that are trained simultaneously, and the loss function is optimized based on the time values used as data version numbers, which ensures that the loss function converges in time and improves the training speed without reducing the prediction accuracy of the trained graph neural network.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the detailed description of non-limiting embodiments made with reference to the following drawings.
Fig. 1 is a flowchart of a data synchronization method based on a graph neural network according to an embodiment of the present invention.
Fig. 2 is a block diagram of a data synchronization apparatus based on a graph neural network according to an embodiment of the present invention.
Detailed Description
The present application is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings.
It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be combined with each other. The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 shows a data synchronization method based on a graph neural network, which comprises the following steps:
a graph construction step S101, in which N servers and M clients are used as nodes in the graph, the transmission channels between the N servers and the M clients are used as edges of the graph, the data in each server has a data version number Snum_i, the data in each client has a data version number Cnum_j, and the graph is used for a graph neural network;
step S102 of training, namely optimizing a loss function based on a data version number in a training sample, and training the graph neural network by using the optimized loss function to obtain a trained graph neural network;
step S103, determining a synchronization strategy of each of M clients by using the trained graph neural network, and synchronizing the data in the clients with the data in the server based on the synchronization strategy;
wherein N ≥ 2, M ≥ 2, 1 ≤ i ≤ N, and 1 ≤ j ≤ M;
M, N, i, j are natural numbers.
In the prior art it is unclear to which client a server should transmit data, how a data transmission channel can be selected quickly, and how personalized pushing by the server can be guaranteed when the data synchronization starting points (time points) of different clients are inconsistent. To solve these problems, the invention synchronizes data based on a graph neural network. It first uses N servers and M clients as nodes in the graph and the transmission channels between the N servers and the M clients as edges of the graph, where the data in each server has a data version number Snum_i and the data in each client has a data version number Cnum_j. It then trains the graph neural network, determines the synchronization policy of each of the M clients with the trained graph neural network, and synchronizes the data in the clients with the data in the server based on the synchronization policy, so that the source and the transmission channel of each client's synchronized data can be determined quickly.
In one embodiment, the data version numbers Snum_i and Cnum_j are the time values ST_i and CT_j of the most recent data update, respectively. The characteristic value of each server node in the graph is: CS_i = max(ST_1, ST_2, …, ST_N) - ST_i.
The characteristic value of each client node in the graph is: CC_j = max(ST_1, ST_2, …, ST_N) - CT_j.
In the graph, the weights of the edges between the nodes are set as follows: the weight of the edges between server nodes is set to 1, the weight of the edges between client nodes is set to 0, and the weight of the edge between a server node and a client node is determined by the number of other nodes that the transmission channel between server node i and client node j passes through, the historical failure rate of the transmission channel between server node i and client node j, and two constants; the constants can be set from empirical values or estimated with an LSTM network based on historical values.
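As a concrete illustration of the graph construction described above, the following Python sketch builds the graph with networkx. It is a minimal sketch under stated assumptions: the timestamps are example values, the use of networkx is only one possible choice, and the edge_weight function for server-client edges is hypothetical, since the patent specifies only its inputs (the number of intermediate nodes, the historical failure rate, and two constants a and b), not its exact form.

```python
import networkx as nx

# Most recent data-update timestamps (seconds): ST_i for servers, CT_j for clients.
server_times = {"S1": 1000.0, "S2": 995.0}               # ST_1, ST_2
client_times = {"C1": 980.0, "C2": 960.0, "C3": 990.0}   # CT_1 .. CT_3

latest = max(server_times.values())                       # max(ST_1, ..., ST_N)

G = nx.Graph()

# Node features: time difference to the most recently updated server.
for s, st in server_times.items():
    G.add_node(s, kind="server", feature=latest - st)     # CS_i
for c, ct in client_times.items():
    G.add_node(c, kind="client", feature=latest - ct)     # CC_j

def edge_weight(hops, failure_rate, a=1.0, b=1.0):
    """Hypothetical server-client edge weight: the patent only states that it
    depends on the intermediate-node count, the historical failure rate and
    two constants; this decreasing function is an assumption for illustration."""
    return 1.0 / (1.0 + a * hops + b * failure_rate)

# Server-server edges have weight 1, client-client edges weight 0,
# server-client edges use the (assumed) weighting function.
G.add_edge("S1", "S2", weight=1.0)
G.add_edge("C1", "C2", weight=0.0)
G.add_edge("S1", "C1", weight=edge_weight(hops=0, failure_rate=0.01))
G.add_edge("S2", "C3", weight=edge_weight(hops=2, failure_rate=0.05))

print(G.nodes(data=True))
print(G.edges(data=True))
```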
When the graph neural network is constructed, the time difference between each client or server and the most recently updated server is used as the characteristic value of each node, based on how the data is updated, rather than a characteristic value derived from the computing resources of the nodes as in the prior art. With today's computing power every node is very capable, so using the computing capability of each node as the characteristic value makes the generated synchronization policy inaccurate, increases the computation of the graph neural network, and reduces operating efficiency. The weight of each edge is determined by the number of nodes the transmission channel passes through and the historical failure rate of the transmission channel, which makes the edge weights more objective. This way of setting the node characteristic values and the edge weights is proposed to improve the accuracy of the generated synchronization policy, so that the synchronization policy better matches the condition of each node and the efficiency of data synchronization is improved.
In one embodiment, data synchronization between the servers is performed periodically, based on the data version numbers Snum_i. Because the servers need to be backed up in time, the data between the servers is synchronized at fixed intervals, for example every 1 min or every 5 min, so that the data difference between the servers does not become too large. A sketch of such a timed loop is given below.
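The following sketch illustrates a timed server-to-server synchronization loop of this kind. It is only an illustration under assumptions: the Server class, its pull_from method and the 60-second interval are hypothetical and are not prescribed by the patent.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Server:
    name: str
    version_time: float = 0.0          # Snum_i, i.e. the time value ST_i
    data: dict = field(default_factory=dict)

    def pull_from(self, other: "Server") -> None:
        """Copy data from a server that holds a newer version."""
        if other.version_time > self.version_time:
            self.data = dict(other.data)
            self.version_time = other.version_time

def sync_servers_periodically(servers, interval_s=60.0, rounds=3):
    """Every interval_s seconds, bring all servers up to the newest version."""
    for _ in range(rounds):
        newest = max(servers, key=lambda s: s.version_time)
        for s in servers:
            s.pull_from(newest)
        time.sleep(interval_s)
```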
In one embodiment, the synchronization policy specifies the time at which each client performs data synchronization, the server from which the synchronized data originates, and the transmission channel over which the synchronized data is transmitted. The invention uses the synchronization policy to determine when each client starts data synchronization, from which server it synchronizes, and over which link the synchronized data is transmitted, thereby ensuring that the client synchronization policy is customized and improving the adaptability of data synchronization.
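A synchronization policy of this kind can be represented as a small record per client; the sketch below shows one possible shape. The field names and the apply_policy helper are assumptions made for illustration and are not part of the patent.

```python
from dataclasses import dataclass

@dataclass
class SyncPolicy:
    client_id: str
    sync_time: float       # when the client should start synchronizing
    source_server: str     # which server the data is pulled from
    channel_id: str        # which transmission channel to use

def apply_policy(policy: SyncPolicy, now: float) -> bool:
    """Return True when it is time for this client to synchronize."""
    return now >= policy.sync_time

policies = [
    SyncPolicy("C1", sync_time=1000.0, source_server="S1", channel_id="ch-3"),
    SyncPolicy("C2", sync_time=1030.0, source_server="S2", channel_id="ch-7"),
]
due = [p for p in policies if apply_policy(p, now=1010.0)]   # -> only C1 is due
```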
In one embodiment, the loss function optimized based on the data version numbers in the training samples is defined over two subsets of the training sample set: the training sample set is randomly divided into two training sample subsets, a first training sample subset having L training samples and a second training sample subset having Q training samples, with L not equal to Q; for the k-th and p-th training samples, the loss function uses the time values of the data versions of the server, the time values of the data versions of the clients, the training samples themselves, the output values of each training of the graph neural network, and the corresponding label values.
To improve the training effect of the neural network, the training samples are divided into two subsets that are trained simultaneously, and the loss function is optimized based on the time values used as data version numbers. The purpose of the optimization is to ensure that the loss function converges in time and to improve the training speed without reducing the prediction accuracy of the trained graph neural network, which is an important point of the invention.
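The exact loss formula appears in the original filing only as an image, so the sketch below merely illustrates the two-subset structure described in the text: the samples are split into subsets of L and Q elements and a per-sample error between network outputs and labels is accumulated over both subsets. The specific error term (squared error) and the omission of any version-time weighting are assumptions for illustration only.

```python
import numpy as np

def two_subset_loss(outputs_a, labels_a, outputs_b, labels_b):
    """Illustrative two-subset loss: mean squared error computed over a first
    subset of L samples and a second subset of Q samples (L != Q), then summed.
    The loss in the patent also involves the server/client version times of
    each sample; that weighting is not reproduced here."""
    loss_a = np.mean((np.asarray(outputs_a) - np.asarray(labels_a)) ** 2)  # L samples
    loss_b = np.mean((np.asarray(outputs_b) - np.asarray(labels_b)) ** 2)  # Q samples
    return loss_a + loss_b

# Example: L = 4 samples in the first subset, Q = 6 in the second.
rng = np.random.default_rng(0)
print(two_subset_loss(rng.random(4), rng.random(4), rng.random(6), rng.random(6)))
```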
Fig. 2 shows a data synchronization device based on a graph neural network according to the present invention, the device includes:
a graph construction unit 201, which uses N servers and M clients as nodes in the graph and the transmission channels between the N servers and the M clients as edges of the graph, wherein the data in each server has a data version number Snum_i, the data in each client has a data version number Cnum_j, and the graph is used for a graph neural network;
the training unit 202 optimizes the loss function based on the data version number in the training sample, and trains the graph neural network by using the optimized loss function to obtain a trained graph neural network;
a synchronization unit 203, configured to determine a synchronization policy of each of the M clients using the trained graph neural network, and synchronize the data in the clients with the data in the server based on the synchronization policy;
wherein N ≥ 2, M ≥ 2, 1 ≤ i ≤ N, and 1 ≤ j ≤ M; M, N, i, j are natural numbers.
In the prior art it is unclear to which client a server should transmit data, how a data transmission channel can be selected quickly, and how personalized pushing by the server can be guaranteed when the data synchronization starting points (time points) of different clients are inconsistent. To solve these problems, the invention synchronizes data based on a graph neural network. It first uses N servers and M clients as nodes in the graph and the transmission channels between the N servers and the M clients as edges of the graph, where the data in each server has a data version number Snum_i and the data in each client has a data version number Cnum_j. It then trains the graph neural network, determines the synchronization policy of each of the M clients with the trained graph neural network, and synchronizes the data in the clients with the data in the server based on the synchronization policy, so that the source and the transmission channel of each client's synchronized data can be determined quickly.
In one embodiment, the data version numbers Snum_i and Cnum_j are the time values ST_i and CT_j of the most recent data update, respectively. The characteristic value of each server node in the graph is: CS_i = max(ST_1, ST_2, …, ST_N) - ST_i.
The characteristic value of each client node in the graph is: CC_j = max(ST_1, ST_2, …, ST_N) - CT_j.
In the graph, the weights of the edges between the nodes are set as follows: the weight of the edges between server nodes is set to 1, the weight of the edges between client nodes is set to 0, and the weight of the edge between a server node and a client node is determined by the number of other nodes that the transmission channel between server node i and client node j passes through, the historical failure rate of the transmission channel between server node i and client node j, and two constants; the constants can be set from empirical values or estimated with an LSTM network based on historical values.
When the graph neural network is constructed, the time difference between each client or server and the most recently updated server is used as the characteristic value of each node, based on how the data is updated, rather than a characteristic value derived from the computing resources of the nodes as in the prior art. With today's computing power every node is very capable, so using the computing capability of each node as the characteristic value makes the generated synchronization policy inaccurate, increases the computation of the graph neural network, and reduces operating efficiency. The weight of each edge is determined by the number of nodes the transmission channel passes through and the historical failure rate of the transmission channel, which makes the edge weights more objective. This way of setting the node characteristic values and the edge weights is proposed to improve the accuracy of the generated synchronization policy, so that the synchronization policy better matches the condition of each node and the efficiency of data synchronization is improved.
In one embodiment, data synchronization between the servers is performed periodically, based on the data version numbers Snum_i. Because the servers need to be backed up in time, the data between the servers is synchronized at fixed intervals, for example every 1 min or every 5 min, so that the data difference between the servers does not become too large.
In one embodiment, the synchronization policy specifies the time at which each client performs data synchronization, the server from which the synchronized data originates, and the transmission channel over which the synchronized data is transmitted. The invention uses the synchronization policy to determine when each client starts data synchronization, from which server it synchronizes, and over which link the synchronized data is transmitted, thereby ensuring that the client synchronization policy is customized and improving the adaptability of data synchronization.
In one embodiment, the loss function optimized based on the data version numbers in the training samples is defined over two subsets of the training sample set: the training sample set is randomly divided into two training sample subsets, a first training sample subset having L training samples and a second training sample subset having Q training samples, with L not equal to Q; for the k-th and p-th training samples, the loss function uses the time values of the data versions of the server, the time values of the data versions of the clients, the training samples themselves, the output values of each training of the graph neural network, and the corresponding label values.
To improve the training effect of the neural network, the training samples are divided into two subsets that are trained simultaneously, and the loss function is optimized based on the time values used as data version numbers. The purpose of the optimization is to ensure that the loss function converges in time and to improve the training speed without reducing the prediction accuracy of the trained graph neural network, which is an important point of the invention.
For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functions of each element may be implemented in one or more software and/or hardware elements when implemented in the present application.
From the above description of the embodiments, it will be apparent to those skilled in the art that the present application may be implemented by software plus a necessary general-purpose hardware platform. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product, which may be stored in a storage medium such as a ROM/RAM, a magnetic disk, or an optical disk, and which includes several instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform the method described in the embodiments or in parts of the embodiments of the present application.
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solution of the present invention. Although the present invention has been described in detail with reference to the above embodiments, those skilled in the art should understand that modifications and equivalents may be made without departing from the spirit and scope of the invention, which is intended to be encompassed by the claims.

Claims (10)

1. A data synchronization method based on a graph neural network, the method comprising:
a graph construction step of using N servers and M clients as nodes in the graph and using the transmission channels between the N servers and the M clients as edges of the graph, wherein the data in each server has a data version number Snum_i, the data in each client has a data version number Cnum_j, and the graph is used for a graph neural network;
a training step, optimizing a loss function based on a data version number in a training sample, and training the graph neural network by using the optimized loss function to obtain a trained graph neural network;
a synchronization step of determining a synchronization policy of each of the M clients by using the trained graph neural network, and synchronizing data in the clients with data in the server based on the synchronization policy;
wherein N ≥ 2, M ≥ 2, 1 ≤ i ≤ N, and 1 ≤ j ≤ M;
M, N, i, j are natural numbers.
2. The method of claim 1, wherein the data version numbers Snum_i and Cnum_j are the time values ST_i and CT_j of the most recent data update, respectively.
3. The method of claim 2, wherein:
the characteristic value of each server node in the graph is: CS_i = max(ST_1, ST_2, …, ST_N) - ST_i;
the characteristic value of each client node in the graph is: CC_j = max(ST_1, ST_2, …, ST_N) - CT_j;
in the graph, the weights of the edges between the nodes are set as follows: the weight of the edges between server nodes is set to 1, the weight of the edges between client nodes is set to 0, and the weight of the edge between a server node and a client node is determined by the number of other nodes that the transmission channel between server node i and client node j passes through, the historical failure rate of the transmission channel between server node i and client node j, and two constants.
4. The method according to claim 3, characterized in that data synchronization between the servers is performed periodically based on the data version numbers Snum_i.
5. The method of claim 4, wherein the synchronization policy specifies, for each client, the time at which data synchronization is performed, the server from which the synchronized data originates, and the transmission channel over which the synchronized data is transmitted; and wherein the loss function optimized based on the data version numbers in the training samples uses, for the k-th training sample, the time value of the data version of the server, the time value of the data version of the client, the training sample itself, the output value of each training of the graph neural network, and the corresponding label value, where L denotes the total number of training samples.
6. A data synchronization device based on a graph neural network, the device comprising:
a graph construction unit, which uses N servers and M clients as nodes in the graph and the transmission channels between the N servers and the M clients as edges of the graph, wherein the data in each server has a data version number Snum_i, the data in each client has a data version number Cnum_j, and the graph is used for a graph neural network;
the training unit optimizes the loss function based on the data version number in the training sample, and trains the graph neural network by using the optimized loss function to obtain a trained graph neural network;
the synchronization unit is used for determining a synchronization strategy of each client in the M clients by using the trained graph neural network, and synchronizing the data in the clients with the data in the server based on the synchronization strategy;
wherein N ≥ 2, M ≥ 2, 1 ≤ i ≤ N, and 1 ≤ j ≤ M;
M, N, i, j are natural numbers.
7. The apparatus of claim 6, wherein the data version numbers Snum_i and Cnum_j are the time values ST_i and CT_j of the most recent data update, respectively.
8. The apparatus of claim 7, wherein:
the characteristic value of each server node in the graph is: CS_i = max(ST_1, ST_2, …, ST_N) - ST_i;
the characteristic value of each client node in the graph is: CC_j = max(ST_1, ST_2, …, ST_N) - CT_j;
in the graph, the weights of the edges between the nodes are set as follows: the weight of the edges between server nodes is set to 1, the weight of the edges between client nodes is set to 0, and the weight of the edge between a server node and a client node is determined by the number of other nodes that the transmission channel between server node i and client node j passes through, the historical failure rate of the transmission channel between server node i and client node j, and two constants.
9. The apparatus of claim 8, wherein data synchronization between the servers is performed periodically based on the data version numbers Snum_i.
10. A computer readable storage medium having stored thereon computer program code which, when executed by a computer, performs the method of any of the preceding claims 1-5.
CN202310801625.7A 2023-07-03 2023-07-03 Data synchronization method, device and storage medium based on graphic neural network Active CN116561229B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310801625.7A CN116561229B (en) 2023-07-03 2023-07-03 Data synchronization method, device and storage medium based on graphic neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310801625.7A CN116561229B (en) 2023-07-03 2023-07-03 Data synchronization method, device and storage medium based on graphic neural network

Publications (2)

Publication Number Publication Date
CN116561229A (en) 2023-08-08
CN116561229B CN116561229B (en) 2023-09-08

Family

ID=87496735

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310801625.7A Active CN116561229B (en) 2023-07-03 2023-07-03 Data synchronization method, device and storage medium based on graphic neural network

Country Status (1)

Country Link
CN (1) CN116561229B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112149808A (en) * 2020-09-28 2020-12-29 上海交通大学 Method, system and medium for expanding stand-alone graph neural network training to distributed training
CN112783940A (en) * 2020-12-31 2021-05-11 广州大学 Multi-source time series data fault diagnosis method and medium based on graph neural network
CN114861217A (en) * 2022-03-25 2022-08-05 支付宝(杭州)信息技术有限公司 Data synchronization method and device in multi-party combined training
CN115186806A (en) * 2022-04-06 2022-10-14 东北大学 Distributed graph neural network training method supporting cross-node automatic differentiation
CN115311205A (en) * 2022-07-07 2022-11-08 上海工程技术大学 Industrial equipment fault detection method based on pattern neural network federal learning
WO2022250910A1 (en) * 2021-05-23 2022-12-01 Microsoft Technology Licensing, Llc Scaling deep graph learning in distributed setting
CN116208399A (en) * 2023-02-17 2023-06-02 中国电子科技集团公司电子科学研究院 Network malicious behavior detection method and device based on metagraph

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112149808A (en) * 2020-09-28 2020-12-29 上海交通大学 Method, system and medium for expanding stand-alone graph neural network training to distributed training
CN112783940A (en) * 2020-12-31 2021-05-11 广州大学 Multi-source time series data fault diagnosis method and medium based on graph neural network
WO2022250910A1 (en) * 2021-05-23 2022-12-01 Microsoft Technology Licensing, Llc Scaling deep graph learning in distributed setting
CN114861217A (en) * 2022-03-25 2022-08-05 支付宝(杭州)信息技术有限公司 Data synchronization method and device in multi-party combined training
CN115186806A (en) * 2022-04-06 2022-10-14 东北大学 Distributed graph neural network training method supporting cross-node automatic differentiation
CN115311205A (en) * 2022-07-07 2022-11-08 上海工程技术大学 Industrial equipment fault detection method based on pattern neural network federal learning
CN116208399A (en) * 2023-02-17 2023-06-02 中国电子科技集团公司电子科学研究院 Network malicious behavior detection method and device based on metagraph

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SHAILENDER KUMAR et al.: "Convolution Neural Network-Based Detection in Software Defined Networks", 2022 2nd International Conference on Intelligent Technologies (CONIT)
刘其群; 黄河清; 冯文峰: "P2P streaming media playback system based on directed acyclic graph" (基于有向无环图的P2P流媒体播放系统), 通信技术 (Communications Technology), no. 07

Also Published As

Publication number Publication date
CN116561229B (en) 2023-09-08

Similar Documents

Publication Publication Date Title
WO2021057245A1 (en) Bandwidth prediction method and apparatus, electronic device and storage medium
CN110162414B (en) Method and device for realizing artificial intelligent service based on micro-service architecture
Zhang et al. New agent-based proactive migration method and system for big data environment (BDE)
US10069942B2 (en) Method and apparatus for changing configurations
CN105530272B (en) A kind of synchronous method and device using data
EP4300323A1 (en) Data processing method and apparatus for blockchain network, computer device, computer readable storage medium, and computer program product
WO2013171481A2 (en) Mechanism for synchronising devices, system and method
US20240135191A1 (en) Method, apparatus, and system for generating neural network model, device, medium, and program product
CN109542851A (en) File updating method, apparatus and system
CN112925926B (en) Training method and device of multimedia recommendation model, server and storage medium
CN113469325A (en) Layered federated learning method, computer equipment and storage medium for edge aggregation interval adaptive control
US20230107334A1 (en) Computer-based systems configured for persistent state management and configurable execution flow and methods of use thereof
WO2020226821A1 (en) Messaging to enforce operation serialization for consistency of a distributed data structure
US20160212248A1 (en) Retry mechanism for data loading from on-premise datasource to cloud
CN111935242A (en) Data transmission method, device, server and storage medium
CN116561229B (en) Data synchronization method, device and storage medium based on graphic neural network
US11328205B2 (en) Generating featureless service provider matches
CN115577797B (en) Federal learning optimization method and system based on local noise perception
CN108833518B (en) A method of session id is generated based on nginx server
CN114266352B (en) Model training result optimization method, device, storage medium and equipment
CN111104247A (en) Method, apparatus and computer program product for managing data replication
US7958249B2 (en) Highly scalable, fault tolerant file transport using vector exchange
CN116933189A (en) Data detection method and device
CN107707595A (en) A kind of member organizes variation and device
WO2020113435A1 (en) Record transmitting method and device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant