CN116309002A - Graph data storage, access and processing methods, training methods, equipment and media - Google Patents


Info

Publication number: CN116309002A
Application number: CN202310188954.9A
Authority: CN (China)
Other languages: Chinese (zh)
Other versions: CN116309002B (en)
Inventors: 王贤明, 吴志华, 吴鑫烜, 冯丹蕾, 姚雪峰, 于佃海
Current assignee: Beijing Baidu Netcom Science and Technology Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Beijing Baidu Netcom Science and Technology Co Ltd
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202310188954.9A
Publication of CN116309002A; application granted; publication of CN116309002B
Legal status: Granted; active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00: General purpose image data processing
    • G06T1/60: Memory management
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT]
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Generation (AREA)

Abstract

The disclosure provides graph data storage, access and processing methods, a training method, a device and a medium, relating to the technical field of artificial intelligence, and in particular to the technical fields of graph neural networks, computer vision and deep learning. The specific implementation scheme is as follows: in response to receiving a graph data storage request, dividing the graph data to be stored to obtain at least two graph slice data to be stored; obtaining target graph slice data and associated graph slice data from the at least two graph slice data to be stored; storing the target graph slice data to a graphics processing unit (GPU); storing the associated graph slice data to an internal memory; and storing the at least two graph slice data to be stored to an external memory.

Description

Graph data storage, access and processing methods, training methods, equipment and media
This application is a divisional application of the application filed on May 19, 2022, with application number 202210573156.3 and entitled "Graph data storage, access and processing methods, training methods, equipment and media".
Technical Field
The present disclosure relates to the field of artificial intelligence, and in particular to the fields of graph neural network technology, computer vision, and deep learning, and more particularly to graph data storage, access and processing methods, a training method, a device and a medium.
Background
With the continuous development of computer technology, graph data has also developed. Graph data can characterize node and edge information using graph theory, and is widely applied in fields such as knowledge graphs, financial anti-fraud, and social relationship mining.
Disclosure of Invention
The present disclosure provides a graph data storage, access, processing method, training method, device and medium.
According to an aspect of the present disclosure, there is provided a graph data storage method including: in response to receiving a graph data storage request, dividing the graph data to be stored to obtain at least two graph slice data to be stored; obtaining target graph slice data and associated graph slice data from the at least two graph slice data to be stored; storing the target graph slice data to a graphics processing unit (GPU); storing the associated graph slice data to an internal memory; and storing the at least two graph slice data to be stored to an external memory.
According to another aspect of the present disclosure, there is provided a graph data access method including: in response to receiving a graph data access request, acquiring an identification to be accessed; and in a case where it is determined that a matching identification matching the identification to be accessed exists in target graph slice data, obtaining an access result from the target graph slice data according to the matching identification, wherein the target graph slice data is the graph data stored in the GPU according to the method of the present disclosure.
According to another aspect of the present disclosure, there is provided a graph data access method including: in response to receiving an identification to be accessed from a graphics processing unit (GPU), in a case where it is determined that a matching identification matching the identification to be accessed exists in associated graph slice data, obtaining an access result from the associated graph slice data according to the matching identification, wherein the associated graph slice data is the graph data stored in the internal memory according to the method of the present disclosure; and transmitting the access result to the GPU.
According to another aspect of the present disclosure, there is provided a training method of a graph neural network model, including: in response to receiving a model training request, determining at least one target sampling node from target graph slice data based on a sampling strategy, wherein the target graph slice data is the data stored in a graphics processing unit (GPU) according to the method described above in the present disclosure; acquiring at least one first-order neighbor node corresponding to the at least one target sampling node from one of the target graph slice data, associated graph slice data, and at least two graph slice data to be stored, wherein the associated graph slice data is the graph data stored in an internal memory according to the method of the present disclosure, and the at least two graph slice data to be stored is the graph data stored in an external memory according to the method of the present disclosure; obtaining at least first-order sub-graph data according to the target sampling node related data of the at least one target sampling node and the neighbor node related data of the at least one first-order neighbor node; and transmitting the at least first-order sub-graph data to a deep learning platform, so that the deep learning platform trains the graph neural network model with the at least first-order sub-graph data.
According to another aspect of the present disclosure, there is provided a graph data processing method including: inputting target graph data into a graph neural network model to obtain an output result, wherein the graph neural network model is trained by the method according to the disclosure.
According to another aspect of the present disclosure, there is provided a graph data storage device including: a first obtaining module, configured to divide the graph data to be stored in response to receiving a graph data storage request, to obtain at least two graph slice data to be stored; a second obtaining module, configured to obtain target graph slice data and associated graph slice data from the at least two graph slice data to be stored; a first storage module, configured to store the target graph slice data to the GPU; a second storage module, configured to store the associated graph slice data to the internal memory; and a third storage module, configured to store the at least two graph slice data to be stored to the external memory.
According to another aspect of the present disclosure, there is provided a graph data access apparatus including: a first acquisition module, configured to acquire an identification to be accessed in response to receiving a graph data access request; and a second obtaining module, configured to obtain, in a case where it is determined that a matching identification matching the identification to be accessed exists in target graph slice data, an access result from the target graph slice data according to the matching identification, wherein the target graph slice data is the graph data stored in a graphics processing unit (GPU) according to the device described in the disclosure.
According to another aspect of the present disclosure, there is provided a graph data access apparatus including: a third obtaining module, configured to obtain, in response to receiving an identification to be accessed from a GPU, an access result from associated graph slice data according to a matching identification in a case where it is determined that the matching identification matching the identification to be accessed exists in the associated graph slice data, wherein the associated graph slice data is the graph data stored in the internal memory according to the device of the present disclosure; and a second sending module, configured to send the access result to the GPU.
According to another aspect of the present disclosure, there is provided a training apparatus of a graph neural network model, including:
a determining module, configured to determine, in response to receiving a model training request, at least one target sampling node from target graph slice data based on a sampling strategy, wherein the target graph slice data is the data stored in a graphics processing unit (GPU) according to the apparatus described above in the disclosure; a fourth obtaining module, configured to acquire, according to the at least one target sampling node, at least one first-order neighbor node corresponding to the at least one target sampling node from one of the target graph slice data, associated graph slice data, and at least two graph slice data to be stored, wherein the associated graph slice data is the graph data stored in an internal memory according to the apparatus of the disclosure, and the at least two graph slice data to be stored is the graph data stored in an external memory according to the apparatus of the disclosure; a third obtaining module, configured to obtain at least first-order sub-graph data according to the target sampling node related data of the at least one target sampling node and the neighbor node related data of the at least one first-order neighbor node; and a fifth transmitting module, configured to transmit the at least first-order sub-graph data to a deep learning platform, so that the deep learning platform trains the graph neural network model with the at least first-order sub-graph data.
According to another aspect of the present disclosure, there is provided a graph data processing apparatus including: a module configured to input target graph data into a graph neural network model to obtain an output result, wherein the graph neural network model is trained by the apparatus according to the disclosure.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the methods described in the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method described in the present disclosure.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 schematically illustrates an exemplary system architecture to which graph data storage methods, graph data access methods, training methods of graph neural network models, and graph data processing methods and apparatuses may be applied, according to embodiments of the present disclosure;
FIG. 2 schematically illustrates a flow chart of a graph data storage method according to an embodiment of the disclosure;
FIG. 3 schematically illustrates an example schematic diagram of a graph data storage process in accordance with an embodiment of the disclosure;
FIG. 4 schematically illustrates a flow chart of a graph data access method according to an embodiment of the disclosure;
FIG. 5 schematically illustrates a flow chart of a method of graph data access according to another embodiment of the present disclosure;
FIG. 6 schematically illustrates a flowchart of a training method of a graph neural network model according to an embodiment of the present disclosure;
FIG. 7 schematically illustrates an example schematic diagram of a training process of a graph neural network model according to an embodiment of the present disclosure;
FIG. 8 schematically illustrates a flowchart of a graph data processing method according to an embodiment of the disclosure;
FIG. 9 schematically illustrates a block diagram of a graph data storage apparatus according to an embodiment of the present disclosure;
FIG. 10 schematically illustrates a block diagram of a graph data access apparatus according to an embodiment of the disclosure;
FIG. 11 schematically illustrates a block diagram of a graph data access apparatus according to another embodiment of the present disclosure;
FIG. 12 schematically illustrates a block diagram of a training apparatus of a graph neural network model according to an embodiment of the present disclosure;
FIG. 13 schematically illustrates a block diagram of a graph data processing apparatus according to an embodiment of the disclosure; and
fig. 14 schematically illustrates a block diagram of an electronic device adapted to implement a graph data storage method, a graph data access method, a training method of a graph neural network model, and a graph data processing method, according to an embodiment of the disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 schematically illustrates an exemplary system architecture to which a graph data storage method, a graph data access method, a training method of a graph neural network model, and a graph data processing method and apparatus may be applied according to an embodiment of the present disclosure.
It should be noted that fig. 1 is only an example of a system architecture to which embodiments of the present disclosure may be applied to assist those skilled in the art in understanding the technical content of the present disclosure, but does not mean that embodiments of the present disclosure may not be used in other devices, systems, environments, or scenarios.
As shown in fig. 1, a system architecture 100 according to this embodiment may include a graph engine 101, a deep learning platform 102, and a network 103. The network 103 is the medium used to provide a communication link between the graph engine 101 and the deep learning platform 102, and may include various connection types, such as wired and/or wireless communication links.
The memory structure of the graph engine 101 may include a GPU (graphics processing unit) 101_1, an internal memory 101_2, and an external memory 101_3. The internal memory 101_2 may include CPU (central processing unit) memory. The external memory 101_3 may include at least one of: a hard disk, a floppy disk, an optical disk, and a USB flash disk. The hard disk may include at least one of: a solid state disk (SSD) and a mechanical hard disk.
The graph engine 101 may be configured to, in response to receiving a graph data storage request, divide the graph data to be stored to obtain at least two graph slice data to be stored; obtain target graph slice data and associated graph slice data from the at least two graph slice data to be stored; store the target graph slice data to the GPU 101_1; store the associated graph slice data to the internal memory 101_2; and store the at least two graph slice data to be stored to the external memory 101_3.
The GPU 101_1 in the graph engine 101 may, in response to receiving a graph data access request, acquire the identification to be accessed, and, in a case where a matching identification matching the identification to be accessed exists in the target graph slice data, obtain the access result from the target graph slice data according to the matching identification.
The internal memory 101_2 in the graph engine 101 may, in response to receiving the identification to be accessed from the GPU 101_1, obtain the access result from the associated graph slice data according to the matching identification in a case where it is determined that a matching identification matching the identification to be accessed exists in the associated graph slice data, and send the access result to the GPU 101_1.
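As an illustrative sketch (not part of the patent text), the tiered access described above, where a lookup tries the target slice on the GPU first, then the associated slice in internal memory, and finally the full slices on external storage, might look as follows. All function and parameter names are hypothetical, and plain dicts stand in for the three storage tiers:

```python
def access(identifier, gpu_slice, memory_slice, external_slices):
    """Tiered lookup sketch: dict keys play the role of node identifications,
    and the second tuple element records which tier answered the request."""
    if identifier in gpu_slice:            # matching identification on the GPU
        return gpu_slice[identifier], "gpu"
    if identifier in memory_slice:         # fall back to internal memory
        return memory_slice[identifier], "internal"
    for s in external_slices:              # finally search external memory
        if identifier in s:
            return s[identifier], "external"
    return None, "miss"
```

The design mirrors a cache hierarchy: the hot target slice is consulted first so that most accesses never leave the GPU.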
The graph engine 101 may, in response to receiving a model training request, determine at least one target sampling node from the target graph slice data based on a sampling strategy; acquire, according to the at least one target sampling node, at least one first-order neighbor node corresponding to the at least one target sampling node from one of the target graph slice data, the associated graph slice data, and the at least two graph slice data to be stored; obtain at least first-order sub-graph data according to the target sampling node related data of the at least one target sampling node and the neighbor node related data of the at least one first-order neighbor node; and transmit the at least first-order sub-graph data to the deep learning platform 102, so that the deep learning platform 102 trains the graph neural network model with the at least first-order sub-graph data.
The deep learning platform 102 may train the graph neural network model with at least first-order sub-graph data in response to receiving the at least first-order sub-graph data from the graph engine 101.
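The sampling flow described above, choosing target sampling nodes from the target slice and collecting their first-order neighbors into sub-graph data, can be sketched as below. This is not the patent's implementation; `adjacency` is a hypothetical stand-in for the tiered neighbor lookup, and all names are assumptions:

```python
import random

def build_first_order_subgraph(target_nodes, adjacency, num_samples, seed=0):
    """Sketch: sample target sampling nodes from the target slice, look up
    their first-order neighbors (which may live in any storage tier), and
    emit first-order sub-graph data as an edge list."""
    rng = random.Random(seed)
    sampled = rng.sample(sorted(target_nodes),
                         min(num_samples, len(target_nodes)))
    subgraph_edges = []
    for node in sampled:
        for neighbor in adjacency.get(node, []):
            subgraph_edges.append((node, neighbor))
    return sampled, subgraph_edges
```

Higher-order sub-graphs would repeat the neighbor lookup starting from the first-order neighbors.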
The graph engine 101 may be a server or a terminal device. The deep learning platform 102 may be a server, a server cluster, or a terminal device. Various communication client applications may be installed on the terminal device, such as a knowledge reading application, a web browser application, a search application, an instant messaging tool, a mailbox client, and/or social platform software. The terminal device may be any of a variety of electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablet computers, laptop computers, and desktop computers.
The server may be any of various types of servers providing various services. For example, the server may be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system that addresses the drawbacks of difficult management and weak service scalability in traditional physical hosts and VPS (virtual private server) services. The server may also be a server of a distributed system or a server that incorporates a blockchain.
It should be understood that the number of graph engines, deep learning platforms, and networks in fig. 1 are merely illustrative. There may be any number of graph engines, deep learning platforms, and networks, as desired for implementation.
It should be noted that the sequence numbers of the respective operations in the following methods are merely representative of the operations for the purpose of description, and should not be construed as representing the order of execution of the respective operations. The method need not be performed in the exact order shown unless explicitly stated.
Fig. 2 schematically illustrates a flow chart of a graph data storage method according to an embodiment of the disclosure.
As shown in fig. 2, the method 200 includes operations S210-S250.
In operation S210, in response to receiving the graph data storage request, the graph data to be stored is divided, and at least two graph slice data to be stored are obtained.
In operation S220, target graph slice data and associated graph slice data are obtained from at least two graph slice data to be stored.
In operation S230, the target graph slice data is stored to the GPU.
In operation S240, the associated graph slice data is stored to an internal memory.
In operation S250, the at least two graph slice data to be stored are stored to an external memory.
According to embodiments of the present disclosure, a graph data storage request may refer to a request for storing graph data to be stored. Graph data may refer to data characterized by node data and edge data. The node data may include at least one node tag of a node and at least one node attribute corresponding to the node tag. The edge data may include an edge tag of an edge and at least one edge attribute corresponding to the edge tag. A node may be used to characterize an entity, and an edge may be used to characterize the relationship between the two nodes it connects; the edge tag characterizes that relationship. A node may be characterized by a node identification, and an edge by an edge identification. In addition, the edge data related to a node may be regarded as the dependency data of the node and may serve as the node feature data of the node. Thus, the graph data may include node data and node feature data.
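To make the structure above concrete, here is an illustrative sketch of the node/edge data model; it is not part of the patent, and every class and field name is hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    # A node characterizes an entity and is identified by a node identification.
    node_id: int
    labels: list       # at least one node tag
    attributes: dict   # node attributes keyed by node tag

@dataclass
class Edge:
    # An edge characterizes the relationship between the two nodes it connects.
    edge_id: int
    src: int           # identification of one connected node
    dst: int           # identification of the other connected node
    label: str         # edge tag characterizing the relationship
    attributes: dict = field(default_factory=dict)

def node_feature_data(node_id, edges):
    """Edge data related to a node serves as that node's feature data."""
    return [e for e in edges if e.src == node_id or e.dst == node_id]
```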
According to embodiments of the present disclosure, a node may have a neighbor node corresponding to the node. The number of neighbor nodes may include one or more. A neighbor node may refer to a node that has a relationship with the node. The neighbor nodes may have a hierarchical relationship. For example, a neighbor node determined directly from a node may be determined as a first-order neighbor node of the node. The neighbor node determined according to the first-order neighbor node of the node is determined as the second-order neighbor node of the node. Similarly, a node may have at least one first-order neighbor node corresponding to the node.
According to embodiments of the present disclosure, the graph data to be stored refers to graph data that is waiting to be stored, and it may have the same data structure as the graph data described above.
According to an embodiment of the present disclosure, graph slice data may refer to graph data obtained by dividing graph data. For example, the graph data may be partitioned using a graph slicing algorithm to obtain at least two graph slice data. The graph slicing algorithm may include one of the following: a node-based graph slicing algorithm and an edge-based graph slicing algorithm. An edge-based graph slicing algorithm divides the graph data by edges; a node-based graph slicing algorithm divides the graph data by nodes. The graph slicing algorithm should satisfy, as far as possible, the following three conditions. First, the nodes in a slice should be related nodes as much as possible (i.e., the number of edges inside each graph slice data to be stored should be as large as possible, and the number of edges crossing slices as small as possible). Second, the data volume of each graph slice data to be stored should be as equal as possible. Third, the node weight sums of the slices should be as equal to each other as possible. Each graph slice data to be stored has a node weight sum corresponding to it, which may be determined from the node weights of the respective nodes included in that graph slice data. The node weight of a node may be determined according to whether the node has annotation data. For example, if a node has annotation data, its node weight may be set to a number greater than 0 and less than 1; if it does not, its node weight may be set to 0. The node weight setting manner may be configured according to actual service requirements, which is not limited herein.
The reason why the node weight sums are required to be as equal as possible is as follows: suppose all target nodes in the target graph slice data obtained by the graph segmentation algorithm are nodes without annotation data, and sampling uses a conditional sampling strategy that only determines target nodes with annotation data as target sampling nodes. Then no target node in the target graph slice data can satisfy the condition and become a target sampling node, so the target graph slice data is never used, which reduces data utilization efficiency.
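The weight-balancing condition can be illustrated with a minimal greedy sketch (an assumption for illustration only; the patent does not fix a concrete balancing algorithm, and all names here are hypothetical):

```python
def balance_partitions(node_weights, num_slices):
    """Greedy balancing sketch: assign each node to the slice whose current
    node-weight sum is smallest, so the slices' weight sums stay as equal as
    possible. node_weights maps node id -> weight (e.g. 0 for a node without
    annotation data, a value in (0, 1) for a node with annotation data)."""
    slices = [[] for _ in range(num_slices)]
    sums = [0.0] * num_slices
    # Placing heavier nodes first gives the greedy heuristic a better balance.
    for node, w in sorted(node_weights.items(), key=lambda kv: -kv[1]):
        i = sums.index(min(sums))
        slices[i].append(node)
        sums[i] += w
    return slices, sums
```

With balanced weight sums, every slice retains some annotated nodes, so a conditional sampling strategy can draw target sampling nodes from any slice.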
According to an embodiment of the present disclosure, the graph slice data to be stored may refer to graph data obtained by dividing the graph data to be stored. The target graph slice data may be graph slice data to be stored that satisfies a predetermined condition. The associated graph slice data may be graph data to be stored that is derived from the target graph slice data. The predetermined condition may be configured according to actual service requirements, which is not limited herein.
According to an embodiment of the present disclosure, the relationship between the target graph slice data and the associated graph slice data may include one of the following: there is no intersection between them, or there is an intersection between them. In the case where there is no intersection, the associated graph slice data may be referred to as the complementary graph slice data of the target graph slice data. Having an intersection may include at least one of the following: the associated graph slice data includes all of the target graph slice data, or the associated graph slice data includes part of the target graph slice data.
According to an embodiment of the present disclosure, obtaining the target graph slice data and the associated graph slice data from the at least two graph slice data to be stored may include: obtaining the target graph slice data from the at least two graph slice data to be stored, and determining the associated graph slice data from at least one other graph slice data to be stored according to the target graph slice data. The other graph slice data to be stored may refer to any one of the at least two graph slice data to be stored other than the target graph slice data. Determining the associated graph slice data may include: extracting at least part of a predetermined number of other graph slice data to be stored from the at least one other graph slice data to be stored according to the target graph slice data, to obtain the associated graph slice data.
According to embodiments of the present disclosure, after the target graph slice data and the associated graph slice data are obtained, the target graph slice data may be stored to the GPU, the associated graph slice data may be stored to the internal memory, and the at least two graph slice data to be stored may be stored to the external memory.
According to the embodiment of the disclosure, the target graph slice data is stored to the GPU, the associated graph slice data is stored to the internal memory, and the at least two graph slice data to be stored are stored to the external memory, thereby realizing single-machine three-level storage, reducing communication overhead, and expanding the storage capacity of a single machine.
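Operations S210 to S250 can be sketched end to end as follows. This is an illustrative outline only: `split`, `choose_target`, and `choose_associated` are hypothetical stand-ins for the graph segmentation algorithm and selection strategies described above, and dicts stand in for the three storage tiers:

```python
def store_graph_data(graph_data, split, choose_target, choose_associated,
                     gpu, internal_memory, external_memory):
    """Sketch of the storage method's five operations."""
    slices = split(graph_data)                    # S210: at least two slices
    target = choose_target(slices)                # S220: target slice data
    associated = choose_associated(slices, target)
    gpu.update(target)                            # S230: hot data on the GPU
    internal_memory.update(associated)            # S240: warm data in memory
    for s in slices:                              # S250: full copy on disk
        external_memory.update(s)
    return slices, target, associated
```

Note that external memory receives all slices, so it always holds a complete copy of the graph; the GPU and internal memory hold only the hot and warm subsets.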
According to an embodiment of the present disclosure, operation S210 may include the following operations.
In response to receiving the graph data storage request, the graph data to be stored is divided by a node-based graph segmentation algorithm to obtain at least two graph slice data to be stored.
According to embodiments of the present disclosure, node data and neighbor node data may be determined from graph data to be stored using a node-based graph cut algorithm. And obtaining at least two graph slice data to be stored according to the node data and the neighbor node data.
According to an embodiment of the present disclosure, operation S220 may include the following operations.
The target graph slice data is determined from the at least two graph slice data to be stored, and the associated graph slice data is determined from the graph data to be stored according to the target graph slice data.
According to an embodiment of the present disclosure, the target graph slice data may be determined from the at least two graph slice data to be stored based on a predetermined determination strategy. The predetermined determination strategy may include a random determination strategy, i.e., randomly determining the target graph slice data from the at least two graph slice data to be stored. After the target graph slice data is determined, a target predetermined node corresponding to a first predetermined node in the target graph slice data may be determined from the other graph data to be stored according to the degree of association between the first predetermined node and a second predetermined node in the other graph data to be stored. The degree of association may be determined according to at least one of the out-degree and the in-degree of the first predetermined node, or according to the number of edges between the second predetermined node and the first predetermined node; for example, the second predetermined node is considered associated when the number of edges between it and the first predetermined node is greater than or equal to a predetermined edge number threshold. The associated graph slice data corresponding to the target graph slice data is then obtained from the target predetermined nodes corresponding to the first predetermined nodes. The predetermined edge number threshold may be configured according to actual service requirements, which is not limited herein.
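The edge-count criterion can be sketched as follows (illustration only; the function and parameter names are hypothetical, and an edge list stands in for the graph slice data):

```python
def select_associated_nodes(target_nodes, cross_edges, edge_threshold):
    """Sketch: a node outside the target slice joins the associated slice data
    when the number of edges linking it to nodes of the target slice reaches a
    predetermined edge-number threshold (its degree of association)."""
    counts = {}
    for src, dst in cross_edges:
        if src in target_nodes:
            counts[dst] = counts.get(dst, 0) + 1
        elif dst in target_nodes:
            counts[src] = counts.get(src, 0) + 1
    return {n for n, c in counts.items() if c >= edge_threshold}
```

Raising the threshold shrinks the associated slice toward only the most strongly connected neighbors, trading internal-memory footprint against hit rate.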
According to embodiments of the present disclosure, the GPU may be at least one GPU card.
According to an embodiment of the present disclosure, operation S230 may include the following operations.
And dividing the target graph slice data to obtain at least one target graph partition data. At least one target graph partition data is stored to at least one GPU card.
According to an embodiment of the present disclosure, the target graph slice data may include target node data and target node feature data of at least one target node. The target node may be characterized by a target node identification. The target graph slice data may be partitioned according to the target node identifications of the target nodes to obtain at least one piece of target graph partition (i.e., Partition) data. For example, at least one target node identification may be processed to obtain at least one target node identification processed value. Target node data whose target node identifications correspond to the same target node identification processed value is determined as one piece of target graph partition data; thus, the target node data in each piece of target graph partition data belongs to target nodes corresponding to the same target node identification processed value.
According to the embodiment of the disclosure, at least one target graph partition data can be stored to at least one GPU card according to the association relation between the GPU card and the target node identification processing value. Each GPU card may be used to store target graph partition data corresponding to the GPU card.
For example, N GPU cards may be included, namely GPU card 1, GPU card 2, … …, GPU card n, … …, GPU card N-1, and GPU card N. N pieces of target graph partition data may be included, namely target graph partition data 1, target graph partition data 2, … …, target graph partition data n, … …, target graph partition data N-1, and target graph partition data N. The target node identification processed value corresponding to GPU card n is f(n), and the target node identification processed value corresponding to target graph partition data n is also f(n). N may be an integer greater than or equal to 1, and n ∈ {1, 2, … …, (N-1), N}.
According to an embodiment of the present disclosure, dividing the target graph slice data to obtain at least one target graph partition data may include the following operations.
Dividing the target graph slice data based on a first hash algorithm to obtain at least one target graph partition data.
According to the embodiment of the disclosure, the node identification can be processed by utilizing a hash algorithm to obtain a node identification processing value. The node identification processing value obtained by processing the node identification using the hash algorithm may be referred to as a hash value.
According to the embodiment of the disclosure, a target node identifier used for representing a target node in target graph cut data can be processed by using a first hash algorithm to obtain at least one first hash value. And determining target node data of the target node identification corresponding to the same first hash value as target graph partition data.
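The hash-based partitioning described above can be sketched as follows. Python's built-in `hash` with a modulo stands in for the "first hash algorithm" (an assumption; the patent does not fix a specific hash function), so nodes whose identifications hash to the same value land in the same partition, i.e., on the same GPU card.

```python
def partition_by_hash(node_data, num_cards):
    """Split slice data into per-card partitions: nodes whose hashed
    identification maps to the same value fall into the same partition.

    `node_data` maps a node identification to its feature data.
    """
    partitions = [dict() for _ in range(num_cards)]
    for node_id, feats in node_data.items():
        card = hash(node_id) % num_cards  # node identification processed value f(n)
        partitions[card][node_id] = feats
    return partitions
```

The same sketch applies to the "second hash algorithm" used for the association graph slice data, with internal storage areas in place of GPU cards.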
According to an embodiment of the present disclosure, the internal memory may include at least one internal storage area.
Storing the associative map slice data to the internal memory according to embodiments of the present disclosure may include the following operations.
Dividing the correlation map slice data to obtain at least one correlation map partition data. At least one associative map partition data is stored to at least one internal storage area.
According to an embodiment of the present disclosure, each internal storage area may be used to store the association graph partition data corresponding to that internal storage area. According to an embodiment of the present disclosure, the association graph slice data may include association node data and association node feature data of at least one association node. The association node may be characterized by an association node identification. The association graph slice data may be divided according to the association node identifications of the association nodes to obtain at least one piece of association graph partition data. For example, at least one association node identification may be processed to obtain at least one association node identification processing value. Association node data whose association node identifications correspond to the same association node identification processing value is determined as one piece of association graph partition data; thus, the association node data in each piece of association graph partition data belongs to association nodes corresponding to the same association node identification processing value.
According to an embodiment of the present disclosure, dividing the association graph slice data to obtain at least one association graph partition data may include the following operations.
And dividing the associated graph slice data based on a second hash algorithm to obtain at least one associated graph partition data.
According to the embodiment of the disclosure, the association node identifier for representing the association node in the association graph cut data can be processed by using a second hash algorithm to obtain at least one second hash value. And determining the association node data of the association node identification corresponding to the same second hash value as association diagram partition data.
According to an embodiment of the present disclosure, the above-described graph data storage method may further include the following operations.
And responding to the detection of the storage switching instruction, and obtaining new target graph slice data and new associated graph slice data according to at least two graph slice data to be stored. And deleting the target graph slice data stored in the GPU. And deleting the associated graph slice data stored in the internal memory. The new target graph slice data is stored to the GPU. The new associative map slice data is stored to the internal memory.
According to embodiments of the present disclosure, the storage switch instruction may refer to an instruction for redetermining the target graph slice data stored in the GPU and the associated graph slice data stored in the internal memory. The storage switch instruction may be generated in response to receiving a training completion instruction for the deep learning platform.
According to embodiments of the present disclosure, in the event that a store switch instruction is detected, the target graph slice data stored in the GPU may be deleted. And deleting the associated graph slice data stored in the internal memory. The new target graph slice data and the new associated graph slice data can be obtained from at least two graph slice data to be stored using the methods described above. The new target graph slice data is stored to the GPU. The new associative map slice data is stored to the internal memory.
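The delete-then-restore flow triggered by a storage switch instruction can be sketched as follows, assuming the GPU and internal-memory stores are plain dictionaries and `pick_new` encapsulates the re-determination of target and associated slice data (all names are illustrative):

```python
def switch_storage(gpu_store, mem_store, slices, pick_new):
    """Handle a storage switch instruction: drop the current target and
    associated slice data, re-derive new ones from the slices to be
    stored, and store them back to the GPU and internal memory.

    `pick_new(slices)` returns (new_target_data, new_associated_data).
    """
    gpu_store.clear()  # delete target graph slice data stored in the GPU
    mem_store.clear()  # delete associated slice data stored in internal memory
    new_target, new_assoc = pick_new(slices)
    gpu_store.update(new_target)  # store new target slice data to the GPU
    mem_store.update(new_assoc)   # store new associated slice data to memory
    return gpu_store, mem_store
```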
According to an embodiment of the present disclosure, the relationship between the target graph slice data and the associated graph slice data may include one of the following: there is no intersection between the target graph slice data and the associated graph slice data, or the associated graph slice data includes the target graph slice data.
According to an embodiment of the present disclosure, the target map slice data may include at least one of: node-related data and neighbor node-related data.
According to embodiments of the present disclosure, the node-related data may include at least one of: node data and node feature data.
According to embodiments of the present disclosure, the neighbor node related data may include at least one of: neighbor node data and neighbor node feature data.
According to an embodiment of the present disclosure, the associative map slice data may include at least one of: associated node related data and associated neighbor node related data.
According to an embodiment of the present disclosure, the associated node-related data may include at least one of: associated node data and associated node feature data.
According to embodiments of the present disclosure, the associated neighbor node related data may include at least one of: associated neighbor node data and associated neighbor node feature data.
According to embodiments of the present disclosure, node characteristic data of a node may be determined from edge data associated with the node.
According to an embodiment of the present disclosure, node data and node characteristic data of a node are stored in association on the same storage device. For example, the target node data for target node a is stored on the GPU, the target node feature data for target node a is also stored on the GPU, and the target node data for target node a and the target node feature data may be stored on the same GPU card.
According to the embodiment of the disclosure, since the node characteristic data and the node data are stored in the same storage device in an associated manner, the access efficiency of the subsequent data can be improved.
A graph data storage method according to an embodiment of the present disclosure is further described below with reference to fig. 3, in conjunction with a specific embodiment.
Fig. 3 schematically illustrates an example schematic diagram of a graph data storage process according to an embodiment of the disclosure.
As shown in fig. 3, in 300, the graph engine may include GPU video memory 301, internal memory 302 of the CPU, and external memory 303. GPU video memory 301 may include T GPU cards, namely GPU card 301_1, GPU card 301_2, … …, GPU card 301_t, … …, GPU card 301_(T-1), and GPU card 301_T. T is an integer greater than 1. t is an integer greater than or equal to 1 and less than T.
The internal memory may include S internal storage areas. The S internal storage areas may include internal storage area 302_1, internal storage area 302_2, … …, internal storage area 302_s, … …, internal storage area 302_(S-1), and internal storage area 302_S. S is an integer greater than 1. s is an integer greater than or equal to 1 and less than S.
The graph data to be stored may be divided to obtain R pieces of graph slice data to be stored, namely graph slice data to be stored 304_1, graph slice data to be stored 304_2, … …, graph slice data to be stored 304_r, … …, graph slice data to be stored 304_(R-1), and graph slice data to be stored 304_R. R is an integer greater than 1. r is an integer greater than or equal to 1 and less than R.
The graph slice data to be stored 304_2 may be determined as the target graph slice data from among the R pieces of graph slice data to be stored. The associated graph slice data is then determined from the graph data to be stored according to the target graph slice data.
The target graph slice data may be partitioned to obtain T pieces of target graph partition data, namely target graph partition data 304_2_1, target graph partition data 304_2_2, … …, target graph partition data 304_2_t, … …, target graph partition data 304_2_(T-1), and target graph partition data 304_2_T. The target graph partition data 304_2_t is stored to the GPU card 301_t.
The associated graph slice data may be partitioned to obtain S pieces of association graph partition data, namely association graph partition data 305_1, association graph partition data 305_2, … …, association graph partition data 305_s, … …, association graph partition data 305_(S-1), and association graph partition data 305_S. The association graph partition data 305_s is stored to the internal storage area 302_s.
The R pieces of graph slice data to be stored are stored to the external memory 303.
Fig. 4 schematically illustrates a flowchart of a graph data access method according to an embodiment of the present disclosure.
As shown in fig. 4, the method 400 includes operations S410 to S420.
In operation S410, in response to receiving the graph data access request, an identification to be accessed is acquired.
In operation S420, in the case where it is determined that there is a matching identifier matching the identifier to be accessed in the target graph slice data, an access result is acquired from the target graph slice data according to the matching identifier.
According to an embodiment of the present disclosure, the target graph slice data may be graph data to be stored in the GPU in the graph data storage method according to the embodiment of the present disclosure.
According to embodiments of the present disclosure, a graph data access request may refer to a request for accessing graph data to be stored. The access may include at least one of: add, delete, modify, query, and sample. The identity to be accessed may comprise a node identity to be accessed. The matching identification may include a matching node identification.
According to an embodiment of the present disclosure, the GPU may determine, after acquiring the identifier to be accessed, whether there is a matching identifier in the target graph slice data that matches the identifier to be accessed. In the case that the matching identifier matched with the identifier to be accessed exists in the target graph slice data, the graph data corresponding to the matching identifier can be acquired from the target graph slice data, and the graph data corresponding to the matching identifier is determined as the access result. For example, the GPU may determine whether there is a matching identification in the target graph slice data that is consistent with the identification to be accessed. In the case that the matching identifier consistent with the identifier to be accessed exists in the target graph slice data, the graph data corresponding to the matching identifier can be acquired from the target graph slice data, and the graph data corresponding to the matching identifier is determined as the access result. The identity to be accessed may be a node identity to be accessed. The matching identity may be a matching node identity.
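The matching step above amounts to an identifier lookup in the target graph slice data. A minimal sketch, assuming the slice data is a mapping from node identification to graph data (an assumption about the in-memory layout):

```python
def access_target_slice(target_slice, ident):
    """Return the graph data for `ident` if a matching identification
    exists in the target graph slice data; otherwise return None so the
    caller can fall through to the next storage tier.
    """
    if ident in target_slice:       # a matching identification exists
        return target_slice[ident]  # graph data corresponding to the match
    return None                     # no match: escalate the request
```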
According to the embodiment of the disclosure, the access result can be obtained from the GPU, so that the access speed is improved, and the communication overhead is reduced.
According to embodiments of the present disclosure, a GPU may include at least one GPU card.
According to an embodiment of the present disclosure, operation S420 may include the following operations.
And under the condition that the current target graph partition data is determined to have the matching identification matched with the identification to be accessed, acquiring an access result from the target graph slice data according to the matching identification. And under the condition that the matching identification matched with the identification to be accessed does not exist in the partition data of the current target graph, determining the GPU card corresponding to the identification to be accessed. And sending the identification to be accessed to the GPU card corresponding to the identification to be accessed, so that the GPU card corresponding to the identification to be accessed obtains an access result from the target graph slice data according to the matching identification under the condition that the matching identification matched with the identification to be accessed exists in the target graph partition data of the GPU card corresponding to the identification to be accessed. And responding to the received access result from the GPU card corresponding to the identification to be accessed.
According to embodiments of the present disclosure, a GPU card that receives a map data access request may be referred to as a current GPU card. The target map partition data stored in the current GPU card is referred to as current target map partition data.
According to an embodiment of the disclosure, the current GPU card determines whether there is a matching identification in the current target graph partition data that matches the identification to be accessed. Under the condition that the current GPU card determines that the matching identification matched with the identification to be accessed exists in the current target graph partition data, an access result can be obtained from the current target graph slice data according to the matching identification.
According to the embodiment of the disclosure, when determining that the matching identifier matched with the identifier to be accessed does not exist in the current target graph partition data, the current GPU card may determine the GPU card corresponding to the identifier to be accessed according to the identifier to be accessed. The GPU card corresponding to the identity to be accessed may comprise at least one. The current GPU card may send the identifier to be accessed to the GPU card corresponding to the identifier to be accessed. The GPU card corresponding to the identity to be accessed may determine whether there is a matching identity matching the identity to be accessed in the target graph partition data of the GPU card corresponding to the identity to be accessed. When the GPU card corresponding to the identification to be accessed determines that the matching identification matched with the identification to be accessed exists in the GPU card corresponding to the identification to be accessed, the access result can be obtained from the target graph segmentation data corresponding to the identification to be accessed according to the matching identification. The GPU card corresponding to the identification to be accessed may send the access result to the current GPU card.
According to an embodiment of the present disclosure, the above-described graph data access method may further include the following operations.
And under the condition that the matching identification matched with the identification to be accessed does not exist in the target graph partition data of the GPU card corresponding to the identification to be accessed, the identification to be accessed is sent to the internal memory, so that the internal memory obtains an access result from the associated graph slice data according to the matching identification under the condition that the internal memory determines that the matching identification matched with the identification to be accessed exists in the associated graph slice data. The access result is then received from the GPU card corresponding to the identification to be accessed; that GPU card itself obtained the access result in response to receiving it from the internal memory.
According to an embodiment of the present disclosure, the associated graph slice data may be graph data to be stored in an internal memory in the graph data storage method according to an embodiment of the present disclosure.
According to the embodiment of the disclosure, the GPU card corresponding to the identifier to be accessed may send the identifier to be accessed to the internal memory when it is determined that the GPU card corresponding to the identifier to be accessed does not have the matching identifier matching the identifier to be accessed. The internal memory may determine whether there is a matching identification in the associative map slice data that matches the identification to be accessed. And under the condition that the internal memory determines that the matching identification matched with the identification to be accessed exists in the associated graph slice data, the access result can be obtained from the associated graph slice data according to the matching identification. The internal memory may send the access result to the GPU card corresponding to the identification to be accessed. The GPU card corresponding to the identification to be accessed may send the access result to the current GPU card.
According to the embodiment of the disclosure, the internal memory may send the identifier to be accessed to the external memory under the condition that it is determined that the matching identifier matched with the identifier to be accessed does not exist in the associated graph slice data, so that the external memory obtains the access result from at least two graph slice data to be stored according to the identifier to be accessed. The external memory may send the access result to the internal memory. The internal memory may send the access result to the GPU card corresponding to the identification to be accessed. The GPU card corresponding to the identification to be accessed may send the access result to the current GPU card.
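The escalation path described above — owning GPU card, then internal memory, then external memory — can be condensed into a tiered lookup. A sketch under the assumption that each tier is a plain dictionary and `owner_card` maps an identification to its GPU card (hypothetical names, not the patent's API):

```python
def tiered_lookup(ident, gpu_cards, owner_card, memory, external):
    """Resolve an access request through the storage tiers in order:
    the GPU card owning the identification, the internal memory, and
    finally the external memory. Returns None if no tier has a match.
    """
    tiers = (gpu_cards[owner_card(ident)], memory, external)
    for tier in tiers:
        if ident in tier:
            return tier[ident]  # first tier with a matching identification wins
    return None
```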
According to embodiments of the present disclosure, any one piece of target graph partition data may be queried and sampled by multiple GPU cards. The query and sampling functions of the multiple GPU cards support a high degree of concurrency; a GPU card may initiate requests to other GPU cards at the same time. In addition, the concurrency of the GPU is extremely high, and the sampling speed of the GPU may be more than 400 times that of the internal memory.
According to an embodiment of the present disclosure, the above-described graph data access method may further include the following operations.
And creating a first access task according to the identification to be accessed. The first access task is added to a first task queue corresponding to the GPU.
According to an embodiment of the present disclosure, operation S420 may include the following operations.
And acquiring a first head task from a task queue corresponding to the GPU by using a thread corresponding to the GPU. And acquiring the identification to be accessed from the first head task by utilizing the thread corresponding to the GPU under the condition that the first head task is determined to be the first access task. And acquiring an access result from the target graph slice data according to the matching identification under the condition that the thread corresponding to the GPU determines that the matching identification matched with the identification to be accessed exists in the target graph slice data.
According to embodiments of the present disclosure, a thread corresponding to a GPU may perform a first access task in the manner of a task queue. The thread corresponding to the GPU may access a first task queue corresponding to the GPU, and obtain a first head task in the first task queue. In the case where the thread corresponding to the GPU determines that the first head task is the first access task, the thread corresponding to the GPU may perform the first access task.
According to an embodiment of the present disclosure, the above-described graph data access method may further include the following operations.
And deleting the first access task from the first task queue by utilizing a thread corresponding to the GPU under the condition that the execution of the first access task is determined to be finished.
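The task-queue execution model above — create an access task, enqueue it, have the corresponding thread take the head task, execute it, and delete it on completion — can be sketched with Python's standard `queue` and `threading` modules. The structure is hypothetical; the patent does not specify an implementation:

```python
import queue
import threading

def run_access_worker(task_queue, target_slice, results):
    """Start one worker thread that drains a task queue: it takes the
    head task (an identification to be accessed), looks it up in the
    target slice data, and removes the task once execution finishes.
    A None task is a sentinel that stops the worker.
    """
    def worker():
        while True:
            ident = task_queue.get()   # acquire the head task
            if ident is None:
                task_queue.task_done()
                break
            results[ident] = target_slice.get(ident)  # execute the access task
            task_queue.task_done()     # delete the finished task from the queue
    t = threading.Thread(target=worker)
    t.start()
    return t
```

Because each queue is drained by its own thread, callers never touch the slice data concurrently, which matches the lock-free rationale given later for the task-queue design.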
Fig. 5 schematically illustrates a flow chart of a graph data access method according to another embodiment of the present disclosure.
As shown in fig. 5, the method 500 includes operations S510 to S520.
In operation S510, in response to receiving the identification to be accessed from the graphics processor GPU, in a case where it is determined that there is a matching identification matching the identification to be accessed in the associated graph slice data, an access result is acquired from the associated graph slice data according to the matching identification.
In operation S520, the access result is transmitted to the GPU.
According to an embodiment of the present disclosure, the associated graph slice data may be graph data to be stored in an internal memory in the graph data storage method according to an embodiment of the present disclosure.
According to an embodiment of the present disclosure, the above-described graph data storage method may further include the following operations.
And under the condition that the matching identification matched with the identification to be accessed does not exist in the associated graph slice data, sending the identification to be accessed to an external memory so that the external memory can acquire access results from at least two graph slice data to be stored. And in response to receiving the access result from the external memory, sending the access result to the GPU.
According to an embodiment of the present disclosure, the at least two pieces of map slice data to be stored may be data to be stored in a map data storage method according to an embodiment of the present disclosure.
According to an embodiment of the present disclosure, operation S510 may include the following operations.
And in response to receiving the identification to be accessed from the GPU, creating a second access task according to the identification to be accessed. The second access task is added to a second task queue corresponding to the internal memory. And acquiring a second head task from the second task queue by using a thread corresponding to the internal memory. And acquiring the identification to be accessed from the second head task by utilizing the thread corresponding to the internal memory under the condition that the second head task is determined to be the second access task. And acquiring an access result from the associated graph slice data according to the matching identification under the condition that the thread corresponding to the internal memory determines that the matching identification matched with the identification to be accessed exists in the associated graph slice data.
According to embodiments of the present disclosure, a thread corresponding to the internal memory may perform the second access task in a task queue. The thread corresponding to the internal memory may access a second task queue corresponding to the internal memory, and obtain a second head task in the second task queue. If the thread corresponding to the internal memory determines that the second head task is the second access task, the thread corresponding to the internal memory may execute the second access task.
According to an embodiment of the present disclosure, the internal memory may include at least one internal storage area. The association graph partition data may be stored in an internal storage area. There may be a thread corresponding to each internal storage area, and a second task queue corresponding to each internal storage area.
According to the embodiment of the disclosure, the task queue design means that consistency problems caused by concurrent threads need not be considered during subsequent strategy development, so development can focus on the data processing logic. In addition, it avoids developers locking and unlocking during data operations, which would otherwise increase system delay; efficiency is thereby improved, development cost is reduced, and the probability of error is reduced.
According to an embodiment of the present disclosure, the above-described graph data access method may further include the following operations.
And deleting the second access task from the second task queue by utilizing the thread corresponding to the internal memory under the condition that the execution of the second access task is determined to be finished.
Fig. 6 schematically illustrates a flowchart of a graph neural network model training method according to an embodiment of the present disclosure.
As shown in fig. 6, the method 600 includes operations S610 to S640.
In response to receiving the model training request, at least one target sampling node is determined from the target graph slice data based on the sampling policy in operation S610.
In operation S620, at least one first-order neighbor node corresponding to the at least one target sampling node is acquired from one of the target graph slice data, the associated graph slice data, and the at least two graph slice data to be stored according to the at least one target sampling node.
In operation S630, at least one first-order sub-graph data is obtained according to the target sampling node related data of at least one target sampling node and the neighbor node related data of at least one first-order neighbor node.
In operation S640, the at least first-order sub-graph data is transmitted to the deep learning platform so that the deep learning platform trains the graph neural network model using the at least first-order sub-graph data.
According to an embodiment of the present disclosure, the target graph slice data may be data to be stored in the GPU in the graph data storage method according to the embodiment of the present disclosure. The associated graph slice data may be graph data to be stored in the internal memory in the graph data storage method according to an embodiment of the present disclosure. The at least two map slice data to be stored may be map data to be stored in an external memory in a map data storage method according to an embodiment of the present disclosure.
According to an embodiment of the present disclosure, the graph neural network model may include one of: a graph convolutional network (Graph Convolutional Network, GCN) model, a graph recurrent network (Graph Recurrent Network, GRN) model, a graph attention network (Graph Attention Network, GAT) model, and a graph residual network model. A node vector of a node may be generated using the graph neural network model.
According to embodiments of the present disclosure, a model training request may refer to a request for training a graph neural network model. The model training request may be a model training request received by the graph engine from the client. Alternatively, the model training request may be generated by the graph engine in response to model training operations that detect object inputs.
According to embodiments of the present disclosure, a sampling policy may refer to a policy for determining sampling nodes. The sampling strategy may include one of: traversing the sampling strategy and the conditional sampling strategy.
According to embodiments of the present disclosure, graph data to be stored may be determined as sample graph data for training a graph neural network model. The graph data to be stored can be stored according to the graph data storage method according to the embodiment of the disclosure.
According to embodiments of the present disclosure, at least one target sampling node may be determined from the target graph slice data based on a sampling policy. For at least one target sampling node, at least one first-order neighbor node corresponding to the target sampling node is determined. For example, it may be determined from the target sampling node whether there is a first-order neighbor node corresponding to the target sampling node in the target graph slice data stored in the GPU. And acquiring the first-order neighbor node corresponding to the target sampling node from the target graph slice data stored in the GPU under the condition that the first-order neighbor node corresponding to the target sampling node exists in the target graph slice data stored in the GPU. In the case where it is determined that there is no first-order neighbor node corresponding to the target sampling node in the target graph slice data stored in the GPU, it may be determined whether there is a first-order neighbor node corresponding to the target sampling node in the associated graph slice data stored in the internal memory. When it is determined that the first-order neighbor node corresponding to the target sampling node exists in the correlation map slice data stored in the internal memory, the first-order neighbor node corresponding to the target sampling node is acquired from the correlation map slice data stored in the internal memory. And under the condition that the first-order neighbor node corresponding to the target sampling node does not exist in the associated graph slice data stored in the internal memory, acquiring the first-order neighbor node corresponding to the target sampling node from at least two graph slice data to be stored in the external memory. Thereby, at least one first-order neighbor node corresponding to the target sampling node can be obtained.
According to embodiments of the present disclosure, at least one neighbor node corresponding to a first-order neighbor node may be determined for a first-order neighbor node of at least one first-order neighbor node with respect to a target sampling node in a similar manner as described above. The neighbor node corresponding to the first-order neighbor node may be referred to as a second-order neighbor node of the target sampling node. Similarly, a neighbor node of a predetermined order corresponding to the target sampling node may be obtained. The predetermined order may be configured according to actual service requirements, and is not limited herein. For example, the predetermined order may be a third order.
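The expansion to a predetermined order can be sketched as repeated first-order lookups. This is a hypothetical illustration: `lookup` stands for any callable mapping a node id to its first-order neighbor list, e.g. the tiered GPU/memory/external lookup above.

```python
def sample_k_hop(seed, k, lookup):
    """Collect the seed node's neighbors up to order k, one hop at a time."""
    frontier = [seed]
    visited = {seed}
    hops = []
    for _ in range(k):
        next_frontier = []
        for node in frontier:
            for nb in lookup(node):
                if nb not in visited:   # each node is collected at its lowest order
                    visited.add(nb)
                    next_frontier.append(nb)
        hops.append(next_frontier)
        frontier = next_frontier
    return hops

# Example adjacency for illustration; hops[0] are first-order neighbors,
# hops[1] the second-order neighbors, and so on.
adj = {1: [2, 3], 2: [4], 3: [4, 5], 4: [], 5: []}
hops = sample_k_hop(1, 2, lambda n: adj.get(n, []))
```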
According to the embodiment of the disclosure, the above neighbor node sampling method searches the GPU first, searches the internal memory if it is determined that the GPU holds no corresponding neighbor node, and searches the external memory if it is determined that the internal memory holds no corresponding neighbor node. The sampling mode may also be configured according to actual service requirements. For example, neighbor nodes may be looked up only in the GPU, and in the event that no neighbor node exists in the GPU, the internal memory and the external memory are not accessed. Alternatively, the lookup may proceed from the GPU to the internal memory when no neighbor node exists in the GPU, and stop without accessing the external memory when no neighbor node exists in the internal memory.
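The configurable fallback depth can be expressed as a small policy enum that limits how many tiers are consulted. All names here are illustrative, not the patent's API.

```python
from enum import Enum

class FallbackPolicy(Enum):
    GPU_ONLY = 1             # look up in the GPU only
    GPU_MEMORY = 2           # fall back to internal memory
    GPU_MEMORY_EXTERNAL = 3  # fall back all the way to external memory

def lookup_with_policy(node_id, gpu_tier, memory_tier, external_tier, policy):
    """Search the tiers in speed order, stopping where the policy says to."""
    tiers = [gpu_tier, memory_tier, external_tier][: policy.value]
    for tier in tiers:
        if node_id in tier:
            return tier[node_id]
    return None  # miss within the allowed tiers

gpu, mem, ext = {1: [2]}, {3: [4]}, {5: [6]}
```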
According to the embodiment of the disclosure, for each target sampling node of the at least one target sampling node, at least first-order sub-graph data corresponding to the target sampling node may be obtained according to the target sampling node related data of the target sampling node and the neighbor node related data of the at least first-order neighbor nodes corresponding to the target sampling node. Thereby, at least first-order sub-graph data corresponding to each of the at least one target sampling node can be obtained.
According to embodiments of the present disclosure, the graph engine may send the at least first-order sub-graph data corresponding to each of the at least one target sampling node to the deep learning platform. The deep learning platform may train the graph neural network model using the at least first-order sub-graph data.
According to the embodiment of the disclosure, taking sampling as an example, since the graph slicing algorithm ensures that node association within each graph slice data to be stored is relatively high, there is a relatively high probability that first-order neighbor nodes can be acquired from the GPU when first-order neighbor sampling is performed on the GPU. Likewise, when second-order neighbor nodes are sampled based on the first-order neighbor nodes, the sampling operation is again likely to occur on the GPU. Only when no neighbor node exists in the GPU is a query request initiated to the internal memory and then the external memory. Because the GPU has the fastest access speed and the internal memory the second fastest, most access operations are performed on the GPU and the internal memory, which improves data processing efficiency. In addition, user-configured sampling policies may also be supported.
According to the embodiment of the disclosure, the training of the graph neural network model is performed based on the graph data storage scheme and the graph data access scheme, so that the model training speed is improved.
According to an embodiment of the present disclosure, the training method of the graph neural network model may further include the following operations.
In response to receiving a training completion instruction from the deep learning platform, a storage switching instruction is generated. In response to detecting the storage switching instruction, new target graph slice data and new associated graph slice data are obtained according to the at least two graph slice data to be stored, and the operations of obtaining at least first-order sub-graph data and sending the at least first-order sub-graph data to the deep learning platform are repeated, so that the deep learning platform trains the graph neural network model using the at least first-order sub-graph data, until a predetermined number of the at least two graph slice data to be stored have each been determined as the target graph slice data and used to perform the training operation of the graph neural network model.
According to the embodiment of the disclosure, after the training operation for the graph neural network model using the target graph slice data is completed, new target graph slice data and new associated graph slice data can be obtained according to the at least two graph slice data to be stored.
According to an embodiment of the present disclosure, the following operations may be repeatedly performed until a predetermined end condition is satisfied. At least one new target sampling node is determined from the new target graph slice data based on the sampling policy. And acquiring at least one first-order new neighbor node corresponding to the at least one new target sampling node from one of the new target graph slice data, the new associated graph slice data and the at least two graph slice data to be stored according to the at least one new target sampling node. And obtaining at least first-order new sub-graph data according to the target sampling node related data of at least one new target sampling node and the neighbor node related data of at least first-order new neighbor nodes. And sending the at least first-order new sub-graph data to the deep learning platform so that the deep learning platform trains the graph neural network model by utilizing the at least first-order new sub-graph data.
According to an embodiment of the present disclosure, the predetermined end condition may refer to the case where a predetermined number of the at least two graph slice data to be stored have each been determined as the target graph slice data and the training operation of the graph neural network model using the target graph slice data has been completed.
According to the embodiment of the disclosure, a round of sampling may be initiated for target sampling nodes in the target graph slice data according to the sampling strategy, and the at least first-order sub-graph data obtained by sampling is sent to the deep learning platform, so that the deep learning platform trains the graph neural network model using the at least first-order sub-graph data. One round of model training is thus completed. This operation biases the sub-graph data: most of it consists of neighbor nodes of the target sampling nodes. For this reason, new target graph slice data may be periodically stored to the GPU and new associated graph slice data stored to the internal memory, so that every target node has an opportunity to become a target sampling node, whereby the bias can be reduced. In addition, the system can maintain relatively high sampling performance.
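The periodic switch of which slice lives on the GPU can be sketched as a simple rotation over the graph slice data, so that each slice is the target slice for one round. A hypothetical illustration; slices are stand-ins for the actual slice data.

```python
def rotate_slices(slices):
    """Yield (target, associated) pairs so every slice becomes the target once.

    Rotating which slice is GPU-resident gives every node a chance to be a
    target sampling node, reducing the sampling bias described above."""
    for i in range(len(slices)):
        target = slices[i]
        associated = [s for j, s in enumerate(slices) if j != i]
        yield target, associated

slices = ["A", "B", "C"]
rounds = list(rotate_slices(slices))
```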
According to an embodiment of the present disclosure, operation S610 may include one of the following operations.
In response to receiving the model training request, all target nodes in the target graph slice data are determined to be target sampling nodes based on the traversal sampling strategy.
In response to receiving the model training request, a portion of the target nodes in the target graph slice data are determined to be at least one target sampling node based on the conditional sampling strategy.
According to an embodiment of the present disclosure, the conditional sampling policy may refer to a policy of determining a target node satisfying a predetermined sampling condition as a target sampling node. For example, the predetermined sampling condition may refer to the target sampling node being a target node having labeling information. Alternatively, the predetermined sampling condition may refer to the number of target sampling nodes being equal to a predetermined sampling number threshold.
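The two sampling strategies can be sketched as follows, under the stated examples of predetermined sampling conditions (labeled nodes, or a sampling number threshold). The function names and condition encodings are illustrative assumptions.

```python
def traversal_sampling(target_nodes):
    """Traversal strategy: every target node becomes a target sampling node."""
    return list(target_nodes)

def conditional_sampling(target_nodes, labels=None, max_count=None):
    """Conditional strategy: keep nodes satisfying a predetermined condition."""
    nodes = list(target_nodes)
    if labels is not None:            # e.g. only nodes with labeling information
        nodes = [n for n in nodes if n in labels]
    if max_count is not None:         # e.g. a predetermined sampling number threshold
        nodes = nodes[:max_count]
    return nodes

nodes = [1, 2, 3, 4]
```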
The training method of the graph neural network model according to embodiments of the present disclosure is further described below with reference to fig. 7, in conjunction with a specific embodiment.
Fig. 7 schematically illustrates an example schematic diagram of a training process of the graph neural network model according to an embodiment of the disclosure.
As shown in fig. 7, in 700, the graph engine may include a graph engine module 1, a graph engine module 2, and a graph engine module 3. The graph engine module 1 may comprise a GPU. The GPU may include 4 GPU cards. Furthermore, the graph engine module 1 may include a multi-card sampling unit and an aggregation unit. The graph engine module 2 may include an associated graph slice data generation unit. The graph engine module 3 may include a graph segmentation unit. The graph engine module 2 and the graph engine module 3 periodically run a user-configured node-based graph segmentation algorithm or a system-integrated node-based graph segmentation algorithm.
The graph segmentation unit in the graph engine module 3 may divide the graph data to be stored based on a graph segmentation algorithm to obtain at least two graph slice data to be stored. The target graph slice data among the at least two graph slice data to be stored (i.e., the light-colored target node related data and target neighbor node related data in the graph engine module 2 in fig. 7) may be sent to the GPU in the graph engine module 1 through the graph engine module 2. The graph engine module 2 obtains the associated graph slice data (i.e., the dark-colored associated node related data and associated neighbor node related data in the graph engine module 2 in fig. 7) according to the target graph slice data and the at least two graph slice data to be stored, based on the graph segmentation algorithm. After the graph engine module 2 sends the target graph slice data to the graph engine module 1, the graph engine module 2 deletes the target graph slice data it stores. The associated graph slice data is finally retained in the graph engine module 2.
The graph engine module 1 divides the target graph slice data to obtain 4 target graph partition data. The 4 target graph partition data are stored to the 4 GPU cards. The target graph partition data stored by the respective GPU cards are different from each other.
In response to receiving a sampling request from the client, the graph engine module 1 may sample at least first-order sub-graph data from the 4 target graph partition data through the multi-card sampling unit in the graph engine module 1. The aggregation unit in the graph engine module 1 aggregates the at least first-order sub-graph data and sends it to the client. The client sends the at least first-order sub-graph data to the deep learning platform.
The deep learning platform trains the graph neural network model using the at least first-order sub-graph data. A node vector of a node can be obtained using the graph neural network model. The node vector of the node is associated with the node identification.
Fig. 8 schematically illustrates a flowchart of a graph data processing method according to an embodiment of the present disclosure.
As shown in fig. 8, the method 800 includes operation S810.
In operation S810, target map data is input to the map neural network model, resulting in an output result.
According to embodiments of the present disclosure, the graph neural network model may be trained using a training method of the graph neural network model according to embodiments of the present disclosure.
According to embodiments of the present disclosure, graph neural network models have been explored across a broad range of problems in supervised, semi-supervised, unsupervised, and reinforcement learning settings. The graph neural network model can be applied to various fields, for example, the recommendation field.
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, and disclosure of the user's personal information comply with the provisions of relevant laws and regulations and do not violate public order and good morals.
The above are only exemplary embodiments, but the disclosure is not limited thereto, and may also include other graph data storage methods, graph data access methods, training methods of graph neural network models, and graph data processing methods known in the art, as long as communication overhead can be reduced and single-machine capacity can be expanded.
Fig. 9 schematically illustrates a block diagram of a graph data storage device according to an embodiment of the disclosure.
As shown in fig. 9, the graph data storage device 900 may include a first obtaining module 910, a second obtaining module 920, a first storage module 930, a second storage module 940, and a third storage module 950.
The first obtaining module 910 is configured to divide the graph data to be stored to obtain at least two graph slice data to be stored in response to receiving the graph data storage request.
The second obtaining module 920 is configured to obtain target graph slice data and associated graph slice data according to at least two graph slice data to be stored.
The first storage module 930 is configured to store the target graph slice data to the GPU.
A second storage module 940, configured to store the associated graph slice data in the internal memory.
And a third storage module 950, configured to store at least two slice data of the graph to be stored in the external memory.
According to an embodiment of the present disclosure, the second obtaining module 920 may include a first determining sub-module and a second determining sub-module.
And the first determining submodule is used for determining target graph slice data from at least two graph slice data to be stored.
And the second determining submodule is used for determining associated graph slice data from graph data to be stored according to the target graph slice data.
According to an embodiment of the present disclosure, the GPU includes at least one GPU card.
According to an embodiment of the present disclosure, the first storage module 930 may include a first acquisition sub-module and a first storage sub-module.
The first obtaining sub-module is used for dividing the target graph slice data to obtain at least one target graph partition data.
And the first storage sub-module is used for storing the at least one target graph partition data to the at least one GPU card.
According to an embodiment of the present disclosure, the first obtaining sub-module may include a first obtaining unit.
The first obtaining unit is used for dividing the target graph slice data based on a first hash algorithm to obtain at least one target graph partition data.
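One simple instance of such a hash-based division is a modulo partition over node ids; the sketch below is illustrative only and does not claim to be the patent's first hash algorithm.

```python
def hash_partition(node_ids, num_partitions):
    """Assign each node id to a partition by a modulo hash over the id."""
    partitions = [[] for _ in range(num_partitions)]
    for nid in node_ids:
        partitions[nid % num_partitions].append(nid)
    return partitions

# Eight nodes spread over four partitions (e.g. four GPU cards).
parts = hash_partition(range(8), 4)
```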
According to an embodiment of the present disclosure, the internal memory includes at least one internal storage area.
According to an embodiment of the present disclosure, the second storage module 940 may include a second acquisition sub-module and a second storage sub-module.
And the second obtaining submodule is used for dividing the associated graph slice data to obtain at least one associated graph partition data.
And the second storage sub-module is used for storing the at least one associated graph partition data to the at least one internal storage area.
According to an embodiment of the present disclosure, the second obtaining sub-module may include a second obtaining unit.
The second obtaining unit is used for dividing the associated graph slice data based on a second hash algorithm to obtain at least one associated graph partition data.
According to an embodiment of the present disclosure, the first obtaining module 910 may include a third obtaining sub-module.
And the third obtaining submodule is used for responding to the received graph data storage request, dividing the graph data to be stored based on a graph segmentation algorithm of the nodes, and obtaining at least two graph slice data to be stored.
According to an embodiment of the present disclosure, the graph data storage device 900 may further include a second obtaining module, a first deleting module, a second deleting module, a third storage module, and a fourth storage module.
And the second obtaining module is used for responding to the detection of the storage switching instruction and obtaining new target graph slice data and new associated graph slice data according to at least two graph slice data to be stored.
And the first deleting module is used for deleting the target graph slice data stored in the GPU.
And the second deleting module is used for deleting the associated graph slice data stored in the internal memory.
And the third storage module is used for storing the new target graph slice data to the GPU.
And the fourth storage module is used for storing the new associated graph slice data into the internal memory.
According to an embodiment of the present disclosure, the relationship between the target graph slice data and the associated graph slice data includes one of the following: there is no intersection between the target graph slice data and the associated graph slice data; or the associated graph slice data includes the target graph slice data.
According to an embodiment of the present disclosure, the target graph slice data includes at least one of target node related data and target neighbor node related data. The target node related data includes at least one of target node data and target node characteristic data. The target neighbor node-related data includes at least one of target neighbor node data and target neighbor node feature data.
According to an embodiment of the present disclosure, the associated graph slice data includes at least one of associated node related data and associated neighbor node related data. The associated node related data includes at least one of associated node data and associated node feature data. The associated neighbor node related data includes at least one of associated neighbor node data and associated neighbor node feature data.
Fig. 10 schematically illustrates a block diagram of a graph data access apparatus according to an embodiment of the disclosure.
As shown in fig. 10, the graph data access apparatus 1000 may include a first acquisition module 1010 and a second acquisition module 1020.
The first obtaining module 1010 is configured to obtain, in response to receiving the graph data access request, an identifier to be accessed.
And a second obtaining module 1020, configured to obtain, when it is determined that there is a matching identifier matching the identifier to be accessed in the target graph slice data, an access result from the target graph slice data according to the matching identifier.
According to an embodiment of the present disclosure, the target graph slice data is the graph data stored in the graphics processor (GPU) in the graph data storage apparatus according to an embodiment of the present disclosure.
According to an embodiment of the present disclosure, the GPU includes at least one GPU card.
According to an embodiment of the present disclosure, the second acquisition module 1020 may include a first acquisition sub-module, a third determination sub-module, a transmission sub-module, and a reception sub-module.
The first acquisition sub-module is used for acquiring an access result from the target graph slice data according to the matching identification under the condition that the matching identification matched with the identification to be accessed exists in the current target graph partition data.
In the case where it is determined that there is no matching identification matching the identification to be accessed in the current target graph partition data,
and the third determining submodule is used for determining the GPU card corresponding to the identification to be accessed.
The sending sub-module is used for sending the identification to be accessed to the GPU card corresponding to the identification to be accessed, so that the GPU card obtains the access result from the target graph slice data according to the matching identification in the case where a matching identification matching the identification to be accessed exists in the target graph partition data of that GPU card.
And the receiving sub-module is used for receiving the access result from the GPU card corresponding to the identification to be accessed.
The above-described graph data access apparatus 1000 may further include a first transmitting module and a receiving module according to an embodiment of the present disclosure.
The first sending module is used for sending the identification to be accessed to the internal memory in the case where no matching identification matching the identification to be accessed exists in the target graph partition data of the GPU card corresponding to the identification to be accessed, so that the internal memory acquires the access result from the associated graph slice data according to the matching identification when the internal memory determines that a matching identification matching the identification to be accessed exists in the associated graph slice data.
And the receiving module is used for receiving the access result from the GPU card corresponding to the identification to be accessed, where the GPU card corresponding to the identification to be accessed obtained the access result in response to receiving it from the internal memory.
According to an embodiment of the present disclosure, the associated graph slice data is the graph data stored in the internal memory in the graph data storage method according to the embodiment of the present disclosure.
The graph data access apparatus 1000 described above may further include a creation module and an addition module according to an embodiment of the present disclosure.
The creating module is used for creating a first access task according to the identification to be accessed.
And the adding module is used for adding the first access task to a first task queue corresponding to the GPU.
According to an embodiment of the present disclosure, the second acquisition module 1020 may include a second acquisition sub-module, a third acquisition sub-module, and a fourth acquisition sub-module.
And the second acquisition submodule is used for acquiring the first head task from the first task queue by utilizing the thread corresponding to the GPU.
And the third acquisition sub-module is used for acquiring the identification to be accessed from the first head task by utilizing the thread corresponding to the GPU under the condition that the first head task is determined to be the first access task.
And the fourth acquisition sub-module is used for acquiring an access result from the target graph slice data according to the matching identification under the condition that the thread corresponding to the GPU determines that the matching identification matched with the identification to be accessed exists in the target graph slice data.
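The queue-and-dedicated-thread pattern described above can be sketched with the standard library. This is an illustrative stand-in: the slice is a plain dict, a plain node id plays the role of an access task, and in the real system a miss would be forwarded to the next storage tier rather than recorded as None.

```python
import queue
import threading

def serve_access_tasks(task_queue, slice_data, results):
    """Drain access tasks from the queue until a None sentinel arrives."""
    while True:
        node_id = task_queue.get()
        if node_id is None:          # sentinel: stop the worker thread
            break
        results[node_id] = slice_data.get(node_id)

# One dedicated thread (the "thread corresponding to the GPU") serves the queue.
tasks = queue.Queue()
results = {}
worker = threading.Thread(
    target=serve_access_tasks, args=(tasks, {1: [2, 3]}, results))
worker.start()
tasks.put(1)     # hit: identification present in the slice
tasks.put(9)     # miss: would be forwarded to internal memory in the real system
tasks.put(None)  # shut the worker down
worker.join()
```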
Fig. 11 schematically illustrates a block diagram of a graph data access apparatus according to another embodiment of the present disclosure.
As shown in fig. 11, the graph data access apparatus 1100 may include a third acquisition module 1110 and a second transmission module 1120.
A third obtaining module 1110, configured to obtain, in response to receiving the identifier to be accessed from the GPU, an access result from the associated graph slice data according to the matching identifier when it is determined that there is a matching identifier matching the identifier to be accessed in the associated graph slice data.
And the second sending module 1120 is configured to send the access result to the GPU.
According to an embodiment of the present disclosure, the associated graph slice data is the graph data stored in the internal memory in the graph data storage apparatus according to an embodiment of the present disclosure.
The above-described graph data access apparatus 1100 may further include a third transmission module and a fourth transmission module according to an embodiment of the present disclosure.
In case it is determined that there is no matching identification matching the identification to be accessed in the association graph slice data,
and the third sending module is used for sending the identification to be accessed to the external memory so that the external memory can acquire the access result from at least two pieces of image slice data to be stored.
And the fourth sending module is used for sending the access result to the GPU in response to receiving the access result from the external memory.
According to an embodiment of the present disclosure, the at least two graph slice data to be stored are the graph data stored in the external memory in the graph data storage apparatus according to an embodiment of the present disclosure.
According to an embodiment of the present disclosure, the third acquisition module 1110 may include a creation sub-module, an addition sub-module, a fifth acquisition sub-module, a sixth acquisition sub-module, and a seventh acquisition sub-module.
And the creating sub-module is used for responding to the received identification to be accessed from the GPU and creating a second access task according to the identification to be accessed.
And the adding sub-module is used for adding the second access task to a second task queue corresponding to the internal memory.
And a fifth obtaining sub-module, configured to obtain the second head task from the second task queue by using a thread corresponding to the internal memory.
And a sixth obtaining sub-module, configured to obtain, by using a thread corresponding to the internal memory, the identifier to be accessed from the second head task if it is determined that the second head task is the second access task.
And a seventh acquisition sub-module, configured to acquire, by using the thread corresponding to the internal memory, the access result from the associated graph slice data according to the matching identifier when it is determined that a matching identifier matched with the identifier to be accessed exists in the associated graph slice data.
Fig. 12 schematically illustrates a block diagram of a training apparatus of the graph neural network model, according to an embodiment of the disclosure.
As shown in fig. 12, the training apparatus 1200 of the graph neural network model may include a determining module 1210, a fourth obtaining module 1220, a third obtaining module 1230, and a fifth transmitting module 1240.
A determination module 1210 is configured to determine at least one target sampling node from the target graph slice data based on a sampling strategy in response to receiving the model training request.
A fourth obtaining module 1220 is configured to obtain, according to the at least one target sampling node, at least one first-order neighboring node corresponding to the at least one target sampling node from one of the target graph slice data, the associated graph slice data, and the at least two graph slice data to be stored.
The third obtaining module 1230 is configured to obtain at least one level of sub-graph data according to the target sampling node related data of the at least one target sampling node and the neighbor node related data of the at least one level of neighbor node.
A fifth transmitting module 1240 is configured to transmit the at least first-order sub-graph data to the deep learning platform, so that the deep learning platform trains the graph neural network model using the at least first-order sub-graph data.
According to an embodiment of the present disclosure, the target graph slice data is the graph data stored in the graphics processor (GPU) in the graph data storage apparatus according to an embodiment of the present disclosure. The associated graph slice data is the graph data stored in the internal memory in the graph data storage apparatus according to an embodiment of the present disclosure. The at least two graph slice data to be stored are the graph data stored in the external memory in the graph data storage apparatus according to an embodiment of the present disclosure.
According to an embodiment of the present disclosure, the determination module 1210 may include one of:
and a fourth determining sub-module, configured to determine all target nodes in the target graph slice data as target sampling nodes based on the traversal sampling strategy in response to receiving the model training request.
And a fifth determining sub-module for determining a portion of the target nodes in the target graph slice data as at least one target sampling node based on the conditional sampling strategy in response to receiving the model training request.
According to an embodiment of the present disclosure, the training apparatus 1200 of the graph neural network model may further include a generating module and an executing module.
And the generation module is used for generating a storage switching instruction in response to receiving the training completion instruction from the deep learning platform.
And the execution module is used for, in response to detecting the storage switching instruction, obtaining new target graph slice data and new associated graph slice data according to the at least two graph slice data to be stored, and repeatedly performing the operations of obtaining at least first-order sub-graph data and sending the at least first-order sub-graph data to the deep learning platform, so that the deep learning platform trains the graph neural network model using the at least first-order sub-graph data, until a predetermined number of the at least two graph slice data to be stored have each been determined as the target graph slice data and used to perform the training operation of the graph neural network model.
Fig. 13 schematically shows a block diagram of the graph data processing apparatus according to an embodiment of the present disclosure.
As shown in fig. 13, the graph data processing apparatus 1300 may include a fourth obtaining module 1310.
And a fourth obtaining module 1310, configured to input the target graph data into the graph neural network model, and obtain an output result.
According to an embodiment of the present disclosure, the graph neural network model is trained using a training apparatus of the graph neural network model according to an embodiment of the present disclosure.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
According to an embodiment of the present disclosure, an electronic device includes: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the instructions being executable by the at least one processor to enable the at least one processor to perform the method as described above.
According to an embodiment of the present disclosure, a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method as described above.
According to an embodiment of the present disclosure, a computer program product includes a computer program which, when executed by a processor, implements the method as described above.
Fig. 14 schematically illustrates a block diagram of an electronic device adapted to implement a graph data storage method, a graph data access method, a training method of a graph neural network model, and a graph data processing method, according to an embodiment of the disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 14, the electronic device 1400 includes a computing unit 1401 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 1402 or a computer program loaded from a storage unit 1408 into a Random Access Memory (RAM) 1403. In the RAM 1403, various programs and data required for the operation of the electronic device 1400 can also be stored. The computing unit 1401, the ROM 1402, and the RAM 1403 are connected to each other through a bus 1404. An input/output (I/O) interface 1405 is also connected to the bus 1404.
A number of components in electronic device 1400 are connected to I/O interface 1405, including: an input unit 1406 such as a keyboard, a mouse, or the like; an output unit 1407 such as various types of displays, speakers, and the like; a storage unit 1408 such as a magnetic disk, an optical disk, or the like; and a communication unit 1409 such as a network card, a modem, a wireless communication transceiver, and the like. The communication unit 1409 allows the electronic device 1400 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunications networks.
The computing unit 1401 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 1401 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 1401 performs the respective methods and processes described above, such as a graph data storage method, a graph data access method, a training method of a graph neural network model, and a graph data processing method. For example, in some embodiments, the graph data storage method, the graph data access method, the training method of the graph neural network model, and the graph data processing method may be implemented as computer software programs tangibly embodied on a machine-readable medium, such as the storage unit 1408. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 1400 via the ROM 1402 and/or the communication unit 1409. When the computer program is loaded into the RAM 1403 and executed by the computing unit 1401, one or more steps of the graph data storage method, the graph data access method, the training method of the graph neural network model, and the graph data processing method described above can be performed. Alternatively, in other embodiments, the computing unit 1401 may be configured to perform the graph data storage method, the graph data access method, the training method of the graph neural network model, and the graph data processing method in any other suitable manner (e.g. by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out the methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel or sequentially or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (40)

1. A graph data storage method, comprising:
dividing the graph data to be stored in response to receiving the graph data storage request to obtain at least two graph slice data to be stored;
obtaining target graph slice data and associated graph slice data according to the at least two graph slice data to be stored;
storing the target graph slice data to a graphics processor GPU;
storing the associated graph slice data to an internal memory; and
storing the at least two graph slice data to be stored into an external memory;
the method for dividing the graph data to be stored to obtain at least two graph slice data to be stored in response to receiving the graph data storage request comprises the following steps:
in response to receiving the graph data storage request, dividing the graph data to be stored by a node-based graph segmentation algorithm or an edge-based graph segmentation algorithm to obtain the at least two graph slice data to be stored.
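As an illustration of the partitioning step in claim 1, a node-based split can be sketched as follows. This is a minimal sketch, not the claimed implementation: the function name, the modulo assignment rule, and the slice count are all assumptions.

```python
from collections import defaultdict

def split_graph_by_nodes(edges, num_slices):
    """Node-based graph segmentation sketch: each node, together with
    its outgoing edges, is assigned to exactly one slice."""
    slices = [defaultdict(list) for _ in range(num_slices)]
    for src, dst in edges:
        # A simple modulo rule stands in for the (unspecified) segmentation algorithm.
        slices[src % num_slices][src].append(dst)
    return slices

edges = [(0, 1), (1, 2), (2, 0), (3, 1)]
slices = split_graph_by_nodes(edges, num_slices=2)
```

An edge-based split would instead distribute edges and replicate boundary nodes; either way, the resulting slices are what the later claims distribute across the GPU, internal memory, and external memory tiers.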
2. The method according to claim 1, wherein the obtaining target graph slice data and associated graph slice data from the at least two graph slice data to be stored includes:
determining the target graph slice data from the at least two graph slice data to be stored; and
determining the associated graph slice data from the graph data to be stored according to the target graph slice data.
3. The method of claim 1 or 2, wherein the GPU comprises at least one GPU card;
wherein the storing the target graph slice data to a graphics processor GPU comprises:
dividing the target graph slice data to obtain at least one target graph partition data; and
Storing the at least one target graph partition data to at least one GPU card.
4. A method according to claim 3, wherein said dividing the target graph slice data to obtain at least one target graph partition data comprises:
dividing the target graph slice data based on a first hash algorithm to obtain the at least one target graph partition data.
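The first hash algorithm of claim 4 is not fixed by the claim; a minimal sketch, assuming a modulo hash over node identifiers and dictionary-shaped slice data, could look like:

```python
def partition_by_hash(slice_data, num_cards):
    """Distribute target graph slice data across GPU cards by hashing
    each node identifier (illustrative; the hash function is an assumption)."""
    partitions = [dict() for _ in range(num_cards)]
    for node_id, payload in slice_data.items():
        partitions[hash(node_id) % num_cards][node_id] = payload
    return partitions

target_slice = {10: "feat_a", 11: "feat_b", 12: "feat_c"}
parts = partition_by_hash(target_slice, num_cards=2)
```

The second hash algorithm of claim 6 would partition the associated graph slice data over internal storage areas in the same manner.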
5. The method of claim 1 or 2, wherein the internal memory comprises at least one internal storage area;
wherein the storing the associated graph slice data to an internal memory includes:
dividing the associated graph slice data to obtain at least one associated graph partition data; and
storing the at least one associated graph partition data to the at least one internal storage area.
6. The method of claim 5, wherein the dividing the associated graph slice data to obtain the at least one associated graph partition data comprises:
dividing the associated graph slice data based on a second hash algorithm to obtain the at least one associated graph partition data.
7. The method of claim 1 or 2, wherein the relationship between the target graph slice data and the associated graph slice data comprises one of the following: no intersection exists between the target graph slice data and the associated graph slice data; or the associated graph slice data includes the target graph slice data.
8. The method of claim 1 or 2, wherein the target graph slice data comprises at least one of target node-related data and target neighbor node-related data, the target node-related data comprising at least one of target node data and target node feature data, the target neighbor node-related data comprising at least one of target neighbor node data and target neighbor node feature data;
the associated graph slice data comprises at least one of associated node related data and associated neighbor node related data, the associated node related data comprises at least one of associated node data and associated node feature data, and the associated neighbor node related data comprises at least one of associated neighbor node data and associated neighbor node feature data.
9. A graph data access method, comprising:
acquiring an identifier to be accessed in response to receiving a graph data access request; and
in case that it is determined that a matching identifier matching the identifier to be accessed exists in the target graph slice data, an access result is obtained from the target graph slice data according to the matching identifier, wherein the target graph slice data is the graph data to be stored in the graphics processor GPU in the method according to any one of claims 1 to 8.
10. The method of claim 9, wherein the GPU comprises at least one GPU card;
under the condition that the matching identification matched with the identification to be accessed exists in the target graph slice data, acquiring the access result from the target graph slice data according to the matching identification comprises the following steps:
under the condition that the current target graph partition data is determined to have the matching identification matched with the identification to be accessed, acquiring the access result from the target graph slice data according to the matching identification; and
in case it is determined that there is no matching identification matching the identification to be accessed in the current target graph partition data,
determining a GPU card corresponding to the identification to be accessed;
sending the identification to be accessed to the GPU card corresponding to the identification to be accessed, so that the GPU card corresponding to the identification to be accessed obtains an access result from the target graph slice data according to the matching identification under the condition that the matching identification matched with the identification to be accessed exists in the target graph partition data of the GPU card corresponding to the identification to be accessed; and
receiving the access result from the GPU card corresponding to the identification to be accessed.
11. The method of claim 10, further comprising:
transmitting the identification to be accessed to an internal memory under the condition that the matching identification matched with the identification to be accessed does not exist in target graph partition data of a GPU card corresponding to the identification to be accessed, so that the internal memory obtains the access result from the associated graph slice data according to the matching identification under the condition that the matching identification matched with the identification to be accessed exists in the associated graph slice data, wherein the associated graph slice data is the graph data to be stored in the internal memory in the method according to any one of claims 1 to 8; and
receiving the access result from the GPU card corresponding to the identification to be accessed, wherein the access result is obtained by the GPU card corresponding to the identification to be accessed in response to receiving the access result from the internal memory.
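Claims 9 to 11 describe a hierarchical lookup: the local GPU partition first, then the GPU card owning the identifier, then internal memory (and, per claims 13 and 14, external memory as the final tier). A minimal single-process sketch of that fallback chain, with all names and data shapes assumed:

```python
def tiered_lookup(key, gpu_partitions, internal_store, external_store):
    """Resolve an identifier against the storage tiers in order:
    GPU card partitions -> internal memory -> external memory."""
    for partition in gpu_partitions:       # target graph partition data per GPU card
        if key in partition:
            return partition[key]
    if key in internal_store:              # associated graph slice data
        return internal_store[key]
    return external_store[key]             # full graph data on external storage

gpu = [{1: "node1"}, {2: "node2"}]
internal = {3: "node3"}
external = {1: "node1", 2: "node2", 3: "node3", 4: "node4"}
result = tiered_lookup(4, gpu, internal, external)
```

In the claimed system each tier is queried by message passing between devices rather than by in-process dictionary lookups; the sketch only shows the fallback order.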
12. The method of any of claims 9-11, further comprising:
creating a first access task according to the identification to be accessed;
adding the first access task to a first task queue corresponding to the GPU;
Under the condition that the matching identification matched with the identification to be accessed exists in the target graph slice data, acquiring the access result from the target graph slice data according to the matching identification comprises the following steps:
acquiring a first head task from the first task queue by using a thread corresponding to the GPU;
acquiring the identification to be accessed from the first head task by utilizing a thread corresponding to the GPU under the condition that the first head task is determined to be the first access task; and
acquiring the access result from the target graph slice data according to the matching identification under the condition that the thread corresponding to the GPU determines that the matching identification matched with the identification to be accessed exists in the target graph slice data.
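The task-queue mechanism of claim 12 (create an access task, enqueue it on the GPU's queue, and let a thread bound to the GPU pop the head task and resolve it) can be sketched with Python's standard queue and threading modules; the task format and the sentinel-based shutdown are assumptions, not part of the claim.

```python
import queue
import threading

def gpu_queue_worker(task_queue, target_slice, results):
    """Thread bound to the GPU's task queue: repeatedly take the head
    task and resolve its identifier against the target graph slice data."""
    while True:
        task = task_queue.get()
        if task is None:               # sentinel: no more access tasks
            break
        ident = task["id"]
        if ident in target_slice:      # matching identification exists
            results[ident] = target_slice[ident]
        task_queue.task_done()

tasks = queue.Queue()
results = {}
slice_data = {7: "node7-features"}

worker = threading.Thread(target=gpu_queue_worker, args=(tasks, slice_data, results))
worker.start()
tasks.put({"id": 7})                   # the first access task
tasks.put(None)
worker.join()
```

Claim 15 describes the same pattern with a second queue and a thread bound to the internal memory.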
13. A graph data access method, comprising:
in response to receiving an identification to be accessed from a graphics processor GPU, obtaining an access result from associated graph slice data according to a matching identification which is matched with the identification to be accessed when the matching identification exists in the associated graph slice data, wherein the associated graph slice data is the graph data to be stored in the internal memory in the method according to any one of claims 1 to 8; and
And sending the access result to the GPU.
14. The method of claim 13, further comprising:
in a case where it is determined that there is no matching identification matching the identification to be accessed in the associated graph slice data,
sending the identification to be accessed to an external memory, so that the external memory obtains the access result from the at least two graph slice data to be stored, wherein the at least two graph slice data to be stored are the graph data to be stored in the external memory in the method of any one of claims 1 to 8; and
in response to receiving the access result from the external memory, sending the access result to the GPU.
15. The method according to claim 13 or 14, wherein in response to receiving an identifier to be accessed from a graphics processor GPU, in a case where it is determined that there is a matching identifier matching the identifier to be accessed in the associated graph slice data, acquiring, according to the matching identifier, an access result from the associated graph slice data includes:
in response to receiving an identification to be accessed from the GPU, creating a second access task according to the identification to be accessed;
adding the second access task to a second task queue corresponding to the internal memory;
Acquiring a second head task from the second task queue by using a thread corresponding to the internal memory;
acquiring the identification to be accessed from the second head task by using a thread corresponding to the internal memory under the condition that the second head task is determined to be the second access task; and
and acquiring the access result from the associated graph slice data according to the matching identification under the condition that the thread corresponding to the internal memory determines that the matching identification matched with the identification to be accessed exists in the associated graph slice data.
16. A method of training a graph neural network model, comprising:
in response to receiving the model training request, determining at least one target sampling node from target graph slice data based on a sampling strategy, wherein the target graph slice data is data to be stored in a graphics processor GPU in the method of any of claims 1-8;
acquiring at least one first-order neighbor node corresponding to the at least one target sampling node from one of the target graph slice data, associated graph slice data and at least two graph slice data to be stored, wherein the associated graph slice data is the graph data to be stored in the internal memory in the method according to any one of claims 1 to 8, and the at least two graph slice data to be stored is the graph data to be stored in the external memory in the method according to any one of claims 1 to 8;
obtaining at least first-order sub-graph data according to the target sampling node related data of the at least one target sampling node and the neighbor node related data of the at least one first-order neighbor node; and
sending the at least first-order sub-graph data to a deep learning platform, so that the deep learning platform trains the graph neural network model by utilizing the at least first-order sub-graph data.
17. The method of claim 16, wherein the determining at least one target sampling node from the target graph slice data based on the sampling policy in response to receiving the model training request comprises one of:
in response to receiving the model training request, determining all target nodes in the target graph slice data as the target sampling nodes based on a traversal sampling strategy; and
in response to receiving the model training request, determining a portion of target nodes in the target graph slice data as the at least one target sampling node based on a conditional sampling policy.
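The two sampling strategies of claim 17 can be contrasted in a small sketch; the predicate used for conditional sampling is purely illustrative and not specified by the claim:

```python
def sample_targets(target_slice, strategy, condition=None):
    """Traversal sampling takes every target node; conditional sampling
    keeps only the nodes satisfying a predicate (sketch, not the claimed logic)."""
    nodes = sorted(target_slice)
    if strategy == "traversal":
        return nodes
    if strategy == "conditional":
        return [n for n in nodes if condition(n)]
    raise ValueError(f"unknown strategy: {strategy}")

slice_data = {0: "a", 1: "b", 2: "c", 3: "d"}
all_nodes = sample_targets(slice_data, "traversal")
even_nodes = sample_targets(slice_data, "conditional", condition=lambda n: n % 2 == 0)
```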
18. The method of claim 16 or 17, further comprising:
generating a storage switching instruction in response to receiving a training completion instruction from the deep learning platform; and
In response to detecting the storage switching instruction, repeatedly performing the operation of obtaining the at least first-order sub-graph data and transmitting the at least first-order sub-graph data to the deep learning platform in a case where new target graph slice data and new associated graph slice data are obtained from the at least two graph slice data to be stored, so that the deep learning platform trains the graph neural network model using the at least first-order sub-graph data, until a training operation of training the graph neural network model using the target graph slice data has been performed in a case where a predetermined number of the graph slice data to be stored in the at least two graph slice data are determined as the target graph slice data.
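The storage-switching loop of claim 18 can be summarized as: after each training round, a new target slice is promoted from the stored slices, sub-graph data is built from it, and training repeats until a predetermined number of slices have served as the target. A stripped-down sketch, with the sub-graph construction stubbed out and all helper names hypothetical:

```python
def train_over_slices(all_slices, predetermined_count, train_step):
    """Rotate the target graph slice data through the stored slices,
    building sub-graph data from each and handing it to training."""
    for round_idx in range(predetermined_count):
        target = all_slices[round_idx % len(all_slices)]   # new target slice after switching
        subgraph = dict(target)                            # stand-in for at-least-first-order sub-graph data
        train_step(subgraph)                               # handed to the deep learning platform

trained = []
slices = [{0: ["n1"]}, {1: ["n2"]}, {2: ["n3"]}]
train_over_slices(slices, predetermined_count=3, train_step=trained.append)
```

In the claimed method the switch is triggered by a storage switching instruction after a training completion instruction, rather than by a simple loop counter as shown here.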
19. A graph data processing method, comprising:
inputting target graph data into a graph neural network model, and obtaining an output result, wherein the graph neural network model is trained by the method according to any one of claims 16-18.
20. A graph data storage device, comprising:
the first obtaining module is used for responding to the received graph data storage request, dividing the graph data to be stored, and obtaining at least two graph slice data to be stored;
the second obtaining module is used for obtaining target graph slice data and associated graph slice data according to the at least two graph slice data to be stored;
the first storage module is used for storing the target graph slice data to a graphics processor GPU;
the second storage module is used for storing the associated graph slice data to an internal memory; and
the third storage module is used for storing the at least two graph slice data to be stored into an external memory;
wherein, the first obtaining module is used for:
in response to receiving the graph data storage request, dividing the graph data to be stored by a node-based graph segmentation algorithm or an edge-based graph segmentation algorithm to obtain the at least two graph slice data to be stored.
21. The apparatus of claim 20, wherein the second obtaining module comprises:
a first determining submodule, configured to determine the target graph slice data from the at least two graph slice data to be stored; and
and the second determining submodule is used for determining the associated graph slice data from the graph data to be stored according to the target graph slice data.
22. The apparatus of claim 20 or 21, wherein the GPU comprises at least one GPU card;
Wherein, the first storage module includes:
the first obtaining submodule is used for dividing the target graph slice data to obtain at least one target graph partition data; and
and the first storage sub-module is used for storing the at least one target graph partition data to at least one GPU card.
23. The apparatus of claim 22, wherein the first obtaining sub-module comprises:
the first obtaining unit is used for dividing the target graph slice data based on a first hash algorithm to obtain the at least one target graph partition data.
24. The apparatus of claim 20 or 21, wherein the internal memory comprises at least one internal storage area;
wherein the second storage module comprises:
the second obtaining submodule is used for dividing the associated graph slice data to obtain at least one associated graph partition data; and
the second storage sub-module is used for storing the at least one associated graph partition data to the at least one internal storage area.
25. The apparatus of claim 24, wherein the second obtaining sub-module comprises:
the second obtaining unit is used for dividing the associated graph slice data based on a second hash algorithm to obtain the at least one associated graph partition data.
26. The apparatus of claim 20 or 21, wherein the relationship between the target graph slice data and the associated graph slice data comprises one of the following: no intersection exists between the target graph slice data and the associated graph slice data; or the associated graph slice data includes the target graph slice data.
27. The apparatus of claim 20 or 21, wherein the target graph slice data comprises at least one of target node-related data and target neighbor node-related data, the target node-related data comprising at least one of target node data and target node feature data, the target neighbor node-related data comprising at least one of target neighbor node data and target neighbor node feature data;
the associated graph slice data comprises at least one of associated node related data and associated neighbor node related data, the associated node related data comprises at least one of associated node data and associated node feature data, and the associated neighbor node related data comprises at least one of associated neighbor node data and associated neighbor node feature data.
28. A graph data access apparatus comprising:
The first acquisition module is used for responding to the received graph data access request and acquiring the identification to be accessed; and
a second obtaining module, configured to obtain, according to the matching identification, an access result from target graph slice data when it is determined that there is a matching identification that matches the identification to be accessed in the target graph slice data, wherein the target graph slice data is the graph data to be stored in a graphics processor GPU in the apparatus according to any one of claims 20 to 27.
29. The apparatus of claim 28, wherein the GPU comprises at least one GPU card;
wherein, the second acquisition module includes:
the first acquisition sub-module is used for acquiring the access result from the target graph slice data according to the matching identification under the condition that the matching identification matched with the identification to be accessed exists in the current target graph partition data; and
in case it is determined that there is no matching identification matching the identification to be accessed in the current target graph partition data,
a third determining submodule, configured to determine a GPU card corresponding to the identifier to be accessed;
the sending sub-module is used for sending the identification to be accessed to the GPU card corresponding to the identification to be accessed, so that the GPU card corresponding to the identification to be accessed obtains an access result from the target graph slice data according to the matching identification under the condition that the matching identification matched with the identification to be accessed exists in the target graph partition data of the GPU card corresponding to the identification to be accessed; and
and the receiving sub-module is used for receiving the access result from the GPU card corresponding to the identification to be accessed.
30. The apparatus of claim 29, further comprising:
a first sending module, configured to send the identification to be accessed to an internal memory if it is determined that a matching identification matching the identification to be accessed does not exist in target graph partition data of the GPU card corresponding to the identification to be accessed, so that the internal memory obtains the access result from associated graph slice data according to the matching identification if it is determined that a matching identification matching the identification to be accessed exists in the associated graph slice data, wherein the associated graph slice data is the graph data to be stored in the internal memory in the apparatus according to any one of claims 20 to 27; and
the receiving module is used for receiving the access result from the GPU card corresponding to the identification to be accessed, wherein the access result is obtained by the GPU card corresponding to the identification to be accessed in response to receiving the access result from the internal memory.
31. The apparatus of any one of claims 28-30, further comprising:
The creating module is used for creating a first access task according to the identification to be accessed;
the adding module is used for adding the first access task to a first task queue corresponding to the GPU;
wherein, the second acquisition module includes:
a second obtaining sub-module, configured to obtain a first head task from the first task queue by using a thread corresponding to the GPU;
a third obtaining sub-module, configured to obtain, by using a thread corresponding to the GPU, the identifier to be accessed from the first head task when it is determined that the first head task is the first access task; and
and the fourth acquisition sub-module is used for acquiring the access result from the target graph slice data according to the matching identification under the condition that the thread corresponding to the GPU determines that the matching identification matched with the identification to be accessed exists in the target graph slice data.
32. A graph data access apparatus comprising:
a third obtaining module, configured to obtain, in response to receiving an identifier to be accessed from a GPU, an access result from associated graph slice data according to a matching identifier when it is determined that the matching identifier matching the identifier to be accessed exists in the associated graph slice data, where the associated graph slice data is the graph data to be stored in the internal memory in the apparatus according to any one of claims 20 to 27; and
And the second sending module is used for sending the access result to the GPU.
33. The apparatus of claim 32, further comprising:
in a case where it is determined that there is no matching identification matching the identification to be accessed in the associated graph slice data,
a third sending module, configured to send the identification to be accessed to an external memory, so that the external memory obtains the access result from at least two graph slice data to be stored, wherein the at least two graph slice data to be stored are the graph data to be stored in the external memory in the apparatus according to any one of claims 20 to 27; and
a fourth sending module, configured to send the access result to the GPU in response to receiving the access result from the external memory.
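Claims 32 and 33 together describe a two-level lookup: the internal (host) memory answers identifiers whose matching identifier exists in the associated graph slice data, and forwards the misses to external memory. A minimal sketch of that fallback path, with all names hypothetical and dicts standing in for the two storage tiers:

```python
def lookup_with_fallback(ids, host_slice, fetch_external):
    """Answer hits from the host-resident associated graph slice data;
    forward the remaining identifiers to external memory (the claim 33
    path) and merge both sets of access results."""
    hits = {i: host_slice[i] for i in ids if i in host_slice}
    misses = [i for i in ids if i not in host_slice]
    if misses:
        hits.update(fetch_external(misses))  # external memory serves the rest
    return hits

external_store = {7: "cold-7"}
result = lookup_with_fallback(
    [1, 7],
    {1: "warm-1"},
    lambda ids: {i: external_store[i] for i in ids if i in external_store},
)
# result combines the host hit for id 1 with the external hit for id 7
```

The design choice is the usual cache hierarchy: the host slice absorbs most lookups, and only the identifiers it cannot match pay the cost of an external-memory round trip.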
34. The apparatus of claim 32 or 33, wherein the third acquisition module comprises:
a creating sub-module, configured to create, in response to receiving the identifier to be accessed from the GPU, a second access task according to the identifier to be accessed;
an adding sub-module, configured to add the second access task to a second task queue corresponding to the internal memory;
a fifth obtaining sub-module, configured to obtain a second head task from the second task queue by using a thread corresponding to the internal memory;
a sixth obtaining sub-module, configured to obtain, by using a thread corresponding to the internal memory, the identifier to be accessed from the second head task when it is determined that the second head task is the second access task; and
a seventh obtaining sub-module, configured to obtain, by using a thread corresponding to the internal memory, the access result from the associated graph slice data according to the matching identifier when it is determined that the matching identifier matching the identifier to be accessed exists in the associated graph slice data.
35. A training device for a graph neural network model, comprising:
a determining module, configured to determine, in response to receiving a model training request, at least one target sampling node from target graph slice data based on a sampling policy, wherein the target graph slice data is data to be stored in a graphics processor GPU in an apparatus according to any one of claims 20-27;
a fourth obtaining module, configured to obtain, from the at least one target sampling node, at least one first-order neighbor node corresponding to the at least one target sampling node from the target graph slice data, associated graph slice data, and at least two graph slice data to be stored, where the associated graph slice data is graph data to be stored in an internal memory in the apparatus according to any one of claims 20 to 27, and the at least two graph slice data to be stored is graph data to be stored in an external memory in the apparatus according to any one of claims 20 to 27;
a third obtaining module, configured to obtain at least one first-order sub-graph data according to target sampling node related data of the at least one target sampling node and neighbor node related data of the at least one first-order neighbor node; and
a fifth sending module, configured to send the at least one first-order sub-graph data to a deep learning platform, so that the deep learning platform trains the graph neural network model by using the at least one first-order sub-graph data.
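Claim 35 builds first-order sub-graph data by sampling target nodes from the GPU-resident slice and then resolving their one-hop neighbors across the three storage tiers (GPU slice, internal memory, external memory). A rough sketch of that assembly step, under the assumption that each tier is a dict from node identifier to node data; all names here are hypothetical:

```python
def first_order_subgraph(sample_ids, adjacency, tiers):
    """Gather the 1-hop edges for the sampled nodes and resolve each
    node's data from the first storage tier (GPU, host, external)
    that contains a matching identifier."""
    def resolve(node_id):
        for tier in tiers:
            if node_id in tier:
                return tier[node_id]
        raise KeyError(node_id)

    edges = [(s, n) for s in sample_ids for n in adjacency.get(s, [])]
    node_ids = set(sample_ids) | {n for _, n in edges}
    nodes = {i: resolve(i) for i in sorted(node_ids)}
    return nodes, edges

adjacency = {1: [2, 3]}
tiers = [{1: "gpu"}, {2: "host"}, {3: "external"}]
nodes, edges = first_order_subgraph([1], adjacency, tiers)
# nodes covers ids 1, 2, 3; edges are (1, 2) and (1, 3)
```

The returned node/edge pair is what the claim calls first-order sub-graph data, ready to be handed to the deep learning platform as one training mini-batch.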
36. The apparatus of claim 35, wherein the means for determining comprises one of:
a fourth determining sub-module, configured to determine all target nodes in the target graph slice data as the target sampling nodes based on a traversal sampling policy in response to receiving the model training request; and
a fifth determining sub-module, configured to determine, based on a conditional sampling strategy, a portion of the target nodes in the target graph slice data as the at least one target sampling node in response to receiving the model training request.
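The two sampling strategies in claim 36 differ only in which nodes of the target graph slice become sampling nodes: traversal sampling takes all of them, conditional sampling keeps only those satisfying some condition. Sketched below with hypothetical names; the predicate stands in for the sampling condition, which the claim leaves unspecified:

```python
def select_sampling_nodes(target_node_ids, strategy, condition=None):
    """Traversal sampling: every target node becomes a sampling node.
    Conditional sampling: only target nodes for which the condition holds."""
    if strategy == "traversal":
        return list(target_node_ids)
    if strategy == "conditional":
        return [i for i in target_node_ids if condition(i)]
    raise ValueError(f"unknown strategy: {strategy}")

all_nodes = select_sampling_nodes([1, 2, 3, 4], "traversal")
even_nodes = select_sampling_nodes([1, 2, 3, 4], "conditional", lambda i: i % 2 == 0)
```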
37. The apparatus of claim 35 or 36, further comprising:
a generating module, configured to generate a storage switching instruction in response to receiving a training completion instruction from the deep learning platform; and
an execution module, configured to, in response to detecting the storage switching instruction and in a case where new target graph slice data and new associated graph slice data are obtained according to the at least two pieces of graph slice data to be stored, repeatedly perform the operations of obtaining the at least one first-order sub-graph data and sending the at least one first-order sub-graph data to the deep learning platform, so that the deep learning platform trains the graph neural network model by using the at least one first-order sub-graph data, until a preset number of the at least two pieces of graph slice data to be stored have been determined as the target graph slice data and the training operation of training the graph neural network model by using the target graph slice data has been performed.
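Claim 37's storage-switching loop rotates the pending graph slices through the target (GPU-resident) role, repeating the sub-graph extraction and training round for each newly promoted slice until a preset number of slices have served as the target. A simplified sketch, with hypothetical names; `train_round` stands in for the obtain-and-send operations of claim 35:

```python
def rotate_and_train(pending_slices, train_round, preset_count):
    """Promote pending slices to the target graph slice role one at a
    time and run a training round on each, stopping once the preset
    number of slices has been used as the target (claim 37's stop rule)."""
    trained = []
    while pending_slices and len(trained) < preset_count:
        target = pending_slices.pop(0)  # storage switch: next slice becomes the target
        train_round(target)
        trained.append(target)
    return trained

rounds = []
done = rotate_and_train(["s1", "s2", "s3"], rounds.append, preset_count=2)
# done == ["s1", "s2"]; one training round ran per promoted slice
```

Rotating slices this way lets a graph larger than GPU memory be trained on in full, one resident slice at a time, which is the point of the three-tier storage scheme the preceding claims set up.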
38. A graph data processing apparatus comprising:
a fourth obtaining module, configured to input target graph data into a graph neural network model to obtain an output result, where the graph neural network model is obtained by training using the apparatus according to any one of claims 35 to 37.
39. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 19.
40. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-19.
CN202310188954.9A 2022-05-19 2022-05-19 Graph data storage, access and processing methods, training methods, equipment and media Active CN116309002B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310188954.9A CN116309002B (en) 2022-05-19 2022-05-19 Graph data storage, access and processing methods, training methods, equipment and media

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210573156.3A CN114897666B (en) 2022-05-19 2022-05-19 Graph data storage, access, processing method, training method, device and medium
CN202310188954.9A CN116309002B (en) 2022-05-19 2022-05-19 Graph data storage, access and processing methods, training methods, equipment and media

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202210573156.3A Division CN114897666B (en) 2022-05-19 2022-05-19 Graph data storage, access, processing method, training method, device and medium

Publications (2)

Publication Number Publication Date
CN116309002A true CN116309002A (en) 2023-06-23
CN116309002B CN116309002B (en) 2024-03-01

Family

ID=82725589

Family Applications (4)

Application Number Title Priority Date Filing Date
CN202310188954.9A Active CN116309002B (en) 2022-05-19 2022-05-19 Graph data storage, access and processing methods, training methods, equipment and media
CN202310188950.0A Pending CN116362955A (en) 2022-05-19 2022-05-19 Graph data storage, access and processing methods, training methods, equipment and media
CN202210573156.3A Active CN114897666B (en) 2022-05-19 2022-05-19 Graph data storage, access, processing method, training method, device and medium
CN202310188952.XA Pending CN116029891A (en) 2022-05-19 2022-05-19 Graph data storage, access and processing methods, training methods, equipment and media

Family Applications After (3)

Application Number Title Priority Date Filing Date
CN202310188950.0A Pending CN116362955A (en) 2022-05-19 2022-05-19 Graph data storage, access and processing methods, training methods, equipment and media
CN202210573156.3A Active CN114897666B (en) 2022-05-19 2022-05-19 Graph data storage, access, processing method, training method, device and medium
CN202310188952.XA Pending CN116029891A (en) 2022-05-19 2022-05-19 Graph data storage, access and processing methods, training methods, equipment and media

Country Status (1)

Country Link
CN (4) CN116309002B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117556095B (en) * 2024-01-11 2024-04-09 腾讯科技(深圳)有限公司 Graph data segmentation method, device, computer equipment and storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5748844A (en) * 1994-11-03 1998-05-05 Mitsubishi Electric Information Technology Center America, Inc. Graph partitioning system
WO2009107412A1 (en) * 2008-02-27 2009-09-03 日本電気株式会社 Graph structure estimation apparatus, graph structure estimation method, and program
US20180075159A1 (en) * 2016-09-13 2018-03-15 International Business Machines Corporation Efficient property graph storage for streaming / multi-versioning graphs
CN110633378A (en) * 2019-08-19 2019-12-31 杭州欧若数网科技有限公司 Graph database construction method supporting super-large scale relational network
CN110909015A (en) * 2019-09-12 2020-03-24 华为技术有限公司 Splitting method, device and equipment of microservice and storage medium
CN111444395A (en) * 2019-01-16 2020-07-24 阿里巴巴集团控股有限公司 Method, system and equipment for obtaining relation expression between entities and advertisement recalling system
DE102020110447A1 (en) * 2019-04-26 2021-01-21 Intel Corporation Methods, computer programs and devices for signal processing in a user device and network infrastructure, user device and network infrastructure
WO2021223465A1 (en) * 2020-05-06 2021-11-11 北京嘀嘀无限科技发展有限公司 High-precision map building method and system
CN113961351A (en) * 2021-10-28 2022-01-21 北京百度网讯科技有限公司 Distributed training method, device, equipment and storage medium for deep learning model
KR20220056892A (en) * 2020-10-28 2022-05-09 주식회사 뷰노 Method for segmentation based on medical image

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111124250B (en) * 2018-10-30 2023-11-21 伊姆西Ip控股有限责任公司 Method, apparatus and computer program product for managing memory space
CN110099112B (en) * 2019-04-28 2022-03-29 平安科技(深圳)有限公司 Data storage method, device, medium and terminal equipment based on point-to-point network
CN113672162B (en) * 2020-05-14 2024-09-27 杭州萤石软件有限公司 Data storage method, device and equipment
CN114443873A (en) * 2021-12-31 2022-05-06 深圳云天励飞技术股份有限公司 Data processing method, device, server and storage medium


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
WEI HAN et al.: "Graphie: Large-Scale Asynchronous Graph Traversals on Just a GPU", 2017 26th International Conference on Parallel Architectures and Compilation Techniques, pages 233-245 *
刘锁兰; 王江涛; 王建国; 杨静宇: "A New Segmentation Algorithm Based on Graph-Theoretic Clustering" (in Chinese), Computer Science, no. 09 *
田小平; 吴成茂: "An Improved Graph-Spectrum Threshold Segmentation Algorithm" (in Chinese), Modern Electronics Technique, no. 16 *
赵港 et al.: "A Survey of Large-Scale Graph Neural Network Systems" (in Chinese), Journal of Software, pages 150-170 *

Also Published As

Publication number Publication date
CN114897666A (en) 2022-08-12
CN114897666B (en) 2023-04-18
CN116362955A (en) 2023-06-30
CN116309002B (en) 2024-03-01
CN116029891A (en) 2023-04-28

Similar Documents

Publication Publication Date Title
CN112000763B (en) Method, device, equipment and medium for determining competition relationship of interest points
CN114444619B (en) Sample generation method, training method, data processing method and electronic device
CN113792212B (en) Multimedia resource recommendation method, device, equipment and storage medium
CN113033194A (en) Training method, device, equipment and storage medium of semantic representation graph model
CN116309002B (en) Graph data storage, access and processing methods, training methods, equipment and media
CN117407584A (en) Method, device, electronic equipment and storage medium for determining recommended content
CN116597443A (en) Material tag processing method and device, electronic equipment and medium
CN115329748B (en) Log analysis method, device, equipment and storage medium
CN115186738B (en) Model training method, device and storage medium
CN112887426B (en) Information stream pushing method and device, electronic equipment and storage medium
CN114969444A (en) Data processing method and device, electronic equipment and storage medium
CN113961797A (en) Resource recommendation method and device, electronic equipment and readable storage medium
CN112860626A (en) Document sorting method and device and electronic equipment
CN115759233B (en) Model training method, graph data processing device and electronic equipment
CN117131197B (en) Method, device, equipment and storage medium for processing demand category of bidding document
CN115935027B (en) Data processing method of target object topological graph and training method of graph classification model
CN116304253B (en) Data storage method, data retrieval method and method for identifying similar video
CN113326416B (en) Method for searching data, method and device for sending search data to client
CN113312521B (en) Content retrieval method, device, electronic equipment and medium
CN114037057B (en) Pre-training model generation method and device, electronic equipment and storage medium
CN115795023B (en) Document recommendation method, device, equipment and storage medium
CN115828915B (en) Entity disambiguation method, device, electronic equipment and storage medium
CN112783507B (en) Data stream guiding playback method and device, electronic equipment and readable storage medium
CN116167978A (en) Model updating method and device, electronic equipment and storage medium
CN116932913A (en) Information pushing method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant