CN113408564A - Graph processing method, network training method, device, equipment and storage medium - Google Patents
- Publication number
- Publication number: CN113408564A (application number CN202011134804.2A)
- Authority
- CN
- China
- Prior art keywords
- graph
- node
- feature
- category
- network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The embodiments of the application disclose a graph processing method, a network training method, an apparatus, a device, and a storage medium, applicable to fields such as artificial intelligence. The method includes the following steps: acquiring a graph to be processed; acquiring the node feature of each node in the graph to be processed; for each node, determining the category of the node from candidate graph categories according to the similarity between the node feature of the node and the graph category feature corresponding to each candidate graph category; and correspondingly processing at least one node in the graph to be processed based on the category of each node. By adopting the method and the device, the category of each node in the graph to be processed can be determined based on the candidate graph categories, the prediction accuracy of node categories can be improved, the nodes in the graph to be processed can be effectively processed, and the applicability is high.
Description
Technical Field
The present application relates to the field of artificial intelligence, and in particular, to a graph processing method, a network training method, an apparatus, a device, and a storage medium.
Background
In current node classification tasks in the artificial intelligence field, the category of each node in a sample graph is labeled, a network model is trained according to the node features of the nodes to obtain a node classification model, and the category of each node is then predicted through the node classification model. The node features and labeled categories of the nodes in the sample graph serve as the input information of the network model, and the trained node classification model outputs the category of a node under test.
However, in actual scenarios, the category of each node in a graph is influenced to some extent by the category of the graph itself. Existing node classification model training methods and node category determination methods often ignore this influence of the graph category on node categories, which seriously limits the accuracy of the node classification model and the node classification method.
In summary, how to further improve the accuracy of node classification becomes an urgent problem to be solved.
Disclosure of Invention
The embodiment of the application provides a graph processing method, a network training method, a device, equipment and a storage medium.
In a first aspect, an embodiment of the present application provides a graph processing method, where the method includes:
acquiring a graph to be processed;
acquiring node characteristics of each node in the graph to be processed;
for each node, determining the class of the node from the candidate graph classes according to the similarity between the node feature of the node and the graph class feature corresponding to each candidate graph class, wherein each candidate graph class comprises the graph class of the graph to be processed;
and performing corresponding processing on at least one node in the graph to be processed based on the category of each node.
In a second aspect, an embodiment of the present application provides a node feature extraction network training method, where the method includes:
acquiring an initial graph classification network, wherein the initial graph classification network comprises a node feature extraction module, a graph feature extraction module and a graph classification module which are sequentially cascaded;
acquiring training data, wherein each sample graph in the training data is marked with a sample label, and the sample label represents the true graph category of the sample graph;
inputting each sample graph to the node feature extraction module to obtain the node features of each node of each sample graph;
inputting the node characteristics of each node into the graph characteristic extraction module to obtain the graph characteristics of each sample graph;
inputting the graph features of the sample graphs into the graph classification module to obtain the prediction graph types of the sample graphs;
determining a total training loss value according to the prediction graph type of each sample graph and the sample label of each sample graph;
and performing iterative training on the initial graph classification network according to the total training loss value and the training data until the total training loss value meets the training end condition, and determining a node feature extraction module in the initial graph classification network at the training end as a node feature extraction network.
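As a minimal illustrative sketch of the cascaded pipeline in the steps above, the forward pass and a cross-entropy training loss could look as follows. All weights, dimensions, the tanh activation, and the mean-pooling readout are hypothetical choices, not the patent's prescribed implementation, and the iterative parameter updates are omitted:

```python
import numpy as np

# Sketch of the cascaded modules: node feature extraction -> graph feature
# extraction -> graph classification, with a cross-entropy loss per sample.
rng = np.random.default_rng(0)

def node_features(x, adj, w):
    # one graph-convolution-style step: aggregate neighbours, then project
    return np.tanh((adj @ x) @ w)

def graph_feature(h):
    # readout: mean over the node features of one sample graph
    return h.mean(axis=0)

def classify(g, w_cls):
    # softmax over the candidate graph categories
    logits = g @ w_cls
    e = np.exp(logits - logits.max())
    return e / e.sum()

# toy sample graph: 4 nodes, 3-dim inputs, 2 graph categories
x = rng.normal(size=(4, 3))
adj = np.array([[0, 1, 1, 1],
                [1, 0, 0, 0],
                [1, 0, 0, 0],
                [1, 0, 0, 0]], dtype=float)
w, w_cls = rng.normal(size=(3, 8)), rng.normal(size=(8, 2))

probs = classify(graph_feature(node_features(x, adj, w)), w_cls)
loss = -np.log(probs[0])  # cross-entropy against a sample label of class 0
```

Training would repeat this forward pass over all sample graphs, accumulate the losses into the total training loss value, update the weights until the end condition is met, and keep only the node feature extraction module afterwards.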
In a third aspect, an embodiment of the present application provides a graph processing apparatus, including:
the graph acquisition module is used for acquiring a graph to be processed;
a characteristic obtaining module, configured to obtain node characteristics of each node in the graph to be processed;
a classification module, configured to, for each node, determine a class of the node from each candidate graph class according to a similarity between a node feature of the node and a graph class feature corresponding to each candidate graph class, where each candidate graph class includes a graph class of the to-be-processed graph;
and the graph processing module is used for correspondingly processing at least one node in the graph to be processed based on the type of each node.
In a fourth aspect, an embodiment of the present application provides a node feature extraction network training apparatus, where the apparatus includes:
the network acquisition module is used for acquiring an initial graph classification network, and the initial graph classification network comprises a node feature extraction module, a graph feature extraction module and a graph classification module which are sequentially cascaded;
the data acquisition module is used for acquiring training data, wherein each sample graph in the training data is marked with a sample label, and the sample label represents the true graph category of the sample graph;
an input module, configured to input each sample graph to the node feature extraction module, so as to obtain a node feature of each node of each sample graph;
the input module is configured to input the node features of each node to the graph feature extraction module to obtain the graph features of each sample graph;
the input module is used for inputting the graph characteristics of each sample graph into the graph classification module to obtain the prediction graph type of the sample graph;
a loss determining module, configured to determine a total training loss value according to the prediction graph type of each sample graph and the sample label of each sample graph;
and the network determining module is used for performing iterative training on the initial graph classification network according to the total training loss value and the training data until the total training loss value meets the training ending condition, and determining the node feature extracting module in the initial graph classification network at the training ending time as the node feature extracting network.
In a fifth aspect, an embodiment of the present application provides an electronic device, including a processor and a memory, where the processor and the memory are connected to each other;
the memory is used for storing computer programs;
the processor is configured to execute the method provided in any of the embodiments of the first aspect and/or the second aspect when the computer program is called.
In a sixth aspect, the present application provides a computer-readable storage medium, where a computer program is stored, where the computer program is executed by a processor to implement the method provided in any one of the foregoing first and/or second aspects.
In a seventh aspect, the present application provides a computer program product or a computer program, where the computer program product or the computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium. The processor of the electronic device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the method provided by any one of the embodiments of the first aspect and/or the second aspect.
In the embodiments of the application, the similarity between the node feature of each node in the graph to be processed and the graph category feature corresponding to each candidate graph category is determined, and the category of each node is determined based on that similarity. The determination of a node's category thus fully considers the node's own features while also incorporating the influence of the graph category on the node's category. By determining the category of each node through the candidate graph categories, the accuracy of determining the category of each node in the graph to be processed can be further improved, and the applicability is high.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application; for those skilled in the art, other drawings can also be obtained from these drawings without creative effort.
FIG. 1 is a schematic diagram of a graph processing method provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of a graph processing method provided in an embodiment of the present application;
FIG. 3 is a schematic diagram illustrating a method for determining node characteristics according to an embodiment of the present disclosure;
fig. 4 is a flowchart illustrating a method for determining node characteristics according to an embodiment of the present application;
fig. 5 is a schematic view of a scene for processing a graph to be processed according to an embodiment of the present application;
fig. 6 is a schematic view of another scenario for processing a to-be-processed graph according to an embodiment of the present application;
fig. 7 is a schematic flowchart of a training method for a node feature extraction network according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of a graph classification network provided in an embodiment of the present application;
FIG. 9 is a schematic diagram of a method for determining feature distances provided by an embodiment of the present application;
fig. 10 is a schematic diagram of a training method of a node feature extraction network according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of a graph processing apparatus provided in an embodiment of the present application;
fig. 12 is a schematic structural diagram of a node feature extraction network training apparatus according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The graph processing method provided by the embodiments of the application relates to the field of Machine Learning (ML) in Artificial Intelligence (AI), the field of Computer Vision (CV) technology, and the like. Machine Learning is the specialized study of how a computer simulates or implements human learning behavior to acquire new knowledge or skills and reorganize existing knowledge structures to continuously improve its performance. In computer vision technology, cameras and computers are used in place of human eyes to perform machine vision tasks such as identification, tracking, and measurement on targets, and further graphics processing is then performed.
Artificial intelligence is a theory, method, technique and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use the knowledge to obtain the best results. According to the embodiment of the application, the nodes in the graph can be identified through a computer vision technology, and then on the basis of a neural network, a machine is enabled to have the performance of classifying the nodes through machine learning, so that the graph is processed according to the classes of the nodes.
The graph processing method provided by the embodiment of the application also relates to the fields of Cloud computing (Cloud computing) in Cloud technology, artificial intelligence Cloud service and the like. In the embodiment of the application, the computing tasks involved in the graph processing method are distributed on the resource pool formed by a large number of computers through cloud computing so as to improve the graph processing efficiency. And the graph processing method can be used as an artificial intelligence service, and the artificial intelligence cloud service for corresponding graph processing is provided through an artificial intelligence platform.
Referring to fig. 1, fig. 1 is a schematic diagram of a graph processing method provided in an embodiment of the present application. As shown in fig. 1, node 101, node 102, node 103, and node 104 are nodes in the graph 10 to be processed, where a connection line between nodes may represent a connection relationship between the nodes, and the specific relationship type may be determined based on the actual application scenario, which is not limited herein. When the graph 10 to be processed is a user relationship graph, each node in the graph 10 may represent a user, and a connection line between nodes represents a user relationship (e.g., a friend relationship) between the users; for example, node 101 has a friend relationship with node 102, node 103, and node 104. When the graph 10 to be processed is a knowledge graph, each node may represent a knowledge point, and a connection line between nodes represents an association relationship between the knowledge points; for example, the knowledge point represented by node 103 has an association relationship with the knowledge points represented by nodes 101 and 102.
Further, the node feature of each node in the graph 10 to be processed can be obtained through a graph neural network, that is, each node in the graph 10 is represented by a feature vector. For each node (taking node 104 as an example), the category of node 104 is determined from the candidate graph categories according to the similarity between the node feature of node 104 and the graph category feature corresponding to each candidate graph category. The candidate graph categories are preset graph categories, so the category of each node in the graph 10 is determined from the graph category level; further, any one or more nodes in the graph 10 can be correspondingly processed based on the category of each node.
Referring to fig. 2, fig. 2 is a schematic flow chart diagram of a graph processing method provided in an embodiment of the present application. As shown in fig. 2, a graph processing method provided in an embodiment of the present application may include the following steps:
and step S21, acquiring the graph to be processed.
In some possible embodiments, the graph to be processed may represent objects and relationships between the objects, the objects in the graph to be processed are called nodes, and the relationships between the objects are described by edges. When the graph to be processed is obtained, objects in a certain range and the relation between the objects can be determined, the objects are used as nodes, the relation between the objects is mapped to edges between the objects, and the graph to be processed is formed based on the nodes and the edges between the nodes.
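The construction steps above can be sketched as follows, treating each object as a node and mapping each relation to an undirected edge. The object ids and relations are hypothetical, and the adjacency-list representation is an illustrative choice:

```python
# Build a graph to be processed from objects and the relations between them
# (adjacency-list representation; all data hypothetical).
def build_graph(objects, relations):
    adjacency = {obj: set() for obj in objects}
    for a, b in relations:
        adjacency[a].add(b)  # map each relation to an undirected edge
        adjacency[b].add(a)
    return adjacency

# e.g. four users where user 101 has a relation with each of the others
graph = build_graph([101, 102, 103, 104],
                    [(101, 102), (101, 103), (101, 104)])
```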
The graph to be processed may be used to represent a social network, an information dissemination network, a knowledge graph, a chemical molecular structure, and the like, and may be determined based on an actual application scenario, which is not limited herein. For example, in a certain social application scope, for a certain user, the user, another user having a social relationship with the user (an application contact of the user), and a user having a social relationship with the application contact of the user may be used as nodes in the graph to be processed, the social relationship between the user and the other user represented by any node is mapped to each edge corresponding to the node, and the graph to be processed is formed based on the nodes representing the users and the edges between the nodes.
For another example, for 2,4,6-trinitrotoluene, its chemical molecular structure can be represented by a graph to be processed: the functional group (the benzene ring) and the substituents (the nitro groups) in 2,4,6-trinitrotoluene are used as the nodes, the chemical molecular connection relationships between the substituents and the functional group are mapped to the edges between the nodes representing the substituents and the node representing the functional group, and the graph to be processed is then formed based on these nodes and edges.
It should be particularly noted that the selection of objects, and the selection range of those objects, when obtaining the graph to be processed may be determined based on the actual application scenario requirements, which is not limited herein. For example, to represent the friend relationships of a user in a certain social application, the user and the user's application contacts may be used as nodes, and the friend relationships between the user and the application contacts may be used as the edges between the corresponding nodes. As another example, to represent the propagation path of a certain message, each user through whom the message propagates within a certain time range may be used as a node, and the edges between the nodes form the propagation path of the message.
And step S22, acquiring node characteristics of each node in the graph to be processed.
In some possible embodiments, an implementation of determining the node feature of each node in the graph to be processed is further described with reference to fig. 3, where fig. 3 is a schematic diagram of a method for determining node features provided in an embodiment of the present application. As shown in fig. 3, the node feature extraction network includes an initial feature extraction sub-network and a plurality of node feature extraction sub-networks cascaded with it. The initial feature of each node in the graph to be processed is obtained through the initial feature extraction sub-network, and the initial features are then further processed through the node feature extraction sub-networks to obtain the node feature of each node. Each node feature extraction sub-network corresponds to one candidate graph category. A specific method for determining the node features may refer to fig. 4, which is a schematic flow diagram of the method for determining node features provided in an embodiment of the present application. The method shown in fig. 4 may include the following steps:
and step S221, extracting the initial features of each node in the graph to be processed through the initial feature extraction sub-network.
In some possible embodiments, the initial feature of each node in the graph to be processed can be obtained through the initial feature extraction sub-network. Specifically, when extracting the initial features through the initial feature extraction sub-network, a spatial-domain convolution method can be used to define the convolution operation directly on the connection relationships between the nodes in the graph to be processed, so as to extract spatial features on the graph. This yields the preliminarily processed feature of each node (for convenience of description, hereinafter referred to as the first feature), and the first feature of each node is determined as the initial feature of that node.
Optionally, through the initial feature extraction sub-network and based on a graph signal processing method, a signal-processing transform such as the Fourier transform or the Laplace transform may be applied to the graph to be processed, and convolution is then performed on the graph to extract the first feature of each node, which is determined as the initial feature of the node. It should be particularly noted that the specific implementation of extracting the node features of each node through the initial feature extraction sub-network may be determined based on the requirements of the actual application scenario and the network mechanism of the initial feature extraction sub-network, which is not limited herein.
In some possible embodiments, since the connection relationships between nodes in the graph to be processed have different degrees of importance, and the relative importance between a node and its different adjacent nodes also differs, the first feature of each node in the graph to be processed may be processed based on a graph attention network to take these differences into account, and the processed feature is used as the initial feature of each node.
In a specific implementation, after the initial feature extraction sub-network obtains the first feature of each node in the graph to be processed, the attention coefficients of the node can be determined according to the first feature. Assume that the first feature of node $i$ in the graph to be processed is $h_i$ and its initial feature is $h'_i$, for $i = 1, \dots, n$, where $n$ is the number of nodes in the graph to be processed. Under the $k$-th attention mechanism, the correlation score between node $i$ and a node $j$ is calculated by a single-layer forward neural network, namely:

$$e_{ij}^{k} = \mathrm{LeakyReLU}\left(\vec{a}_k^{\top}\left[\,W_1 h_i \,\Vert\, W_1 h_j\,\right]\right)$$

where $\vec{a}_k$ is the weight vector corresponding to this layer of the forward neural network, LeakyReLU is an activation function, $W_1$ is a weight matrix shared by all nodes in the graph to be processed that represents the transformation between the first feature $h_i$ and the initial feature $h'_i$ of each node, and $N(i)$ is the set of nodes adjacent to node $i$. Normalizing these scores over node $i$ itself and $N(i)$ yields the attention coefficient of node $i$ with itself (the autocorrelation coefficient, $j = i$) and with each adjacent node (the cross-correlation coefficients, $j \in N(i)$):

$$\alpha_{ij}^{k} = \frac{\exp\left(e_{ij}^{k}\right)}{\sum_{j' \in N(i) \cup \{i\}} \exp\left(e_{ij'}^{k}\right)}$$

Further, for node $i$, the initial feature of node $i$ can be obtained through the initial feature extraction network as

$$h'_i = \Big\Vert_{k=1}^{K}\, \sigma\Big(\sum_{j \in N(i) \cup \{i\}} \alpha_{ij}^{k}\, W_1 h_j\Big)$$

i.e., the initial feature of node $i$ under the influence of itself and its adjacent nodes, where $K$ denotes the number of attention mechanisms, $\Vert$ denotes the concatenation operation, and $\sigma(\cdot)$ denotes a nonlinear activation function.
Optionally, for any node in the graph to be processed, to further aggregate node features of its neighboring nodes, after obtaining the initial feature of the node, the initial feature of the node may be updated one or more times based on the above-mentioned attention mechanism and the initial feature of the node, so as to obtain a final initial feature of the node. And in each updating, the initial feature before updating can be used as the first feature in the calculation process, and then the updated initial feature is obtained based on the process, and the specific updating times can be determined based on the actual application scene, which is not limited herein. For example, taking the above-mentioned primary calculation process of the initial features as a calculation process of a graph convolution layer, the number of times of updating the initial features of each node in the graph to be processed may be determined based on the number of layers of the graph convolution layer in the initial feature extraction sub-network, where the number and weight of attention mechanisms between each convolution layer, the weight vector of the corresponding forward neural network, and the weight matrix representing the conversion relationship between the initial features before updating and the initial features after updating of each node may be the same or different, and may specifically be determined based on the number of graph convolution layers and the requirements of the actual application scenario, which is not limited herein.
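The attention-based feature update described above can be sketched with a single attention head as follows: a LeakyReLU score over concatenated projected features, softmax-normalised over the neighborhood plus the node itself, then weighted aggregation. The shapes, the tanh nonlinearity, and the 0.2 slope are illustrative assumptions:

```python
import numpy as np

# Single-head sketch of a graph-attention update (all parameters random
# placeholders; a trained network would learn W1 and a).
def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def attention_update(h, adj, W1, a):
    z = h @ W1                       # shared projection W1
    h_new = np.zeros_like(z)
    n = h.shape[0]
    for i in range(n):
        nbrs = [j for j in range(n) if adj[i, j]] + [i]   # N(i) plus self
        scores = np.array([leaky_relu(a @ np.concatenate([z[i], z[j]]))
                           for j in nbrs])
        alpha = np.exp(scores - scores.max())
        alpha = alpha / alpha.sum()  # normalised attention coefficients
        h_new[i] = np.tanh(sum(w * z[j] for w, j in zip(alpha, nbrs)))
    return h_new

rng = np.random.default_rng(1)
h = rng.normal(size=(4, 3))          # first features of 4 nodes
adj = np.array([[0, 1, 1, 1],
                [1, 0, 0, 0],
                [1, 0, 0, 0],
                [1, 0, 0, 0]])
W1, a = rng.normal(size=(3, 5)), rng.normal(size=10)
h0 = attention_update(h, adj, W1, a)  # initial features after one update
```

Stacking several such updates, as the text describes for multiple graph convolution layers, amounts to calling `attention_update` repeatedly with the previous output as input.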
Step S222, for each node, extracting a sub-network according to the node feature corresponding to each candidate graph category based on the initial feature of the node, and obtaining the feature of the node corresponding to each candidate graph category.
In some possible embodiments, only initial features under the influence of each node itself and the connection relationships with its adjacent nodes can be obtained based on the attention mechanism; if the category of each node were determined directly from these initial features, the determination would be biased and inaccurate. Therefore, for each node in the graph to be processed, the feature of the node corresponding to each candidate graph category can be obtained, according to the initial feature of the node, through the node feature extraction sub-network corresponding to that candidate graph category in the node feature extraction network. In other words, while the initial feature of a node already encodes the connection relationships between nodes, it is further represented under each candidate graph category to obtain the node's per-category features, thereby further improving the accuracy of node category determination.
Specifically, in the node feature extraction network, a node feature extraction sub-network corresponding to each candidate graph category is connected to the initial feature extraction sub-network, and each candidate graph category is a different graph category preset in the process of training the node feature extraction network. Based on the above, for each node, the feature of the node corresponding to each candidate graph category is obtained according to the graph category feature corresponding to each candidate graph category through each node feature extraction sub-network. Namely, each node feature extraction sub-network further represents the initial feature of the node under the influence of each candidate graph category.
The graph category feature corresponding to one candidate graph category is a network parameter of the extraction sub-network for the node feature corresponding to the candidate graph category, and the specific network parameter may be determined based on an actual network of the feature extraction sub-network, which is not limited herein.
For example, each node feature extraction sub-network is a network based on a Gaussian mixture model; that is, one node feature extraction sub-network corresponds to one Gaussian function in the Gaussian mixture model, and the graph category feature corresponding to one candidate graph category is the Gaussian distribution parameter of the Gaussian function corresponding to that candidate graph category. The Gaussian distribution parameters include, but are not limited to, the mean vector and the covariance matrix corresponding to the Gaussian function, and may be determined based on the actual application scenario requirements, which is not limited herein. Specifically, for node $i$ in the graph to be processed, the feature of node $i$ corresponding to the $c$-th candidate graph category can be obtained through the corresponding node feature extraction sub-network as

$$f_i^{c} = \omega_c\!\left(W_2\, h'_i;\; \mu_c,\, \Sigma_c\right)$$

where $h'_i$ is the initial feature of node $i$, $W_2$ is the weight corresponding to the Gaussian mixture model, $\omega_c(\cdot)$ is the Gaussian function corresponding to the $c$-th candidate graph category, $\Sigma_c$ represents the covariance matrix of that Gaussian function, and $\mu_c$ represents its mean vector, i.e., the center of the candidate graph category.
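One possible concrete reading of the per-category feature, sketched below, weights the projected initial feature by each candidate graph category's Gaussian density. This specific form, and all shapes and parameter values, are assumptions for illustration, not necessarily the patent's exact Gaussian function:

```python
import numpy as np

# Per-category features of one node under a Gaussian-mixture reading:
# project the initial feature by W2, then weight it by the Gaussian
# density of each candidate graph category (mean vector mu, covariance
# sigma). All parameters here are hypothetical placeholders.
def gaussian_density(x, mu, sigma):
    d = x.size
    diff = x - mu
    norm = 1.0 / np.sqrt((2 * np.pi) ** d * np.linalg.det(sigma))
    return norm * np.exp(-0.5 * diff @ np.linalg.inv(sigma) @ diff)

def per_category_features(h_i, W2, mus, sigmas):
    x = h_i @ W2                                  # project by W2
    return np.array([gaussian_density(x, mu, sig) * x
                     for mu, sig in zip(mus, sigmas)])

rng = np.random.default_rng(2)
h_i = rng.normal(size=5)                          # initial feature of node i
W2 = rng.normal(size=(5, 2))
mus = [np.zeros(2), np.ones(2)]                   # category mean vectors
sigmas = [np.eye(2), 2.0 * np.eye(2)]             # category covariances
feats = per_category_features(h_i, W2, mus, sigmas)  # one row per category
```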
And step S223, fusing the characteristics of the node corresponding to each candidate graph type to obtain the node characteristics of the node.
In some possible embodiments, for each node, the features of the node corresponding to the candidate graph categories may be fused to obtain the node feature of the node under the influence of the different candidate graph categories. For example, a category weight may be obtained for each candidate graph category, and the node feature computed as the weighted sum of the per-category features based on those weights. Alternatively, the mean of the features of the node corresponding to the candidate graph categories may be taken as the final node feature of the node, i.e.

$$z_i = \frac{1}{C}\sum_{c=1}^{C} f_i^{c}$$

where $z_i$ is the node feature of node $i$, $f_i^{c}$ is the feature of node $i$ corresponding to the $c$-th candidate graph category, and $C$ represents the number of candidate graph categories. The per-category features of a node may also be fused through other preset fusion networks, models, and the like.
It should be particularly noted that the specific implementation manner for fusing the features of each node in the graph to be processed corresponding to each candidate graph category is only an example, and may be determined based on the actual application scenario requirements, without limitation.
Step S23, for each node, the class of the node is determined from the candidate graph classes according to the similarity between the node feature of the node and the graph class feature corresponding to each candidate graph class.
In some possible embodiments, after obtaining the node features of each node in the graph to be processed, the category of each node may be determined from each candidate graph category. For each node in the graph to be processed, the similarity between the node feature of the node and the graph category feature corresponding to each candidate graph category can be specifically determined, and then the candidate graph category corresponding to the highest similarity is determined as the category of the node. That is, the higher the similarity between the node feature of the node and the graph category feature corresponding to the candidate graph category, the closer the category of the node is to the candidate graph category. The candidate graph categories comprise graph categories of the graph to be processed, so that when the node categories of the nodes are determined, the influence of the graph categories of the graph to be processed on the categories of the nodes is fully considered, and the determination accuracy of the categories of the nodes is improved.
The similarity between the node feature of the node and the graph category feature of each candidate graph category includes, but is not limited to, Euclidean distance, cosine similarity, Manhattan distance, Chebyshev distance, Jaccard coefficient, and the like, and may be determined based on actual application scenario requirements, which is not limited herein. For example, when the similarity is cosine similarity, the category of each node can be determined by y_i = argmax_c cos(h_i, g_c), where h_i is the node feature of node i and g_c is the graph category feature corresponding to the c-th candidate graph category; that is, the candidate graph category corresponding to the graph category feature with the maximum cosine similarity is determined as the category of the node.
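The cosine-similarity classification step can be sketched as below (a minimal illustration; the function name is an assumption):

```python
import numpy as np

def classify_node(h, class_feats):
    """Assign a node to the candidate graph category whose graph category
    feature has the highest cosine similarity with the node feature.
    h: (d,) node feature; class_feats: (C, d) graph category features."""
    sims = [h @ g / (np.linalg.norm(h) * np.linalg.norm(g)) for g in class_feats]
    return int(np.argmax(sims))        # index of the most similar category
```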
In some possible implementations, the graph category in the embodiment of the present application is used to characterize a feature of a graph in a certain domain and a certain dimension. The same graph may have different graph categories in different domains and different dimensions, and different graphs may have the same graph category or different graph categories, which may be determined based on graph composition and actual application scenarios, without limitation.
For example, suppose the graph to be processed represents a payment network, and each node in the graph to be processed is a payment user in the payment network. In the field of transfer risk control, the graph category of the graph to be processed may be a money laundering network, that is, a money laundering phenomenon exists in the payment network represented by the graph to be processed; or the graph category of the graph to be processed may be a normal payment network, that is, the payment behavior of the users exhibits no abnormality.
Optionally, for the payment network, the graph category of the graph to be processed may also be a mobile payment network, indicating that the payment users in the payment network adopt mobile payment; or the graph category of the graph to be processed may be a bank card transfer network, indicating that the payment users pay by bank card transfer.
For another example, if the graph to be processed represents a social network and each node in the graph to be processed is a social user in the social network, in the information dissemination field, the graph category of the graph to be processed may be a rumor dissemination network, that is, there is a rumor dissemination phenomenon in the social network represented by the graph to be processed, or the graph category of the graph to be processed may be a normal social network (non-rumor social network).
Optionally, for social networks, the graph category of the graph to be processed may also be divided based on user age, such as a middle-aged and elderly social network, a teenage social network, and so on.
For another example, suppose the graph to be processed is a task topology graph or a program control graph, and each node in the graph to be processed is a task node in the task topology graph (program control graph). From an operations perspective, the graph category of the graph to be processed may be task (program) execution failure, that is, there is a task (program) node that fails to execute in the task topology graph; or the graph category of the graph to be processed may be task (program) execution success, that is, the complete task (program) corresponding to the task topology graph executes successfully.
Optionally, for the task topology graph, from a task-execution perspective, the graph categories of the graph to be processed may be front-end tasks and back-end tasks; from a task-type perspective, the graph categories of the graph to be processed may also be recognition tasks, classification tasks, translation tasks, and the like.
The candidate graph categories are a category set including the graph category of the graph to be processed and other graph categories that the graph to be processed may possibly have; the specific number and division of graph categories may be determined based on actual application scenario requirements, which is not limited herein.
And step S24, performing corresponding processing on at least one node in the graph to be processed based on the category of each node.
In some possible embodiments, after determining the category of each node in the graph to be processed, one or more nodes in the graph to be processed may be processed accordingly. Specifically, the nodes in the graph to be processed may be clustered and segmented according to the categories of the nodes, so as to implement segmentation of the graph to be processed. If the pending graph is a payment network, the payment network represented by the pending graph can be divided into a normal payment network and a money laundering network based on the category of each node. And for the payment user corresponding to the money laundering network, processing measures such as risk early warning and transaction limiting can be taken in time. And for the payment user corresponding to the normal payment network, risk prompt can be carried out on the payment user so as to improve the risk prevention consciousness of the payment user.
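The clustering-and-segmentation step described above can be sketched minimally as grouping nodes by their predicted category (names such as `split_by_category` are hypothetical, not from the application):

```python
from collections import defaultdict

def split_by_category(nodes, categories):
    """Partition the nodes of the processed graph by predicted node category,
    e.g. separating a payment network into 'normal' and 'laundering' groups."""
    groups = defaultdict(list)
    for node, cat in zip(nodes, categories):
        groups[cat].append(node)
    return dict(groups)
```

Each resulting group can then be handled with its own measure, e.g. risk early-warning for one group and a risk reminder for the other.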
Optionally, each node may be screened based on the category of each node in the graph to be processed, specifically, the screening may be performed based on the number of nodes in each category, a preset screening category, and the like, which is not limited herein. If the graph to be processed is a task topological graph, different task testing methods are adopted based on different node types (task types).
The method for processing the nodes in the graph to be processed based on the category of each node in the graph to be processed is only an example, and is not limited in the embodiment of the present application.
In some possible embodiments, after determining the category of each node in the graph to be processed, the nodes whose categories differ from the graph category of the graph to be processed may be further processed correspondingly based on the graph category of the graph to be processed; the specific processing manner may be determined based on the actual application scenario requirements, the categories of the nodes, and the graph category of the graph to be processed, which is not limited herein.
The category of a node being the same as the graph category of the graph to be processed can be understood as the two corresponding to the same feature, the same behavior, and so on. For example, if the graph category is a rumor propagation network and the category of a node is a rumor propagation node, both correspond to rumor propagation, which means the category of the node is the same as the graph category. Similarly, nodes whose categories differ from the graph category of the graph to be processed can be determined in the same manner.
For example, if the graph category of the graph to be processed indicates that the corresponding program control graph runs normally, but among the categories of the nodes in the graph to be processed determined based on the above implementation there is a node whose category is execution failure, measures such as program testing, program modification, or program replacement may be taken for that node.
For example, if the graph category of the graph to be processed indicates that the corresponding social network is a normal social network (rumor-free social network), but rumor propagation nodes exist among the categories of the nodes determined based on the above implementation, then those rumor propagation nodes are nodes whose categories differ from the graph category of the graph to be processed. Rumor-refutation measures can then be taken for these nodes, or corresponding reminders can be issued to the other, non-rumor nodes.
As shown in fig. 5, fig. 5 is a schematic view of a scene for processing a graph to be processed according to an embodiment of the present application. In fig. 5, a graph to be processed can be obtained based on the users in an information dissemination network and the information dissemination relationships between them. Each node in the graph to be processed corresponds to a user in the dissemination network, and the edges between nodes represent the corresponding information dissemination relationships. Suppose the candidate graph categories include two categories: rumor (rumor propagation network) and real information (rumor-free social network). Based on the graph processing method provided in the embodiment of the present application, the categories of node A, node B, node C, node E and node F in fig. 5 are determined to be rumor (rumor propagation nodes), that is, the information disseminated by the users corresponding to these nodes is a rumor; the categories of node D, node G and node H are real information (normal user nodes), that is, the information disseminated by the users corresponding to node D, node G and node H is real information.
If the graph category of the graph to be processed was labeled as a rumor-free social network when the graph to be processed was determined, the categories of node A, node B, node C, node E and node F differ from the graph category of the graph to be processed, and rumor-refutation treatment can be performed for user A, user B, user C, user E and user F. Further, based on the information dissemination relationships among the nodes, node A in the graph to be processed can be determined to be the information dissemination source, and measures such as limiting information dissemination and issuing dissemination alerts can further be taken for user A.
Optionally, a target node in the graph to be processed may be determined and processed based on the graph type of the graph to be processed and the node type of each node in the graph to be processed. The target node may be a node that changes a structural relationship, an operation logic, a composition mode, and the like of the graph to be processed represented by the graph type, or may be a node that is unrelated to the graph type of the graph to be processed, and may be specifically determined based on the requirements of the actual application scenario, which is not limited herein.
For example, suppose the graph to be processed represents a chemical molecular structure, each node represents a functional group of the chemical molecular structure, the graph category of the graph to be processed may represent the chemical name of the chemical molecular structure, and the category of each node represents the name of the corresponding functional group. An erroneous functional group in the molecular structure under that chemical name can be determined based on the graph category of the graph to be processed and the categories of the nodes, and the erroneous functional group can be modified so that the chemical molecular structure formed by the functional groups is consistent with the corresponding chemical name.
As shown in fig. 6, fig. 6 is another schematic view of a scene for processing a graph to be processed according to an embodiment of the present application. FIG. 6 shows a chemical molecular structure labeled 2,4-dinitroethylbenzene, which can be represented as the graph to be processed by taking the functional groups in the molecular structure as nodes and their connection relationships as edges. When the candidate graph categories include the various functional groups and substituents, the node category of each node in the graph to be processed can be determined based on the graph processing method provided in the embodiment of the present application: the category of node A in fig. 6 is a benzene ring, the category of node B is a methyl group, the category of node C is a nitro group, and the category of node E is a nitro group. Since the chemical molecular structure was initially labeled as 2,4-dinitroethylbenzene, that is, the graph category of the graph to be processed is 2,4-dinitroethylbenzene, but the category of node B was determined to be methyl in the process of determining the categories of the nodes (that is, the functional group corresponding to node B is a methyl group, inconsistent with the ethyl group implied by the name), node B can be determined to be the target node. The functional group corresponding to node B can then be changed from methyl to ethyl, so that the final molecular structure is indeed that of 2,4-dinitroethylbenzene.
In the embodiment of the application, the similarity between the node feature of each node in the graph to be processed and the graph category features corresponding to the candidate graph categories is determined, and the category of each node is then determined based on those similarities. The determination of the node category thus fully considers the node feature of each node while also incorporating the influence of the graph category on the node category, and determining the category of each node via the candidate graph categories can further improve the accuracy of determining the category of each node in the graph to be processed. Meanwhile, the information brought by the graph category is effectively incorporated into the process of determining the node categories, so that the applicable scenarios of the graph processing method provided by the embodiment of the application can be further broadened, and the applicability is high.
In some possible embodiments, in the embodiment of the present application, the node feature extraction network used for obtaining the node features of each node in the graph to be processed is obtained by pre-training according to training data, and a specific training process may be referred to in fig. 7. Fig. 7 is a schematic flowchart of a training method for a node feature extraction network according to an embodiment of the present application, where the training method for a node feature extraction network shown in fig. 7 may include the following steps:
and step S71, acquiring an initial graph classification network and training data.
In some possible embodiments, when training the node feature extraction network, an initial graph classification network and training data for training may be obtained first, where each sample graph in the training data is labeled with a sample label, and the sample label corresponding to each sample graph represents the real graph category of the sample graph. The initial graph classification network includes a node feature extraction module, a graph feature extraction module, and a graph classification module, which are sequentially cascaded; the specific functions of these modules in the training process can be found in step S72 and are not described here.
The training data in the embodiment of the application may be acquired from existing data in a database and cloud storage, or the training data for training the node feature extraction network may be acquired based on a big data technology, and may be specifically determined based on requirements of an actual application scenario, which is not limited herein.
And step S72, inputting each sample graph into the node feature extraction module to obtain the node features of each node of each sample graph, inputting the node features of each node into the graph feature extraction module to obtain the graph features of each sample graph, and inputting the graph features of each sample graph into the graph classification module to obtain the prediction graph category of each sample graph.
In some feasible embodiments, in the graph processing method provided in the embodiment of the present application, the node feature extraction network extracts the node features of the nodes in the graph to be processed, and the category of each node is then determined based on the node features and the graph category features corresponding to the candidate graph categories. Therefore, during training, the node features of the nodes in each sample graph obtained by the node feature extraction module can be processed by the graph feature extraction module and the graph classification module to obtain the prediction graph category of each sample graph, so as to measure, against the real graph categories of the sample graphs, whether the node features extracted by the node feature extraction module are effective and accurate. When the prediction graph category of each sample graph is consistent with the real graph category represented by the sample label of that sample graph, the node features extracted by the node feature extraction module can be considered accurate, and the node feature extraction module in the graph classification network at the end of training can therefore be used as the trained node feature extraction network.
Specifically, referring to fig. 8, fig. 8 is a schematic structural diagram of a graph classification network provided in the embodiment of the present application. Fig. 8 shows a process of performing a first training on the initial graph classification network based on training data, and as shown in fig. 8, after the training data is obtained, each sample graph in the training data may be input to a node feature extraction module in the initial graph classification network, so as to obtain a node feature of each node in each sample graph. The node feature extraction module also comprises an initial feature extraction sub-network and a node feature extraction sub-network corresponding to each candidate graph category connected with the initial feature extraction sub-network, and specifically can extract the initial features of each node in each graph through the initial feature extraction sub-network and further obtain the node features of each node through the node feature extraction sub-network. For a specific implementation manner of extracting the node features of each node in each sample graph through the node feature extraction module in the initial graph classification network, refer to an implementation manner of extracting the node features of each node in the graph to be processed through the node feature extraction network in step S22 in fig. 2, which is not described herein again.
Further, the node features of the nodes in each sample graph obtained by the node feature extraction module are input into the graph feature extraction module to obtain the graph features of each sample graph. Specifically, for each sample graph, the node features of each node may be compressed through the attention pooling network in the graph feature extraction module to reduce data processing amount while aggregating the node features of each node, and further, the attention score corresponding to each node may be obtained through the perceptron, and the attention score of each node is normalized through the softmax function to obtain the attention distribution that may clearly represent the importance degree of each node. For each sample graph, the node characteristics of each node in the sample graph can be converted into a graph characteristic representation with fixed length according to the node characteristics of each node in the sample graph and the attention distribution corresponding to each node in the sample graph, and the graph characteristics of the sample graph are obtained. Through the attention pooling network in the graph feature extraction module, the nodes with the same category as the real category of the sample graph can obtain more attention, and the nodes with the different category from the real category of the sample graph can obtain less attention, so that the obtained graph features of the sample graph can effectively distinguish the categories of the nodes.
When the attention scores corresponding to the nodes are obtained through the perceptron, the number of layers of the perceptron and the corresponding activation functions may be determined based on the actual application scenario, which is not limited herein. For example, for a sample graph, the node features of the nodes in the sample graph may be represented by a matrix H; when the attention scores corresponding to the nodes are obtained through a two-layer perceptron, the graph feature of the sample graph obtained based on the above implementation is E = Attn(H) = softmax(Θ_2 tanh(Θ_1 H^T)) H. Here, tanh() is the activation function corresponding to the perceptron, Θ_1 and Θ_2 are the weights corresponding to the two layers of the perceptron respectively, Θ_2 tanh(Θ_1 H^T) gives the attention scores corresponding to the nodes in the sample graph, and the attention distribution corresponding to the nodes obtained by the softmax function is softmax(Θ_2 tanh(Θ_1 H^T)).
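The attention pooling computation E = softmax(Θ_2 tanh(Θ_1 H^T)) H can be sketched as follows; this is a minimal illustration and the names (`attention_pool`, `theta1`, `theta2`) are assumptions, not from the application:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stable
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(H, theta1, theta2):
    """Compress variable-size node features into a fixed-size graph feature.
    H: (N, d) node features; theta1: (k, d); theta2: (r, k).
    Returns an (r, d) graph feature independent of the node count N."""
    scores = theta2 @ np.tanh(theta1 @ H.T)   # (r, N) attention scores
    attn = softmax(scores, axis=-1)           # attention distribution over nodes
    return attn @ H                           # fixed-length graph feature
```

Because the output shape depends only on the weight shapes, sample graphs with different node counts yield graph features of the same length, as the text notes.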
Based on the above implementation, the graph features of the sample graphs can be obtained through the graph feature extraction module, and through the attention pooling network in the graph feature extraction module, the graph features of sample graphs with different node counts can have the same length.
And finally, inputting the graph features of each sample graph into a graph classification module to obtain the prediction graph category of each sample graph. Specifically, the prediction graph category of each sample graph can be predicted separately based on the graph features of each sample graph through a graph classification module, and the specific prediction mode can be determined through a classification algorithm, a related neural network and the like, which is not limited herein.
Optionally, when there is an association relationship between the sample graphs, in order to improve the prediction efficiency of the prediction graph categories of the sample graphs, and to further improve their prediction accuracy by exploiting the association relationships between the sample graphs, a hierarchical graph feature may be constructed by the graph classification module based on those association relationships. In other words, if the hierarchical graph feature is analogized to the graph feature of a new graph, the graph features of the sample graphs may be regarded as the node features of the nodes in that new graph.
The association relationship between the sample graphs may be, for example, that the sample graphs correspond to different parts of the same social network, that they correspond to different parts of the same chemical molecular structure, or that they are based on different social networks of the same user; it may be specifically determined based on the requirements of the actual application scenario, which is not limited herein.
Furthermore, the hierarchical graph feature can be updated through a message passing network, a graph convolution network, a graph attention network, and the like in the graph classification module, so as to strengthen the association relationships among the graph features of the sample graphs within the hierarchical graph feature and obtain an updated hierarchical graph feature. For example, the specific updating process for updating the hierarchical graph feature through the graph convolution network is as follows:
F = hierGCN(E′) = Concat(E′, σ(A_hier E′ Θ_3));
where E′ is the hierarchical graph feature constructed based on the graph features E of the sample graphs, Θ_3 is the weight corresponding to the graph convolution network, A_hier is the adjacency matrix of the hierarchical graph determined by the association relationships between the sample graphs, σ() is the activation function corresponding to the graph convolution network, and Concat denotes the concatenation operation.
And processing the updated hierarchical graph feature F through the full connection layer and the softmax function to obtain the probability that each sample graph is of each candidate graph type, and further taking the candidate graph type with the highest probability as the prediction graph type of the corresponding sample graph.
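The hierarchical graph convolution update F = Concat(E′, σ(A_hier E′ Θ_3)) can be sketched as below. This is an illustrative sketch assuming σ is ReLU; the function and variable names are assumptions, not from the application:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def hier_gcn(E, A_hier, theta3):
    """Update the hierarchical graph feature by one graph convolution step.
    E: (n_graphs, d) graph features of the sample graphs (nodes of the
    hierarchical graph); A_hier: (n_graphs, n_graphs) adjacency among
    sample graphs; theta3: (d, d_out) convolution weight."""
    propagated = relu(A_hier @ E @ theta3)      # aggregate over related graphs
    return np.concatenate([E, propagated], axis=1)  # Concat(E', sigma(...))
```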
Optionally, when no association exists between the sample graphs or the association between the sample graphs is unknown, the hierarchical graph feature may be updated by a perceptron in the graph classification module, so as to obtain an updated hierarchical graph feature while reducing the data processing amount, and the prediction graph categories of the sample graphs can then be obtained from the updated hierarchical graph feature.
And step S73, determining a total training loss value according to the prediction graph type of each sample graph and the sample label of each sample graph.
In some feasible embodiments, whether the graph category prediction capability of the graph classification network is stable and accurate can be measured through the total training loss value during training; further, when the graph category of each sample graph can be accurately predicted by the graph classification network, it can be determined that the capability of the node feature extraction module in the graph classification network to obtain the node features of the nodes is stable and accurate.
Specifically, a first training loss value for the graph classification network training process may be determined according to the prediction graph category of each sample graph and the sample label of each sample graph, and the first training loss value may be determined as the total training loss value. The first training loss value represents the difference between the prediction graph category and the real graph category of each sample graph; the smaller the first training loss value, the higher the prediction accuracy and stability of the graph classification network for the graph categories of the sample graphs. The first training loss value may be computed with a cross-entropy loss function or another classification loss function, and may be specifically determined based on the requirements of the actual application scenario, which is not limited herein.
In some feasible embodiments, because the node feature extraction network that can accurately and stably obtain the node features of the nodes is obtained mainly through a stably trained graph classification network, on the basis of measuring the overall difference between the prediction graph category and the real graph category of each sample graph, the similarity between the graph feature obtained by each sample graph (based on the node features extracted by the node feature extraction module) and the graph category features of the candidate graph categories can be further measured, so as to further evaluate the stability and accuracy of the graph classification network and the node feature extraction network.
For the graph features of the sample graphs, a first feature distance d_nc between the graph feature e_n of each sample graph and the graph category feature g_c corresponding to each candidate graph category may be determined, where n denotes the index of the graph feature of a sample graph and c denotes the index of a candidate graph category. To further highlight the first feature distance d_nc between the graph feature of each sample graph and the graph category feature corresponding to each candidate graph category, the target candidate graph category corresponding to each graph feature may be determined first, where, for the graph feature of a sample graph, the corresponding target candidate graph category is the candidate graph category consistent with the real graph category of that sample graph. Each first feature distance may then be enhanced based on the second feature distance between the graph feature and the graph category feature of its corresponding target candidate graph category, as follows: (d_nc)_enhance = d_nc + μ · d_{n,y^(n)};
where μ is a discount hyperparameter in the distance enhancement process, which may be determined based on actual application requirements and is not limited herein; y^(n) represents the real graph category of the sample graph with index n; d_{n,y^(n)} is the second feature distance between the graph feature of the sample graph with index n and the graph category feature corresponding to that graph feature's target candidate graph category; and (d_nc)_enhance represents the enhanced feature distance between each graph feature and the graph category feature corresponding to each candidate graph category.
For example, referring to fig. 9, fig. 9 is a schematic diagram of a method for determining a feature distance according to an embodiment of the present application. In fig. 9, e_1 is the graph feature of a sample graph; the first feature distance between e_1 and the graph category feature g_1 is d_1, the first feature distance between e_1 and the graph category feature g_2 is d_2, and the first feature distance between e_1 and the graph category feature g_3 is d_3. When the candidate graph category corresponding to g_1 is the same as the real category of the sample graph corresponding to e_1, the candidate graph category corresponding to g_1 is the target candidate graph category of e_1, and the second feature distance between e_1 and the graph category feature of the target candidate graph category is also d_1; after applying the discount hyperparameter, this second feature distance is denoted δd_y. Thus, for the graph feature e_1, the enhanced feature distance to g_1 is d_1 + δd_y, the enhanced feature distance to g_2 is d_2 + δd_y, and the enhanced feature distance to g_3 is d_3 + δd_y.
Further, the distance matrix D_enhance is normalized by a softmax function to obtain the similarity matrix S = softmax(−D_enhance). The distance matrix D_enhance is determined by the enhanced distances (d_nc)_enhance; each row of the similarity matrix represents the similarities between the graph feature of one sample graph and the graph category features corresponding to the candidate graph categories, and each column represents the similarities between the graph features of the sample graphs and the graph category feature corresponding to one candidate graph category. That is, any element of the similarity matrix represents the similarity between the graph feature of one sample graph and the graph category feature corresponding to one candidate graph category.
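The distance-enhancement and softmax-normalization steps above can be sketched together; a minimal illustration assuming the enhanced distances are formed as d_nc + μ·d_{n,y^(n)} per row (function name hypothetical):

```python
import numpy as np

def enhanced_similarity(D, y, mu=0.5):
    """D: (N, C) first feature distances d_nc between graph features and
    graph category features; y: (N,) real category index per sample graph.
    Returns S = softmax(-D_enhance), row-normalized similarities."""
    target = D[np.arange(len(y)), y]           # second distances d_{n, y(n)}
    D_enh = D + mu * target[:, None]           # (d_nc)_enhance
    M = -D_enh                                 # softmax over negated distances
    e = np.exp(M - M.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)
```

Each row of S sums to 1, and a smaller enhanced distance within a row yields a larger similarity.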
Further, a second training loss value may be calculated according to the similarity matrix and the real graph category of each sample graph, and a total training loss value may be obtained according to the first training loss value and the second training loss value. A relation matrix representing the relation between the real graph category of each sample graph and each candidate graph category may be constructed according to the real graph categories of the sample graphs. Each element in the relation matrix represents the relation between the real graph category of one sample graph and one candidate graph category (if the two are the same, the value of the element is 1; if the two are different, the value of the element is 0), so that each element may represent the real similarity between the graph feature of one sample graph and the graph class feature of one candidate graph category. Therefore, the second training loss value characterizes the difference between the similarities obtained in the training process (between the graph features of the sample graphs and the graph class features corresponding to the candidate graph categories) and the real similarities between the graph features and the graph class features.
The second training loss value may be obtained by calculating a cross entropy loss function or another classification loss function, and may be specifically determined based on the requirements of the actual application scenario, which is not limited herein.
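A cross-entropy form of the second loss, using the 0/1 relation matrix as the real similarities, could look as follows (the function name and the `1e-12` stabilizer are assumptions; the text allows any classification loss):

```python
import numpy as np

def second_loss(S, true_classes):
    # relation matrix R: R[n, c] = 1 iff the real category of sample n is c;
    # its elements are the "real similarities" described in the text
    R = np.zeros_like(S)
    R[np.arange(S.shape[0]), true_classes] = 1.0
    # cross entropy between the real similarities R and the predicted
    # similarity matrix S (one possible choice of classification loss)
    return -np.mean(np.sum(R * np.log(S + 1e-12), axis=1))

good = second_loss(np.array([[0.98, 0.01, 0.01]]), [0])  # confident, correct
bad = second_loss(np.array([[0.50, 0.25, 0.25]]), [0])   # less confident
```

The loss shrinks as the predicted similarities approach the 0/1 real similarities.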
Optionally, after obtaining the first training loss value and the second training loss value, a first weight corresponding to the first training loss value and a second weight corresponding to the second training loss value may be obtained, and according to the first training loss value, the second training loss value, the first weight and the second weight, the total training loss value L = α·L_cls + (1−α)·L_con may be determined. Where L is the total training loss value, L_cls is the first training loss value, L_con is the second training loss value, α is the first weight, (1−α) is the second weight, and α ∈ [0, 1].
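The weighted combination is a one-liner; the sketch below just makes the formula and the α ∈ [0, 1] constraint explicit:

```python
def total_loss(l_cls, l_con, alpha):
    # L = alpha * L_cls + (1 - alpha) * L_con, with alpha in [0, 1]
    assert 0.0 <= alpha <= 1.0
    return alpha * l_cls + (1.0 - alpha) * l_con

L = total_loss(1.0, 3.0, 0.25)   # 0.25 * 1.0 + 0.75 * 3.0
```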
And step S74, performing iterative training on the initial graph classification network according to the total training loss value and the training data until the total training loss value meets the training end condition, and determining a node feature extraction module in the graph classification network at the training end as a node feature extraction network.
In some feasible embodiments, the initial graph classification network is trained according to the total training loss value and each sample graph in the training data based on the above implementation, and in each training pass the graph class features corresponding to the candidate graph categories, together with the other network and model parameters, are updated through back propagation, so that the node features obtained by the node feature extraction module are represented more reasonably. Referring to fig. 10, fig. 10 is a schematic diagram of a training method of a node feature extraction network according to an embodiment of the present application. As shown in fig. 10, for the first training pass, the input of the initial graph classification network is each sample graph, and the node feature extraction module may extract the node feature of each node in each sample graph: a graph attention network may be used as the initial feature extraction sub-network in the node feature extraction module to extract the initial feature of each node in the sample graph, and the node feature extraction sub-networks may obtain the node feature of each node based on these initial features. Through the attention pooling network in the graph feature module, the graph feature corresponding to each sample graph can be obtained based on the node features of the nodes in that sample graph; the hierarchical graph feature is then obtained through the graph classification module and updated through the graph convolution network, so that the prediction graph category of each sample graph is obtained according to the updated hierarchical graph feature.
When the total training loss value meets the training end condition, the node feature extraction module in the graph classification network at the end of training can be used as the node feature extraction network, and the category of each node in the graph to be processed is then determined according to the trained node feature extraction network. The training end condition may be that the total training loss value reaches a convergence state, or that the total training loss value is lower than a preset threshold, and the like, and may be specifically determined based on the requirements of the actual application scenario, which is not limited herein.
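A minimal sketch of the iterate-until-threshold loop described above, using the "loss below a preset threshold" variant of the training end condition (the callable `step_fn` and all parameter names are illustrative assumptions):

```python
def train(step_fn, loss_threshold=1e-3, max_iters=1000):
    # step_fn runs one training iteration over the training data and
    # returns the total training loss value for that iteration
    loss = float("inf")
    for i in range(max_iters):
        loss = step_fn()
        if loss < loss_threshold:   # training-end condition met
            return loss, i + 1
    return loss, max_iters

# simulated losses from three training iterations
losses = iter([0.5, 0.01, 0.0005])
final_loss, iters = train(lambda: next(losses))
```

The convergence-state variant would instead compare successive loss values and stop when the change falls below a tolerance.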
In the embodiment of the application, because the node feature extraction network determines the category of each node according to the graph class features of the candidate graph categories, the graph classification network is trained and the node feature extraction module in the graph classification network at the end of training is used as the node feature extraction network for determining the category of each node. On one hand, the accuracy of the node feature extraction module can be measured by the accuracy of graph category prediction in the process of training the graph classification network; on the other hand, the graph class features corresponding to the candidate graph categories can be continuously adjusted according to the total training loss value in the training process, so that the finally obtained node feature extraction network can determine the category of each node according to accurate and suitable graph class features, and the applicability is high.
Referring to fig. 11, fig. 11 is a schematic structural diagram of a graph processing apparatus according to an embodiment of the present application. The graph processing apparatus provided in the embodiment of the present application includes:
the graph acquiring module 111 is used for acquiring a graph to be processed;
a feature obtaining module 112, configured to obtain node features of each node in the graph to be processed;
a classification module 113, configured to, for each node, determine a class of the node from the candidate graph classes according to a similarity between a node feature of the node and a graph class feature corresponding to each candidate graph class, where each candidate graph class includes a graph class of the to-be-processed graph;
and a graph processing module 114, configured to perform corresponding processing on at least one node in the graph to be processed based on the type of each node.
In some possible embodiments, the feature obtaining module 112 is configured to:
obtaining node characteristics of each node in the graph to be processed through a node characteristic extraction network, wherein the node characteristic extraction network comprises an initial characteristic extraction sub-network and node characteristic extraction sub-networks corresponding to candidate graph types connected with the initial characteristic extraction sub-network;
for each node, extracting the initial feature of the node in the graph to be processed through the initial feature extraction sub-network;
extracting a sub-network through the node characteristics corresponding to each candidate graph type based on the initial characteristics of the node to obtain the characteristics of the node corresponding to each candidate graph type;
and fusing the characteristics of the node corresponding to each candidate graph category to obtain the node characteristics of the node.
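The per-category extraction and fusion steps listed above can be sketched as follows. This is an illustrative assumption throughout: a single linear map stands in for each candidate category's node feature extraction sub-network, and mean fusion stands in for the unspecified fusion operation.

```python
import numpy as np

def node_feature(init_feat, class_weights):
    # one extraction sub-network per candidate graph category; a linear
    # map per category is an assumption standing in for each sub-network
    per_class = [W @ init_feat for W in class_weights]
    # fuse the per-category features (mean fusion is an assumption)
    # into the final node feature
    return np.mean(per_class, axis=0)

x = np.array([1.0, 2.0])                # initial feature of one node
Ws = [np.eye(2), 3.0 * np.eye(2)]       # two candidate categories
f = node_feature(x, Ws)
```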
In some possible embodiments, the feature obtaining module 112 is configured to:
acquiring a first characteristic of the node, and determining an attention autocorrelation coefficient of the node according to the first characteristic of the node;
according to the first characteristic of the node and the first characteristics of each adjacent node of the node, determining the attention cross-correlation coefficient of the node and each adjacent node;
and determining the initial characteristics of the node according to the attention autocorrelation coefficient and the attention cross-correlation coefficient.
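The three steps above can be sketched as a single attention-weighted combination. Dot-product scoring is an assumption standing in for the learned attention function; the first score plays the role of the attention autocorrelation coefficient (node with itself) and the rest are the cross-correlation coefficients with the adjacent nodes.

```python
import numpy as np

def initial_feature(x, neighbor_feats):
    # stack the node's own first feature with its neighbours' first features
    feats = np.stack([x] + list(neighbor_feats))
    # scores[0]: autocorrelation coefficient; scores[1:]: cross-correlation
    scores = feats @ x
    w = np.exp(scores - scores.max())
    w /= w.sum()                      # normalised attention weights
    return w @ feats                  # initial feature of the node

x = np.array([1.0, 0.0])
out = initial_feature(x, [np.array([1.0, 0.0])])
```

When all neighbours equal the node itself, the weighted combination returns the node's own feature unchanged.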
In some possible embodiments, the graph class feature corresponding to one candidate graph category is a network parameter of the node feature extraction sub-network corresponding to that candidate graph category.
In some possible embodiments, the node feature extraction sub-network is a network based on a gaussian mixture model, and the graph class feature corresponding to one candidate graph class is a gaussian distribution parameter of the gaussian mixture model corresponding to the candidate graph class.
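Under the Gaussian-mixture reading above, scoring a node feature against one candidate category reduces to evaluating that category's Gaussian component. A minimal isotropic sketch (the function name and the single shared variance are assumptions — the patent only states that the class feature is a Gaussian distribution parameter):

```python
import numpy as np

def gaussian_log_density(x, mean, var):
    # (mean, var) play the role of the graph class feature: the Gaussian
    # distribution parameters of the component for one candidate category
    d = x - mean
    return -0.5 * np.dot(d, d) / var - 0.5 * x.size * np.log(2 * np.pi * var)

x = np.array([0.1, 0.0])
near = gaussian_log_density(x, np.array([0.0, 0.0]), 1.0)
far = gaussian_log_density(x, np.array([5.0, 5.0]), 1.0)
```

A node feature close to a component's mean scores higher under that candidate category.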
In some possible embodiments, the classification module 113 is configured to:
for each node, determining the highest similarity from the similarity between the node characteristics of the node and the graph category characteristics corresponding to each candidate graph category;
and determining the candidate graph category corresponding to the highest similarity as the category of the node.
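The highest-similarity selection above can be sketched as an argmax; cosine similarity is an assumption standing in for the unspecified similarity measure between node features and graph class features.

```python
import numpy as np

def node_category(node_feat, class_feats):
    # cosine similarity between the node feature and each candidate
    # category's graph class feature (an illustrative choice)
    sims = [np.dot(node_feat, c) / (np.linalg.norm(node_feat) * np.linalg.norm(c))
            for c in class_feats]
    # the candidate category with the highest similarity is the node's category
    return int(np.argmax(sims))

cat = node_category(np.array([1.0, 0.1]),
                    [np.array([1.0, 0.0]), np.array([0.0, 1.0])])
```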
In some possible embodiments, the graph processing module 114 is further configured to:
acquiring the graph type of the graph to be processed;
and according to the category of each node, performing corresponding processing on those nodes, among the nodes, whose category is different from the graph category of the graph to be processed.
Referring to fig. 12, fig. 12 is a schematic structural diagram of a node feature extraction network training apparatus provided in the embodiment of the present application. The training device that this application embodiment provided includes:
a network obtaining module 121, configured to obtain an initial graph classification network, where the initial graph classification network includes a node feature extraction module, a graph feature extraction module, and a graph classification module that are sequentially cascaded;
a data obtaining module 122, configured to obtain training data, where each sample graph in the training data is labeled with a sample label, and the sample label represents the real graph category of the sample graph;
an input module 123, configured to input each sample graph to the node feature extraction module, so as to obtain a node feature of each node of each sample graph;
the input module 123 is configured to input the node features of each node into the graph feature extraction module to obtain the graph features of each sample graph;
the input module 123 is configured to input the graph features of each sample graph to the graph classification module to obtain a prediction graph type of the sample graph;
a loss determining module 124, configured to determine a total training loss value according to the prediction graph type of each sample graph and the sample label of each sample graph;
and a network determining module 125, configured to perform iterative training on the initial graph classification network according to the total training loss value and the training data, and determine the node feature extraction module in the initial graph classification network at the end of training as the node feature extraction network until the total training loss value meets a training end condition.
In some possible embodiments, the loss determining module 124 is configured to:
determining a first training loss value according to the prediction graph type of each sample graph and the sample label of each sample graph;
determining the characteristic distance between each image characteristic and the image category characteristic corresponding to each candidate image category;
determining a second training loss value according to the feature distance corresponding to each graph feature and each candidate graph category;
and determining a total training loss value according to the first training loss value and the second training loss value.
In some possible embodiments, the input module 123 is configured to:
inputting the node characteristics of each node into the graph characteristic extraction module, and executing the following operations through the graph characteristic extraction module:
for each sample graph, determining attention distribution corresponding to each node in the sample graph according to the node characteristics of each node in the sample graph;
and determining the graph characteristics of the sample graph according to the node characteristics of each node in the sample graph and the attention distribution.
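The two steps above can be sketched as attention pooling over the nodes; the learnable query vector `attn_vec` is an assumption standing in for the attention mechanism left unspecified in the text.

```python
import numpy as np

def graph_feature(node_feats, attn_vec):
    # attention distribution over the nodes, derived from their node features
    scores = node_feats @ attn_vec
    w = np.exp(scores - scores.max())
    w /= w.sum()
    # graph feature: attention-weighted sum of the node features
    return w @ node_feats

nodes = np.array([[1.0, 0.0], [0.0, 1.0]])   # node features of one sample graph
g = graph_feature(nodes, np.array([1.0, 1.0]))
```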
In some possible embodiments, the loss determining module 124 is configured to:
for each graph feature, determining a first feature distance between the graph feature and a graph class feature corresponding to each candidate graph class;
determining target candidate graph types consistent with real graph types corresponding to the graph features from the candidate graph types;
and determining a second feature distance between the graph feature and the graph class feature corresponding to the target candidate graph category, and determining the feature distance between the graph feature and the graph class feature corresponding to each candidate graph category according to the first feature distance and the second feature distance.
In some possible embodiments, the loss determining module 124 is configured to:
acquiring a first weight corresponding to the first training loss value and a second weight corresponding to the second training loss value;
determining a total training loss value based on the first training loss value, the second training loss value, the first weight, and the second weight.
In a specific implementation, the graph processing apparatus may execute, through each built-in functional module thereof, the implementation manners provided in each step in fig. 2, fig. 4, and/or fig. 7, which may specifically refer to the implementation manners provided in each step, and are not described herein again.
In the embodiment of the application, because the node feature extraction network determines the category of each node according to the graph class features of the candidate graph categories, the graph classification network is trained and the node feature extraction module in the graph classification network at the end of training is used as the node feature extraction network for determining the category of each node. On one hand, the accuracy of the node feature extraction module can be measured by the accuracy of graph category prediction in the process of training the graph classification network; on the other hand, the graph class features corresponding to the candidate graph categories can be continuously adjusted according to the total training loss value in the training process, so that the finally obtained node feature extraction network can determine the category of each node according to accurate and suitable graph class features, and the applicability is high.
Referring to fig. 13, fig. 13 is a schematic structural diagram of an electronic device provided in an embodiment of the present application. As shown in fig. 13, the electronic device 1000 in the present embodiment may include: a processor 1001, a network interface 1004, and a memory 1005; the electronic device 1000 may further include: a user interface 1003 and at least one communication bus 1002. The communication bus 1002 is used to enable connection and communication between these components. The user interface 1003 may include a Display screen (Display) and a Keyboard (Keyboard), and the optional user interface 1003 may also include a standard wired interface and a standard wireless interface. The network interface 1004 may optionally include a standard wired interface or a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., at least one disk memory). The memory 1005 may optionally also be at least one storage device located remotely from the processor 1001. As shown in fig. 13, the memory 1005, as a kind of computer-readable storage medium, may include an operating system, a network communication module, a user interface module, and a device control application program.
In the electronic device 1000 shown in fig. 13, the network interface 1004 may provide a network communication function; the user interface 1003 is an interface for providing a user with input; and the processor 1001 may be used to invoke a device control application stored in the memory 1005 to implement:
acquiring a graph to be processed;
acquiring node characteristics of each node in the graph to be processed;
for each node, determining the class of the node from the candidate graph classes according to the similarity between the node feature of the node and the graph class feature corresponding to each candidate graph class, wherein each candidate graph class comprises the graph class of the graph to be processed;
and performing corresponding processing on at least one node in the graph to be processed based on the category of each node.
In some possible embodiments, the processor 1001 is configured to:
obtaining node characteristics of each node in the graph to be processed through a node characteristic extraction network, wherein the node characteristic extraction network comprises an initial characteristic extraction sub-network and node characteristic extraction sub-networks corresponding to candidate graph types connected with the initial characteristic extraction sub-network;
for each node, extracting the initial feature of the node in the graph to be processed through the initial feature extraction sub-network;
extracting a sub-network through the node characteristics corresponding to each candidate graph type based on the initial characteristics of the node to obtain the characteristics of the node corresponding to each candidate graph type;
and fusing the characteristics of the node corresponding to each candidate graph category to obtain the node characteristics of the node.
In some possible embodiments, the processor 1001 is configured to:
acquiring a first characteristic of the node, and determining an attention autocorrelation coefficient of the node according to the first characteristic of the node;
according to the first characteristic of the node and the first characteristics of each adjacent node of the node, determining the attention cross-correlation coefficient of the node and each adjacent node;
and determining the initial characteristics of the node according to the attention autocorrelation coefficient and the attention cross-correlation coefficient.
In some possible embodiments, the graph class feature corresponding to one candidate graph category is a network parameter of the node feature extraction sub-network corresponding to that candidate graph category.
In some possible embodiments, the node feature extraction sub-network is a network based on a gaussian mixture model, and the graph class feature corresponding to one candidate graph class is a gaussian distribution parameter of the gaussian mixture model corresponding to the candidate graph class.
In some possible embodiments, the processor 1001 is configured to:
for each node, determining the highest similarity from the similarity between the node characteristics of the node and the graph category characteristics corresponding to each candidate graph category;
and determining the candidate graph category corresponding to the highest similarity as the category of the node.
In some possible embodiments, the processor 1001 is configured to:
acquiring the graph type of the graph to be processed;
and according to the category of each node, performing corresponding processing on those nodes, among the nodes, whose category is different from the graph category of the graph to be processed.
In some possible embodiments, when the electronic device 1000 is used to train a node feature extraction network, the processor 1001 is configured to:
acquiring an initial graph classification network, wherein the initial graph classification network comprises a node feature extraction module, a graph feature extraction module and a graph classification module which are sequentially cascaded;
acquiring training data, wherein each sample graph in the training data is marked with a sample label, and the sample label represents the real graph category of the sample graph;
inputting each sample graph to the node feature extraction module to obtain the node features of each node of each sample graph;
inputting the node characteristics of each node into the graph characteristic extraction module to obtain the graph characteristics of each sample graph;
inputting the graph features of the sample graphs into the graph classification module to obtain the prediction graph types of the sample graphs;
determining a total training loss value according to the prediction graph type of each sample graph and the sample label of each sample graph;
and performing iterative training on the initial graph classification network according to the total training loss value and the training data until the total training loss value meets the training end condition, and determining a node feature extraction module in the initial graph classification network at the training end as a node feature extraction network.
In some possible embodiments, the processor 1001 is configured to:
determining a first training loss value according to the prediction graph type of each sample graph and the sample label of each sample graph;
determining the characteristic distance between each image characteristic and the image category characteristic corresponding to each candidate image category;
determining a second training loss value according to the feature distance corresponding to each graph feature and each candidate graph category;
and determining a total training loss value according to the first training loss value and the second training loss value.
In some possible embodiments, the processor 1001 is configured to:
inputting the node characteristics of each node into the graph characteristic extraction module, and executing the following operations through the graph characteristic extraction module:
for each sample graph, determining attention distribution corresponding to each node in the sample graph according to the node characteristics of each node in the sample graph;
and determining the graph characteristics of the sample graph according to the node characteristics of each node in the sample graph and the attention distribution.
In some possible embodiments, the processor 1001 is configured to:
for each graph feature, determining a first feature distance between the graph feature and a graph class feature corresponding to each candidate graph class;
determining target candidate graph types consistent with real graph types corresponding to the graph features from the candidate graph types;
and determining a second feature distance between the graph feature and the graph class feature corresponding to the target candidate graph category, and determining the feature distance between the graph feature and the graph class feature corresponding to each candidate graph category according to the first feature distance and the second feature distance.
In some possible embodiments, the processor 1001 is configured to:
acquiring a first weight corresponding to the first training loss value and a second weight corresponding to the second training loss value;
determining a total training loss value based on the first training loss value, the second training loss value, the first weight, and the second weight.
It should be understood that in some possible embodiments, the processor 1001 may be a Central Processing Unit (CPU), or another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor. The memory may include both read-only memory and random access memory, and provides instructions and data to the processor. A portion of the memory may also include non-volatile random access memory. For example, the memory may also store device type information.
In a specific implementation, the electronic device 1000 may execute, through each built-in functional module thereof, the implementation manners provided in each step in fig. 2, fig. 4, and/or fig. 7, which may be referred to specifically for the implementation manners provided in each step, and are not described herein again.
In the embodiment of the application, because the node feature extraction network determines the category of each node according to the graph class features of the candidate graph categories, the graph classification network is trained and the node feature extraction module in the graph classification network at the end of training is used as the node feature extraction network for determining the category of each node. On one hand, the accuracy of the node feature extraction module can be measured by the accuracy of graph category prediction in the process of training the graph classification network; on the other hand, the graph class features corresponding to the candidate graph categories can be continuously adjusted according to the total training loss value in the training process, so that the finally obtained node feature extraction network can determine the category of each node according to accurate and suitable graph class features, and the applicability is high.
An embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and is executed by a processor to implement the method provided in each step in fig. 2, fig. 4, and/or fig. 7, which may specifically refer to implementation manners provided in each step, and are not described herein again.
The computer readable storage medium may be an internal storage unit of the task processing device provided in any of the foregoing embodiments, for example, a hard disk or a memory of an electronic device. The computer readable storage medium may also be an external storage device of the electronic device, such as a plug-in hard disk, a Smart Memory Card (SMC), a Secure Digital (SD) card, a flash card (flash card), and the like, which are provided on the electronic device. The computer readable storage medium may further include a magnetic disk, an optical disk, a read-only memory (ROM), a Random Access Memory (RAM), and the like. Further, the computer readable storage medium may also include both an internal storage unit and an external storage device of the electronic device. The computer-readable storage medium is used for storing the computer program and other programs and data required by the electronic device. The computer readable storage medium may also be used to temporarily store data that has been output or is to be output.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the electronic device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the methods provided by the steps of fig. 2, fig. 4, and/or fig. 7.
The terms "first", "second", and the like in the claims and in the description and drawings of the present application are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or electronic device that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or electronic device. Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments. The term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Those of ordinary skill in the art will appreciate that the elements and algorithm steps of the examples described in connection with the embodiments disclosed herein may be embodied in electronic hardware, computer software, or combinations of both, and that the components and steps of the examples have been described in a functional general in the foregoing description for the purpose of illustrating clearly the interchangeability of hardware and software. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The above disclosure is only for the purpose of illustrating the preferred embodiments of the present application and is not intended to limit the scope of the present application, which is defined by the appended claims.
Claims (15)
1. A graph processing method, comprising:
acquiring a graph to be processed;
acquiring node characteristics of each node in the graph to be processed;
for each node, determining the class of the node from each candidate graph class according to the similarity between the node feature of the node and the graph class feature corresponding to each candidate graph class, wherein each candidate graph class comprises the graph class of the graph to be processed;
and performing corresponding processing on at least one node in the graph to be processed based on the category of each node.
2. The method according to claim 1, wherein the obtaining the node characteristics of each node in the graph to be processed comprises:
obtaining node characteristics of each node in the graph to be processed through a node characteristic extraction network, wherein the node characteristic extraction network comprises an initial characteristic extraction sub-network and node characteristic extraction sub-networks corresponding to candidate graph categories connected with the initial characteristic extraction sub-network;
the obtaining of the node characteristics of each node in the graph to be processed through the node characteristic extraction network includes:
for each node, extracting the initial feature of the node in the graph to be processed through the initial feature extraction sub-network;
extracting a sub-network through the node characteristics corresponding to each candidate graph category based on the initial characteristics of the node to obtain the characteristics of the node corresponding to each candidate graph category;
and fusing the characteristics of the node corresponding to each candidate graph category to obtain the node characteristics of the node.
3. The method of claim 2, wherein the extracting the initial feature of the node in the graph to be processed through the initial feature extraction sub-network comprises:
acquiring a first characteristic of the node, and determining an attention autocorrelation coefficient of the node according to the first characteristic of the node;
according to the first characteristic of the node and the first characteristics of each adjacent node of the node, determining the attention cross-correlation coefficient of the node and each adjacent node;
and determining the initial characteristics of the node according to the attention autocorrelation coefficient and the attention cross-correlation coefficient.
4. The method of claim 3, wherein the graph category feature corresponding to a candidate graph category is a network parameter of the node characteristic extraction sub-network corresponding to that candidate graph category.
5. The method of claim 4, wherein the node characteristic extraction sub-network is a network based on a Gaussian mixture model, and the graph category feature corresponding to each candidate graph category is the Gaussian distribution parameter of the Gaussian mixture model corresponding to that candidate graph category.
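Under claim 5's reading, the per-category Gaussian parameters play the role of the graph category features, and a node can be scored against each category by its likelihood under that category's Gaussian. A minimal sketch with isotropic Gaussians (a simplification; the mixture components could be full-covariance, and all names here are ours):

```python
import numpy as np

def gaussian_pdf(x, mean, var):
    """Isotropic multivariate Gaussian density."""
    d = x.shape[0]
    diff = x - mean
    return np.exp(-0.5 * diff @ diff / var) / ((2 * np.pi * var) ** (d / 2))

# per-category Gaussian parameters act as the graph category features
means = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]
variances = [1.0, 1.0]

x = np.array([0.1, -0.2])                     # a node's feature
likelihoods = np.array([gaussian_pdf(x, m, v) for m, v in zip(means, variances)])
posterior = likelihoods / likelihoods.sum()   # responsibility of each category
```

The node here sits near the first category's mean, so the first responsibility dominates.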
6. The method according to any one of claims 1 to 5, wherein for each node, determining the class of the node from the candidate graph classes according to the similarity between the node feature of the node and the graph class feature corresponding to each candidate graph class comprises:
for each node, determining the highest similarity from the similarity between the node characteristics of the node and the graph category characteristics corresponding to each candidate graph category;
and determining the candidate graph category corresponding to the highest similarity as the category of the node.
7. The method according to claim 1, wherein said performing the corresponding processing on at least one node in the graph to be processed based on the category of each node comprises:
acquiring the graph category of the graph to be processed;
and according to the category of each node, performing the corresponding processing on those nodes whose category differs from the graph category of the graph to be processed.
8. A node feature extraction network training method is characterized by comprising the following steps:
acquiring an initial graph classification network, wherein the initial graph classification network comprises a node feature extraction module, a graph feature extraction module and a graph classification module which are sequentially cascaded;
acquiring training data, wherein each sample graph in the training data is labeled with a sample label, and the sample label represents the true graph category of the sample graph;
inputting each sample graph into the node feature extraction module to obtain the node feature of each node of each sample graph;
inputting the node characteristics of each node into the graph characteristic extraction module to obtain the graph characteristics of each sample graph;
inputting the graph characteristics of each sample graph into the graph classification module to obtain the prediction graph category of the sample graph;
determining a total training loss value according to the prediction graph type of each sample graph and the sample label of each sample graph;
and performing iterative training on the initial graph classification network according to the total training loss value and the training data until the total training loss value meets the training end condition, and determining a node feature extraction module in the initial graph classification network at the training end as a node feature extraction network.
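The forward pass of claim 8's cascaded network can be sketched as below, with toy stand-ins for the three modules: a one-layer graph convolution as the node feature extraction module, a mean readout as the graph feature extraction module, and a softmax classifier as the graph classification module (all three are our simplifications; the patent's modules would be learned):

```python
import numpy as np

rng = np.random.default_rng(0)

# toy stand-ins for the three cascaded modules
def node_feature_module(adj, x, W):   return np.tanh(adj @ x @ W)
def graph_feature_module(node_feats): return node_feats.mean(axis=0)   # mean readout
def graph_classify_module(g, V):
    logits = g @ V
    e = np.exp(logits - logits.max())
    return e / e.sum()                                                 # class probabilities

# one sample graph: adjacency with self-loops, node inputs, true category 1
adj = np.eye(3) + np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
x = rng.normal(size=(3, 4))
W = rng.normal(size=(4, 4)) * 0.1
V = rng.normal(size=(4, 2)) * 0.1

probs = graph_classify_module(graph_feature_module(node_feature_module(adj, x, W)), V)
loss = -np.log(probs[1])               # cross-entropy against the sample label
```

Iterating this forward pass plus a gradient step over all sample graphs until the total loss meets the stopping condition, then keeping only `node_feature_module`, mirrors the claim's training procedure.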
9. The method of claim 8, wherein determining a total training loss value based on the prediction graph class for each of the sample graphs and the sample label for each of the sample graphs comprises:
determining a first training loss value according to the prediction graph type of each sample graph and the sample label of each sample graph;
determining a feature distance between each graph feature and a graph category feature corresponding to each candidate graph category;
determining a second training loss value according to the feature distance corresponding to each graph feature and each candidate graph category;
and determining a total training loss value according to the first training loss value and the second training loss value.
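Claims 9 and 12 together describe a total loss that weights a classification term against a feature-distance term. A sketch, assuming cross-entropy for the first loss value and the Euclidean distance to the true category's graph category feature for the second (the weights `alpha`/`beta` correspond to claim 12's first and second weights; the values here are arbitrary):

```python
import numpy as np

def total_loss(probs, true_class, graph_feat, class_feats, alpha=1.0, beta=0.1):
    """Weighted sum of a classification loss and a feature-distance loss
    pulling the graph feature toward its true category's feature."""
    l_cls = -np.log(probs[true_class])                       # first training loss value
    dists = np.linalg.norm(class_feats - graph_feat, axis=1)
    l_dist = dists[true_class]                               # second training loss value
    return alpha * l_cls + beta * l_dist

probs = np.array([0.2, 0.8])
gfeat = np.array([1.0, 1.0])
cfeats = np.array([[0.0, 0.0], [1.0, 2.0]])
L = total_loss(probs, 1, gfeat, cfeats)
```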
10. The method of claim 8, wherein inputting the node feature of each node into the graph feature extraction module to obtain the graph feature of each sample graph comprises:
inputting the node characteristics of each node into the graph characteristic extraction module, and executing the following operations through the graph characteristic extraction module:
for each sample graph, determining attention distribution corresponding to each node in the sample graph according to the node characteristics of each node in the sample graph;
and determining the graph characteristics of the sample graph according to the node characteristics of each node in the sample graph and the attention distribution.
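Claim 10's readout can be sketched as an attention distribution over the nodes of one sample graph followed by a weighted sum. The query vector `q` is our stand-in for whatever parameterizes the attention distribution:

```python
import numpy as np

def attention_readout(node_feats, q):
    """Attention distribution over a sample graph's nodes, then a
    weighted sum of node features as the graph feature (a sketch,
    assuming dot-product attention against a learned query `q`)."""
    scores = node_feats @ q
    w = np.exp(scores - scores.max())
    w /= w.sum()                       # attention distribution over nodes
    return w @ node_feats              # graph feature of the sample graph

feats = np.array([[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
g = attention_readout(feats, np.array([1.0, 1.0]))
```

The third node scores highest here, so the graph feature is dominated by it; a mean readout is the special case of a uniform attention distribution.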
11. The method of claim 9, wherein determining a feature distance between each graph feature and a graph class feature corresponding to each candidate graph class comprises:
for each graph feature, determining a first feature distance between the graph feature and a graph category feature corresponding to each candidate graph category;
determining a target candidate graph category which is consistent with the real graph category corresponding to the graph feature from all the candidate graph categories;
and determining a second feature distance between the graph feature and the graph category feature corresponding to the target candidate graph category, and determining the feature distance between the graph feature and the graph category feature corresponding to each candidate graph category according to the first feature distance and the second feature distance.
12. The method of any of claims 9 to 11, wherein determining a total training loss value based on the first training loss value and the second training loss value comprises:
acquiring a first weight corresponding to the first training loss value and a second weight corresponding to the second training loss value;
determining a total training loss value according to the first training loss value, the second training loss value, the first weight, and the second weight.
13. A graph processing apparatus, characterized in that the apparatus comprises:
the graph acquisition module is used for acquiring a graph to be processed;
the characteristic acquisition module is used for acquiring the node characteristics of each node in the graph to be processed;
a classification module, configured to, for each node, determine a class of the node from each candidate graph class according to a similarity between a node feature of the node and a graph class feature corresponding to each candidate graph class, where each candidate graph class includes a graph class of the to-be-processed graph;
and the graph processing module is used for correspondingly processing at least one node in the graph to be processed based on the category of each node.
14. An electronic device comprising a processor and a memory, the processor and the memory being interconnected;
the memory is used for storing a computer program;
the processor is configured to perform the method of any of claims 1 to 7 or the method of any of claims 8 to 12 when the computer program is invoked.
15. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which is executed by a processor to implement the method of any of claims 1 to 7 or the method of any of claims 8 to 12.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202011134804.2A CN113408564A (en) | 2020-10-21 | 2020-10-21 | Graph processing method, network training method, device, equipment and storage medium |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN113408564A true CN113408564A (en) | 2021-09-17 |
Family
ID=77677377
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202011134804.2A Pending CN113408564A (en) | 2020-10-21 | 2020-10-21 | Graph processing method, network training method, device, equipment and storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN113408564A (en) |
Citations (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5526281A (en) * | 1993-05-21 | 1996-06-11 | Arris Pharmaceutical Corporation | Machine-learning approach to modeling biological activity for molecular design and to modeling other characteristics |
| US20090262664A1 (en) * | 2008-04-18 | 2009-10-22 | Bonnie Berger Leighton | Method for identifying network similarity by matching neighborhood topology |
| KR20160015005A (en) * | 2014-07-30 | 2016-02-12 | 에스케이텔레콤 주식회사 | Method and apparatus for discriminative training acoustic model based on class, and speech recognition apparatus using the same |
| CN109447169A (en) * | 2018-11-02 | 2019-03-08 | 北京旷视科技有限公司 | The training method of image processing method and its model, device and electronic system |
| CN110276406A (en) * | 2019-06-26 | 2019-09-24 | 腾讯科技(深圳)有限公司 | Expression classification method, apparatus, computer equipment and storage medium |
| CN110598065A (en) * | 2019-08-28 | 2019-12-20 | 腾讯云计算(北京)有限责任公司 | A data mining method, device and computer-readable storage medium |
| US20200053118A1 (en) * | 2018-08-10 | 2020-02-13 | Visa International Service Association | Replay spoofing detection for automatic speaker verification system |
| CN111291827A (en) * | 2020-02-28 | 2020-06-16 | 北京市商汤科技开发有限公司 | Image clustering method, device, equipment and storage medium |
| CN111309923A (en) * | 2020-01-22 | 2020-06-19 | 腾讯科技(深圳)有限公司 | Object vector determination method, model training method, device, equipment and storage medium |
| CN111582409A (en) * | 2020-06-29 | 2020-08-25 | 腾讯科技(深圳)有限公司 | Training method of image label classification network, image label classification method and device |
| CN111737474A (en) * | 2020-07-17 | 2020-10-02 | 支付宝(杭州)信息技术有限公司 | Method and device for training business model and determining text classification category |
Non-Patent Citations (2)
| Title |
|---|
| JUNCHI YU ET AL: "Graph Information Bottleneck for Subgraph Recognition", 《ARXIV》, 13 October 2020 (2020-10-13), pages 1 - 13 * |
| TIAN BIAN ET AL: "Inverse Graph Identification: Can We Identify Node Labels Given Graph Labels?", 《ARXIV》, 12 July 2020 (2020-07-12), pages 1 - 15 * |
Cited By (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114332516A (en) * | 2021-11-24 | 2022-04-12 | 腾讯科技(深圳)有限公司 | Data processing method, data processing device, model training method, model training device, data processing equipment, storage medium and product |
| CN114511024A (en) * | 2022-01-28 | 2022-05-17 | 腾讯科技(深圳)有限公司 | Node classification method, device, equipment, medium and computer program product |
| CN114511024B (en) * | 2022-01-28 | 2024-09-10 | 腾讯科技(深圳)有限公司 | Node classification method, device, equipment, medium and computer program product |
| CN115114484A (en) * | 2022-05-13 | 2022-09-27 | 腾讯科技(深圳)有限公司 | Abnormal event detection method and device, computer equipment and storage medium |
| CN115115031A (en) * | 2022-06-28 | 2022-09-27 | 支付宝(杭州)信息技术有限公司 | Data processing method and device |
| CN116151354A (en) * | 2023-04-10 | 2023-05-23 | 之江实验室 | Network node learning method, device, electronic device and storage medium |
| CN117131252A (en) * | 2023-09-12 | 2023-11-28 | 广东电网有限责任公司 | An electrical node processing method, device, equipment and storage medium |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN113657465B (en) | Pre-training model generation method, device, electronic device and storage medium | |
| CN113408564A (en) | Graph processing method, network training method, device, equipment and storage medium | |
| CN110163300B (en) | An image classification method, device, electronic device and storage medium | |
| CN110555481B (en) | Portrait style recognition method, device and computer readable storage medium | |
| CN108345587B (en) | Method and system for detecting authenticity of comments | |
| CN112434721A (en) | Image classification method, system, storage medium and terminal based on small sample learning | |
| CN110348437B (en) | A Target Detection Method Based on Weakly Supervised Learning and Occlusion Awareness | |
| CN114332680A (en) | Image processing method, video searching method, image processing device, video searching device, computer equipment and storage medium | |
| KR20220047228A (en) | Method and apparatus for generating image classification model, electronic device, storage medium, computer program, roadside device and cloud control platform | |
| KR102599020B1 (en) | Method, program, and apparatus for monitoring behaviors based on artificial intelligence | |
| US20240185090A1 (en) | Assessment of artificial intelligence errors using machine learning | |
| CN112420125A (en) | Molecular property prediction method, device, intelligent device and terminal | |
| CN114595352A (en) | Image recognition method, device, electronic device and readable storage medium | |
| CN112270671B (en) | Image detection method, device, electronic device and storage medium | |
| CN114239805A (en) | Cross-modal retrieval neural network, training method and device, electronic equipment and medium | |
| CN111159279B (en) | Model visualization method, device and storage medium | |
| Zhao et al. | An efficient class-dependent learning label approach using feature selection to improve multi-label classification algorithms | |
| Wen et al. | AIoU: Adaptive bounding box regression for accurate oriented object detection | |
| US12327397B2 (en) | Electronic device and method with machine learning training | |
| CN117011577A (en) | Image classification method, device, computer equipment and storage medium | |
| CN114998908A (en) | Sample image labeling method, sample image labeling device, sample image model training equipment and storage medium | |
| CN118761474B (en) | Data processing method, electronic device and computer readable storage medium | |
| CN116958777B (en) | Image recognition method, device, storage medium, and electronic device | |
| HK40051399A (en) | Graph processing method, network training method, apparatus, device, and storage medium | |
| CN111476144A (en) | Pedestrian attribute identification model determination method and device and computer readable storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | REG | Reference to a national code | Ref country code: HK; Ref legal event code: DE; Ref document number: 40051399 |
| | SE01 | Entry into force of request for substantive examination | |