CN117235584B - Graph data classification method, device, electronic device and storage medium - Google Patents

Graph data classification method, device, electronic device and storage medium

Info

Publication number
CN117235584B
CN117235584B (application CN202311522727.1A)
Authority
CN
China
Prior art keywords: trained, neural network, graph, network model, data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311522727.1A
Other languages
Chinese (zh)
Other versions
CN117235584A (en)
Inventor
黄勇
韩乔
杨耀
翟毅腾
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Lab
Original Assignee
Zhejiang Lab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Lab filed Critical Zhejiang Lab
Priority to CN202311522727.1A priority Critical patent/CN117235584B/en
Publication of CN117235584A publication Critical patent/CN117235584A/en
Application granted granted Critical
Publication of CN117235584B publication Critical patent/CN117235584B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application relates to a graph data classification method, a device, an electronic device, and a storage medium. The graph data classification method includes the following steps: acquiring a graph data training set; performing dimension reduction processing on the adjacency matrix in the target graph data to be trained to obtain a dimension-reduction target adjacency matrix; training the graph neural network model to be trained based on the node attribute features of the target graph data to be trained and the dimension-reduction target adjacency matrix to obtain a trained graph neural network model; and classifying the graph data to be classified based on the trained graph neural network model to obtain a classification result of the graph data to be classified. The method and the device solve the problem that existing graph neural network models classify social network graph data with low accuracy, improve the robustness of the graph neural network model, and thereby improve the stability and robustness of the classification result of the graph neural network model on social network graph data.

Description

Graph data classification method, device, electronic device and storage medium
Technical Field
The present disclosure relates to the field of graph data classification, and in particular, to a graph data classification method, device, electronic device, and storage medium.
Background
With the advent of the big data age, people are surrounded by massive amounts of data, and non-Euclidean data that cannot be represented by continuous vectors is generally referred to as graph data. Graph data is ubiquitous in the real world; for example, in social network graph data, users are regarded as nodes, relationships between users are regarded as edges, and each user has individual attribute information such as gender, age, and hobbies.
The graph neural network model is widely applied to the classification of social network graph data. However, graph neural network models have low stability: an attacker can mount white-box or black-box attacks that add or delete edges or nodes of the graph data, so the robustness of the graph neural network model is low and its classification result on the social network graph data changes. As a result, the stability and robustness of the classification result of the graph neural network model on social network graph data are low.
For the problem in the related attack and defense technology that the stability and robustness of the classification result of the graph neural network model on social network graph data are low, no effective solution has yet been proposed.
Disclosure of Invention
The embodiments provide a graph data classification method, device, electronic device and storage medium, so as to solve the problem in the related attack and defense technology that the stability and robustness of the classification result of a graph neural network model on social network graph data are low.
In a first aspect, in this embodiment, there is provided a graph data classifying method, including:
obtaining a graph data training set, wherein each graph data to be trained in the graph data training set comprises node attribute characteristics and an adjacency matrix, the graph data to be trained is social network graph data, a single node in the social network graph data represents a user, the node attribute characteristics comprise at least one of age, gender, hobbies and occupation of the user, and the adjacency matrix represents an association relation between the users;
performing dimension reduction processing on the adjacency matrix in target to-be-trained graph data to obtain a dimension-reduction target adjacency matrix, wherein the target to-be-trained graph data is any to-be-trained graph data in the graph data training set;
training the graph neural network model to be trained based on the node attribute characteristics of the target graph data to be trained and the dimension-reduction target adjacency matrix to obtain a trained graph neural network model;
classifying the graph data to be classified based on the trained graph neural network model to obtain a classification result of the graph data to be classified, wherein the graph data to be classified is social network graph data to be classified, a single node in the social network graph data to be classified represents one user to be classified, the social network graph data to be classified comprises at least one of age, gender, hobbies and occupation of the user to be classified and association relations among a plurality of users to be classified, and the classification result of the graph data to be classified comprises the activity level of each user to be classified in the social network graph data to be classified on a social network platform.
In some embodiments, the performing a dimension reduction process on the adjacency matrix in the target to-be-trained graph data to obtain a dimension-reduced target adjacency matrix includes:
performing dimension reduction processing on the adjacency matrix in the target to-be-trained graph data based on a first dimension reduction method to obtain a first dimension-reduction target adjacency matrix;
and performing dimension reduction processing on the adjacency matrix in the target to-be-trained graph data based on a second dimension reduction method to obtain a second dimension-reduction target adjacency matrix, wherein the dimension-reduction target adjacency matrix comprises the first dimension-reduction target adjacency matrix and the second dimension-reduction target adjacency matrix.
In some embodiments, the to-be-trained graph neural network model includes a first graph neural network sub-module and a second graph neural network sub-module, the training of the to-be-trained graph neural network model based on the node attribute feature of the target to-be-trained graph data and the dimension-reduction target adjacency matrix to obtain a trained graph neural network model includes:
training the first graph neural network sub-module based on node attribute characteristics in the target graph data to be trained and the first dimension-reduction target adjacency matrix to obtain a trained first graph neural network sub-module;
training the second graph neural network sub-module based on the node attribute characteristics in the target graph data to be trained and the second dimension-reduction target adjacency matrix to obtain a trained second graph neural network sub-module;
and obtaining the trained graph neural network model based on the trained first graph neural network sub-module and the trained second graph neural network sub-module.
In some embodiments, the obtaining the trained graph neural network model based on the trained first graph neural network sub-module and the trained second graph neural network sub-module includes:
determining a target loss function of the graph neural network model to be trained based on a divergence between the trained first graph neural network sub-module and the trained second graph neural network sub-module;
and adjusting the structural parameters of the graph neural network model to be trained based on the target loss function to obtain a trained graph neural network model.
In some of these embodiments, determining the target loss function of the graph neural network model to be trained based on a divergence between the trained first graph neural network sub-module and the trained second graph neural network sub-module includes:
determining a divergence between an output layer of the trained first graph neural network sub-module and an output layer of the trained second graph neural network sub-module;
the target loss function is determined based on the divergence, the first loss function of the trained first graph neural network sub-module, and the second loss function of the trained second graph neural network sub-module.
In some embodiments, the adjusting the structural parameter of the graph neural network model to be trained based on the target loss function to obtain a trained graph neural network model includes:
adjusting the structural parameters of the graph neural network model to be trained based on the target loss function to obtain an adjusted graph neural network model;
obtaining a graph data test set, wherein the graph data test set comprises labeling results of each graph data to be tested;
the graph data test set and the adjusted graph neural network model are attacked to obtain an attacked graph neural network model and an attacked test set;
inputting the attacked test set into the attacked graph neural network model to obtain the attacked test result of the graph data test set;
determining the accuracy of the attacked graph neural network model based on the attacked test result and the labeling result;
and circularly executing the step of adjusting the structural parameters of the graph neural network model to be trained based on the target loss function until the accuracy is greater than or equal to a preset accuracy, and determining the adjusted graph neural network model as the trained graph neural network model.
In some embodiments, before determining the adjusted graph neural network model as the trained graph neural network model when the accuracy is greater than or equal to a preset accuracy, the method further includes:
training an initial graph neural network model based on the node attribute characteristics and the adjacency matrix of each graph data to be trained in the graph data training set to obtain a trained reference graph neural network model;
inputting the graph data test set into the trained reference graph neural network model to obtain a reference test result of the graph data test set;
determining the reference accuracy of the trained reference graph neural network model based on the reference test result and the labeling result;
and determining the preset accuracy based on the reference accuracy.
In a second aspect, in this embodiment, there is provided a graph data classifying apparatus including:
the system comprises an acquisition module, a storage module and a storage module, wherein the acquisition module is used for acquiring a graph data training set, each graph data to be trained in the graph data training set comprises node attribute characteristics and an adjacency matrix, the graph data to be trained is social network graph data, a single node in the social network graph data represents a user, the node attribute characteristics comprise at least one of age, gender, hobbies and occupation of the user, and the adjacency matrix represents an association relation among the users;
The dimension reduction module is used for carrying out dimension reduction processing on the adjacency matrix in the target graph data to be trained to obtain a dimension-reduction target adjacency matrix, wherein the target graph data to be trained is any graph data to be trained in the graph data training set;
the model training module is used for training the to-be-trained graph neural network model based on the node attribute characteristics of the target to-be-trained graph data and the dimension reduction target adjacency matrix to obtain a trained graph neural network model;
the graph data classification module is used for classifying graph data to be classified based on the trained graph neural network model to obtain a classification result of the graph data to be classified, wherein the graph data to be classified is social network graph data to be classified, a single node in the social network graph data to be classified represents one user to be classified, the social network graph data to be classified comprises at least one of age, gender, hobbies and occupation of the user to be classified and association relations among a plurality of users to be classified, and the classification result of the graph data to be classified comprises the activity level of each user to be classified in the social network graph data to be classified on the social network platform.
In a third aspect, in this embodiment, there is provided an electronic device including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the graph data classification method according to the first aspect.
In a fourth aspect, in this embodiment, there is provided a storage medium having stored thereon a computer program which, when executed by a processor, implements the graph data classification method of the first aspect described above.
Compared with the related art, in the graph data classification method provided in this embodiment, dimension reduction processing is performed on the adjacency matrix in the target graph data to be trained, so that the dimension-reduced adjacency matrix retains the node and edge information of the original graph data more completely. The dimension-reduced adjacency matrix and the corresponding node attribute features are then used to train the graph neural network model, so that during training the model can effectively learn the node and edge information of the original graph data. The trained graph neural network model can therefore effectively defend against attacks; that is, even if the trained graph neural network model is attacked, it can still classify the graph data to be classified accurately, which improves the robustness of the graph neural network model. Further, when a graph neural network model with higher robustness is used to classify social network graph data, an accurate classification result can be obtained, so the stability and robustness of the classification result of the graph neural network model on social network graph data are improved.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below; other features, objects, and advantages of the application will become apparent from the description and the drawings.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
fig. 1 is an application environment schematic diagram of a graph data classification method according to an embodiment of the present application;
FIG. 2 is a flowchart of a method for classifying graph data according to an embodiment of the present application;
FIG. 3 is a flow chart of an embodiment of a graph data classification method provided by embodiments of the present application;
FIG. 4 is a schematic diagram of training principle of a neural network model according to an embodiment of the present application;
FIG. 5 is a block diagram of a graph data classification device according to an embodiment of the present application;
fig. 6 is a schematic diagram of an internal structure of a computer device according to an embodiment of the present application.
Detailed Description
For a clearer understanding of the objects, technical solutions and advantages of the present application, the present application is described and illustrated below with reference to the accompanying drawings and examples.
Unless defined otherwise, technical or scientific terms used herein shall have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terms "a," "an," "the," "these," and the like in this application are not intended to limit number, and may be singular or plural. The terms "comprising," "including," "having," and any variations thereof, as used in the present application, are intended to cover a non-exclusive inclusion; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (units) is not limited to those steps or modules (units), but may include other steps or modules (units) not listed or inherent to such process, method, article, or apparatus. The terms "connected," "coupled," and the like in this application are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. Reference to "a plurality" in this application means two or more. "And/or" describes an association relationship of associated objects, meaning that there may be three relationships; e.g., "A and/or B" may mean: A exists alone, A and B exist together, or B exists alone. Typically, the character "/" indicates that the associated objects are in an "or" relationship. The terms "first," "second," "third," and the like, as referred to in this application, merely distinguish similar objects and do not represent a particular ordering of objects.
With the advent of the big data age, people are surrounded by massive amounts of data, and non-Euclidean data that cannot be represented by continuous vectors is generally referred to as graph data. Graph data is ubiquitous in the real world; for example, in social network graph data, users are regarded as nodes, relationships between users are regarded as edges, and each user has individual attribute information such as gender, age, and hobbies.
The graph neural network model is widely used for the classification of social network graph data. However, graph neural network models have low stability: an attacker can mount white-box or black-box attacks that add or delete edges or nodes of the graph data, so the robustness of the graph neural network model is low and its classification result on the social network graph data changes, which makes the accuracy of the classification result of the graph neural network model on social network graph data low.
Therefore, how to improve the accuracy of the classification result of the graph neural network model on the social network graph data is a problem to be solved.
The graph data classification method provided by the embodiments of the present application can be applied to the application environment shown in fig. 1; fig. 1 is a schematic diagram of that application environment. The terminal 102 communicates with the server 104 via a network. The data storage system may store data that the server 104 needs to process; it may be integrated on the server 104 or located on a cloud or other network server. The terminal 102 may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, internet of things devices, and portable wearable devices, where the internet of things devices may be smart speakers, smart televisions, smart air conditioners, smart vehicle devices, and the like. The server 104 may be implemented as a stand-alone server or as a server cluster of multiple servers.
In this embodiment, a graph data classification method is provided, and fig. 2 is a flowchart of the graph data classification method provided in the embodiment of the present application. The execution body of the method may be an electronic device; optionally, the electronic device may be a server or a terminal device, but the application is not limited thereto. Specifically, as shown in fig. 2, the process includes the following steps:
step S201, a graph data training set is acquired.
Each piece of graph data to be trained in the graph data training set comprises node attribute characteristics and an adjacency matrix.
Illustratively, graph data in the field of social networks is collected, and the collected graph data is used as the graph data training set. Specifically, the graph data in the graph data training set includes homogeneous graph data and heterogeneous graph data.
Each piece of graph data to be trained in the graph data training set includes node attribute features and an adjacency matrix. The graph data to be trained is social network graph data, a single node in the social network graph data represents a user, the node attribute features include at least one of the user's age, gender, hobbies and occupation, and the adjacency matrix represents the association relations among the users.
Specifically, a single node in social network graph data may represent a user, the corresponding node attribute features may include attribute information such as the user's age, gender, hobbies, and occupation, and the corresponding adjacency matrix may represent the association relations between the user and other users. For example, if user A and user B in the same social network are family members, then in the corresponding graph data A and B are connected by an edge to represent that an association relation exists between them, and this association relation is recorded in the adjacency matrix.
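As a concrete illustration only (the patent gives no numeric example), the following sketch builds the adjacency matrix and node attribute features of a three-user network; all values and feature columns are assumed.

    import numpy as np

    # Hypothetical 3-user network: users 0 and 1 are family, users 1 and 2
    # are colleagues. Feature columns [age, gender, hobby_id, occupation_id]
    # are assumed for illustration; the patent does not fix an encoding.
    A = np.array([[0, 1, 0],
                  [1, 0, 1],
                  [0, 1, 0]], dtype=np.float32)      # symmetric adjacency matrix
    X = np.array([[34, 0, 2, 1],
                  [31, 1, 5, 1],
                  [27, 1, 5, 3]], dtype=np.float32)  # node attribute features (F in the text)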
Step S202, performing dimension reduction processing on the adjacency matrix in the target graph data to be trained to obtain a dimension-reduction target adjacency matrix.
The target to-be-trained graph data is any to-be-trained graph data in the graph data training set.
Further, any graph data to be trained in the graph data training set is determined as the target graph data to be trained, and dimension reduction processing is carried out on the adjacency matrix in the target graph data to be trained to obtain the dimension-reduction target adjacency matrix. Specifically, the high-rank adjacency matrix of the target graph data to be trained is reduced to a low-rank approximation, and the resulting low-rank adjacency matrix is the dimension-reduction target adjacency matrix.
And step S203, training the to-be-trained graph neural network model based on the node attribute characteristics of the target to-be-trained graph data and the dimension-reduction target adjacency matrix to obtain a trained graph neural network model.
Further, the node attribute characteristics corresponding to the target to-be-trained graph data and the dimension-reduced target adjacent matrix are used as inputs, and the to-be-trained graph neural network model is trained, so that the trained graph neural network model is obtained.
The graph neural network model to be trained may be one or a combination of various graph representation learning networks such as an untrained graph convolutional network (Graph Convolutional Network, GCN), GraphSAGE (Graph SAmple and aggreGatE), or graph attention network (Graph Attention Network, GAT). In the embodiments of the present application, the graph neural network model to be trained is illustrated by taking GCN as an example, which is not a limitation.
And step S204, classifying the graph data to be classified based on the trained graph neural network model to obtain a classification result of the graph data to be classified.
And inputting the graph data to be classified into the trained graph neural network model, so that the trained graph neural network model outputs a classification result of the graph data to be classified.
The graph data to be classified is social network graph data to be classified; a single node in the social network graph data to be classified represents one user to be classified; the social network graph data to be classified comprises at least one of age, gender, hobbies and occupation of the user to be classified, and the association relations among a plurality of users to be classified; and the classification result of the graph data to be classified comprises the activity level of each user to be classified on the social network platform.
Specifically, the social network graph data to be classified is input into the trained graph neural network model to obtain the classification result of every node user in the social network graph data. Specifically, the classification result may be the activity level of each user to be classified on the social network platform, and the activity levels may include three grades: high, medium, and low.
For example, in a certain social network platform, the classification result may be that users whose occupation is teacher have a high activity level, users whose occupation is student have a medium activity level, and users whose occupation is engineer have a low activity level.
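A minimal inference sketch of this step, assuming a trained model with the call signature used in the training sketches later in this description; trained_model, feats, adj, and the index order of the levels are all assumed names, not taken from the patent.

    import torch

    ACTIVITY_LEVELS = ["low", "medium", "high"]          # assumed index order

    with torch.no_grad():
        logits = trained_model(feats, adj)               # one row of logits per user node
        pred = logits.argmax(dim=-1)                     # class index per user
        levels = [ACTIVITY_LEVELS[i] for i in pred.tolist()]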
In the above implementation, dimension reduction processing is performed on the adjacency matrix in the target graph data to be trained, so that the dimension-reduced adjacency matrix retains the node and edge information of the original graph data more completely. The dimension-reduced adjacency matrix and the corresponding node attribute features are used to train the graph neural network model, so that during training the model can effectively learn the node and edge information of the original graph data. The trained graph neural network model can therefore effectively defend against attacks: even if it is attacked, it can still classify the graph data to be classified accurately, which improves the robustness of the graph neural network model. Further, when a graph neural network model with higher robustness is used to classify social network graph data, an accurate classification result can be obtained, so the stability and robustness of the classification result of the graph neural network model on social network graph data are improved.
In some embodiments, performing a dimension reduction process on an adjacency matrix in the target to-be-trained graph data to obtain a dimension-reduced target adjacency matrix may include the following steps:
step 1: and performing dimension reduction processing on the adjacent matrix in the target to-be-trained graph data based on the first dimension reduction method to obtain a first dimension reduction target adjacent matrix.
Step 2: and performing dimension reduction processing on the adjacent matrix in the target to-be-trained graph data based on a second dimension reduction method to obtain a second dimension reduction target adjacent matrix.
The dimension reduction target adjacency matrix comprises a first dimension reduction target adjacency matrix and a second dimension reduction target adjacency matrix.
Illustratively, dimension reduction is performed on the adjacency matrix in the target graph data to be trained with two different dimension reduction methods respectively, so as to obtain two corresponding dimension-reduced adjacency matrices.
Specifically, the first dimension reduction method is adopted to perform dimension reduction processing on the adjacency matrix in the target graph data to be trained to obtain the first dimension-reduction target adjacency matrix. The first dimension reduction method may be one or a combination of matrix factorization algorithms such as eigenvalue decomposition, singular value decomposition (Singular Value Decomposition, SVD), SVD++, non-negative matrix factorization (Nonnegative Matrix Factorization, NMF), and Laplacian decomposition, or another matrix factorization algorithm, which is not limited here.
And the second dimension reduction method is adopted to perform dimension reduction processing on the adjacency matrix in the target graph data to be trained to obtain the second dimension-reduction target adjacency matrix, where the second dimension reduction method may be a graph reconstruction method, for example TransE, TransM, and the like.
In the above implementation, dimension reduction processing is performed on the adjacency matrix in the target graph data to be trained with two different dimension reduction methods to obtain two dimension-reduced adjacency matrices, which enables the subsequent training of the graph neural network model to be trained on both reduced matrices and improves the robustness of the model.
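A minimal sketch of the first (matrix factorization) branch, assuming SVD is the chosen algorithm and an arbitrary target rank, and reusing the illustrative matrix A from the earlier sketch; the patent fixes neither choice.

    import numpy as np

    def svd_reduce(adj: np.ndarray, rank: int) -> np.ndarray:
        # Keep only the top-`rank` singular components of the adjacency
        # matrix and return the low-rank reconstruction, i.e. one possible
        # "dimension-reduction target adjacency matrix".
        u, s, vt = np.linalg.svd(adj)
        return u[:, :rank] @ np.diag(s[:rank]) @ vt[:rank, :]

    A1_reduced = svd_reduce(A, rank=2)  # rank=2 is an assumed value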
In some embodiments, the to-be-trained graph neural network model includes a first graph neural network sub-module and a second graph neural network sub-module, and the to-be-trained graph neural network model is trained based on node attribute features of target to-be-trained graph data and a dimension-reduction target adjacency matrix to obtain a trained graph neural network model, which may include the following steps:
step 1: and training the first graph neural network submodule based on the node attribute characteristics in the target graph data to be trained and the first dimension-reduction target adjacency matrix to obtain a trained first graph neural network submodule.
Step 2: and training the second graph neural network submodule based on the node attribute characteristics in the target graph data to be trained and the second dimension-reduction target adjacency matrix to obtain a trained second graph neural network submodule.
Step 3: obtaining a trained graph neural network model based on the trained first graph neural network sub-module and the trained second graph neural network sub-module.
Illustratively, the graph neural network model to be trained includes a first graph neural network sub-module and a second graph neural network sub-module.
And training the first graph neural network sub-module in the graph neural network model to be trained by taking the node attribute characteristics in the target graph data to be trained and the first dimension-reduction target adjacent matrix as inputs, so as to obtain the trained first graph neural network sub-module. And simultaneously, taking node attribute characteristics in target to-be-trained graph data and a second dimension-reduction target adjacency matrix as inputs, and training a second graph neural network sub-module in the to-be-trained graph neural network model, so as to obtain a trained second graph neural network sub-module.
Further, the trained graph neural network model is obtained from the trained first graph neural network sub-module and the trained second graph neural network sub-module. Specifically, in the graph neural network model to be trained, the trained first graph neural network sub-module and the trained second graph neural network sub-module are trained in mutual fusion, and finally the trained graph neural network model is obtained.
In the above implementation, the node attribute features in the target graph data to be trained and the two dimension-reduced adjacency matrices form two kinds of graph data input during model training, which are input into the two sub-modules of the graph neural network model to be trained respectively. When the graph neural network model to be trained is trained, the two sub-modules are trained in mutual fusion, which improves the defensive performance of the model, so that the trained graph neural network model has higher robustness and can effectively defend against attacks. A sketch of the two sub-modules is shown below.
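A minimal sketch of the two sub-modules, assuming each is a small dense two-layer GCN; the patent only requires that both be graph neural networks such as GCN, and the layer sizes here are assumed.

    import torch
    import torch.nn as nn

    class DenseGCN(nn.Module):
        # Minimal dense GCN standing in for one sub-module. A production GCN
        # would first normalize the adjacency matrix (e.g. D^-1/2 (A+I) D^-1/2);
        # that step is omitted here for brevity.
        def __init__(self, in_dim: int, hid_dim: int, n_classes: int):
            super().__init__()
            self.w1 = nn.Linear(in_dim, hid_dim)
            self.w2 = nn.Linear(hid_dim, n_classes)

        def forward(self, x, adj):
            h = torch.relu(adj @ self.w1(x))   # layer 1: aggregate neighbor features
            return adj @ self.w2(h)            # layer 2: per-node logits

    f1 = DenseGCN(4, 16, 3)   # fed (F, A1'), the first reduced matrix
    f2 = DenseGCN(4, 16, 3)   # fed (F, A1''), the second reduced matrix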
In some embodiments, obtaining the trained graph neural network model based on the trained first graph neural network sub-module and the trained second graph neural network sub-module may include the following steps:
step 1: and determining an objective loss function of the graph neural network model to be trained based on the divergence between the trained first graph neural network sub-module and the trained second graph neural network sub-module.
Step 2: adjusting the structural parameters of the graph neural network model to be trained based on the target loss function to obtain the trained graph neural network model.
Illustratively, denote the graph neural network model to be trained by M, its first graph neural network sub-module by f1, and its second graph neural network sub-module by f2.
Suppose the adjacency matrix in the target graph data to be trained is A1, the corresponding first dimension-reduction target adjacency matrix is A1', and the corresponding second dimension-reduction target adjacency matrix is A1''. If the node attribute feature in the target graph data to be trained is F, then F and A1' are input into f1 to obtain the trained first graph neural network sub-module, and F and A1'' are input into f2 to obtain the trained second graph neural network sub-module.
Further, the divergence between the trained first graph neural network sub-module and the trained second graph neural network sub-module is determined. Specifically, the divergence may be the KL divergence, the JS divergence, or another type of divergence, which is not limited here.
The target loss function of the graph neural network model to be trained is determined according to this divergence, and the graph neural network model to be trained is then trained according to the target loss function; that is, the structural parameters of the graph neural network model to be trained are adjusted according to the target loss function, so as to obtain the trained graph neural network model.
In the above implementation, the target loss function of the graph neural network model to be trained is determined from the divergence between the trained first graph neural network sub-module and the trained second graph neural network sub-module, and the structural parameters of the graph neural network model to be trained are adjusted according to the target loss function. The adjustment of the structural parameters can thus be controlled by the distribution difference between the two graph neural network sub-modules, and during training the model can learn the information in the different graph neural network sub-modules in mutual fusion, which effectively improves the defensive capability of the trained graph neural network model and improves the robustness of the model.
In some of these embodiments, determining the target loss function of the graph neural network model to be trained based on the divergence between the trained first graph neural network sub-module and the trained second graph neural network sub-module may include the steps of:
step 1: a divergence between the output layer of the trained first graph neural network sub-module and the output layer of the trained second graph neural network sub-module is determined.
Step 2: the target loss function is determined based on the divergence, the first loss function of the trained first graph neural network sub-module, and the second loss function of the trained second graph neural network sub-module.
Illustratively, the intermediate output layer h1 of the trained first graph neural network sub-module and the intermediate output layer h2 of the trained second graph neural network sub-module are determined respectively, and the KL divergence between h1 and h2 is then determined.
And a first loss function of the trained first graph neural network sub-module and a second loss function of the trained second graph neural network sub-module are determined, wherein the first loss function may be the cross entropy loss (Cross Entropy Loss, CE) of the trained first graph neural network sub-module, and the second loss function may be the CE loss of the trained second graph neural network sub-module.
If the first loss function of the trained first graph neural network sub-module is L1 and the second loss function of the trained second graph neural network sub-module is L2, the target loss function y may be:
y = L1 + L2 + α · D_KL(h1 ‖ h2)
where α is a coefficient of the graph neural network model to be trained, and D_KL(h1 ‖ h2) is the KL divergence between the intermediate output layer h1 and the intermediate output layer h2.
In the above implementation, the target loss function is determined from the first loss function of the trained first graph neural network sub-module, the second loss function of the trained second graph neural network sub-module, and the divergence between the output layers of the two trained sub-modules. Therefore, when the target loss function is used to adjust the structural parameters of the graph neural network model to be trained, the model parameters can be adjusted according to the distribution difference between the two graph neural network sub-modules, and the losses of the two sub-modules serve as the basis for adjusting the overall model parameters, which improves the accuracy and robustness of the trained graph neural network model. A sketch of this loss follows.
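A sketch of the target loss under the reconstruction above, y = L1 + L2 + α · D_KL(h1 ‖ h2); the value of α and the direction of the KL term are assumed, since the patent only calls α "a coefficient of the model to be trained".

    import torch.nn.functional as Fn

    def target_loss(logits1, logits2, labels, alpha: float = 0.5):
        l1 = Fn.cross_entropy(logits1, labels)           # L1: CE loss of branch 1
        l2 = Fn.cross_entropy(logits2, labels)           # L2: CE loss of branch 2
        kl = Fn.kl_div(Fn.log_softmax(logits1, dim=-1),  # KL between the two branch
                       Fn.softmax(logits2, dim=-1),      # output distributions
                       reduction="batchmean")            # (direction is a design choice)
        return l1 + l2 + alpha * kl                      # y = L1 + L2 + alpha * KL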
In some embodiments, adjusting the structural parameters of the graph neural network model to be trained based on the target loss function to obtain a trained graph neural network model may include the following steps:
Step 1: adjusting the structural parameters of the graph neural network model to be trained based on the target loss function to obtain an adjusted graph neural network model.
Step 2: obtaining a graph data test set, which includes the labeling result of each graph data to be tested.
Step 3: attacking the graph data test set and the adjusted graph neural network model to obtain an attacked graph neural network model and an attacked test set.
Step 4: and inputting the attacked test set into the attacked graph neural network model to obtain the attacked test result of the graph data test set.
Step 5: and determining the accuracy of the attacked graph neural network model based on the attacked test result and the labeling result.
Step 6: executing the step of adjusting the structural parameters of the graph neural network model to be trained based on the target loss function in a loop until the accuracy is greater than or equal to the preset accuracy, and determining the adjusted graph neural network model as the trained graph neural network model.
Illustratively, the training process of the graph neural network model is a continuous loop in which the model parameters are optimized.
In the process of adjusting the structural parameters of the graph neural network model to be trained through the target loss function, the adjusted graph neural network model is obtained.
The graph data test set is then used to test the adjusted graph neural network model. Specifically, a graph data test set is obtained, which includes the labeling result of each graph data to be tested; the labeling result represents the accurate classification result of the corresponding data to be tested.
Furthermore, the graph data test set and the adjusted graph neural network model can be attacked through a graph neural network attack algorithm to obtain the attacked graph neural network model and the attacked test set. Specifically, the attack algorithm may be a global graph attack algorithm such as the topology attack Projected Gradient Descent (PGD) or the poisoning attack Meta Attack, or another attack algorithm, which is not limited here.
Specifically, suppose the data to be tested in the graph data test set includes the test node attribute feature F3 and the test adjacency matrix A3, and the adjusted graph neural network model is M'. The attacked graph neural network model is then M'', and the attacked test set includes the test node attribute feature F3 and the attacked test adjacency matrix A3'.
Further, the test node attribute feature F3 and the attacked test adjacency matrix A3' are input into the attacked graph neural network model M'' to obtain the attacked test result.
In the embodiments of the present application, only the case where the adjacency matrix in the graph data test set is attacked is described. In practical applications, the test node attribute feature F3 in the graph data test set may instead be attacked to obtain an attacked test set, or the test node attribute feature F3 and the test adjacency matrix A3 may be attacked at the same time to obtain an attacked test set, which is not limited here.
The accuracy m3 of the attacked graph neural network model M'' is determined according to the attacked test result and the corresponding labeling result. Specifically, the accuracy m3 may be one or a combination of the accuracy, precision, and TOP-K accuracy of the attacked test result, which is not limited here.
Further, the accuracy m3 is compared with the preset accuracy m. If m3 is greater than or equal to m, the adjusted graph neural network model is taken as the trained graph neural network model; if m3 is smaller than m, the step of adjusting the structural parameters of the graph neural network model to be trained according to the target loss function is executed in a loop until m3 is greater than or equal to m.
In the above implementation, the graph data test set and the adjusted graph neural network model are attacked, and the attacked test set is tested through the attacked graph neural network model to obtain the attacked test result. The accuracy of the attacked graph neural network model is determined from the attacked test result and the labeling result of the test set. When this accuracy is greater than or equal to the preset accuracy, the attacked graph neural network model has higher defensive capability and can still obtain an accurate classification result when classifying graph data, so the adjusted graph neural network model is determined as the trained graph neural network model with higher defensive performance. If the accuracy of the attacked graph neural network model is lower, the structural parameters of the graph neural network model to be trained are adjusted repeatedly, which realizes the training of the graph neural network model, ensures that the trained graph neural network model has higher robustness, effectively improves the defensive capability of the attacked graph neural network model, and improves the classification result of the graph neural network model on social network graph data. A sketch of this evaluation step follows.
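A sketch of the evaluation step, with the attack itself hidden behind attack_fn (an assumed interface, not a real library call), because the patent allows any global attack algorithm such as PGD or Meta Attack.

    def attacked_accuracy(model, attack_fn, feats, adj_test, labels):
        # attack_fn perturbs the test adjacency matrix (the case described
        # above); attacks on the node features would be handled analogously.
        adj_attacked = attack_fn(model, feats, adj_test, labels)
        preds = model(feats, adj_attacked).argmax(dim=-1)
        return (preds == labels).float().mean().item()   # m3

    # Stopping rule from the text: keep adjusting the model parameters
    # until attacked_accuracy(...) >= m, the preset accuracy.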
In some embodiments, before determining the adjusted graph neural network model as the trained graph neural network model when the accuracy is greater than or equal to the preset accuracy, the method may further include the following steps:
step 1: and training the initial graph neural network model based on the node attribute characteristics and the adjacency matrix of each piece of graph data to be trained in the graph data training set to obtain a trained reference graph neural network model.
Step 2: and inputting the graph data test set into the trained reference graph neural network model to obtain a reference test result of the graph data test set.
Step 3: and determining the reference accuracy of the trained reference graph neural network model based on the reference test result and the labeling result.
Step 4: the preset accuracy is determined based on the reference accuracy.
Illustratively, before the accuracy of the attacked graph neural network model is compared with the preset accuracy, the preset accuracy may be determined through a non-attacked graph neural network model.
Specifically, the initial graph neural network model is trained through the graph data training set, namely, node attribute characteristics and an adjacent matrix of each graph data to be trained in the graph data training set are input into the initial graph neural network model, and a trained reference graph neural network model is obtained.
It should be noted that the graph data training set here may be the same as or different from the graph data training set in step S201; to avoid differences caused by training different model structures on different data, the embodiments of the present application take the case where the two training sets are the same as an example. The initial graph neural network model is an untrained GCN, and the trained reference graph neural network model is denoted M_ref below.
Further, the graph data test set is input into the trained reference graph neural network model, and a reference test result of the graph data test set is obtained.
The reference test result is compared with the labeling result of the graph data test set to determine the reference accuracy m1 of the trained reference graph neural network model M_ref, and the preset accuracy m is then determined according to the reference accuracy m1. Specifically, the preset accuracy may be equal to or slightly smaller than the reference accuracy.
In the above implementation, the initial graph neural network model is trained through the graph data training set to obtain the trained reference graph neural network model, the accuracy of the trained reference graph neural network model is determined through the graph data test set, and the preset accuracy is determined according to this accuracy. That is, the accuracy of the non-attacked graph neural network model is used as the basis for determining the preset accuracy, so that the graph neural network model obtained by the training of this scheme can accurately classify the graph data to be classified even if it is attacked.
As another embodiment, determining the preset accuracy according to the accuracy of the trained reference graph neural network model after it is attacked may further include:
attacking the trained reference graph neural network model and the graph data test set to obtain an attacked reference graph neural network model and an attacked test set; inputting the attacked test set into the attacked reference graph neural network model to obtain an attacked reference test result; determining the attacked accuracy of the attacked reference graph neural network model according to the attacked reference test result and the labeling result; and determining the preset accuracy according to the attacked accuracy and the reference accuracy.
As an example, the trained reference graph neural network model and the graph data test set are attacked by the PGD attack algorithm to obtain the attacked reference graph neural network model M_ref'' and the attacked test set, where the attacked test set includes the test node attribute feature F3 and the attacked test adjacency matrix A3'.
The test node attribute feature F3 and the attacked test adjacency matrix A3' are input into the attacked reference graph neural network model M_ref'' to obtain the attacked reference test result, and the attacked accuracy m2 of the attacked reference graph neural network model is then determined according to the attacked reference test result and the labeling result. The preset accuracy m is determined according to the attacked accuracy m2 and the reference accuracy m1, and may be greater than the attacked accuracy m2 and smaller than the reference accuracy m1.
In the above implementation, the preset accuracy is determined according to the accuracy of the attacked reference graph neural network model, so that the lower bound of the preset accuracy is effectively determined with the accuracy of the attacked model as the reference. A small sketch of this choice follows.
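The patent only bounds the preset accuracy m by m2 < m < m1; the midpoint below is one assumed concrete choice.

    m = 0.5 * (m1 + m2)   # any value strictly between m2 and m1 satisfies the text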
This embodiment also provides a specific example of the graph data classification method. Fig. 3 is a flowchart of this example; as shown in fig. 3, the flow includes the following steps:
step S301, performing dimension reduction processing on adjacent matrixes in target to-be-trained graph data based on two different dimension reduction methods to obtain a first dimension reduction target adjacent matrix and a second dimension reduction target adjacent matrix.
Fig. 4 is a schematic diagram of the training principle of the graph neural network model according to an embodiment of the present application. As shown in fig. 4, the first dimension reduction method, such as a matrix factorization algorithm, is used to perform dimension reduction processing on the adjacency matrix in the target graph data to be trained to obtain the first dimension-reduction target adjacency matrix A1'; the second dimension reduction method, such as a graph reconstruction method, is used to perform dimension reduction processing on the adjacency matrix in the target graph data to be trained to obtain the second dimension-reduction target adjacency matrix A1''.
Step S302, training the first graph neural network sub-module based on the first dimension-reduction target adjacency matrix and the corresponding node attribute features, and training the second graph neural network sub-module based on the second dimension-reduction target adjacency matrix and the corresponding node attribute features, to obtain the trained first graph neural network sub-module and the trained second graph neural network sub-module.
Specifically, the first dimension-reduction target adjacency matrix A1' and the corresponding node attribute feature F are input into the first graph neural network sub-module f1, and at the same time the second dimension-reduction target adjacency matrix A1'' and the corresponding node attribute feature F are input into the second graph neural network sub-module f2, thereby obtaining the trained first graph neural network sub-module and the trained second graph neural network sub-module.
Step S303, determining the target loss function based on the divergence between the output layer of the trained first graph neural network sub-module and the output layer of the trained second graph neural network sub-module, the first loss function of the trained first graph neural network sub-module, and the second loss function of the trained second graph neural network sub-module.
Further, the divergence D_KL(h1 ‖ h2) between the intermediate output layer h1 of the trained first graph neural network sub-module and the intermediate output layer h2 of the trained second graph neural network sub-module is determined.
According to the first loss function L1 of the trained first graph neural network sub-module, the second loss function L2 of the trained second graph neural network sub-module, and the divergence D_KL(h1 ‖ h2), the target loss function y is constructed, namely:
y = L1 + L2 + α · D_KL(h1 ‖ h2)
Step S304, adjusting the structural parameters of the graph neural network model to be trained based on the target loss function until the model converges, so as to obtain the trained graph neural network model.
Further, the structural parameters of the graph neural network model to be trained are adjusted according to the target loss function to obtain the adjusted graph neural network model. The graph data test set and the adjusted graph neural network model are attacked to obtain the attacked graph neural network model and the attacked test set; the attacked test set is input into the attacked graph neural network model to obtain the attacked test result of the graph data test set; and the accuracy of the attacked graph neural network model is determined according to the attacked test result and the labeling result. When the accuracy is greater than or equal to the preset accuracy, the model converges, and the adjusted graph neural network model is taken as the trained graph neural network model.
Step S305, inputting the graph data to be classified into a trained graph neural network model to obtain a classification result of the graph data to be classified.
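As an overall illustration, the sketches above can be strung together into the following assumed training loop; feats, adj1_reduced, adj2_reduced, labels, the learning rate, and the epoch budget are all illustrative names and values, and the attack-and-test convergence check of step S304 is abbreviated to a fixed number of epochs.

    import torch

    opt = torch.optim.Adam(list(f1.parameters()) + list(f2.parameters()), lr=1e-2)

    for epoch in range(200):                      # assumed epoch budget
        opt.zero_grad()
        out1 = f1(feats, adj1_reduced)            # branch on the first reduced matrix
        out2 = f2(feats, adj2_reduced)            # branch on the second reduced matrix
        loss = target_loss(out1, out2, labels)    # y = L1 + L2 + alpha * KL
        loss.backward()
        opt.step()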
Although the steps in the flowcharts of the above embodiments are shown in the order indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, the order of execution of these steps is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in the above flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and their order of execution is not necessarily sequential; they may be performed in turn or alternately with at least some of the other steps or sub-steps.
This embodiment also provides a graph data classification device, which is used to implement the above embodiments and preferred implementations; what has already been described is not repeated here. The terms "module," "unit," "sub-unit," and the like as used below may refer to a combination of software and/or hardware that performs a predetermined function. While the device described in the following embodiments is preferably implemented in software, implementations in hardware, or a combination of software and hardware, are also possible and contemplated.
Fig. 5 is a block diagram of the graph data classification device according to an embodiment of the present application; as shown in fig. 5, the device includes:
the obtaining module 501 is configured to obtain a graph data training set, where each graph data to be trained in the graph data training set includes node attribute features and an adjacency matrix, the graph data to be trained is social network graph data, a single node in the social network graph data represents a user, the node attribute features include at least one of age, gender, hobbies and occupation of the user, and the adjacency matrix represents an association relationship between the users;
the dimension reduction module 502 is configured to perform dimension reduction processing on an adjacency matrix in target to-be-trained graph data to obtain a dimension reduction target adjacency matrix, where the target to-be-trained graph data is any to-be-trained graph data in the graph data training set;
the model training module 503 is configured to train the graph neural network model to be trained based on the node attribute features of the target graph data to be trained and the dimension-reduction target adjacency matrix, so as to obtain a trained graph neural network model;
the graph data classification module 504 is configured to classify graph data to be classified based on the trained graph neural network model to obtain a classification result of the graph data to be classified, where the graph data to be classified is social network graph data to be classified, a single node in the social network graph data to be classified represents one user to be classified, the social network graph data to be classified includes at least one of the age, gender, hobbies and occupation of each user to be classified, as well as the association relations among the users to be classified, and the classification result includes the activity level, in the social network platform, of each user to be classified.
In some of these embodiments, the dimension reduction module 502 is specifically configured to: perform dimension reduction processing on the adjacency matrix in the target to-be-trained graph data based on a first dimension reduction method to obtain a first dimension reduction target adjacency matrix;
and perform dimension reduction processing on the adjacency matrix in the target to-be-trained graph data based on a second dimension reduction method to obtain a second dimension reduction target adjacency matrix, where the dimension reduction target adjacency matrix includes the first dimension reduction target adjacency matrix and the second dimension reduction target adjacency matrix.
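The claims identify the first dimension reduction method as a matrix decomposition algorithm and the second as a graph reconstruction method, without prescribing concrete algorithms. A sketch under those assumptions, using truncated SVD for the decomposition and a top-k spectral rebuild as a stand-in for the reconstruction, might look like:

```python
import numpy as np

def reduce_by_decomposition(adj, k):
    """First method (matrix decomposition): rank-k truncated SVD of the
    adjacency matrix; U * sqrt(S) serves as the reduced representation."""
    u, s, _ = np.linalg.svd(adj, full_matrices=False)
    return u[:, :k] * np.sqrt(s[:k])

def reduce_by_reconstruction(adj, k):
    """Second method (graph reconstruction): rebuild the adjacency matrix
    from its k largest-magnitude spectral components (an assumed variant)."""
    vals, vecs = np.linalg.eigh(adj)         # adjacency assumed symmetric
    top = np.argsort(-np.abs(vals))[:k]
    return vecs[:, top] @ np.diag(vals[top]) @ vecs[:, top].T
```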
In some embodiments, the graph neural network model to be trained includes a first graph neural network sub-module and a second graph neural network sub-module, and the model training module 503 is specifically configured to:
training the first graph neural network sub-module based on node attribute characteristics in target to-be-trained graph data and a first dimension-reduction target adjacency matrix to obtain a trained first graph neural network sub-module;
training the second graph neural network submodule based on node attribute features in target to-be-trained graph data and a second dimension-reduction target adjacency matrix to obtain a trained second graph neural network submodule;
and obtaining the trained graph neural network model based on the trained first graph neural network sub-module and the trained second graph neural network sub-module.
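A minimal sketch of such a two-sub-module model, assuming PyTorch; the single-layer graph convolutions, mean pooling and concatenation head are illustrative choices, not the architecture prescribed by the embodiment. The returned h1 and h2 are the intermediate outputs that the divergence term of the target loss function would compare.

```python
import torch
import torch.nn as nn

class DualBranchGNN(nn.Module):
    """Two sub-modules fed the same node features but different
    dimension-reduced adjacency matrices."""
    def __init__(self, in_dim, hid_dim, num_classes):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)  # first graph neural network sub-module
        self.w2 = nn.Linear(in_dim, hid_dim)  # second graph neural network sub-module
        self.head = nn.Linear(2 * hid_dim, num_classes)

    def forward(self, x, adj1, adj2):
        h1 = torch.relu(adj1 @ self.w1(x))    # propagate over the first reduced adjacency
        h2 = torch.relu(adj2 @ self.w2(x))    # propagate over the second reduced adjacency
        out = self.head(torch.cat([h1.mean(0), h2.mean(0)], dim=-1))
        return out, h1, h2                    # logits plus intermediate outputs
```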
In some of these embodiments, model training module 503 is specifically configured to:
determining a target loss function of the graph neural network model to be trained based on the divergence between the trained first graph neural network sub-module and the trained second graph neural network sub-module;
and adjusting the structural parameters of the graph neural network model to be trained based on the target loss function to obtain the trained graph neural network model.
In some of these embodiments, model training module 503 is specifically configured to:
determining a divergence between an output layer of the trained first graph neural network sub-module and an output layer of the trained second graph neural network sub-module;
the target loss function is determined based on the divergence, the first loss function of the trained first graph neural network sub-module, and the second loss function of the trained second graph neural network sub-module.
In some of these embodiments, model training module 503 is specifically configured to:
adjusting the structural parameters of the graph neural network model to be trained based on the target loss function to obtain an adjusted graph neural network model;
obtaining a graph data test set which comprises labeling results of each graph data to be tested;
attacking the graph data test set and the adjusted graph neural network model to obtain an attacked graph neural network model and an attacked test set;
inputting the attacked test set into the attacked graph neural network model to obtain the attacked test result of the graph data test set;
determining the accuracy of the attacked graph neural network model based on the attacked test result and the labeling result;
and circularly executing the step of adjusting the structural parameters of the graph neural network model to be trained based on the target loss function until the accuracy is greater than or equal to the preset accuracy, and determining the adjusted graph neural network model as the trained graph neural network model.
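Putting the pieces together, the cyclic adjust-attack-evaluate loop might look like the following sketch; compute_target_loss is a hypothetical helper wrapping the loss sketch given earlier, and attacked_accuracy is the evaluation sketch above.

```python
def train_until_robust(model, optimizer, train_batch, test_graphs, labels, preset_acc):
    """Cyclically adjust the structural parameters with the target loss,
    attack the adjusted model, and stop once the post-attack accuracy
    reaches the preset accuracy."""
    while True:
        optimizer.zero_grad()
        loss = compute_target_loss(model, train_batch)  # hypothetical helper
        loss.backward()
        optimizer.step()
        acc = attacked_accuracy(model, test_graphs, labels)
        if acc >= preset_acc:
            return model  # the adjusted model becomes the trained model
```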
In some of these embodiments, model training module 503 is further to:
training the initial graph neural network model based on node attribute characteristics and an adjacency matrix of each graph data to be trained in the graph data training set to obtain a trained reference graph neural network model;
inputting the graph data test set into a trained reference graph neural network model to obtain a reference test result of the graph data test set;
determining the reference accuracy of the trained reference graph neural network model based on the reference test result and the labeling result;
The preset accuracy is determined based on the reference accuracy.
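For instance, the preset accuracy could be taken as the reference model's accuracy minus a tolerance; the margin below is an assumption, since the embodiment only states that the preset accuracy is determined based on the reference accuracy.

```python
def preset_accuracy_from_reference(ref_model, test_graphs, labels, margin=0.05):
    """Derive the preset accuracy from a reference model trained on the
    full, non-reduced adjacency matrices (margin is an assumed tolerance)."""
    correct = sum(int(ref_model(x, adj).argmax(dim=-1) == y)
                  for (x, adj), y in zip(test_graphs, labels))
    reference_accuracy = correct / len(labels)
    return reference_accuracy - margin
```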
The above-described respective modules may be functional modules or program modules, and may be implemented by software or hardware. For modules implemented in hardware, the various modules described above may be located in the same processor; or the above modules may be located in different processors in any combination.
In one embodiment, a computer device is provided; the computer device may be a server, and its internal structure may be as shown in fig. 6, which is a schematic diagram of the internal structure of a computer device provided in an embodiment of the present application. The computer device includes a processor, a memory, and a network interface connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and the computer programs in the non-volatile storage medium. The database of the computer device is used for storing data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by the processor, implements a graph data classification method.
It will be appreciated by those skilled in the art that the structure shown in fig. 6 is merely a block diagram of some of the structures associated with the present application and is not limiting of the computer device to which the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, there is also provided an electronic device including a memory and a processor, the memory storing a computer program, the processor implementing the steps of the method embodiments described above when executing the computer program.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, carries out the steps of the method embodiments described above.
It should be noted that, user information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which, when executed, may carry out the flows of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the various embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high density embedded non-volatile memory, resistive random access memory (ReRAM), magnetic random access memory (Magnetoresistive Random Access Memory, MRAM), ferroelectric memory (Ferroelectric Random Access Memory, FRAM), phase change memory (Phase Change Memory, PCM), graphene memory, and the like. Volatile memory can include random access memory (Random Access Memory, RAM) or external cache memory, and the like. By way of illustration, and not limitation, RAM can take a variety of forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM), and the like. The databases referred to in the various embodiments provided herein may include at least one of relational databases and non-relational databases. The non-relational databases may include, but are not limited to, blockchain-based distributed databases, and the like. The processors referred to in the embodiments provided herein may be general purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, quantum computing-based data processing logic units, etc., without being limited thereto.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of these technical features have been described; however, as long as a combination of technical features contains no contradiction, it should be considered to be within the scope of this specification.
The above examples represent only a few embodiments of the present application; their descriptions are comparatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be noted that those skilled in the art may make various modifications and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Accordingly, the protection scope of the present application shall be subject to the appended claims.

Claims (7)

1. A graph data classifying method, comprising:
obtaining a graph data training set, wherein each graph data to be trained in the graph data training set comprises node attribute characteristics and an adjacency matrix, the graph data to be trained is social network graph data, a single node in the social network graph data represents a user, the node attribute characteristics comprise at least one of age, gender, hobbies and occupation of the user, and the adjacency matrix represents an association relation between the users;
performing dimension reduction processing on an adjacency matrix in target to-be-trained graph data to obtain a dimension reduction target adjacency matrix, wherein the target to-be-trained graph data is any to-be-trained graph data in the graph data training set;
the performing dimension reduction processing on the adjacency matrix in the target to-be-trained graph data to obtain the dimension reduction target adjacency matrix comprises: performing dimension reduction processing on the adjacency matrix in the target to-be-trained graph data based on a first dimension reduction method to obtain a first dimension reduction target adjacency matrix; and performing dimension reduction processing on the adjacency matrix in the target to-be-trained graph data based on a second dimension reduction method to obtain a second dimension reduction target adjacency matrix, wherein the dimension reduction target adjacency matrix comprises the first dimension reduction target adjacency matrix and the second dimension reduction target adjacency matrix, the first dimension reduction method comprises a matrix decomposition algorithm, and the second dimension reduction method comprises a graph reconstruction method;
training the graph neural network model to be trained based on the node attribute characteristics of the target graph data to be trained and the dimension-reduction target adjacency matrix to obtain a trained graph neural network model;
the graph neural network model to be trained comprises a first graph neural network sub-module and a second graph neural network sub-module, the graph neural network model to be trained is trained based on node attribute characteristics of the target graph data to be trained and the dimension reduction target adjacent matrix, and a trained graph neural network model is obtained, and the graph neural network model comprises: training the first graph neural network sub-module based on node attribute characteristics in the target graph data to be trained and the first dimension-reduction target adjacency matrix to obtain a trained first graph neural network sub-module; training the second graph neural network sub-module based on the node attribute characteristics in the target graph data to be trained and the second dimension-reduction target adjacency matrix to obtain a trained second graph neural network sub-module; obtaining the trained graph neural network model based on the trained first graph neural network sub-module and the trained second graph neural network sub-module;
The obtaining the trained graph neural network model based on the trained first graph neural network sub-module and the trained second graph neural network sub-module includes: determining a target loss function of the graph neural network model to be trained based on a divergence between the trained first graph neural network sub-module and the trained second graph neural network sub-module; adjusting structural parameters of the graph neural network model to be trained based on the target loss function to obtain a trained graph neural network model;
classifying the map data to be classified based on the trained map neural network model to obtain a classification result of the map data to be classified, wherein the map data to be classified is social network map data to be classified, a single node in the social network map data to be classified represents one user to be classified, the social network map data to be classified comprises at least one of age, gender, hobbies and occupation of the user to be classified and association relations among a plurality of users to be classified, and the classification result of the map data to be classified comprises the activity level of each user to be classified in the social network map data to be classified in a social network platform.
2. The graph data classification method of claim 1, wherein the determining the target loss function of the graph neural network model to be trained based on the divergence between the trained first graph neural network sub-module and the trained second graph neural network sub-module comprises:
determining a divergence between an output layer of the trained first graph neural network sub-module and an output layer of the trained second graph neural network sub-module;
the target loss function is determined based on the divergence, the first loss function of the trained first graph neural network sub-module, and the second loss function of the trained second graph neural network sub-module.
3. The graph data classification method according to claim 1, wherein the adjusting the structural parameters of the graph neural network model to be trained based on the target loss function to obtain a trained graph neural network model includes:
adjusting the structural parameters of the graph neural network model to be trained based on the target loss function to obtain an adjusted graph neural network model;
obtaining a graph data test set, wherein the graph data test set comprises labeling results of each graph data to be tested;
attacking the graph data test set and the adjusted graph neural network model to obtain an attacked graph neural network model and an attacked test set;
inputting the attacked test set into the attacked graph neural network model to obtain the attacked test result of the graph data test set;
determining the accuracy of the attacked graph neural network model based on the attacked test result and the labeling result;
and circularly executing the step of adjusting the structural parameters of the graph neural network model to be trained based on the target loss function until the accuracy is greater than or equal to a preset accuracy, and determining the adjusted graph neural network model as the trained graph neural network model.
4. The graph data classification method of claim 3, further comprising, before determining the adjusted graph neural network model as the trained graph neural network model when the accuracy is greater than or equal to a preset accuracy:
training the initial graph neural network model based on node attribute characteristics and an adjacency matrix of each graph data to be trained in the graph data training set to obtain a trained reference graph neural network model;
Inputting the graph data test set into the trained reference graph neural network model to obtain a reference test result of the graph data test set;
determining the reference accuracy of the trained reference graph neural network model based on the reference test result and the labeling result;
and determining the preset accuracy based on the reference accuracy.
5. A graph data classifying apparatus, comprising:
an acquisition module, used for acquiring a graph data training set, wherein each graph data to be trained in the graph data training set comprises node attribute characteristics and an adjacency matrix, the graph data to be trained is social network graph data, a single node in the social network graph data represents a user, the node attribute characteristics comprise at least one of age, gender, hobbies and occupation of the user, and the adjacency matrix represents an association relation among the users;
the dimension reduction module is used for carrying out dimension reduction processing on the adjacency matrix in the target to-be-trained graph data to obtain a dimension reduction target adjacency matrix, wherein the target to-be-trained graph data is any to-be-trained graph data in the graph data training set;
the dimension reduction module is specifically used for: performing dimension reduction processing on the adjacency matrix in the target to-be-trained graph data based on a first dimension reduction method to obtain a first dimension reduction target adjacency matrix; and performing dimension reduction processing on the adjacency matrix in the target to-be-trained graph data based on a second dimension reduction method to obtain a second dimension reduction target adjacency matrix, wherein the dimension reduction target adjacency matrix comprises the first dimension reduction target adjacency matrix and the second dimension reduction target adjacency matrix, the first dimension reduction method comprises a matrix decomposition algorithm, and the second dimension reduction method comprises a graph reconstruction method;
The model training module is used for training the to-be-trained graph neural network model based on the node attribute characteristics of the target to-be-trained graph data and the dimension reduction target adjacency matrix to obtain a trained graph neural network model;
the to-be-trained graph neural network model comprises a first graph neural network sub-module and a second graph neural network sub-module, and the model training module is specifically used for: training the first graph neural network sub-module based on node attribute characteristics in the target graph data to be trained and the first dimension-reduction target adjacency matrix to obtain a trained first graph neural network sub-module; training the second graph neural network sub-module based on the node attribute characteristics in the target graph data to be trained and the second dimension-reduction target adjacency matrix to obtain a trained second graph neural network sub-module; and obtaining the trained graph neural network model based on the trained first graph neural network sub-module and the trained second graph neural network sub-module;
the model training module is specifically used for: determining a target loss function of the graph neural network model to be trained based on a divergence between the trained first graph neural network sub-module and the trained second graph neural network sub-module; adjusting structural parameters of the graph neural network model to be trained based on the target loss function to obtain a trained graph neural network model;
The graph data classification module is used for classifying graph data to be classified based on the trained graph neural network model to obtain a classification result of the graph data to be classified, wherein the graph data to be classified is social network graph data to be classified, a single node in the social network graph data to be classified represents one user to be classified, the social network graph data to be classified comprises at least one of age, gender, hobbies and occupation of the user to be classified and association relations among a plurality of users to be classified, and the classification result of the graph data to be classified comprises the activity level, in the social network platform, of each user to be classified in the social network graph data to be classified.
6. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, the processor being arranged to run the computer program to perform the graph data classification method of any one of claims 1 to 4.
7. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the graph data classification method of any one of claims 1 to 4.
CN202311522727.1A 2023-11-15 2023-11-15 Picture data classification method, device, electronic device and storage medium Active CN117235584B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311522727.1A CN117235584B (en) 2023-11-15 2023-11-15 Picture data classification method, device, electronic device and storage medium

Publications (2)

Publication Number Publication Date
CN117235584A CN117235584A (en) 2023-12-15
CN117235584B (en) 2024-04-02

Family

ID=89086556

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311522727.1A Active CN117235584B (en) 2023-11-15 2023-11-15 Picture data classification method, device, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN117235584B (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA3080373A1 (en) * 2019-05-10 2020-11-10 Royal Bank Of Canada System and method for machine learning architecture with privacy-preserving node embeddings

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020248581A1 (en) * 2019-06-11 2020-12-17 中国科学院自动化研究所 Graph data identification method and apparatus, computer device, and storage medium
CN111795742A (en) * 2020-05-18 2020-10-20 南京林业大学 Dimension reduction method for single RGB image reconstruction spectrum
CN112862093A (en) * 2021-01-29 2021-05-28 北京邮电大学 Graph neural network training method and device
CN114201572A (en) * 2022-02-15 2022-03-18 深圳依时货拉拉科技有限公司 Interest point classification method and device based on graph neural network
CN114897161A (en) * 2022-05-17 2022-08-12 中国信息通信研究院 Mask-based graph classification backdoor attack defense method and system, electronic equipment and storage medium
CN115222044A (en) * 2022-07-13 2022-10-21 深圳市腾讯信息技术有限公司 Model training method, graph data processing method, device, equipment and storage medium
CN116383441A (en) * 2022-12-26 2023-07-04 招联消费金融有限公司 Community detection method, device, computer equipment and storage medium
CN115965058A (en) * 2022-12-28 2023-04-14 连连(杭州)信息技术有限公司 Neural network training method, entity information classification method, device and storage medium
CN116665786A (en) * 2023-07-21 2023-08-29 曲阜师范大学 RNA layered embedding clustering method based on graph convolution neural network

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Theoretical framework in graph embedding-based discriminant dimensionality reduction; Guodong Zhao et al.; Signal Processing; full text *
A fast unsupervised linear dimensionality reduction method based on maximum entropy; Wang Jikui et al.; Journal of Software; Vol. 34, No. 4; full text *
Social network node classification method based on graph encoding networks; Hao Zhifeng; Ke Yanrong; Li Shuo; Cai Ruichu; Wen Wen; Wang Lijuan; Journal of Computer Applications (No. 01); full text *
Research on dimensionality reduction and classification of hyperspectral images based on manifold learning and graph neural networks; Shi Shihao; China Master's Theses Full-text Database, Engineering Science and Technology II; Vol. 2023 (No. 6); full text *

Also Published As

Publication number Publication date
CN117235584A (en) 2023-12-15


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant