WO2023174189A1 - Graph network model node classification method, apparatus, device, and storage medium - Google Patents

Graph network model node classification method, apparatus, device, and storage medium Download PDF

Info

Publication number
WO2023174189A1
Authority
WO
WIPO (PCT)
Prior art keywords
network model
nodes
graph network
node
graph
Prior art date
Application number
PCT/CN2023/080970
Other languages
English (en)
French (fr)
Inventor
罗光圣
杨宇
Original Assignee
上海爱数信息技术股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海爱数信息技术股份有限公司 filed Critical 上海爱数信息技术股份有限公司
Publication of WO2023174189A1 publication Critical patent/WO2023174189A1/zh

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Definitions

  • the present application relates to the field of graph network technology, for example, to a graph network model node classification method, apparatus, device, and storage medium.
  • taking the Excel table recognition scenario as an example, table recognition model training methods in the related art mainly include general table training methods based on a Graph Convolutional Network (GCN), table recognition training methods based on the YOLO network model, and table recognition training methods based on Faster Region-Convolutional Neural Networks (Faster R-CNN).
  • the first method is based on a neural network model and requires a large amount of labeled data; obtaining large amounts of manually labeled data is expensive, and training a GCN model from scratch is also very costly, which is not conducive to practical applications;
  • the second method uses only a Convolutional Neural Network (CNN) to directly predict the categories and locations of different targets, which cannot guarantee accuracy;
  • the third method's sliding-window region selection strategy is untargeted, has high time complexity and redundant windows, and its hand-designed features are not robust to diverse variations.
  • This application provides a graph network model node classification method, apparatus, device, and storage medium, to improve the efficiency and accuracy of node classification.
  • a graph network model node classification method including:
  • constructing an initial graph network model based on original graph data; adjusting the initial graph network model to obtain a target graph network model; and using the target graph network model to construct positive examples and negative examples, and classifying the nodes in the original graph data according to the positive examples and negative examples.
  • a graph network model node classification device including:
  • the initial network model building module is configured to build an initial graph network model based on the original graph data
  • An initial network model adjustment module is configured to adjust the initial graph network model to obtain a target graph network model
  • a positive example and negative example construction module is configured to use the target graph network model to construct positive examples and negative examples, and classify nodes in the original graph data according to the positive examples and negative examples.
  • an electronic device including:
  • a memory communicatively connected to the at least one processor; wherein the memory stores a computer program executable by the at least one processor, and the computer program is executed by the at least one processor so that the at least one processor can perform the graph network model node classification method described in any embodiment of the present application.
  • a computer-readable storage medium stores computer instructions, and the computer instructions are used to cause a processor, when executing them, to implement the graph network model node classification method described in any embodiment of the present application.
  • Figure 1 is a flow chart of a graph network model node classification method provided according to Embodiment 1 of the present application;
  • Figure 2 is a flow chart of a graph network model node classification method provided according to Embodiment 2 of the present application;
  • Figure 3 is a schematic diagram of a graph network pre-training process provided according to Embodiment 2 of the present application.
  • Figure 4 is a schematic structural diagram of a graph network model node classification device provided according to Embodiment 3 of the present application.
  • Figure 5 is a schematic structural diagram of an electronic device that implements the graph network model node classification method in Embodiment 4 of the present application.
  • Figure 1 is a flow chart of a graph network model node classification method provided in Embodiment 1 of the present application. This embodiment can be applied to situations where a graph network model is used to classify nodes.
  • This method can be executed by a graph network model node classification device.
  • the graph network model node classification device can be implemented in the form of hardware and/or software, and the graph network model node classification device can be configured in electronic equipment. As shown in Figure 1, the method includes:
  • the original graph data is graph data containing multiple nodes to be classified, and the initial graph network model is an unadjusted rough model built based on the original graph data.
  • the original graph data contains labeled sample data and unlabeled sample data.
  • the labeled sample data in the original graph data can be analyzed to obtain the degree information of each node, and the graph structure can be used to construct the initial graph network model.
  • the number of neighbor nodes of each node in the initial graph network model corresponds to the degree information of the node.
  • the target graph network model is a model after adjusting the initial graph network model.
  • the initial graph network model is constructed using labeled data (labeled sample data) in the original graph data.
  • the initial graph network model can be regarded as an encoder and can be used to generate the attribute features and edge structures of the nodes in the unlabeled data.
  • through the initial graph network model, labels can be created for the unlabeled data (unlabeled sample data), that is, the unlabeled data is converted into labeled data. This process can be called pre-training of the initial graph network model.
  • the initial graph network model is adjusted during the pre-training process, and the target graph network model can be obtained after the adjustment is completed.
  • S130 Use the target graph network model to construct positive examples and negative examples, and classify the nodes in the original graph data according to the positive examples and negative examples.
  • For a graph network model, performing a random walk starting from any node can generate a neighbor subgraph centered on that node. Neighbor subgraphs generated from the same central node can be considered to have similar structural attributes and are therefore treated as positive examples; neighbor subgraphs generated from different nodes (including nodes in the same network or in different networks) have unique structural attributes related to their central nodes, that is, neighbor subgraphs generated from different nodes have no structural similarity to one another, and are therefore treated as negative examples.
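  • As a minimal sketch of this sampling scheme (not the patent's own implementation; the undirected `networkx` graph, the walk length, and the walks-per-start count are assumptions introduced only for illustration), fixed-length random walks can induce neighbor subgraphs, and two subgraphs form a positive pair exactly when their walks share a start node:

```python
import random
import networkx as nx

def neighbor_subgraph(graph: nx.Graph, start, walk_len: int = 8) -> nx.Graph:
    """Random walk from `start`; the visited nodes induce a neighbor subgraph."""
    visited, node = {start}, start
    for _ in range(walk_len):
        neighbors = list(graph.neighbors(node))
        if not neighbors:
            break
        node = random.choice(neighbors)
        visited.add(node)
    return graph.subgraph(visited)

def contrastive_pairs(graph: nx.Graph, starts, walks_per_start: int = 2):
    """Subgraphs sharing a start node are positives; all other pairs are negatives."""
    subs = [(s, neighbor_subgraph(graph, s)) for s in starts for _ in range(walks_per_start)]
    positives = [(g1, g2) for s1, g1 in subs for s2, g2 in subs if s1 == s2 and g1 is not g2]
    negatives = [(g1, g2) for s1, g1 in subs for s2, g2 in subs if s1 != s2]
    return positives, negatives
```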
  • the target graph network model can be used to construct positive examples and negative examples.
  • Each positive example can be used as a category, and each negative example can also be used as a category.
  • the original graph data is input into the target graph network model to complete the final node classification task.
  • the connection relationship between the node to be classified and the nodes in each positive example and each negative example can be determined, and the category of the node to be classified is determined based on the category corresponding to the subgraph where the node connected to the node to be classified is located.
  • the embodiment of the present application constructs an initial graph network model based on the original graph data, adjusts the initial graph network model to obtain a target graph network model, uses the target graph network model to construct positive examples and negative examples, and analyzes the original graph based on the positive examples and negative examples. Nodes in the data are classified.
  • the graph network model node classification method provided by the embodiment of the present application labels the unlabeled data in the original graph data by pre-training the graph network model, so that the originally unlabeled data can be used for learning in the final node classification task, improving the efficiency and accuracy of graph neural network learning.
  • Figure 2 is a flow chart of a graph network model node classification method provided in Embodiment 2 of the present application. As shown in Figure 2, the method includes:
  • the nodes in the original graph data can be randomly sorted, and the sorted original graph data can be used to construct the initial graph network model.
  • Masked nodes are nodes in unlabeled sample data.
  • the attribute characteristics and edge structures of these nodes can be masked.
  • by masking the unlabeled sample data in the original graph data, the influence of the unlabeled sample data can be eliminated, and the labeled sample data can be used as target nodes to build the graph network model.
  • Degree information can represent the number and direction of edges connected to a node.
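  • For example, on a directed graph the in-degree and out-degree of each node can be read off directly; a toy illustration with hypothetical node IDs, using `networkx`:

```python
import networkx as nx

g = nx.DiGraph([(1, 2), (1, 3), (3, 1)])  # toy graph: 3 nodes, 3 directed edges
degree_info = {n: {"in": g.in_degree(n), "out": g.out_degree(n)} for n in g.nodes}
print(degree_info)  # {1: {'in': 1, 'out': 2}, 2: {'in': 1, 'out': 0}, 3: {'in': 1, 'out': 1}}
```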
  • by obtaining the degree information of the target nodes, the graph information related to the labeled sample data can be obtained; the attribute features and edge structures of the target nodes can then be generated one by one from this graph information, in the order determined in the above steps, until the construction of the entire initial graph network model is completed.
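  • A highly simplified sketch of this sequential generation follows; every interface here (`attr_model`, `edge_model`, the `degree_info` layout) is an assumption made for illustration, and real generative pre-training of graph models is considerably more involved:

```python
def build_initial_model(order, degree_info, attr_model, edge_model):
    """Generate each target node's attribute features, then its edges, in the fixed order."""
    generated = {}
    for node in order:                                   # the random order fixed earlier
        attrs = attr_model.predict(node, generated)      # attribute features first
        n_edges = degree_info[node]["out"]               # edge count follows the node's degree
        edges = edge_model.predict(node, generated, k=n_edges)
        generated[node] = {"attrs": attrs, "edges": edges}
    return generated
```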
  • unlabeled data can be labeled through pre-training of the initial graph network model. Since the initial graph network model is a relatively rough model, the key to pre-training is to learn, during the pre-training process, how to fine-tune the model.
  • the method of debugging the parameters of the initial graph network model may be: determining a pair of nodes in the initial graph network model; determining the loss function value corresponding to the pair of nodes, and adjusting the parameters of the initial graph network model according to the loss function value.
  • the parameters of the initial graph network model can be debugged.
  • the overall vector representation of the subgraph in the latent space can be obtained through graph neural network coding.
  • the pre-training task of the graph network model can be expressed as finding, in a dictionary under the latent-space representation, the key subgraph (key) k0 that is similar to the query subgraph (query) q; that is, the Info Noise Contrastive Estimation (InfoNCE) loss function commonly used in contrastive learning is adopted. This loss function can also be fine-tuned during few-shot meta-task learning.
  • the loss function formula is as follows, where u and v are a pair of nodes in the initial graph network model, and A represents the graph network model where the node is located:
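  • The published text does not reproduce the formula itself; the standard InfoNCE form it names, written here in our own notation (h_u and h_v are the latent representations of the node pair, τ is a temperature, and the negatives v' are drawn from the graph network A; no symbol beyond u, v, and A appears in the original), is:

```latex
\mathcal{L}_{u,v} = -\log
  \frac{\exp\left(\operatorname{sim}(h_u, h_v)/\tau\right)}
       {\sum_{v' \in A} \exp\left(\operatorname{sim}(h_u, h_{v'})/\tau\right)}
```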
  • graph network models can adopt contextual text embedding methods to maintain a task support set that is large enough to support dynamic updates.
  • the way to debug the parameters of the initial graph network model can be: create node-level subtask test sets and graph-level graph task test sets based on the original graph data; train the initial graph network model using the subtask test sets and the graph task test sets respectively; and adjust the parameters of the initial graph network model according to the training results.
  • the graph network model can adopt dual adaptation mechanisms at the node level and the graph level.
  • the node level is the learning of small sample mask nodes
  • the graph level is the learning of pre-trained models on public data sets.
  • for a given pre-training graph data set, the graph network model can first create several node-level subtasks and graph-level tasks on the graph data set, and the data set of each training task is divided into a support set and a test set.
  • during pre-training, the meta-model performs dual adaptation adjustments on the subtask support sets and the graph task support sets, and performs gradient backpropagation on the subtask test sets and graph task test sets according to the computed loss function, thereby adjusting the parameters of the initial graph network model.
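  • A compressed sketch of one such support/test adaptation step (a generic MAML-style loop, not the patent's exact procedure; the `task` objects with `loss`, `support_batch`, and `test_batch` members are assumptions):

```python
import torch
from torch.func import functional_call

def dual_adaptation_step(model, tasks, meta_opt, inner_lr: float = 0.01):
    """One meta-update: adapt on each task's support set, then backpropagate
    the test-set loss through the adapted parameters."""
    meta_opt.zero_grad()
    for task in tasks:  # node-level subtasks and graph-level tasks together
        params = dict(model.named_parameters())
        support_loss = task.loss(functional_call(model, params, (task.support_batch,)))
        grads = torch.autograd.grad(support_loss, list(params.values()), create_graph=True)
        adapted = {name: p - inner_lr * g                 # inner-loop adaptation
                   for (name, p), g in zip(params.items(), grads)}
        test_loss = task.loss(functional_call(model, adapted, (task.test_batch,)))
        test_loss.backward()                              # gradient flows back to the meta-model
    meta_opt.step()
```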
  • the target graph network model can be obtained after parameter debugging of the initial graph network model.
  • S260 Determine at least two starting nodes in the target graph network model.
  • the starting node can be any node in the target graph network model; walking through the target graph network model from a starting node can generate neighbor subgraphs.
  • At least two nodes can be determined in the target graph network model as starting nodes for the generation of neighbor subgraphs in the next step.
  • a neighbor subgraph corresponding to each start node can be generated centered on each start node, and the same start node can correspond to one or more neighbor subgraphs.
  • the neighbor subgraphs generated from the same central node have similar structural attributes and are therefore treated as positive examples; the neighbor subgraphs generated from different nodes (including nodes in the same network or in different networks) have unique structural attributes related to their central nodes, that is, neighbor subgraphs generated from different nodes have no structural similarity to one another, and are therefore treated as negative examples.
  • each positive example corresponds to a category and each negative example corresponds to a category.
  • each positive or negative example can be regarded as a category; the nodes to be classified in the original graph data and the positive and negative example nodes with known category membership are determined, and the connection relationship between each node to be classified and each positive example node and each negative example node is then determined.
  • whether there is a connection relationship between two nodes can be determined by the following formula, where u and v are the two nodes whose connection relationship is to be determined, D^rec(·,·) is the decoder of a Neural Tensor Network (NTN) model, and g^* is the graph structure with noise obtained after randomly deleting some existing edges from the input graph G; with g^* as input, the graph network model yields the encoder representation F^rec(g^*):
    Â_{u,v} = D^rec(F^rec(g^*)[u], F^rec(g^*)[v])
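  • As a rough sketch of this reconstruction step (the bilinear decoder below is a common simplification of the full NTN scoring function; the embedding size and the `encoder` producing F^rec(g^*) are assumptions):

```python
import torch
import torch.nn as nn

class NTNDecoder(nn.Module):
    """Scores a candidate edge (u, v) from node embeddings, playing the role of D^rec."""
    def __init__(self, dim: int, slices: int = 4):
        super().__init__()
        self.bilinear = nn.Bilinear(dim, dim, slices)  # tensor (bilinear) part of the NTN
        self.linear = nn.Linear(2 * dim, slices)       # standard feed-forward part
        self.out = nn.Linear(slices, 1)

    def forward(self, h_u: torch.Tensor, h_v: torch.Tensor) -> torch.Tensor:
        hidden = torch.tanh(self.bilinear(h_u, h_v) + self.linear(torch.cat([h_u, h_v], -1)))
        return torch.sigmoid(self.out(hidden))  # predicted probability that edge (u, v) exists

# usage sketch: h = encoder(noisy_graph) would give F^rec(g^*), shape [num_nodes, dim]
# a_uv = NTNDecoder(dim=64)(h[u], h[v])
```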
  • Figure 3 is a schematic diagram of the graph network pre-training process provided in this embodiment.
  • x ⁇ q, x ⁇ (k_0), x ⁇ (k_1) and x ⁇ (k_2) are four neighbor subgraphs.
  • x ⁇ q and x ⁇ (k_0) correspond to one starting node
  • x ⁇ (k_1) and x ⁇ (k_2) correspond to another starting node
  • for the neighbor subgraph x ⁇ q, x ⁇ (k_0 ) is its positive example
  • x ⁇ (k_1) and x ⁇ (k_2) are its negative examples.
  • Encoding these four neighbor subgraphs yields the vectors q, k_0, k_1 and k_2 respectively. With the encoded vectors, similarity calculations can be performed and the contrastive loss can be computed.
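  • In code, the comparison in Figure 3 reduces to dot-product similarities between the query vector and the key vectors, with k_0 as the positive; a sketch with stand-in values (the encoder that would produce q and the k_i is assumed):

```python
import torch
import torch.nn.functional as F

q = torch.randn(128)                                      # encoded query subgraph x^q
keys = torch.stack([torch.randn(128) for _ in range(3)])  # k_0 (positive), k_1, k_2

logits = keys @ q / 0.07                                  # similarities, scaled by a temperature
loss = F.cross_entropy(logits.unsqueeze(0), torch.tensor([0]))  # index 0 marks the positive k_0
```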
  • S2110. Determine the category of the node to be classified according to the categories to which the connected nodes of the node to be classified belong.
  • the similarity between the node to be classified and multiple categories can be determined based on the connected nodes of the node to be classified, and the category to which each node to be classified can be determined based on the similarity, thereby completing the final node classification task.
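  • One simple reading of this rule is a majority vote over the categories of the examples a node connects into (a sketch under that assumption; `is_connected` would wrap the decoder formula above, and `example_nodes` maps each category to the nodes of one positive or negative example):

```python
from collections import Counter

def classify(node, example_nodes: dict, is_connected) -> str:
    """Pick the category whose example nodes the given node connects to most often."""
    votes = Counter(category for category, members in example_nodes.items()
                    for member in members if is_connected(node, member))
    return votes.most_common(1)[0][0] if votes else "unclassified"
```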
  • the embodiment of this application randomly sorts the nodes in the original graph data, determines the unlabeled masked nodes among them, removes the masked nodes and determines the remaining nodes as target nodes, and constructs an initial graph network model based on the degree information of the target nodes; it then performs parameter debugging on the initial graph network model and determines the parameter-debugged graph network model as the target graph network model; next, at least two starting nodes are determined in the target graph network model, and the neighbor subgraphs corresponding to each starting node are generated centered on that starting node; the neighbor subgraphs corresponding to the same starting node are determined as positive examples and those corresponding to different starting nodes as negative examples; the nodes to be classified in the original graph data, the positive example nodes included in the positive examples, and the negative example nodes included in the negative examples are then obtained, the connection relationships between the nodes to be classified and the positive and negative example nodes are determined, and finally the category of each node to be classified is determined according to the categories to which its connected nodes belong.
  • the graph network model node classification method provided by the embodiment of the present application labels the unlabeled data in the original graph data by pre-training the graph network model, so that the originally unlabeled data can be used for learning in the final node classification task, improving the efficiency and accuracy of graph neural network learning.
  • Figure 4 is a schematic structural diagram of a graph network model node classification device provided in Embodiment 3 of the present application. As shown in Figure 4, the device includes: an initial network model construction module 310, an initial network model adjustment module 320 and a positive and negative example construction module 330.
  • the initial network model building module 310 is configured to build an initial graph network model based on the original graph data.
  • the initial network model building module 310 is configured to: randomly sort the nodes in the original graph data; determine unlabeled masked nodes in the nodes, remove masked nodes from the nodes, and determine the remaining nodes as target nodes; Build an initial graph network model based on the degree information of the target node.
  • the initial network model adjustment module 320 is configured to adjust the initial graph network model to obtain the target graph network model.
  • the initial network model adjustment module 320 is configured to: perform parameter debugging on the initial graph network model; and determine the graph network model after parameter debugging as the target graph network model.
  • the initial network model adjustment module 320 is configured to perform parameter debugging on the initial graph network model through the following methods: determining a pair of nodes in the initial graph network model; determining the loss function value corresponding to the pair of nodes, and based on the loss function value Adjust the parameters of the initial graph network model.
  • the initial network model adjustment module 320 is configured to perform parameter debugging on the initial graph network model through the following method: creating node-level subtask test sets and graph-level graph task test sets based on the original graph data; training the initial graph network model using the subtask test sets and the graph task test sets; and adjusting the parameters of the initial graph network model based on the training results.
  • the positive example and negative example construction module 330 is configured to use the target graph network model to construct positive examples and negative examples, and classify the nodes in the original graph data according to the positive examples and negative examples.
  • the positive example and negative example construction module 330 is configured to use the target graph network model to construct positive examples and negative examples in the following manner: determine at least two starting nodes in the target graph network model; generate, centered on each starting node, the neighbor subgraph(s) corresponding to that starting node, where the number of neighbor subgraphs corresponding to each starting node is at least one; determine, among all neighbor subgraphs, each neighbor subgraph that has the same starting node as a given neighbor subgraph as a positive example of that neighbor subgraph, and determine each neighbor subgraph that has a different starting node from the given neighbor subgraph as a negative example of that neighbor subgraph.
  • the positive example and negative example construction module 330 is configured to classify the nodes in the original graph data according to the positive examples and negative examples in the following manner: obtain the nodes to be classified in the original graph data, the positive example nodes included in each positive example, and the negative example nodes included in each negative example; determine the connection relationships between the nodes to be classified and the positive example nodes and negative example nodes; and determine the category of each node to be classified according to the categories to which its connected nodes belong.
  • the graph network model node classification device provided by the embodiments of this application can execute the graph network model node classification method provided by any embodiment of this application, and has functional modules and effects corresponding to the execution method.
  • FIG. 5 shows a schematic structural diagram of an electronic device 10 that can be used to implement embodiments of the present application.
  • Electronic devices may represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers.
  • Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smartphones, wearable devices (eg, helmets, glasses, watches, etc.), and other similar computing devices.
  • the components shown herein, their connections and relationships, and their functions are examples only and are not intended to limit the implementation of the present application as described and/or claimed herein.
  • the electronic device 10 includes at least one processor 11, and a memory communicatively connected to the at least one processor 11, such as a read-only memory (Read-Only Memory, ROM) 12, a random access memory (Random Access Memory, RAM) 13, etc., in which the memory stores computer programs that can be executed by at least one processor.
  • the processor 11 can perform various appropriate actions and processes according to the computer program stored in the ROM 12 or the computer program loaded from the storage unit 18 into the RAM 13. The RAM 13 can also store the various programs and data required for the operation of the electronic device 10.
  • the processor 11, the ROM 12, and the RAM 13 are connected to one another via the bus 14. An input/output (I/O) interface 15 is also connected to the bus 14.
  • multiple components of the electronic device 10 are connected to the I/O interface 15, including: an input unit 16, such as a keyboard or a mouse; an output unit 17, such as various types of displays and speakers; a storage unit 18, such as a magnetic disk or an optical disc; and a communication unit 19, such as a network card, a modem, or a wireless communication transceiver.
  • the communication unit 19 allows the electronic device 10 to exchange information/data with other devices through a computer network such as the Internet and/or various telecommunications networks.
  • Processor 11 may be any of various general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the processor 11 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various processors that run machine learning model algorithms, a digital signal processor (DSP), and any appropriate processor, controller, or microcontroller.
  • the processor 11 executes a plurality of methods and processes described above, such as graph network model node classification methods.
  • the graph network model node classification method may be implemented as a computer program, which is tangibly included in a computer-readable storage medium, such as the storage unit 18 .
  • part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into the RAM 13 and executed by the processor 11, one or more steps of the graph network model node classification method described above can be performed.
  • the processor 11 may be configured to perform the graph network model node classification method in any other suitable manner (eg, by means of firmware).
  • Various implementations of the systems and techniques described herein above can be realized in digital electronic circuit systems, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Parts (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof.
  • These various embodiments may include implementation in one or more computer programs executable and/or interpretable on a programmable system including at least one programmable processor; the programmable processor, which may be a special-purpose or general-purpose programmable processor, can receive data and instructions from a storage system, at least one input device, and at least one output device, and transmit data and instructions to the storage system, the at least one input device, and the at least one output device.
  • Computer programs for implementing the methods of the present application may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus, such that the computer program, when executed by the processor, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented.
  • a computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
  • a computer-readable storage medium may be a tangible medium that may contain or store a computer program for use by or in connection with an instruction execution system, apparatus, or device.
  • Computer-readable storage media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, devices or devices, or any suitable combination of the foregoing.
  • the computer-readable storage medium may be a machine-readable signal medium.
  • Machine-readable storage media may include electrical connections based on one or more wires, portable computer disks, hard drives, RAM, ROM, Erasable Programmable Read-Only Memory (EPROM) or flash memory, Optical fiber, portable compact disk read-only memory (CD-ROM), optical storage device, magnetic storage device, or any suitable combination of the above.
  • To provide interaction with a user, the systems and techniques described herein can be implemented on an electronic device having: a display apparatus configured to display information to the user (e.g., a cathode ray tube (CRT) or liquid crystal display (LCD) monitor); and a keyboard and pointing apparatus (e.g., a mouse or a trackball) through which the user can provide input to the electronic device.
  • Other kinds of apparatuses can also be configured to provide interaction with the user; for example, the feedback provided to the user may be any form of sensory feedback (e.g., visual, auditory, or tactile feedback), and input from the user may be received in any form, including acoustic, voice, or tactile input.
  • the systems and techniques described herein may be implemented in a computing system that includes a back-end component (e.g., as a data server), or a computing system that includes a middleware component (e.g., an application server), or a computing system that includes a front-end component (e.g., a user computer with a graphical user interface or web browser through which the user can interact with implementations of the systems and techniques described herein), or a computing system that includes any combination of such back-end, middleware, or front-end components.
  • the components of the system may be interconnected by any form or medium of digital data communication (eg, a communications network). Examples of communication networks include: Local Area Network (LAN), Wide Area Network (WAN), blockchain network, and the Internet.
  • Computing systems may include clients and servers.
  • Clients and servers are generally remote from each other and typically interact over a communications network.
  • the relationship of client and server is created by computer programs running on corresponding computers and having a client-server relationship with each other.
  • the server can be a cloud server, also known as a cloud computing server or cloud host, which is a host product in the cloud computing service system that overcomes the defects of difficult management and weak business scalability found in traditional physical host and Virtual Private Server (VPS) services.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A graph network model node classification method, comprising: constructing an initial graph network model according to original graph data; adjusting the initial graph network model to obtain a target graph network model; and constructing positive examples and negative examples by using the target graph network model, and classifying nodes in the original graph data according to the positive examples and negative examples.

Description

Graph network model node classification method, apparatus, device, and storage medium
This application claims priority to Chinese patent application No. 202210251047.X, filed with the Chinese Patent Office on March 15, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of graph network technology, for example, to a graph network model node classification method, apparatus, device, and storage medium.
Background
In the training and learning of graph network models, if sufficient data and labels are available, supervised learning can produce very good results. In real life, however, we often have a large amount of data but only a small number of labels; labeling data takes considerable effort, and simply discarding the unlabeled data would be a waste.
Taking the Excel table recognition scenario as an example, table recognition model training methods in the related art mainly include general table training methods based on a Graph Convolutional Network (GCN), table recognition training methods based on the YOLO network model, and table recognition training methods based on Faster Region-Convolutional Neural Networks (Faster R-CNN).
However, the first method is based on a neural network model and requires a large amount of labeled data; obtaining large amounts of manually labeled data is expensive, and training a GCN model from scratch is also very costly, which is not conducive to practical applications. The second method uses only a Convolutional Neural Network (CNN) to directly predict the categories and locations of different targets, which cannot guarantee accuracy. The third method's sliding-window region selection strategy is untargeted, has high time complexity and redundant windows, and its hand-designed features are not robust to diverse variations.
Summary
This application provides a graph network model node classification method, apparatus, device, and storage medium, to improve the efficiency and accuracy of node classification.
According to one aspect of the present application, a graph network model node classification method is provided, including:
constructing an initial graph network model according to original graph data;
adjusting the initial graph network model to obtain a target graph network model;
constructing positive examples and negative examples by using the target graph network model, and classifying nodes in the original graph data according to the positive examples and negative examples.
According to another aspect of the present application, a graph network model node classification apparatus is provided, including:
an initial network model construction module, configured to construct an initial graph network model according to original graph data;
an initial network model adjustment module, configured to adjust the initial graph network model to obtain a target graph network model;
a positive and negative example construction module, configured to construct positive examples and negative examples by using the target graph network model, and to classify nodes in the original graph data according to the positive examples and negative examples.
According to another aspect of the present application, an electronic device is provided, including:
at least one processor; and
a memory communicatively connected to the at least one processor, wherein the memory stores a computer program executable by the at least one processor, and the computer program is executed by the at least one processor so that the at least one processor can perform the graph network model node classification method described in any embodiment of the present application.
According to another aspect of the present application, a computer-readable storage medium is provided, which stores computer instructions used to cause a processor, when executing them, to implement the graph network model node classification method described in any embodiment of the present application.
Brief Description of the Drawings
The drawings used in the description of the embodiments are briefly introduced below.
Fig. 1 is a flowchart of a graph network model node classification method provided in Embodiment One of the present application;
Fig. 2 is a flowchart of a graph network model node classification method provided in Embodiment Two of the present application;
Fig. 3 is a schematic diagram of a graph network pre-training process provided in Embodiment Two of the present application;
Fig. 4 is a schematic structural diagram of a graph network model node classification apparatus provided in Embodiment Three of the present application;
Fig. 5 is a schematic structural diagram of an electronic device that implements the graph network model node classification method of Embodiment Four of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below with reference to the drawings in the embodiments of the present application.
It should be noted that the terms "first", "second", and the like in the specification, claims, and drawings of the present application are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data used in this way are interchangeable where appropriate, so that the embodiments of the present application described herein can be implemented in orders other than those illustrated or described herein. In addition, the terms "include" and "have" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to the steps or units expressly listed, but may include other steps or units not expressly listed or inherent to such process, method, product, or device.
Embodiment One
Fig. 1 is a flowchart of a graph network model node classification method provided in Embodiment One of the present application. This embodiment is applicable to situations where a graph network model is used to classify nodes. The method can be executed by a graph network model node classification apparatus, which can be implemented in the form of hardware and/or software and can be configured in an electronic device. As shown in Fig. 1, the method includes:
S110. Construct an initial graph network model according to original graph data.
The original graph data is graph data containing multiple nodes to be classified, and the initial graph network model is an unadjusted rough model constructed from the original graph data.
In this embodiment, the original graph data contains labeled sample data and unlabeled sample data. When constructing the initial graph network model from the original graph data, the labeled sample data can be analyzed to obtain the degree information of each node, and the graph structure can be used to construct the initial graph network model; the number of neighbor nodes of each node in the initial graph network model corresponds to the degree information of that node.
S120. Adjust the initial graph network model to obtain a target graph network model.
The target graph network model is the model obtained after adjusting the initial graph network model.
In this embodiment, the initial graph network model is constructed using the labeled data (labeled sample data) in the original graph data. The initial graph network model can be regarded as an encoder and can be used to generate the attribute features and edge structures of the nodes in the unlabeled data. Through the initial graph network model, labels can be "created" for the unlabeled data (unlabeled sample data), that is, the unlabeled data is converted into labeled data; this process can be called pre-training of the initial graph network model. The initial graph network model is adjusted during the pre-training process, and the target graph network model is obtained after the adjustment is completed.
S130. Construct positive examples and negative examples using the target graph network model, and classify the nodes in the original graph data according to the positive examples and negative examples.
For a graph network model, a random walk starting from any node can generate a neighbor subgraph centered on that node. Neighbor subgraphs generated from the same central node can be considered to have similar structural attributes and are therefore treated as positive examples; neighbor subgraphs generated from different nodes (including nodes in the same network or in different networks) have unique structural attributes related to their central nodes, that is, neighbor subgraphs generated from different nodes have no structural similarity to one another, and are therefore treated as negative examples.
In this embodiment, positive examples and negative examples can be constructed using the target graph network model. Each positive example can serve as a category, and each negative example can also serve as a category; the original graph data is then fed into the target graph network model to complete the final node classification task. Optionally, the connection relationships between a node to be classified and the nodes in each positive example and each negative example can be determined, and the category of the node to be classified is determined according to the category corresponding to the subgraph containing the nodes connected to it.
In this embodiment of the present application, an initial graph network model is constructed according to original graph data, the initial graph network model is adjusted to obtain a target graph network model, positive examples and negative examples are constructed using the target graph network model, and the nodes in the original graph data are classified according to the positive examples and negative examples. The graph network model node classification method provided by this embodiment of the present application labels the unlabeled data in the original graph data by pre-training the graph network model, so that the originally unlabeled data can be used for learning in the final node classification task, improving the efficiency and accuracy of graph neural network learning.
Embodiment Two
Fig. 2 is a flowchart of a graph network model node classification method provided in Embodiment Two of the present application. As shown in Fig. 2, the method includes:
S210. Randomly sort the nodes in the original graph data.
In this embodiment, when constructing the initial graph network model, the nodes in the original graph data can be randomly sorted, and the sorted original graph data is used to construct the initial graph network model.
S220. Determine the unlabeled masked nodes among the nodes, remove the masked nodes from the nodes, and determine the remaining nodes as target nodes.
Masked nodes are the nodes in the unlabeled sample data.
In this embodiment, after the masked nodes are determined, their attribute features and edge structures can be masked. By masking the unlabeled sample data in the original graph data, the influence of the unlabeled sample data can be eliminated, and the labeled sample data can be used as target nodes to construct the graph network model.
S230. Construct the initial graph network model according to the degree information of the target nodes.
Degree information can represent the number and direction of the edges connected to a node.
In this embodiment, by obtaining the degree information of the target nodes, the graph information related to the labeled sample data can be obtained; the attribute features and edge structures of the target nodes can then be generated one by one from this graph information, in the order determined in the above steps, until the construction of the entire initial graph network model is completed.
S240. Perform parameter debugging on the initial graph network model.
In this embodiment, pre-training the initial graph network model can label the unlabeled data. Since the initial graph network model is a relatively rough model, the key to pre-training is to learn, during the pre-training process, how to fine-tune the model.
Optionally, the parameters of the initial graph network model can be debugged as follows: determine a pair of nodes in the initial graph network model; determine the loss function value corresponding to the pair of nodes, and adjust the parameters of the initial graph network model according to the loss function value.
Exemplarily, after the initial graph network model is constructed, its parameters can be debugged. Optionally, graph model transfer and data sample transfer can be used to debug the data parameters of a new learning task, for example by modifying the learning rate, the optimizer, the number of network layers, the hyperparameters, and so on, so that the pre-trained model can be quickly and effectively adapted to the target downstream task. For a pair of nodes in the initial graph network model, the overall vector representation of the subgraph in the latent space can be obtained through graph neural network encoding, and the pre-training task of the graph network model can be expressed as finding, in a dictionary under the latent-space representation, the key subgraph (key) k0 similar to the query subgraph (query) q; that is, the Info Noise Contrastive Estimation (InfoNCE) loss function commonly used in contrastive learning is adopted, and this loss function can also be fine-tuned during few-shot meta-task learning. The loss function formula is as follows, where u and v are a pair of nodes in the initial graph network model and A denotes the graph network in which the nodes are located:
In addition, the graph network model can adopt a contextual text embedding method to maintain a task support set that is large enough and supports dynamic updates.
Optionally, the parameters of the initial graph network model can also be debugged as follows: create node-level subtask test sets and graph-level graph task test sets according to the original graph data; train the initial graph network model using the subtask test sets and the graph task test sets respectively; and adjust the parameters of the initial graph network model according to the training results.
To capture both local and global information in the graph, the graph network model can adopt a dual adaptation mechanism at the node level and the graph level, where the node level is the learning of few-shot masked nodes, and the graph level is the learning of the pre-trained model on public data sets. For a given pre-training graph data set, the graph network model can first create several node-level subtasks and graph-level tasks on the data set, and the data set of each training task is divided into a support set and a test set. During pre-training, the meta-model performs dual adaptation adjustments on the subtask support sets and the graph task support sets, and performs gradient backpropagation on the subtask test sets and the graph task test sets according to the computed loss function, thereby adjusting the parameters of the initial graph network model.
S250. Determine the parameter-debugged graph network model as the target graph network model.
In this embodiment, the target graph network model can be obtained after the parameters of the initial graph network model have been debugged.
S260. Determine at least two start nodes in the target graph network model.
A start node can be any node in the target graph network model; walking through the target graph network model from a start node can generate neighbor subgraphs.
In this embodiment, to construct the positive and negative examples used in pre-training, at least two nodes can be determined in the target graph network model as start nodes for the generation of neighbor subgraphs in the next step.
S270. Generate, centered on each start node, the neighbor subgraphs corresponding to that start node.
In this embodiment, after multiple start nodes are determined, the neighbor subgraphs corresponding to each start node can be generated centered on that start node, and the same start node can correspond to one or more neighbor subgraphs.
S280. Determine each neighbor subgraph that has the same start node as a given neighbor subgraph as a positive example of that neighbor subgraph, and determine each neighbor subgraph that has a different start node as a negative example of that neighbor subgraph.
In this embodiment, the neighbor subgraphs generated from the same central node can be considered to have similar structural attributes and are therefore treated as positive examples; the neighbor subgraphs generated from different nodes (including nodes in the same network or in different networks) have unique structural attributes related to their central nodes, that is, neighbor subgraphs generated from different nodes have no structural similarity to one another, and are therefore treated as negative examples.
S290. Obtain the nodes to be classified in the original graph data, the positive example nodes included in each positive example, and the negative example nodes included in each negative example.
Exemplarily, each positive example corresponds to one category and each negative example corresponds to one category.
In this embodiment, to complete the final node classification task, each positive or negative example can be treated as a category; the nodes to be classified in the original graph data and the positive and negative example nodes with known category membership are determined, and the connection relationships between the nodes to be classified and each positive example node and each negative example node are then determined.
S2100. Determine the connection relationships between the nodes to be classified and the positive example nodes and negative example nodes.
In this embodiment, whether a connection relationship exists between two nodes can be determined by the following formula, where u and v are the two nodes whose connection relationship is to be determined, D^rec(·,·) is the decoder of a Neural Tensor Network (NTN) model, and g^* is the graph structure with noise obtained after randomly deleting some existing edges from the input graph G; with g^* as input, the graph network model yields the encoder representation F^rec(g^*):
Â_{u,v} = D^rec(F^rec(g^*)[u], F^rec(g^*)[v])
Fig. 3 is a schematic diagram of a graph network pre-training process provided in this embodiment. As shown in Fig. 3, x^q, x^{k_0}, x^{k_1} and x^{k_2} are four neighbor subgraphs, where x^q and x^{k_0} correspond to one start node and x^{k_1} and x^{k_2} correspond to another start node; for the neighbor subgraph x^q, x^{k_0} is its positive example and x^{k_1} and x^{k_2} are its negative examples. Encoding these four neighbor subgraphs yields the vectors q, k_0, k_1 and k_2 respectively; with the encoded vectors, similarity can be computed and the contrastive loss evaluated.
S2110. Determine the category of each node to be classified according to the categories to which its connected nodes belong.
In this embodiment, the similarity between a node to be classified and the divided categories can be determined from its connected nodes, and the category to which each node to be classified belongs can be determined from this similarity, thereby completing the final node classification task.
In this embodiment of the present application, the nodes in the original graph data are randomly sorted; the unlabeled masked nodes among them are determined, the masked nodes are removed, and the remaining nodes are determined as target nodes; the initial graph network model is constructed according to the degree information of the target nodes, its parameters are debugged, and the parameter-debugged graph network model is determined as the target graph network model; at least two start nodes are determined in the target graph network model, and the neighbor subgraphs corresponding to each start node are generated centered on that start node; the neighbor subgraphs corresponding to the same start node are determined as positive examples and those corresponding to different start nodes as negative examples; the nodes to be classified in the original graph data, the positive example nodes included in the positive examples, and the negative example nodes included in the negative examples are obtained; the connection relationships between the nodes to be classified and the positive example nodes and negative example nodes are determined; and finally the category of each node to be classified is determined according to the categories to which its connected nodes belong. The graph network model node classification method provided by this embodiment of the present application labels the unlabeled data in the original graph data by pre-training the graph network model, so that the originally unlabeled data can be used for learning in the final node classification task, improving the efficiency and accuracy of graph neural network learning.
Embodiment Three
Fig. 4 is a schematic structural diagram of a graph network model node classification apparatus provided in Embodiment Three of the present application. As shown in Fig. 4, the apparatus includes an initial network model construction module 310, an initial network model adjustment module 320, and a positive and negative example construction module 330.
The initial network model construction module 310 is configured to construct an initial graph network model according to original graph data.
Optionally, the initial network model construction module 310 is configured to: randomly sort the nodes in the original graph data; determine the unlabeled masked nodes among the nodes, remove the masked nodes, and determine the remaining nodes as target nodes; and construct the initial graph network model according to the degree information of the target nodes.
The initial network model adjustment module 320 is configured to adjust the initial graph network model to obtain a target graph network model.
Optionally, the initial network model adjustment module 320 is configured to: perform parameter debugging on the initial graph network model; and determine the parameter-debugged graph network model as the target graph network model.
Optionally, the initial network model adjustment module 320 is configured to perform parameter debugging on the initial graph network model as follows: determine a pair of nodes in the initial graph network model; determine the loss function value corresponding to the pair of nodes, and adjust the parameters of the initial graph network model according to the loss function value.
Optionally, the initial network model adjustment module 320 is configured to perform parameter debugging on the initial graph network model as follows: create node-level subtask test sets and graph-level graph task test sets according to the original graph data; train the initial graph network model using the subtask test sets and the graph task test sets respectively; and adjust the parameters of the initial graph network model according to the training results.
The positive and negative example construction module 330 is configured to construct positive examples and negative examples using the target graph network model, and to classify the nodes in the original graph data according to the positive examples and negative examples.
Optionally, the positive and negative example construction module 330 is configured to construct positive examples and negative examples using the target graph network model as follows: determine at least two start nodes in the target graph network model; generate, centered on each start node, the neighbor subgraphs corresponding to that start node, where the number of neighbor subgraphs corresponding to each start node is at least one; determine, among all neighbor subgraphs, each neighbor subgraph that has the same start node as a given neighbor subgraph as a positive example of that neighbor subgraph, and determine each neighbor subgraph that has a different start node as a negative example of that neighbor subgraph.
Optionally, the positive and negative example construction module 330 is configured to classify the nodes in the original graph data according to the positive and negative examples as follows: obtain the nodes to be classified in the original graph data, the positive example nodes included in each positive example, and the negative example nodes included in each negative example; determine the connection relationships between the nodes to be classified and the positive example nodes and negative example nodes; and determine the category of each node to be classified according to the categories to which its connected nodes belong.
The graph network model node classification apparatus provided by the embodiments of the present application can execute the graph network model node classification method provided by any embodiment of the present application, and has the functional modules and effects corresponding to the executed method.
Embodiment Four
Fig. 5 shows a schematic structural diagram of an electronic device 10 that can be used to implement embodiments of the present application. Electronic devices may represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers. Electronic devices may also represent various forms of mobile apparatuses, such as personal digital assistants, cellular phones, smartphones, wearable devices (e.g., helmets, glasses, watches), and other similar computing apparatuses. The components shown herein, their connections and relationships, and their functions are examples only and are not intended to limit the implementation of the present application described and/or claimed herein.
As shown in Fig. 5, the electronic device 10 includes at least one processor 11 and a memory communicatively connected to the at least one processor 11, such as a read-only memory (ROM) 12 and a random access memory (RAM) 13, where the memory stores a computer program executable by the at least one processor. The processor 11 can perform various appropriate actions and processes according to the computer program stored in the ROM 12 or the computer program loaded from the storage unit 18 into the RAM 13. The RAM 13 can also store the various programs and data required for the operation of the electronic device 10. The processor 11, the ROM 12, and the RAM 13 are connected to one another via a bus 14. An input/output (I/O) interface 15 is also connected to the bus 14.
Multiple components of the electronic device 10 are connected to the I/O interface 15, including: an input unit 16, such as a keyboard or a mouse; an output unit 17, such as various types of displays and speakers; a storage unit 18, such as a magnetic disk or an optical disc; and a communication unit 19, such as a network card, a modem, or a wireless communication transceiver. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices through a computer network such as the Internet and/or various telecommunications networks.
The processor 11 may be any of various general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the processor 11 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various processors that run machine learning model algorithms, a digital signal processor (DSP), and any appropriate processor, controller, or microcontroller. The processor 11 executes the methods and processes described above, such as the graph network model node classification method.
In some embodiments, the graph network model node classification method may be implemented as a computer program tangibly embodied in a computer-readable storage medium, such as the storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into the RAM 13 and executed by the processor 11, one or more steps of the graph network model node classification described above can be performed. Alternatively, in other embodiments, the processor 11 may be configured to execute the graph network model node classification method in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described herein above can be realized in digital electronic circuit systems, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Parts (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs executable and/or interpretable on a programmable system including at least one programmable processor; the programmable processor, which may be a special-purpose or general-purpose programmable processor, can receive data and instructions from a storage system, at least one input device, and at least one output device, and transmit data and instructions to the storage system, the at least one input device, and the at least one output device.
Computer programs for implementing the methods of the present application may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus, such that the computer program, when executed by the processor, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. A computer program may execute entirely on a machine, partly on a machine, partly on a machine and partly on a remote machine as a stand-alone software package, or entirely on a remote machine or server.
In the context of the present application, a computer-readable storage medium may be a tangible medium that can contain or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. A computer-readable storage medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing. Alternatively, the computer-readable storage medium may be a machine-readable signal medium. Machine-readable storage media may include electrical connections based on one or more wires, portable computer disks, hard disks, RAM, ROM, Erasable Programmable Read-Only Memory (EPROM) or flash memory, optical fibers, portable Compact Disc Read-Only Memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the foregoing.
To provide interaction with a user, the systems and techniques described herein can be implemented on an electronic device having: a display apparatus configured to display information to the user (e.g., a cathode ray tube (CRT) or liquid crystal display (LCD) monitor); and a keyboard and pointing apparatus (e.g., a mouse or a trackball) through which the user can provide input to the electronic device. Other kinds of apparatuses can also be configured to provide interaction with the user; for example, the feedback provided to the user may be any form of sensory feedback (e.g., visual, auditory, or tactile feedback), and input from the user may be received in any form, including acoustic, voice, or tactile input.
The systems and techniques described herein may be implemented in a computing system that includes a back-end component (e.g., as a data server), or a computing system that includes a middleware component (e.g., an application server), or a computing system that includes a front-end component (e.g., a user computer with a graphical user interface or web browser through which the user can interact with implementations of the systems and techniques described herein), or a computing system that includes any combination of such back-end, middleware, or front-end components. The components of the system may be interconnected by digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network (LAN), a wide area network (WAN), a blockchain network, and the Internet.
A computing system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises from computer programs running on the respective computers and having a client-server relationship with each other. The server may be a cloud server, also known as a cloud computing server or cloud host, which is a host product in the cloud computing service system that overcomes the defects of difficult management and weak business scalability found in traditional physical host and Virtual Private Server (VPS) services.
It should be understood that steps may be reordered, added, or deleted using the various forms of flows shown above. For example, the steps described in the present application may be executed in parallel, sequentially, or in a different order, as long as the desired result of the technical solution of the present application can be achieved; no limitation is imposed herein.

Claims (10)

  1. A graph network model node classification method, comprising:
    constructing an initial graph network model according to original graph data;
    adjusting the initial graph network model to obtain a target graph network model;
    constructing positive examples and negative examples by using the target graph network model, and classifying nodes in the original graph data according to the positive examples and negative examples.
  2. The method according to claim 1, wherein constructing an initial graph network model according to original graph data comprises:
    randomly sorting the nodes in the original graph data;
    determining unlabeled masked nodes among the nodes, removing the masked nodes, and determining the remaining nodes as target nodes;
    constructing the initial graph network model according to degree information of the target nodes.
  3. The method according to claim 1, wherein adjusting the initial graph network model to obtain a target graph network model comprises:
    performing parameter debugging on the initial graph network model;
    determining the parameter-debugged graph network model as the target graph network model.
  4. The method according to claim 3, wherein performing parameter debugging on the initial graph network model comprises:
    determining a pair of nodes in the initial graph network model;
    determining a loss function value corresponding to the pair of nodes, and adjusting the parameters of the initial graph network model according to the loss function value.
  5. The method according to claim 3, wherein performing parameter debugging on the initial graph network model comprises:
    creating node-level subtask test sets and graph-level graph task test sets according to the original graph data;
    training the initial graph network model by using the subtask test sets and the graph task test sets respectively;
    adjusting the parameters of the initial graph network model according to training results.
  6. The method according to claim 1, wherein constructing positive examples and negative examples by using the target graph network model comprises:
    determining at least two start nodes in the target graph network model;
    generating, centered on each start node, the neighbor subgraphs corresponding to each start node, wherein the number of neighbor subgraphs corresponding to each start node is at least one;
    determining, among all the neighbor subgraphs, each neighbor subgraph that has the same start node as a given neighbor subgraph as a positive example of that neighbor subgraph, and determining each neighbor subgraph that has a different start node from the given neighbor subgraph as a negative example of that neighbor subgraph.
  7. The method according to claim, wherein each positive example corresponds to one category and each negative example corresponds to one category, and classifying the nodes in the original graph data according to the positive examples and negative examples comprises:
    obtaining the nodes to be classified in the original graph data, the positive example nodes included in each positive example, and the negative example nodes included in each negative example;
    determining the connection relationships between the nodes to be classified and the positive example nodes and the negative example nodes;
    determining the category of each node to be classified according to the categories to which its connected nodes belong.
  8. A graph network model node classification apparatus, comprising:
    an initial network model construction module, configured to construct an initial graph network model according to original graph data;
    an initial network model adjustment module, configured to adjust the initial graph network model to obtain a target graph network model;
    a positive and negative example construction module, configured to construct positive examples and negative examples by using the target graph network model, and to classify nodes in the original graph data according to the positive examples and negative examples.
  9. An electronic device, comprising:
    at least one processor; and
    a memory communicatively connected to the at least one processor, wherein the memory stores a computer program executable by the at least one processor, and the computer program is executed by the at least one processor so that the at least one processor can perform the graph network model node classification method according to any one of claims 1-7.
  10. A computer-readable storage medium storing computer instructions, wherein the computer instructions are used to cause a processor, when executing them, to implement the graph network model node classification method according to any one of claims 1-7.
PCT/CN2023/080970 2022-03-15 2023-03-13 Graph network model node classification method, apparatus, device, and storage medium WO2023174189A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210251047.XA CN114611609A (zh) 2022-03-15 2022-03-15 Graph network model node classification method, apparatus, device, and storage medium
CN202210251047.X 2022-03-15

Publications (1)

Publication Number Publication Date
WO2023174189A1 true WO2023174189A1 (zh) 2023-09-21

Family

ID=81863036

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/080970 WO2023174189A1 (zh) 2022-03-15 2023-03-13 图网络模型节点分类方法、装置、设备及存储介质

Country Status (2)

Country Link
CN (1) CN114611609A (zh)
WO (1) WO2023174189A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114611609A (zh) * 2022-03-15 2022-06-10 上海爱数信息技术股份有限公司 一种图网络模型节点分类方法、装置、设备及存储介质


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111026544A (zh) * 2019-11-06 2020-04-17 中国科学院深圳先进技术研究院 Node classification method and apparatus for graph network model, and terminal device
CN113011282A (zh) * 2021-02-26 2021-06-22 腾讯科技(深圳)有限公司 Graph data processing method and apparatus, electronic device, and computer storage medium
CN114611609A (zh) * 2022-03-15 2022-06-10 上海爱数信息技术股份有限公司 Graph network model node classification method, apparatus, device, and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HU, Ziniu; DONG, Yuxiao; WANG, Kuansan; CHANG, Kai-Wei; SUN, Yizhou: "GPT-GNN: Generative Pre-Training of Graph Neural Networks", Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD '20), 23 August 2020, pages 1857-1867, ISBN: 978-1-4503-8713-2, DOI: 10.1145/3394486.3403237 *
LU, Yuanfu; JIANG, Xunqiang; FANG, Yuan; SHI, Chuan: "Learning to Pre-train Graph Neural Networks", Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, no. 5, 18 May 2021, pages 4276-4284, ISSN: 2159-5399, DOI: 10.1609/aaai.v35i5.16552 *
QIU, Jiezhong; CHEN, Qibin; DONG, Yuxiao; ZHANG, Jing, et al.: "GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training", Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD '20), pages 1150-1160, ISBN: 978-1-4503-7998-4, DOI: 10.1145/3394486.3403168 *

Also Published As

Publication number Publication date
CN114611609A (zh) 2022-06-10

Similar Documents

Publication Publication Date Title
US11829880B2 (en) Generating trained neural networks with increased robustness against adversarial attacks
JP6790286B2 (ja) 強化学習を用いたデバイス配置最適化
KR102302609B1 (ko) 신경망 아키텍처 최적화
JP2022058915A (ja) 画像認識モデルをトレーニングするための方法および装置、画像を認識するための方法および装置、電子機器、記憶媒体、並びにコンピュータプログラム
US20220004811A1 (en) Method and apparatus of training model, device, medium, and program product
CN107301170B (zh) 基于人工智能的切分语句的方法和装置
US20210374542A1 (en) Method and apparatus for updating parameter of multi-task model, and storage medium
JP2021505993A (ja) 深層学習アプリケーションのための堅牢な勾配重み圧縮方式
CN111602148A (zh) 正则化神经网络架构搜索
US20220374776A1 (en) Method and system for federated learning, electronic device, and computer readable medium
JP2022018095A (ja) マルチモーダル事前訓練モデル取得方法、装置、電子デバイス及び記憶媒体
US11423307B2 (en) Taxonomy construction via graph-based cross-domain knowledge transfer
CN114970522B (zh) 语言模型的预训练方法、装置、设备、存储介质
US20210319262A1 (en) Model training, image processing method, device, storage medium, and program product
WO2023138188A1 (zh) 特征融合模型训练及样本检索方法、装置和计算机设备
US20230084055A1 (en) Method for generating federated learning model
CN111667056A (zh) 用于搜索模型结构的方法和装置
WO2023178965A1 (zh) 一种意图识别方法、装置、电子设备及存储介质
JP7412489B2 (ja) 連合学習方法及び装置、電子機器、記憶媒体ならびにコンピュータプログラム
WO2023174189A1 (zh) 图网络模型节点分类方法、装置、设备及存储介质
US20220374678A1 (en) Method for determining pre-training model, electronic device and storage medium
JP2023541742A (ja) ソートモデルのトレーニング方法及び装置、電子機器、コンピュータ可読記憶媒体、コンピュータプログラム
JP2023547010A (ja) 知識の蒸留に基づくモデルトレーニング方法、装置、電子機器
CN114357105A (zh) 地理预训练模型的预训练方法及模型微调方法
CN114492788A (zh) 训练深度学习模型的方法和装置、电子设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23769692

Country of ref document: EP

Kind code of ref document: A1