WO2023165352A1 - Business object classification method, apparatus, device and storage medium - Google Patents

Business object classification method, apparatus, device and storage medium

Info

Publication number
WO2023165352A1
WO2023165352A1 (PCT/CN2023/077115, CN2023077115W)
Authority
WO
WIPO (PCT)
Prior art keywords
node
weight
neural network
probability
graph neural
Prior art date
Application number
PCT/CN2023/077115
Other languages
English (en)
French (fr)
Inventor
李岩
Original Assignee
百果园技术(新加坡)有限公司
李岩
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 百果园技术(新加坡)有限公司, 李岩 filed Critical 百果园技术(新加坡)有限公司
Publication of WO2023165352A1

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20: Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/24: Querying
    • G06F 16/245: Query processing
    • G06F 16/2458: Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • G06F 16/2465: Query processing support for facilitating data mining operations in structured databases
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods

Definitions

  • the present application relates to the technical field of computer processing, for example, to a business object classification method, device, equipment and storage medium.
  • at present, the integration mechanism mostly uses the averaging method: multiple classification algorithms output multiple score vectors for one business object, and the average of all score vectors is computed as the final score vector.
  • the category corresponding to the largest score in the final score vector is the final classification result.
  • computing the average of multiple score vectors is a single, uniform treatment applied to all business objects, so the classification accuracy is low.
  • the present application provides a business object classification method, device, equipment and storage medium, to improve the accuracy of classification through the integration mechanism.
  • a method for classifying business objects includes: converting multiple business objects into a graph neural network, where each business object has a feature vector composed of multiple features and the nodes in the graph neural network represent the feature vectors of the corresponding business objects; for each node in the graph neural network, calculating the degree of association between the node and other nodes in the graph neural network as the node degree of the node; executing the graph neural network to output a first probability that the corresponding business object belongs to each category of multiple categories; executing a preset classification model to identify, using the feature vector, a second probability that the corresponding business object belongs to each of the multiple categories; for each category, fusing the first probability and the second probability into a third probability according to the node degree of the node; and determining, according to the third probabilities of the multiple categories, the category to which the business object belongs.
  • a device for classifying business objects including:
  • the graph neural network conversion module is configured to convert multiple business objects into a graph neural network, each business object has a feature vector composed of multiple features, and the nodes in the graph neural network represent the feature vectors of the corresponding business objects;
  • a node degree calculation module configured to, for each node in the graph neural network, calculate the degree of association between the node and other nodes in the graph neural network as the node degree of the node;
  • the first classification module is configured to execute the graph neural network to output the first probability that the corresponding business object belongs to each category of multiple categories;
  • the second classification module is configured to execute a preset classification model to use the feature vector to identify a second probability that the corresponding business object belongs to each of the plurality of categories;
  • a probability fusion module configured to, for each category, fuse the first probability and the second probability into a third probability according to the node degree of the node;
  • the category determining module is configured to determine the category to which the business object belongs according to the third probability of the multiple categories.
  • a business object classification device includes: at least one processor; and a memory communicatively connected to the at least one processor;
  • the memory stores a computer program that can be executed by the at least one processor, and the computer program is executed by the at least one processor so that the at least one processor can perform the business object classification method described in any embodiment of the present application.
  • a computer-readable storage medium stores a computer program, and the computer program is configured to cause a processor, when executing it, to implement the business object classification method described in any embodiment of the present application.
  • FIG. 1 is a flowchart of a method for classifying business objects according to Embodiment 1 of the present application
  • FIG. 2 is an example diagram of a graph neural network provided according to Embodiment 1 of the present application.
  • FIG. 3 is a schematic structural diagram of a business object classification device provided according to Embodiment 2 of the present application.
  • FIG. 4 is a schematic structural diagram of a business object classification device implementing the business object classification method of the embodiments of the present application.
  • FIG. 1 is a flowchart of a business object classification method provided in Embodiment 1 of the present application. This embodiment is applicable to the case of integrating the results of multiple classifications into a final classification result through an integration mechanism based on node degree. The method can be executed by a business object classification apparatus, which can be implemented in at least one of hardware and software and can be configured in a business object classification device. As shown in FIG. 1, the method includes:
  • Step 101: Convert multiple business objects into a graph neural network.
  • different business scenarios have different business objects, which are collections of data with the characteristics of a business field: for the user-oriented service field, the business object can be users; for the news media field, news data; for the mobile communication field, mobile communication data; for the Electronic Commerce (EC) field, advertising data; for the autonomous driving field, point clouds; and so on.
  • although business objects carry the business characteristics of different business fields, business objects are in essence still data, such as text information, image data, audio data, video data, and so on.
  • for example, if the business is identifying whether a user (client) is a web crawler, the behavior of the user (client) accessing Uniform Resource Identifiers (URIs) can be extracted as the feature vector, e.g., the number of accesses to various URIs, the times of accesses to various URIs, the density of accesses to various URIs, and so on.
  • in this embodiment, the business objects are converted into a graph neural network (Graph Neural Networks, GNN).
  • the graph neural network is a generalized neural network based on a graph structure; it is also a connectionist model that captures the dependencies of a graph through message passing between its nodes. Unlike standard neural networks, graph neural networks preserve a state that can represent information from a node's neighborhood at arbitrary depth.
  • a graph is a data structure that models a set of objects (nodes) and their relationships (edges).
  • in general, graph neural networks can be divided into five categories: Graph Convolution Networks (GCN), Graph Attention Networks, Graph Autoencoders, Graph Generative Networks, and Graph Spatial-temporal Networks.
  • multiple feature vectors of multiple business objects are input into the graph neural network, and multiple nodes and multiple edges are output, where the nodes in the graph neural network represent the feature vectors of the business objects and the edges in the graph neural network represent the relationships between business objects; that is, business objects connected by an edge have a certain association.
  • Step 102: For each node in the graph neural network, calculate the degree of association between the node and other nodes in the graph neural network as the node degree of the node.
  • in the graph neural network, some nodes have a certain association (also called correlation) with other nodes, reflected in forms such as the number and weights of edges; for each node, the degree of association between the node and other nodes can be calculated as its node degree.
  • the node degree of a node can be set to distinguish the type of the node, which includes community nodes and isolated nodes.
  • a community node, also known as a non-isolated node, is a node with a high degree of association with other nodes, expressed as a node degree greater than or equal to a preset association threshold; an isolated node is a node whose association with other nodes is low or even zero, expressed as a node degree less than the preset association threshold; that is, the node degree of a community node is greater than that of an isolated node.
  • taking the number of edges as an example, the number of edges connected to (also said to be associated with) each node can be counted as the node degree of that node, where one edge connects (associates) two nodes.
  • when the number of edges connected to a node equals 0, the node has no edge associated with any other node; in this case, the node is an isolated node.
  • when the number of edges connected to a node is greater than or equal to 1, the node has edges associated with other nodes; in this case, the node is a community node.
  • the association threshold can then be set to 1: an isolated node can be a node whose node degree is less than 1 (the association threshold), and a community node can be a node whose node degree is greater than or equal to 1 (the association threshold).
  • in one counting method, let the set of nodes in the graph neural network be V = {v1, v2, …, vp} and the set of edges be E = {e1, e2, …, eq}; for each node vi in the graph neural network, vi ∈ V, the node degree of vi is initialized to zero.
  • each edge ei = (vj, vk) in the graph neural network is queried, ei ∈ E, meaning that ei connects node vj and node vk, vj, vk ∈ V.
  • for each edge, the node degree of each of the two nodes connected by the edge is incremented by 1.
  • in the graph neural network shown in FIG. 2 there are nine nodes: the node degree of node v1 is 5, of node v2 is 2, of node v3 is 2, of node v4 is 0, of node v5 is 2, of node v6 is 5, of node v7 is 4, of node v8 is 3, and of node v9 is 1; nodes v1, v2, v3, v5, v6, v7, v8, and v9 are all community nodes, and node v4 is an isolated node.
  • the above method of calculating node degree is just an example; when implementing this embodiment, other ways of calculating node degree can be set according to the actual situation, for example, summing the weights of all edges associated with a node as its node degree, which is not limited in this embodiment of the present application.
  • in addition, those skilled in the art may also use other manners of calculating node degree according to actual needs, which is not limited in this embodiment either.
  • Step 103: Execute the graph neural network to output the first probability that the business object belongs to each preset category.
  • a complete graph neural network can be trained in advance in an end-to-end manner according to the business requirements of the business scenario.
  • the graph neural network uses the underlying graph as its computation graph and learns neural network primitives by passing, transforming, and aggregating node features over the entire graph, so as to generate an embedding vector for each single node, i.e., a feature vector; the generated embedding vector can be used as the input of a differentiable prediction layer for the classification of nodes.
  • for each node vi, the graph neural network outputs an m-dimensional vector Si = (s1, s2, …, sm), where st denotes the probability (also called score) that node vi belongs to the preset category Ct, recorded as the first probability, where t ∈ {1, …, m} and m is a positive integer.
  • Step 104: Execute a preset classification model to identify, using the feature vector, the second probability that the business object belongs to each preset category.
  • at least one classification model can be trained in advance according to the business requirements of the business scenario.
  • the classification model is a model independent of the graph neural network and classifies based on feature vectors alone; that is, its input is a business object represented only by its feature vector, and it outputs the probability (also called score) that the business object belongs to each preset category.
  • the classification model can apply a machine learning algorithm, for example, a support vector machine (support vector machines, SVM) or a gradient boosting machine (Light Gradient Boosting Machine, LightGBM), or a deep learning algorithm, for example, a Convolutional Neural Network (CNN) or a Recurrent Neural Network (RNN).
  • for deep learning algorithms, the structure of the classification model is not limited to manually designed neural networks; it may also be a neural network optimized by model quantization methods, a neural network found by a Neural Architecture Search (NAS) method targeting the business requirements, and so on, which is not limited in this embodiment.
  • for each node vi, the input is the n-dimensional feature vector Feati = {f1, f2, …, fn} of the business object, and each classification model outputs an m-dimensional vector Si = (s1, s2, …, sm), where st denotes the probability (also called score) that node vi belongs to the preset category Ct, recorded as the second probability, where t ∈ {1, …, m} and m is a positive integer.
  • Step 105: For the same category, fuse the first probability and the second probability into a third probability according to the node degree.
  • for the same business object, the graph neural network predicts the first probabilities that the business object belongs to multiple (at least two) categories, and at the same time each classification model predicts the second probabilities that the business object belongs to multiple (at least two) categories; the categories into which the graph neural network divides business objects are the same as those into which the classification model divides them, so for the same category, the same business object has at least two probabilities of belonging to that category (i.e., the first probability and at least one second probability).
  • the graph neural network has an advantage in classifying and predicting community nodes: the features of business objects are propagated between nodes through the edges between them, and the more edges a node is associated with, the richer the feature propagation, thereby optimizing the results of community mining; however, for isolated nodes, which have no associated edges or only sparse ones, the classification effect of the graph neural network is often poor.
  • the classification model takes the feature vector of a single piece of business data as input and does not consider the associations between business objects; it performs better for the classification of isolated nodes with no or relatively sparse associations, but for business objects with rich associations, because those associations are ignored, the classification effect is poor.
  • for the same category, the degree of association between business objects, as characterized by node degree, can therefore be used as a parameter for adjusting the first probability and the second probability: the first probability and the second probability are linearly or nonlinearly fused into the third probability through the node degree, and the third probability is a comprehensive measure of the classification result of the graph neural network and that of the classification model.
  • in an embodiment of the present application, step 105 may include the following steps:
  • Step 1051: Calculate, according to the node degree, the first weight of the graph neural network for classification and the second weight of the classification model for classification.
  • considering the influence that the degree of association between business objects, as characterized by node degree, has on the graph neural network and the classification model, a weight suitable for classification can be configured for the graph neural network with reference to this degree of association, recorded as the first weight.
  • this first weight can reflect the advantage of the graph neural network for the classification of community nodes and its disadvantage for the classification of isolated nodes.
  • similarly, a weight suitable for classification can be configured for the classification model with reference to the degree of association between business objects characterized by node degree, recorded as the second weight, which can reflect the disadvantage of the classification model for the classification of community nodes and its advantage for the classification of isolated nodes.
  • in an implementation, a first mapping function can be configured for the graph neural network and a second mapping function for the classification model in advance, according to the business requirements of the business scenario.
  • the node degree can then be substituted into the first mapping function configured for the graph neural network to generate the first weight of the graph neural network for classification, and substituted into the second mapping function configured for the classification model to generate the second weight for classification.
  • the first mapping function and the second mapping function are usually monotonically increasing functions; the first weight is positively correlated with the node degree, that is, the larger the node degree, the larger the first weight, and conversely, the smaller the node degree, the smaller the first weight; the second weight is likewise positively correlated with the node degree.
  • the first mapping function and the second mapping function are designed in pairs, and the rate at which the first weight increases is not the same as the rate at which the second weight increases; for the same node degree there is a difference between the first weight and the second weight, so that the graph neural network and the classification model differ in importance, adapting to different business scenarios.
  • if the node degree indicates that the node is a community node, the first weight is greater than or equal to the second weight, reflecting the importance of the graph neural network for community nodes.
  • if the node degree indicates that the node is an isolated node, the first weight is smaller than the second weight, reflecting the importance of the classification model for isolated nodes.
  • in one example, the first mapping function includes: [formula omitted], where HG(x) is the first weight, x is the node degree, δ is the lower limit value of the weight, δ ∈ [0,1), and α is a hyperparameter, e.g., α = 1.
  • correspondingly, the second mapping function includes: [formula omitted], where HP(x) is the second weight, x is the node degree, δ is the lower limit value of the weight, δ ∈ [0,1), and α is a hyperparameter, e.g., α = 1.
  • in this example, the weights grow slowly and smoothly as the node degree increases, better fitting the business requirements of such business scenarios.
  • in another example, the first mapping function includes: [formula omitted], where HG(x) is the first weight, x is the node degree, δ is the lower limit value of the weight, δ ∈ [0,1), and β is a hyperparameter, e.g., β = 1.
  • correspondingly, the second mapping function includes: [formula omitted], where HP(x) is the second weight, x is the node degree, δ is the lower limit value of the weight, δ ∈ [0,1), and β and γ are both hyperparameters, e.g., β = 1, γ = 2.
  • in this example, there is a range of node degrees within which the weights (the first weight and the second weight) increase rapidly, better fitting the business requirements of such business scenarios.
  • in yet another example, the first mapping function includes: [formula omitted], where HG(x) is the first weight, x is the node degree, and δ is the lower limit value of the weight, δ ∈ [0,1).
  • correspondingly, the second mapping function includes: [formula omitted], where HP(x) is the second weight, x is the node degree, δ is the lower limit value of the weight, and μ and ε are hyperparameters, e.g., μ = 2, ε = 2.
  • in this example, there is likewise a range of node degrees within which the weights increase rapidly, better fitting the business requirements of such business scenarios.
  • the above first and second mapping functions are only examples; when implementing this embodiment, other first and second mapping functions may be set according to the actual situation, which is not limited in this embodiment of the present application.
  • in addition, those skilled in the art may also use other first and second mapping functions according to actual needs, which is not limited in this embodiment either.
  • Step 1052: For each category, set the product of the first weight and the first probability as a first weight adjustment value, and set the product of the second weight and the second probability as a second weight adjustment value.
  • Step 1053: Calculate the sum of the first weight adjustment value and the second weight adjustment value as the third probability.
  • the first probability and the second probability are fused into the third probability in a linear manner; that is, for the same category, for the graph neural network the first weight is multiplied by the first probability and the product is recorded as the first weight adjustment value; for the classification model the second weight is multiplied by the second probability and the product is recorded as the second weight adjustment value; the first weight adjustment value and the second weight adjustment value are added, and the sum is recorded as the third probability.
  • suppose the graph neural network predicts the first probabilities that node vi belongs to the multiple categories and node vi is configured with the first weight HG(xi), while the classification model predicts the second probabilities that node vi belongs to the multiple categories and node vi is configured with the second weight HP(xi); then, for each category, the integrated third probability for node vi is the first weight times the first probability plus the second weight times the second probability.
  • Step 106: Determine, according to the third probabilities, the category to which the business object belongs.
  • rules can be designed for the classification in advance according to confidence; if the third probability of a certain category satisfies the rule, the confidence that the business object belongs to that category is high, and it can finally be determined that the business object belongs to that category.
  • for example, the largest of the third probabilities of the multiple categories is selected as the target probability, and the category corresponding to the target probability is determined as the category to which the business object belongs.
  • the above method of determining the category is just an example; when implementing this embodiment, other methods can be set according to the actual situation, for example, selecting the largest among the third probabilities greater than a probability threshold as the target probability and taking the category corresponding to the target probability as the category to which the business object belongs, which is not limited in this embodiment of the present application.
  • in addition, those skilled in the art may also use other methods of determining the category according to actual needs, which is not limited in this embodiment either.
  • in this embodiment, the business objects are converted into a graph neural network, each business object having a feature vector composed of multiple features, with the nodes in the graph neural network representing the feature vectors of the business objects; the degree of association between nodes is calculated in the graph neural network as the node degree of each node; the graph neural network is executed to output the first probability that the business object belongs to each preset category; the preset classification model is executed to identify, using the feature vector, the second probability that the business object belongs to each preset category; for the same category, the first probability and the second probability are fused into a third probability according to the node degree; and the category to which the business object belongs is determined according to the third probabilities.
  • the graph neural network has an advantage in classifying and predicting business objects with a strong degree of association but a disadvantage for those with a weak degree of association, while the classification model has an advantage for business objects with a weak degree of association and a disadvantage for those with a strong degree of association; the two are complementary and together ensure the comprehensiveness of the classification effect.
  • considering the influence that the degree of association between business objects, as characterized by node degree, has on the graph neural network and the classification model, the results of the graph neural network classification and of the classification model classification are fused according to this degree of association; node degrees differ across business scenarios, so the fusion of the two classification results can be adjusted flexibly, greatly improving the classification accuracy.
  • this embodiment uses the node degree to combine the community-node-based classification predicted by the graph neural network with the single-node classification predicted from the feature vector to finally determine the classification, improving the classification accuracy.
  • FIG. 3 is a schematic structural diagram of an apparatus for classifying business objects provided in Embodiment 2 of the present application. As shown in Figure 3, the device includes:
  • the graph neural network conversion module 301 is configured to convert multiple business objects into a graph neural network, each business object has a feature vector composed of multiple features, and the nodes in the graph neural network represent the feature vectors of the corresponding business objects;
  • the node degree calculation module 302 is configured to, for each node in the graph neural network, calculate the degree of association between the node and other nodes in the graph neural network as the node degree of the node;
  • the first classification module 303 is configured to execute the graph neural network to output the first probability that the corresponding business object belongs to each category of multiple categories;
  • the second classification module 304 is configured to execute a preset classification model to use the feature vector to identify the second probability that the corresponding business object belongs to each of the multiple categories;
  • a probability fusion module 305 configured to, for each category, fuse the first probability and the second probability into a third probability according to the node degree of the node;
  • the category determining module 306 is configured to determine the category to which the business object belongs according to the third probability of the multiple categories.
  • the node degree calculation module 302 includes:
  • the edge statistics module is configured to, for each of the nodes in the graph neural network, count the number of edges connected to each of the nodes as the node degree of each of the nodes, wherein one edge connects two of said nodes.
  • the edge statistics module includes:
  • a node degree initialization module configured to initialize the node degree to zero for each of the nodes in the graph neural network
  • An edge query module configured to query the edges in the graph neural network
  • the node degree accumulation module is configured to, for each edge in the graph neural network, increment by 1 the node degrees of the two nodes connected by the edge.
  • the probability fusion module 305 includes:
  • the weight calculation module is configured to calculate the first weight of the graph neural network for classification and the second weight of the classification model for classification according to the node degree;
  • a weight adjustment value calculation module configured to, for each category, set the product of the first weight and the first probability as a first weight adjustment value and set the product of the second weight and the second probability as a second weight adjustment value;
  • the weight adjustment value summation module is configured to calculate the sum of the first weight adjustment value and the second weight adjustment value as the third probability.
  • the weight calculation module includes:
  • a first mapping module configured to substitute the node degree into a first mapping function configured for the graph neural network to generate a first weight of the graph neural network for classification;
  • a second mapping module configured to substitute the node degree into a second mapping function configured for the classification model, so as to generate the second weight for classification;
  • where the first weight is positively correlated with the node degree, and the second weight is positively correlated with the node degree;
  • if the node degree indicates that the node is a community node, the first weight is greater than or equal to the second weight;
  • if the node degree indicates that the node is an isolated node, the first weight is smaller than the second weight.
  • in one example, the first mapping function includes: [formula omitted], where HG(x) is the first weight, x is the node degree, δ is the lower limit value of the weight, and α is a hyperparameter;
  • the second mapping function includes: [formula omitted], where HP(x) is the second weight, x is the node degree, δ is the lower limit value of the weight, and α is a hyperparameter.
  • in another example, the first mapping function includes: [formula omitted], where HG(x) is the first weight, x is the node degree, δ is the lower limit value of the weight, and β is a hyperparameter;
  • the second mapping function includes: [formula omitted], where HP(x) is the second weight, x is the node degree, δ is the lower limit value of the weight, and β and γ are both hyperparameters.
  • in yet another example, the first mapping function includes: [formula omitted], where HG(x) is the first weight, x is the node degree, and δ is the lower limit value of the weight;
  • the second mapping function includes: [formula omitted], where HP(x) is the second weight, x is the node degree, δ is the lower limit value of the weight, and μ and ε are hyperparameters.
  • the category determination module 306 includes:
  • the target probability selection module is configured to select the largest one from the third probabilities of multiple categories as the target probability
  • the target probability determination module is configured to determine that the category corresponding to the target probability is the category to which the business object belongs.
  • the device for classifying business objects provided in the embodiments of the present application can execute the method for classifying business objects provided in any embodiment of the present application, and has corresponding functional modules and effects for executing the method for classifying business objects.
  • FIG. 4 shows a schematic structural diagram of a classification device 10 that can be used to implement the business object classification method of the embodiments of the present application.
  • the business object classification device 10 includes at least one processor 11 and a memory communicatively connected to the at least one processor 11, such as a read-only memory (Read-Only Memory, ROM) 12 and a random access memory (Random Access Memory, RAM) 13, where the memory stores a computer program executable by the at least one processor; the processor 11 can perform various appropriate actions and processes according to the computer program stored in the ROM 12 or a computer program loaded from a storage unit 18 into the RAM 13. The RAM 13 can also store various programs and data required for the operation of the business object classification device 10.
  • the processor 11, the ROM 12, and the RAM 13 are connected to each other via a bus 14.
  • an input/output (Input/Output, I/O) interface 15 is also connected to the bus 14.
  • multiple components of the business object classification device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard or a mouse; an output unit 17 such as various types of displays and speakers; the storage unit 18 such as a magnetic disk or an optical disc; and a communication unit 19 such as a network card, a modem, or a wireless communication transceiver. The communication unit 19 allows the business object classification device 10 to exchange information/data with other devices through at least one of a computer network such as the Internet and various telecommunication networks.
  • the processor 11 may be at least one of various general-purpose and special-purpose processing components having processing and computing capabilities. Examples of the processor 11 include, but are not limited to, a central processing unit (Central Processing Unit, CPU), a graphics processing unit (Graphics Processing Unit, GPU), various dedicated artificial intelligence (Artificial Intelligence, AI) computing chips, various processors running machine learning model algorithms, a digital signal processor (Digital Signal Processor, DSP), and any other appropriate processor, controller, microcontroller, etc. The processor 11 executes the methods and processes described above, such as the business object classification method.
  • the business object classification method can be implemented as a computer program that is tangibly embodied in a computer-readable storage medium, such as the storage unit 18.
  • part or all of the computer program can be loaded and/or installed onto the business object classification device 10 via at least one of the ROM 12 and the communication unit 19.
  • when the computer program is loaded into the RAM 13 and executed by the processor 11, at least one step of the business object classification method described above can be performed.
  • alternatively, the processor 11 may be configured in any other appropriate way (for example, by means of firmware) to execute the business object classification method.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Fuzzy Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present application discloses a business object classification method, apparatus, device, and storage medium. The method includes: converting multiple business objects into a graph neural network, where each business object has a feature vector composed of multiple features and the nodes in the graph neural network represent the feature vectors of the corresponding business objects; for each node in the graph neural network, calculating the degree of association between the node and other nodes in the graph neural network as the node degree of the node; executing the graph neural network to output a first probability that the corresponding business object belongs to each category of multiple categories; using the feature vector to identify a second probability that the corresponding business object belongs to each of the multiple categories; for each category, fusing the first probability and the second probability into a third probability according to the node degree of the node; and determining, according to the third probabilities of the multiple categories, the category to which the business object belongs.

Description

Business object classification method, apparatus, device and storage medium
This application claims priority to Chinese Patent Application No. 202210203772.X, filed with the Chinese Patent Office on March 3, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the technical field of computer processing, for example, to a business object classification method, apparatus, device and storage medium.
Background
In business scenarios such as community mining and anomaly detection, business objects such as users, videos, and audio are classified. Different classification algorithms have different strengths and weaknesses, so multiple classification algorithms are often used for classification, and an integration mechanism integrates the results of the multiple classifications into one final classification result.
At present, the integration mechanism mostly uses the averaging method: multiple classification algorithms output multiple score vectors for one business object, the average of all score vectors is computed as the final score vector, and the category corresponding to the largest score is taken as the final classification result.
Computing the average of multiple score vectors is a single, uniform treatment applied to all business objects, and the classification accuracy is low.
Summary
The present application provides a business object classification method, apparatus, device, and storage medium, to improve the accuracy of classification through the integration mechanism.
According to one aspect of the present application, a business object classification method is provided, including:
converting multiple business objects into a graph neural network, where each business object has a feature vector composed of multiple features, and the nodes in the graph neural network represent the feature vectors of the corresponding business objects;
for each node in the graph neural network, calculating the degree of association between the node and other nodes in the graph neural network as the node degree of the node;
executing the graph neural network to output a first probability that the corresponding business object belongs to each category of multiple categories;
executing a preset classification model to identify, using the feature vector, a second probability that the corresponding business object belongs to each category of the multiple categories;
for each category, fusing the first probability and the second probability into a third probability according to the node degree of the node;
determining, according to the third probabilities of the multiple categories, the category to which the business object belongs.
According to another aspect of the present application, a business object classification apparatus is provided, including:
a graph neural network conversion module, configured to convert multiple business objects into a graph neural network, where each business object has a feature vector composed of multiple features, and the nodes in the graph neural network represent the feature vectors of the corresponding business objects;
a node degree calculation module, configured to, for each node in the graph neural network, calculate the degree of association between the node and other nodes in the graph neural network as the node degree of the node;
a first classification module, configured to execute the graph neural network to output a first probability that the corresponding business object belongs to each category of multiple categories;
a second classification module, configured to execute a preset classification model to identify, using the feature vector, a second probability that the corresponding business object belongs to each category of the multiple categories;
a probability fusion module, configured to, for each category, fuse the first probability and the second probability into a third probability according to the node degree of the node;
a category determination module, configured to determine, according to the third probabilities of the multiple categories, the category to which the business object belongs.
According to another aspect of the present application, a business object classification device is provided, the business object classification device including:
at least one processor; and
a memory communicatively connected to the at least one processor; where
the memory stores a computer program executable by the at least one processor, and the computer program is executed by the at least one processor so that the at least one processor can perform the business object classification method described in any embodiment of the present application.
According to another aspect of the present application, a computer-readable storage medium is provided, the computer-readable storage medium storing a computer program, the computer program being configured to cause a processor, when executing it, to implement the business object classification method described in any embodiment of the present application.
Brief Description of the Drawings
FIG. 1 is a flowchart of a business object classification method provided in Embodiment 1 of the present application;
FIG. 2 is an example diagram of a graph neural network provided in Embodiment 1 of the present application;
FIG. 3 is a schematic structural diagram of a business object classification apparatus provided in Embodiment 2 of the present application;
FIG. 4 is a schematic structural diagram of a business object classification device implementing the business object classification method of the embodiments of the present application.
Detailed Description
It should be noted that the terms "first", "second", and the like in the specification, claims, and above drawings of the present application are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the present application described herein can be implemented in orders other than those illustrated or described herein. Furthermore, the terms "include" and "have" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device comprising a series of steps or units is not necessarily limited to those steps or units explicitly listed, but may include other steps or units not explicitly listed or inherent to such process, method, product, or device.
Embodiment 1
FIG. 1 is a flowchart of a business object classification method provided in Embodiment 1 of the present application. This embodiment is applicable to the case of integrating the results of multiple classifications into a final classification result through an integration mechanism based on node degree. The method can be executed by a business object classification apparatus, which can be implemented in at least one of hardware and software and can be configured in a business object classification device. As shown in FIG. 1, the method includes:
Step 101: Convert multiple business objects into a graph neural network.
Different business scenarios have different business objects, which are collections of data with the characteristics of a business field.
For example, for the user-oriented service field, the business object can be users; for the news media field, the business object can be news data; for the mobile communication field, the business object can be mobile communication data; for the Electronic Commerce (EC) field, the business object can be advertising data; for the autonomous driving field, the business object can be point clouds; and so on.
Although business objects carry the business characteristics of different business fields, business objects are in essence still data, such as text information, image data, audio data, video data, and so on.
For these business objects, according to the business requirements of the business scenario, a method configured for the business can be invoked in advance to extract features from the business objects and represent them in vector form, forming feature vectors; that is, when classifying business objects, a business object has a feature vector composed of multiple features, a multi-dimensional feature vector.
For example, if the business is identifying whether a user (client) is a web crawler, the behavior of the user (client) accessing Uniform Resource Identifiers (URIs) can be extracted as the feature vector, e.g., the number of accesses to various URIs, the times of accesses to various URIs, the density of accesses to various URIs, and so on.
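As a non-limiting illustration, the following is a minimal sketch of this kind of feature extraction; the log format and the three URI-access features are hypothetical stand-ins for whatever the configured extraction method actually produces.

    from collections import Counter

    def crawler_features(access_log):
        """Build a feature vector for one client from its URI access log.
        access_log: list of (timestamp_seconds, uri) pairs (hypothetical format)."""
        if not access_log:
            return [0.0, 0.0, 0.0]
        times = sorted(t for t, _ in access_log)
        uris = Counter(u for _, u in access_log)
        total = float(len(access_log))            # number of URI accesses
        distinct = float(len(uris))               # variety of URIs accessed
        span = max(times[-1] - times[0], 1e-9)
        density = total / span                    # accesses per unit time
        return [total, distinct, density]

    # A client hitting many URIs in a short window looks crawler-like.
    log = [(0.0, "/a"), (0.2, "/b"), (0.4, "/c"), (0.6, "/d")]
    print(crawler_features(log))  # [4.0, 4.0, ~6.67]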
In this embodiment, the business objects are converted into a graph neural network (Graph Neural Networks, GNN). A graph neural network is a generalized neural network based on a graph structure; it is also a connectionist model that captures the dependencies of a graph through message passing between its nodes. Unlike standard neural networks, graph neural networks preserve a state that can represent information from a node's neighborhood at arbitrary depth.
A graph is a data structure that models a set of objects (nodes) and their relationships (edges).
In general, graph neural networks can be divided into five categories: Graph Convolution Networks (GCN), Graph Attention Networks, Graph Autoencoders, Graph Generative Networks, and Graph Spatial-temporal Networks.
The input of the graph neural network includes a graph G = (V, E) and feature vectors Feat. In the graph G, the set of nodes (vertices) is V = {v1, v2, …, vp}, the set of edges is E = {e1, e2, …, eq}, any edge ei = (vj, vk) indicates that ei connects node vj and node vk, and each node vi corresponds to an n-dimensional feature vector Feati = {f1, f2, …, fn}.
In this embodiment, multiple feature vectors of multiple business objects are input into the graph neural network, and multiple nodes and multiple edges are output, where the nodes in the graph neural network represent the feature vectors of the business objects and the edges in the graph neural network represent the relationships between business objects; that is, business objects connected by an edge have a certain association.
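A minimal sketch of this graph input, using plain Python containers; the edge list is hypothetical, since how associations between business objects are established is left to the business scenario.

    # Each node v_i carries an n-dimensional feature vector Feat_i.
    features = {
        "v1": [0.3, 1.2, 0.0],   # Feat_1 = {f1, f2, f3}, n = 3 here
        "v2": [0.9, 0.1, 0.4],
        "v3": [0.5, 0.8, 0.2],
    }

    # Each edge e_i = (v_j, v_k) connects two associated business objects.
    edges = [("v1", "v2"), ("v1", "v3")]

    graph = {"V": list(features), "E": edges, "Feat": features}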
Step 102: For each node in the graph neural network, calculate the degree of association between the node and other nodes in the graph neural network as the node degree of the node.
In the graph neural network, some nodes have a certain association (also called correlation) with other nodes, reflected in forms such as the number and weights of edges. In this embodiment, for each node in the graph neural network, the degree of association between the node and other nodes can be calculated as the node degree of the node.
The node degree of a node can be set to distinguish the type of the node, which includes community nodes and isolated nodes. A community node, also known as a non-isolated node, is a node with a high degree of association with other nodes, expressed as a node degree greater than or equal to a preset association threshold; an isolated node is a node whose association with other nodes is low or even zero, expressed as a node degree less than the preset association threshold; that is, the node degree of a community node is greater than that of an isolated node.
Taking the number of edges as an example, for each node in the graph neural network, the number of edges connected to (also said to be associated with) the node can be counted as the node degree of the node, where one edge connects (associates) two nodes.
In this example, when the number of edges connected to a node equals 0, the node has no edge associated with any other node; in this case, the node is an isolated node.
When the number of edges connected to a node is greater than or equal to 1, the node has edges associated with other nodes; in this case, the node is a community node.
The association threshold can then be set to 1: an isolated node can be a node whose node degree is less than 1 (the association threshold), and a community node can be a node whose node degree is greater than or equal to 1 (the association threshold).
In one counting method, in the graph neural network, let the set of nodes be V = {v1, v2, …, vp} and the set of edges be E = {e1, e2, …, eq}; for each node vi in the graph neural network, vi ∈ V, the node degree of vi is initialized to zero.
Each edge ei = (vj, vk) in the graph neural network is queried, ei ∈ E, indicating that ei connects node vj and node vk, vj, vk ∈ V.
For each edge in the graph neural network, the node degree of each of the two nodes connected by the edge is incremented by 1.
In the graph neural network shown in FIG. 2 there are nine nodes: the node degree of node v1 is 5, that of node v2 is 2, that of node v3 is 2, that of node v4 is 0, that of node v5 is 2, that of node v6 is 5, that of node v7 is 4, that of node v8 is 3, and that of node v9 is 1; nodes v1, v2, v3, v5, v6, v7, v8, and v9 are all community nodes, and node v4 is an isolated node.
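The counting method above amounts to a single pass over the edge list. A minimal sketch follows; the edge list is hypothetical (the exact edges of FIG. 2 are not reproduced here).

    def node_degrees(nodes, edges):
        """Count, for each node, the number of edges connected to it."""
        degree = {v: 0 for v in nodes}   # initialize every node degree to zero
        for vj, vk in edges:             # each edge e_i = (v_j, v_k)
            degree[vj] += 1              # increment both endpoints by 1
            degree[vk] += 1
        return degree

    nodes = ["v1", "v2", "v3", "v4"]
    edges = [("v1", "v2"), ("v1", "v3"), ("v2", "v3")]
    deg = node_degrees(nodes, edges)

    threshold = 1  # the association threshold from the text
    community = [v for v, d in deg.items() if d >= threshold]
    isolated = [v for v, d in deg.items() if d < threshold]
    print(deg, community, isolated)  # v4 has degree 0, so it is isolated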
Of course, the above way of calculating node degree is just an example; when implementing this embodiment, other ways of calculating node degree can be set according to the actual situation, for example, summing the weights of all edges associated with a node as its node degree, and so on, which is not limited in this embodiment of the present application. In addition, besides the above way of calculating node degree, those skilled in the art may also use other ways of calculating node degree according to actual needs, which is not limited in this embodiment either.
Step 103: Execute the graph neural network to output the first probability that the business object belongs to each preset category.
In this embodiment, a complete graph neural network can be trained in advance in an end-to-end manner according to the business requirements of the business scenario.
When classifying business objects, the graph neural network is executed. The graph neural network uses the underlying graph as its computation graph and learns neural network primitives by passing, transforming, and aggregating node features over the entire graph, so as to generate an embedding vector (Embedding) for each single node, i.e., a feature vector; the generated embedding vector can be used as the input of a differentiable prediction layer for the classification of nodes.
In this embodiment, for each node vi, the graph neural network outputs an m-dimensional vector Si = (s1, s2, …, sm), where st denotes the probability (also called score) that node vi belongs to the preset category Ct, recorded as the first probability, where t ∈ {1, …, m} and m is a positive integer.
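The application does not fix a particular GNN architecture, so the following is only a hedged sketch of one common choice: a two-layer graph convolutional network built with PyTorch Geometric, whose softmax head yields the per-node first probabilities Si. The layer sizes and the use of GCNConv are assumptions, not details from the application.

    import torch
    import torch.nn.functional as F
    from torch_geometric.nn import GCNConv

    class NodeClassifier(torch.nn.Module):
        """Two GCN layers followed by a differentiable prediction layer."""
        def __init__(self, n_features, hidden, m_categories):
            super().__init__()
            self.conv1 = GCNConv(n_features, hidden)
            self.conv2 = GCNConv(hidden, hidden)
            self.head = torch.nn.Linear(hidden, m_categories)

        def forward(self, x, edge_index):
            # Message passing aggregates neighbor features into embeddings.
            h = F.relu(self.conv1(x, edge_index))
            h = F.relu(self.conv2(h, edge_index))
            # One row per node: S_i = (s_1, ..., s_m), the first probabilities.
            return F.softmax(self.head(h), dim=-1)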
Step 104: Execute a preset classification model to identify, using the feature vector, the second probability that the business object belongs to each preset category.
In this embodiment, at least one classification model can be trained in advance according to the business requirements of the business scenario. The classification model is a model independent of the graph neural network and classifies based on feature vectors; that is, its input is a business object represented only by its feature vector, and it outputs the probability (also called score) that the business object belongs to each preset category.
For example, the classification model can apply a machine learning algorithm, e.g., a support vector machine (support vector machines, SVM) or a gradient boosting machine (Light Gradient Boosting Machine, LightGBM), or a deep learning algorithm, e.g., a Convolutional Neural Network (CNN) or a Recurrent Neural Network (RNN).
For deep learning algorithms, the structure of the classification model is not limited to manually designed neural networks; it may also be a neural network optimized by model quantization methods, a neural network found by a Neural Architecture Search (NAS) method targeting the business requirements of the business scenario, and so on, which is not limited in this embodiment.
In this embodiment, for each node vi, the input is the n-dimensional feature vector Feati = {f1, f2, …, fn} of the business object, and each classification model outputs an m-dimensional vector Si = (s1, s2, …, sm), where st denotes the probability (also called score) that node vi belongs to the preset category Ct, recorded as the second probability, where t ∈ {1, …, m} and m is a positive integer.
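As a sketch of one of the options named above, a scikit-learn SVM can serve as the feature-vector-only classifier; the training data is hypothetical, and any of the other listed models (LightGBM, CNN, RNN) could stand in its place.

    from sklearn.svm import SVC

    # Hypothetical training set: one n-dimensional feature vector per
    # business object, with a known category label for each.
    X_train = [[4.0, 4.0, 6.7], [1.0, 1.0, 0.01], [3.0, 2.0, 5.0]]
    y_train = [1, 0, 1]  # e.g., 1 = crawler, 0 = normal user

    clf = SVC(probability=True).fit(X_train, y_train)

    # Second probabilities for a new business object, from its feature
    # vector alone -- no graph information is used.
    second_prob = clf.predict_proba([[2.5, 2.0, 4.0]])[0]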
Step 105: For the same category, fuse the first probability and the second probability into a third probability according to the node degree.
For the same business object, the graph neural network predicts the first probabilities that the business object belongs to multiple (at least two) categories, and at the same time each classification model predicts the second probabilities that the business object belongs to multiple (at least two) categories; the categories into which the graph neural network divides business objects are the same as those into which the classification model divides them. Then, for the same category, the same business object has at least two probabilities of belonging to that category (i.e., the first probability and at least one second probability).
The graph neural network has an advantage in classifying and predicting community nodes: through the edges between nodes, the features of business objects are propagated between nodes, and the more edges a node is associated with, the richer the feature propagation, thereby optimizing the results of community mining; however, for isolated nodes, which have no associated edges or only sparse ones, the classification effect of the graph neural network is often poor.
The classification model takes the feature vector of a single piece of business data as input and does not consider the associations between business objects; it performs better for the classification of isolated nodes with no or relatively sparse associations, but for business objects with rich associations, because those associations are ignored, the classification effect is poor.
For the same category, considering the influence that the degree of association between business objects, as characterized by node degree, has on the graph neural network and the classification model, this degree of association can be used as a parameter for adjusting the first probability and the second probability: the first probability and the second probability are linearly or nonlinearly fused into the third probability through the node degree, and the third probability is a comprehensive measure of the classification result of the graph neural network and that of the classification model.
In an embodiment of the present application, step 105 may include the following steps:
Step 1051: Calculate, according to the node degree, the first weight of the graph neural network for classification and the second weight of the classification model for classification.
Considering the influence that the degree of association between business objects, as characterized by node degree, has on the graph neural network and the classification model, a weight suitable for classification can be configured for the graph neural network with reference to this degree of association, recorded as the first weight; this first weight can reflect the advantage of the graph neural network for the classification of community nodes and its disadvantage for the classification of isolated nodes. Similarly, a weight suitable for classification can be configured for the classification model with reference to the degree of association between business objects characterized by node degree, recorded as the second weight; this second weight can reflect the disadvantage of the classification model for the classification of community nodes and its advantage for the classification of isolated nodes.
In an implementation, a first mapping function can be configured for the graph neural network and a second mapping function for the classification model in advance, according to the business requirements of the business scenario.
Then, when configuring the first weight for the categories classified by the graph neural network and the second weight for the categories classified by the classification model, the node degree can be substituted into the first mapping function configured for the graph neural network to generate the first weight of the graph neural network for classification, and the node degree can be substituted into the second mapping function configured for the classification model to generate the second weight for classification.
The first mapping function and the second mapping function are usually monotonically increasing functions. The first weight is positively correlated with the node degree: the larger the node degree, the larger the first weight; conversely, the smaller the node degree, the smaller the first weight. The second weight is likewise positively correlated with the node degree.
The first mapping function and the second mapping function are designed in pairs, and the rate at which the first weight increases is not the same as the rate at which the second weight increases; for the same node degree, there is a difference between the first weight and the second weight, so that the graph neural network and the classification model differ in importance, adapting to different business scenarios.
If the node degree indicates that the node is a community node, the first weight is greater than or equal to the second weight, reflecting the importance of the graph neural network for community nodes.
If the node degree indicates that the node is an isolated node, the first weight is smaller than the second weight, reflecting the importance of the classification model for isolated nodes.
In one example, the first mapping function includes: [formula omitted], where HG(x) is the first weight, x is the node degree, δ is the lower limit value of the weight, δ ∈ [0,1), and α is a hyperparameter, e.g., α = 1.
Correspondingly, the second mapping function includes: [formula omitted], where HP(x) is the second weight, x is the node degree, δ is the lower limit value of the weight, δ ∈ [0,1), and α is a hyperparameter, e.g., α = 1.
In this example, the weights (the first weight and the second weight) grow slowly and smoothly as the node degree increases, better fitting the business requirements of such business scenarios.
In another example, the first mapping function includes: [formula omitted], where HG(x) is the first weight, x is the node degree, δ is the lower limit value of the weight, δ ∈ [0,1), and β is a hyperparameter, e.g., β = 1.
Correspondingly, the second mapping function includes: [formula omitted], where HP(x) is the second weight, x is the node degree, δ is the lower limit value of the weight, δ ∈ [0,1), and β and γ are both hyperparameters, e.g., β = 1, γ = 2.
In this example, there is a range of node degrees within which the weights (the first weight and the second weight) increase rapidly, better fitting the business requirements of such business scenarios.
In yet another example, the first mapping function includes: [formula omitted], where HG(x) is the first weight, x is the node degree, and δ is the lower limit value of the weight, δ ∈ [0,1).
Correspondingly, the second mapping function includes: [formula omitted], where HP(x) is the second weight, x is the node degree, δ is the lower limit value of the weight, and μ and ε are hyperparameters, e.g., μ = 2, ε = 2.
In this example, there is likewise a range of node degrees within which the weights (the first weight and the second weight) increase rapidly, better fitting the business requirements of such business scenarios.
Of course, the above first and second mapping functions are only examples; when implementing this embodiment, other first and second mapping functions can be set according to the actual situation, which is not limited in this embodiment of the present application. In addition, besides the above first and second mapping functions, those skilled in the art may also use other first and second mapping functions according to actual needs, which is not limited in this embodiment either.
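Because the concrete forms of HG and HP are not reproduced in this text, the pair below is only an illustrative stand-in satisfying the stated properties: both functions are monotonically increasing and bounded below by δ, HP exceeds HG at node degree 0 (isolated nodes), and HG is at least HP from node degree 1 upward (community nodes).

    import math

    DELTA = 0.1   # lower limit of the weight, delta in [0, 1)
    ALPHA = 1.0   # hyperparameter, e.g., alpha = 1

    def h_g(x):
        """Hypothetical first mapping function: weight of the graph neural network."""
        return DELTA + (1 - DELTA) * (1 - math.exp(-ALPHA * x))

    def h_p(x):
        """Hypothetical second mapping function: weight of the classification model.
        Larger than h_g at degree 0, smaller from degree 1 upward."""
        return DELTA + 0.5 * (1 - DELTA) * (1 - math.exp(-ALPHA * (x + 1)))

    for x in (0, 1, 5):
        print(x, round(h_g(x), 3), round(h_p(x), 3))
    # x=0: h_g=0.1   < h_p=0.384 (isolated node: the classification model dominates)
    # x=1: h_g=0.669 > h_p=0.489 (community node: the graph neural network dominates)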
Step 1052: For each category, set the product of the first weight and the first probability as a first weight adjustment value, and set the product of the second weight and the second probability as a second weight adjustment value.
Step 1053: Calculate the sum of the first weight adjustment value and the second weight adjustment value as the third probability.
In this embodiment, the first probability and the second probability are fused into the third probability in a linear manner; that is, for the same category, for the graph neural network, the first weight is multiplied by the first probability and the resulting product is recorded as the first weight adjustment value; for the classification model, the second weight is multiplied by the second probability and the resulting product is recorded as the second weight adjustment value; the first weight adjustment value and the second weight adjustment value are added, and the resulting sum is recorded as the third probability.
Suppose the graph neural network predicts the first probabilities that node vi belongs to the multiple categories and node vi is configured with the first weight HG(xi), and the classification model predicts the second probabilities that node vi belongs to the multiple categories and node vi is configured with the second weight HP(xi); then, for each category, the integrated third probability that node vi belongs to that category is the first weight times the first probability plus the second weight times the second probability.
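Read literally, steps 1052 and 1053 make the fusion a per-category weighted sum. A minimal sketch, with hypothetical probability vectors and weights:

    def fuse(first_prob, second_prob, w1, w2):
        """Third probabilities per category: w1*s_t + w2*s_t' (steps 1052-1053)."""
        return [w1 * s1 + w2 * s2 for s1, s2 in zip(first_prob, second_prob)]

    first_prob = [0.7, 0.2, 0.1]    # S_i from the graph neural network
    second_prob = [0.3, 0.5, 0.2]   # S_i from the classification model

    # The weights come from the mapping functions evaluated at the node
    # degree, w1 = HG(x_i) and w2 = HP(x_i); the values here are illustrative.
    print(fuse(first_prob, second_prob, w1=0.67, w2=0.49))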
Step 106: Determine, according to the third probabilities, the category to which the business object belongs.
In this embodiment, rules can be designed for the classification in advance according to confidence; if the third probability of a certain category satisfies the rule, the confidence that the business object belongs to that category is high, and it can finally be determined that the business object belongs to that category.
For example, the largest of the third probabilities of the multiple categories is selected as the target probability, and the category corresponding to the target probability is determined as the category to which the business object belongs.
Of course, the above method of determining the category is just an example; when implementing this embodiment, other methods of determining the category can be set according to the actual situation, for example, selecting the largest among the third probabilities greater than a probability threshold as the target probability and determining the category corresponding to the target probability as the category to which the business object belongs, and so on, which is not limited in this embodiment of the present application. In addition, besides the above methods of determining the category, those skilled in the art may also use other methods of determining the category according to actual needs, which is not limited in this embodiment either.
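Both variants of step 106 reduce to an argmax, optionally gated by a probability threshold; a short sketch (the threshold value is illustrative):

    def pick_category(third_prob, categories, threshold=None):
        """Return the category with the largest third probability; with a
        threshold, only probabilities above it are considered."""
        candidates = [(p, c) for p, c in zip(third_prob, categories)
                      if threshold is None or p > threshold]
        if not candidates:
            return None  # no category is confident enough
        return max(candidates)[1]

    print(pick_category([0.55, 0.30, 0.15], ["C1", "C2", "C3"]))       # C1
    print(pick_category([0.35, 0.33, 0.32], ["C1", "C2", "C3"], 0.5))  # None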
In this embodiment, the business objects are converted into a graph neural network, each business object having multiple features, with the nodes in the graph neural network representing the feature vectors of the business objects; the degree of association between nodes is calculated in the graph neural network as the node degree of each node; the graph neural network is executed to output the first probability that a business object belongs to each preset category; the preset classification model is executed to identify, using the feature vector, the second probability that the business object belongs to each preset category; for the same category, the first probability and the second probability are fused into a third probability according to the node degree; and the category to which the business object belongs is determined according to the third probabilities. The graph neural network has an advantage in classifying and predicting business objects with a strong degree of association and a disadvantage for those with a weak degree of association, whereas the classification model has an advantage for business objects with a weak degree of association and a disadvantage for those with a strong degree of association; the two are complementary and together ensure the comprehensiveness of the classification effect. Considering the influence that the degree of association between business objects, as characterized by node degree, has on the graph neural network and the classification model, the result of the graph neural network classification and the result of the classification model classification are fused according to this degree of association; node degrees differ across business scenarios, so the fusion of the two classification results can be adjusted flexibly, greatly improving the classification accuracy.
This embodiment uses the node degree to combine the community-node-based classification predicted by the graph neural network with the single-node classification predicted from the feature vector, finally determining the classification and improving the classification accuracy.
Embodiment 2
FIG. 3 is a schematic structural diagram of a business object classification apparatus provided in Embodiment 2 of the present application. As shown in FIG. 3, the apparatus includes:
a graph neural network conversion module 301, configured to convert multiple business objects into a graph neural network, where each business object has a feature vector composed of multiple features, and the nodes in the graph neural network represent the feature vectors of the corresponding business objects;
a node degree calculation module 302, configured to, for each node in the graph neural network, calculate the degree of association between the node and other nodes in the graph neural network as the node degree of the node;
a first classification module 303, configured to execute the graph neural network to output a first probability that the corresponding business object belongs to each category of multiple categories;
a second classification module 304, configured to execute a preset classification model to identify, using the feature vector, a second probability that the corresponding business object belongs to each category of the multiple categories;
a probability fusion module 305, configured to, for each category, fuse the first probability and the second probability into a third probability according to the node degree of the node;
a category determination module 306, configured to determine, according to the third probabilities of the multiple categories, the category to which the business object belongs.
In an embodiment of the present application, the node degree calculation module 302 includes:
an edge statistics module, configured to, for each node in the graph neural network, count the number of edges connected to the node as the node degree of the node, where one edge connects two nodes.
In an embodiment of the present application, the edge statistics module includes:
a node degree initialization module, configured to, for each node in the graph neural network, initialize the node degree of the node to zero;
an edge query module, configured to query the edges in the graph neural network;
a node degree accumulation module, configured to, for each edge in the graph neural network, increment by 1 the node degrees of the two nodes connected by the edge.
In an embodiment of the present application, the probability fusion module 305 includes:
a weight calculation module, configured to calculate, according to the node degree, the first weight of the graph neural network for classification and the second weight of the classification model for classification;
a weight adjustment value calculation module, configured to, for each category, set the product of the first weight and the first probability as a first weight adjustment value and set the product of the second weight and the second probability as a second weight adjustment value;
a weight adjustment value summation module, configured to calculate the sum of the first weight adjustment value and the second weight adjustment value as the third probability.
In an embodiment of the present application, the weight calculation module includes:
a first mapping module, configured to substitute the node degree into a first mapping function configured for the graph neural network, so as to generate the first weight of the graph neural network for classification;
a second mapping module, configured to substitute the node degree into a second mapping function configured for the classification model, so as to generate the second weight for classification;
where the first weight is positively correlated with the node degree, and the second weight is positively correlated with the node degree;
if the node degree indicates that the node is a community node, the first weight is greater than or equal to the second weight;
if the node degree indicates that the node is an isolated node, the first weight is smaller than the second weight.
In one example of this embodiment, the first mapping function includes: [formula omitted], where HG(x) is the first weight, x is the node degree, δ is the lower limit value of the weight, and α is a hyperparameter;
the second mapping function includes: [formula omitted], where HP(x) is the second weight, x is the node degree, δ is the lower limit value of the weight, and α is a hyperparameter.
In another example of this embodiment, the first mapping function includes: [formula omitted], where HG(x) is the first weight, x is the node degree, δ is the lower limit value of the weight, and β is a hyperparameter;
the second mapping function includes: [formula omitted], where HP(x) is the second weight, x is the node degree, δ is the lower limit value of the weight, and β and γ are both hyperparameters.
In yet another example of this embodiment, the first mapping function includes: [formula omitted], where HG(x) is the first weight, x is the node degree, and δ is the lower limit value of the weight;
the second mapping function includes: [formula omitted], where HP(x) is the second weight, x is the node degree, δ is the lower limit value of the weight, and μ and ε are hyperparameters.
In an embodiment of the present application, the category determination module 306 includes:
a target probability selection module, configured to select the largest of the third probabilities of the multiple categories as the target probability;
a target probability determination module, configured to determine the category corresponding to the target probability as the category to which the business object belongs.
The business object classification apparatus provided in the embodiments of the present application can execute the business object classification method provided in any embodiment of the present application, and has functional modules and effects corresponding to executing the business object classification method.
Embodiment 3
FIG. 4 shows a schematic structural diagram of a business object classification device 10 that can be used to implement the embodiments of the present application.
As shown in FIG. 4, the business object classification device 10 includes at least one processor 11 and a memory communicatively connected to the at least one processor 11, such as a read-only memory (Read-Only Memory, ROM) 12 and a random access memory (Random Access Memory, RAM) 13, where the memory stores a computer program executable by the at least one processor; the processor 11 can perform various appropriate actions and processes according to the computer program stored in the ROM 12 or a computer program loaded from a storage unit 18 into the RAM 13. The RAM 13 can also store various programs and data required for the operation of the business object classification device 10. The processor 11, the ROM 12, and the RAM 13 are connected to each other via a bus 14. An input/output (Input/Output, I/O) interface 15 is also connected to the bus 14.
Multiple components of the business object classification device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard or a mouse; an output unit 17 such as various types of displays and speakers; the storage unit 18 such as a magnetic disk or an optical disc; and a communication unit 19 such as a network card, a modem, or a wireless communication transceiver. The communication unit 19 allows the business object classification device 10 to exchange information/data with other devices through at least one of a computer network such as the Internet and various telecommunication networks.
The processor 11 may be at least one of various general-purpose and special-purpose processing components having processing and computing capabilities. Examples of the processor 11 include, but are not limited to, a central processing unit (Central Processing Unit, CPU), a graphics processing unit (Graphics Processing Unit, GPU), various dedicated artificial intelligence (Artificial Intelligence, AI) computing chips, various processors running machine learning model algorithms, a digital signal processor (Digital Signal Processor, DSP), and any other appropriate processor, controller, microcontroller, etc. The processor 11 executes the methods and processes described above, such as the business object classification method.
In an embodiment, the business object classification method can be implemented as a computer program that is tangibly embodied in a computer-readable storage medium, such as the storage unit 18. In an embodiment, part or all of the computer program can be loaded and/or installed onto the business object classification device 10 via at least one of the ROM 12 and the communication unit 19. When the computer program is loaded into the RAM 13 and executed by the processor 11, at least one step of the business object classification method described above can be performed. Alternatively, in other embodiments, the processor 11 may be configured in any other appropriate way (for example, by means of firmware) to execute the business object classification method.

Claims (12)

  1. A business object classification method, comprising:
    converting multiple business objects into a graph neural network, wherein each business object has a feature vector composed of multiple features, and the nodes in the graph neural network represent the feature vectors of the corresponding business objects;
    for each node in the graph neural network, calculating the degree of association between the node and other nodes in the graph neural network as the node degree of the node;
    executing the graph neural network to output a first probability that the corresponding business object belongs to each category of multiple categories;
    executing a preset classification model to identify, using the feature vector, a second probability that the corresponding business object belongs to each category of the multiple categories;
    for each category, fusing the first probability and the second probability into a third probability according to the node degree of the node;
    determining, according to the third probabilities of the multiple categories, the category to which the business object belongs.
  2. The method according to claim 1, wherein the calculating, for each node in the graph neural network, the degree of association between the node and other nodes in the graph neural network as the node degree of the node comprises:
    for each node in the graph neural network, counting the number of edges connected to the node as the node degree of the node, wherein one edge connects two nodes.
  3. The method according to claim 2, wherein the counting, for each node in the graph neural network, the number of edges connected to the node as the node degree of the node comprises:
    for each node in the graph neural network, initializing the node degree of the node to zero;
    querying the edges in the graph neural network;
    for each edge in the graph neural network, incrementing by 1 the node degrees of the two nodes connected by the edge.
  4. The method according to claim 1, wherein the fusing, for each category, the first probability and the second probability into a third probability according to the node degree of the node comprises:
    calculating, according to the node degree, a first weight of the graph neural network for classification and a second weight of the classification model for classification;
    for each category, setting the product of the first weight and the first probability as a first weight adjustment value, and setting the product of the second weight and the second probability as a second weight adjustment value;
    calculating the sum of the first weight adjustment value and the second weight adjustment value as the third probability.
  5. The method according to claim 4, wherein the calculating, according to the node degree, a first weight of the graph neural network for classification and a second weight of the classification model for classification comprises:
    substituting the node degree into a first mapping function configured for the graph neural network, so as to generate the first weight of the graph neural network for classification;
    substituting the node degree into a second mapping function configured for the classification model, so as to generate the second weight for classification;
    wherein the first weight is positively correlated with the node degree, and the second weight is positively correlated with the node degree;
    if the node degree indicates that the node is a community node, the first weight is greater than or equal to the second weight;
    if the node degree indicates that the node is an isolated node, the first weight is smaller than the second weight.
  6. The method according to claim 5, wherein
    the first mapping function comprises: [formula omitted],
    where HG(x) is the first weight, x is the node degree, δ is the lower limit value of the weight, and α is a hyperparameter;
    the second mapping function comprises: [formula omitted],
    where HP(x) is the second weight, x is the node degree, δ is the lower limit value of the weight, and α is a hyperparameter.
  7. The method according to claim 5, wherein
    the first mapping function comprises: [formula omitted],
    where HG(x) is the first weight, x is the node degree, δ is the lower limit value of the weight, and β is a hyperparameter;
    the second mapping function comprises: [formula omitted],
    where HP(x) is the second weight, x is the node degree, δ is the lower limit value of the weight, and β and γ are both hyperparameters.
  8. The method according to claim 5, wherein
    the first mapping function comprises: [formula omitted],
    where HG(x) is the first weight, x is the node degree, and δ is the lower limit value of the weight;
    the second mapping function comprises: [formula omitted],
    where HP(x) is the second weight, x is the node degree, δ is the lower limit value of the weight, and μ and ε are hyperparameters.
  9. The method according to any one of claims 1-8, wherein the determining, according to the third probabilities of the multiple categories, the category to which the business object belongs comprises:
    selecting the largest of the third probabilities of the multiple categories as a target probability;
    determining the category corresponding to the target probability as the category to which the business object belongs.
  10. A business object classification apparatus, comprising:
    a graph neural network conversion module (301), configured to convert multiple business objects into a graph neural network, wherein each business object has a feature vector composed of multiple features, and the nodes in the graph neural network represent the feature vectors of the corresponding business objects;
    a node degree calculation module (302), configured to, for each node in the graph neural network, calculate the degree of association between the node and other nodes in the graph neural network as the node degree of the node;
    a first classification module (303), configured to execute the graph neural network to output a first probability that the corresponding business object belongs to each category of multiple categories;
    a second classification module (304), configured to execute a preset classification model to identify, using the feature vector, a second probability that the corresponding business object belongs to each category of the multiple categories;
    a probability fusion module (305), configured to, for each category, fuse the first probability and the second probability into a third probability according to the node degree of the node;
    a category determination module (306), configured to determine, according to the third probabilities of the multiple categories, the category to which the business object belongs.
  11. A business object classification device, comprising:
    at least one processor (11); and
    a memory (12, 13) communicatively connected to the at least one processor (11); wherein
    the memory (12, 13) stores a computer program executable by the at least one processor (11), and the computer program is executed by the at least one processor (11) so that the at least one processor (11) can perform the business object classification method according to any one of claims 1-9.
  12. A computer-readable storage medium storing a computer program, wherein the computer program is configured to, when executed by a processor, implement the business object classification method according to any one of claims 1-9.
PCT/CN2023/077115 2022-03-03 2023-02-20 Business object classification method, apparatus, device and storage medium WO2023165352A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210203772.X 2022-03-03
CN202210203772.XA CN114564516A (zh) 2022-03-03 2022-03-03 Business object classification method, apparatus, device and storage medium

Publications (1)

Publication Number Publication Date
WO2023165352A1 true WO2023165352A1 (zh) 2023-09-07

Family

ID=81718672

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/077115 WO2023165352A1 (zh) 2022-03-03 2023-02-20 一种业务对象的分类方法、装置、设备及存储介质

Country Status (2)

Country Link
CN (1) CN114564516A (zh)
WO (1) WO2023165352A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114564516A (zh) * 2022-03-03 2022-05-31 百果园技术(新加坡)有限公司 一种业务对象的分类方法、装置、设备及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110009093A (zh) * 2018-12-07 2019-07-12 阿里巴巴集团控股有限公司 用于分析关系网络图的神经网络系统和方法
CN111667067A (zh) * 2020-05-28 2020-09-15 平安医疗健康管理股份有限公司 基于图神经网络的推荐方法、装置和计算机设备
US20210334606A1 (en) * 2020-04-28 2021-10-28 Microsoft Technology Licensing, Llc Neural Network Categorization Accuracy With Categorical Graph Neural Networks
CN113988264A (zh) * 2021-10-29 2022-01-28 支付宝(杭州)信息技术有限公司 获得用于执行流量预测业务的图神经网络的方法及装置
CN114564516A (zh) * 2022-03-03 2022-05-31 百果园技术(新加坡)有限公司 一种业务对象的分类方法、装置、设备及存储介质

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110009093A (zh) * 2018-12-07 2019-07-12 阿里巴巴集团控股有限公司 用于分析关系网络图的神经网络系统和方法
WO2020114122A1 (zh) * 2018-12-07 2020-06-11 阿里巴巴集团控股有限公司 用于分析关系网络图的神经网络系统和方法
US20210334606A1 (en) * 2020-04-28 2021-10-28 Microsoft Technology Licensing, Llc Neural Network Categorization Accuracy With Categorical Graph Neural Networks
CN111667067A (zh) * 2020-05-28 2020-09-15 平安医疗健康管理股份有限公司 基于图神经网络的推荐方法、装置和计算机设备
CN113988264A (zh) * 2021-10-29 2022-01-28 支付宝(杭州)信息技术有限公司 获得用于执行流量预测业务的图神经网络的方法及装置
CN114564516A (zh) * 2022-03-03 2022-05-31 百果园技术(新加坡)有限公司 一种业务对象的分类方法、装置、设备及存储介质

Also Published As

Publication number Publication date
CN114564516A (zh) 2022-05-31

Similar Documents

Publication Publication Date Title
CN109919316B (zh) 获取网络表示学习向量的方法、装置和设备及存储介质
Asadi et al. Detecting botnet by using particle swarm optimization algorithm based on voting system
Adewole et al. Malicious accounts: Dark of the social networks
CN108717408B (zh) 一种敏感词实时监控方法、电子设备、存储介质及系统
Yao et al. RDAM: A reinforcement learning based dynamic attribute matrix representation for virtual network embedding
Wang et al. Incremental subgraph feature selection for graph classification
Hamrouni et al. A survey of dynamic replication and replica selection strategies based on data mining techniques in data grids
Junaid et al. Modeling an optimized approach for load balancing in cloud
Liu et al. Fast attributed multiplex heterogeneous network embedding
Kaur et al. Dynamic resource allocation for big data streams based on data characteristics (5 V s)
Li et al. A review of improved extreme learning machine methods for data stream classification
WO2023165352A1 (zh) 一种业务对象的分类方法、装置、设备及存储介质
Zhu et al. CCBLA: a lightweight phishing detection model based on CNN, BiLSTM, and attention mechanism
Zhu et al. BGCL: Bi-subgraph network based on graph contrastive learning for cold-start QoS prediction
Ying et al. FrauDetector+ An Incremental Graph-Mining Approach for Efficient Fraudulent Phone Call Detection
Zhang et al. Quality of web service prediction by collective matrix factorization
Nikoloska et al. Data selection scheme for energy efficient supervised learning at iot nodes
CN113055890B (zh) 一种面向移动恶意网页的多设备组合优化的实时检测系统
Chen et al. A user dependent web service QoS collaborative prediction approach using neighborhood regularized matrix factorization
CN111935259B (zh) 目标帐号集合的确定方法和装置、存储介质及电子设备
Li et al. Graphmf: Qos prediction for large scale blockchain service selection
CN115048530A (zh) 融合邻居重要度和特征学习的图卷积推荐系统
Wu et al. A domain generalization pedestrian re-identification algorithm based on meta-graph aware
Yang et al. Improving blog spam filters via machine learning
Liu et al. Prediction model for non-topological event propagation in social networks

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23762753

Country of ref document: EP

Kind code of ref document: A1