WO2022152161A1 - Training and prediction of a hybrid graph neural network model - Google Patents

Training and prediction of a hybrid graph neural network model

Info

Publication number
WO2022152161A1
Authority
WO
WIPO (PCT)
Prior art keywords
graph
instance
data
target
user
Application number
PCT/CN2022/071577
Other languages
English (en)
French (fr)
Inventor
李厚意
张国威
曾馨檀
李勇勇
刘永超
黄斌
何昌华
Original Assignee
蚂蚁智信(杭州)信息技术有限公司
Application filed by 蚂蚁智信(杭州)信息技术有限公司
Priority to US18/272,194 (published as US20240152732A1)
Publication of WO2022152161A1


Classifications

    • G06N Computing arrangements based on specific computational models; G06N3/02 Neural networks:
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G06N3/045 Combinations of networks
    • G06N3/0455 Auto-encoder networks; Encoder-decoder networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G06Q30/0202 Market predictions or forecasting for commercial activities

Definitions

  • This specification relates to the technical field of data processing, and in particular, to a method and device for training a hybrid graph neural network, and a method and device for predicting using a hybrid graph neural network.
  • Graphs have strong expressive power and can be used as data structures to model networks in various fields, such as social networks. A graph typically describes a specific relationship between things: a point (node) represents a thing, and a line (edge) connecting two points indicates that the relationship holds between the corresponding two things.
  • Graph Neural Networks (GNNs) are deep-learning-based algorithms operating on the graph domain; with convincing performance and high interpretability, they have become a widely used graph analysis method.
  • In some machine learning tasks, however, part of the input data is not suitable to be represented as information in the graph domain, such as a series of data with temporal relationships.
  • A hybrid graph neural network model combines graph neural network algorithms with other machine learning algorithms and can greatly improve prediction performance in these application scenarios.
  • The present specification provides a training method for a hybrid graph neural network model, wherein the hybrid graph neural network model includes an encoding function and a decoding function; the encoding function is a graph neural network algorithm with encoding parameters, or a combination of such algorithms, and the decoding function is a machine learning algorithm with decoding parameters, or a combination of such algorithms.
  • The method includes: taking the instances corresponding to all the targets in the training samples, together with several-degree neighbors of those instances, as points in the graph and, based on the graph data of all instances, using the encoding function to generate a graph representation vector for each instance; performing t rounds of training on the decoding parameters, where in each round bs targets are extracted from the training samples, the decoding function generates a prediction for each target based on the graph representation vector of the instance(s) corresponding to that target and the corresponding non-graph data, and the decoding parameters are optimized according to the loss of the round determined from the predictions and labels of the round's bs targets (bs is a natural number, t is a natural number greater than 1); optimizing the encoding parameters according to the losses of the t rounds; and repeating all the above steps until a predetermined training termination condition is met. A code sketch of this loop follows below.
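  • The following is a minimal PyTorch-style sketch of the two-stage loop just described. All names (encoder, decoder, sampler, graph) and the choice of MSE loss and SGD are illustrative assumptions, not prescribed by this specification:

```python
import torch
import torch.nn.functional as F

def train_hybrid_gnn(encoder, decoder, graph, sampler, bs, t, R, lr=0.01):
    """Sketch: encode all instances once, run t decoder rounds, then update the encoder."""
    enc_opt = torch.optim.SGD(encoder.parameters(), lr=lr)
    dec_opt = torch.optim.SGD(decoder.parameters(), lr=lr)
    for _ in range(R):                        # termination condition here: R encoder updates
        H = encoder(graph)                    # graph representation vectors of ALL instances, one pass
        Hd = H.detach().requires_grad_(True)  # decouple the t decoder rounds from the encoder
        for _ in range(t):
            ids, non_graph, labels = sampler(bs)   # draw bs targets from the training samples
            pred = decoder(Hd[ids], non_graph)     # prediction from graph vectors + non-graph data
            loss = F.mse_loss(pred, labels)        # any loss function may be used here
            dec_opt.zero_grad()
            loss.backward()                   # also accumulates d(loss)/dH across the t rounds
            dec_opt.step()                    # optimize the decoding parameters
        enc_opt.zero_grad()
        H.backward(Hd.grad)                   # push the accumulated gradients back through the GNN
        enc_opt.step()                        # optimize the encoding parameters
```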
  • This specification also provides a prediction method for a hybrid graph neural network model, wherein the hybrid graph neural network model includes an encoding function and a decoding function; the encoding function is a graph neural network algorithm whose encoding parameters have been trained according to the above training method, and the decoding function is a machine learning algorithm whose decoding parameters have been trained according to the above training method. The method includes: taking all instances corresponding to the targets to be predicted, together with several-degree neighbors of those instances, as points in the graph and, based on the graph data of all instances, using the encoding function to generate a graph representation vector for each instance; and, based on the graph representation vector of the instance(s) corresponding to each target to be predicted and the corresponding non-graph data, using the decoding function to generate a prediction of the target.
  • This specification also provides a training device for a hybrid graph neural network model, the hybrid graph neural network model including an encoding function and a decoding function, where the encoding function is a graph neural network algorithm with encoding parameters or a combination of such algorithms, and the decoding function is a machine learning algorithm with decoding parameters or a combination of such algorithms. The device includes: a training graph representation vector unit, used to take the instances corresponding to all targets in the training samples and several-degree neighbors of those instances as points in the graph and, based on the graph data of all instances, use the encoding function to generate a graph representation vector for each instance; a decoding parameter training unit, used to perform t rounds of training on the decoding parameters, where in each round bs targets are extracted from the training samples, the decoding function generates a prediction for each target based on the graph representation vector of the corresponding instance(s) and the corresponding non-graph data, and the decoding parameters are optimized according to the loss of the round determined from the predictions and labels of the round's bs targets (bs is a natural number, t is a natural number greater than 1); an encoding parameter training unit, used to optimize the encoding parameters according to the losses of the t rounds; and a training loop unit, used to rerun all the above units until a predetermined training termination condition is met.
  • This specification also provides a prediction device for a hybrid graph neural network model, the hybrid graph neural network model including an encoding function and a decoding function; the encoding function is a graph neural network algorithm whose encoding parameters have been trained according to the above training method, and the decoding function is a machine learning algorithm whose decoding parameters have been trained according to the above training method. The device includes: a prediction graph representation vector unit, used to take all instances corresponding to the targets to be predicted and several-degree neighbors of those instances as points in the graph and, based on the graph data of all instances, use the encoding function to generate a graph representation vector for each instance; and a prediction unit, used to generate the prediction of each target with the decoding function, based on the graph representation vector of the corresponding instance(s) and the corresponding non-graph data.
  • This specification provides a computer device including a memory and a processor; the memory stores a computer program runnable by the processor, and when the processor runs the computer program, the steps of the above training method for a hybrid graph neural network model are executed.
  • This specification provides a computer device including a memory and a processor; the memory stores a computer program runnable by the processor, and when the processor runs the computer program, the steps of the above prediction method for a hybrid graph neural network model are executed.
  • This specification provides a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the steps of the above training method for a hybrid graph neural network model are executed.
  • This specification also provides a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the steps of the above prediction method for a hybrid graph neural network model are executed.
  • It can be seen that in the training method of the embodiments of this specification, the encoding function converts the graph data of the instances into graph representation vectors, the decoding function generates predictions of the training targets based on the graph representation vectors and the non-graph data corresponding to the targets, and the decoding parameters and encoding parameters are optimized according to the differences between predictions and labels. While the encoding parameters are unchanged, the graph data of all instances is converted into graph representation vectors in a single pass, avoiding redundant, repeated processing of the graph data and improving training speed; at the same time, the decoding function jointly considers the graph representation vector and the non-graph data of each instance, realizing efficient training of the hybrid graph neural network model.
  • Likewise, in the prediction method of the embodiments of this specification, the graph data of all instances is converted into graph representation vectors by the encoding function in a single pass, and the decoding function generates the prediction of each target based on the graph representation vector and the corresponding non-graph data; the decoding function jointly considers the graph representation vector and non-graph data of each instance, realizing efficient prediction with the hybrid graph neural network model.
  • FIG. 1 is a flowchart of a training method of a hybrid graph neural network model in Embodiment 1 of this specification;
  • FIG. 2 is a logical structure diagram of a hybrid graph neural network model training system in two exemplary implementations of Embodiment 1 of the present specification;
  • FIG. 3 is a flowchart of a prediction method for a hybrid graph neural network model in Embodiment 2 of this specification;
  • FIG. 4 is a logical structure diagram of a hybrid graph neural network model prediction system in two exemplary implementations of Embodiment 2 of this specification;
  • FIG. 5 is a hardware structure diagram of a device running the embodiments of this specification;
  • FIG. 6 is a logical structure diagram of a training device for a hybrid graph neural network model in an embodiment of the present specification
  • FIG. 7 is a logical structure diagram of a prediction apparatus for a hybrid graph neural network model in an embodiment of the present specification.
  • the graph in the hybrid graph neural network model is constructed with instances as points and relationships between instances as edges.
  • Instances can be any subject in practical application scenarios, such as users, commodities, stores, suppliers, sites, deliverymen, web pages, user terminals, buildings, and so on.
  • Hybrid graph neural network models are used to predict instance-related states, behaviors, etc.
  • the state can be information that can describe the subject, such as the category of the instance, the attributes of the instance, etc.
  • the behavior can be a behavior performed by the instance, or a behavior performed on the instance.
  • the matching degree between the first type of subject and the second type of subject can also be used as the prediction target. In this case, one of the subjects can be used as an instance, and the other type can be used as the associated object of the instance.
  • the targets predicted by the hybrid graph neural network model in the embodiments of this specification are targets related to determined instances; that is, all instances in the graph involved in predicting a target can be identified from the target itself.
  • the target of the hybrid graph neural network model corresponds to at least one instance.
  • For example, a hybrid graph neural network model may take users as instances to construct the graph, with the prediction target being a certain user's consumption over the next several days; the target then corresponds to that user.
  • If a hybrid graph neural network model is used to predict the number of times a web page is referenced by other web pages, the model takes web pages as instances, and its target corresponds to a certain web page.
  • If a hybrid graph neural network model uses several commodities that a user has clicked in the past to predict the user's interest in a commodity to be recommended, the model takes commodities as instances, its prediction target is the matching degree between the user and the target commodity, and the instances corresponding to the target include the target commodity and the several commodities the user has clicked on.
  • the hybrid graph neural network model includes an encoding function and a decoding function.
  • the encoding function can be any graph neural network algorithm, or a combination of one or more graph neural network algorithms;
  • the decoding function can be any machine learning algorithm, including a graph neural network, or a combination of one or more such algorithms, for example DNN (Deep Neural Network), RNN (Recurrent Neural Network), LSTM (Long Short-Term Memory network), Wide & Deep, and combinations of these algorithms.
  • the input of the hybrid graph neural network model is various data related to the instance corresponding to the target.
  • Among these data, the data suitable for being expressed in the form of a graph and for iteration or processing by a graph neural network algorithm is used as the graph data of the instance; the graph data is input to the encoding function, which processes (encodes) it and outputs the graph representation vector of the instance. The input data of the hybrid graph neural network model other than the graph data is called non-graph data and is input to the decoding function.
  • the output of the decoding function is the output of the hybrid graph neural network model; the prediction it outputs can be a value or a vector, which is not limited.
  • the learnable parameters used in the encoding function are called encoding parameters, and the learnable parameters used in the decoding function are called decoding parameters;
  • the training process of the model is the process of modifying the learnable parameters so that the output of the model becomes closer to the labels of the model's targets in the training samples.
  • When predicting a target, the attribute information of the instance(s) corresponding to the target and the behavior information related to the instance(s) should be considered. From the attribute information of an instance, the instance's own attribute data and its relational data with other instances can be obtained; from the behavior information related to the instance, relational data with other instances and behavior sequence information derived from the instance's historical behavior records (i.e., time series data related to the instance) can be obtained.
  • Relational data between an instance and other instances can be conveniently expressed as edges in the graph and is suitable for processing by graph neural network algorithms; it is therefore usually expressed as edges in the graph and used as graph data.
  • the time series data related to the instance is not suitable to be expressed in the form of a graph, and is usually used as the input of the decoding function.
  • Although the attribute data of the instance itself can be conveniently expressed as attributes of points in the graph, not all attribute data is suitable for processing by a graph neural network algorithm.
  • For example, sparse data in the instance's own attribute data is often more suitable as input to the decoding function, which usually yields a better prediction of the target.
  • the part of the instance's own attribute data that is input to the encoding function is called the instance's own point data
  • the part of the instance's own attribute data that is input to the decoding function is called the instance's own non-point data.
  • The instance's own dense data is usually used as the instance's own point data, while the instance's own sparse data can be used either as point data or as non-point data. In some application scenarios, the dense data is used as the instance's own point data and the sparse data as the instance's own non-point data; in other application scenarios, one part of the dense data is used as point data while the other part of the dense data, together with the sparse data, is used as non-point data.
  • Here, dense data is data that can be represented by a single value or a low-dimensional vector, and sparse data is data represented by a high-dimensional vector in which only a few elements have values.
  • For example, a user's account balance and age can each be expressed as a single value and are dense data, while the bank cards held by a user are sparse data: a user usually holds only a few cards, so this data is represented by a vector whose dimension may be in the hundreds of thousands but in which only the few elements corresponding to the user's cards have the value 1.
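  • As a small illustration of this distinction, the following Python sketch builds both kinds of features; the vocabulary size and card IDs are made-up values:

```python
import numpy as np
from scipy.sparse import csr_matrix

# Dense data: a single value (or low-dimensional vector) per feature.
dense = np.array([5230.75, 34.0])   # e.g. [account balance, age], illustrative values

# Sparse data: a very high-dimensional multi-hot vector with few non-zeros.
CARD_DIM = 300_000                  # assumed number of distinct bank cards
user_cards = [1842, 95013]          # the few cards this user actually holds
sparse = csr_matrix(
    (np.ones(len(user_cards)), ([0] * len(user_cards), user_cards)),
    shape=(1, CARD_DIM),
)
print(sparse.nnz, "non-zero elements out of", CARD_DIM, "dimensions")  # 2 non-zeros
```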
  • To this end, Embodiment 1 of this specification proposes a new training method for a hybrid graph neural network model: while the encoding parameters are unchanged, the encoding function calculates the graph representation vectors of all instances in a single pass; the graph representation vectors of the instances corresponding to a target, together with the target's non-graph data, are used as input to the decoding function to calculate the prediction of the training target; and the encoding parameters and decoding parameters are optimized accordingly. This avoids redundant, repeated computation on the instances' graph data, reducing the amount of computation and speeding up training, while the decoding function jointly considers the influence of graph data and non-graph data on the prediction, realizing efficient training of the hybrid graph neural network model.
  • Embodiment 1 of this specification can run on any device with computing and storage capabilities, such as a mobile phone, tablet computer, PC (Personal Computer), notebook, or server; the functions of Embodiment 1 can also be implemented by logical nodes running on two or more devices.
  • In Embodiment 1 of the present specification, the flow of the training method of the hybrid graph neural network model is shown in FIG. 1.
  • The training in Embodiment 1 is supervised learning: the training samples include the input data of the hybrid graph neural network model and the labels (expected outputs) of the targets, and the input data includes the graph data of the instances (input to the encoding function) and the non-graph data corresponding to the targets (input to the decoding function).
  • the encoding parameters in the encoding function and the decoding parameters in the decoding function are initialized to initial values.
  • any method may be used to set the initial values of the encoding parameters and the decoding parameters.
  • Step 110: Taking the instances corresponding to all targets in the training samples and several-degree neighbors of these instances as points in the graph, and based on the graph data of all instances, use the encoding function to generate a graph representation vector for each instance.
  • Each instance corresponding to each target in the training samples is a point in the graph, and the point set of the graph includes not only all instances corresponding to all targets in the training samples, but also other instances that may become several-degree neighbors of those instances.
  • For example, if the training samples involve 100 million commodities, and another 900 million commodities may become 1-degree to k-degree neighbors of those 100 million commodities, then the point set of the graph of the hybrid graph neural network model can be these 1 billion commodities.
  • the graph data of an instance includes one or more items of the following: self-point data in the instance's own attribute data, and relational data with other instances.
  • the own point data of the instance is used to express the characteristics of the points in the graph
  • the relational data with other instances is used to express the characteristics of the edges in the graph (the association between different points).
  • the relational data with other instances may be relational data between a point and a certain-order neighbor, or may be a combination of relational data between a point and several neighbors of each order, which is not limited.
  • the graph neural network algorithm used by the encoding function can convert the graph data of an instance into the graph representation vector of the instance.
  • In this step, the graph representation vector of every instance in the graph is generated from the graph data of all instances in a single pass; after the encoding parameters are changed (optimized), this step is repeated.
  • Step 120: Perform t rounds of training on the decoding parameters (t is a natural number greater than 1). In each round, extract bs targets from the training samples (bs is a natural number); based on the graph representation vector of the instance(s) corresponding to each target and the corresponding non-graph data, use the decoding function to generate a prediction for each target, and optimize the decoding parameters according to the loss of the round determined from the predictions and labels of the round's bs targets.
  • In Embodiment 1 of this specification, for a given set of encoding parameters, t rounds of training are performed on the decoding parameters; in other words, each time the encoding parameters are optimized once, the decoding parameters are optimized t times.
  • Each target has its own determined instance or instances. For each target, the graph representation vector(s) of the corresponding instance(s) and the non-graph data corresponding to the target are used as the input of the decoding function, and the output of the decoding function is the prediction of the target.
  • the non-graph data corresponding to a target may include one or more of: the own non-point data of the target's instance(s), and time series data related to that instance or those instances.
  • From the predictions and labels of the bs targets extracted in a round, the loss of that round can be obtained, and the decoding parameters are optimized according to this loss.
  • Any loss function can be selected according to the characteristics of the actual application scenario to calculate the round's loss over the bs targets, such as the cross-entropy, least-squares, absolute-error, or mean-squared-error loss functions; likewise, any optimizer can be used to modify the decoding parameters according to the round's loss, such as a gradient descent optimizer or the Adam optimizer; neither is limited.
  • In one example, a predetermined loss function calculates the loss of each target in the round from its prediction and label, the losses of the bs targets are combined into the loss of the round, the gradient of this loss with respect to the decoding parameters is calculated, and the decoding parameters are optimized according to the calculated gradient.
  • After performing t rounds of decoding parameter training, step 130 is entered.
  • Step 130: Optimize the encoding parameters according to the losses of the above t rounds.
  • Any optimization function can be selected according to the characteristics of the actual application scenario to modify the coding parameters according to the loss of the t rounds, which is not limited.
  • In one example, for the loss of each round, the gradients with respect to the graph representation vectors of the instances corresponding to that round's bs targets are calculated first, so that the losses of the t rounds yield bs×t gradients with respect to graph representation vectors in total; these bs×t gradients are then used to optimize the encoding parameters.
  • Specifically, the gradients from the t rounds can be accumulated on the graph representation vector of each instance corresponding to each round's bs targets; the gradient of the loss with respect to the encoding parameters is then determined from the gradients accumulated on these graph representation vectors, and this gradient is finally used to optimize the encoding parameters. If an instance recurs across rounds, its graph representation vector accumulates gradients from more than one round. The specific accumulation method is not limited; for example, it may be a sum of the gradients, a weighted sum of the gradients, and so on.
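  • In equation form (a reconstruction from the description above, since the original equations are not reproduced in this text), with $\mathcal{L}_r$ the loss of round $r$ and $H_{r,i}$ the graph representation vector of the instance of the $i$-th target in round $r$, the encoder gradient follows from the chain rule:

$$\frac{\partial \mathcal{L}}{\partial W} \;=\; \sum_{r=1}^{t} \sum_{i=1}^{bs} \frac{\partial \mathcal{L}_r}{\partial H_{r,i}} \cdot \frac{\partial H_{r,i}}{\partial W}$$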
  • Step 140: Repeat all the above steps until the predetermined training termination condition is met.
  • After completing the optimization of the encoding parameters, it is determined whether the predetermined training termination condition is satisfied. If it is satisfied, the training of the hybrid graph neural network model is complete and the training process ends; if not, steps 110 to 130 are repeated.
  • That is, step 110 recalculates the graph representation vectors of all instances with the updated encoding parameters, and the newly calculated graph representation vectors are used for the t rounds of decoding parameter training in step 120 and the encoding parameter optimization in step 130.
  • any predetermined training termination condition may be adopted.
  • For example, optimizing the encoding parameters R times (R is a natural number greater than 1) can serve as the predetermined training termination condition, so that the training of the hybrid graph neural network model completes after steps 110 to 130 have been performed R times.
  • In a first exemplary implementation of Embodiment 1, a hybrid graph neural network model is used to classify instances. All instances corresponding to all targets in the training samples, together with other instances that may be their several-degree neighbors, form the point set; the relationships between all points form the edge set ε, and together they constitute the graph.
  • the graph data of an instance includes the instance's own point data and relationship data with other instances, and the non-graph data corresponding to the target includes the instance's own non-point data and time series data generated according to the instance's historical behavior information.
  • X is the own point data of all instances
  • E is the relational data of all instances and other instances
  • A is the adjacency matrix expressing the topological relationships between the points and edges in the graph
  • f is the encoding function
  • W is the encoding parameter
  • g is the decoding function
  • θ is the decoding parameter.
  • The target of each training sample includes a target identifier and a target label; the label of target ID_i is Y_i. The target identifier represents the instance corresponding to the target: the instance corresponding to target ID_i is v_i, and the label Y_i of target ID_i represents the category to which the target belongs. The graph representation vector of instance v_i is H_i, its own non-point data is B_i, and the time series data corresponding to target ID_i is S_i.
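  • The original formulas are rendered as images in the source and are not reproduced here; in the notation just defined, the model can be written as the following reconstruction (an assumption consistent with the surrounding text):

$$\{H_i\} = f(X, E, A;\, W), \qquad \hat{Y}_i = g(H_i, B_i, S_i;\, \theta)$$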
  • As shown in FIG. 2, the training system includes a training encoder and a training decoder. The training encoder includes an encoding function and encoding parameter module, a graph representation vector calculation module, a graph representation vector storage module, a training encoder gradient calculation and parameter optimization module, and a gradient receiving module; the training decoder includes a decoding function and decoding parameter module, a graph representation vector query module, a prediction and loss calculation module, a training decoder gradient calculation and parameter optimization module, and a gradient sending module.
  • the encoding function and encoding parameter module stores the encoding function f and the encoding parameter W;
  • the decoding function and decoding parameter module stores the decoding function g and the decoding parameter θ.
  • the training system runs as follows:
  • Step S02: When starting training, set the encoding parameter W in the encoding function and encoding parameter module and the decoding parameter θ in the decoding function and decoding parameter module to initial values; set the encoding parameter optimization count r1 to 0 and the decoding parameter optimization count r2 to 0.
  • Step S04: In the training encoder, based on the current encoding parameter W, the graph representation vector calculation module uses Formula 1 to calculate, in a single pass, the graph representation vectors of all instances in the point set (including all instances in the training samples and their several-degree neighbors).
  • Step S06: In the training encoder, the graph representation vector storage module, using the instance identifier as an index, saves the correspondence between each instance's identifier v_i and the instance's graph representation vector H_i.
  • Step S08: In the training decoder, extract bs targets from the target set of the training samples. For each extracted target ID_i (i ∈ [1, bs]), the graph representation vector query module obtains the graph representation vector H_i of the corresponding instance v_i from the graph representation vector storage module of the training encoder, and splices it with the instance's own non-point data B_i and the time series data S_i corresponding to target ID_i into one input (H_i, B_i, S_i) of the decoding function g.
  • Step S10: In the training decoder, based on the current decoding parameter θ, the prediction and loss calculation module obtains the predictions of the bs targets extracted in step S08 by Formula 2; letting the loss function be l, it then uses Formula 3 to obtain the loss of this round (i.e., round r2).
  • Step S12: In the training decoder, based on the loss of this round, the training decoder gradient calculation and parameter optimization module obtains the gradient of the round's loss with respect to the decoding parameters from Equation 4, optimizes the decoding parameters according to this gradient, and updates the decoding parameters in the decoding function and decoding parameter module to the optimized values. If the gradient descent method is used, the optimized decoding parameters are obtained from Equation 5, in which α is the learning rate of the gradient descent method.
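  • Equations 4 and 5 are not reproduced in this text; a standard reconstruction consistent with the description (the gradient of the round's loss with respect to θ, followed by a gradient-descent step with learning rate α) is:

$$\theta \;\leftarrow\; \theta - \alpha \, \frac{\partial \mathcal{L}_{r2}}{\partial \theta}$$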
  • Step S14: In the training decoder, the training decoder gradient calculation and parameter optimization module uses Equation 6 to calculate the gradients of this round's loss with respect to the bs graph representation vectors, and passes the calculated bs gradient vectors to the gradient sending module.
  • Step S16: The gradient sending module of the training decoder sends the bs gradient vectors to the gradient receiving module of the training encoder, which saves the received bs gradient vectors.
  • Step S18: Increase r2 by 1. If r2 is not divisible by t, go to step S08; if r2 is divisible by t, go to step S20.
  • Step S20: In the training encoder, the training encoder gradient calculation and parameter optimization module reads the stored bs×t gradient vectors from the gradient receiving module, uses Equation 7 to calculate the gradient of the t rounds' losses with respect to the encoding parameters, and then optimizes the encoding parameters accordingly. If the gradient descent method is used, the optimized encoding parameters are obtained from Equation 8.
  • Step S22: Increase r1 by 1. If r1 < R, go to step S04; otherwise, go to step S24.
  • Step S24: Training ends. At this point, the encoding parameter W_E in the encoding function and encoding parameter module is the trained encoding parameter, and the decoding parameter θ_E in the decoding function and decoding parameter module is the trained decoding parameter.
  • a second exemplary implementation manner of Embodiment 1 of the present specification is given below.
  • a hybrid graph neural network model is used to predict the degree of matching between instances and objects.
  • Each training sample includes a target instance v_i for a certain object u_i, and N instances v_ij (j ∈ [1, N]) with which the object u_i previously had historical behavior. The instances corresponding to each target therefore number (N+1): v_i and v_ij, j ∈ [1, N].
  • The training samples also include target identifiers and target labels: the label of target ID_i is Y_i, and the representation vector of the object u_i of target ID_i is U_i.
  • the graph data of an instance includes the instance's own point data and relational data with other instances, and the non-graph data corresponding to the target includes a representation vector of the target's object.
  • X is the own point data of all instances
  • E is the relational data of all instances and other instances
  • A is the adjacency matrix expressing the topological relationships between the points and edges in the graph
  • f is the encoding function
  • W is the encoding parameter
  • g is the decoding function
  • θ is the decoding parameter.
  • the logical structure of the hybrid graph neural network model training system is shown in FIG. 2 .
  • the training system runs as follows:
  • Step S32: When starting training, set the encoding parameter W in the encoding function and encoding parameter module and the decoding parameter θ in the decoding function and decoding parameter module to initial values; set the encoding parameter optimization count r1 to 0 and the decoding parameter optimization count r2 to 0.
  • Step S34: In the training encoder, based on the current encoding parameter W, the graph representation vector calculation module uses Equation 1 to calculate, in a single pass, the graph representation vectors of all instances in the point set (including all instances in the training samples and their several-degree neighbors).
  • Step S36: In the training encoder, the graph representation vector storage module, using the instance identifier as an index, saves the correspondence between each instance's identifier v_i and the instance's graph representation vector H_i.
  • Step S38: In the training decoder, extract bs targets from the target set of the training samples. For each extracted target ID_i (i ∈ [1, bs]), the graph representation vector query module obtains from the graph representation vector storage module of the training encoder the graph representation vectors of the instances v_i and v_ij (j ∈ [1, N]) corresponding to target ID_i, and concatenates them with the representation vector U_i of the object corresponding to target ID_i into one input of the decoding function g.
  • Step S40: In the training decoder, based on the current decoding parameter θ, the prediction and loss calculation module obtains the predictions of the bs targets extracted in step S38 by Formula 9; letting the loss function be l, it then uses Formula 3 to obtain the loss of this round (i.e., round r2).
  • Step S42: In the training decoder, based on the loss of this round, the training decoder gradient calculation and parameter optimization module obtains the gradient of the round's loss with respect to the decoding parameters from Equation 4, optimizes the decoding parameters according to this gradient, and updates the decoding parameters in the decoding function and decoding parameter module to the optimized values.
  • Step S44: In the training decoder, the training decoder gradient calculation and parameter optimization module uses Equation 6 and Equation 10 to calculate the gradients of this round's loss with respect to the bs×(N+1) graph representation vectors, and passes the calculated bs×(N+1) gradient vectors to the gradient sending module.
  • Step S46: The gradient sending module of the training decoder sends the bs×(N+1) gradient vectors to the gradient receiving module of the training encoder, which saves the received bs×(N+1) gradient vectors.
  • Step S48: Increase r2 by 1. If r2 is not divisible by t, go to step S38; if r2 is divisible by t, go to step S50.
  • Step S50: In the training encoder, the training encoder gradient calculation and parameter optimization module reads the saved bs×(N+1)×t gradient vectors from the gradient receiving module, uses Equation 11 to calculate the gradient of the t rounds' losses with respect to the encoding parameters, and then optimizes the encoding parameters accordingly.
  • Step S52: Increase r1 by 1. If r1 < R, go to step S34; otherwise, go to step S54.
  • Step S54: Training ends. At this point, the encoding parameter W_E in the encoding function and encoding parameter module is the trained encoding parameter, and the decoding parameter θ_E in the decoding function and decoding parameter module is the trained decoding parameter.
  • It can be seen that while the encoding parameters are unchanged, the encoding function calculates the graph representation vectors of all instances in a single pass, the decoding function generates the predictions of the training targets based on the graph representation vectors and the non-graph data corresponding to the targets, and the encoding and decoding parameters are optimized according to the predictions and labels. This avoids redundant, repeated processing of the graph data, reduces the amount of computation, and speeds up training, while the decoding function jointly considers the graph representation vector and the non-graph data of each instance, realizing efficient training of the hybrid graph neural network model.
  • The second embodiment of this specification proposes a new prediction method for a hybrid graph neural network model: the encoding function calculates the graph representation vectors of all instances in a single pass, and the graph representation vectors of the instances and the non-graph data corresponding to each target are used as input to the decoding function, which calculates the prediction of the target to be predicted. This avoids redundant, repeated computation on the instances' graph data, reduces the amount of computation, and speeds up prediction, realizing efficient prediction with the hybrid graph neural network model.
  • Embodiment 2 of this specification can run on any device with computing and storage capabilities, such as a mobile phone, tablet computer, PC, notebook, or server; the functions of Embodiment 2 can also be implemented by logical nodes running on two or more devices.
  • In Embodiment 2, the hybrid graph neural network model is one trained by the training method of Embodiment 1 of this specification; that is, its encoding function is a graph neural network algorithm with encoding parameters trained by the method of Embodiment 1, and its decoding function is a machine learning algorithm with decoding parameters trained by the method of Embodiment 1.
  • the input data of the hybrid graph neural network model includes the graph data of the instances (input to the encoding function) and the non-graph data corresponding to the targets (input to the decoding function).
  • each target to be predicted of the hybrid graph neural network model corresponds to one or more instances.
  • Step 310: Taking all instances corresponding to the targets to be predicted and several-degree neighbors of these instances as points in the graph, and based on the graph data of all instances, use the encoding function to generate a graph representation vector for each instance.
  • Each instance corresponding to each target to be predicted is a point in the graph, and the point set of the graph includes not only all instances corresponding to all targets to be predicted, but also other instances that may become several-degree neighbors of those instances.
  • the graph data of an instance includes one or more items of the following: self-point data in the instance's own attribute data, and relational data with other instances.
  • the own point data of the instance is used to express the characteristics of the points in the graph
  • the relational data with other instances is used to express the characteristics of the edges in the graph (the association between different points).
  • the relational data with other instances may be relational data between a point and a certain-order neighbor, or may be a combination of relational data between a point and several neighbors of each order, which is not limited.
  • the encoding function adopts the trained graph neural network algorithm, and generates its graph representation vector for each instance in the graph at one time according to the graph data of all instances.
  • Step 320: Based on the graph representation vector of the instance(s) corresponding to the target to be predicted, and the corresponding non-graph data, use the decoding function to generate a prediction of the target.
  • the graph representation vectors of the instances corresponding to each target and the corresponding non-graph data are input into the trained decoding function, and the outputs of the decoding function are the predictions of each target.
  • the non-graph data corresponding to the target may be one or more items of the instance's own non-point data and time series data related to the instance corresponding to the target.
  • When the number of targets to be predicted is relatively large, ps targets (ps is a natural number) can be predicted per round until all targets are predicted. Specifically, ps targets are extracted from the set of targets to be predicted; for each extracted target, the graph representation vector(s) of its corresponding instance(s) and the corresponding non-graph data are input to the machine learning algorithm of the decoding function to obtain the prediction of each of the ps targets; the ps targets extracted in this round are then deleted from the target set, and if the target set is not empty, another ps targets are extracted for prediction, until the target set is empty. Note that the last round may find fewer than ps targets remaining in the target set, in which case the number of extracted targets is less than ps.
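  • A short Python sketch of this batched prediction loop follows; the helper names (encoder, decoder, target fields) are assumptions:

```python
def predict_all(encoder, decoder, graph, targets, ps):
    """Sketch: encode every instance once, then consume the target set ps at a time."""
    H = encoder(graph)                  # one-pass graph representation vectors
    predictions = {}
    pending = list(targets)             # the set of targets to be predicted
    while pending:
        batch, pending = pending[:ps], pending[ps:]   # the last round may hold < ps targets
        for tgt in batch:
            h = H[tgt.instance_ids]                   # vectors of the target's instance(s)
            predictions[tgt.id] = decoder(h, tgt.non_graph_data)
    return predictions
```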
  • In a first exemplary implementation of Embodiment 2, the hybrid graph neural network model is one trained through the first exemplary implementation of Embodiment 1 of this specification and is used to classify instances; its encoding function f, decoding function g, instance graph data, and target non-graph data are the same as in the first exemplary implementation of Embodiment 1, with trained encoding parameter W_E and trained decoding parameter θ_E.
  • All instances corresponding to the targets to be predicted and several-degree neighbors of these instances form the point set; the relationships between all points form the edge set ε, and together they constitute the graph.
  • X is the own point data of all instances
  • E is the relational data of all instances and other instances
  • A is the adjacency matrix expressing the topological relationships between the points and edges in the graph
  • Each target to be predicted includes a target identifier; the instance corresponding to target ID_i is v_i, the graph representation vector of instance v_i is H_i, its own non-point data is B_i, and the time series data is S_i.
  • the prediction system includes a prediction encoder and a prediction decoder.
  • the prediction encoder includes an encoding function and an encoding parameter module, a graph representation vector calculation module, and a graph representation vector storage module;
  • the prediction decoder includes a decoding function and decoding parameter module, a graph representation vector query module, and a prediction calculation module.
  • the encoding function and encoding parameter module stores the encoding function f and the trained encoding parameter W_E;
  • the decoding function and decoding parameter module stores the decoding function g and the trained decoding parameter θ_E.
  • the prediction system operates as follows:
  • Step S62: In the prediction encoder, using the encoding parameter W_E, the graph representation vector calculation module uses Equation 12 to calculate, in a single pass, the graph representation vectors {H_i} of all points in the graph (including all instances to be predicted and their several-degree neighbors).
  • Step S64: In the prediction encoder, the graph representation vector storage module, using the instance identifier as an index, stores the correspondence between each instance's identifier v_i and the instance's graph representation vector H_i.
  • Step S68: In the prediction decoder, take out Ct = ps targets from the set of targets to be predicted (if the total number of targets remaining in the target set is less than ps, take out all remaining targets and set Ct to that number). For each extracted target ID_i (i ∈ [1, Ct]), the graph representation vector query module obtains the graph representation vector H_i of the corresponding instance v_i from the graph representation vector storage module of the prediction encoder, and splices it with the instance's own non-point data B_i and time series data S_i into one input (H_i, B_i, S_i) (i ∈ [1, Ct]) of the decoding function g.
  • Step S70: In the prediction decoder, the prediction calculation module, using the decoding parameter θ_E, obtains from Formula 13 the predictions of this round's Ct targets taken out in step S68.
  • Step S72 Delete the Ct targets taken out in this round from the target set to be predicted. If the target set is empty, the prediction ends; if the target set is not empty, go to step S68.
  • In a second exemplary implementation of Embodiment 2, the hybrid graph neural network model is one trained through the second exemplary implementation of Embodiment 1 of this specification and is used to predict the degree of matching between an instance and an object; its encoding function f, decoding function g, instance graph data, and target non-graph data are the same as in the second exemplary implementation of Embodiment 1, with trained encoding parameter W_E and trained decoding parameter θ_E.
  • Each target to be predicted includes a target identifier; the instances corresponding to target ID_i number (N+1), namely v_i and v_ij, j ∈ [1, N]. The graph representation vector of instance v_i is H_i, and the representation vector of the object u_i of target ID_i is U_i.
  • All instances corresponding to the targets to be predicted and several-degree neighbors of these instances form the point set; the relationships between all points form the edge set ε, and together they constitute the graph.
  • X is the own point data of all instances
  • E is the relational data of all instances and other instances
  • A is the adjacency matrix expressing the topological relationships between the points and edges in the graph.
  • the prediction system operates as follows:
  • Step S82: In the prediction encoder, using the trained encoding parameter W_E, the graph representation vector calculation module uses Formula 12 to calculate, in a single pass, the graph representation vectors {H_i} of all instances (including all instances to be predicted and their several-degree neighbors).
  • Step S84: In the prediction encoder, the graph representation vector storage module, using the instance identifier as an index, stores the correspondence between each instance's identifier v_i and the instance's graph representation vector H_i.
  • Step S88: In the prediction decoder, take out Ct = ps targets from the set of targets to be predicted (if the total number of targets remaining in the target set is less than ps, take out all remaining targets and set Ct to that number). For each extracted target ID_i (i ∈ [1, Ct]), the graph representation vector query module obtains from the graph representation vector storage module of the prediction encoder the graph representation vector H_i of instance v_i and the graph representation vectors H_ij (j ∈ [1, N]) of the instances v_ij, and concatenates them with the representation vector U_i of the object of target ID_i into one input (H_i, H_ij, U_i) (i ∈ [1, Ct]) of the decoding function g.
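  • A minimal sketch of how one such decoder input could be assembled and scored is shown below; the tensor shapes, the MLP architecture, and the mean pooling over the N behavior instances are assumptions, since this specification leaves the decoder architecture open:

```python
import torch
import torch.nn as nn

class MatchingDecoder(nn.Module):
    """Hypothetical decoder g for the matching task: scores one input (H_i, {H_ij}, U_i)."""
    def __init__(self, d_graph, d_obj, d_hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * d_graph + d_obj, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, 1),
        )

    def forward(self, H_i, H_ij, U_i):
        # H_i: (d_graph,) target instance; H_ij: (N, d_graph) behavior instances; U_i: (d_obj,) object
        history = H_ij.mean(dim=0)           # pool the N behavior instances (an assumed choice)
        x = torch.cat([H_i, history, U_i])   # splice into a single decoder input
        return self.mlp(x)                   # matching-degree score
```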
  • Step S90: In the prediction decoder, the prediction calculation module, using the decoding parameter θ_E, obtains from Formula 14 the predictions of this round's Ct targets taken out in step S88.
  • Step S92 Delete the Ct targets taken out in this round from the target set to be predicted. If the target set is empty, the prediction ends; if the target set is not empty, go to step S88.
  • It can be seen that the encoding function calculates the graph representation vectors of all instances in a single pass, and the decoding function generates the prediction of each target based on the graph representation vector and the corresponding non-graph data, which avoids redundant, repeated processing of the graph data, reduces the amount of computation, and speeds up prediction; meanwhile, the decoding function jointly considers the graph representation vector and non-graph data of each instance, realizing efficient prediction with the hybrid graph neural network model.
  • In one application scenario, an Internet service provider adopts the hybrid graph neural network model to evaluate the category to which a user belongs and, according to that category, applies the business processing corresponding to the category to requests from the user.
  • the form of the category and the corresponding business processing can be determined according to specific business requirements, which are not limited.
  • the category may be consumption level, credit level, activity level, security level, etc.; the corresponding business processing may be applying different business processes to different categories of users, using different business processing parameters, and so on.
  • the Internet service provider uses the user as an instance to construct a hybrid graph neural network model, and the training or prediction target of the model is the category to which a certain user belongs.
  • The graph data of an instance is the part of the user data expressed as attributes of points and edges in the graph (such as the dense data in the user attribute data and the relational data between users), while the rest of the user data (such as the sparse data in the user attribute data) and the time series data generated from the user's historical behavior records are used as non-graph data corresponding to the target.
  • The Internet service provider adopts the first exemplary implementation of Embodiment 1 of this specification to train the hybrid graph neural network model. After training is completed, the first exemplary implementation of Embodiment 2 of this specification is used to predict, based on the trained hybrid graph neural network model, the category to which a user belongs and, according to the predicted category, the business processing corresponding to that category is performed for the user.
  • In another application scenario, an Internet service provider uses a hybrid graph neural network model to evaluate the degree of matching between users and objects, and recommends objects to users according to this matching degree, so as to improve the efficiency with which users obtain information and improve user satisfaction.
  • the specific form of the object is not limited, for example, it may be a commodity, a promotional activity, an advertisement, a search result for a user's search request, and the like.
  • the Internet service provider uses an object as an instance to construct a hybrid graph neural network model, and the training or prediction target of the model is how well a user matches an object.
  • the graph data of the instance is the part of the object data that is expressed as the attributes of points in the graph and the attributes of edges
  • the user's representation vector is used as the non-graph data corresponding to the target.
  • the graph representation vector of N objects that the user has had behaviors is also used as the input of the decoding function, so that the instance corresponding to each target will include (N+ 1) objects, namely the object to be recommended and N objects that have historical behaviors with the user.
  • the Internet service provider adopts the second exemplary implementation manner of Embodiment 1 of this specification to train the hybrid graph neural network model. After the training is completed, the second exemplary implementation of the second embodiment of this specification is used to predict the matching degree between the user and the object to be recommended based on the trained hybrid graph neural network model, and compare the predicted matching degree with the user. Several high-level objects to be recommended are recommended to users.
  • Corresponding to the above flows, the embodiments of this specification further provide a training device and a prediction device for the hybrid graph neural network. Both devices can be implemented by software, by hardware, or by a combination of software and hardware.
  • Taking software implementation as an example, as a device in the logical sense, it is formed by the CPU (Central Processing Unit) of the equipment where it is located reading the corresponding computer program instructions into memory and running them.
  • At the hardware level, in addition to the CPU, memory, and storage, the equipment where the training device or prediction device of the hybrid graph neural network is located usually also includes other hardware such as chips for wireless signal transmission and reception, and/or boards used to implement network communication functions.
  • FIG. 6 shows a training device for a hybrid graph neural network model provided by an embodiment of this specification. The hybrid graph neural network model includes an encoding function and a decoding function; the encoding function is a graph neural network algorithm with encoding parameters, or a combination of such algorithms, and the decoding function is a machine learning algorithm with decoding parameters, or a combination of such algorithms.
  • The device includes a training graph representation vector unit, a decoding parameter training unit, an encoding parameter training unit, and a training loop unit, wherein:
  • the training graph representation vector unit is used to take the instances corresponding to all targets in the training samples and the several-degree neighbors of those instances as points in the graph and, based on the graph data of all instances, use the encoding function to generate the graph representation vector of each instance;
  • the decoding parameter training unit is used to train the decoding parameters for t rounds; in each round, bs targets are extracted from the training samples, the decoding function generates the prediction of each target based on the graph representation vector of the instance corresponding to the target and the corresponding non-graph data, and the decoding parameters are optimized according to the loss of this round determined from the predictions and labels of the bs targets of this round;
  • the encoding parameter training unit is used to optimize the encoding parameters according to the losses of the t rounds, and the training loop unit is used to repeat all of the above units until a predetermined training termination condition is met.
  • In one example, the encoding parameter training unit is specifically configured to: compute, for each round, the gradients of that round's loss with respect to the graph representation vectors of the instances corresponding to the bs targets of that round, and optimize the encoding parameters according to the resulting bs×t gradients.
  • The encoding parameter training unit optimizing the encoding parameters according to the bs×t gradients includes: accumulating the t rounds of gradients on the graph representation vectors of the instances corresponding to each round's bs targets, determining the gradient of the loss with respect to the encoding parameters from the gradients accumulated on these graph representation vectors, and optimizing the encoding parameters with that gradient.
  • The decoding parameter training unit optimizing the decoding parameters according to the loss of this round determined from the predictions and labels of this round's bs targets includes: determining each target's loss from its prediction and label in this round, obtaining this round's loss from the losses of the bs targets, and optimizing the decoding parameters according to the gradient of this round's loss with respect to the decoding parameters.
  • Optionally, the predetermined training termination condition includes: the encoding parameters having been optimized R times, where R is a natural number greater than 1.
  • The graph data of an instance includes at least one of the instance's own point data and its relationship data with other instances; the corresponding non-graph data includes at least one of the instance's own non-point data corresponding to the target and time-series data related to the instance corresponding to the target.
  • The instance's own point data includes the instance's own dense data; the instance's own non-point data includes the instance's own sparse data.
  • Optionally, the hybrid graph neural network model is used to evaluate the category to which a user belongs; the instance is a user; the training target is the category to which a certain user belongs; the graph data of the instance includes the part of the user data expressed as attributes of points and edges in the graph; the corresponding non-graph data includes at least one of the following: the rest of the user data other than the part expressed as attributes of points and edges in the graph, and historical behavior time-series data generated from the user's historical behavior records. The device also includes a category prediction and business processing unit for predicting the category to which a user belongs with the trained hybrid graph neural network model and performing, on the user, the business processing corresponding to that category.
  • Optionally, the hybrid graph neural network model is used to evaluate the degree of matching between a user and an object; the instance is an object, and the training target is the degree of matching between a certain user and an object to be recommended; the graph data of the instance includes the part of the object data expressed as attributes of points and edges in the graph; the graph representation vectors of the instances corresponding to the target include the graph representation vector of the object to be recommended and the graph representation vectors of the N objects with which the user has had historical behaviors; the corresponding non-graph data includes the user's representation vector; N is a natural number. The device further includes a matching prediction and recommendation unit for using the trained hybrid graph neural network model to predict the degree of matching between the user and the objects to be recommended, and recommending to the user several objects to be recommended with a higher predicted matching degree.
  • FIG. 7 shows a prediction device for a hybrid graph neural network model provided by an embodiment of this specification. The hybrid graph neural network model includes an encoding function and a decoding function; the encoding function is a graph neural network algorithm with encoding parameters trained according to the aforementioned training method, and the decoding function is a machine learning algorithm with decoding parameters trained according to the aforementioned training method.
  • The device includes a prediction graph representation vector unit and a prediction generation unit, wherein: the prediction graph representation vector unit is used to take the instances corresponding to all targets to be predicted and the several-degree neighbors of those instances as points in the graph and, based on the graph data of all instances, use the encoding function to generate the graph representation vector of each instance; and the prediction generation unit is configured to use the decoding function to generate the prediction of a target based on the graph representation vector of the instance corresponding to the target to be predicted and the corresponding non-graph data.
  • Optionally, the device further includes a target extraction unit configured to extract ps targets to be predicted from the set of targets to be predicted, ps being a natural number; the prediction generation unit is specifically configured to generate, for each of the ps targets, its prediction with the decoding function based on the graph representation vector of its corresponding instance and the corresponding non-graph data; and the device also includes a loop control unit for deleting the ps targets from the set of targets to be predicted and, if the set is not empty, continuing to extract at most ps targets for prediction in the next round until the set is empty.
  • The graph data of an instance includes at least one of the instance's own point data and its relationship data with other instances; the corresponding non-graph data includes at least one of the instance's own non-point data corresponding to the target and time-series data related to the instance corresponding to the target.
  • The instance's own point data includes the instance's own dense data; the instance's own non-point data includes the instance's own sparse data.
  • Optionally, the hybrid graph neural network model is used to evaluate the category to which a user belongs; the instance is a user; the target to be predicted is the category to which a certain user belongs; the graph data of the instance includes the part of the user data expressed as attributes of points and edges in the graph; the corresponding non-graph data includes at least one of the following: the rest of the user data other than the part expressed as attributes of points and edges in the graph, and historical behavior time-series data generated from the user's historical behavior records. The device further includes a category business processing unit configured to perform, on a user, the business processing corresponding to the category to which the user is predicted to belong.
  • Optionally, the hybrid graph neural network model is used to evaluate the degree of matching between a user and an object; the instance is an object, and the target to be predicted is the degree of matching between a certain user and an object to be recommended; the graph data of the instance includes the part of the object data expressed as attributes of points and edges in the graph; the graph representation vectors of the instances corresponding to the target include the graph representation vector of the object to be recommended and the graph representation vectors of the N objects with which the user has had historical behaviors; the corresponding non-graph data includes the user's representation vector; N is a natural number. The device further includes a recommending unit configured to recommend to the user, according to the degree of matching between the objects to be recommended and the user, several objects to be recommended with a higher matching degree.
  • Embodiments of this specification provide a computer device including a memory and a processor. The memory stores a computer program that can be run by the processor; when the processor runs the stored computer program, it executes each step of the training method of the hybrid graph neural network in the embodiments of this specification.
  • Embodiments of this specification provide a computer device including a memory and a processor. The memory stores a computer program that can be run by the processor; when the processor runs the stored computer program, it executes each step of the prediction method of the hybrid graph neural network in the embodiments of this specification.
  • The embodiments of this specification provide a computer-readable storage medium storing computer programs which, when run by a processor, execute each step of the training method of the hybrid graph neural network in the embodiments of this specification. For a detailed description of those steps, refer to the preceding content; it is not repeated here.
  • The embodiments of this specification provide a computer-readable storage medium storing computer programs which, when run by a processor, execute each step of the prediction method of the hybrid graph neural network in the embodiments of this specification. For a detailed description of those steps, refer to the preceding content; it is not repeated here.
  • In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
  • Memory may include non-persistent storage in computer-readable media, in the form of random access memory (RAM) and/or non-volatile memory such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
  • Computer-readable media include persistent and non-persistent, removable and non-removable media, and information storage may be implemented by any method or technology. Information may be computer-readable instructions, data structures, program modules, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic tape cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
  • As defined herein, computer-readable media do not include transitory computer-readable media, such as modulated data signals and carrier waves.
  • Those skilled in the art should understand that the embodiments of this specification may be provided as a method, a system, or a computer program product. Accordingly, the embodiments of this specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the embodiments of this specification may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.

Abstract

This specification provides a training method for a hybrid graph neural network model. The model includes an encoding function and a decoding function, and the method includes: taking the instances corresponding to all targets in the training samples and the several-degree neighbors of those instances as points in a graph and, based on the graph data of all instances, using the encoding function to generate a graph representation vector for each instance; training the decoding parameters for t rounds, where in each round bs targets are extracted from the training samples, the decoding function generates a prediction for each target based on the graph representation vector of the instance corresponding to the target and the corresponding non-graph data, and the decoding parameters are optimized according to the loss of this round determined from the predictions and labels of the bs targets of this round; optimizing the encoding parameters according to the losses of the t rounds; and repeating all of the above steps until a predetermined training termination condition is met.

Description

Training and prediction of a hybrid graph neural network model
Technical Field
This specification relates to the field of data processing technology, and in particular to a method and device for training a hybrid graph neural network and a method and device for making predictions with a hybrid graph neural network.
Background
Graphs have strong expressive power and can serve as data structures to model social networks operating in various fields. A graph is usually used to describe a specific relationship between things: a point represents a thing, and a line connecting two points indicates that the corresponding two things have that relationship. Graph Neural Networks (GNNs) are deep-learning-based algorithms that operate on the graph domain; with convincing performance and high interpretability, they have become a widely applied method of graph analysis.
In many application scenarios, part of the input data of a machine learning task is not well suited to being represented as information in the graph domain, for example a series of data with temporal relationships. A hybrid graph neural network model combines graph neural network algorithms with other machine learning algorithms and can greatly improve prediction quality in these scenarios.
When a hybrid graph neural network model is trained with the sample of a certain point, or used to make a prediction for a certain point, the k-degree neighbors of that point (k being a natural number) must be computed. The usual approach extracts and computes the k-degree neighbors of each point every time; since the k-degree neighborhoods of different points often contain the same points, this causes a large amount of redundant, repeated computation and hurts the efficiency of training or prediction.
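To make that redundancy concrete, the toy sketch below (illustrative only; the graph, helper name, and values are assumptions, not from the disclosure) extracts k-degree neighbors per point and shows how neighborhoods overlap:

```python
# Toy graph as an adjacency list; plain Python.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}

def k_neighbors(v, k):
    """Collect all neighbors of v up to k hops away (excluding v)."""
    seen, frontier = {v}, {v}
    for _ in range(k):
        frontier = {u for w in frontier for u in adj[w]} - seen
        seen |= frontier
    return seen - {v}

print(k_neighbors(0, 1))  # {1, 2}
print(k_neighbors(1, 1))  # {0, 2}  -> point 2 is processed again
```

Per-point extraction recomputes the shared part of the neighborhoods; the embodiments below avoid this by encoding every point once.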
Summary
In view of this, this specification provides a training method for a hybrid graph neural network model. The model includes an encoding function and a decoding function; the encoding function is a graph neural network algorithm with encoding parameters, or a combination of such algorithms, and the decoding function is a machine learning algorithm with decoding parameters, or a combination of such algorithms. The method includes: taking the instances corresponding to all targets in the training samples and the several-degree neighbors of those instances as points in a graph and, based on the graph data of all instances, using the encoding function to generate a graph representation vector for each instance; training the decoding parameters for t rounds, where in each round bs targets are extracted from the training samples, the decoding function generates a prediction for each target based on the graph representation vector of the instance corresponding to the target and the corresponding non-graph data, and the decoding parameters are optimized according to the loss of this round determined from the predictions and labels of the bs targets (bs is a natural number, t a natural number greater than 1); optimizing the encoding parameters according to the losses of the t rounds; and repeating all of the above steps until a predetermined training termination condition is met.
This specification also provides a prediction method for a hybrid graph neural network model whose encoding function is a graph neural network algorithm with encoding parameters trained by the above training method, and whose decoding function is a machine learning algorithm with decoding parameters trained by the above training method. The method includes: taking the instances corresponding to all targets to be predicted and the several-degree neighbors of those instances as points in a graph and, based on the graph data of all instances, using the encoding function to generate a graph representation vector for each instance; and, based on the graph representation vector of the instance corresponding to a target to be predicted and the corresponding non-graph data, using the decoding function to generate the prediction of the target.
This specification further provides a training device and a prediction device for the hybrid graph neural network model, whose units correspond to the steps of the above two methods; computer devices, each comprising a memory and a processor, the memory storing a computer program which, when run by the processor, executes the above training method or prediction method; and computer-readable storage media, each storing a computer program which, when run by a processor, executes the above training method or prediction method.
As can be seen from the above technical solutions, in the embodiments of the training method and device of this specification, the encoding function converts the graph data of the instances into graph representation vectors, the decoding function generates the predictions of the training targets based on the graph representation vectors and the non-graph data corresponding to the targets, and the decoding and encoding parameters are optimized according to the differences between predictions and labels. Thus, while the encoding parameters remain unchanged, the graph data of all instances is converted into graph representation vectors at one time, which avoids redundant, repeated processing of the graph data and increases training speed; meanwhile, the decoding function jointly considers the graph representation vectors and the non-graph data of the instances, realizing efficient training of the hybrid graph neural network model.
In the embodiments of the prediction method and device of this specification, the encoding function converts the graph data of all instances into graph representation vectors at one time, and the decoding function generates the predictions of the targets based on the graph representation vectors and the non-graph data corresponding to the targets, which avoids redundant, repeated processing of the graph data and increases prediction speed; meanwhile, the decoding function jointly considers the graph representation vectors and the non-graph data of the instances, realizing efficient prediction with the hybrid graph neural network model.
Brief Description of the Drawings
FIG. 1 is a flowchart of a training method for a hybrid graph neural network model in Embodiment 1 of this specification;
FIG. 2 is a logical structure diagram of the hybrid graph neural network model training system in the two exemplary implementations of Embodiment 1;
FIG. 3 is a flowchart of a prediction method for a hybrid graph neural network model in Embodiment 2 of this specification;
FIG. 4 is a logical structure diagram of the hybrid graph neural network model prediction system in the two exemplary implementations of Embodiment 2;
FIG. 5 is a hardware structure diagram of a device running the embodiments of this specification;
FIG. 6 is a logical structure diagram of a training device for a hybrid graph neural network model in an embodiment of this specification;
FIG. 7 is a logical structure diagram of a prediction device for a hybrid graph neural network model in an embodiment of this specification.
Detailed Description
In the embodiments of this specification, the graph in the hybrid graph neural network model is constructed with instances as points and relationships between instances as edges. An instance may be any subject in a practical application scenario, such as a user, commodity, shop, supplier, site, courier, web page, user terminal, or building. The hybrid graph neural network model is used to predict states, behaviors, and the like related to instances. As a prediction target, a state may be information describing the subject, such as the category or attributes of the instance; a behavior may be one performed by the instance or one of which the instance is the object. In addition, the degree of matching between a first kind of subject and a second kind of subject can also serve as the prediction target; in this case, one kind of subject can be taken as the instance and the other as the associated object of the instance.
It should be noted that, in the embodiments of this specification, the target predicted by the hybrid graph neural network model is related to determinate instances; from the prediction target, all instances in the graph involved in predicting that target can be known. A target of the hybrid graph neural network model corresponds to at least one instance.
For example, a hybrid graph neural network model used to predict a certain user's spending over the coming days can be constructed with users as instances; its prediction target is that user's spending over the coming days, and the target corresponds to one determinate user. In a second example, a model used to predict how many times a certain web page will be referenced by other pages takes web pages as instances, and its target corresponds to one determinate web page. In a third example, a model that uses several commodities a user clicked in the past to predict how interested the user is in a candidate commodity takes commodities as instances; its prediction target is the degree of matching between a user and the target commodity, and the instances corresponding to the target include the target commodity and the several commodities the user clicked.
In the embodiments of this specification, the hybrid graph neural network model includes an encoding function and a decoding function. The encoding function may be any graph neural network algorithm, or a combination of one or more graph neural network algorithms; the decoding function may be any machine learning algorithm, including graph neural networks, or a combination of one or more such algorithms, for example DNN (Deep Neural Networks), RNN (Recurrent Neural Network), LSTM (Long Short-Term Memory), Wide&Deep, and combinations of these algorithms.
The input of the hybrid graph neural network model is the various data related to the instances corresponding to a target. Among this data, the part that is suitable to be expressed as a graph and to be iterated or processed by graph neural network algorithms can be taken as the graph data of the instances and fed into the encoding function; after being processed, or encoded, by the encoding function, the graph representation vectors of the instances are output. The remaining input data other than the graph data is called the non-graph data corresponding to the target; it can be fed, together with the graph representation vectors of the instances, into the decoding function, which after processing outputs the prediction of the target. The output of the decoding function is the output of the hybrid graph neural network model, and the output prediction may be a value or a vector, without limitation.
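As a minimal sketch of this data flow, assuming PyTorch-style tensors and treating the encoding function f and decoding function g as black-box callables (all names here are illustrative, not from the disclosure):

```python
import torch

def hybrid_predict(f, g, X, A, E, targets):
    """f: graph encoder; g: decoder; targets: (instance_id, nongraph) pairs."""
    H = f(X, A, E)                           # graph data -> one vector per instance
    preds = []
    for inst, nongraph in targets:           # every target names its instance(s)
        z = torch.cat([H[inst], nongraph])   # graph vector + non-graph data
        preds.append(g(z))                   # decoder output = target prediction
    return torch.stack(preds)
```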
In the embodiments of this specification, the learnable parameters used in the encoding function are called encoding parameters, and the learnable parameters used in the decoding function are called decoding parameters; training the model is the process of modifying the learnable parameters so that the output of the model gets closer to the labels of the model's targets in the training samples.
In many application scenarios, predicting a target with the hybrid graph neural network model should consider both the attribute information of the instances corresponding to the target and the behavior information related to the instances. From the attribute information of an instance, the instance's own attribute data and its relationship data with other instances can be obtained; from the behavior information related to the instance, relationship data with other instances and behavior sequence information derived from the historical behavior records related to the instance (that is, time-series data related to the instance) can be obtained.
When instances are the points of the graph, the relationship data between an instance and other instances can conveniently be expressed as edges in the graph and is suitable for processing by graph neural network algorithms; therefore, relationship data with other instances can usually be fed to the encoding function in the form of edge attributes in the graph. Time-series data related to an instance, by contrast, is not well suited to being expressed as a graph and is usually fed to the decoding function.
Although an instance's own attribute data can conveniently be expressed as point attributes in the graph, not all attribute data is suitable for processing by graph neural network algorithms; for example, the sparse data in an instance's own attribute data is better fed to the decoding function. Moreover, in certain application scenarios, feeding part of an instance's own attribute data to the decoding function yields better predictions of the target. In the embodiments of this specification, the part of an instance's own attribute data fed to the encoding function is called the instance's own point data, and the part fed to the decoding function is called the instance's own non-point data. An instance's own dense data usually serves as its own point data, while its own sparse data may serve either as its own point data or as its own non-point data. In some application scenarios, the instance's own dense data can be used as its own point data and its own sparse data as its own non-point data; in other scenarios, one part of the dense data can be used as own point data while the rest of the dense data, together with the sparse data, is used as own non-point data.
Here, dense data is data that can be represented by a single value or a low-dimensional vector, whereas sparse data is represented by a vector of very high dimensionality in which only a few elements have values. For example, with users as instances, a user's account balance and account age can each be represented by one value and are dense data; the bank cards a user holds are sparse data: among the hundreds of thousands of kinds of bank cards in the world, a user usually holds only a few, so the data is represented by a vector whose dimensionality is in the hundreds of thousands but in which only the few elements corresponding to the user's cards have the value 1.
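A small illustration of the dense/sparse distinction in the bank-card example above (sizes and indices are made up for the sketch; PyTorch is assumed only for the tensor type):

```python
import torch

account_balance = torch.tensor([1523.75])   # dense: a single value
account_age     = torch.tensor([36.0])      # dense: months, a single value

NUM_CARD_TYPES = 300_000                    # assumed vocabulary of card types
owned = [12, 40_321]                        # indices of the user's few cards
card_vector = torch.zeros(NUM_CARD_TYPES)   # sparse: huge, almost all zeros
card_vector[owned] = 1.0
# Dense data suits the encoder (point attributes in the graph); a sparse
# vector like card_vector is often better fed to the decoder instead.
```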
Embodiment 1 of this specification proposes a new training method for a hybrid graph neural network model. While the encoding parameters remain unchanged, the encoding function computes the graph representation vectors of all instances at one time; the decoding function computes the predictions of the training targets with the graph representation vectors of the instances and the non-graph data related to the instances as input; and the encoding and decoding parameters are optimized based on the predictions and labels. This avoids redundant, repeated computation on the instances' graph data, reduces the amount of computation, and speeds up training, while the decoding function jointly considers the influence of graph data and non-graph data on the prediction, realizing efficient training of the hybrid graph neural network model.
Embodiment 1 of this specification may run on any device with computing and storage capabilities, such as a mobile phone, tablet computer, PC (Personal Computer), notebook, or server; the functions of Embodiment 1 may also be realized by logical nodes running on two or more devices.
In Embodiment 1 of this specification, the flow of the training method for the hybrid graph neural network model is shown in FIG. 1. The training in Embodiment 1 is supervised learning; the training samples include the input data of the model and the labels (expected outputs) of the targets, and the input data includes the graph data of the instances fed to the encoding function and the non-graph data corresponding to the targets fed to the decoding function.
Before training begins, the encoding parameters in the encoding function and the decoding parameters in the decoding function are initialized to initial values. In Embodiment 1 of this specification, the initial values of the encoding and decoding parameters may be set in any manner.
Step 110: take the instances corresponding to all targets in the training samples and the several-degree neighbors of those instances as points in the graph and, based on the graph data of all instances, use the encoding function to generate the graph representation vector of each instance.
For the graph neural network algorithm adopted by the encoding function, each instance corresponding to each target in the training samples is a point in the graph. The set of points includes not only all instances corresponding to all targets in the training samples, but may also include other instances that may become the several-degree neighbors of each of those instances. For example, suppose an online shopping platform has one billion commodities and commodities serve as the instances of the hybrid graph neural network model; if the training samples involve 100 million commodities and the other 900 million commodities may become the 1-degree to k-degree neighbors of those 100 million, then the point set of the model's graph can be all one billion commodities.
The graph data of an instance includes one or more of the following: the own point data in the instance's own attribute data, and relationship data with other instances. The instance's own point data expresses the characteristics of the point in the graph, and the relationship data with other instances expresses the characteristics of the edges (associations between different points). The relationship data with other instances may be relationship data between a point and its neighbors of a certain order, or a combination of relationship data between the point and several neighbors of various orders, without limitation.
With the graph neural network algorithm of the encoding function, the graph data of the instances can be converted into the graph representation vectors of the instances. In Embodiment 1 of this specification, for a given set of encoding parameters, the graph representation vector of every instance in the graph is generated at one time from the graph data of all instances; after the encoding parameters change (are optimized), this process is repeated and the changed encoding parameters are used to generate a new graph representation vector for every instance at one time, until training ends.
Step 120: train the decoding parameters for t rounds (t being a natural number greater than 1); in each round, extract bs (a natural number) targets from the training samples, use the decoding function to generate the prediction of each target based on the graph representation vector of the instance corresponding to the target and the corresponding non-graph data, and optimize the decoding parameters according to the loss of this round determined from the predictions and labels of the bs targets of this round.
In Embodiment 1 of this specification, for a given set of encoding parameters, the decoding parameters are trained for t rounds. In other words, before each optimization of the encoding parameters, the decoding parameters are optimized t times.
In each round of decoding parameter training, bs targets are extracted from the training samples. As described above, each target has its own determinate one or more corresponding instances. Once the instances corresponding to a target are obtained, the graph representation vectors of those instances and the non-graph data corresponding to the target can be used as the input of the decoding function, whose output is the prediction of the target. In the decoding function's input, the non-graph data corresponding to the target may be one or more of the own non-point data of the instances corresponding to the target and the time-series data related to those instances.
After obtaining the predictions of this round's bs targets, the loss of the bs targets extracted in this round can be obtained from each target's prediction and its label in the training samples, and the decoding parameters are optimized based on this round's loss. Any loss function may be chosen according to the characteristics of the actual application scenario to compute the round loss of the bs targets, such as a cross-entropy loss, least-squares loss, absolute error loss, or mean squared error loss; similarly, any optimization function may be used to modify the decoding parameters according to this round's loss, such as a gradient descent optimizer or the Adam optimizer; none of this is limited.
For example, a predetermined loss function can compute each target's loss in this round from its prediction and label, and this round's loss is then obtained from the losses of the bs targets; the gradient of this round's loss with respect to the decoding parameters is computed, and the decoding parameters are optimized according to the computed gradient.
After t rounds of decoding parameter training, proceed to step 130.
Step 130: optimize the encoding parameters according to the losses of the above t rounds.
Any optimization function may be chosen, according to the characteristics of the actual application scenario, to modify the encoding parameters from the losses of the t rounds, without limitation.
For example, based on each round's loss, the gradients of that round's loss with respect to the graph representation vectors of the instances corresponding to that round's bs targets can be computed first, so the losses of the t rounds yield bs×t gradients of losses with respect to graph representation vectors; these bs×t gradients are then used to optimize the encoding parameters.
In the above example, when optimizing the encoding parameters with the bs×t gradients, the t rounds of gradients can first be accumulated separately on the graph representation vectors of the instances corresponding to each round's bs targets; the gradients accumulated on these graph representation vectors then determine the gradient of the loss with respect to the encoding parameters, which is finally used to optimize the encoding parameters. In this optimization scheme, when the instances corresponding to targets of different rounds overlap, more than one round of gradients will accumulate on the graph representation vector of a repeated instance. The specific accumulation rule is not limited; for example, it may be a sum of the gradients, a weighted sum, and so on.
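A tiny numeric illustration of this accumulation rule, using summation as the accumulation and PyTorch only for the tensor operation (values are made up):

```python
import torch

acc = torch.zeros(4, 2)                       # 4 instances, 2-dim vectors
idx = torch.tensor([0, 2, 2])                 # instance 2 appears in two rounds
g_rounds = torch.tensor([[1., 0.], [0., 1.], [2., 2.]])
acc.index_add_(0, idx, g_rounds)              # sum the per-round gradients
# acc[2] == tensor([2., 3.])  -> gradients on a repeated instance add up
```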
Step 140: repeat all of the above steps until a predetermined training termination condition is met.
After the optimization of the encoding parameters is completed, check whether the predetermined training termination condition is met. If it is, the training of the hybrid graph neural network model is finished and the training flow ends. If not, steps 110 to 130 are repeated.
Specifically, when the predetermined termination condition is not yet met, go to step 110, compute the graph representation vectors of all instances with the updated encoding parameters, and use the newly computed graph representation vectors to perform the t rounds of decoding parameter training in step 120 and the encoding parameter optimization in step 130.
Any predetermined training termination condition may be used in Embodiment 1 of this specification. For example, optimizing the encoding parameters R times (R being a natural number greater than 1) can serve as the condition, in which case the training of the hybrid graph neural network model completes after steps 110 to 130 have been repeated R times.
The first exemplary implementation of Embodiment 1 of this specification is given below. In this first exemplary implementation, the hybrid graph neural network model is used to classify instances. All instances corresponding to all targets in the training samples, together with the other instances that may become several-degree neighbors of those instances, form the point set $\mathcal{V}$; the relationships between all points form the edge set $\mathcal{E}$, constituting the graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$.
The graph data of an instance includes the instance's own point data and its relationship data with other instances; the non-graph data corresponding to a target includes the instance's own non-point data and time-series data generated from the instance's historical behavior information. Let $X$ be the own point data of all instances, $E$ the relationship data of all instances with other instances, $A$ the adjacency matrix of the point-edge topology of $\mathcal{G}$, $f$ the encoding function with encoding parameters $W$, and $g$ the decoding function with decoding parameters $\omega$.
The target of each training sample includes a target identifier ID and the target's label; the label of target $ID_i$ is $Y_i$. The target identifier indicates the instance corresponding to the target: the instance corresponding to $ID_i$ is $v_i$, and the label $Y_i$ indicates the category to which $ID_i$ belongs. The graph representation vector of instance $v_i$ is $H_i$, its own non-point data is $B_i$, and the time-series data corresponding to $ID_i$ is $S_i$.
In the first exemplary implementation, the logical structure of the hybrid graph neural network model training system is shown in FIG. 2. The training system comprises a training encoder and a training decoder. The training encoder includes an encoding function and parameters module, a graph representation vector computation module, a graph representation vector storage module, a training-encoder gradient computation and parameter optimization module, and a gradient receiving module; the training decoder includes a decoding function and parameters module, a graph representation vector query module, a prediction and loss computation module, a training-decoder gradient computation and parameter optimization module, and a gradient sending module. The encoding function and parameters module holds the encoding function $f$ and the encoding parameters $W$; the decoding function and parameters module holds the decoding function $g$ and the decoding parameters $\omega$. The training system runs in the following steps:
Step S02: at the start of training, set the encoding parameters $W$ in the encoding function and parameters module and the decoding parameters $\omega$ in the decoding function and parameters module to their initial values; set the encoding parameter optimization count $r_1$ to 0 and the decoding parameter optimization count $r_2$ to 0.
Step S04: in the training encoder, based on the current encoding parameters $W$, the graph representation vector computation module computes at one time the graph representation vectors $H^{r_1} = \{H_i^{r_1}\}$ of all instances in the point set $\mathcal{V}$ (including all instances in the training samples and their several-degree neighbors) with Eq. 1:
$H^{r_1} = f(X, A, E \mid W)$   (Eq. 1)
Step S06: in the training encoder, the graph representation vector storage module, indexed by instance identifier, stores the correspondence between each instance's identifier $v_i$ and its graph representation vector $H_i^{r_1}$.
Step S08: in the training decoder, take bs targets from the target set of the training samples. For each extracted target $ID_i$ ($i \in [1, bs]$), the graph representation vector $H_i^{r_1}$ of the instance $v_i$ corresponding to $ID_i$, looked up by the graph representation vector query module from the training encoder's graph representation vector storage module, is concatenated with the instance's own non-point data $B_i$ and the time-series data $S_i$ corresponding to $ID_i$ into one input $(H_i^{r_1}, B_i, S_i)$ of the decoding function $g$.
Step S10: in the training decoder, based on the current decoding parameters $\omega$, the prediction and loss computation module obtains the predictions $\hat{Y}_i^{r_2}$ of the bs targets extracted in step S08 with Eq. 2; with $l$ the loss function, it obtains the loss $L^{r_2}$ of this round (round $r_2$) with Eq. 3:
$\hat{Y}_i^{r_2} = g(H_i^{r_1}, B_i, S_i \mid \omega), \quad i \in [1, bs]$   (Eq. 2)
$L^{r_2} = \sum_{i=1}^{bs} l(\hat{Y}_i^{r_2}, Y_i)$   (Eq. 3)
Step S12: in the training decoder, the training-decoder gradient computation and parameter optimization module obtains the gradient of this round's loss with respect to the decoding parameters from $L^{r_2}$ with Eq. 4, optimizes the decoding parameters according to this gradient, and updates the decoding parameters in the decoding function and parameters module to the optimized values. If gradient descent is used, the optimized decoding parameters are given by Eq. 5, where $\alpha$ is the learning rate:
$\nabla \omega^{r_2} = \frac{\partial L^{r_2}}{\partial \omega}$   (Eq. 4)
$\omega \leftarrow \omega - \alpha \frac{\partial L^{r_2}}{\partial \omega}$   (Eq. 5)
Step S14: in the training decoder, the training-decoder gradient computation and parameter optimization module computes with Eq. 6 the gradients of this round's loss $L^{r_2}$ with respect to the bs graph representation vectors, and sends the bs computed gradient vectors to the gradient sending module:
$\nabla H_i^{r_1} = \frac{\partial L^{r_2}}{\partial H_i^{r_1}}, \quad i \in [1, bs]$   (Eq. 6)
Step S16: in the training decoder, the gradient sending module sends the bs gradient vectors to the training encoder's gradient receiving module. In the training encoder, the gradient receiving module stores the received bs gradient vectors.
Step S18: increase $r_2$ by 1. If $r_2$ is not divisible by t, go to step S08; if it is, execute step S20.
Step S20: in the training encoder, the training-encoder gradient computation and parameter optimization module reads the stored bs×t gradient vectors from the gradient receiving module, computes with Eq. 7 the gradient of the t rounds of losses with respect to the encoding parameters, and then optimizes the encoding parameters according to it. If gradient descent is used, the optimized encoding parameters are given by Eq. 8:
$\frac{\partial L}{\partial W} = \sum_{r_2} \sum_{i=1}^{bs} \frac{\partial L^{r_2}}{\partial H_i^{r_1}} \cdot \frac{\partial H_i^{r_1}}{\partial W}$   (Eq. 7)
$W \leftarrow W - \alpha \frac{\partial L}{\partial W}$   (Eq. 8)
Step S22: increase $r_1$ by 1. If $r_1 < R$, go to step S04; otherwise execute step S24.
Step S24: training ends. At this point, the encoding parameters $W_E$ in the encoding function and parameters module are the trained encoding parameters, and the decoding parameters $\omega_E$ in the decoding function and parameters module are the trained decoding parameters.
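A compact sketch of steps S02-S24 under the assumption that f and g are PyTorch modules and that plain gradient descent (Eqs. 5 and 8) is the optimizer; draw_targets, alpha, R, t, bs, and the tensors B, S, Y are assumed helpers and data, not names from the original:

```python
import torch

# assumes: f, g are torch.nn.Modules for the encoder/decoder; X, A, E are
# the graph tensors defined above; B, S, Y are tensors indexed by target id;
# draw_targets(bs) returns a LongTensor of bs target/instance ids; l is a
# batch loss such as torch.nn.CrossEntropyLoss(reduction="sum").
enc_opt = torch.optim.SGD(f.parameters(), lr=alpha)   # realizes Eq. 8
dec_opt = torch.optim.SGD(g.parameters(), lr=alpha)   # realizes Eq. 5
for r1 in range(R):                                   # S22/S24: R encoder updates
    H = f(X, A, E)                                    # S04, Eq. 1: encode everything once
    acc = torch.zeros_like(H)                         # S16: buffer for received gradients
    for r2 in range(t):                               # S18: t decoder rounds
        ids = draw_targets(bs)                        # S08: one batch of targets
        h = H[ids].detach().requires_grad_(True)      # S06/S08: vector store lookup
        pred = g(torch.cat([h, B[ids], S[ids]], 1))   # S10, Eq. 2
        loss = l(pred, Y[ids])                        # S10, Eq. 3
        dec_opt.zero_grad(); loss.backward(); dec_opt.step()   # S12, Eqs. 4-5
        acc.index_add_(0, ids, h.grad)                # S14-S16, Eq. 6
    enc_opt.zero_grad(); H.backward(acc); enc_opt.step()       # S20, Eqs. 7-8
```

Detaching h cuts the decoder rounds off from the encoder's computation graph, so the encoder is traversed only once per t rounds; that is precisely the saving described above.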
The second exemplary implementation of Embodiment 1 of this specification is given below. In this second exemplary implementation, the hybrid graph neural network model is used to predict the degree of matching between instances and objects. Each training sample includes a target instance $v_i$ for a certain object $u_i$ and the N instances $v_{ij}$ ($j \in [1, N]$) on which object $u_i$ previously had historical behaviors. Thus, in this second exemplary implementation, the instances corresponding to each target number (N+1), namely $v_i$ and $v_{ij}, j \in [1, N]$. The training samples also include the target identifier ID and the target's label; the label of target $ID_i$ is $Y_i$, and the representation vector of the object $u_i$ of target $ID_i$ is $U_i$.
All instances corresponding to all targets in the training samples, together with the other instances that may become several-degree neighbors of those instances, form the point set $\mathcal{V}$; the relationships between all points form the edge set $\mathcal{E}$, constituting the graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$. The graph data of an instance includes the instance's own point data and its relationship data with other instances; the non-graph data corresponding to a target includes the representation vector of the target's object. $X$ is the own point data of all instances, $E$ the relationship data of all instances with other instances, $A$ the adjacency matrix of the point-edge topology of $\mathcal{G}$, $f$ the encoding function with encoding parameters $W$, and $g$ the decoding function with decoding parameters $\omega$.
In the second exemplary implementation, the logical structure of the hybrid graph neural network model training system is shown in FIG. 2. The training system runs in the following steps:
Step S32: at the start of training, set the encoding parameters $W$ in the encoding function and parameters module and the decoding parameters $\omega$ in the decoding function and parameters module to their initial values; set the encoding parameter optimization count $r_1$ to 0 and the decoding parameter optimization count $r_2$ to 0.
Step S34: in the training encoder, based on the current encoding parameters $W$, the graph representation vector computation module computes at one time, with Eq. 1, the graph representation vectors $H^{r_1}$ of all instances in the point set $\mathcal{V}$ (including all instances in the training samples and their several-degree neighbors).
Step S36: in the training encoder, the graph representation vector storage module, indexed by instance identifier, stores the correspondence between each instance's identifier $v_i$ and its graph representation vector $H_i^{r_1}$.
Step S38: in the training decoder, take bs targets from the target set of the training samples. For each extracted target $ID_i$ ($i \in [1, bs]$), the graph representation vectors $H_i^{r_1}$ and $H_{ij}^{r_1}, j \in [1, N]$ of the instances $v_i$ and $v_{ij}, j \in [1, N]$ corresponding to $ID_i$, looked up by the graph representation vector query module from the training encoder's graph representation vector storage module, are concatenated with the representation vector $U_i$ of the object corresponding to $ID_i$ into one input $(H_i^{r_1}, H_{i1}^{r_1}, \ldots, H_{iN}^{r_1}, U_i)$ of the decoding function $g$.
Step S40: in the training decoder, based on the current decoding parameters $\omega$, the prediction and loss computation module obtains the predictions $\hat{Y}_i^{r_2}$ of the bs targets extracted in step S38 with Eq. 9; with $l$ the loss function, it obtains the loss $L^{r_2}$ of this round (round $r_2$) with Eq. 3:
$\hat{Y}_i^{r_2} = g(H_i^{r_1}, H_{i1}^{r_1}, \ldots, H_{iN}^{r_1}, U_i \mid \omega), \quad i \in [1, bs]$   (Eq. 9)
Step S42: in the training decoder, the training-decoder gradient computation and parameter optimization module obtains the gradient of this round's loss with respect to the decoding parameters with Eq. 4, optimizes the decoding parameters according to it, and updates the decoding parameters in the decoding function and parameters module to the optimized values.
Step S44: in the training decoder, the training-decoder gradient computation and parameter optimization module computes, with Eq. 6 and Eq. 10, the gradients of this round's loss $L^{r_2}$ with respect to the bs×(N+1) graph representation vectors, and sends the bs×(N+1) computed gradient vectors to the gradient sending module:
$\nabla H_{ij}^{r_1} = \frac{\partial L^{r_2}}{\partial H_{ij}^{r_1}}, \quad i \in [1, bs],\ j \in [1, N]$   (Eq. 10)
Step S46: in the training decoder, the gradient sending module sends the bs×(N+1) gradient vectors to the training encoder's gradient receiving module. In the training encoder, the gradient receiving module stores the received bs×(N+1) gradient vectors.
Step S48: increase $r_2$ by 1. If $r_2$ is not divisible by t, go to step S38; if it is, execute step S50.
Step S50: in the training encoder, the training-encoder gradient computation and parameter optimization module reads the stored bs×(N+1)×t gradient vectors from the gradient receiving module, computes with Eq. 11 the gradient of the t rounds of losses with respect to the encoding parameters, and then optimizes the encoding parameters according to it:
$\frac{\partial L}{\partial W} = \sum_{r_2} \sum_{i=1}^{bs} \left( \frac{\partial L^{r_2}}{\partial H_i^{r_1}} \cdot \frac{\partial H_i^{r_1}}{\partial W} + \sum_{j=1}^{N} \frac{\partial L^{r_2}}{\partial H_{ij}^{r_1}} \cdot \frac{\partial H_{ij}^{r_1}}{\partial W} \right)$   (Eq. 11)
Step S52: increase $r_1$ by 1. If $r_1 < R$, go to step S34; otherwise execute step S54.
Step S54: training ends. At this point, the encoding parameters $W_E$ in the encoding function and parameters module are the trained encoding parameters, and the decoding parameters $\omega_E$ in the decoding function and parameters module are the trained decoding parameters.
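For illustration, one decoder input of this implementation (Eq. 9) can be assembled as below; H, cand, hist, and U_i are assumed names, not from the original:

```python
import torch

def decoder_input(H, cand, hist, U_i):
    """cand: candidate object id; hist: ids of the N behaviored objects."""
    parts = [H[cand]] + [H[j] for j in hist] + [U_i]
    return torch.cat(parts)   # the tuple (H_i, H_i1..H_iN, U_i) as one row
```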
As can be seen, in Embodiment 1 of this specification, while the encoding parameters remain unchanged the encoding function computes the graph representation vectors of all instances at one time, the decoding function generates the predictions of the training targets based on the graph representation vectors and the non-graph data corresponding to the targets, and the encoding and decoding parameters are optimized from the predictions and labels. This avoids redundant, repeated processing of the graph data, reduces the amount of computation, and speeds up training; the decoding function jointly considers the graph representation vectors and the non-graph data of the instances, realizing efficient training of the hybrid graph neural network model.
Embodiment 2 of this specification proposes a new prediction method for a hybrid graph neural network model: the encoding function computes the graph representation vectors of all instances at one time, and the decoding function computes the predictions of the targets to be predicted with the graph representation vectors of the instances and the non-graph data corresponding to the targets as input. This avoids redundant, repeated computation on the instances' graph data, reduces the amount of computation, and speeds up prediction, while the decoding function jointly considers the influence of graph data and non-graph data on the prediction, realizing efficient prediction with the hybrid graph neural network model.
Embodiment 2 of this specification may run on any device with computing and storage capabilities, such as a mobile phone, tablet computer, PC, notebook, or server; the functions of Embodiment 2 may also be realized by logical nodes running on two or more devices.
In Embodiment 2 of this specification, the hybrid graph neural network model is one trained with the training method of Embodiment 1; that is, its encoding function is a graph neural network algorithm with encoding parameters trained by the method of Embodiment 1, and its decoding function is a machine learning algorithm with decoding parameters trained by that method. The model's input data includes the graph data of the instances fed to the encoding function and the non-graph data corresponding to the targets fed to the decoding function.
In Embodiment 2 of this specification, the flow of the prediction method is shown in FIG. 3. As described above, every target to be predicted of the hybrid graph neural network model corresponds to one or more instances.
Step 310: take the instances corresponding to all targets to be predicted and the several-degree neighbors of those instances as points in the graph and, based on the graph data of all instances, use the encoding function to generate the graph representation vector of each instance.
For the graph neural network algorithm adopted by the encoding function, each instance corresponding to each target to be predicted is a point in the graph; the point set includes not only all instances corresponding to all targets to be predicted but also other instances that may become the several-degree neighbors of each of those instances.
The graph data of an instance includes one or more of the following: the own point data in the instance's own attribute data, and relationship data with other instances. The own point data expresses the characteristics of the point in the graph, and the relationship data with other instances expresses the characteristics of the edges (associations between different points); the latter may relate a point to neighbors of one order or combine relations to neighbors of several orders, without limitation.
The encoding function uses the trained graph neural network algorithm to generate, at one time and from the graph data of all instances, the graph representation vector of every instance in the graph.
Step 320: based on the graph representation vector of the instance corresponding to a target to be predicted and the corresponding non-graph data, use the decoding function to generate the prediction of the target.
After the graph representation vectors of all instances are generated, for one or several targets to be predicted, the graph representation vectors of the instances corresponding to each target and the corresponding non-graph data can be fed into the trained decoding function, whose output is the prediction of each target. In the decoding function's input, the non-graph data corresponding to the target may be one or more of the instances' own non-point data and the time-series data related to the instances corresponding to the target.
In some application scenarios the number of targets to be predicted is large. In this case, ps targets (ps being a natural number) can be predicted per round until all targets are done. Specifically, in each round, ps targets are extracted from the set of targets to be predicted; for each of the extracted ps targets, the graph representation vectors of its corresponding instances and the corresponding non-graph data are fed to the decoding function, and the machine learning algorithm of the decoding function yields the prediction of each of the ps targets; the ps targets extracted in this round are then deleted from the target set, and if the set is not empty, the next round continues with another ps targets until the set is empty. Note that in the last round before the set becomes empty, the number of extracted targets may be less than ps. A sketch of this loop is given below.
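A minimal sketch of the round-by-round loop just described, in plain Python with an assumed predict_batch helper; the last round naturally carries fewer than ps targets:

```python
def predict_all(targets, ps, predict_batch):
    """predict_batch maps a list of targets to {target: prediction}."""
    remaining = list(targets)
    results = {}
    while remaining:                             # until the target set is empty
        batch, remaining = remaining[:ps], remaining[ps:]
        results.update(predict_batch(batch))     # decode up to ps targets per round
    return results
```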
The first exemplary implementation of Embodiment 2 of this specification is given below. The graph neural network model in this first exemplary implementation is one trained through the first exemplary implementation of Embodiment 1; it is therefore used to classify instances, and its encoding function $f$, decoding function $g$, instance graph data, and non-graph data corresponding to the targets are the same as in the first exemplary implementation of Embodiment 1, with trained encoding parameters $W_E$ and trained decoding parameters $\omega_E$.
In this first exemplary implementation, the instances corresponding to all targets to be predicted and the several-degree neighbors of those instances form the point set $\mathcal{V}$; the relationships between all points form the edge set $\mathcal{E}$, constituting the graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$. $X$ is the own point data of all instances, $E$ the relationship data of all instances with other instances, and $A$ the adjacency matrix of the point-edge topology of $\mathcal{G}$. Each target to be predicted includes a target identifier; the instance corresponding to $ID_i$ is $v_i$, whose graph representation vector is $H_i$, own non-point data is $B_i$, and time-series data is $S_i$.
In this first exemplary implementation, the logical structure of the hybrid graph neural network model prediction system is shown in FIG. 4. The prediction system comprises a prediction encoder and a prediction decoder. The prediction encoder includes an encoding function and parameters module, a graph representation vector computation module, and a graph representation vector storage module; the prediction decoder includes a decoding function and parameters module, a graph representation vector query module, and a prediction computation module. The encoding function and parameters module holds the encoding function $f$ and the trained encoding parameters $W_E$; the decoding function and parameters module holds the decoding function $g$ and the trained decoding parameters $\omega_E$. The prediction system runs in the following steps:
Step S62: in the prediction encoder, with the encoding parameters $W_E$, the graph representation vector computation module computes at one time the graph representation vectors $\{H_i\}$ of all points in the graph (including all instances to be predicted and their several-degree neighbors) with Eq. 12:
$H = f(X, A, E \mid W_E)$   (Eq. 12)
Step S64: in the prediction encoder, the graph representation vector storage module, indexed by instance identifier, stores the correspondence between each instance's identifier $v_i$ and its graph representation vector $H_i$.
Step S66: let the variable Ct = ps.
Step S68: in the prediction decoder, take Ct targets from the set of targets to be predicted (if the total number of targets in the set is less than ps, take all remaining targets and set Ct to that total). For each extracted target $ID_i$ ($i \in [1, Ct]$), the graph representation vector $H_i$ of the corresponding instance $v_i$, looked up by the graph representation vector query module from the prediction encoder's graph representation vector storage module, is concatenated with the instance's own non-point data $B_i$ and time-series data $S_i$ into one input $(H_i, B_i, S_i)$ ($i \in [1, Ct]$) of the decoding function $g$.
Step S70: in the prediction decoder, the prediction computation module, with the decoding parameters $\omega_E$, obtains with Eq. 13 the predictions $\hat{Y}_i$ of the Ct targets extracted in step S68 in this round:
$\hat{Y}_i = g(H_i, B_i, S_i \mid \omega_E), \quad i \in [1, Ct]$   (Eq. 13)
Step S72: delete the Ct targets taken out in this round from the set of targets to be predicted; if the set is empty, the prediction ends; if not, go to step S68.
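A sketch of the storage-and-lookup pattern of steps S62-S68, assuming PyTorch and illustrative names (instance_ids and instance_of are assumptions of the sketch):

```python
import torch

# assumes: f, g are the trained encoder/decoder modules holding W_E and
# omega_E; instance_ids lists the ids of the rows of H; instance_of maps a
# target id to its instance id; B, S are tensors indexed by target id.
H = f(X, A, E)                                    # Eq. 12: encode once with W_E
store = {vid: H[row] for row, vid in enumerate(instance_ids)}

def decode_target(tid, g, B, S):
    h = store[instance_of[tid]]                   # lookup, no re-encoding
    return g(torch.cat([h, B[tid], S[tid]]))      # Eq. 13
```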
The second exemplary implementation of Embodiment 2 of this specification is given below. The graph neural network model in this second exemplary implementation is one trained through the second exemplary implementation of Embodiment 1; it is therefore used to predict the degree of matching between instances and objects, and its encoding function $f$, decoding function $g$, instance graph data, and non-graph data corresponding to the targets are the same as in the second exemplary implementation of Embodiment 1, with trained encoding parameters $W_E$ and trained decoding parameters $\omega_E$.
In this second exemplary implementation, each target to be predicted includes a target identifier; the instances corresponding to $ID_i$ number (N+1), namely $v_i$ and $v_{ij}, j \in [1, N]$; the graph representation vector of instance $v_i$ is $H_i$, and the representation vector of the object $u_i$ of target $ID_i$ is $U_i$.
The instances corresponding to all targets to be predicted and the several-degree neighbors of those instances form the point set $\mathcal{V}$; the relationships between all points form the edge set $\mathcal{E}$, constituting the graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$. $X$ is the own point data of all instances, $E$ the relationship data of all instances with other instances, and $A$ the adjacency matrix of the point-edge topology of $\mathcal{G}$.
In this second exemplary implementation, the logical structure of the hybrid graph neural network model prediction system is shown in FIG. 4. The prediction system runs in the following steps:
Step S82: in the prediction encoder, with the trained encoding parameters $W_E$, the graph representation vector computation module computes at one time, with Eq. 12, the graph representation vectors $\{H_i\}$ of all instances (including all instances to be predicted and their several-degree neighbors).
Step S84: in the prediction encoder, the graph representation vector storage module, indexed by instance identifier, stores the correspondence between each instance's identifier $v_i$ and its graph representation vector $H_i$.
Step S86: let the variable Ct = ps.
Step S88: in the prediction decoder, take Ct targets from the set of targets to be predicted (if the total number of targets in the set is less than ps, take all remaining targets and set Ct to that total). For each extracted target $ID_i$ ($i \in [1, Ct]$), the graph representation vector $H_i$ of the corresponding instance $v_i$ and the graph representation vectors $H_{ij}, j \in [1, N]$ of the corresponding instances $v_{ij}, j \in [1, N]$, looked up by the graph representation vector query module from the prediction encoder's graph representation vector storage module, are concatenated with the representation vector $U_i$ of the object of target $ID_i$ into one input $(H_i, H_{i1}, \ldots, H_{iN}, U_i)$ ($i \in [1, Ct]$) of the decoding function $g$.
Step S90: in the prediction decoder, the prediction computation module, with the decoding parameters $\omega_E$, obtains with Eq. 14 the predictions $\hat{Y}_i$ of the Ct targets extracted in step S88 in this round:
$\hat{Y}_i = g(H_i, H_{i1}, \ldots, H_{iN}, U_i \mid \omega_E), \quad i \in [1, Ct]$   (Eq. 14)
Step S92: delete the Ct targets taken out in this round from the set of targets to be predicted; if the set is empty, the prediction ends; if not, go to step S88.
As can be seen, in Embodiment 2 of this specification, the encoding function computes the graph representation vectors of all instances at one time, and the decoding function generates the predictions of the targets based on the graph representation vectors and the non-graph data corresponding to the targets, which avoids redundant, repeated processing of the graph data, reduces the amount of computation, and speeds up prediction; the decoding function jointly considers the graph representation vectors and the non-graph data of the instances, realizing efficient prediction with the hybrid graph neural network model.
Specific embodiments of this specification have been described above. Other embodiments are within the scope of the appended claims. In some cases, the actions or steps recited in the claims can be performed in an order different from that in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying drawings do not necessarily require the particular order shown, or a sequential order, to achieve desirable results. In certain implementations, multitasking and parallel processing are also possible or may be advantageous.
In the first application example of this specification, an Internet service provider adopts a hybrid graph neural network model to evaluate the category to which a user belongs and, according to that category, performs the business processing corresponding to the category on requests from the user, so as to provide users with more targeted services and improve processing efficiency. The form of the category and the corresponding business processing can be determined according to specific business requirements and are not limited; for example, the category may be a consumption level, credit level, activity level, or security level, and the corresponding business processing may be applying different business flows or different processing parameters to users of different categories.
The Internet service provider constructs the hybrid graph neural network model with users as instances; the training or prediction target of the model is the category to which a certain user belongs. In the model, the graph data of an instance is the part of the user data expressed as point attributes and edge attributes in the graph (such as the dense data in the user attribute data and the relationship data between users), while the rest of the user data (such as the sparse data in the user attribute data) and the historical behavior time-series data generated from the user's historical behavior records serve as the non-graph data corresponding to the target.
The Internet service provider uses the first exemplary implementation of Embodiment 1 of this specification to train the hybrid graph neural network model. After training, the first exemplary implementation of Embodiment 2 is used to predict, with the trained model, the category to which a user belongs, and the business processing corresponding to the predicted category is performed for the user.
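For illustration only, a possible split of user data into the two model inputs described in this example; the field names are assumptions, not from the disclosure:

```python
def split_user_features(user):
    """Route each part of the user record to encoder or decoder input."""
    graph_data = {
        "point_attrs": user["dense"],       # dense attributes -> point attrs
        "edges": user["relations"],         # user-user relations -> edge attrs
    }
    non_graph_data = {
        "sparse": user["sparse"],           # sparse attributes -> decoder
        "behavior_seq": user["history"],    # time series of past behaviors
    }
    return graph_data, non_graph_data
```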
In the second application example of this specification, an Internet service provider adopts a hybrid graph neural network model to evaluate the degree of matching between users and objects and, according to that degree, recommends objects to users, so as to improve the efficiency with which users obtain information and raise user satisfaction. The specific form of the object is not limited; for example, it may be a commodity, a promotional activity, an advertisement, a search result for a user's search request, and the like.
The Internet service provider constructs the hybrid graph neural network model with objects as instances; the training or prediction target of the model is the degree of matching between a certain user and a certain object. In the model, the graph data of an instance is the part of the object data expressed as point attributes and edge attributes in the graph, while the user's representation vector serves as the non-graph data corresponding to the target. Based on the user's historical behavior records, the graph representation vectors of the N objects on which the user has had behaviors (such as browsing, favoriting, or following) are also used as inputs of the decoding function; thus the instances corresponding to each target comprise (N+1) objects, namely the object to be recommended and the N objects on which the user has had historical behaviors.
The Internet service provider uses the second exemplary implementation of Embodiment 1 of this specification to train the hybrid graph neural network model. After training, the second exemplary implementation of Embodiment 2 is used to predict, with the trained model, the degree of matching between a user and the objects to be recommended, and several objects to be recommended with a higher predicted matching degree are recommended to the user.
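A sketch of the final recommendation step in this example, assuming a match_score callable that wraps the trained model; all names are illustrative:

```python
import heapq

def recommend(user, candidates, match_score, k=10):
    """Return the k candidate objects with the highest predicted matching."""
    scored = [(match_score(user, obj), obj) for obj in candidates]
    return [obj for _, obj in heapq.nlargest(k, scored, key=lambda p: p[0])]
```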
Corresponding to the above flows, the embodiments of this specification also provide a training device for the hybrid graph neural network and a prediction device for the hybrid graph neural network. Both devices can be implemented by software, or by hardware or a combination of software and hardware. Taking software implementation as an example, as a device in the logical sense, it is formed by the CPU (Central Processing Unit) of the equipment where it is located reading the corresponding computer program instructions into memory and running them. At the hardware level, in addition to the CPU, memory, and storage shown in FIG. 5, the equipment where the training device or prediction device is located usually also includes other hardware such as chips for wireless signal transmission and reception, and/or boards for implementing network communication functions.
FIG. 6 shows a training device for a hybrid graph neural network model provided by an embodiment of this specification. The model includes an encoding function and a decoding function; the encoding function is a graph neural network algorithm with encoding parameters, or a combination of such algorithms, and the decoding function is a machine learning algorithm with decoding parameters, or a combination of such algorithms. The device includes a training graph representation vector unit, a decoding parameter training unit, an encoding parameter training unit, and a training loop unit, wherein: the training graph representation vector unit takes the instances corresponding to all targets in the training samples and the several-degree neighbors of those instances as points in the graph and, based on the graph data of all instances, uses the encoding function to generate the graph representation vector of each instance; the decoding parameter training unit trains the decoding parameters for t rounds, where in each round bs targets are extracted from the training samples, the decoding function generates the prediction of each target based on the graph representation vector of the instance corresponding to the target and the corresponding non-graph data, and the decoding parameters are optimized according to the loss of this round determined from the predictions and labels of the bs targets (bs is a natural number, t a natural number greater than 1); the encoding parameter training unit optimizes the encoding parameters according to the losses of the t rounds; and the training loop unit repeats all of the above units until the predetermined training termination condition is met.
In one example, the encoding parameter training unit is specifically configured to compute the gradients of each round's loss with respect to the graph representation vectors of the instances corresponding to that round's bs targets and to optimize the encoding parameters according to the bs×t gradients.
In the above example, the encoding parameter training unit optimizing the encoding parameters according to the bs×t gradients includes: accumulating the t rounds of gradients on the graph representation vectors of the instances corresponding to each round's bs targets, determining the gradient of the loss with respect to the encoding parameters from the gradients accumulated on these graph representation vectors, and using that gradient to optimize the encoding parameters.
Optionally, the decoding parameter training unit optimizing the decoding parameters according to the loss of this round determined from the predictions and labels of this round's bs targets includes: determining each target's loss from its prediction and label, obtaining this round's loss from the losses of the bs targets, and optimizing the decoding parameters according to the gradient of this round's loss with respect to the decoding parameters.
Optionally, the predetermined training termination condition includes optimizing the encoding parameters R times, R being a natural number greater than 1.
In one implementation, the graph data of an instance includes at least one of the instance's own point data and its relationship data with other instances; the corresponding non-graph data includes at least one of the own non-point data of the instance corresponding to the target and the time-series data related to the instance corresponding to the target. The instance's own point data includes the instance's own dense data; its own non-point data includes its own sparse data.
Optionally, the hybrid graph neural network model is used to evaluate the category to which a user belongs; the instance is a user; the training target is the category to which a certain user belongs; the graph data of the instance includes the part of the user data expressed as attributes of points and edges in the graph; the corresponding non-graph data includes at least one of: the rest of the user data other than the part expressed as attributes of points and edges in the graph, and historical behavior time-series data generated from the user's historical behavior records. The device further includes a category prediction and business processing unit for predicting, with the trained model, the category to which a user belongs and performing on the user the business processing corresponding to that category.
Optionally, the hybrid graph neural network model is used to evaluate the degree of matching between a user and an object; the instance is an object, and the training target is the degree of matching between a certain user and an object to be recommended; the graph data of the instance includes the part of the object data expressed as attributes of points and edges in the graph; the graph representation vectors of the instances corresponding to the target include the graph representation vector of the object to be recommended and those of the N objects on which the user has had historical behaviors; the corresponding non-graph data includes the user's representation vector; N is a natural number. The device further includes a matching prediction and recommendation unit for predicting, with the trained model, the degree of matching between the user and the objects to be recommended and recommending to the user several objects to be recommended with a higher predicted matching degree.
FIG. 7 shows a prediction device for a hybrid graph neural network model provided by an embodiment of this specification. The model includes an encoding function and a decoding function; the encoding function is a graph neural network algorithm with encoding parameters trained according to the aforementioned training method, and the decoding function is a machine learning algorithm with decoding parameters trained according to that method. The device includes a prediction graph representation vector unit and a prediction generation unit, wherein: the prediction graph representation vector unit takes the instances corresponding to all targets to be predicted and the several-degree neighbors of those instances as points in the graph and, based on the graph data of all instances, uses the encoding function to generate the graph representation vector of each instance; and the prediction generation unit uses the decoding function to generate the prediction of a target based on the graph representation vector of the instance corresponding to the target to be predicted and the corresponding non-graph data.
Optionally, the device further includes a target extraction unit for extracting ps targets to be predicted from the set of targets to be predicted, ps being a natural number; the prediction generation unit is specifically configured to generate, for each of the ps targets, its prediction with the decoding function based on the graph representation vector of its corresponding instance and the corresponding non-graph data; and the device further includes a loop control unit for deleting the ps targets from the set of targets to be predicted and, if the set is not empty, continuing to extract at most ps targets for prediction in the next round until the set is empty.
In one example, the graph data of an instance includes at least one of the instance's own point data and its relationship data with other instances; the corresponding non-graph data includes at least one of the own non-point data of the instance corresponding to the target and the time-series data related to the instance corresponding to the target. The instance's own point data includes the instance's own dense data; its own non-point data includes its own sparse data.
Optionally, the hybrid graph neural network model is used to evaluate the category to which a user belongs; the instance is a user; the target to be predicted is the category to which a certain user belongs; the graph data of the instance includes the part of the user data expressed as attributes of points and edges in the graph; the corresponding non-graph data includes at least one of: the rest of the user data other than the part expressed as attributes of points and edges in the graph, and historical behavior time-series data generated from the user's historical behavior records. The device further includes a category business processing unit for performing, on a user, the business processing corresponding to the category to which the user is predicted to belong.
Optionally, the hybrid graph neural network model is used to evaluate the degree of matching between a user and an object; the instance is an object, and the target to be predicted is the degree of matching between a certain user and an object to be recommended; the graph data of the instance includes the part of the object data expressed as attributes of points and edges in the graph; the graph representation vectors of the instances corresponding to the target include the graph representation vector of the object to be recommended and those of the N objects on which the user has had historical behaviors; the corresponding non-graph data includes the user's representation vector; N is a natural number. The device further includes a recommending unit for recommending to the user, according to the degree of matching between the objects to be recommended and the user, several objects to be recommended with a higher matching degree.
The embodiments of this specification provide a computer device including a memory and a processor. The memory stores a computer program runnable by the processor; when the processor runs the stored computer program, it executes each step of the training method of the hybrid graph neural network in the embodiments of this specification. For a detailed description of those steps, refer to the preceding content; it is not repeated here.
The embodiments of this specification provide a computer device including a memory and a processor. The memory stores a computer program runnable by the processor; when the processor runs the stored computer program, it executes each step of the prediction method of the hybrid graph neural network in the embodiments of this specification. For a detailed description of those steps, refer to the preceding content; it is not repeated here.
The embodiments of this specification provide a computer-readable storage medium storing computer programs which, when run by a processor, execute each step of the training method of the hybrid graph neural network in the embodiments of this specification. For a detailed description of those steps, refer to the preceding content; it is not repeated here.
The embodiments of this specification provide a computer-readable storage medium storing computer programs which, when run by a processor, execute each step of the prediction method of the hybrid graph neural network in the embodiments of this specification. For a detailed description of those steps, refer to the preceding content; it is not repeated here.
The above are only preferred embodiments of this specification and are not intended to limit other embodiments within the claimed scope of protection; any modification, equivalent replacement, improvement, and the like made within the spirit and principles of this specification shall fall within the claimed scope of protection.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
Memory may include non-persistent storage in computer-readable media, in the form of random access memory (RAM) and/or non-volatile memory such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include persistent and non-persistent, removable and non-removable media, and information storage may be implemented by any method or technology. Information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic tape cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
It should also be noted that the terms "comprise", "include", and any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, commodity, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, commodity, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the existence of other identical elements in the process, method, commodity, or device that includes it.
Those skilled in the art should understand that the embodiments of this specification may be provided as a method, a system, or a computer program product. Accordingly, the embodiments of this specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the embodiments of this specification may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.

Claims (34)

  1. A training method for a hybrid graph neural network model, the model comprising an encoding function and a decoding function, the encoding function being a graph neural network algorithm with encoding parameters or a combination thereof, and the decoding function being a machine learning algorithm with decoding parameters or a combination thereof, the method comprising:
    taking the instances corresponding to all targets in the training samples and the several-degree neighbors of the instances as points in a graph and, based on the graph data of all instances, using the encoding function to generate a graph representation vector for each instance;
    training the decoding parameters for t rounds; in each round, extracting bs targets from the training samples, using the decoding function to generate the prediction of each target based on the graph representation vector of the instance corresponding to the target and the corresponding non-graph data, and optimizing the decoding parameters according to the loss of this round determined from the predictions and labels of the bs targets of this round; bs being a natural number and t a natural number greater than 1;
    optimizing the encoding parameters according to the losses of the t rounds;
    repeating all of the above steps until a predetermined training termination condition is met.
  2. The method according to claim 1, wherein optimizing the encoding parameters according to the losses of the t rounds comprises: computing the gradients of each round's loss with respect to the graph representation vectors of the instances corresponding to that round's bs targets, and optimizing the encoding parameters according to the bs×t gradients.
  3. The method according to claim 2, wherein optimizing the encoding parameters according to the bs×t gradients comprises: accumulating the t rounds of gradients on the graph representation vectors of the instances corresponding to each round's bs targets, determining the gradient of the loss with respect to the encoding parameters from the gradients accumulated on the graph representation vectors, and optimizing the encoding parameters with the gradient of the loss with respect to the encoding parameters.
  4. The method according to claim 1, wherein optimizing the decoding parameters according to the loss of this round determined from the predictions and labels of this round's bs targets comprises: determining each target's loss from its prediction and label in this round, obtaining this round's loss from the losses of this round's bs targets, and optimizing the decoding parameters according to the gradient of this round's loss with respect to the decoding parameters.
  5. The method according to claim 1, wherein the predetermined training termination condition comprises: optimizing the encoding parameters R times, R being a natural number greater than 1.
  6. The method according to claim 1, wherein the graph data of an instance comprises at least one of the instance's own point data and relationship data with other instances; the corresponding non-graph data comprises at least one of the own non-point data of the instance corresponding to the target and time-series data related to the instance corresponding to the target.
  7. The method according to claim 6, wherein the instance's own point data comprises the instance's own dense data, and the instance's own non-point data comprises the instance's own sparse data.
  8. The method according to claim 1, wherein the hybrid graph neural network model is used to evaluate the category to which a user belongs; the instance is a user; the training target is the category to which a certain user belongs; the graph data of the instance comprises the part of the user data expressed as attributes of points and edges in the graph; the corresponding non-graph data comprises at least one of: the rest of the user data other than the part expressed as attributes of points and edges in the graph, and historical behavior time-series data generated from the user's historical behavior records;
    the method further comprising: predicting the category to which a user belongs with the trained hybrid graph neural network model, and performing on the user the business processing corresponding to the category according to the category to which the user belongs.
  9. The method according to claim 1, wherein the hybrid graph neural network model is used to evaluate the degree of matching between a user and an object; the instance is an object, and the training target is the degree of matching between a certain user and an object to be recommended; the graph data of the instance comprises the part of the object data expressed as attributes of points and edges in the graph; the graph representation vectors of the instances corresponding to the target comprise the graph representation vector of the object to be recommended and the graph representation vectors of the N objects on which the user has had historical behaviors; the corresponding non-graph data comprises the user's representation vector; N being a natural number;
    the method further comprising: predicting, with the trained hybrid graph neural network model, the degree of matching between the user and the objects to be recommended, and recommending to the user several objects to be recommended predicted to have a higher degree of matching with the user.
  10. A prediction method for a hybrid graph neural network model, the model comprising an encoding function and a decoding function, the encoding function being a graph neural network algorithm with encoding parameters trained according to the method of any one of claims 1 to 9, and the decoding function being a machine learning algorithm with decoding parameters trained according to the method of any one of claims 1 to 9, the method comprising:
    taking the instances corresponding to all targets to be predicted and the several-degree neighbors of the instances as points in a graph and, based on the graph data of all instances, using the encoding function to generate a graph representation vector for each instance;
    based on the graph representation vector of the instance corresponding to a target to be predicted and the corresponding non-graph data, using the decoding function to generate the prediction of the target.
  11. The method according to claim 10,
    the method further comprising: extracting ps targets to be predicted from the set of targets to be predicted, ps being a natural number;
    wherein using the decoding function to generate the prediction of the target based on the graph representation vector of the instance corresponding to the target to be predicted and the corresponding non-graph data comprises: for each of the ps targets, using the decoding function to generate its prediction based on the graph representation vector of its corresponding instance and the corresponding non-graph data;
    the method further comprising: deleting the ps targets from the set of targets to be predicted and, if the set is not empty, continuing to extract at most ps targets for prediction in the next round until the set is empty.
  12. The method according to claim 10, wherein the graph data of an instance comprises at least one of the instance's own point data and relationship data with other instances; the corresponding non-graph data comprises at least one of the own non-point data of the instance corresponding to the target and time-series data related to the instance corresponding to the target.
  13. The method according to claim 12, wherein the instance's own point data comprises the instance's own dense data, and the instance's own non-point data comprises the instance's own sparse data.
  14. The method according to claim 10, wherein the hybrid graph neural network model is used to evaluate the category to which a user belongs; the instance is a user; the target to be predicted is the category to which a certain user belongs; the graph data of the instance comprises the part of the user data expressed as attributes of points and edges in the graph; the corresponding non-graph data comprises at least one of: the rest of the user data other than the part expressed as attributes of points and edges in the graph, and historical behavior time-series data generated from the user's historical behavior records;
    the method further comprising: performing, on a user, the business processing corresponding to the category to which the user is predicted to belong.
  15. The method according to claim 10, wherein the hybrid graph neural network model is used to evaluate the degree of matching between a user and an object; the instance is an object, and the target to be predicted is the degree of matching between a certain user and an object to be recommended; the graph data of the instance comprises the part of the object data expressed as attributes of points and edges in the graph; the graph representation vectors of the instances corresponding to the target comprise the graph representation vector of the object to be recommended and the graph representation vectors of the N objects on which the user has had historical behaviors; the corresponding non-graph data comprises the user's representation vector; N being a natural number;
    the method further comprising: recommending to the user, according to the degree of matching between the objects to be recommended and the user, several objects to be recommended with a higher degree of matching with the user.
  16. A training device for a hybrid graph neural network model, the model comprising an encoding function and a decoding function, the encoding function being a graph neural network algorithm with encoding parameters or a combination thereof, and the decoding function being a machine learning algorithm with decoding parameters or a combination thereof, the device comprising:
    a training graph representation vector unit, configured to take the instances corresponding to all targets in the training samples and the several-degree neighbors of the instances as points in a graph and, based on the graph data of all instances, use the encoding function to generate a graph representation vector for each instance;
    a decoding parameter training unit, configured to train the decoding parameters for t rounds; in each round, bs targets are extracted from the training samples, the decoding function generates the prediction of each target based on the graph representation vector of the instance corresponding to the target and the corresponding non-graph data, and the decoding parameters are optimized according to the loss of this round determined from the predictions and labels of the bs targets of this round; bs being a natural number and t a natural number greater than 1;
    an encoding parameter training unit, configured to optimize the encoding parameters according to the losses of the t rounds;
    a training loop unit, configured to repeatedly employ all of the above units until a predetermined training termination condition is met.
  17. The device according to claim 16, wherein the encoding parameter training unit is specifically configured to: compute the gradients of each round's loss with respect to the graph representation vectors of the instances corresponding to that round's bs targets, and optimize the encoding parameters according to the bs×t gradients.
  18. The device according to claim 17, wherein the encoding parameter training unit optimizing the encoding parameters according to the bs×t gradients comprises: accumulating the t rounds of gradients on the graph representation vectors of the instances corresponding to each round's bs targets, determining the gradient of the loss with respect to the encoding parameters from the gradients accumulated on the graph representation vectors, and optimizing the encoding parameters with the gradient of the loss with respect to the encoding parameters.
  19. The device according to claim 16, wherein the decoding parameter training unit optimizing the decoding parameters according to the loss of this round determined from the predictions and labels of this round's bs targets comprises: determining each target's loss from its prediction and label in this round, obtaining this round's loss from the losses of this round's bs targets, and optimizing the decoding parameters according to the gradient of this round's loss with respect to the decoding parameters.
  20. The device according to claim 16, wherein the predetermined training termination condition comprises: optimizing the encoding parameters R times, R being a natural number greater than 1.
  21. The device according to claim 16, wherein the graph data of an instance comprises at least one of the instance's own point data and relationship data with other instances; the corresponding non-graph data comprises at least one of the own non-point data of the instance corresponding to the target and time-series data related to the instance corresponding to the target.
  22. The device according to claim 21, wherein the instance's own point data comprises the instance's own dense data, and the instance's own non-point data comprises the instance's own sparse data.
  23. The device according to claim 16, wherein the hybrid graph neural network model is used to evaluate the category to which a user belongs; the instance is a user; the training target is the category to which a certain user belongs; the graph data of the instance comprises the part of the user data expressed as attributes of points and edges in the graph; the corresponding non-graph data comprises at least one of: the rest of the user data other than the part expressed as attributes of points and edges in the graph, and historical behavior time-series data generated from the user's historical behavior records;
    the device further comprising: a category prediction and business processing unit, configured to predict the category to which a user belongs with the trained hybrid graph neural network model, and perform on the user the business processing corresponding to the category according to the category to which the user belongs.
  24. The device according to claim 16, wherein the hybrid graph neural network model is used to evaluate the degree of matching between a user and an object; the instance is an object, and the training target is the degree of matching between a certain user and an object to be recommended; the graph data of the instance comprises the part of the object data expressed as attributes of points and edges in the graph; the graph representation vectors of the instances corresponding to the target comprise the graph representation vector of the object to be recommended and the graph representation vectors of the N objects on which the user has had historical behaviors; the corresponding non-graph data comprises the user's representation vector; N being a natural number;
    the device further comprising: a matching prediction and recommendation unit, configured to predict, with the trained hybrid graph neural network model, the degree of matching between the user and the objects to be recommended, and recommend to the user several objects to be recommended predicted to have a higher degree of matching with the user.
  25. A prediction device for a hybrid graph neural network model, the model comprising an encoding function and a decoding function, the encoding function being a graph neural network algorithm with encoding parameters trained according to the method of any one of claims 1 to 9, and the decoding function being a machine learning algorithm with decoding parameters trained according to the method of any one of claims 1 to 9, the device comprising:
    a prediction graph representation vector unit, configured to take the instances corresponding to all targets to be predicted and the several-degree neighbors of the instances as points in a graph and, based on the graph data of all instances, use the encoding function to generate a graph representation vector for each instance;
    a prediction generation unit, configured to generate the prediction of a target with the decoding function based on the graph representation vector of the instance corresponding to the target to be predicted and the corresponding non-graph data.
  26. The device according to claim 25, further comprising: a target extraction unit, configured to extract ps targets to be predicted from the set of targets to be predicted, ps being a natural number;
    the prediction generation unit being specifically configured to: for each of the ps targets, use the decoding function to generate its prediction based on the graph representation vector of its corresponding instance and the corresponding non-graph data;
    the device further comprising: a loop control unit, configured to delete the ps targets from the set of targets to be predicted and, if the set is not empty, continue to extract at most ps targets for prediction in the next round until the set is empty.
  27. The device according to claim 25, wherein the graph data of an instance comprises at least one of the instance's own point data and relationship data with other instances; the corresponding non-graph data comprises at least one of the own non-point data of the instance corresponding to the target and time-series data related to the instance corresponding to the target.
  28. The device according to claim 27, wherein the instance's own point data comprises the instance's own dense data, and the instance's own non-point data comprises the instance's own sparse data.
  29. The device according to claim 25, wherein the hybrid graph neural network model is used to evaluate the category to which a user belongs; the instance is a user; the target to be predicted is the category to which a certain user belongs; the graph data of the instance comprises the part of the user data expressed as attributes of points and edges in the graph; the corresponding non-graph data comprises at least one of: the rest of the user data other than the part expressed as attributes of points and edges in the graph, and historical behavior time-series data generated from the user's historical behavior records;
    the device further comprising: a category business processing unit, configured to perform, on a user, the business processing corresponding to the category to which the user is predicted to belong.
  30. The device according to claim 25, wherein the hybrid graph neural network model is used to evaluate the degree of matching between a user and an object; the instance is an object, and the target to be predicted is the degree of matching between a certain user and an object to be recommended; the graph data of the instance comprises the part of the object data expressed as attributes of points and edges in the graph; the graph representation vectors of the instances corresponding to the target comprise the graph representation vector of the object to be recommended and the graph representation vectors of the N objects on which the user has had historical behaviors; the corresponding non-graph data comprises the user's representation vector; N being a natural number;
    the device further comprising: a recommending unit, configured to recommend to the user, according to the degree of matching between the objects to be recommended and the user, several objects to be recommended with a higher degree of matching with the user.
  31. A computer device, comprising a memory and a processor, the memory storing a computer program runnable by the processor, wherein, when running the computer program, the processor executes the method of any one of claims 1 to 9.
  32. A computer device, comprising a memory and a processor, the memory storing a computer program runnable by the processor, wherein, when running the computer program, the processor executes the method of any one of claims 10 to 15.
  33. A computer-readable storage medium on which a computer program is stored, wherein, when the computer program is run by a processor, the method of any one of claims 1 to 9 is executed.
  34. A computer-readable storage medium on which a computer program is stored, wherein, when the computer program is run by a processor, the method of any one of claims 10 to 15 is executed.
PCT/CN2022/071577 2021-01-14 2022-01-12 Training and prediction of hybrid graph neural network model WO2022152161A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/272,194 US20240152732A1 (en) 2021-01-14 2022-01-12 Training and prediction of hybrid graph neural network model

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110050410.7A CN112381216B (zh) 2021-01-14 2021-01-14 Training and prediction method and device for a hybrid graph neural network model
CN202110050410.7 2021-01-14

Publications (1)

Publication Number Publication Date
WO2022152161A1 true WO2022152161A1 (zh) 2022-07-21

Family

ID=74581860

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/071577 WO2022152161A1 (zh) 2021-01-14 2022-01-12 Training and prediction of hybrid graph neural network model

Country Status (3)

Country Link
US (1) US20240152732A1 (zh)
CN (1) CN112381216B (zh)
WO (1) WO2022152161A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112381216B (zh) * 2021-01-14 2021-04-27 蚂蚁智信(杭州)信息技术有限公司 Training and prediction method and device for a hybrid graph neural network model
CN113657577B (zh) * 2021-07-21 2023-08-18 阿里巴巴达摩院(杭州)科技有限公司 Model training method and computing system
CN115905624B (zh) * 2022-10-18 2023-06-16 支付宝(杭州)信息技术有限公司 Method, apparatus and device for determining a user behavior state
CN116506622B (zh) * 2023-06-26 2023-09-08 瀚博半导体(上海)有限公司 Model training method and video encoding parameter optimization method and apparatus

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160239461A1 (en) * 2013-03-01 2016-08-18 Synaptic Engines, Llc Reconfigurable graph processor
CN110598842A (zh) * 2019-07-17 2019-12-20 深圳大学 Deep neural network hyperparameter optimization method, electronic device and storage medium
CN111985622A (zh) * 2020-08-25 2020-11-24 支付宝(杭州)信息技术有限公司 Graph neural network training method and system
CN112381216A (zh) * 2021-01-14 2021-02-19 蚂蚁智信(杭州)信息技术有限公司 Training and prediction method and device for a hybrid graph neural network model

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111902825A (zh) * 2018-03-23 2020-11-06 多伦多大学管理委员会 Polygonal object annotation system and method, and method of training an object annotation system
CN108829683B (zh) * 2018-06-29 2022-06-10 北京百度网讯科技有限公司 Hybrid-annotation learning neural network model and training method and device therefor
CN111192680B (zh) * 2019-12-25 2021-06-01 山东众阳健康科技集团有限公司 Intelligent auxiliary diagnosis method based on deep learning and ensemble classification
CN111612070B (zh) * 2020-05-13 2024-04-26 清华大学 Scene-graph-based image caption generation method and device
CN112114791B (zh) * 2020-09-08 2022-03-25 南京航空航天大学 Meta-learning-based adaptive code generation method
CN112115377B (zh) * 2020-09-11 2022-05-27 安徽农业大学 Graph neural network link prediction recommendation method based on social relationships
CN112085615A (zh) * 2020-09-23 2020-12-15 支付宝(杭州)信息技术有限公司 Training method and device for graph neural network

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116305995A (zh) * 2023-03-27 2023-06-23 清华大学 Nonlinear analysis method and device, equipment and medium for structural systems
CN116305995B (zh) * 2023-03-27 2023-11-07 清华大学 Nonlinear analysis method and device, equipment and medium for structural systems
CN116932893A (zh) * 2023-06-21 2023-10-24 江苏大学 Graph-convolutional-network-based sequence recommendation method, system, device and medium
CN116932893B (zh) * 2023-06-21 2024-06-04 江苏大学 Graph-convolutional-network-based sequence recommendation method, system, device and medium
CN117113148A (zh) * 2023-08-30 2023-11-24 上海智租物联科技有限公司 Risk identification method and device based on temporal graph neural network, and storage medium
CN117113148B (zh) * 2023-08-30 2024-05-17 上海智租物联科技有限公司 Risk identification method and device based on temporal graph neural network, and storage medium

Also Published As

Publication number Publication date
US20240152732A1 (en) 2024-05-09
CN112381216A (zh) 2021-02-19
CN112381216B (zh) 2021-04-27

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22739039

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 18272194

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22739039

Country of ref document: EP

Kind code of ref document: A1