CN117670366A - Risk prediction method, apparatus, device, medium, and program product - Google Patents


Info

Publication number
CN117670366A
Authority
CN
China
Prior art keywords
sample
node
vector
user
risk
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311685540.3A
Other languages
Chinese (zh)
Inventor
谭博帅
裴凯洋
陈萌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd (ICBC)
Priority to CN202311685540.3A
Publication of CN117670366A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00 Payment architectures, schemes or protocols
    • G06Q20/38 Payment protocols; Details thereof
    • G06Q20/40 Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
    • G06Q20/401 Transaction verification
    • G06Q20/4016 Transaction verification involving fraud or risk level assessment in transaction processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/901 Indexing; Data structures therefor; Storage structures
    • G06F16/9024 Graphs; Linked lists
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/042 Knowledge-based neural networks; Logical representations of neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/0895 Weakly supervised learning, e.g. semi-supervised or self-supervised learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Business, Economics & Management (AREA)
  • Databases & Information Systems (AREA)
  • Accounting & Taxation (AREA)
  • Computer Security & Cryptography (AREA)
  • Finance (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The disclosure provides a risk prediction method, which can be applied to the technical field of artificial intelligence. The method comprises the following steps: acquiring data to be analyzed, and updating a heterogeneous graph based on the data to be analyzed, wherein the heterogeneous graph comprises nodes and edges, the nodes are used for storing user risk feature information, and the edges are used for storing user relationships; and processing the heterogeneous graph based on a risk prediction model to complete risk prediction, wherein updating the heterogeneous graph comprises: performing self-supervised learning on the data stored in the heterogeneous graph to obtain a heterogeneous graph containing weight features. The risk prediction model is trained by combining an attention network model with a classifier, wherein the attention network model comprises a feature attention network and an edge attention network; the feature attention network is used for predicting user risk based on the user risk feature data, and the edge attention network is used for predicting a user risk propagation path based on a user relationship. The present disclosure also provides a risk prediction apparatus, device, storage medium, and program product.

Description

Risk prediction method, apparatus, device, medium, and program product
Technical Field
The present disclosure relates to the field of artificial intelligence technology, and in particular, to a risk prediction method, apparatus, device, medium, and program product.
Background
Risk assessment of users plays an important role in credit services. Constructing scoring or classification models through technologies such as statistical analysis or machine learning to evaluate risk is a hotspot of current research, but traditional machine learning methods often ignore the heterogeneity and complexity of risk items and are not sensitive enough to capture information on the association relationships of risky users. Knowledge graph technology is widely applied to representing and mining association relationships, but storing huge amounts of data brings great computational cost. There is a need for a risk prediction method that enables more accurate information capture with less computational overhead.
Disclosure of Invention
In view of the foregoing, embodiments of the present disclosure provide a risk prediction method, apparatus, device, medium, and program product.
According to a first aspect of the present disclosure, there is provided a risk prediction method, comprising: acquiring data to be analyzed, and updating a heterogeneous graph based on the data to be analyzed, wherein the heterogeneous graph comprises nodes and edges, the nodes are used for storing user risk feature information, and the edges are used for storing user relationships; and processing the heterogeneous graph based on a risk prediction model to complete risk prediction, wherein updating the heterogeneous graph comprises: performing self-supervised learning on the data stored in the heterogeneous graph to obtain a heterogeneous graph containing weight features; the risk prediction model is trained by combining an attention network model with a classifier, wherein the attention network model comprises a feature attention network and an edge attention network; the feature attention network is used for predicting user risk based on the user risk feature data, and the edge attention network is used for predicting a user risk propagation path based on a user relationship.
According to an embodiment of the present disclosure, the method for updating the heterogeneous graph includes: acquiring a real-time user feature data set, and storing the feature data of each user in a node; acquiring user relationships between the user and other users, storing the user relationships in edges, and processing the user risk feature data and the user relationships by using a heterogeneous graph conversion network to generate node representation vectors and edge representation vectors; learning the node representation vector and the edge representation vector based on a self-learning masker to obtain a node representation update vector and an edge representation update vector; and obtaining the heterogeneous graph based on the node representation update vector and the edge representation update vector, wherein the node representation update vector and the edge representation update vector comprise node representation vectors and edge representation vectors updated based on weights.
According to an embodiment of the present disclosure, the heterogeneous graph conversion network includes a heterogeneous convolution layer, and the processing the user risk feature data and the user relationship by using the heterogeneous graph conversion network includes: processing the user risk characteristic data by utilizing the heterogeneous convolution layer, normalizing a processing result, and generating the node representation vector; and processing the user relationship by utilizing the heterogeneous convolution layer, normalizing the processing result, and generating the edge representation vector.
According to an embodiment of the disclosure, the self-learning masker includes a feature mask self-learning network and an edge mask self-learning network, and the learning of the node representation vector and the edge representation vector based on the self-learning masker to obtain a node representation update vector and an edge representation update vector includes: processing the node representation vector based on the feature mask self-learning network, learning a node mask, and obtaining the node representation update vector, wherein the node representation update vector comprises a node representation vector updated based on weights; and processing the edge representation vector based on the edge mask self-learning network, learning an edge mask, and obtaining the edge representation update vector, wherein the edge representation update vector comprises an edge representation vector updated based on weights.
According to an embodiment of the present disclosure, the training method of the risk prediction model includes: acquiring sample user data, wherein the sample user data comprises sample user feature data and a sample user association relationship, the sample user association relationship comprises an association relationship between a sample user and other users, and the other users comprise users having an association relationship with the sample user; storing the sample user feature data in the heterogeneous graph to obtain a sample node representation vector and a sample edge representation vector; processing the sample node representation vector based on a feature attention network to obtain a sample node feature vector, and processing the sample edge representation vector based on an edge attention network to obtain a sample node-edge feature vector; processing the sample node feature vector and the sample node-edge feature vector based on a classifier to obtain a sample node risk probability and a sample edge risk probability; and calculating a loss function value based on the sample node risk probability, the sample edge risk probability, and the sample user risk, and performing iterative training on the risk prediction model based on the loss function value until the loss function value is smaller than a preset threshold.
According to an embodiment of the disclosure, the processing of the sample node representation vector based on the feature attention network to obtain a sample node feature vector includes: acquiring the attention weight of the sample node on a sample neighbor node based on the sample node representation vector and the representation vector of the sample neighbor node; acquiring a sample node feature attention vector based on the attention weight of the sample node on the neighbor node and the representation vector of the sample neighbor node; and obtaining the sample node feature vector based on the sample node representation vector and the sample node feature attention vector.
According to an embodiment of the disclosure, the processing of the sample edge representation vector based on the edge attention network to obtain a sample node-edge feature vector includes: acquiring the attention weight of the sample node on a sample edge based on the sample node representation vector and the sample edge representation vector; acquiring a sample edge feature attention vector based on the attention weight of the sample node on the sample edge and the representation vector of the sample edge; and obtaining the sample node-edge feature vector based on the sample node representation vector and the sample edge feature attention vector.
According to an embodiment of the disclosure, the data to be analyzed is updated periodically based on a preset frequency.
According to an embodiment of the present disclosure, after completing the risk prediction, the method further comprises: and carrying out risk display based on the risk prediction result.
According to an embodiment of the disclosure, the risk displaying based on the risk prediction result includes: and displaying a node user feature risk source and/or the node user risk propagation path, wherein the node user feature risk source is associated with user feature data, and the node user risk propagation path is associated with a user relationship.
A second aspect of the present disclosure provides a risk prediction apparatus, comprising: a first acquisition module configured to acquire data to be analyzed, and update a heterogeneous graph based on the data to be analyzed, wherein the heterogeneous graph comprises nodes and edges, the nodes are used for storing user risk feature information, and the edges are used for storing user relationships; and a prediction module configured to process the heterogeneous graph based on a risk prediction model to complete risk prediction, wherein updating the heterogeneous graph includes: performing self-supervised learning on the data stored in the heterogeneous graph to obtain a heterogeneous graph containing weight features; the risk prediction model is trained by combining an attention network model with a classifier, and the attention network model comprises a feature attention network and an edge attention network; the feature attention network is used for predicting user risk based on the user risk feature data, and the edge attention network is used for predicting a user risk propagation path based on a user relationship.
A third aspect of the present disclosure provides a training apparatus of a risk prediction model, including:
the second acquisition module is configured to acquire sample user data, wherein the sample user data comprises sample user characteristic data and a sample user association relationship, the sample user association relationship comprises an association relationship between a sample user and other users, and the other users comprise users with the association relationship with the sample user;
the storage module is configured to store the sample user data in the heterogeneous graph and acquire a sample node representation vector and a sample edge representation vector;
a first computing module configured to process the sample node representation vector based on a feature attention network to obtain a sample node feature vector;
a second computing module configured to process the sample edge representation vector based on an edge attention network to obtain a sample node-edge feature vector;
the classification module is configured to process the sample node feature vector and the sample node-edge feature vector based on a classifier, and obtain sample node risk probability and sample edge risk probability; and
and a third calculation module configured to calculate a loss function value based on the sample node risk probability and the sample edge risk probability and the sample user risk, and perform iterative training on the risk prediction model based on the loss function value until the loss function value is smaller than a preset threshold.
A fourth aspect of the present disclosure provides an electronic device, comprising: one or more processors; and a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method described above.
A fifth aspect of the present disclosure also provides a computer-readable storage medium having stored thereon executable instructions that, when executed by a processor, cause the processor to perform the above-described method.
A sixth aspect of the present disclosure also provides a computer program product comprising a computer program which, when executed by a processor, implements the above method.
Drawings
The foregoing and other objects, features and advantages of the disclosure will be more apparent from the following description of embodiments of the disclosure with reference to the accompanying drawings, in which:
fig. 1 schematically illustrates an application scenario diagram of a risk prediction method, apparatus, device, medium and program product according to an embodiment of the present disclosure.
Fig. 2 schematically illustrates a flow chart of a risk prediction method according to an embodiment of the present disclosure.
Fig. 3 schematically illustrates a flow chart of a method of updating a heterogeneous graph according to an embodiment of the disclosure.
Fig. 4 schematically illustrates a flowchart of a method of training a risk prediction model according to an embodiment of the present disclosure.
Fig. 5 schematically illustrates a flowchart of a method of obtaining a sample node feature vector according to an embodiment of the present disclosure.
Fig. 6 schematically illustrates a flow chart of a method of processing the sample edge representation vector based on an edge attention network to obtain a sample node-edge feature vector, in accordance with an embodiment of the present disclosure.
Fig. 7 schematically shows a block diagram of a risk prediction apparatus according to an embodiment of the present disclosure.
Fig. 8 schematically illustrates a block diagram of a risk prediction apparatus according to further embodiments of the present disclosure.
Fig. 9 schematically shows a block diagram of an apparatus for updating a heterogeneous graph according to an embodiment of the present disclosure.
Fig. 10 schematically illustrates a block diagram of a training apparatus of a risk prediction model according to an embodiment of the present disclosure.
Fig. 11 schematically illustrates a block diagram of an electronic device adapted to implement a risk prediction method according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is only exemplary and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the present disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. In addition, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and/or the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It should be noted that the terms used herein should be construed to have meanings consistent with the context of the present specification and should not be construed in an idealized or overly formal manner.
Where expressions like "at least one of A, B, and C" are used, they should generally be interpreted in accordance with the meaning commonly understood by those skilled in the art (e.g., "a system having at least one of A, B, and C" shall include, but not be limited to, systems having A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together).
Risk assessment of users plays an important role in credit services. Constructing scoring or classification models through technologies such as statistical analysis or machine learning to evaluate risk is a hotspot of current research, but traditional machine learning methods often ignore the heterogeneity and complexity of risk items. In actual situations, different risk feature items usually have their own characteristics; in some situations, user relationships also affect user risk, and both risk feature items and risk relationships change over time, yet traditional machine learning methods ignore the time dimension and the dynamic change of risk items. On the other hand, knowledge graph technology is widely applied to representing and mining association relationships, but massive data storage brings huge computational cost. In addition, if the knowledge graph is used only as a data structure for storing and displaying data, the operation cost greatly increases in the subsequent model processing process, which is also not conducive to improving model precision.
Therefore, to identify user risk more accurately, the above factors need to be considered so as to reduce the computational cost of the model as much as possible while improving its prediction precision, and more advanced techniques are needed to process complex and dynamic data structures.
In view of the foregoing problems in the prior art, an embodiment of the present disclosure provides a risk prediction method, including: acquiring data to be analyzed, and updating a heterogeneous graph based on the data to be analyzed, wherein the heterogeneous graph comprises nodes and edges, the nodes are used for storing user risk feature information, and the edges are used for storing user relationships; and processing the heterogeneous graph based on a risk prediction model to complete risk prediction, wherein updating the heterogeneous graph comprises: performing self-supervised learning on the data stored in the heterogeneous graph to obtain a heterogeneous graph containing weight features; the risk prediction model is trained by combining an attention network model with a classifier, wherein the attention network model comprises a feature attention network and an edge attention network; the feature attention network is used for predicting user risk based on the user risk feature data, and the edge attention network is used for predicting a user risk propagation path based on a user relationship.
The risk prediction method provided by the embodiments of the present disclosure can effectively and comprehensively consider user risk feature items, the multiple relationships among users, and multi-dimensional user feature information, thereby improving the accuracy and robustness of risk model prediction while reducing the cost of model operation.
Fig. 1 schematically illustrates an application scenario diagram of a risk prediction method, apparatus, device, medium and program product according to an embodiment of the present disclosure.
As shown in fig. 1, an application scenario 100 according to this embodiment may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 is used as a medium to provide communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The user may interact with the server 105 via the network 104 using the terminal devices 101, 102, 103 to receive or send messages or the like. Various communication client applications, such as shopping class applications, web browser applications, search class applications, instant messaging tools, mailbox clients, social platform software, etc. (by way of example only) may be installed on the terminal devices 101, 102, 103.
The terminal devices 101, 102, 103 may be a variety of electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablets, laptop and desktop computers, and the like.
The server 105 may be a server providing various services, such as a background management server (by way of example only) providing support for websites browsed by users using the terminal devices 101, 102, 103. The background management server may analyze and process the received data such as the user request, and feed back the processing result (e.g., the web page, information, or data obtained or generated according to the user request) to the terminal device.
It should be noted that the risk prediction method provided by the embodiments of the present disclosure may be generally performed by the server 105. Accordingly, the risk prediction apparatus provided by the embodiments of the present disclosure may be generally provided in the server 105. The risk prediction method provided by the embodiments of the present disclosure may also be performed by a server or a server cluster that is different from the server 105 and is capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Accordingly, the risk prediction apparatus provided by the embodiments of the present disclosure may also be provided in a server or a server cluster that is different from the server 105 and is capable of communicating with the terminal devices 101, 102, 103 and/or the server 105.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
The risk prediction method of the disclosed embodiment will be described in detail below with reference to fig. 2 to 6 based on the scenario described in fig. 1.
Fig. 2 schematically illustrates a flow chart of a risk prediction method according to an embodiment of the present disclosure.
As shown in fig. 2, the risk prediction method of this embodiment includes operations S210 to S220, and the risk prediction method may be performed by the server 105.
In operation S210, data to be analyzed is obtained, and a heterogeneous graph is updated based on the data to be analyzed, where the heterogeneous graph includes nodes and edges, the nodes are used for storing user risk feature information, and the edges are used for storing user relationships.
In operation S220, the heterogeneous graph is processed based on the risk prediction model, and risk prediction is completed.
In an embodiment of the present disclosure, a heterogeneous graph (Heterogeneous Graph) is graph-structured data capable of representing nodes and edges of multiple types and roles, which can effectively capture heterogeneity and complexity in the data. The nodes are used for storing user risk feature information, and the edges are used for storing user relationships. Thus, the association relationships between user risk feature items and user relationships can be mined based on analysis of the heterogeneous graph data. The risk prediction model of embodiments of the present disclosure is trained from an attention network model in combination with a classifier, the attention network model comprising a feature attention network and/or an edge attention network. The feature attention network (Feature Attention Network) is a deep learning model capable of processing node features, and can effectively learn the importance and relevance of node features and the overall feature representation of the nodes. The edge attention network (Edge Attention Network) is a deep learning model capable of processing edge features, and can effectively learn the importance and relevance of edge features, as well as the overall feature representation of the edges. Thus, the feature attention network may mine association relationships between nodes and be used to predict user risk based on the user risk feature information, and the edge attention network may mine user association relationship information for predicting a user risk propagation path based on the user relationships. In one example, the risk prediction method described above may be used to predict a user's credit risk. In embodiments of the present disclosure, the heterogeneous graph may be pre-built and stored. By updating the data of the heterogeneous graph, timely and accurate user risk features and user relationship features can be obtained in real time, so that user risks are captured promptly. Thus, the heterogeneous graph of the present disclosure may be updated in real time based on user information to achieve real-time management of risk.
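As a concrete illustration of this storage scheme, the sketch below builds a minimal heterogeneous graph in which nodes hold user risk feature vectors and edges hold typed user relationships. The class name, node types, and edge types are illustrative assumptions, not taken from this disclosure.

```python
# A minimal sketch of the storage scheme described above: nodes hold per-user
# risk feature vectors and edges hold typed user relationships. The class,
# node-type, and edge-type names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class HeteroGraph:
    nodes: dict = field(default_factory=dict)  # node_id -> (node_type, features)
    edges: dict = field(default_factory=dict)  # (src, dst) -> (edge_type, attrs)

    def upsert_node(self, node_id, node_type, features):
        """Store or refresh a user's risk feature vector on its node."""
        self.nodes[node_id] = (node_type, features)

    def upsert_edge(self, src, dst, edge_type, attrs=None):
        """Store or refresh a relationship between two users on an edge."""
        self.edges[(src, dst)] = (edge_type, attrs or {})

g = HeteroGraph()
g.upsert_node("u1", "personal", [0.3, 0.7, 0.1])  # user risk feature information
g.upsert_node("u2", "corporate", [0.9, 0.2, 0.4])
g.upsert_edge("u1", "u2", "guarantees")           # a user relationship
```

Because nodes and edges are upserted independently, such a store can be refreshed as new analysis data arrives, matching the real-time update described above.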
Updating the heterogeneous graph comprises: performing self-supervised learning on the data stored in the heterogeneous graph to obtain the heterogeneous graph containing weight features. It should be appreciated that each time the node data and edge data stored in the heterogeneous graph are updated, the stored data may be trained by self-supervised learning to obtain the heterogeneous graph containing the weight features. Self-supervised learning is a learning method between supervised learning and unsupervised learning; it utilizes auxiliary tasks to mine supervision signals from large-scale unlabeled data, and the network is trained with this constructed supervision information so that representations valuable for downstream tasks can be learned. Typical categories of self-supervised learning include:
1. Prediction tasks: the model is trained with the front portion of the input data as input and the back portion as the label, so that it predicts the back portion. For example, in a language model, given the front part of a sentence, the model predicts the next word.
2. Context encoding: the structure of the data itself is used to learn a representation. For example, in natural language processing, each word in a piece of text may be encoded as a vector, and the vectors of all words in the text may be combined into a matrix used for subsequent classification or regression tasks.
3. Autoencoders: an encoder is trained to compress the input data into a low-dimensional representation, and a decoder is trained to recover the original data from that representation. The goal of the autoencoder is to minimize the difference between the input data and the reconstructed data.
4. Contrastive learning: a representation is learned by comparing positive and negative samples. For example, in image recognition, a picture and its flipped version can be treated as a positive and a negative sample, and the model is trained to distinguish between the two.
Self-supervised learning does not need large amounts of labeled data and can train on unlabeled data, which alleviates the high labeling cost of supervised learning. Meanwhile, it can also utilize large-scale unlabeled data to improve model performance, alleviating the poor generalization of unsupervised learning.
In one embodiment of the present disclosure, self-supervised learning is performed using a self-learning masker. A self-learning masker is a special encoder that can mine its own supervision signal from large-scale unlabeled data through self-supervised learning. Specifically, the self-learning masker masks a portion of the input data, then trains an encoder to encode the masked data into a low-dimensional representation, and trains a decoder to recover the original data from that representation. During training, the self-learning masker optimizes the parameters of the encoder based on the differences in the reconstructed data to minimize the reconstruction error. By masking the input data, the self-learning masker can simulate a partially supervised scenario, thereby mining useful supervision information from unlabeled data. Meanwhile, the self-learning masker can also utilize large-scale unlabeled data to improve model performance, alleviating the high cost of labeling data for supervised learning.
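A minimal sketch of this masked self-supervision loop follows: part of the input is masked, an encoder maps the masked data to a low-dimensional representation, a decoder reconstructs the original, and the reconstruction error is minimized. The layer sizes, masking ratio, and optimizer settings are illustrative assumptions.

```python
# A minimal sketch of the self-learning masker described above: mask part of the
# input, encode the masked data to a low-dimensional representation, decode it
# back, and minimize the reconstruction error. Layer sizes, the masking ratio,
# and the optimizer settings are illustrative assumptions.
import torch
import torch.nn as nn

class MaskedAutoencoder(nn.Module):
    def __init__(self, dim=16, hidden=4, mask_ratio=0.3):
        super().__init__()
        self.encoder = nn.Linear(dim, hidden)
        self.decoder = nn.Linear(hidden, dim)
        self.mask_ratio = mask_ratio

    def forward(self, x):
        # Mask ~30% of the input features to create the self-supervision signal.
        mask = (torch.rand_like(x) > self.mask_ratio).float()
        z = torch.relu(self.encoder(x * mask))  # low-dimensional representation
        return self.decoder(z)                  # reconstruction of the input

model = MaskedAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 16)                         # a batch of unlabeled features
opt.zero_grad()
loss = nn.functional.mse_loss(model(x), x)      # reconstruction error
loss.backward()
opt.step()
```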
After the weight features are given, node feature items with different weights can receive different degrees of attention when the risk model processes them, so that, while keeping all feature information, higher-precision learning and prediction can be achieved at lower computational cost, improving the processing capability of the model. Testing shows that, after self-supervised learning with the self-learning masker assigns weights to the node feature items, the amount of data that can be processed in the same time increases compared with an unweighted heterogeneous graph, and the prediction accuracy of the model improves, thereby improving the processing capability of the model. In one example, the model of the embodiments of the present disclosure can infer 2000 pieces of data at a time, with a single-container, multi-concurrency stress-test TPS of 5.8.
Fig. 3 schematically illustrates a flow chart of a method of updating a heterogeneous graph according to an embodiment of the disclosure.
As shown in fig. 3, the method for updating the heterogeneous graph of this embodiment includes operations S310 to S350.
In operation S310, a real-time user feature data set is acquired, and feature data of each user is stored in one node.
In operation S320, a user relationship between the user and other users is acquired, and the user relationship is stored in the side.
In operation S330, the user risk feature data and the user relationship are processed using a heterogeneous graph transformation network, and a node representation vector and an edge representation vector are generated.
In operation S340, the node representation vector and the edge representation vector are learned based on the self-learning masker to obtain the node representation update vector and the edge representation update vector.
In operation S350, the heterogeneous graph is acquired based on the node representation update vector and the edge representation update vector. Wherein the node representation update vector and the edge representation update vector comprise a node representation vector and an edge representation vector based on weight updating.
According to embodiments of the present disclosure, the heterogeneous graph may be constructed based on multi-source heterogeneous data. The nodes are used for representing user risk features, and the edges are used for representing user relationships, so that the information in the data can be fully utilized while the heterogeneity and complexity of risk items and relationships are taken into account.
According to an embodiment of the present disclosure, the heterogeneous graph conversion network includes heterogeneous convolution layers. The heterogeneous graph conversion network (Heterogeneous Graph Conversion Network, HGCN) is a deep learning model for processing heterogeneous graph data. It converts the heterogeneous graph data into tensors and inputs them into convolution layers for processing. The core of the heterogeneous graph conversion network is the heterogeneous convolution layer (Heterogeneous Convolutional Layer); each heterogeneous convolution layer consists of multiple convolution kernels, each specialized to handle one type of node or edge. In this way, the heterogeneous graph conversion network can simultaneously process multiple types of nodes and edges and consider the relationships between them. In a heterogeneous graph conversion network, each node and edge establishes links with other nodes and edges to form a connected network structure. This network structure can capture the interaction information between different types of nodes and edges, enabling deep analysis of heterogeneous graph data.
The heterogeneous graph conversion network is utilized to process the user risk characteristic data and the user relationship, and the generation of the node representation vector and the edge representation vector comprises the following steps: processing the user risk characteristic data by utilizing the heterogeneous convolution layer, normalizing a processing result, and generating the node representation vector; and processing the user relationship by utilizing the heterogeneous convolution layer, normalizing the processing result, and generating the edge representation vector.
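As a sketch of this step, under assumed dimensions and type names, a per-type projection followed by normalization could produce the node representation vectors; edge representation vectors would be produced analogously from the relationship data.

```python
# A minimal sketch, under assumed shapes and type names, of how a heterogeneous
# convolution layer could project node features per node type and normalize the
# result into node representation vectors; edge representation vectors would be
# produced the same way from relationship features.
import torch
import torch.nn as nn

class HeteroConvLayer(nn.Module):
    def __init__(self, node_types, in_dim=16, out_dim=8):
        super().__init__()
        # One convolution kernel (here a linear projection) per node type.
        self.proj = nn.ModuleDict({t: nn.Linear(in_dim, out_dim) for t in node_types})
        self.norm = nn.LayerNorm(out_dim)  # normalization of the processing result

    def forward(self, x, node_type):
        return self.norm(self.proj[node_type](x))  # node representation vectors

layer = HeteroConvLayer(["personal", "corporate"])
node_vec = layer(torch.randn(4, 16), "personal")
```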
Specifically, the heterogeneous graph of the embodiments of the present disclosure may be constructed based on the following method:
Let $G=(V,E,X,Y)$ be a heterogeneous graph, where $V$ is the set of nodes and $E$ is the set of edges; $x_i \in X$ represents the $d$-dimensional feature vector of node $v_i$, with $x_i \in \mathbb{R}^d$; $Y$ is the set of labels of the nodes in $V$. The heterogeneous graph has a node type mapping function $\tau: V \to A$ and an edge type mapping function $\phi: E \to R$, where $A$ is the set of node types and $R$ is the set of relationship types.
The heterogeneous graph converter is used to build a convolution layer in order to aggregate heterogeneous neighborhood information from the source nodes and obtain a context representation of the target node. The process can be broken down into two parts: heterogeneous attention and target aggregation.
Heterogeneous attention construction:
The target node t is mapped to a Q vector, the source node is mapped to a K vector, and their dot product is computed as the attention. The calculation formulas are as follows:

$$K_i(s) = \text{K-Linear}^i_{\tau(s)}\big(H^{(l-1)}[s]\big)$$

$$Q_i(t) = \text{Q-Linear}^i_{\tau(t)}\big(H^{(l-1)}[t]\big)$$

$$ATT^{head}_i(p,e,t) = \phi(e)\,K_i(p)\,Q_i(t)$$

The heterogeneous attention formula is as follows:

$$Att(p,e,t) = \operatorname{Softmax}_{p\in V(t)}\Big(\big\Vert_{i\in[1,h]} ATT^{head}_i(p,e,t)\Big),$$

so the output of the l-th layer is $G^l$. Here $ATT^{head}_i$ is the i-th attention head, there are h attention heads in total, $\Vert$ denotes concatenation, aggr is an aggregation operation, $V(t)$ is the neighborhood of the target node t, and $E(p,t)$ is the set of edges connecting the source node p and the target node t.

Target aggregation: in this step, the attention vectors of the source nodes are combined to obtain the neighborhood information. The updated vector is

$$H^{(l)}[t] = \sigma\Big(\operatorname{aggr}_{p\in V(t),\,e\in E(p,t)}\big(Att(p,e,t)\cdot H^{(l-1)}[p]\big)\Big),$$

where σ is the activation function.
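A minimal single-head sketch of this heterogeneous attention follows: type-specific K/Q projections, an edge-type weight matrix standing in for φ(e), a softmax over the target node's neighborhood V(t), and aggregation followed by an activation. The type names, dimensions, and the choice of a plain weighted sum as aggr are all illustrative assumptions.

```python
# A minimal single-head sketch of the heterogeneous attention above. Type-specific
# K/Q linear projections, an edge-type weight matrix standing in for phi(e), a
# softmax over the target node's neighborhood V(t), and sum aggregation followed
# by an activation. Type names, dimensions, and the plain-sum aggr are assumptions.
import torch
import torch.nn as nn

d = 8
k_lin = nn.ModuleDict({"personal": nn.Linear(d, d), "corporate": nn.Linear(d, d)})
q_lin = nn.ModuleDict({"personal": nn.Linear(d, d), "corporate": nn.Linear(d, d)})
w_edge = nn.ParameterDict({"guarantees": nn.Parameter(torch.eye(d))})

def attend(h_src, src_types, edge_types, h_tgt, tgt_type):
    q = q_lin[tgt_type](h_tgt)                             # Q vector of target t
    scores = torch.stack([k_lin[st](hs) @ w_edge[et] @ q   # phi(e) * K(s) * Q(t)
                          for hs, st, et in zip(h_src, src_types, edge_types)])
    att = torch.softmax(scores, dim=0)                     # attention over V(t)
    agg = (att.unsqueeze(-1) * torch.stack(h_src)).sum(0)  # aggregate sources
    return torch.relu(agg)                                 # sigma: activation

h_t = attend([torch.randn(d), torch.randn(d)],             # two source neighbors
             ["personal", "corporate"], ["guarantees", "guarantees"],
             torch.randn(d), "personal")
```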
according to an embodiment of the present disclosure, the self-learning Xi Yanma machine includes a feature mask self-learning network and an edge mask self-learning network. The self-learning Xi Yanma-based method for learning the node representation vector and the edge representation vector to obtain the node representation update vector and the edge representation update vector comprises the following steps: processing the node representation vector based on a feature mask self-learning network, learning a node mask, and obtaining the node representation update vector, wherein the node representation update vector comprises a node representation vector updated based on weights. And processing the edge representation vector from the learning network based on an edge mask, learning the edge mask, and obtaining the edge representation update vector, wherein the edge representation update vector comprises an edge representation vector updated based on weights. Specifically, a feature mask self-learning network (Fn) and an edge mask self-learning network (En) may be added behind the heterogeneous translation layer.
The feature mask is obtained as follows: the initial feature $G_0$, the aggregated feature $G_L$, and the node type code of each node are concatenated, and the node mask is learned with Fn.
The edge mask is obtained as follows: the features of the source node and the target node and their edge type code are concatenated, and the edge mask is learned with En. The processing is based on the following formulas:

$$Mask_n = F_n\big(G_0 \,\Vert\, G_L \,\Vert\, \tau(v)\big), \qquad Mask_e = E_n\big(x_s \,\Vert\, x_t \,\Vert\, \phi(e)\big),$$

where Fn and En are multi-layer perceptrons, τ and φ are the node type and edge type mapping functions, and $\Vert$ is the concatenation operation.
The new feature of each node is obtained as the dot product of the corresponding feature mask and the original feature. The edge mask and the new features form a weighted heterogeneous graph used as the input to the risk prediction model, i.e., $G' = G_0 \cdot Mask_n + Mask_e$. After feature mask learning and edge mask learning, the initial node features and edge features are weighted so that these features can be utilized more effectively in the subsequent model training process.
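The sketch below illustrates this masking step under assumed dimensions: Fn and En are small MLPs over the concatenations just described, and the learned node mask re-weights the original features as in G' = G0 · Mask_n. The scalar type codes and layer sizes are illustrative assumptions.

```python
# A minimal sketch of the feature-mask network Fn and edge-mask network En above:
# each is a small MLP over the concatenations just described, and the learned
# node mask re-weights the original features as in G' = G0 * Mask_n. The scalar
# type codes and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

d = 8
Fn = nn.Sequential(nn.Linear(2 * d + 1, d), nn.ReLU(), nn.Linear(d, d), nn.Sigmoid())
En = nn.Sequential(nn.Linear(2 * d + 1, 1), nn.Sigmoid())

g0, gL = torch.randn(5, d), torch.randn(5, d)   # initial / aggregated node features
node_type = torch.zeros(5, 1)                    # tau(v) as a scalar type code
mask_n = Fn(torch.cat([g0, gL, node_type], dim=-1))
weighted_nodes = g0 * mask_n                     # node part of the weighted graph G'

src, dst = g0[0], g0[1]                          # one edge's endpoint features
edge_type = torch.ones(1)                        # phi(e) as a scalar type code
mask_e = En(torch.cat([src, dst, edge_type]))    # learned weight for this edge
```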
According to embodiments of the present disclosure, the risk prediction model may be pre-trained.
Fig. 4 schematically illustrates a flowchart of a method of training a risk prediction model according to an embodiment of the present disclosure.
As shown in fig. 4, the training method of the risk prediction model of this embodiment includes operations S410 to S460.
In operation S410, sample user data is obtained, where the sample user data includes sample user feature data and a sample user association relationship, the sample user association relationship includes an association relationship between a sample user and other users, and the other users include users having an association relationship with the sample user.
In operation S420, the sample user data is stored in the heterogeneous graph, and a sample node representation vector and a sample edge representation vector are acquired.
In operation S430, the sample node representation vector is processed based on the feature attention network to obtain a sample node feature vector.
In operation S440, the sample edge representation vector is processed based on the edge attention network to obtain a sample node-edge feature vector.
In operation S450, the sample node feature vector and the sample node-edge feature vector are processed based on the classifier, and a sample node risk probability and a sample edge risk probability are obtained.
In operation S460, a loss function value is calculated based on the sample node risk probability, the sample edge risk probability, and the sample user risk, and the risk prediction model is iteratively trained based on the loss function value until the loss function value is smaller than a preset threshold.
The sample user association relationships comprise association relationships between the sample user and other users, where the other users include users having an association relationship with the sample user. In one example, in the context of credit risk assessment, typical association relationships may include, but are not limited to, user social relationships, user property relationships, and the like. It will be appreciated that different relationships may affect the credit risk of a sample user. In one example of the present disclosure, the sample user credit risk serves as the user label. The sample node representation vector and the sample edge representation vector may be obtained based on the method in the example above. After the sample node representation vector is processed by the feature attention network, the sample node feature vector can represent the association relationship between the sample node and its neighbor nodes; after the sample edge representation vector is processed by the edge attention network, the resulting node-edge feature vector can represent the association relationship between the sample node and its neighbor edges. Therefore, after the node features and edge features are learned with the feature attention network and the edge attention network, the association relationships among nodes, edges, and user credit risk can be mined in combination with the user labels.
Fig. 5 schematically illustrates a flowchart of a method of obtaining a sample node feature vector according to an embodiment of the present disclosure.
As shown in fig. 5, the method for acquiring the sample node feature vector of this embodiment includes operations S510 to S530.
In operation S510, an attention weight of the sample node on the sample neighbor node is acquired based on the sample node representation vector and the representation vector of the sample neighbor node.
In the feature attention network, let $N(v)$ be the set of neighbor nodes of node $v \in V_i$ in the heterogeneous graph, and let $N_j(v)$ be the subset of neighbor nodes of node $v \in V_i$ of type $j \in l$. In embodiments of the present disclosure, when computing the attention weight of node $v \in V_i$ on neighbor nodes of type $j \in l$, the attention weight of the sample node $x_v$ on a sample neighbor node $x_u$ may be obtained based on the representation vector of the sample node and the representation vector of the sample neighbor node. In one example, the representation vectors of the sample node $x_v$ and the sample neighbor node $x_u$ may be processed by a learnable parameter matrix and a rectified linear unit function with a leakage parameter to calculate the attention weight of the sample node $x_v$ on the neighbor node $x_u$, thereby determining the attention distribution of the sample node $x_v$ over its neighbor nodes.
In operation S520, a sample node feature attention vector is acquired based on the attention weights of the sample nodes on the neighbor nodes and the representative vectors of the sample neighbor nodes.
After the attention weight of the sample node on the neighbor node is obtained, a sample node feature attention vector can be obtained based on the attention weight of the sample node on the neighbor node and the representative vector of the sample neighbor node.
In one example, when computing the feature attention vector of node v, all neighbor nodes u of node v need to be considered, and the representation vector $x_u$ of each neighbor node u is weighted and concatenated. The weight is determined by the types of node v and node u and the connection relationship between them, and reflects the degree of attention node v pays to the representation vector of node u. In this way, the feature attention network can learn the degree of attention node v pays to the representation vectors of different neighbor nodes and integrate it into the feature attention vector $h_v$.
In operation S530, the sample node feature vector is acquired based on the sample node representation vector and the sample node feature attention vector.
In one example, the feature attention vector $h_v$ is obtained by weighted concatenation of the representation vectors $x_u$ of the neighbor nodes u and the representation vector $x_v$ of node v itself, using the weights $a_{ij}(v,u)$. The final feature vector $z_v$ is then obtained by applying an activation function, i.e., a nonlinear transformation, to the feature attention vector $h_v$.
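A minimal sketch of this computation follows: a learnable parameter matrix and attention vector with a leaky ReLU score each neighbor, a softmax yields the weights $a_{ij}(v,u)$, and an activation over the weighted combination gives $z_v$. The shapes, the ELU activation, and combining with the node's own projected vector are illustrative assumptions.

```python
# A minimal sketch of the node feature attention above: a learnable parameter
# matrix W and attention vector a score each neighbor through a LeakyReLU, a
# softmax yields the weights a_ij(v, u), and an activation over the weighted
# combination gives the final feature vector z_v. Shapes, the ELU activation,
# and combining with the node's own projected vector are assumptions.
import torch
import torch.nn as nn

d = 8
W = nn.Linear(d, d, bias=False)          # learnable parameter matrix
a = nn.Parameter(torch.randn(2 * d))     # learnable attention parameter vector
leaky = nn.LeakyReLU(0.2)                # rectified linear unit with leakage

def feature_attention(x_v, neighbors):
    wv = W(x_v)
    scores = torch.stack([leaky(a @ torch.cat([wv, W(x_u)])) for x_u in neighbors])
    att = torch.softmax(scores, dim=0)                        # weights a_ij(v, u)
    h_v = (att.unsqueeze(-1) * torch.stack([W(x_u) for x_u in neighbors])).sum(0)
    return torch.elu(h_v + wv)                                # feature vector z_v

z_v = feature_attention(torch.randn(d), [torch.randn(d), torch.randn(d)])
```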
Fig. 6 schematically illustrates a flow chart of a method of processing the sample edge representation vector based on an edge attention network to obtain a sample node-edge feature vector, in accordance with an embodiment of the present disclosure.
As shown in fig. 6, the method of acquiring a sample node-edge feature vector of this embodiment includes operations S610 to S630.
In operation S610, attention weights of the sample nodes on the sample sides are acquired based on the sample node representation vectors and the representation vectors of the sample sides.
In the embodiments of the present disclosure, the sample edges are the neighbor edges connected to the sample node. Let $\varepsilon(v)$ be the set of neighbor edges of node $v \in V_i$ in the heterogeneous graph, and let $\varepsilon_j(v)$ be the subset of neighbor edges of node $v \in V_i$ of type $j \in l$.
The neighbor edge attention weight is the attention weight of node v on its neighbor node u and the edge between them, reflecting the degree of attention node v pays to node u and that edge.
In operation S620, a sample edge feature attention vector is acquired based on the attention weights of the sample nodes on the sample edges and the representative vector of the sample edges.
The edge attention vector $g_v$ is obtained, on the basis of the computed neighbor edge attention weights, by weighted concatenation of the representation vectors of the neighbor nodes u of node v and of the edges between them, and can reflect the overall characteristics and attributes of the neighbor nodes of node v and the edges connecting them.
In operation S630, a node-edge feature vector of the sample is acquired based on the sample node representation vector and the sample-edge feature attention vector.
In one example, the final node-edge feature vector is obtained by applying an activation function, i.e., a nonlinear transformation, to the edge attention vector $g_v$ and the node representation vector $x_v$. In this way, complex interaction information among users, user risk features, and user relationships can be captured.
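A minimal sketch of this edge attention follows, mirroring the node case: each incident edge's representation vector is scored against the node, a softmax gives the neighbor edge attention weights, and an activation over the weighted result yields the node-edge feature vector. All names and shapes are illustrative assumptions.

```python
# A minimal sketch of the edge attention above, mirroring the node case: each
# incident edge's representation vector is scored against the node, softmax
# gives the neighbor edge attention weights, and an activation over the weighted
# result yields the node-edge feature vector. Names and shapes are assumptions.
import torch
import torch.nn as nn

d = 8
a_e = nn.Parameter(torch.randn(2 * d))   # learnable edge attention vector

def edge_attention(x_v, edge_vecs):
    scores = torch.stack([nn.functional.leaky_relu(a_e @ torch.cat([x_v, e]))
                          for e in edge_vecs])
    att = torch.softmax(scores, dim=0)                         # neighbor edge weights
    g_v = (att.unsqueeze(-1) * torch.stack(edge_vecs)).sum(0)  # edge attention vector
    return torch.elu(g_v + x_v)                                # node-edge feature vector

ne_v = edge_attention(torch.randn(d), [torch.randn(d), torch.randn(d)])
```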
In operation S450, the probability value of whether a node $v \in V_i$ or an edge $e = (u,v) \in E$ is at risk may be calculated by transforming the corresponding feature vector with a learnable parameter vector and an activation function.
in operation S460, model training and testing may be performed using an optimizer using the loss function as an optimization target. The optimization objective is to minimize the loss function between the predicted value and the actual value,
When the loss function value reaches a preset threshold, the model is described as reaching the required accuracy, and at this time, the model training can be stopped.
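A minimal sketch of operations S450 to S460 follows: a linear classifier with a sigmoid maps feature vectors to risk probabilities, a binary cross-entropy loss is computed against sample user risk labels, and training stops once the loss falls below the preset threshold. The classifier form, loss choice, and all hyperparameters are illustrative assumptions.

```python
# A minimal sketch of operations S450-S460: a linear classifier with a sigmoid
# maps node / node-edge feature vectors to risk probabilities, a binary
# cross-entropy loss is computed against sample user risk labels, and training
# stops once the loss falls below the preset threshold. The classifier form,
# loss choice, and all hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

d, threshold = 8, 0.05
classifier = nn.Linear(d, 1)                 # learnable parameter vector
opt = torch.optim.Adam(classifier.parameters(), lr=1e-2)

z = torch.randn(64, d)                       # sample node / node-edge feature vectors
y = torch.randint(0, 2, (64, 1)).float()     # sample user risk labels

for _ in range(1000):
    p = torch.sigmoid(classifier(z))         # sample risk probability
    loss = nn.functional.binary_cross_entropy(p, y)
    if loss.item() < threshold:              # preset threshold stopping criterion
        break
    opt.zero_grad()
    loss.backward()
    opt.step()
```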
According to an embodiment of the present disclosure, the data to be analyzed is updated periodically based on a preset frequency. For example, user information and its transaction data, as well as user associations, may be updated monthly to enable dynamic maintenance of user risk.
According to an embodiment of the present disclosure, after completing the risk prediction, the method further comprises: and carrying out risk display based on the risk prediction result.
According to embodiments of the present disclosure, risk prediction results may be presented in a visual view. Specifically, a node user feature risk source and/or the node user risk propagation path may be displayed, where the node user feature risk source is associated with user feature data and the node user risk propagation path is associated with a user relationship. For example, user feature data such as a user's fund configuration status and historical behavior may be displayed to represent a node user feature risk source, and a user relationship with another high-risk user may be displayed to represent a node user risk propagation path. In the embodiments of the present disclosure, visually displaying the output of the risk prediction model increases the interpretability of the model. Through the selection of nodes or edges, the association between node risk probabilities and risk items, and the influence of relationships between node users on user risk, can be intuitively captured.
Based on the risk prediction method, the disclosure further provides a risk prediction device. The device will be described in detail below in connection with fig. 7.
Fig. 7 schematically shows a block diagram of a risk prediction apparatus according to an embodiment of the present disclosure.
As shown in fig. 7, the risk prediction apparatus 700 of this embodiment includes a first acquisition module 710 and a prediction module 720.
The first obtaining module 710 is configured to obtain data to be analyzed, and update a heterogeneous graph based on the data to be analyzed, wherein the heterogeneous graph includes nodes and edges, the nodes are used for storing user risk characteristic information, and the edges are used for storing user relationships.
The prediction module 720 is configured to process the heterograms based on a risk prediction model to complete risk prediction. Wherein updating the heterogeneous graph comprises: self-supervised learning is carried out on the heterogeneous graph storage data, and a heterogeneous graph containing weight characteristics is obtained; the risk prediction model is trained by combining an attention network model with a classifier, and the attention network model comprises a characteristic attention network and a side attention network; the feature attention network is used for predicting user risk based on the user risk feature data, and the side attention network is used for predicting a user risk propagation path based on a user relationship.
Fig. 8 schematically illustrates a block diagram of a risk prediction apparatus according to further embodiments of the present disclosure.
As shown in fig. 8, the risk prediction apparatus 700 of this embodiment includes a first acquisition module 710, a prediction module 720, and a presentation module 730.
The functions of the first obtaining module 710 and the predicting module 720 are as described in fig. 7, and are not described again.
Presentation module 730 is configured to present a risk based on the risk prediction results.
Fig. 9 schematically shows a block diagram of an apparatus for updating a heterogeneous graph according to an embodiment of the present disclosure.
As shown in fig. 9, the apparatus 800 for updating a heterogeneous graph in this embodiment includes a first storage module 810, a second storage module 820, a conversion module 830, a self-learning module 840, and an updating module 850.
Wherein the first storage module 810 is configured to obtain a real-time user feature data set, and store the feature data of each user in one node.
The second storage module 820 is configured to obtain a user relationship of a user with other users, and store the user relationship on an edge.
The conversion module 830 is configured to process the user risk feature data and the user relationship using a heterogeneous graph conversion network to generate a node representation vector and an edge representation vector.
The self-learning module 840 is configured to learn the node representation vector and the edge representation vector based on a self-learning masker to obtain a node representation update vector and an edge representation update vector.
The update module 850 is configured to obtain the heterogeneous graph based on the node representation update vector and the edge representation update vector, wherein the node representation update vector and the edge representation update vector comprise node representation vectors and edge representation vectors updated based on weights.
According to an embodiment of the present disclosure, the heterogeneous graph conversion network includes a heterogeneous convolution layer, and the processing performed by the conversion module to generate the node representation vector and the edge representation vector further includes: processing the user risk feature data using the heterogeneous convolution layer and normalizing the processing result to generate the node representation vector; and processing the user relationship using the heterogeneous convolution layer and normalizing the processing result to generate the edge representation vector.
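A minimal PyTorch sketch of this conversion step follows, assuming one linear projection per node/edge type as a stand-in for the heterogeneous convolution layer, with L2 normalization of its output; the dimensions and type names are assumptions for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HeteroConvLayer(nn.Module):
    """One projection per node/edge type, followed by normalization."""
    def __init__(self, in_dims, out_dim):
        super().__init__()
        self.proj = nn.ModuleDict(
            {t: nn.Linear(d, out_dim) for t, d in in_dims.items()})

    def forward(self, feats_by_type):
        out = {}
        for t, x in feats_by_type.items():
            h = torch.relu(self.proj[t](x))          # heterogeneous convolution
            out[t] = F.normalize(h, p=2, dim=-1)     # normalize the result
        return out

layer = HeteroConvLayer({"user": 16, "transfer": 8}, out_dim=32)
node_vec = layer({"user": torch.randn(5, 16)})["user"]          # node representation
edge_vec = layer({"transfer": torch.randn(7, 8)})["transfer"]   # edge representation
```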
According to an embodiment of the present disclosure, the self-learning mask machine includes a feature mask self-learning network and an edge mask self-learning network, and the learning performed by the self-learning module on the node representation vector and the edge representation vector further includes: processing the node representation vector based on the feature mask self-learning network and learning a node mask to obtain the node representation update vector, where the node representation update vector includes a node representation vector updated based on weights; and processing the edge representation vector based on the edge mask self-learning network and learning an edge mask to obtain the edge representation update vector, where the edge representation update vector includes an edge representation vector updated based on weights.
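The sketch below illustrates one plausible reading of the feature-mask branch: randomly mask node features, train a small decoder to reconstruct the masked entries, and keep a learned per-dimension weight that reweights the representation vectors; the edge branch would be symmetric. The mask ratio, decoder, and weighting scheme are all assumptions.

```python
import torch
import torch.nn as nn

class FeatureMaskSelfLearner(nn.Module):
    def __init__(self, dim, mask_ratio=0.3):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.decoder = nn.Linear(dim, dim)            # reconstructs masked features
        self.weight = nn.Parameter(torch.ones(dim))   # learned feature weights

    def forward(self, x):
        mask = (torch.rand_like(x) < self.mask_ratio).float()
        recon = self.decoder(x * (1 - mask))          # predict from the unmasked part
        loss = ((recon - x) * mask).pow(2).mean()     # loss only on masked entries
        return x * self.weight, loss                  # weight-updated representation

learner = FeatureMaskSelfLearner(dim=32)
node_update_vec, recon_loss = learner(torch.randn(5, 32))
```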
Corresponding to the training method of the risk prediction model of the embodiments of the present disclosure, an embodiment of the present disclosure further provides a training apparatus for the risk prediction model.
Fig. 10 schematically illustrates a block diagram of a training apparatus of a risk prediction model according to an embodiment of the present disclosure.
As shown in fig. 10, the training apparatus 1000 of the risk prediction model of this embodiment includes a second acquisition module 1001, a third storage module 1002, a first calculation module 1003, a second calculation module 1004, a classification module 1005, and a third calculation module 1006.
The second obtaining module 1001 is configured to obtain sample user data, where the sample user data includes sample user feature data and a sample user association relationship, and the sample user association relationship includes an association relationship between a sample user and other users, and the other users include users having an association relationship with the sample user.
The third storage module 1002 is configured to store the sample user data in the heterogeneous graph and obtain a sample node representation vector and a sample edge representation vector.
The first calculation module 1003 is configured to process the sample node representation vector based on a feature attention network to obtain a sample node feature vector.
The second calculation module 1004 is configured to process the sample edge representation vector based on an edge attention network to obtain a sample node-edge feature vector.
The classification module 1005 is configured to process the sample node feature vector and the sample node-edge feature vector based on a classifier to obtain a sample node risk probability and a sample edge risk probability.
The third calculation module 1006 is configured to calculate a loss function value based on the sample node risk probability, the sample edge risk probability, and the sample user risk, and to iteratively train the risk prediction model based on the loss function value until the loss function value is less than a preset threshold.
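By way of a hedged illustration of this training loop, assuming `model` is an `nn.Module` whose forward pass returns the two probability tensors; the optimizer, loss weighting, and threshold value are assumptions:

```python
import torch
import torch.nn.functional as F

def train(model, node_vec, edge_vec, node_labels, edge_labels,
          threshold=0.05, max_iters=1000, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(max_iters):
        node_prob, edge_prob = model(node_vec, edge_vec)
        # Combine node-level and edge-level losses against the sample labels.
        loss = (F.binary_cross_entropy(node_prob, node_labels)
                + F.binary_cross_entropy(edge_prob, edge_labels))
        if loss.item() < threshold:       # stop once below the preset threshold
            break
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```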
According to an embodiment of the present disclosure, the method performed by the first calculation module includes: obtaining an attention weight of a sample node on a sample neighbor node based on the sample node representation vector and the representation vector of the sample neighbor node; obtaining a sample node feature attention vector based on the attention weight of the sample node on the neighbor node and the representation vector of the sample neighbor node; and obtaining the sample node feature vector based on the sample node representation vector and the sample node feature attention vector.
According to an embodiment of the present disclosure, the method performed by the second calculation module includes: obtaining an attention weight of the sample node on a sample edge based on the sample node representation vector and the representation vector of the sample edge; obtaining a sample edge feature attention vector based on the attention weight of the sample node on the sample edge and the representation vector of the sample edge; and obtaining the sample node-edge feature vector based on the sample node representation vector and the sample edge feature attention vector.
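Both calculations share the same attention pattern. The sketch below shows one common dot-product (GAT-style) instantiation of it; the scoring function and dimensions are assumptions, since the disclosure does not fix them:

```python
import torch
import torch.nn.functional as F

def attention_feature(node_vec, others):
    """others: (k, d) neighbor-node vectors (first calculation module) or
    incident-edge vectors (second calculation module)."""
    scores = others @ node_vec               # attention score per neighbor/edge
    weights = F.softmax(scores, dim=0)       # attention weights
    attn_vec = weights @ others              # feature attention vector
    return torch.cat([node_vec, attn_vec])   # sample node(-edge) feature vector

node = torch.randn(32)
node_feature = attention_feature(node, torch.randn(4, 32))   # over neighbor nodes
edge_feature = attention_feature(node, torch.randn(3, 32))   # over incident edges
```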
According to embodiments of the present disclosure, any of the first acquisition module 710, the prediction module 720, the presentation module 730, the first storage module 810, the second storage module 820, the conversion module 830, the self-learning module 840, the update module 850, the second acquisition module 1001, the third storage module 1002, the first calculation module 1003, the second calculation module 1004, the classification module 1005, and the third calculation module 1006 may be combined in one module or any of the modules may be split into a plurality of modules. Alternatively, at least some of the functionality of one or more of the modules may be combined with at least some of the functionality of other modules and implemented in one module. According to embodiments of the present disclosure, at least one of the first acquisition module 710, the prediction module 720, the presentation module 730, the first storage module 810, the second storage module 820, the conversion module 830, the self-learning module 840, the update module 850, the second acquisition module 1001, the third storage module 1002, the first calculation module 1003, the second calculation module 1004, the classification module 1005, and the third calculation module 1006 may be implemented at least in part as hardware circuitry, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in hardware or firmware in any other reasonable manner of integrating or packaging the circuitry, or in any one of or a suitable combination of three implementations of software, hardware, and firmware. Alternatively, at least one of the first acquisition module 710, the prediction module 720, the presentation module 730, the first storage module 810, the second storage module 820, the conversion module 830, the self-learning module 840, the update module 850, the second acquisition module 1001, the third storage module 1002, the first calculation module 1003, the second calculation module 1004, the classification module 1005 and the third calculation module 1006 may be at least partially implemented as a computer program module, which may perform the corresponding functions when executed.
Fig. 11 schematically illustrates a block diagram of an electronic device adapted to implement a risk prediction method according to an embodiment of the present disclosure.
As shown in fig. 11, an electronic device 900 according to an embodiment of the present disclosure includes a processor 901 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 902 or a program loaded from a storage portion 908 into a Random Access Memory (RAM) 903. The processor 901 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or an associated chipset and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), or the like. Processor 901 may also include on-board memory for caching purposes. Processor 901 may include a single processing unit or multiple processing units for performing the different actions of the method flows according to embodiments of the present disclosure.
In the RAM 903, various programs and data necessary for the operation of the electronic device 900 are stored. The processor 901, the ROM 902, and the RAM 903 are connected to each other by a bus 904. The processor 901 performs various operations of the method flow according to the embodiments of the present disclosure by executing programs in the ROM 902 and/or the RAM 903. Note that the program may be stored in one or more memories other than the ROM 902 and the RAM 903. The processor 901 may also perform various operations of the method flow according to embodiments of the present disclosure by executing programs stored in the one or more memories.
According to an embodiment of the disclosure, the electronic device 900 may also include an input/output (I/O) interface 905, the input/output (I/O) interface 905 also being connected to the bus 904. The electronic device 900 may also include one or more of the following components connected to the I/O interface 905: an input section 906 including a keyboard, a mouse, and the like; an output portion 907 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and a speaker; a storage portion 908 including a hard disk or the like; and a communication section 909 including a network interface card such as a LAN card, a modem, or the like. The communication section 909 performs communication processing via a network such as the internet. The drive 910 is also connected to the I/O interface 905 as needed. A removable medium 911 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is installed as needed on the drive 910 so that a computer program read out therefrom is installed into the storage section 908 as needed.
The present disclosure also provides a computer-readable storage medium that may be embodied in the apparatus/device/system described in the above embodiments; or may exist alone without being assembled into the apparatus/device/system. The computer-readable storage medium carries one or more programs which, when executed, implement methods in accordance with embodiments of the present disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example, but is not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. For example, according to embodiments of the present disclosure, the computer-readable storage medium may include ROM 902 and/or RAM 903 and/or one or more memories other than ROM 902 and RAM 903 described above.
Embodiments of the present disclosure also include a computer program product comprising a computer program containing program code for performing the methods shown in the flowcharts. The program code, when executed in a computer system, causes the computer system to implement the risk prediction method provided by embodiments of the present disclosure.
The above-described functions defined in the system/apparatus of the embodiments of the present disclosure are performed when the computer program is executed by the processor 901. The systems, apparatus, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the disclosure.
In one embodiment, the computer program may be carried on a tangible storage medium such as an optical storage device or a magnetic storage device. In another embodiment, the computer program may also be transmitted and distributed in the form of a signal over a network medium, downloaded and installed via the communication section 909, and/or installed from the removable medium 911. The computer program may include program code that may be transmitted using any appropriate network medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
According to embodiments of the present disclosure, the program code for carrying out the computer programs provided by embodiments of the present disclosure may be written in any combination of one or more programming languages; in particular, these computer programs may be implemented in high-level procedural and/or object-oriented programming languages, and/or in assembly/machine languages. Programming languages include, but are not limited to, Java, C++, Python, C, or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., via the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that the features recited in the various embodiments of the disclosure and/or in the claims may be provided in a variety of combinations and/or sub-combinations, even if such combinations or sub-combinations are not explicitly recited in the disclosure. In particular, the features recited in the various embodiments of the present disclosure and/or the claims may be combined and/or sub-combined in various ways without departing from the spirit and teachings of the present disclosure. All such combinations and/or sub-combinations fall within the scope of the present disclosure.
The embodiments of the present disclosure are described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described above separately, this does not mean that the measures in the embodiments cannot be used advantageously in combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be made by those skilled in the art without departing from the scope of the disclosure, and such alternatives and modifications are intended to fall within the scope of the disclosure.

Claims (15)

1. A risk prediction method, comprising:
acquiring data to be analyzed, and updating a heterogeneous graph based on the data to be analyzed, wherein the heterogeneous graph comprises nodes and edges, the nodes are used for storing user risk characteristic information, and the edges are used for storing user relations; and
processing the heterogeneous graph based on a risk prediction model to complete risk prediction,
wherein updating the heterogeneous graph comprises: performing self-supervised learning on the data stored in the heterogeneous graph to obtain a heterogeneous graph containing weight features;
the risk prediction model is trained by combining an attention network model with a classifier, wherein the attention network model comprises a feature attention network and an edge attention network; the feature attention network is used for predicting user risk based on the user risk feature data, and the edge attention network is used for predicting a user risk propagation path based on a user relationship.
2. The method of claim 1, wherein updating the heterogeneous graph comprises:
acquiring a real-time user characteristic data set, and storing the characteristic data of each user in a node;
acquiring a user relationship between a user and other users, and storing the user relationship on an edge;
processing the user risk characteristic data and the user relationship by using a heterogeneous graph conversion network to generate a node representation vector and an edge representation vector;
learning the node representation vector and the edge representation vector based on a self-learning mask machine to obtain a node representation update vector and an edge representation update vector; and
obtaining the heterogeneous graph based on the node representation update vector and the edge representation update vector,
wherein the node representation update vector and the edge representation update vector comprise a node representation vector and an edge representation vector updated based on weights.
3. The method of claim 2, wherein the heterogeneous graph conversion network comprises a heterogeneous convolution layer, the processing the user risk feature data and the user relationship with the heterogeneous graph conversion network to generate node representation vectors and edge representation vectors comprises:
processing the user risk characteristic data by utilizing the heterogeneous convolution layer, normalizing a processing result, and generating the node representation vector; and processing the user relationship by utilizing the heterogeneous convolution layer, normalizing the processing result, and generating the edge representation vector.
4. The method of claim 2, wherein the self-learning mask machine includes a feature mask self-learning network and an edge mask self-learning network, and the learning the node representation vector and the edge representation vector based on the self-learning mask machine includes:
processing the node representation vector based on a feature mask self-learning network, learning a node mask, and obtaining the node representation update vector, wherein the node representation update vector comprises a node representation vector updated based on weight; and
processing the edge representation vector based on an edge mask self-learning network, learning an edge mask, and obtaining the edge representation update vector, wherein the edge representation update vector comprises an edge representation vector updated based on weights.
5. The method of any one of claims 1-4, wherein the training method of the risk prediction model comprises:
acquiring sample user data, wherein the sample user data comprises sample user characteristic data and a sample user association relationship, the sample user association relationship comprises an association relationship between a sample user and other users, and the other users comprise users with the association relationship with the sample user;
storing the sample user data in the heterogeneous graph to obtain a sample node representation vector and a sample edge representation vector;
processing the sample node representation vector based on a feature attention network to obtain a sample node feature vector;
processing the sample edge representation vector based on an edge attention network to obtain a sample node-edge feature vector;
processing the sample node feature vector and the sample node-edge feature vector based on a classifier to obtain sample node risk probability and sample edge risk probability; and
calculating a loss function value based on the sample node risk probability, the sample edge risk probability, and the sample user risk, and performing iterative training on the risk prediction model based on the loss function value until the loss function value is smaller than a preset threshold value.
6. The method of claim 5, wherein the processing the sample node representation vector based on the feature attention network to obtain a sample node feature vector comprises:
obtaining an attention weight of the sample node on a sample neighbor node based on the sample node representation vector and the representation vector of the sample neighbor node;
obtaining a sample node feature attention vector based on the attention weight of the sample node on the neighbor node and the representation vector of the sample neighbor node; and
obtaining the sample node feature vector based on the sample node representation vector and the sample node feature attention vector.
7. The method of claim 5, wherein the processing the sample edge representation vector based on an edge attention network to obtain a sample node-edge feature vector comprises:
obtaining an attention weight of the sample node on a sample edge based on the sample node representation vector and the representation vector of the sample edge;
obtaining a sample edge feature attention vector based on the attention weight of the sample node on the sample edge and the representation vector of the sample edge; and
obtaining the sample node-edge feature vector based on the sample node representation vector and the sample edge feature attention vector.
8. The method of claim 1, wherein the data to be analyzed is updated periodically based on a preset frequency.
9. The method of claim 1, wherein after completing the risk prediction, the method further comprises:
performing risk presentation based on the risk prediction result.
10. The method of claim 9, wherein the risk presentation based on the risk prediction result comprises:
displaying a node user feature risk source and/or a node user risk propagation path, wherein the node user feature risk source is associated with user feature data, and the node user risk propagation path is associated with a user relationship.
11. A risk prediction apparatus, comprising:
the first acquisition module is configured to acquire data to be analyzed, and update a heterogeneous graph based on the data to be analyzed, wherein the heterogeneous graph comprises nodes and edges, the nodes are used for storing user risk characteristic information, and the edges are used for storing user relations; and
a prediction module configured to process the heterogeneous graph based on a risk prediction model to complete risk prediction,
wherein updating the heterogeneous graph comprises: performing self-supervised learning on the data stored in the heterogeneous graph to obtain a heterogeneous graph containing weight features; the risk prediction model is trained by combining an attention network model with a classifier, and the attention network model comprises a feature attention network and an edge attention network; the feature attention network is used for predicting user risk based on the user risk feature data, and the edge attention network is used for predicting a user risk propagation path based on a user relationship.
12. A training device for a risk prediction model, comprising:
the second acquisition module is configured to acquire sample user data, wherein the sample user data comprises sample user characteristic data and a sample user association relationship, the sample user association relationship comprises an association relationship between a sample user and other users, and the other users comprise users with the association relationship with the sample user;
the storage module is configured to store the sample user data in the heterogeneous graph and acquire a sample node representation vector and a sample edge representation vector;
the first calculation module is configured to process the sample node representation vector based on a feature attention network to obtain a sample node feature vector;
the second calculation module is configured to process the sample edge representation vector based on an edge attention network to obtain a sample node-edge feature vector;
the classification module is configured to process the sample node feature vector and the sample node-edge feature vector based on a classifier to obtain a sample node risk probability and a sample edge risk probability; and
the third calculation module is configured to calculate a loss function value based on the sample node risk probability, the sample edge risk probability, and the sample user risk, and to iteratively train the risk prediction model based on the loss function value until the loss function value is smaller than a preset threshold.
13. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method of any of claims 1-10.
14. A computer readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to perform the method according to any of claims 1 to 10.
15. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 10.
CN202311685540.3A 2023-12-08 2023-12-08 Risk prediction method, apparatus, device, medium, and program product Pending CN117670366A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311685540.3A CN117670366A (en) 2023-12-08 2023-12-08 Risk prediction method, apparatus, device, medium, and program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311685540.3A CN117670366A (en) 2023-12-08 2023-12-08 Risk prediction method, apparatus, device, medium, and program product

Publications (1)

Publication Number Publication Date
CN117670366A true CN117670366A (en) 2024-03-08

Family

ID=90067815

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311685540.3A Pending CN117670366A (en) 2023-12-08 2023-12-08 Risk prediction method, apparatus, device, medium, and program product

Country Status (1)

Country Link
CN (1) CN117670366A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118094343A (en) * 2024-04-23 2024-05-28 安徽大学 Attention mechanism-based LSTM machine residual service life prediction method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination