US10691494B2 - Method and device for virtual resource allocation, modeling, and data prediction - Google Patents


Info

Publication number
US10691494B2
US10691494B2 · US16/697,913 · US201916697913A
Authority
US
United States
Prior art keywords
data
user
evaluation results
training
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US16/697,913
Other languages
English (en)
Other versions
US20200097329A1 (en)
Inventor
Jun Zhou
Xiaolong Li
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Advanced New Technologies Co Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Publication of US20200097329A1
Assigned to ALIBABA GROUP HOLDING LIMITED. Assignors: LI, XIAOLONG; ZHOU, JUN
Priority to US16/907,637 (US10891161B2)
Application granted
Publication of US10691494B2
Assigned to ADVANTAGEOUS NEW TECHNOLOGIES CO., LTD. Assignor: ALIBABA GROUP HOLDING LIMITED
Assigned to Advanced New Technologies Co., Ltd. Assignor: ADVANTAGEOUS NEW TECHNOLOGIES CO., LTD.
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005: Allocation of resources to service a request
    • G06F 9/5011: Allocation of resources to service a request, the resources being hardware resources other than CPUs, Servers and Terminals
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00: Commerce
    • G06Q 30/02: Marketing; Price estimation or determination; Fundraising
    • G06Q 30/0201: Market modelling; Market analysis; Collecting market data
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00: Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10: Complex mathematical operations
    • G06F 17/16: Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 40/00: Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q 40/03: Credit; Loans; Processing thereof
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 40/00: Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q 40/08: Insurance
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00: Commerce
    • G06Q 30/02: Marketing; Price estimation or determination; Fundraising
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 40/00: Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q 40/02: Banking, e.g. interest calculation or account maintenance

Definitions

  • the present specification relates to the field of computer applications, and in particular, to a method and an apparatus for virtual resource allocation, modeling, and data prediction.
  • Some service platforms that provide Internet services for users can accumulate massive amounts of user data by collecting the service data that users generate daily.
  • the user data is a very precious “resource” for an operator of the service platform.
  • the operator of the service platform can construct a user evaluation model based on the “resource” through data mining and machine learning, and make evaluation and decision for the user by using the user evaluation model.
  • data features of several dimensions can be extracted from massive user data, training samples can be constructed based on the extracted features, and a user risk evaluation model can be constructed through training by using a specific machine learning algorithm. Then, risk evaluation is performed on a user by using the user risk evaluation model, whether the user is a risky user is determined based on a risk evaluation result, and then whether a loan needs to be granted to the user is determined.
  • the present specification provides a virtual resource allocation method, including: receiving evaluation results of several users that are uploaded by a plurality of data providers, where the evaluation results are obtained after the plurality of data providers evaluate the users respectively based on evaluation models of the plurality of data providers; constructing several training samples by using the evaluation results uploaded by the plurality of data providers as training data, where each training sample includes evaluation results of the same user that are uploaded by the plurality of data providers, and the training sample is labeled based on an actual service execution status of the user; and training a model based on the several training samples and the label of each training sample, using the coefficient of each variable in the trained model as the contribution level of each data provider, and allocating virtual resources to each data provider based on the contribution level of each data provider.
  • the trained model is a linear model.
  • the number of virtual resources allocated to each data provider is directly proportional to the contribution level of each data provider.
  • the method further includes: receiving evaluation results of a certain user that are uploaded by the plurality of data providers, and inputting the evaluation results to the trained model to obtain a final evaluation result of the user.
  • the virtual resource is a user data usage fund distributed to each data provider.
  • the evaluation model is a user risk evaluation model
  • the evaluation result is a risk score
  • the label indicates whether the user is a risky user.
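The allocation step above (virtual resources directly proportional to each provider's coefficient in the trained linear model) can be illustrated with a small sketch. This is not the patent's implementation; the provider names, coefficient values, and pool size are hypothetical:

```python
def allocate_resources(coefficients, provider_names, pool):
    """Allocate a pool of virtual resources to data providers in direct
    proportion to each provider's coefficient in the trained linear model."""
    weights = [abs(c) for c in coefficients]  # contribution levels
    total = sum(weights)
    return {name: pool * w / total for name, w in zip(provider_names, weights)}

# Hypothetical coefficients learned for three data providers.
allocation = allocate_resources([0.6, 0.3, 0.1], ["A", "B", "C"], pool=1000.0)
print(allocation)  # provider A receives the largest share
```

Here the absolute value of each coefficient is treated as the contribution level so that the shares always sum to the pool; the specification itself only states that the allocation is directly proportional to the contribution level.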
  • the present specification further provides a virtual resource allocation apparatus, including: a receiving module, configured to receive evaluation results of several users that are uploaded by a plurality of data providers, where the evaluation results are obtained after the plurality of data providers evaluate the users respectively based on evaluation models of the plurality of data providers; a training module, configured to construct several training samples by using the evaluation results uploaded by the plurality of data providers as training data, where each training sample includes evaluation results of the same user that are uploaded by the plurality of data providers, and the training sample is labeled based on an actual service execution status of the user; and an allocation module, configured to train a model based on the several training samples and the label of each training sample, use the coefficient of each variable in the trained model as the contribution level of each data provider, and allocate virtual resources to each data provider based on the contribution level of each data provider.
  • the trained model is a linear model.
  • the number of virtual resources allocated to each data provider is directly proportional to the contribution level of each data provider.
  • the apparatus further includes: an evaluation module, configured to receive evaluation results of a certain user that are uploaded by the plurality of data providers, and input the evaluation results to the trained model to obtain a final evaluation result of the user.
  • the virtual resource is a user data usage fund distributed to each data provider.
  • the evaluation model is a user risk evaluation model
  • the evaluation result is a risk score
  • the label indicates whether the user is a risky user.
  • the present specification further provides a modeling method, including: receiving evaluation results of several users that are uploaded by a plurality of data providers, where the evaluation results are obtained after the plurality of data providers evaluate the users respectively based on evaluation models of the plurality of data providers; constructing several training samples by using the evaluation results uploaded by the plurality of data providers as training data, where each training sample includes evaluation results of the same user that are uploaded by the plurality of data providers, and the training sample is labeled based on an actual service execution status of the user; and training a model based on the several training samples and the label of each training sample, to obtain a trained model.
  • the trained model is a linear model.
  • the evaluation model is a user risk evaluation model
  • the evaluation result is a risk score
  • the label indicates whether the user is a risky user.
  • the present specification further provides a data prediction method, including: receiving evaluation results of several users that are uploaded by a plurality of data providers, where the evaluation results are obtained after the plurality of data providers evaluate the users respectively based on evaluation models of the plurality of data providers; constructing several training samples by using the evaluation results uploaded by the plurality of data providers as training data, where each training sample includes evaluation results of the same user that are uploaded by the plurality of data providers, and the training sample is labeled based on an actual service execution status of the user; training a model based on the several training samples and the label of each training sample, to obtain a trained model; and receiving evaluation results of a certain user that are uploaded by the plurality of data providers, and inputting the evaluation results to the trained model to obtain a final evaluation result of the user.
  • the present specification further provides a virtual resource allocation system, including: servers of a plurality of data providers, configured to upload evaluation results of several users to a server of a risk evaluator, where the evaluation results are obtained after the plurality of data providers evaluate the users respectively based on evaluation models of the plurality of data providers; and the server of the risk evaluator, configured to construct several training samples by using the evaluation results uploaded by the plurality of data providers as training data, where each training sample includes evaluation results of the same user that are uploaded by the plurality of data providers, and the training sample is labeled based on an actual service execution status of the user; and train a model based on the several training samples and the label of each training sample, use the coefficient of each variable in the trained model as the contribution level of each data provider, and allocate virtual resources to each data provider based on the contribution level of each data provider.
  • the present specification further provides an electronic device, including: a processor; and a memory, configured to store machine executable instructions, where by reading and executing the machine executable instructions that are stored in the memory and that correspond to control logic of virtual resource allocation, the processor is prompted to perform the following operations: receiving evaluation results of several users that are uploaded by a plurality of data providers, where the evaluation results are obtained after the plurality of data providers evaluate the users respectively based on evaluation models of the plurality of data providers; constructing several training samples by using the evaluation results uploaded by the plurality of data providers as training data, where each training sample includes evaluation results of the same user that are uploaded by the plurality of data providers, and the training sample is labeled based on an actual service execution status of the user; and training a model based on the several training samples and the label of each training sample, using the coefficient of each variable in the trained model as the contribution level of each data provider, and allocating virtual resources to each data provider based on the contribution level of each data provider.
  • the plurality of data providers can upload, to the risk evaluator, evaluation results obtained after several users are separately evaluated based on evaluation models of the plurality of data providers, and the risk evaluator can construct several training samples by using the evaluation results uploaded by the plurality of data providers as training data, train a model, use the coefficient of each variable in the trained model as the contribution level of each data provider to the model, and then allocate virtual resources to each data provider based on the contribution level of each data provider.
  • the data provider needs to transmit to the risk evaluator only the evaluation results obtained through its preliminary evaluations of several users. Therefore, the data provider no longer needs to transmit locally maintained raw user data to the risk evaluator, which significantly reduces the risk of user privacy disclosure.
  • the coefficient of each variable in the trained model can truly reflect the contribution level of each data provider to the trained model. Therefore, allocating virtual resources to each data provider based on this contribution level ensures that the resources are properly allocated.
  • FIG. 1 is a flowchart illustrating a virtual resource allocation method, according to an implementation of the present specification.
  • FIG. 2 is a schematic diagram illustrating training a model by a risk evaluator based on evaluation results uploaded by a plurality of data providers, according to an implementation of the present specification.
  • FIG. 3 is a flowchart illustrating a modeling method, according to an implementation of the present specification.
  • FIG. 4 is a flowchart illustrating a data prediction method, according to an implementation of the present specification.
  • FIG. 5 is a structural diagram of hardware related to an electronic device that includes a virtual resource allocation apparatus, according to an implementation of the present specification.
  • FIG. 6 is a logical block diagram illustrating a virtual resource allocation apparatus, according to an implementation of the present specification.
  • the present specification intends to provide a technical solution in which when a risk evaluator needs to train a model by “sharing” user data maintained by a plurality of data providers, “data sharing” can be achieved while each data provider no longer needs to transmit raw user data to the risk evaluator.
  • each data provider can use a machine learning algorithm to train a user evaluation model on the user data it maintains locally, evaluate several sample users by using the user evaluation model, and then upload the evaluation results to the risk evaluator.
  • the risk evaluator can construct several training samples by using the evaluation results uploaded by the plurality of data providers as training data.
  • Each training sample includes evaluation results of the same user that are uploaded by the plurality of data providers.
  • a feature vector can be constructed by separately using evaluation results of a certain user that are uploaded by the plurality of data providers as modeling features, and the feature vector is used as a training sample.
  • the constructed training sample can be correspondingly labeled based on an actual service execution status of the user.
  • a label that each training sample is marked with can be a user label that is based on an actual repayment status of the user and that can indicate whether the user is a risky user.
  • the risk evaluator can train a model based on the constructed training samples and the label of each training sample, use the coefficient of each variable in the trained model as the contribution level of each data provider to the model, and then allocate virtual resources to each data provider based on the contribution level of each data provider.
  • the data provider needs to transmit to the risk evaluator only the evaluation results obtained through its preliminary evaluations of several users. Therefore, the data provider no longer needs to transmit locally maintained raw user data to the risk evaluator, which significantly reduces the risk of user privacy disclosure.
  • the coefficient of each variable in the trained model can truly reflect the contribution level of each data provider to the trained model. Therefore, allocating virtual resources to each data provider based on this contribution level ensures that the resources are properly allocated.
  • the user evaluation model can be a user risk evaluation model used to determine whether a user is a risky user, and the evaluation result can be risk score output after risk evaluation is performed on a user by using the user risk evaluation model.
  • each data provider can construct a user risk evaluation model based on user data maintained by the data provider.
  • the risk evaluator can be, for example, a party that grants loans.
  • the risk evaluator can construct several training samples by using evaluation results uploaded by the plurality of data providers as training data, and mark each training sample, based on an actual repayment status of a user, with a label that indicates whether the user is a risky user. The risk evaluator can then train the model based on the constructed training samples and the label of each training sample, use the coefficient of each variable in the trained model as the contribution level of each data provider to the model, and allocate virtual resources to each data provider based on that contribution level. Therefore, in the whole process, “data sharing” can be achieved while no data provider needs to provide raw user data to the risk evaluator.
  • FIG. 1 illustrates a virtual resource allocation method, according to an implementation of the present specification. The method is applied to a server of a risk evaluator, and the server performs the following steps.
  • Step 102: Receive evaluation results of several users that are uploaded by a plurality of data providers, where the evaluation results are obtained after the plurality of data providers evaluate the users respectively based on evaluation models of the plurality of data providers.
  • Step 104: Construct several training samples by using the evaluation results uploaded by the plurality of data providers as training data, where each training sample includes evaluation results of the same user that are uploaded by the plurality of data providers, and the training sample is labeled based on an actual service execution status of the user.
  • Step 106: Train a model based on the several training samples and the label of each training sample, use the coefficient of each variable in the trained model as the contribution level of each data provider, and allocate virtual resources to each data provider based on the contribution level of each data provider.
  • the data provider can include a party that has a cooperation relationship with the risk evaluator.
  • the data provider and the risk evaluator can correspond to different operators.
  • the risk evaluator can be a data operation platform of company A
  • the data provider can be a service platform, such as an e-commerce platform, a third-party bank, an express company, another financial institution, or a telecommunications operator, that cooperates with the data operation platform of company A.
  • the user evaluation model can include any type of machine learning model used to evaluate a user.
  • the user evaluation model can be a user risk evaluation model (for example, a linear logistic regression model or a credit scoring model used to perform risk evaluation on a user) trained based on a specific machine learning algorithm.
  • the evaluation result output after the user is evaluated by using the user evaluation model can be a risk score that represents the risk level of the user.
  • the risk score is usually a floating-point value ranging from 0 to 1 (for example, it can be a probability value that represents the risk level of the user).
  • the evaluation result can be another form of score other than the risk score, for example, a credit score.
  • each data provider no longer needs to transmit locally maintained raw user data to the risk evaluator; instead, each data provider performs modeling by using the locally maintained raw user data.
  • a server of each data provider can collect daily generated user data at a back end, collect several pieces of user data from the collected user data as data samples, and generate an initialized data sample set based on the collected data samples.
  • the number of collected data samples is not limited in the present specification, and can be set by a person skilled in the art based on an actual demand.
  • a specific form of the user data depends on a specific service scenario and a modeling demand, can include any type of user data that can be used to extract modeling features to train a user evaluation model, and is not limited in the present specification.
  • the user data can include, for example, transaction data, shopping records, repayment records, consumption records, and financial product purchase records of a user that can be used to extract modeling features to train the risk evaluation model.
  • the server of the data provider can further preprocess the data sample in the data sample set.
  • Preprocessing the data samples in the data sample set usually includes performing data cleaning, default value filling, normalization, or other forms of preprocessing on the data samples.
  • the data sample in the data sample set is preprocessed, so that the collected data sample can be converted into a standardized data sample suitable for model training.
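A minimal sketch of such a preprocessing step, assuming numeric fields, mean filling for missing (default) values, and min-max normalization; the field names and the chosen strategies are illustrative, not prescribed by the specification:

```python
def preprocess(samples, fields):
    """Fill missing field values with the field mean (default value
    addition), then min-max normalize each field into [0, 1]."""
    stats = {}
    for f in fields:
        vals = [s[f] for s in samples if s.get(f) is not None]
        mean = sum(vals) / len(vals)
        lo, hi = min(vals), max(vals)
        stats[f] = (mean, lo, hi if hi > lo else lo + 1.0)  # avoid zero span
    standardized = []
    for s in samples:
        row = {}
        for f in fields:
            mean, lo, hi = stats[f]
            v = s.get(f)
            v = mean if v is None else v  # default value addition
            row[f] = (v - lo) / (hi - lo)  # normalization
        standardized.append(row)
    return standardized

# Hypothetical raw user-data records; the second record misses "count".
raw = [{"amount": 10.0, "count": 2}, {"amount": 30.0, "count": None}]
processed = preprocess(raw, ["amount", "count"])
print(processed)
```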
  • the server of the data provider can extract data features (namely, modeling features that finally participate in modeling) of several dimensions from each of the data samples in the data sample set.
  • the number of extracted data features of several dimensions is not limited in the present specification, and can be selected by a person skilled in the art based on an actual modeling demand.
  • a specific type of the extracted data feature is not limited in the present specification.
  • a person skilled in the art can manually select, based on an actual modeling demand, the data feature from information actually included in the data sample.
  • the server of the data provider can generate one data feature vector for each data sample based on data feature values corresponding to the extracted data features of the dimensions, and then construct a target matrix based on the data feature vector of each data sample.
  • the target matrix can be an N×M matrix, where N is the number of data samples and M is the number of extracted feature dimensions.
  • the constructed target matrix is a final training sample set for model training.
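The target-matrix construction above can be sketched as follows: each data sample yields one data feature vector (one row), and the rows stack into the N×M matrix used as the training sample set. The extractor functions and record fields here are hypothetical:

```python
def build_target_matrix(samples, feature_extractors):
    """Build the N x M target matrix: one row (data feature vector) per
    data sample, one column per extracted modeling feature."""
    return [[extract(s) for extract in feature_extractors] for s in samples]

# Hypothetical feature extractors over raw user-data records.
extractors = [
    lambda s: float(s["transactions"]),
    lambda s: float(s["repayments"]),
]
matrix = build_target_matrix(
    [{"transactions": 5, "repayments": 1},
     {"transactions": 2, "repayments": 4}],
    extractors,
)
print(matrix)  # 2 samples x 2 features
```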
  • the server of each data provider can perform machine learning based on a specific machine learning algorithm by using the target matrix as an original sample training set, to train a user evaluation model.
  • machine learning algorithms used by the plurality of data providers to train the user evaluation models can be the same or different, and are not limited in the present specification.
  • the machine learning model can be a supervised machine learning model.
  • the machine learning model can be a logistic regression (LR) model.
  • each data sample in the training sample set can include a pre-marked sample label.
  • a specific form of the sample label usually also depends on a specific service scenario and a modeling demand, and is not limited in the present specification either.
  • the sample label can be a user label used to indicate whether the user is a risky user.
  • the user label can be marked and provided by the risk evaluator.
  • each data feature vector in the target matrix can correspond to one sample label.
  • the supervised machine learning algorithm is an LR algorithm.
  • when a logistic regression model is trained based on the LR algorithm, the fitting error between a training sample and its corresponding sample label can usually be evaluated by using a loss function.
  • the training sample and the corresponding sample label can be input to the loss function as input values, and repeated iterative calculation is performed by using a gradient descent method until convergence is reached.
  • after convergence is reached, the values of the model parameters can be obtained, namely, an optimal weight value for each modeling feature in the training sample, where the weight value can represent the contribution level of that modeling feature to the model output result.
  • the logistic regression model can be constructed by using the obtained value of the model parameter as an optimal parameter.
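The iterative training loop described above (log-loss evaluation plus gradient descent until convergence) can be sketched as plain stochastic gradient descent; the toy data, learning rate, and epoch count are illustrative assumptions:

```python
import math

def train_logistic_regression(X, y, lr=0.5, epochs=2000):
    """Fit logistic regression by gradient descent on the log loss; the
    learned weight of each feature reflects its contribution to the output."""
    m = len(X[0])
    w, b = [0.0] * m, 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid prediction
            err = p - yi                    # gradient of log loss w.r.t. z
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# Toy separable data: the label follows the first feature.
X = [[0.0, 1.0], [0.1, 0.9], [0.9, 0.2], [1.0, 0.1]]
y = [0, 0, 1, 1]
w, b = train_logistic_regression(X, y)
print(w[0] > w[1])  # the first feature carries the larger weight
```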
  • FIG. 2 is a schematic diagram illustrating training a model by a risk evaluator based on evaluation results uploaded by a plurality of data providers, according to an implementation of the present specification.
  • the risk evaluator can prepare several sample users, and notify each data provider of user IDs of the sample users. For example, in implementation, the user IDs of the sample users can be sent to each data provider in a form of a list.
  • each data provider can separately evaluate the sample users by using a user evaluation model of the data provider, and then upload evaluation results to the risk evaluator, and the risk evaluator performs modeling.
  • the risk evaluator does not need to notify each data provider of the user IDs of the sample users.
  • each data provider no longer needs to “share” locally maintained raw user data with the risk evaluator, and needs to “share” only a preliminary evaluation result of a user with the risk evaluator.
  • the preliminary evaluation result that the data provider “shares” with the risk evaluator can be understood as a result obtained by reducing the dimensionality of the locally maintained user data.
  • the preliminary evaluation result that each data provider “shares” can be considered a data feature obtained by reducing the locally maintained user data to a single dimension.
  • the preliminary evaluation result is obtained by each data provider through modeling by performing machine learning on the locally maintained user data. Therefore, “sharing” the preliminary evaluation result with the risk evaluator is equivalent to sharing, with the risk evaluator, data value obtained by learning and analyzing the locally maintained user data based on the machine learning. Although each data provider does not “share” the raw user data with the risk evaluator, data sharing can still be achieved by “sharing” the data value.
  • after receiving the evaluation results that correspond to the sample users and that are uploaded by the plurality of data providers, the risk evaluator can construct a corresponding training sample for each sample user by using the evaluation results uploaded by the plurality of data providers as training data.
  • each constructed training sample includes evaluation results obtained after the plurality of data providers preliminarily evaluate, based on the trained user evaluation models, a sample user corresponding to the training sample.
  • An evaluation result from each data provider corresponds to one feature variable in the training sample.
  • the feature variable refers to a feature field that constitutes the training sample.
  • each training sample includes several feature fields, and each feature field corresponds to an evaluation result uploaded by one data provider.
  • a training sample set can be further generated based on the constructed training samples, and the training sample is correspondingly labeled based on an actual service execution status of each sample user.
  • a label that each training sample is marked with can be a user label that is based on an actual repayment status of the user and that can indicate whether the user is a risky user.
  • the risk evaluator can mark each sample user with the user label based on information about whether each sample user finally defaults on repayment. For example, assume that after a loan is finally granted to a certain sample user, the user defaults on repayment. In this case, in the training sample set, a training sample corresponding to the sample user is finally labeled to indicate that the user is a risky user.
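The sample construction and labeling described above can be sketched as follows. This is an illustrative sketch only: the provider names, user IDs, scores, and repayment outcomes are hypothetical, not data from the specification.

```python
# Illustrative sketch: build labeled training samples from the preliminary
# evaluation results uploaded by several data providers.

# Evaluation results keyed by user ID; one preliminary score per provider.
uploaded = {
    "user_a": {"provider_1": 0.82, "provider_2": 0.75, "provider_3": 0.90},
    "user_b": {"provider_1": 0.21, "provider_2": 0.30, "provider_3": 0.18},
}

# Actual service execution status: True if the user defaulted on repayment.
defaulted = {"user_a": True, "user_b": False}

providers = ["provider_1", "provider_2", "provider_3"]

def build_training_set(uploaded, defaulted, providers):
    """Each training sample holds one feature field per data provider;
    the label marks whether the user turned out to be risky."""
    samples, labels = [], []
    for user_id, scores in uploaded.items():
        samples.append([scores[p] for p in providers])
        labels.append(1 if defaulted[user_id] else 0)
    return samples, labels

X, y = build_training_set(uploaded, defaulted, providers)
```

Each row of `X` corresponds to one sample user, and each column to one data provider's evaluation result, matching the feature-field structure described above.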
  • a server of the risk evaluator can train a predetermined machine learning model based on the constructed training sample set and the label corresponding to each training sample.
  • the risk evaluator can multiply the evaluation results of the same user that are uploaded by the plurality of data providers by their corresponding coefficients, add up the products, and then use the calculation result as the final evaluation result of the user.
  • the machine learning model trained by the risk evaluator can be a linear model.
  • the machine learning model trained by the risk evaluator can be a linear logistic regression model.
  • the process in which the risk evaluator trains the linear model based on the constructed training sample set and the label corresponding to each training sample is a process of linear fitting: the evaluation results uploaded by the plurality of data providers are used as independent variables, the corresponding user labels are used as dependent variables, and both are input to the linear model to obtain the coefficient corresponding to each variable.
  • a specific implementation process is not described in detail in the present specification. When a person skilled in the art implements the technical solution in the present specification, references can be made to a record in a related technology.
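Since the specification leaves the fitting procedure to related technology, the following is only a minimal illustrative sketch of fitting a linear logistic regression model, implemented here as plain stochastic gradient descent; the toy scores, labels, and hyperparameters are assumptions.

```python
import math

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Plain stochastic-gradient logistic regression: learns one
    coefficient per feature variable (i.e. per data provider) plus an
    intercept, by fitting the evaluation results (independent variables)
    to the user labels (dependent variables)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))  # predicted risk probability
            err = p - yi
            b -= lr * err
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
    return w, b

# Toy data: two providers' scores per user; label 1 = risky, 0 = not risky.
X = [[0.9, 0.8], [0.8, 0.9], [0.2, 0.1], [0.1, 0.2]]
y = [1, 1, 0, 0]
coef, intercept = train_logistic(X, y)
```

After training, `coef` plays the role of the per-variable coefficients that the specification later uses as contribution levels.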
  • once the risk evaluator obtains, through the previous training process, the coefficients corresponding to the variables (namely, the evaluation results uploaded by the plurality of data providers) in the training samples, training of the model is completed.
  • the risk evaluator can further allocate a certain quantity of virtual resources to each data provider based on the contribution level of each data provider to the trained model.
  • the number of virtual resources allocated to each data provider can be directly proportional to a weight value (namely, a coefficient) of each data provider.
  • the virtual resource allocated to each data provider can be a user data usage fund distributed by the risk evaluator to each data provider.
  • the risk evaluator can allocate, based on the contribution level of each data provider to the trained model, the user data usage fund that can be allocated to each data provider.
  • the contribution level of each data provider to the trained model can be represented by the coefficient that is obtained through training and that corresponds to each variable in the training sample.
  • the risk evaluator can use the coefficient that is obtained through training and that corresponds to each variable as the contribution level of each data provider, and then allocate the fund to each data provider based on a value of the coefficient corresponding to each variable.
  • the risk evaluator can use the coefficient of each variable as that data provider's contribution level to the model, convert it into a corresponding allocation percentage, and then distribute the total amount of user data usage funds that can be allocated to the plurality of data providers based on the allocation percentages obtained through conversion.
  • a data provider with a high contribution level to the model can be allocated more data usage funds.
  • a high-quality data provider can benefit more, so that each data provider can be encouraged to continuously improve quality of data maintained by the data provider.
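The coefficient-to-percentage conversion described above can be sketched as follows; the provider names, coefficient values, total fund, and the handling of negative coefficients are all hypothetical choices, not prescribed by the specification.

```python
def allocate_funds(coefficients, total_fund):
    """Convert each provider's coefficient (its contribution level to the
    trained model) into an allocation percentage, then split the total
    user data usage fund accordingly. Negative coefficients are clipped
    to zero in this sketch, a design choice the specification leaves open."""
    weights = {p: max(c, 0.0) for p, c in coefficients.items()}
    total = sum(weights.values())
    return {p: total_fund * w / total for p, w in weights.items()}

# Hypothetical coefficients obtained from a trained linear model.
coeffs = {"provider_1": 0.6, "provider_2": 0.3, "provider_3": 0.1}
shares = allocate_funds(coeffs, total_fund=10000.0)
```

A provider with a larger coefficient receives a proportionally larger share, which is the incentive effect the surrounding text describes.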
  • an initial coefficient can be set for each variable in the model, and the initial coefficient is used to represent an initial contribution level of each data provider to the model.
  • a policy for setting the initial contribution level is not limited in the present specification, and can be set based on an actual demand when a person skilled in the art implements the technical solution in the present specification.
  • the same initial coefficient can be set for all the variables in the model (equal weighting), and virtual resources are equally allocated to the plurality of data providers by using the initial coefficient as the initial contribution levels of the plurality of data providers.
  • the virtual resource allocated by the risk evaluator to each data provider is a user data usage fund distributed by the risk evaluator to each data provider.
  • the risk evaluator can equally allocate, to the plurality of data providers, the total amount of user data usage funds that can be allocated to them, based on the initial contribution levels of the plurality of data providers.
  • the coefficient of each variable in the trained model can truly reflect the contribution level of each data provider to the trained model. Therefore, virtual resources are allocated to each data provider based on the contribution level, so that virtual resources can be properly allocated.
  • the risk evaluator can subsequently perform risk evaluation on a certain target user by using the trained model.
  • the target user can include a user whose risk evaluation needs to be performed by the risk evaluator.
  • the risk evaluator can be a party that grants a loan.
  • the target user can be a user who initiates a loan application and for whom the risk evaluator needs to perform risk evaluation and determine whether to grant a loan.
  • the plurality of data providers can search, based on the user ID, for evaluation results obtained after evaluation is performed by using user evaluation models of the plurality of data providers, and then upload the evaluation results to the risk evaluator.
  • the risk evaluator can construct a corresponding prediction sample for the target user by using the evaluation results uploaded by the plurality of data providers as prediction data, input the prediction sample to the trained model for prediction calculation to obtain a final evaluation result of the user, and make a corresponding service decision based on the final evaluation result.
  • a credit-based loan granting service scenario is still used as an example.
  • the final evaluation result can still be a risk score.
  • the risk evaluator can compare the risk score with a predetermined risk threshold. If the risk score is greater than or equal to the risk threshold, it indicates that the target user is a risky user. In this case, the user can be labeled to indicate that the user is a risky user, and the loan application initiated by the user is terminated.
  • if the risk score is less than the risk threshold, it indicates that the target user is a low-risk user.
  • the user can be labeled to indicate that the user is a low-risk user, the loan application initiated by the user is responded to normally, and a loan is granted to the user.
  • the user label that the user is marked with can be maintained and updated based on information about whether the target user finally defaults on repayment. For example, assume that the target user is not marked as a risky user and a loan is finally granted to the user; if the user then defaults on repayment, the marked user label can be immediately updated, and the user is re-marked as a risky user.
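The threshold comparison and subsequent label maintenance described above can be sketched as follows; the threshold value, field names, and decision structure are illustrative assumptions.

```python
RISK_THRESHOLD = 0.5  # hypothetical predetermined risk threshold

def decide(risk_score, threshold=RISK_THRESHOLD):
    """Compare the final risk score with the threshold: terminate the
    loan application for risky users, respond normally otherwise."""
    if risk_score >= threshold:
        return {"label": "risky", "grant_loan": False}
    return {"label": "low_risk", "grant_loan": True}

def update_label(decision, defaulted):
    """Maintain the user label: if a user who was granted a loan later
    defaults on repayment, re-mark the user as risky."""
    if decision["grant_loan"] and defaulted:
        decision["label"] = "risky"
    return decision

decision = decide(0.3)                   # low-risk user, loan granted
decision = update_label(decision, True)  # user later defaults on repayment
```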
  • a data modeling party can support any data provider in exiting “data sharing” at any time, and can further support any data provider in joining “data sharing” at any time.
  • the risk evaluator may not need to focus on the number and types of data providers that have a cooperation relationship with it, and only needs to perform weighting calculation on the preliminary evaluation results of the target user that are uploaded by the data providers that currently maintain a cooperation relationship with it. It can be seen that in the present specification, the risk evaluator can flexibly cooperate with different types of data providers.
  • the data provider needs to transmit, to the risk evaluator, only the evaluation results obtained through preliminary evaluations of several users by the data provider. Therefore, the data provider no longer needs to transmit the locally maintained raw user data to the risk evaluator, thereby significantly reducing the user privacy disclosure risk.
  • the coefficient of each variable in the trained model can truly reflect the contribution level of each data provider to the trained model. Therefore, virtual resources are allocated to each data provider based on the contribution level, so that virtual resources can be properly allocated.
  • the present specification further provides a modeling method.
  • the method is applied to a server of a risk evaluator, and the server performs the following steps:
  • Step 302 Receive evaluation results of several users that are uploaded by a plurality of data providers, where the evaluation results are obtained after the plurality of data providers evaluate the users respectively based on the evaluation models of the plurality of data providers.
  • Step 304 Construct several training samples by using the evaluation results uploaded by the plurality of data providers as training data, where each training sample includes evaluation results of the same user that are uploaded by the plurality of data providers, and the training sample is labeled based on an actual service execution status of the user.
  • Step 306 Train a model based on the several training samples and the label of each training sample, to obtain a trained model.
  • the trained model can be a linear model.
  • the trained model can be a linear logistic regression model.
  • the evaluation model can be a user risk evaluation model, the evaluation result can be a risk score (or credit score), and the label indicates whether the user is a risky user.
  • the present specification further provides a data prediction method.
  • the method is applied to a server of a risk evaluator, and the server performs the following steps:
  • Step 402 Receive evaluation results of several users that are uploaded by a plurality of data providers, where the evaluation results are obtained after the plurality of data providers evaluate the users respectively based on the evaluation models of the plurality of data providers.
  • Step 404 Construct several training samples by using the evaluation results uploaded by the plurality of data providers as training data, where each training sample includes evaluation results of the same user that are uploaded by the plurality of data providers, and the training sample is labeled based on an actual service execution status of the user.
  • Step 406 Train a model based on the several training samples and the label of each training sample, to obtain a trained model.
  • Step 408 Receive evaluation results of a certain user that are uploaded by the plurality of data providers, and input the evaluation results to the trained model to obtain a final evaluation result of the user.
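Step 408 can be sketched as follows, assuming (as in the earlier implementation) that the trained model is a linear logistic regression model; the weights, intercept, and evaluation results are made-up numbers for illustration.

```python
import math

def predict_final_score(weights, intercept, evaluation_results):
    """Step 408 sketch: feed one user's uploaded evaluation results into
    the trained linear (logistic) model to obtain the final evaluation
    result, here a probability-like risk score."""
    z = intercept + sum(w * x for w, x in zip(weights, evaluation_results))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical trained coefficients and a target user's uploaded scores.
score = predict_final_score([2.0, 1.5], -1.2, [0.8, 0.6])
```

A higher combination of provider scores yields a higher final risk score, which the risk evaluator then compares with its threshold.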
  • the present specification further provides an implementation of a virtual resource allocation apparatus.
  • the implementation of the virtual resource allocation apparatus in the present specification can be applied to an electronic device.
  • the apparatus implementation can be implemented by software, hardware, or a combination of hardware and software.
  • Software implementation is used as an example.
  • the apparatus is formed by a processor of the electronic device where the apparatus is located reading corresponding computer program instructions from a nonvolatile memory into memory and running the instructions.
  • FIG. 5 is a structural diagram of hardware of an electronic device where a virtual resource allocation apparatus is located, according to an implementation of the present specification.
  • the electronic device where the apparatus is located in some implementations can usually include other hardware based on an actual function of the electronic device. Details are omitted.
  • FIG. 6 is a block diagram illustrating a virtual resource allocation apparatus, according to an example implementation of the present specification.
  • the virtual resource allocation apparatus 60 can be applied to the electronic device shown in FIG. 5 , and includes a receiving module 601 , a training module 602 , and an allocation module 603 .
  • the receiving module 601 is configured to receive evaluation results of several users that are uploaded by a plurality of data providers, where the evaluation results are obtained after the plurality of data providers evaluate the users respectively based on the evaluation models of the plurality of data providers.
  • the training module 602 is configured to construct several training samples by using the evaluation results uploaded by the plurality of data providers as training data, where each training sample includes evaluation results of the same user that are uploaded by the plurality of data providers, and the training sample is labeled based on an actual service execution status of the user.
  • the allocation module 603 is configured to train a model based on the several training samples and the label of each training sample, use the coefficient of each variable in the trained model as the contribution level of each data provider, and allocate virtual resources to each data provider based on the contribution level of each data provider.
  • the trained model is a linear model.
  • the number of virtual resources allocated to each data provider is directly proportional to the contribution level of each data provider.
  • the apparatus further includes: an evaluation module 604 (not shown in FIG. 6 ), configured to receive evaluation results of a certain user that are uploaded by the plurality of data providers, and input the evaluation results to the trained model to obtain a final evaluation result of the user.
  • the virtual resource is a user data usage fund distributed to each data provider.
  • the evaluation model is a user risk evaluation model
  • the evaluation result is a risk score
  • the label indicates whether the user is a risky user.
  • the apparatus implementation basically corresponds to the method implementation, and therefore for related parts, references can be made to related description in the method implementation.
  • the previous apparatus implementation is merely an example.
  • the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on a plurality of network units. Some or all of the modules can be selected based on an actual demand to achieve the objectives of the solutions of the present specification. A person of ordinary skill in the art can understand and implement the implementations of the present specification without creative efforts.
  • the system, apparatus, module, or unit illustrated in the previous implementations can be implemented by using a computer chip or an entity, or can be implemented by using a product having a certain function.
  • a typical implementation device is a computer, and the computer can be a personal computer, a laptop computer, a cellular phone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email receiving and sending device, a game console, a tablet computer, a wearable device, or any combination of these devices.
  • the present specification further provides an implementation of a virtual resource allocation system.
  • the virtual resource allocation system can include servers of a plurality of data providers and a server of a risk evaluator.
  • the servers of the plurality of data providers are configured to upload evaluation results of several users to the server of the risk evaluator, where the evaluation results are obtained after the plurality of data providers evaluate the users respectively based on the evaluation models of the plurality of data providers.
  • the server of the risk evaluator is configured to construct several training samples by using the evaluation results uploaded by the plurality of data providers as training data, where each training sample includes evaluation results of the same user that are uploaded by the plurality of data providers, and the training sample is labeled based on an actual service execution status of the user; and train a model based on the several training samples and the label of each training sample, use the coefficient of each variable in the trained model as the contribution level of each data provider, and allocate virtual resources to each data provider based on the contribution level of each data provider.
  • the present specification further provides an implementation of an electronic device.
  • the electronic device includes a processor and a memory configured to store machine executable instructions.
  • the processor and the memory are usually connected to each other by using an internal bus.
  • the device can further include an external interface, to communicate with another device or component.
  • the processor is prompted to perform the following operations: receiving evaluation results of several users that are uploaded by a plurality of data providers, where the evaluation results are obtained after the plurality of data providers evaluate the users respectively based on the evaluation models of the plurality of data providers; constructing several training samples by using the evaluation results uploaded by the plurality of data providers as training data, where each training sample includes evaluation results of the same user that are uploaded by the plurality of data providers, and the training sample is labeled based on an actual service execution status of the user; and training a model based on the several training samples and the label of each training sample, using the coefficient of each variable in the trained model as the contribution level of each data provider, and allocating virtual resources to each data provider based on the contribution level of each data provider.
  • the trained model is a linear model.
  • the number of virtual resources allocated to each data provider is directly proportional to the contribution level of each data provider.
  • by reading and executing the machine executable instructions that are stored in the memory and that correspond to the control logic of the virtual resource allocation, the processor is prompted to perform the following operation: receiving evaluation results of a certain user that are uploaded by the plurality of data providers, and inputting the evaluation results to the trained model to obtain a final evaluation result of the user.
  • the virtual resource is a user data usage fund distributed to each data provider.
  • the evaluation model is a user risk evaluation model
  • the evaluation result is a risk score
  • the label indicates whether the user is a risky user.

US16/697,913 2017-09-27 2019-11-27 Method and device for virtual resource allocation, modeling, and data prediction Active US10691494B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/907,637 US10891161B2 (en) 2017-09-27 2020-06-22 Method and device for virtual resource allocation, modeling, and data prediction

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201710890033 2017-09-27
CN201710890033.1A CN109559214A (zh) 2017-09-27 2017-09-27 虚拟资源分配、模型建立、数据预测方法及装置
CN201710890033.1 2017-09-27
PCT/CN2018/107261 WO2019062697A1 (fr) 2017-09-27 2018-09-25 Procédé et dispositif d'attribution de ressources virtuelles, d'établissement de modèle et de prédiction de données

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/107261 Continuation WO2019062697A1 (fr) 2017-09-27 2018-09-25 Procédé et dispositif d'attribution de ressources virtuelles, d'établissement de modèle et de prédiction de données

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/907,637 Continuation US10891161B2 (en) 2017-09-27 2020-06-22 Method and device for virtual resource allocation, modeling, and data prediction

Publications (2)

Publication Number Publication Date
US20200097329A1 US20200097329A1 (en) 2020-03-26
US10691494B2 true US10691494B2 (en) 2020-06-23

Family

ID=65863622

Family Applications (2)

Application Number Title Priority Date Filing Date
US16/697,913 Active US10691494B2 (en) 2017-09-27 2019-11-27 Method and device for virtual resource allocation, modeling, and data prediction
US16/907,637 Active US10891161B2 (en) 2017-09-27 2020-06-22 Method and device for virtual resource allocation, modeling, and data prediction

Family Applications After (1)

Application Number Title Priority Date Filing Date
US16/907,637 Active US10891161B2 (en) 2017-09-27 2020-06-22 Method and device for virtual resource allocation, modeling, and data prediction

Country Status (5)

Country Link
US (2) US10691494B2 (fr)
EP (1) EP3617983A4 (fr)
CN (1) CN109559214A (fr)
TW (1) TWI687876B (fr)
WO (1) WO2019062697A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10891161B2 (en) 2017-09-27 2021-01-12 Advanced New Technologies Co., Ltd. Method and device for virtual resource allocation, modeling, and data prediction

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2018285963A1 (en) * 2017-06-16 2020-02-06 Soter Analytics Pty Ltd Method and system for monitoring core body movements
EP3503012A1 (fr) * 2017-12-20 2019-06-26 Accenture Global Solutions Limited Moteur d'analyse pour plusieurs n uds de chaîne de blocs
CN110110970A (zh) * 2019-04-12 2019-08-09 平安信托有限责任公司 虚拟资源风险评级方法、系统、计算机设备和存储介质
CN110162995B (zh) * 2019-04-22 2023-01-10 创新先进技术有限公司 评估数据贡献程度的方法及其装置
CN110232403B (zh) * 2019-05-15 2024-02-27 腾讯科技(深圳)有限公司 一种标签预测方法、装置、电子设备及介质
CN110851482B (zh) * 2019-11-07 2022-02-18 支付宝(杭州)信息技术有限公司 为多个数据方提供数据模型的方法及装置
CN111401914B (zh) * 2020-04-02 2022-07-22 支付宝(杭州)信息技术有限公司 风险评估模型的训练、风险评估方法及装置
CN111833179A (zh) * 2020-07-17 2020-10-27 浙江网商银行股份有限公司 资源分配平台、资源分配方法及装置
CN113762675A (zh) * 2020-10-27 2021-12-07 北京沃东天骏信息技术有限公司 信息生成方法、装置、服务器、系统和存储介质
CN113221989B (zh) * 2021-04-30 2022-09-02 浙江网商银行股份有限公司 基于分布式的评估模型训练方法、系统以及装置
US11704609B2 (en) 2021-06-10 2023-07-18 Bank Of America Corporation System for automatically balancing anticipated infrastructure demands
US11252036B1 (en) 2021-06-10 2022-02-15 Bank Of America Corporation System for evaluating and tuning resources for anticipated demands
US12014210B2 (en) 2021-07-27 2024-06-18 Bank Of America Corporation Dynamic resource allocation in a distributed system
WO2023097353A1 (fr) * 2021-12-03 2023-06-08 Batnav Pty Ltd Procédé de conservation d'informations
CN115242648B (zh) * 2022-07-19 2024-05-28 北京百度网讯科技有限公司 扩缩容判别模型训练方法和算子扩缩容方法

Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6963826B2 (en) * 2003-09-22 2005-11-08 C3I, Inc. Performance optimizer system and method
US7113932B2 (en) * 2001-02-07 2006-09-26 Mci, Llc Artificial intelligence trending system
US7444308B2 (en) * 2001-06-15 2008-10-28 Health Discovery Corporation Data mining platform for bioinformatics and other knowledge discovery
EP2490139A1 (fr) 2011-02-15 2012-08-22 General Electric Company Procédé de construction d'un modèle de mélange
US8417715B1 (en) * 2007-12-19 2013-04-09 Tilmann Bruckhaus Platform independent plug-in methods and systems for data mining and analytics
CN103051645A (zh) 2011-10-11 2013-04-17 电子科技大学 P2p网络中基于分组的激励机制
US8630902B2 (en) * 2011-03-02 2014-01-14 Adobe Systems Incorporated Automatic classification of consumers into micro-segments
US8655695B1 (en) * 2010-05-07 2014-02-18 Aol Advertising Inc. Systems and methods for generating expanded user segments
US8762299B1 (en) * 2011-06-27 2014-06-24 Google Inc. Customized predictive analytical model training
WO2014160296A1 (fr) 2013-03-13 2014-10-02 Guardian Analytics, Inc. Détection et analyse de fraude
CN104240016A (zh) 2014-08-29 2014-12-24 广州华多网络科技有限公司 虚拟场所的用户管理方法及装置
CN104866969A (zh) 2015-05-25 2015-08-26 百度在线网络技术(北京)有限公司 个人信用数据处理方法和装置
US20150281320A1 (en) * 2014-03-31 2015-10-01 Alibaba Group Holding Limited Method and system for providing internet application services
CN105225149A (zh) 2015-09-07 2016-01-06 腾讯科技(深圳)有限公司 一种征信评分确定方法及装置
US9239996B2 (en) * 2010-08-24 2016-01-19 Solano Labs, Inc. Method and apparatus for clearing cloud compute demand
US9436911B2 (en) * 2012-10-19 2016-09-06 Pearson Education, Inc. Neural networking system and methods
US9495641B2 (en) * 2012-08-31 2016-11-15 Nutomian, Inc. Systems and method for data set submission, searching, and retrieval
CN106127363A (zh) 2016-06-12 2016-11-16 腾讯科技(深圳)有限公司 一种用户信用评估方法和装置
CN106204033A (zh) 2016-07-04 2016-12-07 首都师范大学 一种基于人脸识别和指纹识别的支付系统
US20170148027A1 (en) 2015-11-24 2017-05-25 Vesta Corporation Training and selection of multiple fraud detection models
US9672474B2 (en) * 2014-06-30 2017-06-06 Amazon Technologies, Inc. Concurrent binning of machine learning data
WO2017143919A1 (fr) 2016-02-26 2017-08-31 阿里巴巴集团控股有限公司 Procédé et appareil d'établissement de modèle d'identification de données

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106897918A (zh) * 2017-02-24 2017-06-27 上海易贷网金融信息服务有限公司 一种混合式机器学习信用评分模型构建方法
CN109559214A (zh) 2017-09-27 2019-04-02 阿里巴巴集团控股有限公司 虚拟资源分配、模型建立、数据预测方法及装置

Patent Citations (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7113932B2 (en) * 2001-02-07 2006-09-26 Mci, Llc Artificial intelligence trending system
US7444308B2 (en) * 2001-06-15 2008-10-28 Health Discovery Corporation Data mining platform for bioinformatics and other knowledge discovery
US6963826B2 (en) * 2003-09-22 2005-11-08 C3I, Inc. Performance optimizer system and method
US8417715B1 (en) * 2007-12-19 2013-04-09 Tilmann Bruckhaus Platform independent plug-in methods and systems for data mining and analytics
US8655695B1 (en) * 2010-05-07 2014-02-18 Aol Advertising Inc. Systems and methods for generating expanded user segments
US9239996B2 (en) * 2010-08-24 2016-01-19 Solano Labs, Inc. Method and apparatus for clearing cloud compute demand
EP2490139A1 (fr) 2011-02-15 2012-08-22 General Electric Company Procédé de construction d'un modèle de mélange
US8630902B2 (en) * 2011-03-02 2014-01-14 Adobe Systems Incorporated Automatic classification of consumers into micro-segments
US8762299B1 (en) * 2011-06-27 2014-06-24 Google Inc. Customized predictive analytical model training
US9342798B2 (en) * 2011-06-27 2016-05-17 Google Inc. Customized predictive analytical model training
CN103051645A (zh) 2011-10-11 2013-04-17 电子科技大学 P2p网络中基于分组的激励机制
US9495641B2 (en) * 2012-08-31 2016-11-15 Nutomian, Inc. Systems and method for data set submission, searching, and retrieval
US9436911B2 (en) * 2012-10-19 2016-09-06 Pearson Education, Inc. Neural networking system and methods
CN105556552A (zh) 2013-03-13 2016-05-04 加迪安分析有限公司 欺诈探测和分析
WO2014160296A1 (fr) 2013-03-13 2014-10-02 Guardian Analytics, Inc. Détection et analyse de fraude
US20150281320A1 (en) * 2014-03-31 2015-10-01 Alibaba Group Holding Limited Method and system for providing internet application services
US9672474B2 (en) * 2014-06-30 2017-06-06 Amazon Technologies, Inc. Concurrent binning of machine learning data
CN104240016A (zh) 2014-08-29 2014-12-24 广州华多网络科技有限公司 虚拟场所的用户管理方法及装置
CN104866969A (zh) 2015-05-25 2015-08-26 百度在线网络技术(北京)有限公司 个人信用数据处理方法和装置
CN105225149A (zh) 2015-09-07 2016-01-06 腾讯科技(深圳)有限公司 一种征信评分确定方法及装置
US20170148027A1 (en) 2015-11-24 2017-05-25 Vesta Corporation Training and selection of multiple fraud detection models
WO2017143919A1 (fr) 2016-02-26 2017-08-31 阿里巴巴集团控股有限公司 Procédé et appareil d'établissement de modèle d'identification de données
CN106127363A (zh) 2016-06-12 2016-11-16 腾讯科技(深圳)有限公司 一种用户信用评估方法和装置
CN106204033A (zh) 2016-07-04 2016-12-07 首都师范大学 一种基于人脸识别和指纹识别的支付系统

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Crosby et al., "BlockChain Technology: Beyond Bitcoin," Sutardja Center for Entrepreneurship & Technology Technical Report, Oct. 16, 2015, 35 pages.
Extended European Search Report in European Application No. 18861936.5, dated Mar. 27, 2020, 12 pages.
International Preliminary Report on Patentability in International Application No. PCT/CN2018/107261, dated Mar. 31, 2020, 8 pages (with English translation).
International Search Report and Written Opinion in International Application No. PCT/CN2018/107261, dated Jan. 4, 2019, 8 pages (with partial English Translation).
Nakamoto, "Bitcoin: A Peer-to-Peer Electronic Cash System," www.bitcoin.org, 2005, 9 pages.


Also Published As

Publication number Publication date
US20200097329A1 (en) 2020-03-26
US20200319927A1 (en) 2020-10-08
US10891161B2 (en) 2021-01-12
CN109559214A (zh) 2019-04-02
WO2019062697A1 (fr) 2019-04-04
TW201915847A (zh) 2019-04-16
TWI687876B (zh) 2020-03-11
EP3617983A4 (fr) 2020-05-06
EP3617983A1 (fr) 2020-03-04

Similar Documents

Publication Publication Date Title
US10891161B2 (en) Method and device for virtual resource allocation, modeling, and data prediction
US20240087009A1 (en) Data reconciliation based on computer analysis of data
EP3627759A1 (fr) Data encryption method and apparatus, method and apparatus for training a machine learning model, and electronic device
JP2020522832A (ja) System and method for issuing loans to consumers determined to be creditworthy
US10817813B2 (en) Resource configuration and management system
US11531987B2 (en) User profiling based on transaction data associated with a user
CN111340558B (zh) Online information processing method, apparatus, device, and medium based on federated learning
US20150262184A1 (en) Two stage risk model building and evaluation
US20230325592A1 (en) Data management using topic modeling
US20210110359A1 (en) Dynamic virtual resource management system
US10956976B2 (en) Recommending shared products
CN116304007 (zh) Information recommendation method and apparatus, storage medium, and electronic device
US10896290B2 (en) Automated pattern template generation system using bulk text messages
CN113138847 (zh) Computer resource allocation and scheduling method and apparatus based on federated learning
CN116226531 (zh) Intelligent recommendation method for financial products for small and micro enterprises, and related products
US20230176896A1 (en) Automated tuning of data processing rules based on region-specific requirements
CN115048561 (zh) Recommendation information determination method and apparatus, electronic device, and readable storage medium
US11341505B1 (en) Automating content and information delivery
CN113094595 (zh) Object recognition method and apparatus, computer system, and readable storage medium
US12008009B2 (en) Pre-computation and memoization of simulations
US20230139465A1 (en) Electronic service filter optimization
US20230419344A1 (en) Attribute selection for matchmaking
US11593677B1 (en) Computer-based systems configured to utilize predictive machine learning techniques to define software objects and methods of use thereof
CN114565030B (zh) Feature screening method and apparatus, electronic device, and storage medium
US20220399005A1 (en) System for decisioning resource usage based on real time feedback

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: ALIBABA GROUP HOLDING LIMITED, CAYMAN ISLANDS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ZHOU, JUN;LI, XIAOLONG;REEL/FRAME:052296/0962

Effective date: 20200401

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: ADVANTAGEOUS NEW TECHNOLOGIES CO., LTD., CAYMAN ISLANDS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ALIBABA GROUP HOLDING LIMITED;REEL/FRAME:053743/0464

Effective date: 20200826

AS Assignment

Owner name: ADVANCED NEW TECHNOLOGIES CO., LTD., CAYMAN ISLANDS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ADVANTAGEOUS NEW TECHNOLOGIES CO., LTD.;REEL/FRAME:053754/0625

Effective date: 20200910

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4