CN113010562B - Information recommendation method and device - Google Patents

Information recommendation method and device

Info

Publication number
CN113010562B
Authority
CN
China
Prior art keywords
recommendation
user
model
probability
recommendation model
Prior art date
Legal status
Active
Application number
CN202110280216.8A
Other languages
Chinese (zh)
Other versions
CN113010562A (en)
Inventor
谢壮壮
陈振
奚冬博
燕鹏
Current Assignee
Beijing Sankuai Online Technology Co Ltd
Qiandai Beijing Information Technology Co Ltd
Original Assignee
Beijing Sankuai Online Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sankuai Online Technology Co Ltd filed Critical Beijing Sankuai Online Technology Co Ltd
Priority to CN202110280216.8A priority Critical patent/CN113010562B/en
Publication of CN113010562A publication Critical patent/CN113010562A/en
Application granted granted Critical
Publication of CN113010562B publication Critical patent/CN113010562B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/24 Querying
    • G06F 16/245 Query processing
    • G06F 16/2457 Query processing with adaptation to user needs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate

Abstract

The specification discloses an information recommendation method and device. User data of a user is acquired, feature data corresponding to the user is determined, and, for each pre-trained recommendation model, the feature data is input into the recommendation model to determine a first recommendation probability for recommending information to the user under that recommendation model. Then, the first recommendation probability corresponding to the recommendation model is corrected according to the sample sampling rate used by the recommendation model during model training, and the corrected recommendation probability corresponding to the recommendation model is determined. Finally, information is recommended to the user according to the corrected recommendation probabilities corresponding to the recommendation models. Because the first recommendation probability corresponding to each recommendation model is corrected using the sample sampling rate applied during model training, the accuracy of the result output by the recommendation model is improved, and the accuracy of information recommendation to the user is ensured.

Description

Information recommendation method and device
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and an apparatus for information recommendation.
Background
At present, service providers offer users a variety of online services. To make users' daily lives more convenient, information can be recommended to a user based on the user's data, so that the user can execute the service that meets his or her actual needs based on the received recommendation information.
In practical applications, a pre-trained recommendation model is usually used to recommend information to a user. During training of the recommendation model, the scale of user data is so large that the full set of samples cannot be fed into the recommendation model under the available computing resources, so the recommendation model is usually trained with only part of the samples. However, because these partial samples reflect only part of the situation covered by the full sample set, the result output by a recommendation model trained on them is often inaccurate.
Therefore, how to effectively improve the accuracy of the result output by the recommendation model is an urgent problem to be solved.
Disclosure of Invention
The present specification provides a method, an apparatus, a storage medium, and an electronic device for information recommendation, so as to partially solve the above problems in the prior art.
The technical scheme adopted by the specification is as follows:
the present specification provides a method for information recommendation, including:
acquiring user data of a user, wherein the user data comprises: at least one of attribute data of the user, historical behavior data of the user for a target service, service risk information corresponding to the user, historical service data corresponding to the user, and related information corresponding to each piece of historical recommendation information sent to the user;
determining characteristic data corresponding to the user according to the user data of the user;
for each pre-trained recommendation model, inputting the characteristic data into the recommendation model to determine a first recommendation probability for recommending information to the user under the recommendation model, as the first recommendation probability corresponding to the recommendation model;
correcting the first recommendation probability corresponding to the recommendation model according to the sample sampling rate corresponding to the recommendation model during model training, and determining the corrected recommendation probability corresponding to the recommendation model;
and recommending information to the user according to the corrected recommendation probability corresponding to each recommendation model.
Optionally, recommending information to the user according to the corrected recommendation probability corresponding to each recommendation model, specifically including:
and sending recommendation information aiming at the target service to the user according to the corrected recommendation probability corresponding to each recommendation model.
Optionally, training the recommendation model specifically includes:
for each recommendation model, determining a sample sampling rate corresponding to the recommendation model, wherein the sample sampling rates corresponding to different recommendation models are different;
selecting, according to the sample sampling rate corresponding to the recommendation model, all positive samples and part of the negative samples from historical user data of each user, and constructing a sample set corresponding to the recommendation model, wherein a positive sample indicates that the user executed the service corresponding to the recommendation information after acquiring the recommendation information, and a negative sample indicates that the user did not execute the service corresponding to the recommendation information after acquiring the recommendation information;
for each training sample in the sample set, inputting the feature data corresponding to the training sample into the recommendation model to obtain a predicted recommendation probability corresponding to the training sample;
and training the recommendation model by taking minimizing the deviation between the predicted recommendation probability and the sample label corresponding to the training sample as the optimization target.
Optionally, correcting the first recommendation probability corresponding to the recommendation model according to the sample sampling rate corresponding to the recommendation model during model training, and determining the corrected recommendation probability corresponding to the recommendation model, specifically includes:
acquiring a predetermined probability correction rule corresponding to the recommendation model;
and determining the corrected recommendation probability corresponding to the recommendation model according to the probability correction rule, the sample sampling rate corresponding to the recommendation model during model training and the first recommendation probability corresponding to the recommendation model.
Optionally, determining a probability correction rule corresponding to the recommendation model specifically includes:
determining, according to the sample sampling rate corresponding to the recommendation model and for each training sample in the sample set used by the recommendation model during model training, a correspondence between the recommendation probability for the training sample that would be obtained after the recommendation model is trained on a full sample set and the recommendation probability for the training sample obtained after the recommendation model is trained on the sample set, wherein the training samples in the sample set are part of the training samples in the full sample set;
and determining a probability correction rule corresponding to the recommended model according to the corresponding relation.
Optionally, recommending information to the user according to the corrected recommendation probability corresponding to each recommendation model, specifically including:
determining a second recommendation probability for recommending information to the user according to the corrected recommendation probability corresponding to each recommendation model;
and recommending information to the user according to the second recommendation probability.
Optionally, determining a second recommendation probability for recommending information to the user according to the corrected recommendation probabilities corresponding to the recommendation models, specifically including:
and obtaining the second recommendation probability according to the predetermined weight corresponding to each recommendation model and the correction recommendation probability corresponding to each recommendation model.
Optionally, the determining the weight corresponding to each recommendation model in advance specifically includes:
for each recommendation model, verifying the recommendation model through a verification set corresponding to the recommendation model, and determining the identification accuracy of the recommendation model for the verification set as the identification accuracy corresponding to the recommendation model;
and determining the weight corresponding to each recommended model according to the identification accuracy corresponding to each recommended model.
This specification provides an apparatus for information recommendation, including:
an obtaining module, configured to obtain user data of a user, where the user data includes: at least one of attribute data of the user, historical behavior data of the user for a target service, service risk information corresponding to the user, historical service data corresponding to the user, and related information corresponding to each piece of historical recommendation information sent to the user;
the determining module is used for determining the characteristic data corresponding to the user according to the user data of the user;
the input module is used for inputting the characteristic data into each pre-trained recommendation model so as to determine a first recommendation probability for recommending information to the user under the recommendation model, and the first recommendation probability is used as a first recommendation probability corresponding to the recommendation model;
the correction module is used for correcting the first recommendation probability corresponding to the recommendation model according to the sample sampling rate corresponding to the recommendation model during model training, and determining the corrected recommendation probability corresponding to the recommendation model;
and the recommending module is used for recommending information to the user according to the corrected recommending probability corresponding to each recommending model.
The present specification provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above-described information recommendation method.
The present specification provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the above information recommendation method when executing the program.
The technical scheme adopted by the specification can achieve the following beneficial effects:
in the information recommendation method provided in this specification, user data of a user is obtained, feature data corresponding to the user is determined according to the user data of the user, and then, for each recommendation model trained in advance, the feature data is input into the recommendation model to determine a first recommendation probability for information recommendation to the user under the recommendation model, which is taken as the first recommendation probability corresponding to the recommendation model. And then, correcting the first recommendation probability corresponding to the recommendation model according to the sample sampling rate corresponding to the recommendation model during model training, and determining the corrected recommendation probability corresponding to the recommendation model. And finally, recommending information to the user according to the corrected recommendation probability corresponding to each recommendation model.
It can be seen from the above information recommendation method that the first recommendation probability corresponding to the recommendation model can be corrected according to the sample sampling rate corresponding to the recommendation model during model training, so that the recommendation result obtained by information recommendation of each recommendation model trained based on different sample sampling rates is close to the recommendation result obtained by information recommendation of the recommendation model trained through a full amount of samples, and therefore, compared with the prior art in which only a part of samples are used for training the recommendation model, the accuracy of the result output by the recommendation model is effectively improved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the specification and are incorporated in and constitute a part of this specification, illustrate embodiments of the specification and, together with the description, serve to explain the specification; they do not constitute an undue limitation of the specification. In the drawings:
fig. 1 is a schematic flowchart of a method for information recommendation provided in an embodiment of the present specification;
fig. 2 is a schematic flowchart of constructing a sample set according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of an information recommendation apparatus provided in an embodiment of the present specification;
fig. 4 is a schematic structural diagram of an electronic device provided in an embodiment of this specification.
Detailed Description
In order to make the objects, technical solutions and advantages of the present disclosure more clear, the technical solutions of the present disclosure will be clearly and completely described below with reference to the specific embodiments of the present disclosure and the accompanying drawings. It is to be understood that the embodiments described are only a few embodiments of the present disclosure, and not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present specification without any creative effort belong to the protection scope of the present specification.
The technical solutions provided by the embodiments of the present description are described in detail below with reference to the accompanying drawings.
In the prior art, because the scale of user data is too large, training the model on all samples takes a very long time, so the recommendation model is usually trained with only part of the full sample set in order to improve training efficiency. However, since these partial samples reflect only part of the situation covered by the full sample set, the accuracy of the result output by the recommendation model is reduced, and the accuracy of information recommendation to the user is reduced accordingly.
In order to solve the above problems, the present specification provides an information recommendation method, which obtains user data of a user and determines feature data corresponding to the user. Secondly, for each pre-trained recommendation model, inputting feature data into the recommendation model to determine a first recommendation probability for information recommendation to a user under the recommendation model. And then, correcting the first recommendation probability corresponding to the recommendation model according to the sample sampling rate corresponding to the recommendation model during model training, and determining the corrected recommendation probability corresponding to the recommendation model. And finally, recommending information to the user according to the corrected recommendation probability corresponding to each recommendation model. Therefore, the first recommendation probability corresponding to the recommendation model can be corrected according to the sample sampling rate corresponding to the recommendation model, namely the relation between part of samples for training the recommendation model and all samples, so that the corrected recommendation probability is close to the result output by the recommendation model for training by using all samples, the accuracy of the result output by the recommendation model is improved, and the accuracy of information recommendation to a user is ensured.
Fig. 1 is a schematic flow chart of a method for recommending information provided in an embodiment of the present specification, which specifically includes the following steps:
s100: user data of a user is acquired.
The execution subject of the information recommendation method provided by this specification may be a server, or a terminal device such as a desktop computer. For convenience of description, the information recommendation method provided in this specification is described below with a server as the execution subject.
Before recommending information to a user, the server may first obtain user data of the user, where the obtained user data of the user may include: at least one of attribute data of the user, historical behavior data of the user for the target service, service risk information corresponding to the user, historical service data corresponding to the user, and related information corresponding to each piece of historical recommendation information sent to the user.
The attribute data of the user mentioned here is mainly used to reflect the user's basic personal situation, and may specifically refer to data such as the user's age, sex, constellation, occupation, income level, education level, and city of residence. The historical behavior data of the user for the target service can be used to reflect the user's actual historical operations on the service, and the server can determine whether to recommend information to the user according to the obtained historical behavior data. For example, in a credit card transaction service, the user's historical behavior data includes exposure to recommendation information for transacting a credit card while browsing information, clicking on the recommendation information for transacting a credit card, executing an application operation for transacting a credit card, submitting a form for transacting a credit card, and the like. Therefore, if the user has historically clicked on and applied through the advertisement (recommendation information) for the credit card transaction service, it is judged that the user is willing to transact the credit card service, and the recommendation information of the credit card transaction service can be sent to the user. If the user has never clicked on the advertisement of the credit card transaction service, the recommendation information of the credit card transaction service may not be sent to the user.
The service risk information corresponding to the user is mainly used for reflecting whether the user brings certain service risk to the service in the service execution process. The business risk information may specifically include: credit rating of the user, record of breach, etc.
The historical service data corresponding to the user may include the user's historical order data and the like. The related information corresponding to each piece of historical recommendation information sent to the user may include the number of times information has been recommended to the user and the times at which the recommendation information was sent, through channels such as Artificial Intelligence (AI) outbound calls, text messages, and advertisement placement.
It should be noted that, in this embodiment of the present specification, the server may determine whether to send recommendation information for a target service to a user according to the obtained user data. The target service mentioned herein may refer to any service, such as an order take-out service, a credit card transaction service, a financial service, a travel service, and the like, and the description does not specifically limit the target service.
Further, the above mentioned historical service data of the user may refer to historical service data of the user for the target service, and of course, historical service data of other services may also be included. Correspondingly, if the acquired historical service data of the user contains historical service data of other services, the server can also determine whether to send recommendation information for the target service to the user according to the historical service data of other services. For example, if the target service is a credit card transaction service and the other services are shopping services, if the server determines that the user frequently performs online shopping based on the acquired historical service data, it may be determined that the user may have a requirement for transacting a credit card, and then the recommendation information of the credit card transaction service is sent to the user.
S102: and determining characteristic data corresponding to the user according to the user data of the user.
In this embodiment, each user has corresponding user data, so the server may determine the feature data corresponding to the user according to that user data, where the feature data of the user is used to indicate certain preference features reflected by the user. According to actual service requirements, the server can extract data of specified dimensions related to the service from the user data and convert it into corresponding feature vectors, thereby obtaining feature data in the form of feature vectors. The server may convert the user data into feature vectors in various ways, for example, with a Continuous Bag-of-Words model (CBOW), a Word2vec model, and the like; the specific manner is not limited in this specification.
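As an illustrative aid only (not part of the patent), the following Python sketch shows one way the step described above could look; the field names, vocabulary, and bucket boundaries are assumptions made for the example.

```python
# Illustrative sketch: select specified dimensions from raw user data and
# convert them into a flat feature vector. All field names, vocabularies and
# bucket boundaries below are hypothetical, not taken from the patent.
from typing import Dict, List

AGE_BUCKETS = [18, 25, 35, 45, 60]                          # assumed boundaries
OCCUPATIONS = ["student", "engineer", "teacher", "other"]   # assumed vocabulary

def one_hot(value: str, vocabulary: List[str]) -> List[float]:
    """Encode a categorical value as a one-hot vector over a fixed vocabulary."""
    return [1.0 if value == v else 0.0 for v in vocabulary]

def build_feature_vector(user_data: Dict) -> List[float]:
    """Turn raw user data (attributes, behavior, risk info) into a feature vector."""
    features: List[float] = []
    # Attribute data: thresholded age plus one-hot occupation.
    age = user_data.get("age", 0)
    features.extend([1.0 if age >= b else 0.0 for b in AGE_BUCKETS])
    features.extend(one_hot(user_data.get("occupation", "other"), OCCUPATIONS))
    # Historical behavior for the target service: exposure/click/apply/submit counts.
    for key in ("exposures", "clicks", "applications", "submissions"):
        features.append(float(user_data.get(key, 0)))
    # Service risk information: e.g. a roughly normalized credit score.
    features.append(float(user_data.get("credit_score", 0)) / 1000.0)
    return features

# Example usage with made-up data.
vector = build_feature_vector({"age": 30, "occupation": "engineer",
                               "clicks": 4, "exposures": 20, "credit_score": 720})
```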
S104: and inputting the characteristic data into each pre-trained recommendation model to determine a first recommendation probability for information recommendation to the user under the recommendation model as a first recommendation probability corresponding to the recommendation model.
In practical applications, if the recommendation models were trained with the full sample set, the model training time would be very long and the training efficiency low. Therefore, the server can train a plurality of models with partial samples, which reduces the training time and improves the training efficiency. Based on this, the server needs to train each recommendation model before using it. The server can input the determined feature data into each pre-trained recommendation model to determine a first recommendation probability for recommending information to the user under the recommendation model, as the first recommendation probability corresponding to that recommendation model.
In this embodiment of the specification, the feature data input into different recommendation models may be the same, and the first recommendation probability output by a recommendation model is used to indicate the probability, predicted by that recommendation model, that the user will execute the target service after acquiring the recommendation information for the target service. In the subsequent steps, the server can determine whether to recommend information to the user based on the first recommendation probabilities corresponding to the user under the different recommendation models.
In this embodiment of the specification, the recommendation model mentioned above needs to be trained in advance. Specifically, the server may obtain historical user data of users and determine, according to a user's historical behavior data for the target service, whether the user completed a complete target service process, so as to determine whether that user's data constitutes a positive sample or a negative sample.
For example, in the credit card transaction service, the historical recommendation information sent by the server to the user involves four stages in sequence: exposure (the historical recommendation information is exposed while the user browses information), click (the user clicks the historical recommendation information), application (the user applies for the credit card transaction based on the historical recommendation information), and submission (the user completes the final form and submits it based on the historical recommendation information). For a user, as long as the user did not complete the final submission operation for the received historical recommendation information, the user data at the time the user received that historical recommendation information is counted as a negative sample; only if the user completed exposure, click, application, and submission in sequence for the historical recommendation information is the user data at the time the user received that historical recommendation information counted as a positive sample.
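For illustration only, a minimal sketch of the labeling rule described above is given below; it assumes each historical recommendation record carries boolean flags for the four stages, and the field names are hypothetical.

```python
# A minimal sketch of the labeling rule described above. The record field names
# ("exposed", "clicked", "applied", "submitted") are assumptions for illustration;
# labels here use 1 for positive and 0 for negative.
def label_sample(record: dict) -> int:
    """Return 1 only if the user completed exposure, click, application and
    submission for this piece of recommendation information; otherwise 0."""
    stages = ("exposed", "clicked", "applied", "submitted")
    return 1 if all(record.get(stage, False) for stage in stages) else 0

assert label_sample({"exposed": True, "clicked": True,
                     "applied": True, "submitted": True}) == 1
assert label_sample({"exposed": True, "clicked": True,
                     "applied": False, "submitted": False}) == 0
```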
In practical applications, since each user has a large amount of historical user data, when training a recommendation model, it is impossible to train the recommendation model by using all the historical user data of each user as training samples. Therefore, it is necessary to select a part of the user data from the total amount of user data as a training sample for model training. In addition, no matter which target service is recommended, the number of times that each user executes the service through the acquired recommendation information in the actual application is far smaller than the number of times that the server sends the recommendation information to each user, so in general, the number of positive samples is far smaller than the number of negative samples. Therefore, in the model training process, the sample sampling rate may refer to the sampling rate of the negative samples, that is, when training a recommended model, the sample set may be constructed by all the positive samples and part of the negative samples determined based on the sample sampling rate, as shown in fig. 2.
Fig. 2 is a schematic flowchart of constructing a sample set according to an embodiment of the present disclosure.
The triangles in fig. 2 represent positive samples and the rectangles represent negative samples. A positive sample indicates that the user executed the service corresponding to the recommendation information after acquiring the recommendation information, and a negative sample indicates that the user did not execute the service corresponding to the recommendation information after acquiring it. The server constructs a sample set by selecting all positive samples from the full sample set and selecting part of the negative samples according to the sample sampling rate.
In the embodiment of the present specification, the sample sampling rates corresponding to different recommendation models are different, that is, the proportion of negative samples in the sample set corresponding to each recommendation model is different. The higher the sample sampling rate is, the more negative samples in the sample set are, and compared with the sample set with the low sample sampling rate, the sample set with the high sample sampling rate is closer to the full sample set, so that different sample sampling rates are set for different recommendation models, and the influence of the different sample sampling rates on the recommendation models is determined. The sample sampling rate of each recommended model may be manually set according to actual requirements, or may be determined according to a preset distribution rule (such as beta distribution), and the method for determining the sample sampling rate is not limited in this specification.
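As a sketch of the construction shown in fig. 2 (an illustration, not the patent's implementation), the following code keeps every positive sample and keeps each negative sample with probability equal to the sample sampling rate; the (feature_vector, label) data layout, the 1/0 labels, and the seeded random generator are assumptions for the example.

```python
# Sketch of the sample-set construction in fig. 2: keep all positive samples and
# keep each negative sample with probability beta (the sample sampling rate).
# Samples are assumed to be (feature_vector, label) pairs, label 1 = positive.
import random

def build_sample_set(full_samples, beta, seed=0):
    rng = random.Random(seed)
    sample_set = []
    for features, label in full_samples:
        if label == 1 or rng.random() < beta:
            sample_set.append((features, label))
    return sample_set

# Different recommendation models use different sampling rates, for example:
sampling_rates = {"model_a": 0.05, "model_b": 0.10, "model_c": 0.20}
```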
In the process of training a recommendation model, for each training sample in the sample set corresponding to the recommendation model, the server may input the feature data corresponding to the training sample into the recommendation model to obtain a predicted recommendation probability corresponding to the training sample, and train the recommendation model by taking minimizing the deviation between the predicted recommendation probability and the sample label corresponding to the training sample as the optimization target.
It should be noted that the recommendation model mentioned above may be a conventional model, such as a Gradient Boosting Decision Tree (GBDT), Extreme Gradient Boosting (XGBoost), or a Light Gradient Boosting Machine (LightGBM); the recommendation model is not limited here. In addition, the downsampling mentioned in this specification may take various forms, such as random downsampling, the EasyEnsemble algorithm for imbalanced learning, or bootstrap aggregating (Bagging).
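For concreteness, the sketch below trains one such model with scikit-learn's GradientBoostingClassifier as a stand-in GBDT implementation; the patent does not prescribe any particular library, so this is only one possible realization of the training step described above.

```python
# One possible realization of the training step: a GBDT trained on the sampled
# sample set, minimizing the deviation (log loss) between the predicted
# recommendation probability and the sample label.
from sklearn.ensemble import GradientBoostingClassifier

def train_recommendation_model(sample_set):
    """sample_set: list of (feature_vector, label) pairs built at one sampling rate."""
    X = [features for features, _ in sample_set]
    y = [label for _, label in sample_set]
    model = GradientBoostingClassifier()  # uses a log-loss objective by default
    model.fit(X, y)
    return model

def first_recommendation_probability(model, feature_vector):
    """Positive-class probability, i.e. the first recommendation probability."""
    return model.predict_proba([feature_vector])[0][1]
```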
S106: and correcting the first recommendation probability corresponding to the recommendation model according to the sample sampling rate corresponding to the recommendation model during model training, and determining the corrected recommendation probability corresponding to the recommendation model.
In this embodiment of the specification, the server constructs the sample set corresponding to the recommendation model according to the sample sampling rate and trains the recommendation model with that sample set. Because the sampled sample set differs from the full sample set, the first recommendation probability output by the recommendation model trained on the sampled sample set also differs from the recommendation probability that would be output by a recommendation model trained on the full sample set. Therefore, the first recommendation probability corresponding to the recommendation model can be corrected according to the sample sampling rate corresponding to the recommendation model during model training, and the corrected recommendation probability corresponding to the recommendation model can be determined, so that the corrected recommendation probability output by the recommendation model is close to the recommendation probability that would be output by a recommendation model trained with the full sample set.
In the correction process, a predetermined probability correction rule corresponding to the recommendation model needs to be obtained, and the probability correction rule is determined by a corresponding relation between a recommendation probability for the training sample obtained after the recommendation model is trained through a full sample set and a recommendation probability for the training sample obtained after the recommendation model is trained through the sample set. For any recommendation model, the training samples included in the sample set corresponding to the recommendation model are part of the training samples in the full sample set.
In the embodiment of the present specification, the probability correction rule may be determined in the following manner:
p(s|y,x)=p(s|y)
In the above formula, s indicates whether a training sample in the full sample set is selected into the sample set by downsampling, where s=1 means selected and s=0 means not selected; y indicates whether the training sample is a positive sample or a negative sample, where y=1 denotes a positive sample and y=-1 denotes a negative sample; and x denotes the feature data of the training sample. p(s|y,x) denotes the conditional probability of whether a training sample is selected given that its feature data is x and its label is y. Based on the principle of downsampling, s is independent of x, i.e., the probability that a training sample is selected is independent of its feature data, so p(s|y,x) can be simplified to p(s|y), which denotes the conditional probability of whether a training sample is selected given only its label.
In this embodiment, since all positive samples are retained while only part of the negative samples are selected from all negative samples at the sampling rate β, the probability that a training sample is selected into the sample set is given by the following formulas:
p(s=1|y=1)=1
p(s=1|y=-1)=β
where p(s=1|y=1)=1 denotes the probability that a positive sample is selected into the sample set, and p(s=1|y=-1)=β denotes the probability that a negative sample is selected into the sample set.
Further, taking a positive sample as an example, according to a bayesian formula, after the recommendation model is trained by a sample set constructed after sampling, the probability that the training sample is the positive sample is output as follows:
p(y=1|x, s=1) = p(s=1|y=1)·p(y=1|x) / [p(s=1|y=1)·p(y=1|x) + p(s=1|y=-1)·p(y=-1|x)]
By substituting p(s=1|y=1)=1 and p(s=1|y=-1)=β into the Bayesian formula, the following formula can be obtained:
p(y=1|x, s=1) = p(y=1|x) / [p(y=1|x) + β·p(y=-1|x)]
For a positive sample whose feature data is x, p(y=1|x, s=1) represents the probability, output by the recommendation model trained with the sample set constructed at the sampling rate β, that the training sample is a positive sample, and may be denoted as p; p(y=1|x) represents the probability, output by the recommendation model trained with the full sample set, that the training sample is a positive sample, and may be denoted as p_s; and p(y=-1|x) represents the probability, output by the recommendation model trained with the full sample set, that the training sample is a negative sample. Further, the relationship between p and p_s can be expressed by the following formula:
p = p_s / (p_s + β·p(y=-1|x))
Here, a training sample is either a positive sample or a negative sample, so p(y=-1|x) in the above formula can be replaced by (1 - p_s), giving p = p_s / (p_s + β·(1 - p_s)). This formula expresses the correspondence between the probability output by a model trained on the partial training samples and the probability output by a model trained on the full sample set. In practical applications, the server needs to obtain p_s from p, i.e., according to the recommendation probability p for a training sample obtained after the recommendation model is trained with the sampled sample set, and the correspondence between the two, obtain the recommendation probability p_s for the same training sample that would be obtained after the recommendation model is trained with the full sample set. Accordingly, the above formula can be further transformed to obtain the probability correction rule, which is as follows:
p_s = β·p / (β·p + 1 - p)
based on this, for each recommendation model, the server may input the first recommendation probability corresponding to the recommendation model and the sample sampling rate corresponding to the recommendation model during model training into the probability correction rule, so as to obtain the corrected recommendation probability corresponding to the recommendation model.
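The correction can be implemented directly; the short sketch below is illustrative and assumes the correction rule p_s = β·p / (β·p + 1 - p) as reconstructed above.

```python
# Probability correction rule from this section: map the first recommendation
# probability p, output by a model trained with negative sampling rate beta,
# to the corrected recommendation probability p_s.
def correct_recommendation_probability(p: float, beta: float) -> float:
    return beta * p / (beta * p + 1.0 - p)

# Example: with beta = 0.1 and p = 0.5 the corrected probability is about 0.09,
# compensating for the inflated positive rate caused by downsampling negatives.
corrected = correct_recommendation_probability(0.5, 0.1)
```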
S108: and recommending information to the user according to the corrected recommendation probability corresponding to each recommendation model.
In this embodiment, the server may input the feature data of the user into each recommendation model, obtain the first recommendation probability corresponding to each recommendation model of the user, correct the first recommendation probability corresponding to each recommendation model of the user according to the sample sampling rate corresponding to each recommendation model, determine the corrected recommendation probability corresponding to each recommendation model of the user, and recommend information to the user.
The server can average the corrected recommendation probabilities of the user corresponding to the recommendation models and recommend information to the user according to the average. The server may also determine a lowest value of the corrected recommendation probability corresponding to each recommendation model of the user, and recommend information to the user when determining that the lowest value is higher than the set threshold, and not recommend information to the user if the lowest value is not higher than the set threshold.
The server may also perform a weighted summation of the corrected recommendation probabilities corresponding to the recommendation models for the user to obtain a second recommendation probability for recommending information to the user, and recommend information to the user according to the second recommendation probability. The weights corresponding to different recommendation models are not necessarily the same; the weight corresponding to each recommendation model may be preset manually or determined by the identification accuracy of each recommendation model. Specifically, after the model training of each recommendation model is completed, the server may, for each recommendation model, verify the recommendation model through a verification set corresponding to the recommendation model and determine the identification accuracy of the recommendation model for the verification set as the identification accuracy corresponding to the recommendation model. The verification set may be the sample set used by the recommendation model during training, a preset verification set specially used for verification, or the full sample set. Further, the server may determine the weight corresponding to each recommendation model according to the identification accuracy corresponding to each recommendation model. The weight calculation may specifically refer to the following formula:
w_i = auc_i / Σ_j auc_j
This formula normalizes the identification accuracy of one recommendation model for the verification set over the identification accuracies of all recommendation models, so as to obtain the weight corresponding to that recommendation model. The higher the identification accuracy corresponding to a recommendation model, the greater its weight, where auc_i denotes the identification accuracy of recommendation model i for the verification set and w_i denotes the weight corresponding to recommendation model i. The server obtains the second recommendation probability for recommending information to the user according to the corrected recommendation probability corresponding to each recommendation model and the weight corresponding to each recommendation model, and recommends information to the user according to the second recommendation probability.
For example, assuming that the target service is the credit card transaction service, if the server determines that the second recommendation probability is higher than a preset probability threshold, it may decide to send the recommendation information of the credit card transaction service to the user; otherwise, it does not send the recommendation information of the credit card transaction service to the user.
It can be seen from the above process that the server corrects the first recommendation probability corresponding to the recommendation model according to the sample sampling rate used by the recommendation model during model training, i.e., the relationship between the partial samples used to train the recommendation model and the full set of samples, so that the corrected recommendation probability output by a recommendation model trained on part of the samples selected at the sampling rate is close to the result that would be output by a recommendation model trained on all samples.
Because the sample sampling rates corresponding to the recommendation models are different, the recommendation models have different recognition capabilities when recognizing the same data. Therefore, after the server performs a weighted summation of the corrected recommendation probabilities output by the recommendation models according to the weights corresponding to the recommendation models, the resulting second recommendation probability comprehensively takes the recognition capabilities of all recommendation models into account, which further ensures that the second recommendation probability is as close as possible to the recommendation probability that would be output by a recommendation model trained with the full sample set.
In other words, the information recommendation method provided by the specification corrects the recommendation probability output by the recommendation model by using the sample sampling rate used by each recommendation model in the model training process, so that the effect of performing model training by using a full amount of samples is achieved under the condition of not using the full amount of samples to perform model training, and the accuracy of information recommendation to a user is effectively ensured.
Based on the same idea, the present specification further provides a corresponding information recommendation apparatus, as shown in fig. 3.
Fig. 3 is a schematic structural diagram of an information recommendation apparatus provided in an embodiment of this specification, which specifically includes:
an obtaining module 300, configured to obtain user data of a user, where the user data includes: at least one of attribute data of the user, historical behavior data of the user for a target service, service risk information corresponding to the user, historical service data corresponding to the user, and related information corresponding to each piece of historical recommendation information sent to the user;
a determining module 302, configured to determine, according to the user data of the user, feature data corresponding to the user;
an input module 304, configured to input the feature data into each pre-trained recommendation model to determine a first recommendation probability for recommending information to the user under the recommendation model, where the first recommendation probability is used as a first recommendation probability corresponding to the recommendation model;
the correcting module 306 is configured to correct the first recommendation probability corresponding to the recommendation model according to a sample sampling rate corresponding to the recommendation model during model training, and determine a corrected recommendation probability corresponding to the recommendation model;
and the recommending module 308 is configured to recommend information to the user according to the corrected recommending probability corresponding to each recommending model.
Optionally, the user data obtained by the obtaining module 300 includes: at least one of attribute data of the user, historical behavior data of the user for a target service, service risk information corresponding to the user, historical service data corresponding to the user, and related information corresponding to each piece of historical recommendation information sent to the user.
Optionally, the recommending module 308 is specifically configured to send recommendation information for the target service to the user according to the corrected recommendation probability corresponding to each recommendation model.
Optionally, the input module 304 is specifically configured to: for each recommendation model, determine a sample sampling rate corresponding to the recommendation model, where the sample sampling rates corresponding to different recommendation models are different; select, according to the sample sampling rate corresponding to the recommendation model, all positive samples and part of the negative samples from historical user data of each user to construct a sample set corresponding to the recommendation model, where a positive sample indicates that the user executed the service corresponding to the recommendation information after acquiring the recommendation information and a negative sample indicates that the user did not execute the service corresponding to the recommendation information after acquiring the recommendation information; for each training sample in the sample set, input the feature data corresponding to the training sample into the recommendation model to obtain a predicted recommendation probability corresponding to the training sample; and train the recommendation model by taking minimizing the deviation between the predicted recommendation probability and the sample label corresponding to the training sample as the optimization target.
Optionally, the correcting module 306 is specifically configured to obtain a predetermined probability correcting rule corresponding to the recommended model, and determine a corrected recommended probability corresponding to the recommended model according to the probability correcting rule, a sample sampling rate corresponding to the recommended model during model training, and a first recommended probability corresponding to the recommended model.
Optionally, the correcting module 306 is specifically configured to: for each training sample included in the sample set used by the recommendation model during model training, determine, according to the sample sampling rate corresponding to the recommendation model, a correspondence between the recommendation probability for the training sample that would be obtained after the recommendation model is trained on the full sample set and the recommendation probability for the training sample obtained after the recommendation model is trained on the sample set, where the training samples included in the sample set are part of the training samples in the full sample set; and determine, according to the correspondence, the probability correction rule corresponding to the recommendation model.
Optionally, the recommending module 308 is specifically configured to determine a second recommendation probability for recommending information to the user according to the corrected recommendation probability corresponding to each recommendation model, and recommend information to the user according to the second recommendation probability.
Optionally, the recommending module 308 is specifically configured to obtain the second recommendation probability according to the predetermined weight corresponding to each recommendation model and the corrected recommendation probability corresponding to each recommendation model.
Optionally, the recommending module 308 is specifically configured to verify, for each recommended model, the recommended model through a verification set corresponding to the recommended model, determine an identification accuracy of the recommended model for the verification set, serve as the identification accuracy corresponding to the recommended model, and determine a weight corresponding to each recommended model according to the identification accuracy corresponding to each recommended model.
The present specification also provides a computer-readable storage medium storing a computer program, which can be used to execute the method of information recommendation provided in fig. 1 above.
The present specification also provides a schematic structural diagram of the electronic device shown in fig. 4. As shown in fig. 4, the electronic device for information recommendation includes a processor, an internal bus, a network interface, a memory, and a non-volatile memory, and may also include hardware required by other services. The processor reads the corresponding computer program from the non-volatile memory into the memory and then runs it to implement the information recommendation method described in fig. 1. Of course, besides the software implementation, this specification does not exclude other implementations, such as logic devices or a combination of software and hardware; that is, the execution subject of the following processing flow is not limited to logical units and may also be hardware or logic devices.
In the 1990s, an improvement to a technology could be clearly distinguished as either an improvement in hardware (for example, an improvement to a circuit structure such as a diode, a transistor, or a switch) or an improvement in software (an improvement to a method flow). However, as technology develops, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain a corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be realized with hardware entity modules. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. A digital system is "integrated" on a PLD by the designer's own programming, without requiring a chip manufacturer to design and fabricate a dedicated integrated circuit chip. Moreover, nowadays, instead of manually fabricating integrated circuit chips, this kind of programming is mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development and writing, and the original code to be compiled must also be written in a specific programming language, which is called a Hardware Description Language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); at present, VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are the most commonly used. It should also be clear to those skilled in the art that a hardware circuit implementing a logical method flow can be readily obtained simply by slightly logically programming the method flow in one of the above hardware description languages and programming it into an integrated circuit.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an Application Specific Integrated Circuit (ASIC), a programmable logic controller, or an embedded microcontroller; examples of such controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320. A memory controller may also be implemented as part of the control logic of the memory. Those skilled in the art will also appreciate that, in addition to implementing the controller purely as computer-readable program code, the method steps can be logically programmed so that the controller achieves the same functions in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may therefore be considered a hardware component, and the means included therein for performing the various functions may also be considered structures within the hardware component. Or even the means for performing the functions may be regarded as both software modules for performing the method and structures within the hardware component.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functions of the various elements may be implemented in the same one or more software and/or hardware implementations of the present description.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may store information by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media, such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed, or elements inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a/an" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, the description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
This description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner; identical or similar parts among the embodiments may be referred to one another, and each embodiment focuses on its differences from the other embodiments. In particular, since the system embodiment is substantially similar to the method embodiment, it is described relatively simply; for relevant points, reference may be made to the corresponding description of the method embodiment.
The above description is only an example of the present specification, and is not intended to limit the present specification. Various modifications and alterations to this description will become apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present specification should be included in the scope of the claims of the present specification.

Claims (11)

1. A method for information recommendation, comprising:
acquiring user data of a user, wherein the user data comprises: at least one of attribute data of the user, historical behavior data of the user for a target service, service risk information corresponding to the user, historical service data corresponding to the user, and related information corresponding to each piece of historical recommendation information sent to the user;
determining characteristic data corresponding to the user according to the user data of the user;
for each pre-trained recommendation model, inputting the characteristic data into the recommendation model to determine a first recommendation probability of recommending information to the user under the recommendation model, as the first recommendation probability corresponding to the recommendation model;
correcting a first recommendation probability corresponding to the recommendation model according to a sample sampling rate corresponding to the recommendation model during model training, and determining a corrected recommendation probability corresponding to the recommendation model, wherein the sample sampling rate corresponding to the recommendation model refers to a proportion of negative samples in a sample set corresponding to the recommendation model in a full sample set, and the sample set is constructed by selecting all positive samples from the full sample set and determining partial negative samples based on the sample sampling rate;
and recommending information to the user according to the corrected recommendation probability corresponding to each recommendation model.
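
As a non-limiting reading aid, the flow recited in claim 1 can be sketched in code. The Python snippet below is purely illustrative: the TrainedModel interface, the correct_probability formula (a common calibration for negative down-sampling), the decision threshold, and the weighted fusion are assumptions made for the sketch, not features required by the claim.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class TrainedModel:
    predict: Callable[[List[float]], float]   # returns the first recommendation probability
    sampling_rate: float                      # negative-sample sampling rate used during training
    weight: float                             # ensemble weight, e.g. derived from validation accuracy

def correct_probability(p_first: float, sampling_rate: float) -> float:
    # A common calibration for negative down-sampling (an assumption here,
    # not necessarily the exact correction rule of the embodiment):
    # p = p' / (p' + (1 - p') / r), where r is the sampling rate.
    return p_first / (p_first + (1.0 - p_first) / sampling_rate)

def recommend(features: List[float], models: List[TrainedModel], threshold: float = 0.5) -> bool:
    corrected = [correct_probability(m.predict(features), m.sampling_rate) for m in models]
    total_weight = sum(m.weight for m in models)
    second_probability = sum(m.weight * p for m, p in zip(models, corrected)) / total_weight
    return second_probability >= threshold    # recommend only if the fused probability is high enough

# Toy usage with two stand-in models:
models = [
    TrainedModel(predict=lambda x: 0.30, sampling_rate=0.1, weight=0.6),
    TrainedModel(predict=lambda x: 0.45, sampling_rate=0.2, weight=0.4),
]
print(recommend([0.1, 0.2, 0.3], models))
```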
2. The method of claim 1, wherein recommending information to the user according to the corrected recommendation probability corresponding to each recommendation model specifically comprises:
and sending recommendation information aiming at the target service to the user according to the corrected recommendation probability corresponding to each recommendation model.
3. The method of claim 1, wherein training the recommendation model specifically comprises:
for each recommendation model, determining a sample sampling rate corresponding to the recommendation model, wherein the sample sampling rates corresponding to different recommendation models are different;
selecting, according to the sample sampling rate corresponding to the recommendation model, all positive samples and part of the negative samples from historical user data of each user, and constructing a sample set corresponding to the recommendation model, wherein the positive samples indicate that the user executed the service corresponding to the recommendation information through the acquired recommendation information, and the negative samples indicate that the user did not execute the service corresponding to the recommendation information after acquiring the recommendation information;
for each training sample in the sample set, inputting the feature data corresponding to the training sample into the recommendation model to obtain a predicted recommendation probability corresponding to the training sample;
and training the recommendation model with minimizing the deviation between the predicted recommendation probability and the sample label corresponding to the training sample as the optimization objective.
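
The sample-set construction and training step of claim 3 can be illustrated in the same spirit. This is only a sketch: logistic regression is a stand-in learner (assuming scikit-learn is available), and build_sample_set and train_recommendation_model are hypothetical helper names; the claim itself does not fix the model type or the sampling procedure beyond keeping all positives and a fraction of the negatives.

```python
import random
from typing import List, Tuple

from sklearn.linear_model import LogisticRegression  # stand-in learner; claim 3 fixes no model type

Sample = Tuple[List[float], int]  # (feature vector, label); 1 = service executed, 0 = not executed

def build_sample_set(full_samples: List[Sample], sampling_rate: float, seed: int = 0) -> List[Sample]:
    # Keep every positive sample and roughly a sampling_rate fraction of the negatives.
    rng = random.Random(seed)
    positives = [s for s in full_samples if s[1] == 1]
    negatives = [s for s in full_samples if s[1] == 0]
    kept_negatives = [s for s in negatives if rng.random() < sampling_rate]
    return positives + kept_negatives

def train_recommendation_model(full_samples: List[Sample], sampling_rate: float):
    sample_set = build_sample_set(full_samples, sampling_rate)
    X = [features for features, _ in sample_set]
    y = [label for _, label in sample_set]
    model = LogisticRegression()   # fitting minimizes log loss, i.e. the deviation between
    model.fit(X, y)                # the predicted probability and the sample label
    return model, sampling_rate    # keep the rate so the probability can be corrected later
```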
4. The method of claim 1, wherein the step of correcting the first recommendation probability corresponding to the recommendation model according to a sample sampling rate corresponding to the recommendation model during model training to determine a corrected recommendation probability corresponding to the recommendation model comprises:
acquiring a predetermined probability correction rule that is based on conditional probability and corresponds to the recommendation model;
and determining the corrected recommendation probability corresponding to the recommendation model according to the probability correction rule, the sample sampling rate corresponding to the recommendation model during model training and the first recommendation probability corresponding to the recommendation model.
5. The method of claim 4, wherein determining the probability correction rule corresponding to the recommendation model specifically comprises:
determining, according to the sample sampling rate corresponding to the recommendation model, a correspondence between the recommendation probability, obtained after the recommendation model is trained through the full sample set, for each training sample in the sample set adopted by the recommendation model during model training, and the recommendation probability for each training sample in that sample set, wherein the training samples in the sample set are part of the training samples in the full sample set;
and determining the probability correction rule corresponding to the recommendation model according to the correspondence.
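
One possible way to obtain such a correspondence, shown here only as an assumed instantiation of the conditional-probability reasoning in claims 4 and 5, is to treat retention of a training sample as a conditioning event and apply Bayes' rule. With sampling rate r (all positives kept, each negative kept with probability r), and writing p for the full-set probability and p' for the probability learned on the sampled set:

```latex
p' \;=\; P(y{=}1 \mid x,\ \text{kept})
    \;=\; \frac{P(y{=}1 \mid x)}{P(y{=}1 \mid x) + r\,P(y{=}0 \mid x)}
    \;=\; \frac{p}{\,p + r\,(1-p)\,}
\qquad\Longrightarrow\qquad
p \;=\; \frac{p'}{\,p' + (1-p')/r\,}.
```

Solving for p yields a corrected recommendation probability from the first recommendation probability p' and the sampling rate r, matching the correct_probability sketch given after claim 1; whether the embodiment uses exactly this rule is an assumption of the sketch.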
6. The method according to claim 1 or 2, wherein the recommending information to the user according to the corrected recommendation probability corresponding to each recommendation model specifically comprises:
determining a second recommendation probability for recommending information to the user according to the corrected recommendation probability corresponding to each recommendation model;
and recommending information to the user according to the second recommendation probability.
7. The method of claim 6, wherein determining a second recommendation probability for recommending information to the user according to the corrected recommendation probabilities corresponding to the recommendation models comprises:
and obtaining the second recommendation probability according to the predetermined weight corresponding to each recommendation model and the correction recommendation probability corresponding to each recommendation model.
8. The method of claim 7, wherein predetermining the weight corresponding to each recommendation model specifically comprises:
for each recommendation model, verifying the recommendation model through a verification set corresponding to the recommendation model, and determining the identification accuracy of the recommendation model on the verification set as the identification accuracy corresponding to the recommendation model;
and determining the weight corresponding to each recommendation model according to the identification accuracy corresponding to each recommendation model.
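
Claims 7 and 8 can be read together as: derive a weight for each model from its identification accuracy on its verification set, then fuse the corrected recommendation probabilities into the second recommendation probability. A minimal sketch, assuming a simple proportional normalization of the accuracies (the claims do not prescribe a particular normalization or fusion formula):

```python
from typing import List

def weights_from_accuracy(accuracies: List[float]) -> List[float]:
    # Proportional normalization of the per-model identification accuracies;
    # the claims do not mandate this particular weighting scheme.
    total = sum(accuracies)
    return [a / total for a in accuracies]

def second_recommendation_probability(corrected_probs: List[float], accuracies: List[float]) -> float:
    weights = weights_from_accuracy(accuracies)
    return sum(w * p for w, p in zip(weights, corrected_probs))

# Example: three models whose validation-set accuracies are 0.82, 0.78 and 0.90.
print(second_recommendation_probability([0.31, 0.27, 0.40], [0.82, 0.78, 0.90]))
```

With these toy numbers the fused second recommendation probability is about 0.33.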
9. An apparatus for information recommendation, comprising:
an obtaining module, configured to obtain user data of a user, where the user data includes: at least one of attribute data of the user, historical behavior data of the user for a target service, service risk information corresponding to the user, historical service data corresponding to the user, and related information corresponding to each piece of historical recommendation information sent to the user;
a determining module, configured to determine the characteristic data corresponding to the user according to the user data of the user;
an input module, configured to, for each pre-trained recommendation model, input the characteristic data into the recommendation model to determine a first recommendation probability of recommending information to the user under the recommendation model, as the first recommendation probability corresponding to the recommendation model;
a correction module, configured to correct the first recommendation probability corresponding to the recommendation model according to the sample sampling rate corresponding to the recommendation model during model training, and determine the corrected recommendation probability corresponding to the recommendation model, wherein the sample sampling rate corresponding to the recommendation model refers to the proportion of negative samples in the sample set corresponding to the recommendation model in the full sample set, and the sample set is constructed by selecting all positive samples from the full sample set and determining partial negative samples based on the sample sampling rate;
and a recommending module, configured to recommend information to the user according to the corrected recommendation probability corresponding to each recommendation model.
10. A computer-readable storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, implements the method of any of the preceding claims 1 to 8.
11. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any of claims 1 to 8 when executing the program.
CN202110280216.8A 2021-03-16 2021-03-16 Information recommendation method and device Active CN113010562B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110280216.8A CN113010562B (en) 2021-03-16 2021-03-16 Information recommendation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110280216.8A CN113010562B (en) 2021-03-16 2021-03-16 Information recommendation method and device

Publications (2)

Publication Number Publication Date
CN113010562A (en) 2021-06-22
CN113010562B (en) 2022-05-10

Family

ID=76408039

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110280216.8A Active CN113010562B (en) 2021-03-16 2021-03-16 Information recommendation method and device

Country Status (1)

Country Link
CN (1) CN113010562B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113722602A * 2021-09-08 2021-11-30 Ping An Medical and Healthcare Management Co., Ltd. Information recommendation method and device, electronic equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109902708A * 2018-12-29 2019-06-18 Huawei Technologies Co., Ltd. Recommendation model training method and related apparatus
CN110019924A * 2017-08-14 2019-07-16 ZTE Corporation Song recommendation method and apparatus, computer device and storage medium
CN112269928A * 2020-10-23 2021-01-26 Baidu Online Network Technology (Beijing) Co., Ltd. User recommendation method and device, electronic equipment and computer readable medium
CN112380449A * 2020-12-03 2021-02-19 Tencent Technology (Shenzhen) Co., Ltd. Information recommendation method, model training method and related device
CN112488782A * 2020-11-18 2021-03-12 Beijing Sankuai Online Technology Co., Ltd. Commodity recommendation method and device, storage medium and electronic equipment
CN112487278A * 2019-09-11 2021-03-12 Huawei Technologies Co., Ltd. Training method of recommendation model, and method and device for predicting selection probability

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11250340B2 (en) * 2017-12-14 2022-02-15 Microsoft Technology Licensing, Llc Feature contributors and influencers in machine learned predictive models

Also Published As

Publication number Publication date
CN113010562A (en) 2021-06-22

Similar Documents

Publication Publication Date Title
CN108460523B (en) Wind control rule generation method and device
CN110278175B (en) Graph structure model training and garbage account identification method, device and equipment
CN114202370A (en) Information recommendation method and device
CN110008991B (en) Risk event identification method, risk identification model generation method, risk event identification device, risk identification equipment and risk identification medium
CN110020427B (en) Policy determination method and device
CN113688313A (en) Training method of prediction model, information pushing method and device
CN110633989A (en) Method and device for determining risk behavior generation model
CN112214652B (en) Message generation method, device and equipment
CN108764915B (en) Model training method, data type identification method and computer equipment
CN110674188A (en) Feature extraction method, device and equipment
CN112966186A (en) Model training and information recommendation method and device
CN113643119A (en) Model training method, business wind control method and business wind control device
CN115238826B (en) Model training method and device, storage medium and electronic equipment
CN116071077B (en) Risk assessment and identification method and device for illegal account
CN114943307A (en) Model training method and device, storage medium and electronic equipment
CN110134860B (en) User portrait generation method, device and equipment
CN113887206B (en) Model training and keyword extraction method and device
CN113010562B (en) Information recommendation method and device
CN116308738B (en) Model training method, business wind control method and device
CN111507726B (en) Message generation method, device and equipment
CN110738562B (en) Method, device and equipment for generating risk reminding information
CN114511376A (en) Credit data processing method and device based on multiple models
CN111461892B (en) Method and device for selecting derived variables of risk identification model
CN111159397B (en) Text classification method and device and server
CN114116816A (en) Recommendation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20221031

Address after: 1311, Floor 13, No. 27, Zhongguancun Street, Haidian District, Beijing 100081

Patentee after: QIANDAI (BEIJING) INFORMATION TECHNOLOGY CO.,LTD.

Patentee after: BEIJING SANKUAI ONLINE TECHNOLOGY Co.,Ltd.

Address before: 100080 2106-030, 9 North Fourth Ring Road, Haidian District, Beijing.

Patentee before: BEIJING SANKUAI ONLINE TECHNOLOGY Co.,Ltd.