CN115619448A - User loss prediction method and device, computer equipment and storage medium - Google Patents

User loss prediction method and device, computer equipment and storage medium

Info

Publication number
CN115619448A
Authority
CN
China
Prior art keywords
prediction
prediction model
training
model
feature data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211370601.2A
Other languages
Chinese (zh)
Inventor
刘兴廷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202211370601.2A priority Critical patent/CN115619448A/en
Publication of CN115619448A publication Critical patent/CN115619448A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q30/0201 Market modelling; Market analysis; Collecting market data
    • G06Q30/0202 Market predictions or forecasting for commercial activities
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/906 Clustering; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning

Abstract

The embodiments of the application belong to the fields of artificial intelligence and financial technology and are applied to customer churn prediction. They relate to a user churn prediction method and device, computer equipment and a storage medium, wherein the user churn prediction method comprises: obtaining feature data of a batch of users according to a model pre-training request; preprocessing the feature data; inputting the preprocessed feature data into an initialized customer churn prediction model for pre-training; obtaining feature data of a target user according to a model prediction request; inputting the data into the customer churn prediction model for prediction to obtain a prediction result; and identifying the churn state of the target user according to the prediction result. The method guarantees the sufficiency and soundness of the feature data during pre-training through the SMOTE algorithm, constructs a random forest tree prediction model before pre-training, and performs parameter optimization on that model with an improved particle swarm algorithm during pre-training, thereby ensuring the high availability and accuracy of the prediction model.

Description

User loss prediction method and device, computer equipment and storage medium
Technical Field
The application relates to the technical field of big data and financial science and technology, in particular to a user loss prediction method and device, computer equipment and a storage medium.
Background
Customer churn is a pain point for most companies' business development: it means that customers abandon the company's existing products or services for some reason. As market competition intensifies, services are continuously upgraded and diversified, and acquiring new customers is far harder than retaining existing ones. It is therefore highly desirable for an enterprise to predict customers' churn intentions in advance and to retain them through improved service. Research on customer churn is extensive: some methods analyze and summarize the customer's current situation through traditional statistical theory to judge whether the user will churn, while others perform customer churn prediction by training neural networks on historical user data sets.
Taking insurance customers as an example, analysis of a company's historical customer information yields dozens of main feature attributes that influence customer purchases, so customer churn prediction is a high-dimensional prediction problem, and how to ensure the high availability and accuracy of a prediction model is a technical problem that urgently needs to be solved.
Disclosure of Invention
The embodiment of the application aims to provide a user churn prediction method, a user churn prediction device, computer equipment and a storage medium, so that high availability and accuracy of a client churn prediction model are achieved.
In order to solve the above technical problem, an embodiment of the present application provides a user churn prediction method, which adopts the following technical solutions:
a user churn prediction method comprises the following steps:
acquiring feature data of a batch of users according to a model pre-training request, wherein the pre-training request comprises an acquisition address corresponding to the feature data of the batch of users, and the feature data comprises gender, age, job category, marital status, place of residence, household income level, credit rating, education level, insurance products purchased, payment method, price, purchase channel and number of payments;
expanding the characteristic data according to an SMOTE algorithm;
inputting the preprocessed characteristic data serving as a training set into an initialized customer loss prediction model for pre-training to obtain a customer loss prediction model finished by pre-training;
acquiring feature data of a target user according to a model prediction request, wherein the prediction request comprises an acquisition address corresponding to the feature data of the target user;
inputting the characteristic data serving as a prediction set into the customer churn prediction model for prediction to obtain a prediction result;
and identifying the attrition status of the target user according to the prediction result, wherein the attrition status comprises a churned status and a non-churned status.
Further, the step of performing expansion processing on the feature data according to the SMOTE algorithm specifically includes:
using a first algorithm formula: C = A + r(0,1) × |A − B|, performing data expansion on the feature data by taking a single user as a unit, wherein A represents an original sample corresponding to any first user, that is, the feature data before preprocessing corresponding to the first user, B represents the feature data of any second user adjacent to A, |A − B| represents the Euclidean distance between A and B, r(0,1) represents a random number between 0 and 1, and C represents a new sample corresponding to the first user, that is, the preprocessed feature data corresponding to the first user;
and acquiring the preprocessed feature data corresponding to each user in the batch of users respectively, and completing the expansion of the feature data of the batch of users.
Further, before the step of inputting the preprocessed feature data as a training set into the initialized customer churn prediction model for pre-training, the method further includes:
Randomly selecting a fixed amount of feature data respectively corresponding to different users from the preprocessed feature data, and constructing a plurality of model construction sets, wherein the number of the model construction sets is less than or equal to the number of the users in batches;
utilizing a binary tree splitting method to construct a classification tree for each model construction set, obtaining a classification tree corresponding to each model construction set, integrating the classification trees into a preset processing layer to generate a random forest tree, and completing construction of a random forest tree prediction model;
setting a binary threshold for each feature in the feature data in advance, taking the binary threshold as a fixed configuration parameter, taking the type quantity of the feature data and the quantity of the classification trees as dynamic configuration parameters, and initializing the random forest tree prediction model;
and taking the initialized random forest tree prediction model as an initialized customer churn prediction model.
Further, the step of inputting the preprocessed feature data as a training set into an initialized customer churn prediction model for pre-training specifically includes:
acquiring preprocessed feature data corresponding to each user respectively by taking a single user as a unit, and constructing a sub-training set;
Inputting the sub-training set into the customer loss prediction model to perform loss prediction, and obtaining a prediction result;
performing probability operation based on a preset loss reference table and the prediction result to obtain the accuracy of the customer loss prediction model, wherein the loss reference table comprises the loss state of each user;
judging whether parameter tuning optimization processing needs to be carried out on the customer loss prediction model or not according to the accuracy and a preset accuracy threshold;
if the accuracy rate meets the accuracy rate threshold value, determining that the customer attrition prediction model is pre-trained;
and if the accuracy does not meet the accuracy threshold, performing tuning processing on the customer loss prediction model based on an improved particle swarm algorithm until the accuracy meets the accuracy threshold, and completing pre-training of the customer loss prediction model.
Further, the step of performing tuning processing on the client attrition prediction model based on the improved particle swarm optimization until the accuracy meets the accuracy threshold, and completing pre-training of the client attrition prediction model specifically includes:
iteratively updating the customer attrition prediction model according to the improved particle swarm optimization until the accuracy meets the accuracy threshold, and completing iteration to obtain the current accuracy;
Based on a preset second algorithm formula: y = am + bn + c, and obtaining an optimal dynamic configuration parameter of the customer churn prediction model, where y is the current accuracy, a, b, and c are preset constants, m represents the type number of the feature data in the dynamic configuration parameter, and n represents the number of the classification trees in the dynamic configuration parameter;
and replacing the dynamic configuration parameters during initialization with the optimal dynamic configuration parameters to finish the tuning treatment of the customer attrition prediction model.
Further, in the step of performing the iterative updating of the customer churn prediction model according to the improved particle swarm algorithm, the method further comprises:
based on a preset third algorithm formula (rendered only as image BDA0003925383150000041 in the publication), carrying out convergence control on the iteration process, wherein ω represents the convergence constant correspondingly set for each iteration count, k represents the current iteration count, D represents the maximum iteration count, r(0,1) represents a random number between 0 and 1, and ω1 and ω2 represent inertia random numbers.
Further, the step of inputting the feature data serving as a prediction set into the customer churn prediction model for prediction to obtain a prediction result specifically includes:
acquiring the characteristic data through an input layer of the customer churn prediction model;
Classifying the feature data acquired by the input layer according to a random forest tree in a processing layer of the customer churn prediction model;
outputting a classification processing result through an output layer of the customer churn prediction model;
and carrying out loss state statistics on the classification processing results, and taking the loss state statistics results as prediction results of the target users.
In order to solve the above technical problem, an embodiment of the present application further provides a user churn prediction apparatus, which adopts the following technical scheme:
a user churn prediction apparatus comprising:
the training data acquisition module is used for acquiring feature data of a batch of users according to a model pre-training request, wherein the pre-training request comprises an acquisition address corresponding to the feature data of the batch of users, and the feature data comprises gender, age, job category, marital status, place of residence, household income level, credit rating, education level, insurance products purchased, payment method, price, purchase channel and number of payments;
the preprocessing module is used for performing expansion processing on the feature data according to the SMOTE algorithm;
the model pre-training module is used for inputting the preprocessed characteristic data serving as a training set into the initialized customer loss prediction model for pre-training to obtain a customer loss prediction model after pre-training;
the prediction data acquisition module is used for acquiring feature data of a target user according to a model prediction request, wherein the prediction request comprises an acquisition address corresponding to the feature data of the target user;
the prediction result acquisition module is used for inputting the characteristic data serving as a prediction set into the customer attrition prediction model for prediction to acquire a prediction result;
and the prediction result identification module is used for identifying the loss state of the target user according to the prediction result, wherein the loss state comprises a lost state and a non-lost state.
In order to solve the above technical problem, an embodiment of the present application further provides a computer device, which adopts the following technical solutions:
a computer device comprising a memory and a processor, the memory having computer readable instructions stored therein, the processor implementing the steps of the user churn prediction method when executing the computer readable instructions.
In order to solve the above technical problem, an embodiment of the present application further provides a computer-readable storage medium, which adopts the following technical solutions:
a computer readable storage medium having computer readable instructions stored thereon which, when executed by a processor, implement the steps of a user churn prediction method as described above.
Compared with the prior art, the embodiment of the application mainly has the following beneficial effects:
according to the user loss prediction method, the characteristic data of users in batches are obtained according to the model pre-training request; expanding the characteristic data according to an SMOTE algorithm; inputting the preprocessed feature data serving as a training set into the initialized customer churn prediction model for pre-training to obtain a customer churn prediction model after pre-training is completed; acquiring characteristic data of a target user according to the model prediction request; inputting the characteristic data serving as a prediction set into the customer churn prediction model for prediction to obtain a prediction result; and identifying the loss state of the target user according to the prediction result. According to the method, the sufficiency and the scientificity of the characteristic data during pre-training are guaranteed through the SMOTE algorithm, the random forest tree prediction model (RF) is constructed before pre-training, the parameter optimization is carried out on the random forest tree prediction model through the improved Particle Swarm Optimization (PSO) during pre-training, and the high availability and the accuracy of the prediction model are guaranteed.
Drawings
In order to more clearly illustrate the solution of the present application, the drawings needed for describing the embodiments of the present application will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and that other drawings can be obtained by those skilled in the art without inventive effort.
FIG. 1 is an exemplary system architecture diagram in which the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of a user churn prediction method according to the present application;
FIG. 3 is a flowchart of an embodiment of constructing a random forest tree prediction model according to the user churn prediction method of the present application;
FIG. 4 is a flow diagram of one embodiment of step 203 shown in FIG. 2;
FIG. 5 is a flow diagram of one embodiment of step 406 of FIG. 4;
FIG. 6 is a flow diagram for one embodiment of step 205 shown in FIG. 2;
FIG. 7 is a schematic block diagram illustrating an embodiment of a user churn prediction apparatus according to the present application;
FIG. 8 is a schematic diagram of a structure of one embodiment of a random forest tree model building module according to the present application;
FIG. 9 is a block diagram illustrating one embodiment of the module 703 of FIG. 7;
FIG. 10 is a block diagram illustrating one embodiment of module 7036 of FIG. 9;
FIG. 11 is a schematic block diagram of one embodiment of a computer device according to the present application.
Detailed Description
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application; the terms "including" and "having," and any variations thereof, in the description and claims of this application and the description of the above figures are intended to cover non-exclusive inclusions. The terms "first," "second," and the like in the description and claims of this application or in the above-described drawings are used for distinguishing between different objects and not for describing a particular order.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein may be combined with other embodiments.
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. Network 104 is the medium used to provide communication links between terminal devices 101, 102, 103 and server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
A user may use terminal devices 101, 102, 103 to interact with a server 105 over a network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have various communication client applications installed thereon, such as a web browser application, a shopping application, a search application, an instant messaging tool, a mailbox client, social platform software, and the like.
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III), MP4 players (Moving Picture Experts Group Audio Layer IV), laptop portable computers, desktop computers, and the like.
The server 105 may be a server providing various services, such as a background server providing support for pages displayed on the terminal devices 101, 102, 103.
It should be noted that, the user churn prediction method provided in the embodiments of the present application is generally executed by a server/terminal device, and accordingly, the user churn prediction apparatus is generally disposed in the server/terminal device.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to fig. 2, a flow diagram of one embodiment of a user churn prediction method according to the present application is shown. The user churn prediction method comprises the following steps:
Step 201, obtaining feature data of a batch of users according to a model pre-training request, wherein the pre-training request includes obtaining addresses corresponding to the feature data of the batch of users.
In this embodiment, the feature data includes gender, age, job category, marital status, place of residence, household income level, credit rating, education level, insurance products purchased, payment method, price, purchase channel, and number of payments.
Taking insurance customers as an example, according to analysis of a company's historical customer information, the main features influencing customer purchases include gender, age, job category, marital status, place of residence, household income level, credit rating, education level, insurance products purchased, payment method, price, purchase channel, number of payments, and the like.
Pre-training the model with the feature data of the batch of users means the model is trained on feature data that corresponds to real customers, so that it better fits the business requirements and the characteristics of those customers.
And step 202, performing expansion processing on the feature data according to the SMOTE algorithm.
In this embodiment, the step of performing expansion processing on the feature data according to the SMOTE algorithm specifically includes: using a first algorithm formula: C = A + r(0,1) × |A − B|, performing data expansion on the feature data by taking a single user as a unit, wherein A represents an original sample corresponding to any first user, that is, the feature data before preprocessing corresponding to the first user, B represents the feature data of any second user adjacent to A, |A − B| represents the Euclidean distance between A and B, r(0,1) represents a random number between 0 and 1, and C represents a new sample corresponding to the first user, that is, the preprocessed feature data corresponding to the first user; and acquiring the preprocessed feature data corresponding to each user in the batch of users respectively, thereby completing the expansion of the feature data of the batch of users.
The feature data are expanded using the SMOTE algorithm, which ensures the sufficiency of the feature data and, through that sufficiency, the high availability and soundness of the prediction model. At the same time, the SMOTE algorithm computes, from the feature data of two users, a new feature point lying between them and uses it as new feature data, so that highly dispersed feature data across multiple users are de-discretized to a certain degree.
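For concreteness, the following is a minimal NumPy sketch of the expansion step as literally described by the first formula above; the nearest-neighbour choice of B and the function and variable names are assumptions, and standard SMOTE would interpolate along the vector (B − A) rather than add the scalar distance |A − B|.

```python
import numpy as np

def expand_user_features(features, rng=None):
    """Generate one synthetic sample per user following the first formula
    C = A + r(0,1) * |A - B|, where B is a neighbouring user of A and
    |A - B| is the Euclidean distance between them."""
    rng = rng or np.random.default_rng()
    synthetic = []
    for i, a in enumerate(features):
        others = np.delete(features, i, axis=0)                     # candidate neighbours B
        b = others[np.argmin(np.linalg.norm(others - a, axis=1))]   # nearest neighbour of A
        r = rng.random()                                            # random number r(0,1)
        c = a + r * np.linalg.norm(a - b)                           # new sample C for this user
        synthetic.append(c)
    return np.vstack([features, np.asarray(synthetic)])             # original plus expanded data
```

Called on a batch such as `expand_user_features(np.array([[30, 1, 5000.0], [42, 0, 8000.0], [28, 1, 5200.0]]))`, the batch doubles in size, with each new row lying near its source user.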
And 203, inputting the preprocessed feature data serving as a training set into the initialized customer churn prediction model for pre-training to obtain a customer churn prediction model finished by pre-training.
In this embodiment, before the step of inputting the preprocessed feature data into the initialized customer churn prediction model as the training set for pre-training, the method further includes: randomly selecting a fixed amount of feature data corresponding to different users from the preprocessed feature data, and constructing a plurality of model construction sets, wherein the number of the model construction sets is less than or equal to the number of the users in batches; constructing a classification tree for each model construction set by using a binary tree splitting method, acquiring a classification tree corresponding to each model construction set, integrating the classification trees into a preset processing layer to generate a random forest tree, and completing construction of a random forest tree prediction model; setting a binary threshold for each feature in the feature data in advance, taking the binary threshold as a fixed configuration parameter, taking the type number of the feature data and the number of the classification trees as dynamic configuration parameters, and initializing the random forest tree prediction model; and taking the initialized random forest tree prediction model as an initialized customer churn prediction model.
By randomly recombining features of the data corresponding to multiple users, the single-target bias that would arise if the binary trees were built per individual user is avoided, multi-feature data fusion in the random forest tree is realized, and the high availability of the prediction model is ensured.
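As an illustration, the sketch below builds such a processing layer with scikit-learn decision trees; sampling a fixed-size random subset of feature columns per tree, and taking churn labels from the preset attrition reference table, are assumptions about details the text leaves open.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def build_random_forest_layer(X, y, n_trees=10, n_features=6, rng=None):
    """Construct the processing-layer random forest: one binary classification
    tree per model construction set, each set drawn from a random subset of
    the preprocessed feature columns in X, with churn labels y."""
    rng = rng or np.random.default_rng()
    forest = []
    for _ in range(n_trees):
        cols = rng.choice(X.shape[1], size=n_features, replace=False)  # random feature recombination
        tree = DecisionTreeClassifier()     # splits are binary at every node
        tree.fit(X[:, cols], y)
        forest.append((cols, tree))         # remember which columns this tree uses
    return forest
```

In this sketch the split thresholds are learned by the trees rather than preset as in step 303; the dynamic configuration parameters map to `n_features` (the feature-type count m) and `n_trees` (the classification-tree count n).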
With continuing reference to fig. 3, fig. 3 is a flowchart of a specific embodiment of constructing a random forest tree prediction model according to the user churn prediction method of the present application, including the steps of:
step 301, randomly selecting a fixed amount of feature data respectively corresponding to different users from the preprocessed feature data, and constructing a plurality of model construction sets, wherein the number of the model construction sets is less than or equal to the number of the users in batch;
step 302, performing classification tree construction on each model construction set by using a binary tree splitting method, acquiring a classification tree corresponding to each model construction set, integrating the classification trees into a preset processing layer to generate a random forest tree, and completing construction of a random forest tree prediction model;
step 303, setting a binary threshold for each feature in the feature data in advance, taking the binary threshold as a fixed configuration parameter, taking the type number of the feature data and the number of the classification trees as dynamic configuration parameters, and initializing the random forest tree prediction model;
And step 304, taking the initialized random forest tree prediction model as an initialized customer churn prediction model.
In this embodiment, the step of inputting the preprocessed feature data as a training set into an initialized customer churn prediction model for pre-training specifically includes: acquiring preprocessed feature data corresponding to each user respectively by taking a single user as a unit, and constructing a sub-training set; inputting the sub-training set into the customer loss prediction model to perform loss prediction, and obtaining a prediction result; performing probability operation based on a preset loss reference table and the prediction result to obtain the accuracy of the customer loss prediction model, wherein the loss reference table comprises the loss state of each user; judging whether parameter tuning optimization processing needs to be carried out on the customer loss prediction model or not according to the accuracy and a preset accuracy threshold; if the accuracy rate meets the accuracy rate threshold value, determining that the customer attrition prediction model is pre-trained; and if the accuracy does not meet the accuracy threshold, performing tuning processing on the customer loss prediction model based on an improved particle swarm algorithm until the accuracy meets the accuracy threshold, and completing pre-training of the customer loss prediction model.
Feature data are obtained per individual user to construct the training set, the prediction model is pre-trained, and any prediction model that needs tuning is dynamically tuned with the improved particle swarm optimization (IPSO) during the pre-training process, ensuring the accuracy of the prediction model.
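The per-user accuracy check against the preset attrition reference table can be pictured as follows; the dictionary-based interfaces, the `predict` call and the 0.9 threshold are illustrative assumptions rather than values given in the text.

```python
def evaluate_pretraining(model, user_features, attrition_reference, threshold=0.9):
    """Run churn prediction for each user's sub-training set, compare with the
    attrition reference table, and decide whether IPSO tuning is still needed."""
    correct = 0
    for user_id, features in user_features.items():   # one sub-training set per user
        predicted = model.predict(features)            # assumed to return one churn/no-churn label
        correct += int(predicted == attrition_reference[user_id])
    accuracy = correct / len(user_features)            # fraction of users predicted correctly
    needs_tuning = accuracy < threshold                # below threshold: tune with IPSO
    return accuracy, needs_tuning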
With continuing reference to FIG. 4, FIG. 4 is a flowchart of one embodiment of step 203 shown in FIG. 2, comprising the steps of:
step 401, obtaining preprocessed feature data corresponding to each user by taking a single user as a unit, and constructing a sub-training set;
step 402, inputting the sub-training set into the customer loss prediction model for loss prediction to obtain a prediction result;
step 403, performing probability calculation based on a preset attrition reference table and the prediction result to obtain the accuracy of the customer attrition prediction model, wherein the attrition reference table comprises the attrition state of each user;
step 404, judging whether parameter tuning is needed to be carried out on the customer loss prediction model according to the accuracy and a preset accuracy threshold;
step 405, if the accuracy meets the accuracy threshold, determining that the customer attrition prediction model pre-training is completed;
and step 406, if the accuracy does not meet the accuracy threshold, performing tuning processing on the customer attrition prediction model based on an improved particle swarm algorithm until the accuracy meets the accuracy threshold, thereby completing pre-training of the customer attrition prediction model.
In this embodiment, the tuning the customer churn prediction model based on the improved particle swarm optimization until the accuracy meets the accuracy threshold, and the pre-training of the customer churn prediction model is completed specifically includes: iteratively updating the customer attrition prediction model according to the improved particle swarm optimization until the accuracy meets the accuracy threshold, and completing iteration to obtain the current accuracy; based on a preset second algorithm formula: y = am + bn + c, and obtaining an optimal dynamic configuration parameter of the customer churn prediction model, where y is the current accuracy, a, b, and c are preset constants, m represents the number of types of the feature data in the dynamic configuration parameter, and n represents the number of the classification trees in the dynamic configuration parameter; and replacing the dynamic configuration parameters during initialization with the optimal dynamic configuration parameters to finish the tuning treatment of the customer attrition prediction model.
And dynamically adjusting the prediction model to be adjusted by using an Improved Particle Swarm Optimization (IPSO), acquiring the optimal dynamic configuration parameters after adjustment, reconfiguring the prediction model, and ensuring the accuracy of the prediction model.
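One way to read step 502 is as a search over candidate dynamic configurations scored by the second formula y = am + bn + c; the grid search below and the candidate lists are assumptions, since the text only states the linear relation between the accuracy and the parameters m and n.

```python
def select_optimal_configuration(a, b, c, m_candidates, n_candidates):
    """Score each (m, n) pair with y = a*m + b*n + c and return the dynamic
    configuration (feature-type count m, classification-tree count n) with
    the highest predicted accuracy y."""
    best = None
    for m in m_candidates:              # candidate numbers of feature types
        for n in n_candidates:          # candidate numbers of classification trees
            y = a * m + b * n + c       # predicted accuracy for this configuration
            if best is None or y > best[0]:
                best = (y, m, n)
    return {"accuracy": best[0], "m": best[1], "n": best[2]}
```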
With continued reference to FIG. 5, FIG. 5 is a flowchart of one embodiment of step 406 of FIG. 4, including the steps of:
step 501, iteratively updating the customer loss prediction model according to the improved particle swarm algorithm until the accuracy meets the accuracy threshold, and completing iteration to obtain the current accuracy;
step 502, based on a preset second algorithm formula: y = am + bn + c, and obtaining an optimal dynamic configuration parameter of the customer churn prediction model, where y is the current accuracy, a, b, and c are preset constants, m represents the number of types of the feature data in the dynamic configuration parameter, and n represents the number of the classification trees in the dynamic configuration parameter;
and 503, replacing the dynamic configuration parameters during initialization with the optimal dynamic configuration parameters, and completing the tuning process of the customer attrition prediction model.
In this embodiment, in the step of performing iterative update on the customer churn prediction model according to the improved particle swarm algorithm, the method further includes: carrying out convergence control on the iteration process based on a preset third algorithm formula (rendered only as image BDA0003925383150000131 in the publication), wherein ω represents the convergence constant correspondingly set for each iteration count, k represents the current iteration count, D represents the maximum iteration count, r(0,1) represents a random number between 0 and 1, and ω1 and ω2 represent inertia random numbers.
And the convergence control is carried out on the iterative process by presetting a third algorithm formula, so that the global searching capability of the particle swarm optimization is ensured and the phenomenon of premature falling into local optimum is prevented.
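The third formula is published only as an image, so the sketch below is an assumed, illustrative inertia-weight schedule consistent with the variables named above (convergence constant ω, iteration count k, maximum iteration count D, random factor r(0,1), inertia random numbers ω1 and ω2); it is not the patented expression.

```python
import random

def inertia_weight(k, D, w1, w2):
    """Illustrative convergence control for the improved particle swarm algorithm:
    the inertia weight starts near w1 for global search and shrinks toward w2 as
    the iteration count k approaches the maximum D, with a random perturbation."""
    r = random.random()                  # r(0,1)
    decay = 1.0 - k / D                  # 1 at the first iteration, 0 at the last
    return w2 + (w1 - w2) * decay * r    # large early, small late, never below w2
```

With such a schedule the swarm explores broadly in early iterations and converges in later ones, which matches the stated goal of preserving global search capability while avoiding premature convergence to a local optimum.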
Step 204, obtaining feature data of a target user according to a model prediction request, wherein the prediction request comprises an obtaining address corresponding to the feature data of the target user.
And step 205, inputting the characteristic data serving as a prediction set into the customer churn prediction model for prediction to obtain a prediction result.
In this embodiment, the step of inputting the feature data as a prediction set into the customer churn prediction model to predict and obtain a prediction result specifically includes: acquiring the characteristic data through an input layer of the customer churn prediction model; classifying the feature data acquired by the input layer according to a random forest tree in a processing layer of the customer churn prediction model; outputting a classification processing result through an output layer of the customer churn prediction model; and performing attrition state statistics on the classification processing result, and taking the attrition state statistical result as a prediction result of the target user.
By acquiring the feature data of the target user, inputting it into the pre-trained random forest tree prediction model, obtaining the prediction result, and identifying the churn state of the target user from that result, intelligent and automatic prediction is realized with artificial intelligence, which is faster and more convenient and reduces interference from human factors.
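Tying the layers together, a possible prediction path for a single target user is sketched below, reusing the `(cols, tree)` forest from the earlier sketch; the majority-vote rule for the output layer's attrition-state statistic is an assumption.

```python
def predict_churn_state(forest, target_features):
    """Input layer: the target user's feature vector. Processing layer: each
    classification tree votes churned (1) or not churned (0) on its feature
    subset. Output layer: the attrition-state statistic over all votes."""
    votes = [int(tree.predict(target_features[cols].reshape(1, -1))[0])
             for cols, tree in forest]
    return "churned" if sum(votes) > len(votes) / 2 else "not churned"
```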
With continued reference to FIG. 6, FIG. 6 is a flowchart of one embodiment of step 205 shown in FIG. 2, comprising the steps of:
step 601, obtaining the characteristic data through an input layer of the customer churn prediction model;
step 602, classifying the feature data acquired by the input layer according to a random forest tree in a processing layer of the customer churn prediction model;
step 603, outputting a classification processing result through an output layer of the customer attrition prediction model;
and step 604, performing attrition state statistics on the classification processing result, and taking the attrition state statistics result as a prediction result of the target user.
Step 206, identifying the attrition status of the target user according to the prediction result, wherein the attrition status comprises a churned status and a non-churned status.
According to the method, characteristic data of users in batches are obtained according to a model pre-training request; expanding the characteristic data according to an SMOTE algorithm; inputting the preprocessed feature data serving as a training set into the initialized customer churn prediction model for pre-training to obtain a customer churn prediction model after pre-training is completed; acquiring feature data of a target user according to the model prediction request; inputting the characteristic data serving as a prediction set into the customer attrition prediction model for prediction to obtain a prediction result; and identifying the loss state of the target user according to the prediction result. According to the method, the adequacy and the scientificity of feature data during pre-training are guaranteed through the SMOTE algorithm, a random forest tree prediction model (RF) is built before pre-training, parameter optimization is carried out on the random forest tree prediction model through the improved Particle Swarm Optimization (PSO) during pre-training, and the high availability and the accuracy of the prediction model are guaranteed.
The embodiments of the application can acquire and process related data based on artificial intelligence technology. Artificial Intelligence (AI) is the theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain the best results.
The artificial intelligence base technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a robot technology, a biological recognition technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and the like.
In the embodiment of the application, all cost change information can be acquired from different data sources through a big data processing technology, the characteristic data of the target user is acquired, the characteristic data is input into a pre-trained random forest tree prediction model, a prediction result is acquired, the loss state of the target user is recognized according to the prediction result, intelligent and automatic prediction is achieved through artificial intelligence, the method is fast and convenient, and interference of human factors is reduced.
With further reference to fig. 7, as an implementation of the method shown in fig. 2, the present application provides an embodiment of a user churn prediction apparatus, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 2, and the apparatus may be specifically applied to various electronic devices.
As shown in fig. 7, the user churn prediction apparatus 700 according to this embodiment includes: a training data acquisition module 701, a preprocessing module 702, a model pre-training module 703, a prediction data acquisition module 704, a prediction result acquisition module 705, and a prediction result recognition module 706. Wherein:
a training data acquisition module 701, configured to acquire feature data of a batch of users according to a model pre-training request, where the pre-training request includes an acquisition address corresponding to the feature data of the batch of users, and the feature data includes gender, age, job category, marital status, place of residence, household income level, credit rating, education level, insurance products purchased, payment method, price, purchase channel, and number of payments;
a preprocessing module 702, configured to perform expansion processing on the feature data according to a SMOTE algorithm;
the model pre-training module 703 is configured to input the preprocessed feature data as a training set into the initialized customer churn prediction model for pre-training, and obtain a customer churn prediction model after pre-training;
a prediction data obtaining module 704, configured to obtain feature data of a target user according to a model prediction request, where the prediction request includes an obtaining address corresponding to the feature data of the target user;
A prediction result obtaining module 705, configured to input the feature data as a prediction set into the customer churn prediction model for prediction, so as to obtain a prediction result;
a prediction result identifying module 706, configured to identify, according to the prediction result, the attrition status of the target user, where the attrition status includes a churned status and a non-churned status.
According to the method, characteristic data of users in batches are obtained according to a model pre-training request; expanding the characteristic data according to the SMOTE algorithm; inputting the preprocessed characteristic data serving as a training set into an initialized customer loss prediction model for pre-training to obtain a customer loss prediction model finished by pre-training; acquiring characteristic data of a target user according to the model prediction request; inputting the characteristic data serving as a prediction set into the customer attrition prediction model for prediction to obtain a prediction result; and identifying the loss state of the target user according to the prediction result. According to the method, the adequacy and the scientificity of feature data during pre-training are guaranteed through the SMOTE algorithm, a random forest tree prediction model (RF) is built before pre-training, parameter optimization is carried out on the random forest tree prediction model through the improved Particle Swarm Optimization (PSO) during pre-training, and the high availability and the accuracy of the prediction model are guaranteed.
With continuing reference to fig. 8, in some embodiments of the present application, the user churn prediction apparatus further includes a random forest tree model building module 707, where the random forest tree model building module 707 includes: a constructed data acquisition module 7071, a random forest tree construction submodule 7072 and a model initialization submodule 7073. Wherein:
a constructed data obtaining module 7071, configured to randomly select, from the preprocessed feature data, a fixed number of feature data corresponding to different users, and construct a plurality of model construction sets, where the number of the model construction sets is less than or equal to the number of the users in batch;
the random forest tree construction submodule 7072 is configured to perform classification tree construction on each model construction set by using a binary tree splitting method, acquire a classification tree corresponding to each model construction set, integrate the classification trees into a preset processing layer to generate a random forest tree, and complete construction of a random forest tree prediction model;
and the model initialization submodule 7073 is configured to set a binary threshold for each feature in the feature data in advance, use the binary threshold as a fixed configuration parameter, use the number of types of the feature data and the number of the classification trees as dynamic configuration parameters, initialize the random forest tree prediction model, and use the initialized random forest tree prediction model as an initialized customer churn prediction model.
By randomly recombining features of the data corresponding to multiple users, the random forest tree model construction module avoids the single-target bias that would arise if the binary trees were built per individual user, realizes multi-feature data fusion in the random forest tree, and ensures the high availability of the prediction model.
With continuing reference to fig. 9, fig. 9 is a schematic diagram of a specific embodiment of the module 703 shown in fig. 7, where the model pre-training module 703 includes: a training set constructing sub-module 7031, a loss predicting sub-module 7032, a model accuracy calculating sub-module 7033, a model tuning judging sub-module 7034, a first processing sub-module 7035 and a second processing sub-module 7036. Wherein:
a training set constructing sub-module 7031, configured to obtain preprocessed feature data corresponding to each user, and construct a sub-training set;
the loss prediction submodule 7032 is configured to input the sub-training set into the customer loss prediction model to perform loss prediction, so as to obtain a prediction result;
the model accuracy calculation submodule 7033 is configured to perform probability calculation based on a preset churn reference table and the prediction result to obtain the accuracy of the customer churn prediction model, where the churn reference table includes churn states of each user;
The model tuning judgment sub-module 7034 is configured to judge whether parameter tuning processing needs to be performed on the client loss prediction model according to the accuracy and a preset accuracy threshold;
a first processing submodule 7035, configured to determine that the customer churn prediction model is pre-trained if the accuracy meets the accuracy threshold;
a second processing submodule 7036, configured to, if the accuracy does not meet the accuracy threshold, perform tuning processing on the client churn prediction model based on an improved particle swarm algorithm until the accuracy meets the accuracy threshold, and the client churn prediction model is pre-trained.
In this embodiment, the model pre-training module acquires feature data to construct a training set by taking a single user as a unit, pre-trains the prediction model, and dynamically optimizes the prediction model to be optimized by using an Improved Particle Swarm Optimization (IPSO) in the pre-training process, thereby ensuring the accuracy of the prediction model.
With continuing reference to fig. 10, fig. 10 is a schematic structural diagram of a specific embodiment of the module 7036 shown in fig. 9, where the second processing sub-module 7036 includes: a current accuracy obtaining unit 10a, an optimal parameter obtaining unit 10b and a parameter replacement and optimization unit 10c. Wherein:
A current accuracy obtaining unit 10a, configured to iteratively update the client attrition prediction model according to the improved particle swarm algorithm until the accuracy meets the accuracy threshold, and complete iteration to obtain a current accuracy;
an optimal parameter obtaining unit 10b, configured to, based on a preset second algorithm formula: y = am + bn + c, and obtaining an optimal dynamic configuration parameter of the customer churn prediction model, where y is the current accuracy, a, b, and c are preset constants, m represents the number of types of the feature data in the dynamic configuration parameter, and n represents the number of the classification trees in the dynamic configuration parameter;
and the parameter replacing and optimizing unit 10c is configured to replace the dynamic configuration parameters during initialization with the optimal dynamic configuration parameters, and complete the optimization processing on the client churn prediction model.
According to the method and the device, the second processing submodule is used for dynamically adjusting the prediction model needing to be adjusted and optimized by using an Improved Particle Swarm Optimization (IPSO), the optimal dynamic configuration parameters after adjustment and optimization are obtained, the prediction model is reconfigured, and the accuracy of the prediction model is guaranteed.
In some embodiments of the present application, the second processing submodule 7036 further includes a convergence control unit, where the convergence control unit is configured to carry out convergence control on the iteration process based on the preset third algorithm formula (rendered only as image BDA0003925383150000181 in the publication), wherein ω represents the convergence constant correspondingly set for each iteration count, k represents the current iteration count, D represents the maximum iteration count, r(0,1) represents a random number between 0 and 1, and ω1 and ω2 represent inertia random numbers.
The convergence control unit is used for carrying out convergence control on the iterative process, so that the global search capability of the particle swarm algorithm is ensured, and the phenomenon that the particle swarm algorithm is trapped into local optimum too early is prevented.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware associated with computer readable instructions, which can be stored in a computer readable storage medium, and when executed, the programs can include the processes of the embodiments of the methods described above. The storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disk, a Read-Only Memory (ROM), or a Random Access Memory (RAM).
It should be understood that, although the steps in the flowcharts of the figures are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the order of execution of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least a portion of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be executed at different moments, and their order of execution is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
In order to solve the technical problem, the embodiment of the application further provides computer equipment. Referring to fig. 11, fig. 11 is a block diagram of a basic structure of a computer device according to the present embodiment.
The computer device 11 comprises a memory 11a, a processor 11b, and a network interface 11c, which are communicatively connected to each other via a system bus. It is noted that only a computer device 11 having components 11a-11c is shown, but it is understood that not all of the shown components are required to be implemented, and that more or fewer components may be implemented instead. As will be understood by those skilled in the art, the computer device is a device capable of automatically performing numerical calculation and/or information processing according to instructions set or stored in advance, and the hardware thereof includes but is not limited to a microprocessor, an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like.
The computer device can be a desktop computer, a notebook, a palm computer, a cloud server and other computing devices. The computer equipment can carry out man-machine interaction with a user through a keyboard, a mouse, a remote controller, a touch panel or voice control equipment and the like.
The memory 11a includes at least one type of readable storage medium including a flash memory, a hard disk, a multimedia card, a card type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a Programmable Read Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, etc. In some embodiments, the storage 11a may be an internal storage unit of the computer device 11, such as a hard disk or a memory of the computer device 11. In other embodiments, the memory 11a may also be an external storage device of the computer device 11, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the computer device 11. Of course, the memory 11a may also include both an internal storage unit and an external storage device of the computer device 11. In this embodiment, the memory 11a is generally used for storing an operating system and various application software installed on the computer device 11, such as computer readable instructions of a user churn prediction method. Further, the memory 11a may also be used to temporarily store various types of data that have been output or are to be output.
The processor 11b may be a Central Processing Unit (CPU), a controller, a microcontroller, a microprocessor, or other data Processing chip in some embodiments. The processor 11b is typically used to control the overall operation of the computer device 11. In this embodiment, the processor 11b is configured to execute computer readable instructions stored in the memory 11a or process data, such as computer readable instructions for executing the user churn prediction method.
The network interface 11c may comprise a wireless network interface or a wired network interface, and the network interface 11c is generally used for establishing communication connection between the computer device 11 and other electronic devices.
The embodiment provides a computer device, and belongs to the technical field of artificial intelligence. According to the method, characteristic data of users in batches are obtained according to a model pre-training request; expanding the characteristic data according to the SMOTE algorithm; inputting the preprocessed characteristic data serving as a training set into an initialized customer loss prediction model for pre-training to obtain a customer loss prediction model finished by pre-training; acquiring feature data of a target user according to the model prediction request; inputting the characteristic data serving as a prediction set into the customer churn prediction model for prediction to obtain a prediction result; and identifying the loss state of the target user according to the prediction result. According to the method, the sufficiency and the scientificity of the characteristic data during pre-training are guaranteed through the SMOTE algorithm, the random forest tree prediction model (RF) is constructed before pre-training, the parameter optimization is carried out on the random forest tree prediction model through the improved Particle Swarm Optimization (PSO) during pre-training, and the high availability and the accuracy of the prediction model are guaranteed.
The present application further provides another embodiment, which is to provide a computer readable storage medium storing computer readable instructions, which are executable by a processor to cause the processor to perform the steps of the user churn prediction method as described above.
The embodiment provides a computer readable storage medium, and belongs to the technical field of artificial intelligence. According to the method, characteristic data of users in batches are obtained according to a model pre-training request; expanding the characteristic data according to an SMOTE algorithm; inputting the preprocessed characteristic data serving as a training set into an initialized customer loss prediction model for pre-training to obtain a customer loss prediction model finished by pre-training; acquiring characteristic data of a target user according to the model prediction request; inputting the characteristic data serving as a prediction set into the customer churn prediction model for prediction to obtain a prediction result; and identifying the loss state of the target user according to the prediction result. According to the method, the sufficiency and the scientificity of the characteristic data during pre-training are guaranteed through the SMOTE algorithm, the random forest tree prediction model (RF) is constructed before pre-training, the parameter optimization is carried out on the random forest tree prediction model through the improved Particle Swarm Optimization (PSO) during pre-training, and the high availability and the accuracy of the prediction model are guaranteed.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software running on a necessary general-purpose hardware platform, and can certainly also be implemented by hardware, although in many cases the former is the preferred implementation. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
It is to be understood that the above-described embodiments are merely illustrative of some, but not all, embodiments of the present application, and that the appended drawings illustrate preferred embodiments without limiting the scope of the application. This application may be embodied in many different forms, and the embodiments are provided so that the disclosure of the application will be thorough and complete. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described in the foregoing embodiments or substitute equivalents for some of their features. All equivalent structures made by using the contents of the specification and the drawings of the present application, whether applied directly or indirectly in other related technical fields, fall within the protection scope of the present application.

Claims (10)

1. A user churn prediction method is characterized by comprising the following steps:
acquiring feature data of a batch of users according to a model pre-training request, wherein the pre-training request comprises an acquisition address corresponding to the feature data of the batch of users, and the feature data comprise sex, age, job category, marital status, place of residence, family income level, credit rating, educational background, purchased insurance types, payment mode, price, purchase channel and number of payments;
expanding the feature data according to the SMOTE algorithm;
inputting the preprocessed feature data serving as a training set into an initialized customer churn prediction model for pre-training to obtain a pre-trained customer churn prediction model;
acquiring feature data of a target user according to a model prediction request, wherein the prediction request comprises an acquisition address corresponding to the feature data of the target user;
inputting the characteristic data serving as a prediction set into the customer churn prediction model for prediction to obtain a prediction result;
and identifying the churn state of the target user according to the prediction result, wherein the churn state comprises a churned state and a non-churned state.
2. The user churn prediction method according to claim 1, wherein the step of performing the expansion processing on the feature data according to a SMOTE algorithm specifically includes:
using a first algorithm formula, C = A + r(0,1) × |A − B|, to perform data expansion on the feature data by taking a single user as a unit, wherein A represents an original sample corresponding to any first user, namely the feature data of the first user before preprocessing, B represents the feature data of any second user adjacent to A, |A − B| represents the Euclidean distance between A and B, r(0,1) represents a random number between 0 and 1, and C represents a new sample corresponding to the first user, namely the preprocessed feature data corresponding to the first user;
and acquiring the preprocessed feature data corresponding to each user in the batch of users, thereby completing the expansion of the feature data of the batch of users.
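Purely as an illustrative sketch of the expansion step recited in claim 2, and not part of the original disclosure: the Python/NumPy function below, with the assumed name smote_expand and an assumed neighbour count k, interpolates each selected sample A toward a randomly chosen nearby sample B. It follows the conventional SMOTE interpolation C = A + r(0,1) × (B − A); the claim itself writes the scaling term as the Euclidean distance |A − B|.

    import numpy as np

    def smote_expand(features, n_new, k=5, seed=0):
        """Append n_new synthetic samples to the preprocessed user feature matrix.

        features: (n_users, n_features) array; each row is one user's feature data.
        """
        rng = np.random.default_rng(seed)
        synthetic = []
        for _ in range(n_new):
            i = rng.integers(len(features))
            a = features[i]                                # original sample A of a first user
            dists = np.linalg.norm(features - a, axis=1)   # Euclidean distances |A - B|
            neighbours = np.argsort(dists)[1:k + 1]        # k users nearest to A, excluding A itself
            b = features[rng.choice(neighbours)]           # feature data B of an adjacent second user
            r = rng.random()                               # random number r(0, 1)
            synthetic.append(a + r * (b - a))              # new sample C for the first user
        return np.vstack([features, np.asarray(synthetic)])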
3. The user churn prediction method according to claim 1, wherein before the step of inputting the preprocessed feature data as a training set into the initialized customer churn prediction model for pre-training, the method further comprises:
randomly selecting a fixed amount of feature data corresponding to different users from the preprocessed feature data to construct a plurality of model construction sets, wherein the number of the model construction sets is less than or equal to the number of users in the batch;
utilizing a binary tree splitting method to construct a classification tree for each model construction set, obtaining a classification tree corresponding to each model construction set, integrating the classification trees into a preset processing layer to generate a random forest tree, and completing construction of a random forest tree prediction model;
setting a binary threshold for each feature in the feature data in advance, taking the binary thresholds as fixed configuration parameters, taking the number of feature types in the feature data and the number of classification trees as dynamic configuration parameters, and initializing the random forest tree prediction model;
and taking the initialized random forest tree prediction model as an initialized customer churn prediction model.
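A minimal sketch, not taken from the original filing, of the construction described in claim 3: several "model construction sets" are sampled from the preprocessed feature data and one binary classification tree is fitted per set. The use of scikit-learn's DecisionTreeClassifier and all names, sizes and default values are assumptions.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def build_random_forest(X, y, n_trees=50, set_size=200, seed=0):
        """Return a list of classification trees forming the processing-layer random forest."""
        rng = np.random.default_rng(seed)
        forest = []
        for _ in range(n_trees):
            # one fixed-size model construction set drawn from different users (no replacement)
            idx = rng.choice(len(X), size=min(set_size, len(X)), replace=False)
            tree = DecisionTreeClassifier()     # binary (CART-style) splits
            tree.fit(X[idx], y[idx])
            forest.append(tree)
        return forest

    # assumed initialization: fixed binarization thresholds per feature plus the
    # dynamic configuration parameters (number of feature types m, number of trees n)
    init_config = {"binary_thresholds": None, "m_feature_types": 13, "n_trees": 50}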
4. The method according to claim 3, wherein the step of pre-training the initialized customer churn prediction model by inputting the pre-processed feature data as a training set specifically comprises:
obtaining preprocessed feature data corresponding to each user by taking a single user as a unit, and constructing a sub-training set;
inputting the sub-training set into the customer churn prediction model to perform churn prediction, and obtaining a prediction result;
performing a probability operation based on a preset churn reference table and the prediction result to obtain the accuracy of the customer churn prediction model, wherein the churn reference table comprises the churn state of each user;
judging, according to the accuracy and a preset accuracy threshold, whether parameter tuning processing needs to be performed on the customer churn prediction model;
if the accuracy meets the accuracy threshold, determining that the customer churn prediction model has completed pre-training;
and if the accuracy does not meet the accuracy threshold, performing tuning processing on the customer churn prediction model based on an improved particle swarm optimization algorithm until the accuracy meets the accuracy threshold, to complete pre-training of the customer churn prediction model.
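The following pseudocode-style Python sketch (all names are assumed) illustrates the control flow of claim 4: predict churn for each sub-training set, compute accuracy against the churn reference table, and hand the model to an improved-PSO tuning routine only when the accuracy threshold is not met.

    def pretrain(model, sub_training_sets, churn_reference, accuracy_threshold, pso_tune):
        """Pre-train the customer churn prediction model along the lines of claim 4."""
        correct = sum(
            1 for user_id, features in sub_training_sets
            if model.predict([features])[0] == churn_reference[user_id]
        )
        accuracy = correct / len(sub_training_sets)
        if accuracy >= accuracy_threshold:     # accuracy meets the threshold: pre-training is complete
            return model
        # otherwise tune the dynamic configuration parameters with the improved particle swarm algorithm
        return pso_tune(model, sub_training_sets, churn_reference, accuracy_threshold)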
5. The user churn prediction method according to claim 4, wherein the step of performing tuning processing on the customer churn prediction model based on the improved particle swarm optimization algorithm until the accuracy meets the accuracy threshold, to complete pre-training of the customer churn prediction model, specifically comprises:
iteratively updating the customer churn prediction model according to the improved particle swarm optimization algorithm until the accuracy meets the accuracy threshold, and completing the iteration to obtain the current accuracy;
obtaining optimal dynamic configuration parameters of the customer churn prediction model based on a preset second algorithm formula, y = a·m + b·n + c, where y is the current accuracy, a, b and c are preset constants, m represents the number of feature types among the dynamic configuration parameters, and n represents the number of classification trees among the dynamic configuration parameters;
and replacing the dynamic configuration parameters set during initialization with the optimal dynamic configuration parameters, to complete the tuning processing of the customer churn prediction model.
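As an illustration only, a compact particle swarm search over the two dynamic configuration parameters (m, n); the fitness here is the linear relation y = a·m + b·n + c from claim 5 with placeholder constants, and every name, bound, coefficient and the inertia schedule is an assumption rather than the patented procedure.

    import numpy as np

    def pso_search(fitness, bounds, n_particles=20, n_iter=50, seed=0):
        """Return the (m, n) pair that maximises the fitness estimate."""
        rng = np.random.default_rng(seed)
        lo, hi = bounds[:, 0], bounds[:, 1]
        pos = rng.uniform(lo, hi, size=(n_particles, 2))   # particle positions = candidate (m, n)
        vel = np.zeros_like(pos)
        pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
        gbest = pbest[np.argmax(pbest_val)].copy()
        for k in range(n_iter):
            omega = 0.9 - 0.5 * k / n_iter                 # assumed decreasing inertia weight
            r1, r2 = rng.random(2)
            vel = omega * vel + 2.0 * r1 * (pbest - pos) + 2.0 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, lo, hi)
            vals = np.array([fitness(p) for p in pos])
            better = vals > pbest_val
            pbest[better], pbest_val[better] = pos[better], vals[better]
            gbest = pbest[np.argmax(pbest_val)].copy()
        return np.rint(gbest).astype(int)

    # placeholder fitness y = a*m + b*n + c and bounds for m (feature types) and n (trees)
    best_m, best_n = pso_search(lambda p: 0.01 * p[0] + 0.002 * p[1] + 0.5,
                                np.array([[2, 13], [10, 200]]))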
6. The user churn prediction method according to claim 5, wherein in the step of performing the iterative updating of the customer churn prediction model according to the improved particle swarm algorithm, the method further comprises:
based on a preset third algorithm formula (filed as image FDA0003925383140000031), carrying out convergence control on the iteration process, wherein ω represents the convergence constant correspondingly set for each iteration count, k represents the current iteration count, D represents the maximum iteration count, r(0,1) represents a random number between 0 and 1, and ω₁ and ω₂ represent inertial random numbers.
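The third algorithm formula itself is filed as an image and is not reproduced here. The snippet below is only an assumed example of iteration-dependent convergence control expressed with the quantities named in claim 6 (current iteration k, maximum iteration D, random number r(0,1), inertial random numbers ω₁ and ω₂); it is not the patented formula.

    import random

    def convergence_weight(k, D, omega1=0.4, omega2=0.9):
        """Assumed schedule: the weight decays from omega2 toward omega1 as k approaches D,
        perturbed by a random number r(0, 1)."""
        r = random.random()                     # r(0, 1)
        return omega1 + (omega2 - omega1) * r * (D - k) / D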
7. The method according to claim 3, wherein the step of inputting the feature data as a prediction set into the customer churn prediction model for prediction to obtain a prediction result comprises:
acquiring the characteristic data through an input layer of the customer churn prediction model;
classifying the feature data acquired by the input layer according to a random forest tree in a processing layer of the customer churn prediction model;
outputting a classification processing result through an output layer of the customer churn prediction model;
and carrying out churn state statistics on the classification processing results, and taking the churn state statistics as the prediction result of the target user.
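A short illustrative sketch (names assumed, class labels assumed to be 1 for churned and 0 for not churned) of the prediction path in claim 7: the input layer receives the target users' feature data, each tree in the processing layer casts a class label, the output layer takes the majority vote, and the churn-state counts form the prediction result.

    import numpy as np

    def predict_churn(forest, X):
        """Majority-vote the processing-layer trees and tally churn states for the target users."""
        votes = np.array([tree.predict(X) for tree in forest])   # one row of class labels per tree
        majority = (votes.mean(axis=0) >= 0.5).astype(int)       # output-layer result per user
        churned = int(majority.sum())
        return {"churned": churned, "not_churned": int(len(majority) - churned)}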
8. A user churn prediction apparatus, comprising:
the training data acquisition module is used for acquiring feature data of a batch of users according to a model pre-training request, wherein the pre-training request comprises an acquisition address corresponding to the feature data of the batch of users, and the feature data comprise sex, age, job category, marital status, place of residence, family income level, credit rating, educational background, purchased insurance types, payment mode, price, purchase channel and number of payments;
the preprocessing module is used for performing expansion processing on the feature data according to the SMOTE algorithm;
the model pre-training module is used for inputting the preprocessed feature data serving as a training set into the initialized customer churn prediction model for pre-training, to obtain a pre-trained customer churn prediction model;
the prediction data acquisition module is used for acquiring the feature data of a target user according to a model prediction request, wherein the prediction request comprises an acquisition address corresponding to the feature data of the target user;
the prediction result acquisition module is used for inputting the feature data serving as a prediction set into the customer churn prediction model for prediction, to obtain a prediction result;
and the prediction result identification module is used for identifying the churn state of the target user according to the prediction result, wherein the churn state comprises a churned state and a non-churned state.
9. A computer device, comprising a memory and a processor, the memory having computer readable instructions stored therein, wherein the processor, when executing the computer readable instructions, implements the steps of the user churn prediction method according to any one of claims 1 to 7.
10. A computer readable storage medium having computer readable instructions stored thereon which, when executed by a processor, implement the steps of the user churn prediction method as claimed in any one of claims 1 to 7.
CN202211370601.2A 2022-11-03 2022-11-03 User loss prediction method and device, computer equipment and storage medium Pending CN115619448A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211370601.2A CN115619448A (en) 2022-11-03 2022-11-03 User loss prediction method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211370601.2A CN115619448A (en) 2022-11-03 2022-11-03 User loss prediction method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115619448A true CN115619448A (en) 2023-01-17

Family

ID=84876298

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211370601.2A Pending CN115619448A (en) 2022-11-03 2022-11-03 User loss prediction method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115619448A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116664184A (en) * 2023-07-31 2023-08-29 广东南方电信规划咨询设计院有限公司 Client loss prediction method and device based on federal learning
CN116664184B (en) * 2023-07-31 2024-01-12 广东南方电信规划咨询设计院有限公司 Client loss prediction method and device based on federal learning
CN116934385A (en) * 2023-09-15 2023-10-24 山东理工昊明新能源有限公司 Construction method of user loss prediction model, user loss prediction method and device
CN116934385B (en) * 2023-09-15 2024-01-19 山东理工昊明新能源有限公司 Construction method of user loss prediction model, user loss prediction method and device

Similar Documents

Publication Publication Date Title
CN111079022B (en) Personalized recommendation method, device, equipment and medium based on federal learning
WO2021155713A1 (en) Weight grafting model fusion-based facial recognition method, and related device
CN112632385A (en) Course recommendation method and device, computer equipment and medium
WO2021120677A1 (en) Warehousing model training method and device, computer device and storage medium
CN115619448A (en) User loss prediction method and device, computer equipment and storage medium
CN109471978B (en) Electronic resource recommendation method and device
CN112328909B (en) Information recommendation method and device, computer equipment and medium
CN113722438A (en) Sentence vector generation method and device based on sentence vector model and computer equipment
CN112785005A (en) Multi-target task assistant decision-making method and device, computer equipment and medium
CN112308173A (en) Multi-target object evaluation method based on multi-evaluation factor fusion and related equipment thereof
CN115730597A (en) Multi-level semantic intention recognition method and related equipment thereof
CN113743971A (en) Data processing method and device
CN114359582A (en) Small sample feature extraction method based on neural network and related equipment
CN110532448B (en) Document classification method, device, equipment and storage medium based on neural network
CN109951859B (en) Wireless network connection recommendation method and device, electronic equipment and readable medium
CN112100491A (en) Information recommendation method, device and equipment based on user data and storage medium
CN115860835A (en) Advertisement recommendation method, device and equipment based on artificial intelligence and storage medium
CN115185625A (en) Self-recommendation type interface updating method based on configurable card and related equipment thereof
CN112364649B (en) Named entity identification method and device, computer equipment and storage medium
CN115099875A (en) Data classification method based on decision tree model and related equipment
CN112257812A (en) Method and device for determining labeled sample, machine readable medium and equipment
CN116245616A (en) E-commerce platform commodity recommendation method and device, computer equipment and storage medium
CN116542779A (en) Product recommendation method, device, equipment and storage medium based on artificial intelligence
CN116777641A (en) Model construction method, device, computer equipment and storage medium
CN116720692A (en) Customer service dispatching method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination