CN114139898A - Clue data distribution method, device, equipment and storage medium - Google Patents

Clue data distribution method, device, equipment and storage medium

Info

Publication number
CN114139898A
CN114139898A (application CN202111383050.9A)
Authority
CN
China
Prior art keywords
clue
saving
data
distributed
historical
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111383050.9A
Other languages
Chinese (zh)
Inventor
陈利琴
杨正良
闫永泽
刘设伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Taikang Insurance Group Co Ltd
Taikang Online Property Insurance Co Ltd
Original Assignee
Taikang Insurance Group Co Ltd
Taikang Online Property Insurance Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Taikang Insurance Group Co Ltd, Taikang Online Property Insurance Co Ltd filed Critical Taikang Insurance Group Co Ltd
Priority to CN202111383050.9A priority Critical patent/CN114139898A/en
Publication of CN114139898A publication Critical patent/CN114139898A/en
Pending legal-status Critical Current

Classifications

    • G06Q10/06311 Scheduling, planning or task assignment for a person or group
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/2415 Classification techniques relating to the classification model based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06Q10/06393 Score-carding, benchmarking or key performance indicator [KPI] analysis
    • G06Q30/01 Customer relationship services
    • G06Q40/08 Insurance

Abstract

Provided are a clue data distribution method, device, equipment and storage medium. The method comprises the following steps: determining a clue vector to be distributed for clue data to be distributed according to the input vector format of a business saving probability model, wherein the clue data to be distributed comprises information about the service being cancelled and customer behavior information related to the service; inputting the clue vector to be distributed into the business saving probability model and calculating the saving success probability of the clue data to be distributed, wherein the business saving probability model is obtained by training on historical clue samples and their saving success probabilities, and the historical clue samples comprise clue data that relate to a preset service and have undergone a saving operation; dividing the clue data to be distributed into grades according to the probability interval of each grade, based on the saving success probability of the clue data to be distributed; and distributing the clue data to be distributed to corresponding processing personnel according to the priority order of the grades of the clue data to be distributed. This disclosure can reduce the labor cost of cancellation-retention (saving) work and improve saving efficiency and the saving rate.

Description

Clue data distribution method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of service distribution technologies, and in particular, to a clue data distribution method, apparatus, device, and storage medium.
Background
In the prior art, when a client cancels a service (for example, surrenders an insurance policy, referred to in this document as a logout), a business saving (retention) operation usually needs to be performed on the client initiating the logout request in order to reduce the amount of cancellations.
In the existing customer logout saving process, clue data are issued directly and randomly to processing personnel. This random allocation is only suitable when processing manpower is sufficient, and because the clue data are not differentiated, saving efficiency and the saving success rate are low.
In addition, when processing manpower is insufficient, valuable clue data cannot be handled in time, so a large amount of valuable clue data accumulates, which further reduces saving efficiency and leads to a low saving rate and heavy loss of customer business.
How to distribute valuable clue data efficiently and in a timely manner, so as to improve the distribution efficiency of clue data and the retention rate, is a problem that urgently needs to be solved.
Disclosure of Invention
The method and the device herein are used to solve the problems in the prior art that, in the process of saving a customer logout service, clue data are not distributed in time, so that saving efficiency is reduced, the saving rate is low, and a large number of customers are lost.
In order to solve the above technical problem, a first aspect of the present disclosure provides a clue data distribution method, including:
determining a clue vector to be distributed for clue data to be distributed according to the input vector format of a business saving probability model, wherein the clue data to be distributed comprises: information about the service being cancelled and customer behavior information related to the service;
inputting the clue vector to be distributed into a service saving probability model, and calculating to obtain the saving success probability of the clue data to be distributed, wherein the service saving probability model is obtained by training a historical clue sample and the saving success probability thereof, and the historical clue sample comprises clue data which is related to a preset service and is subjected to saving operation;
according to the successful saving probability of the clue data to be distributed, dividing the grades of the clue data to be distributed according to the probability intervals of all grades;
distributing the clue data to be distributed to corresponding processing personnel according to the priority order of the grades of the clue data to be distributed.
As a further embodiment herein, the traffic saving probability model training process comprises:
performing aggregation and digital processing on features in the historical clue samples to obtain historical clue sample vectors;
carrying out digital processing on the saving result corresponding to the historical clue sample vector to obtain the saving success probability of the historical clue sample vector;
and training a business saving probability model by using the historical clue sample vector and the saving success probability thereof and adopting a k-fold cross validation and grid search algorithm.
As a further embodiment herein, before aggregating and digitizing the features in the historical cue samples, the method further comprises:
counting the missing rate of the features in the historical clue samples, and deleting the historical clue samples with the feature missing rate larger than a preset value;
and judging whether each feature in the historical clue sample exceeds a value range, and if so, limiting the feature in the value range.
As a further embodiment herein, the historical cue samples include continuous type features and categorical type features, each categorical type feature includes a plurality of categories, wherein aggregating and digitizing the features in the historical cue samples to obtain a historical cue sample vector includes:
the following processing is performed on the historical cue samples: determining the data quantity distribution of each category in each category type feature, and setting the category with the data quantity less than a preset threshold value as a new category; carrying out interval division processing on each continuous type feature;
carrying out digital processing on the processed historical clue samples to obtain characteristic values of the historical clue samples;
performing aggregation processing on the characteristic values of the historical clue samples to obtain aggregated characteristic values;
and obtaining a clue sample vector according to the characteristic value and the aggregation characteristic value of the historical clue sample.
In a further embodiment of the present disclosure, training a service saving probability model using a k-fold cross validation and a grid search algorithm using a historical cue sample vector and a saving success probability thereof includes:
a. dividing the clue sample vectors into k groups of clue sample vectors, and executing a process of training a classifier once for each group of the clue sample vectors, wherein the process of training the classifier each time comprises the following steps: taking a group of clue sample vectors as a verification set, and taking the other groups of clue sample vectors as training sets; training a classifier by using the training set and the saving success probability of each clue sample vector in the training set; respectively inputting each clue sample vector in the verification set into a trained classifier, and calculating to obtain the successful saving probability of each clue sample vector in the verification set; calculating to obtain a performance evaluation index value of the classifier aiming at the verification set according to the successful saving probability of each clue sample vector in the verification set;
b. averaging the performance evaluation index values obtained from the k rounds of training on the k groups of clue sample vectors in step a to obtain an average performance evaluation index value;
c. judging whether the parameter values of the classifier are adjusted completely according to the parameter adjustment strategy, if not, executing the step d, and if so, executing the step e;
d. adjusting the parameters of the classifier according to a preset parameter adjustment strategy, and then returning to continue executing the steps a to c;
e. screening out the parameter value of the classifier corresponding to the highest average performance evaluation index value;
f. determining a business saving probability model by using the parameter values of the screened classifiers;
g. and training a business saving probability model by using the clue sample vector and the saving success probability thereof.
As a further embodiment herein, the method of thread data distribution further comprises:
establishing a corresponding relation between the clue data to be distributed and the processing personnel according to the clue data to be distributed and the portrait of the processing personnel;
according to the priority level sequence of the thread data to be distributed, distributing the thread data to be distributed to corresponding processing personnel further comprises the following steps:
and distributing the thread data to be distributed to corresponding processing personnel according to the priority level sequence of the thread data to be distributed and the corresponding relation between the thread data to be distributed and the processing personnel.
As a further embodiment herein, the method of thread data distribution further comprises:
counting the data volume of the clue data that has not been processed by each processing person;
calculating the remaining processing capacity of each processing person according to the person's upper-limit processing capacity and the data volume of unprocessed clue data;
if the remaining processing capacity of every processing person is less than a preset value,
readjusting the probability interval of each grade.
A second aspect herein provides a data distribution apparatus comprising:
the conversion module is used for determining a clue vector to be distributed of clue data according to a vector format input by the service saving probability model, wherein the clue data to be distributed comprises: quitting the service information and the client behavior information related to the service;
the computation module is used for inputting the clue vector to be distributed into the business saving probability model and computing the saving success probability of the clue data to be distributed, wherein the business saving probability model is obtained by training a historical clue sample and the saving success probability thereof, and the historical clue sample comprises clue data which is related to a preset business and is subjected to saving operation;
the grading module is used for dividing the clue data to be distributed into grades according to the probability interval of each grade, based on the saving success probability of the clue data to be distributed;
and the distribution module is used for distributing the clue data to be distributed to corresponding processing personnel according to the priority order of the grades of the clue data to be distributed.
A third aspect of the present document provides a computer device comprising a memory, a processor, and a computer program stored on the memory, the computer program when executed by the processor executing the instructions of the method of thread data distribution of any of the preceding embodiments.
A fourth aspect of the present document provides a computer storage medium having stored thereon a computer program which, when executed by a processor of a computer device, executes the instructions of the method of thread data distribution of any of the preceding embodiments.
According to the clue data distribution method and device herein, a business surrender-saving probability model is established by learning from historical clue samples and their saving success probabilities; the clue data to be distributed is scored by the business saving probability model to obtain its saving success probability; the clue data to be distributed is divided into grades according to the probability interval of each grade, based on that saving success probability; and the clue data to be distributed is distributed to corresponding processing personnel according to the priority order of the grades. This can replace manual random distribution, saves the required labor cost, enables effective distribution of the clue data to be distributed when manpower is limited, and allows customers with a high saving probability to be retained in time, thereby improving saving efficiency and the saving rate.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments or technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 shows a first flowchart of a clue data distribution method of embodiments herein;
Fig. 2 illustrates a flowchart of the business saving probability model training process according to an embodiment herein;
Fig. 3 is a flowchart of the process for aggregating and digitizing features of historical clue samples according to an embodiment of the present disclosure;
Fig. 4 is a flowchart illustrating the process of training the business saving probability model using k-fold cross validation and a grid search algorithm according to an embodiment of the present disclosure;
Fig. 5 shows a second flowchart of a clue data distribution method according to an embodiment herein;
Fig. 6 shows a third flowchart of a clue data distribution method of embodiments herein;
Fig. 7 shows a fourth flowchart of a clue data distribution method according to embodiments herein;
Fig. 8 is a block diagram illustrating a clue data distribution apparatus according to an embodiment herein;
Fig. 9 illustrates a flowchart of the surrender-saving probability model training process according to embodiments herein;
Fig. 10 is a flowchart illustrating a method for distributing surrender clue data according to embodiments herein;
Fig. 11 shows a block diagram of a computer device according to an embodiment of the present disclosure.
Description of the symbols of the drawings:
810. a conversion module;
820. a calculation module;
830. a grading module;
840. a distribution module;
1102. a computer device;
1104. a processor;
1106. a memory;
1108. a drive mechanism;
1110. an input/output module;
1112. an input device;
1114. an output device;
1116. a presentation device;
1118. a graphical user interface;
1120. a network interface;
1122. a communication link;
1124. a communication bus.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments herein without making any creative effort, shall fall within the scope of protection.
It is noted that the terms "comprises" and "comprising," and any variations thereof, in the description and claims hereof, and in the drawings, are intended to cover non-exclusive inclusions, such that a process, method, apparatus, product, or device that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, product, or device.
The present specification provides method steps as described in the examples or flowcharts, but may include more or fewer steps based on routine or non-inventive labor. The order of steps recited in the embodiments is merely one manner of performing the steps in a multitude of orders and does not represent the only order of execution. When an actual system or apparatus product executes, it can execute sequentially or in parallel according to the method shown in the embodiment or the figures.
It should be noted that the method and apparatus for distributing thread data herein may be used in the insurance field, where the thread data is thread data for initiating a logout service, and the service is, for example, an insurance service, and may also be used in any field other than the insurance field, and the application field of the method and apparatus for distributing thread data herein is not limited. The customer logout service information and the customer behavior information related to the service are data authorized by the customer or fully authorized by each party.
In an embodiment of the present disclosure, a method for distributing thread data is provided, which is used to solve the problems that in the process of saving a client logout service in the prior art, thread data is not distributed in time, so that saving processing efficiency is reduced, saving rate is low, and a large number of clients are lost. Specifically, as shown in fig. 1, the thread data distribution method includes:
step 110, determining a clue vector to be distributed for the clue data to be distributed according to the input vector format of the business saving probability model, wherein the clue data to be distributed comprises logout service information and service-related customer behavior information;
step 120, inputting the clue vector to be distributed into a service saving probability model, and calculating to obtain the saving success probability of the clue data to be distributed, wherein the service saving probability model is obtained by training a historical clue sample and the saving success probability thereof, and the historical clue sample comprises clue data which is related to the preset service and is subjected to saving operation;
step 130, dividing the grade of the clue data to be distributed according to each grade probability interval according to the successful saving probability of the clue data to be distributed;
and step 140, distributing the thread data to be distributed to corresponding processing personnel according to the priority order of the grade of the thread data to be distributed.
In detail, the clue data to be distributed originates from clients who have initiated a logout service request but for whom the saving operation has not yet been performed. The logout service information comprises the time the logout was initiated, the initiating channel, basic service information and basic customer information. Taking insurance business as an example, the initiating channel includes but is not limited to micro insurance, 360, Baidu and the like; the basic service information includes but is not limited to premium information, application time, refund time, policy start time, underwriting city, claim settlement information and the like; and the basic customer information includes age, gender, whether the customer has social insurance, and the like. The service-related customer behavior information includes whether the customer initiated a logout before the current logout request, whether that logout was successfully saved, and so on. Incorporating service-related customer behavior information into the input vector can improve the accuracy of the saving success probability prediction.
The clue data to be distributed contains a number of features (variables). The input vector of the business saving probability model specifies which features are fed to the model and at which positions, so an input vector conforming to the model can be determined from the model's input vector format.
In order to ensure the training speed of the business saving probability model, the model described herein may be a LightGBM model. LightGBM is a gradient boosting framework that uses decision-tree-based learning; it implements the GBDT (Gradient Boosting Decision Tree) algorithm with efficient parallel training over multiple decision trees, and it offers fast training, low memory consumption, strong predictive power, distributed support and the ability to process massive data quickly, so it is widely used in industrial practice. The specific training process of the business saving probability model is described in the following embodiments and is not detailed here.
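As an illustration only, the following minimal sketch shows how such a LightGBM classifier could be fitted on historical clue vectors and queried for the saving success probability of new clues; the data, feature dimensions and parameter values are placeholder assumptions, not the configuration of this disclosure.

```python
# Minimal sketch (assumed data and parameters): train a LightGBM binary
# classifier on historical clue vectors and score new clues with the
# predicted saving success probability.
import lightgbm as lgb
import numpy as np

X_hist = np.random.rand(1000, 20)          # historical clue sample vectors (placeholder)
y_hist = np.random.randint(0, 2, 1000)     # saving result: 1 = saved, 0 = not saved

model = lgb.LGBMClassifier(
    num_leaves=31, max_depth=-1, learning_rate=0.1, n_estimators=200
)
model.fit(X_hist, y_hist)

X_new = np.random.rand(5, 20)              # clue vectors to be distributed (placeholder)
saving_prob = model.predict_proba(X_new)[:, 1]   # probability of saving success
```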
The preset service of the training service saving probability model is specified according to the requirement, which is not limited in this document, and all services which need to be saved can be used as the preset service described in this document.
The clue data that has undergone the saving operation contains the same variables as the clue data to be distributed. The saving success probability of a historical clue sample is determined from its saving result: for example, if the saving result of a historical clue sample is a successful save, the corresponding saving success probability is 1, and if the result is a failed save, the probability is 0. The definition of the saving result is given in the following embodiments and is not detailed here.
The probability interval of each grade can be set as required, and the specific values are not limited herein. A grade with a higher saving probability corresponds to a higher priority. In some embodiments, only two probability intervals may be used, one for clue data with a high saving success probability and one for clue data with a low saving success probability, and the two grades are distributed to different processing personnel for different saving operations. A processing person may be the person responsible for the business or a person dedicated to the saving operation. Steps 130 and 140 improve the distribution efficiency of the clue data to be distributed.
In this embodiment, the saving success probability of the clue data to be distributed is calculated with the business saving probability model trained on historical clue samples and their saving success probabilities; the clue data to be distributed is graded according to that probability; and the clue data is distributed to corresponding processing personnel in the priority order of the grades. This can replace manual random distribution, saves the required labor cost, enables effective distribution of clue data when manpower is limited, and allows customers with a high saving probability to be retained in time, thereby improving saving efficiency and the saving rate.
In an embodiment of this document, as shown in fig. 2, the process of training the traffic saving probability model includes:
step 201, performing aggregation and digital processing on features in the historical clue samples to obtain historical clue sample vectors;
step 202, carrying out digital processing on the saving result corresponding to the historical clue sample vector to obtain the saving success probability of the historical clue sample vector;
and step 203, training a service saving probability model by using the historical clue sample vector and the saving success probability thereof and adopting a k-fold cross validation and grid search algorithm.
Step 201 helps avoid the curse of dimensionality, reduces model training time, enhances model generalization, reduces overfitting, and produces more meaningful, usable features. The historical clue samples can be extracted from the business database as clue data from different channels according to preset requirements. Specifically, before extracting data from the business database, the data relevant to modeling may be screened manually according to the meanings of all fields in the related tables known by the processing personnel, and features with a severely unbalanced data distribution may then be deleted. For example, if the applicant client type field has two category values 1 and 2 and the count ratio of the two categories is 1000000:1, that field may be deleted.
The historical clue samples described herein include categorical features and continuous features. Taking insurance business as an example, categorical features include gender, application channel (micro insurance, 360, Baidu), insurance sales plan (e.g., micro-insurance million-level medical insurance, 360 national medical insurance, etc.), and so on, while continuous features include age and the like.
Specifically, as shown in fig. 3, the step 201 includes:
the following steps 301 and 302 are performed on the historical cue samples:
step 301, determining data quantity distribution of each category in each category type feature, and setting the category of which the data quantity is less than a preset threshold value as a new category, wherein the name of the new category is other categories, for example, and the name of the new category can be set according to requirements;
step 302, performing interval division processing on each continuous type feature;
step 303, performing digital processing on the type features processed in step 301 and the continuous features processed in step 302 to obtain feature values of the historical clue samples;
304, aggregating the characteristic values of the historical clue samples to obtain aggregated characteristic values;
step 305, obtaining a clue sample vector according to each characteristic value and the aggregation characteristic value of the historical clue samples.
The preset threshold value in step 301 may be set according to the actual situation, for example 100. Taking insurance business as an example, if there are 119 insurance sales plans in the clue samples and 100 of those plan categories each correspond to fewer than 100 policies, those 100 categories may be merged and represented as "other insurance categories".
In step 302, the interval division manner may be set according to actual requirements, which is not limited herein. For example, age may be divided into intervals of 10 years, such as [0-10], [10-20], [20-30], [30-40], [40-50], and so on.
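A possible sketch of steps 301 and 302 using pandas is given below; the column names, rarity threshold and bin edges are assumptions chosen for illustration.

```python
# Sketch (assumed column names and thresholds): merge rare categories of a
# categorical feature and divide a continuous feature into intervals.
import pandas as pd

df = pd.DataFrame({
    "sales_plan": ["planA", "planB", "planC", "planA", "planA"],
    "age": [23, 35, 8, 47, 61],
})

# Step 301: categories with fewer samples than the preset threshold become "other"
threshold = 2
counts = df["sales_plan"].value_counts()
rare = counts[counts < threshold].index
df["sales_plan"] = df["sales_plan"].where(~df["sales_plan"].isin(rare), "other")

# Step 302: split the continuous age feature into 10-year intervals
df["age_bin"] = pd.cut(df["age"], bins=[0, 10, 20, 30, 40, 50, 60, 70], right=False)
```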
When step 303 is performed, the data obtained in steps 301 and 302 may be digitized using processing methods such as LabelEncoder, get_dummies, or OneHotEncoder. LabelEncoder converts text category data into numerical values: for example, if the data contains three channels (micro insurance, 360 and Baidu), after LabelEncoder processing these three categories become 0, 1 and 2 respectively; similarly, the age interval 0-10 years may become 1, 10-20 years may become 2, and so on. get_dummies is one way to perform one-hot encoding. OneHotEncoder converts numeric or text category data into one or more columns containing only 0 and 1; for example, the channel feature can be converted into micro insurance: [1,0,0], 360: [0,1,0], Baidu: [0,0,1].
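The sketch below illustrates the three digitization options mentioned above on the channel feature from the example; the variable names are assumptions.

```python
# Sketch: three ways of digitizing the categorical "channel" feature.
import pandas as pd
from sklearn.preprocessing import LabelEncoder, OneHotEncoder

channels = pd.DataFrame({"channel": ["micro_insurance", "360", "baidu"]})

# LabelEncoder: text categories -> integer codes (e.g. 0, 1, 2)
codes = LabelEncoder().fit_transform(channels["channel"])

# get_dummies: pandas one-hot encoding, one 0/1 column per category
dummies = pd.get_dummies(channels["channel"], prefix="channel")

# OneHotEncoder: scikit-learn one-hot encoding, e.g. micro_insurance -> [1, 0, 0]
onehot = OneHotEncoder().fit_transform(channels[["channel"]]).toarray()
```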
In step 304, aggregation of the feature values of the historical clue samples can be implemented according to aggregation features defined by the business. In practice, the aggregation features can be computed with the groupby operation of pandas. Taking the application (underwriting) business as an example, the aggregation features include but are not limited to: the number of policies the applicant has purchased; the average, maximum and minimum age of the insured persons in the policies purchased by the applicant; and the maximum, minimum and average premium among those policies.
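The following sketch shows how such aggregation features could be computed with the pandas groupby operation; the column names and sample values are assumptions.

```python
# Sketch (assumed columns): per-applicant aggregation features computed
# with pandas groupby, e.g. number of policies, age and premium statistics.
import pandas as pd

policies = pd.DataFrame({
    "applicant_id": [1, 1, 2, 2, 2],
    "insured_age":  [30, 4, 55, 28, 31],
    "premium":      [1200.0, 300.0, 900.0, 450.0, 600.0],
})

agg = policies.groupby("applicant_id").agg(
    policy_count=("premium", "size"),
    avg_insured_age=("insured_age", "mean"),
    max_insured_age=("insured_age", "max"),
    min_insured_age=("insured_age", "min"),
    max_premium=("premium", "max"),
    min_premium=("premium", "min"),
    avg_premium=("premium", "mean"),
)

# The aggregation features are then joined back onto each clue sample.
features = policies.merge(agg, on="applicant_id", how="left")
```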
In step 305, the feature values of the historical cue samples and the aggregate feature value are combined together to form a cue sample vector, and the cue sample vector forms an input vector format of the service retention probability model.
In step 202, the saving result includes saving success and saving failure; when digitized, saving success is labeled 1 and saving failure is labeled 0. Whether a save counts as successful can be specified according to the business situation. Taking insurance business as an example, after a client applies for a surrender, a text customer-service agent can perform the saving operation on the client through the online customer service system (CSS), and an outbound-call agent can phone the client to perform the saving operation. Saving success is defined as: the client answers the customer-service call or replies to the customer-service consultation, the difference between the refund time and the consultation time (the time of the outbound call or of the CSS consultation) is more than 35 days (configurable as required), and the saved premium for the period is greater than 0; otherwise, the save is regarded as a failure. Both the text customer-service agent and the outbound-call agent are processing personnel.
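The saving-success rule described above could be expressed roughly as follows; the field names are hypothetical, and the 35-day and premium thresholds follow the example.

```python
# Sketch (assumed field names): label a historical clue sample as
# saving success (1) or saving failure (0) according to the rule above.
from datetime import datetime

def label_saving_result(sample: dict,
                        min_gap_days: int = 35,
                        min_saved_premium: float = 0.0) -> int:
    """1 = saving success, 0 = saving failure."""
    answered = sample["answered_call_or_replied"]            # client responded to customer service
    gap_days = (sample["surrender_time"] - sample["consult_time"]).days
    saved_premium = sample["saved_premium"]
    if answered and gap_days > min_gap_days and saved_premium > min_saved_premium:
        return 1
    return 0

example = {
    "answered_call_or_replied": True,
    "consult_time": datetime(2021, 6, 1),
    "surrender_time": datetime(2021, 7, 20),
    "saved_premium": 350.0,
}
label = label_saving_result(example)   # -> 1
```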
Specifically, as shown in fig. 4, the step 203 trains the service saving probability model by using k-fold cross validation and a grid search algorithm by using the historical clue sample vector and the saving success probability thereof, and includes:
step a, dividing the clue sample vectors into k groups of clue sample vectors, and performing the classifier training process once for each group of clue sample vectors, wherein each training of the classifier comprises: taking one group of clue sample vectors as a validation set and the remaining groups of clue sample vectors as a training set; training a classifier using the training set and the saving success probability of each clue sample vector in the training set; inputting each clue sample vector of the validation set into the trained classifier and calculating the saving success probability of each clue sample vector in the validation set; and calculating a performance evaluation index value of the classifier for the validation set according to the saving success probabilities of the clue sample vectors in the validation set;
step b, averaging the performance evaluation index values obtained from the k rounds of training on the k groups of clue sample vectors in step a to obtain an average performance evaluation index value;
step c, judging whether the parameter values of the classifier have all been adjusted according to the parameter adjustment strategy; if not, executing step d, and if so, executing step e;
step d, adjusting the parameters of the classifier according to the preset parameter adjustment strategy, and then returning to steps a to c;
step e, selecting the classifier parameter values corresponding to the highest average performance evaluation index value;
step f, determining the business saving probability model using the selected classifier parameter values;
step g, training the business saving probability model using the clue sample vectors and their saving success probabilities.
In a specific implementation, recall, accuracy and F1-score can be selected as the performance evaluation index in step a; to avoid overfitting of the model, AUC (Area Under the Curve) can also be selected as the performance evaluation index. The ROC curve takes the False Positive Rate (FPR) as the X-axis and the True Positive Rate (TPR) as the Y-axis, where FPR and TPR are defined as:
TPR: among all samples that are actually positive, the proportion correctly judged to be positive.
FPR: among all samples that are actually negative, the proportion erroneously judged to be positive.
TPR = TP / (TP + FN)
FPR = FP / (FP + TN)
where TP is the number of samples whose actual result is positive and whose predicted result is positive; FP is the number of samples whose actual result is negative and whose predicted result is positive; FN is the number of samples whose actual result is positive and whose predicted result is negative; and TN is the number of samples whose actual result is negative and whose predicted result is negative.
After the business saving probability model is trained, its performance can be tested on a test set that did not participate in model training. The trained saving prediction model produces a probability value for each sample point in the test set. The predicted values of all samples are sorted from high to low; each probability value is then taken in turn as a threshold, all samples greater than or equal to the threshold are treated as positive and samples below the threshold as negative, and the FPR and TPR are calculated with the above formulas, giving one point on the ROC curve (FPR on the abscissa, TPR on the ordinate). All points obtained in this way are plotted and connected in order to obtain the ROC curve, and the area under the ROC curve is the AUC value.
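As a sketch, the ROC curve and AUC value on the held-out test set can be computed as follows; scikit-learn's roc_curve and roc_auc_score implement the threshold-sweeping procedure described above (the labels and probabilities shown are placeholders).

```python
# Sketch: compute the ROC curve and the AUC value on a test set that did
# not take part in training (predicted probabilities are placeholders).
from sklearn.metrics import roc_curve, roc_auc_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]                  # actual saving results
y_prob = [0.9, 0.2, 0.7, 0.6, 0.4, 0.1, 0.8, 0.55] # model saving probabilities

fpr, tpr, thresholds = roc_curve(y_true, y_prob)   # points of the ROC curve
auc = roc_auc_score(y_true, y_prob)                # area under the ROC curve
```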
The value of k can be chosen according to the actual situation; in general, k is 5. LightGBM can be used as the classifier, and its parameters include num_iterations, max_depth, num_leaves, learning_rate, and so on. The step size used when adjusting each parameter of the classifier can be the same or different, and the specific values can be determined according to the actual situation, which is not limited herein.
In step d, the parameter adjustment strategy specifies the order in which parameters are adjusted and their step sizes. In a specific implementation, one parameter may be adjusted at a time, or several parameters may be adjusted together; the adjustment step size of each parameter may be the same or different, and the parameter adjustment strategy is not specifically limited herein.
In step e, when AUC is used as the performance evaluation index, an average performance evaluation index value is computed for each combination of parameter values; the larger the average value, the more accurate the classifier, and the parameter values corresponding to the maximum average performance evaluation index value are taken as the parameter values of the classifier.
In step f, the selected classifier parameter values are substituted into the classifier with unknown parameters to obtain the business saving probability model.
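Steps a to g above correspond closely to a standard grid search with k-fold cross validation; a minimal sketch using scikit-learn's GridSearchCV with a LightGBM classifier is shown below, where the data and the parameter grid are assumptions for illustration.

```python
# Sketch (assumed data and grid): tune LightGBM parameters with k-fold
# cross validation and grid search, keep the best parameters (steps a-e),
# then refit the saving probability model on all clue samples (steps f-g).
import lightgbm as lgb
import numpy as np
from sklearn.model_selection import GridSearchCV

X = np.random.rand(500, 20)               # clue sample vectors (placeholder)
y = np.random.randint(0, 2, 500)          # saving success labels (placeholder)

param_grid = {
    "n_estimators": [100, 200],           # corresponds to num_iterations
    "max_depth": [4, 6, 8],
    "num_leaves": [15, 31],
    "learning_rate": [0.05, 0.1],
}

search = GridSearchCV(
    estimator=lgb.LGBMClassifier(),
    param_grid=param_grid,
    scoring="roc_auc",                    # AUC as the performance evaluation index
    cv=5,                                 # k = 5 folds
)
search.fit(X, y)

best_params = search.best_params_         # step e: parameters with the highest mean AUC
saving_model = lgb.LGBMClassifier(**best_params).fit(X, y)   # steps f-g
```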
In this embodiment, in order to improve the accuracy of the traffic saving probability model, as shown in fig. 5, before the step 201 is implemented, the method further includes:
step 2001, counting the missing rate of the features in the historical clue samples;
step 2002, deleting historical clue samples with the characteristic missing rate larger than a preset value;
and step 2003, judging whether each feature in the historical clue sample exceeds a value range, and if so, limiting the feature in the value range.
The missing rate of features in step 2001 refers to the proportion of features with no value to the total features. The predetermined value in step 2002 can be set according to practical situations, and is not limited herein. Step 2003 may be performed by setting the characteristic to an upper limit or a lower limit, specifically, if the characteristic value exceeds the upper limit, the characteristic value is set to the upper limit, and if the characteristic value exceeds the lower limit, the characteristic value is set to the lower limit. Through step 2003, sample data can be normalized, and the influence of data entry errors on model training accuracy is prevented.
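A rough sketch of these two preprocessing steps is shown below; the 0.7 missing-rate limit and the age bounds are assumptions, not values mandated by this disclosure.

```python
# Sketch (assumed thresholds): drop historical clue samples whose feature
# missing rate exceeds a preset value, then clip a feature to its value range.
import numpy as np
import pandas as pd

samples = pd.DataFrame({
    "age": [25, 130, np.nan, 40],
    "premium": [500.0, np.nan, np.nan, 800.0],
    "channel": ["baidu", None, None, "360"],
})

# Steps 2001/2002: missing rate per sample = share of features without a value
missing_rate = samples.isna().mean(axis=1)
samples = samples[missing_rate <= 0.7]

# Step 2003: limit a feature to its value range (e.g. age in [0, 100])
samples["age"] = samples["age"].clip(lower=0, upper=100)
```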
In one embodiment of the present invention, in order to further improve the saving efficiency and the success rate, the thread data distribution method, in addition to the above step 110 to the above step 130, as shown in fig. 6, further includes:
1401, establishing a corresponding relationship between the clue data to be distributed and the processing personnel according to the clue data and the portrait of the processing personnel;
and 1402, distributing the thread data to corresponding processing personnel according to the priority level sequence of the thread data and the corresponding relation between the thread data to be distributed and the processing personnel.
Specifically, in step 1401, the portrait of a processing person includes: the person's saving success rate, the service types the person is good at handling, and the person's basic information (age, gender, region, etc.). The correspondence between the clue data to be distributed and the processing personnel can be established according to the service types a person is good at in the processing-person portrait and the service type in the clue data.
When step 1402 is performed, the thread data may be obtained according to the order of priority levels of the thread data, the processing staff corresponding to the obtained thread data may be determined according to the correspondence between the thread data to be distributed and the processing staff, and the thread data may be sent to the processing staff.
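One simple way to realize the correspondence of steps 1401 and 1402 is sketched below; the portrait fields and the matching rule are illustrative assumptions only.

```python
# Sketch (assumed fields and rule): map each clue to a processing person whose
# portrait says they are good at that clue's service type, then hand out clues
# in descending grade priority.
clues = [
    {"clue_id": 1, "service_type": "medical", "grade": "A"},
    {"clue_id": 2, "service_type": "accident", "grade": "B"},
]
handlers = [
    {"name": "agent_1", "good_at": {"medical"}, "saving_rate": 0.42},
    {"name": "agent_2", "good_at": {"accident", "medical"}, "saving_rate": 0.35},
]

def match_handler(clue, handlers):
    # prefer a handler good at this service type, break ties by saving rate
    candidates = [h for h in handlers if clue["service_type"] in h["good_at"]]
    return max(candidates, key=lambda h: h["saving_rate"])["name"] if candidates else None

for clue in sorted(clues, key=lambda c: c["grade"]):      # "A" before "B"
    clue["handler"] = match_handler(clue, handlers)
```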
In an embodiment of this document, in order to avoid a situation where processing personnel miss the optimal window for saving because of a heavy workload, as shown in fig. 7, the clue data distribution method further includes, on the basis of the contents shown in fig. 1 and fig. 6:
step 150, counting the data volume of the thread data which is not processed by the processing personnel;
step 160, calculating the residual processing capacity of each processing personnel according to the upper limit processing capacity of the processing personnel and the data capacity of unprocessed clue data;
step 170, judging whether the remaining processing capacity of every processing person is smaller than the preset value; if so, executing step 180; if the remaining processing capacity of any processing person is greater than or equal to the preset value, no processing is needed;
step 180, readjusting each level probability interval to redetermine the level of the clue data to be distributed.
According to the embodiment, the probability intervals of all levels are adjusted according to the residual processing capacity of the processing personnel, so that the clue data with high saving probability can be timely distributed and processed, and the saving efficiency and the saving rate are further improved.
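The capacity check of steps 150 to 180 could be sketched as follows; the capacity figures and the tightened probability intervals are purely illustrative.

```python
# Sketch (assumed numbers): when every handler's remaining capacity falls
# below a preset value, tighten the grade probability intervals so that only
# the clues most likely to be saved are distributed first.
handlers = {"agent_1": {"limit": 50, "pending": 48},
            "agent_2": {"limit": 40, "pending": 39}}
MIN_REMAINING = 5

remaining = {name: h["limit"] - h["pending"] for name, h in handlers.items()}

grade_intervals = {"A": (0.9, 1.0), "B": (0.8, 0.9), "C": (0.0, 0.8)}
if all(r < MIN_REMAINING for r in remaining.values()):
    # readjust the probability interval of each grade (illustrative values)
    grade_intervals = {"A": (0.95, 1.0), "B": (0.9, 0.95), "C": (0.0, 0.9)}
```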
Based on the same inventive concept, a clue data distribution device is also provided herein, as described in the following embodiments. Since the principle by which the clue data distribution device solves the problem is similar to that of the clue data distribution method, the implementation of the device can refer to the method, and repeated details are not repeated.
Specifically, as shown in fig. 8, the thread data distribution device includes:
a conversion module 810, configured to input a vector format according to the service saving probability model, and determine a thread vector to be allocated to the thread data to be allocated, where the thread data to be allocated includes: quitting the service information and the client behavior information related to the service;
the calculation module 820 is configured to input the cue vector to be allocated into the service saving probability model, and calculate the saving success probability of the cue data to be allocated, where the service saving probability model is obtained by training a historical cue sample and the saving success probability thereof, and the historical cue sample includes the cue data related to the preset service and subjected to saving operation;
the grade dividing module 830 is configured to divide the grade of the thread data to be allocated according to each grade probability interval according to the success probability of saving the thread data to be allocated;
the distributing module 840 is configured to distribute the thread data to be distributed to corresponding processing personnel according to the priority order of the grades of the thread data to be distributed.
The embodiment can replace a manual random distribution mode, and saves the required labor cost. And under the condition of limited manpower, the effective distribution of the clue data to be distributed can be realized, and the client with high business quitting and saving probability can be saved in time, so that the saving efficiency and the saving rate are improved.
To illustrate the technical solution herein more clearly, the procedure of training and using the surrender-saving probability model is described below, taking surrender retention as an example. It can be divided into two stages: a preparation stage and an application stage.
(I) Preparation stage
The objective here is to predict, for each piece of clue data, the probability that the surrender can be successfully saved, and to decide whether to perform the saving operation on that clue data by adjusting a threshold. The specific flow is shown in fig. 9.
Step 901, selecting preset insurance product types (risk categories) according to business requirements, and extracting the historical clue data of all customers in different channels from the business database, where the historical clue data of each customer constitutes one historical clue sample.
Because the historical clue data contains many fields that are irrelevant to modeling or redundant, feature data for analysis and modeling is extracted from the business database, mainly comprising surrender service information and service-related customer behavior information. The surrender service information comprises policy information, time information (application time, surrender time and policy start time), the underwriting city, and the basic information (age, gender and social insurance status) of the applicant and the insured person. The service-related customer behavior information includes claim settlement information, whether surrender operations occurred before, whether previous surrenders were successfully saved, whether the policy has settled claims, and the like.
Step 902, screening out, from the customers' historical clue data, the clue data that has undergone the surrender-saving operation, and performing missing-value processing and outlier processing on the screened clue data, specifically:
Missing values are processed as follows: the missing rate of each feature is calculated, and features with a missing rate of more than 70% are deleted.
Outliers are processed as follows: for a sample in which a feature value is too large, too small, or inconsistent with the data distribution of the business scenario, the sample is deleted directly, or the value is replaced with the mean of the feature or with an upper or lower limit value.
In step 903, the clue data obtained in step 902 is divided into two classes, saving success and saving failure, according to the definition of whether the save was successful, and the two classes are labeled 1 and 0 respectively.
In step 904, the thread data of step 902 is processed by feature engineering.
Specifically, feature engineering includes aggregating and digitizing the clue data. Continuous feature data is discretized; for example, the age feature is divided into several age intervals, with 1 for 0-10 years, 2 for 10-20 years, and so on. For categorical features, the data volume distribution of each category within the feature is determined first; for each categorical feature, categories whose data volume is below a preset threshold are merged into a single category, and digitization is then performed with methods such as LabelEncoder, OneHotEncoder or get_dummies; for example, the insured person's city "Wuhan" can be digitized as 1. Some aggregation features, such as the average age and maximum age of the insured persons, are constructed from the purchase information of the applicant and the insured persons.
In step 905, the feature data subjected to the feature engineering in step 904 is randomly divided into two parts according to a certain proportion, wherein one part of the data set is used as a test set for testing the performance of the model, and the other part of the data set is used as a training set for training the model. It should be noted that the amount of data in the test set and the amount of data in the training set may be the same or different, and this is not limited herein.
In step 906, the LightGBM is used to train the classification model, the parameters of the LightGBM are adjusted by using a five-fold cross validation and grid search method, and the AUC value on the test set is used as the performance evaluation index of the classification model. The specific implementation steps comprise:
firstly, a parameter range is determined for each parameter of LightGBM to be tuned, and the parameters are adjusted in turn by step size; the core parameters to be tuned include num_iterations, max_depth, num_leaves, learning_rate and the like;
then, the training set used for model training in step 905 is divided equally into 5 groups of clue sample vectors; each group is used once as the validation set while the clue sample vectors of the remaining groups are used as the training set to train the LightGBM model, and among all candidate parameter values the one with the highest average AUC value is selected as the optimal parameter;
finally, all the parameters to be tuned are adjusted according to the above steps, giving N optimal parameter values (N being the number of parameter-tuning rounds). The optimal parameters of the LightGBM model are fixed, the model is trained with the training set of step 905, and the AUC value of the test set on this model is used as the final evaluation index; the trained model is the surrender-saving probability model.
(II) Application stage
As shown in fig. 10, the thread data distribution process includes:
Step 1001, for a batch of clue data for which a surrender has been applied, the saving success probability of each piece of clue data can be obtained from the surrender-saving probability model trained in the preparation stage; the higher the probability value, the more likely the clue is to be saved successfully.
In step 1002, a service may flexibly set a probability interval, and thread data is divided into a plurality of levels, for example: more than 0.9 is a class A thread, [0.9,0.8] is a class B thread, [0.8,0.7] is a class C thread, [0.7,0.6] is a class D thread, [0.6,0.5] is a class E thread, and [0.5,0] is a class F thread.
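The grading of step 1002 amounts to mapping each saving success probability onto the configured intervals, for example as in the sketch below (boundary handling is illustrative):

```python
# Sketch: map saving success probabilities to the grade intervals of step 1002.
def grade_clue(prob: float) -> str:
    bounds = [(0.9, "A"), (0.8, "B"), (0.7, "C"), (0.6, "D"), (0.5, "E")]
    for lower, grade in bounds:
        if prob >= lower:
            return grade
    return "F"

probs = [0.93, 0.81, 0.40]
grades = [grade_clue(p) for p in probs]   # -> ["A", "B", "F"]
```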
In step 1003, the service may combine the existing manpower to perform thread distribution according to the thread level, for example, first perform a fallback operation on the class a thread.
In this embodiment, the LightGBM method is used for modeling, five-fold cross validation and grid search are used for parameter tuning, and the AUC value is used as the performance evaluation index, which helps avoid overfitting of the model and improves its generalization ability.
In this embodiment, the surrender-saving probability model is established with an artificial intelligence method, the surrender clue data is graded and distributed intelligently, and the method is applied to the surrender-retention system of the insurance industry, replacing manual random clue distribution. This greatly reduces the labor cost of surrender retention, improves the saving rate, and overcomes the drawbacks of random clue data distribution at the current stage, namely heavy labor consumption and an uncertain saving rate.
In an embodiment of this document, a computer device is further provided for implementing the thread data distribution method and the process of establishing the business saving probability model. In particular, as shown in fig. 11, computer device 1102 may include one or more processors 1104, such as one or more Central Processing Units (CPUs), each of which may implement one or more hardware threads. The computer device 1102 may also include any memory 1106 for storing any kind of information, such as code, settings, data, etc. For example, and without limitation, memory 1106 may include any one or more of the following in combination: any type of RAM, any type of ROM, flash memory devices, hard disks, optical disks, etc. More generally, any memory may use any technology to store information. Further, any memory may provide volatile or non-volatile retention of information. Further, any memory may represent fixed or removable components of computer device 1102. In one case, when the processor 1104 executes the associated instructions, which are stored in any memory or combination of memories, the computer device 1102 can perform any of the operations of the associated instructions. The computer device 1102 also includes one or more drive mechanisms 1108, such as a hard disk drive mechanism, an optical disk drive mechanism, etc., for interacting with any memory.
Computer device 1102 may also include an input/output module 1110(I/O) for receiving various inputs (via input device 1112) and for providing various outputs (via output device 1114). One particular output mechanism may include a presentation device 1116 and an associated graphical user interface 1118 (GUI). In other embodiments, input/output module 1110(I/O), input device 1112, and output device 1114 may also be excluded, as only one computer device in a network. Computer device 1102 can also include one or more network interfaces 1120 for exchanging data with other devices via one or more communication links 1122. One or more communication buses 1124 couple the above-described components together.
Communication link 1122 may be implemented in any manner, e.g., via a local area network, a wide area network (e.g., the Internet), a point-to-point connection, etc., or any combination thereof. Communications link 1122 may include any combination of hardwired links, wireless links, routers, gateway functions, name servers, etc., governed by any protocol or combination of protocols.
Corresponding to the methods in fig. 1-7, the embodiments herein also provide a computer-readable storage medium having stored thereon a computer program, which, when executed by a processor, performs the steps of the above-described method.
Embodiments herein also provide computer-readable instructions which, when executed by a processor, cause the processor to perform the methods shown in fig. 1-7.
It should be understood that, in various embodiments herein, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments herein.
It should also be understood that, in the embodiments herein, the term "and/or" is only an association relation describing associated objects, meaning that three kinds of relations may exist. For example, A and/or B may represent: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
Those of ordinary skill in the art will appreciate that the elements and algorithm steps of the examples described in connection with the embodiments disclosed herein may be embodied in electronic hardware, computer software, or combinations of both, and that the components and steps of the examples have been described above generally in terms of their functionality in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided herein, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may also be an electric, mechanical or other form of connection.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purposes of the embodiments herein.
In addition, functional units in the embodiments herein may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the present invention may be implemented in a form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The principles and embodiments of this document are explained herein using specific examples, which are presented only to aid in understanding the methods and their core concepts. Meanwhile, those of ordinary skill in the art may, according to the ideas of this document, make changes to the specific implementation and the application scope. In summary, this description should not be understood as limiting this document.

Claims (10)

1. A clue data distribution method, comprising:
determining a to-be-distributed clue vector of the clue data to be distributed according to an input vector format of a business saving probability model, wherein the clue data to be distributed comprises: service surrender information and customer behavior information related to the service;
inputting the clue vector to be distributed into the business saving probability model, and calculating to obtain the saving success probability of the clue data to be distributed, wherein the business saving probability model is obtained by training a historical clue sample and the saving success probability thereof, and the historical clue sample comprises clue data which is related to preset business and is subjected to saving operation;
according to the saving success probability of the clue data to be distributed, dividing the clue data to be distributed into grades according to each grade probability interval;
and distributing the clue data to be distributed to corresponding processing personnel according to the priority order of the grades of the clue data to be distributed.
2. The clue data distribution method according to claim 1, wherein the training process of the business saving probability model comprises:
performing aggregation and digital processing on the features in the historical clue samples to obtain historical clue sample vectors;
carrying out digital processing on the saving result corresponding to the historical clue sample vector to obtain the saving success probability of the historical clue sample vector;
and training the service saving probability model by using the historical clue sample vector and the saving success probability thereof and adopting a k-fold cross validation and grid search algorithm.
3. The method of claim 2, wherein aggregating and digitizing the features in the historical clue samples further comprises:
counting the missing rate of the features in the historical clue samples, and deleting the historical clue samples with the feature missing rate larger than a preset value;
and judging whether each feature in the historical clue sample exceeds a value range, and if so, limiting the feature in the value range.
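A minimal pandas sketch of the preprocessing recited in claim 3 is given below, for illustration only; the missing-rate threshold and the per-feature value ranges are assumptions, not values disclosed in this document.

import pandas as pd

def clean_historical_samples(df, max_missing_rate=0.5, value_ranges=None):
    # Per-sample feature missing rate; drop samples above the preset threshold.
    missing_rate = df.isna().mean(axis=1)
    df = df.loc[missing_rate <= max_missing_rate].copy()
    # Clamp each feature to its allowed value range, e.g. {"age": (0, 120)}.
    for col, (lo, hi) in (value_ranges or {}).items():
        df[col] = df[col].clip(lower=lo, upper=hi)
    return df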
4. The method of claim 2, wherein the historical clue samples comprise continuous and categorical features, each categorical feature comprising a plurality of categories, and wherein aggregating and digitizing the features in the historical clue samples to obtain the historical clue sample vectors comprises:
the following processing is performed on the historical cue samples: determining the data quantity distribution of each category in each category type feature, and setting the category with the data quantity less than a preset threshold value as a new category; carrying out interval division processing on each continuous type feature;
carrying out digital processing on the processed historical clue samples to obtain characteristic values of the historical clue samples;
performing aggregation processing on the characteristic values of the historical clue samples to obtain aggregated characteristic values;
and obtaining a clue sample vector according to the characteristic value and the aggregation characteristic value of the historical clue sample.
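The following illustrative sketch reads claim 4 as rare-category merging, interval binning of continuous features, digitization, and a simple per-sample aggregation; the column lists, rarity threshold, bin count, and the choice of a mean as the aggregated feature value are assumptions for illustration.

import pandas as pd

def build_sample_vectors(df, categorical_cols, continuous_cols, min_count=50, n_bins=10):
    df = df.copy()
    for col in categorical_cols:
        counts = df[col].value_counts()
        rare = counts[counts < min_count].index
        # Categories with too little data are merged into one new category.
        df[col] = df[col].where(~df[col].isin(rare), "OTHER")
        # Digitize the categorical feature into integer codes.
        df[col] = df[col].astype("category").cat.codes
    for col in continuous_cols:
        # Interval division of each continuous feature into bin indices.
        df[col] = pd.cut(df[col], bins=n_bins, labels=False)
    # Example aggregation: per-sample mean of the digitized feature values.
    df["agg_mean"] = df[categorical_cols + continuous_cols].mean(axis=1)
    return df[categorical_cols + continuous_cols + ["agg_mean"]].to_numpy()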
5. The method of claim 2, wherein training the traffic saving probability model using k-fold cross validation and grid search algorithms using the historical cue sample vectors and saving success probabilities thereof comprises:
a. dividing the clue sample vectors into k groups of clue sample vectors, and executing a process of training a classifier once for each group of the clue sample vectors, wherein the process of training the classifier each time comprises the following steps: taking a group of clue sample vectors as a verification set, and taking the other groups of clue sample vectors as training sets; training a classifier by utilizing the training set and the saving success probability of each clue sample vector in the training set; respectively inputting each clue sample vector in the verification set into a trained classifier, and calculating to obtain the successful saving probability of each clue sample vector in the verification set; calculating to obtain a performance evaluation index value of the classifier aiming at the verification set according to the successful saving probability of each clue sample vector in the verification set;
b. averaging the performance evaluation index values obtained from the k rounds of training on the k groups of clue sample vectors in step a to obtain an average performance evaluation index value;
c. judging whether the parameter values of the classifier are adjusted completely according to the parameter adjustment strategy, if not, executing the step d, and if so, executing the step e;
d. adjusting the parameters of the classifier according to a preset parameter adjustment strategy, and then returning to continue executing the steps a to c;
e. screening out the parameter value of the classifier corresponding to the highest average performance evaluation index value;
f. determining a business saving probability model by using the parameter values of the screened classifiers;
g. and training a business saving probability model by using the clue sample vector and the saving success probability thereof.
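For illustration, the sketch below writes out steps a to g as an explicit k-fold loop over an example parameter grid, with AUC as the performance evaluation index; the classifier type, the grid, and k are assumptions, and this is one possible reading rather than the patented implementation. X and y are assumed to be numpy arrays of clue sample vectors and 0/1 saving results.

import numpy as np
import lightgbm as lgb
from itertools import product
from sklearn.model_selection import KFold
from sklearn.metrics import roc_auc_score

def train_saving_model(X, y, k=5):
    param_grid = {"num_leaves": [15, 31], "learning_rate": [0.05, 0.1]}
    best_params, best_score = None, -np.inf
    for values in product(*param_grid.values()):              # steps c/d: walk the parameter grid
        params = dict(zip(param_grid.keys(), values))
        scores = []
        for train_idx, val_idx in KFold(n_splits=k, shuffle=True).split(X):
            clf = lgb.LGBMClassifier(**params)                 # step a: train on k-1 groups
            clf.fit(X[train_idx], y[train_idx])
            probs = clf.predict_proba(X[val_idx])[:, 1]        # saving success probability on the validation set
            scores.append(roc_auc_score(y[val_idx], probs))    # per-fold performance evaluation index
        avg = float(np.mean(scores))                           # step b: average index value over k folds
        if avg > best_score:                                   # step e: keep the best parameter values
            best_params, best_score = params, avg
    final = lgb.LGBMClassifier(**best_params)                  # step f: model with the screened parameters
    final.fit(X, y)                                            # step g: train on all clue sample vectors
    return final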
6. The method of claim 1, further comprising:
establishing a corresponding relation between the clue data to be distributed and the processing personnel according to the clue data to be distributed and the portrait of the processing personnel;
according to the priority order of the grades of the clue data to be distributed, distributing the clue data to be distributed to corresponding processing personnel further comprises:
distributing the clue data to be distributed to corresponding processing personnel according to the priority level sequence of the clue data to be distributed and the corresponding relation between the clue data to be distributed and the processing personnel.
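Claim 6 does not fix a concrete matching rule, so the sketch below shows only one hypothetical reading: each clue is mapped to the handler whose profile (portrait) tags overlap most with the clue's tags. The tag representation and the overlap score are assumptions.

def build_correspondence(clues, handler_profiles):
    # clues: {clue_id: set(tags)}; handler_profiles: {handler_id: set(tags)}.
    mapping = {}
    for clue_id, tags in clues.items():
        # Pick the handler whose profile tags overlap most with this clue's tags.
        best = max(handler_profiles, key=lambda h: len(tags & handler_profiles[h]))
        mapping[clue_id] = best
    return mapping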
7. The method of claim 1, further comprising:
counting the data volume of the clue data which has not been processed by the processing personnel;
calculating the remaining processing capacity of each processing person according to the upper-limit processing capacity of the processing person and the data volume of the unprocessed clue data;
and if the remaining processing capacity of each processing person is less than a preset value, readjusting the grade probability intervals.
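One possible, purely illustrative implementation of claim 7 is sketched below: when every handler's remaining capacity falls below a preset value, the grade boundaries are raised slightly so that fewer clues land in the grades processed first. The capacity bookkeeping, the threshold, and the adjustment step size are assumptions.

def maybe_readjust_intervals(upper_limits, unprocessed_counts, boundaries, min_remaining=10, step=0.02):
    # upper_limits: {handler_id: max clues}; unprocessed_counts: {handler_id: unprocessed clues};
    # boundaries: descending grade thresholds, e.g. [0.9, 0.8, 0.7, 0.6, 0.5].
    remaining = {h: upper_limits[h] - unprocessed_counts.get(h, 0) for h in upper_limits}
    if all(r < min_remaining for r in remaining.values()):
        # Raise each threshold slightly (capped at 1.0) so each grade interval narrows.
        boundaries = [min(b + step, 1.0) for b in boundaries]
    return boundaries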
8. A clue data distribution apparatus, comprising:
a conversion module, used for determining a to-be-distributed clue vector of the clue data to be distributed according to the input vector format of the business saving probability model, wherein the clue data to be distributed comprises: service surrender information and customer behavior information related to the service;
the calculation module is used for inputting the clue vector to be distributed into the business saving probability model and calculating the saving success probability of the clue data to be distributed, wherein the business saving probability model is obtained by training a historical clue sample and the saving success probability thereof, and the historical clue sample comprises clue data which is related to a preset business and is subjected to saving operation;
the grade division module is used for dividing the grade of the clue data to be distributed according to each grade probability interval according to the successful saving probability of the clue data to be distributed;
and the distribution module is used for distributing the clue data to be distributed to corresponding processing personnel according to the priority order of the grades of the clue data to be distributed.
9. A computer device comprising a memory, a processor, and a computer program stored on the memory, wherein the computer program, when executed by the processor, performs the instructions of the method of any one of claims 1-7.
10. A computer storage medium on which a computer program is stored, characterized in that the computer program, when being executed by a processor of a computer device, executes instructions of a method according to any one of claims 1-7.
CN202111383050.9A 2021-11-22 2021-11-22 Cable data distribution method, device, equipment and storage medium Pending CN114139898A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111383050.9A CN114139898A (en) 2021-11-22 2021-11-22 Cable data distribution method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111383050.9A CN114139898A (en) 2021-11-22 2021-11-22 Cable data distribution method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114139898A true CN114139898A (en) 2022-03-04

Family

ID=80390570

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111383050.9A Pending CN114139898A (en) 2021-11-22 2021-11-22 Cable data distribution method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114139898A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination