CN1897573A - Telecommunication customer loss forecasting method based on neural-network improved algorithm - Google Patents

Telecommunication customer loss forecasting method based on neural-network improved algorithm

Info

Publication number
CN1897573A
CN1897573A CNA2006100857689A CN200610085768A
Authority
CN
China
Prior art keywords
model
algorithm
input parameter
error
neural net
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CNA2006100857689A
Other languages
Chinese (zh)
Inventor
黄晓颖
薛庆童
余志刚
庄学阳
李岩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
LINKAGE SYSTEM INTEGRATION CO Ltd
Original Assignee
LINKAGE SYSTEM INTEGRATION CO Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by LINKAGE SYSTEM INTEGRATION CO Ltd filed Critical LINKAGE SYSTEM INTEGRATION CO Ltd
Priority to CNA2006100857689A priority Critical patent/CN1897573A/en
Publication of CN1897573A publication Critical patent/CN1897573A/en
Pending legal-status Critical Current

Abstract

The method comprises: first, a customer churn model is built with a BP neural network; an improved error propagation factor is then adopted, and the data are normalized with the formula x' = 0.8(x − x_min)/(x_max − x_min) + 0.1, where x is an input parameter, x_min is the minimum value of that parameter, x_max is its maximum value, and x' is the normalized input parameter.

Description

Telecom customer churn prediction method based on an improved neural-network algorithm
Technical field
The invention belongs to the field of data mining for telecom operators and relates to a telecom customer churn prediction method based on an improved neural-network algorithm.
Background technology
With the development of competition in the telecommunications market, customers have ever more choice among telecom products and operators, and competition among operators for customers grows increasingly fierce. Faced with this competitive environment, the traditional, passive service model of a telecom operator can no longer satisfy customers' needs or answer competitors' challenges. At the same time, traditional advantages such as network and technology can no longer open a gap between operators and cannot create a differentiated competitive advantage. Therefore, to cultivate a new differentiated advantage in the new market situation, a telecom operator should be customer-centric, understand its customers deeply, guide them, and retain them.
A churn prediction model can be built from customers' behavioural characteristics before they leave the network, so that customers about to churn can be identified and retention actions taken. Likewise, in an existing CRM system, a prediction model can be built from the effect of salespeople's customer-visit behaviour on deal success rates, to improve the quality of customer visits; from past customer-visit expenses of the sales force, to predict the next period's expense budget; or from the stage a prospect has reached in the sales funnel and whether a deal was closed, to predict the investment required for the marketing department's campaigns.
Patent CN02822042.0, on patient data mining, provides a data mining framework for extracting high-quality structured clinical information. That framework comprises a data miner (350) that mines medical information from computerized patient records (CPR) (310) according to domain-specific knowledge contained in a knowledge base (330). The data miner (350) includes components for extracting information from the CPR, components (354) for combining all available evidence over time in a principled way, and components (356) for drawing inferences from this combination. The mined medical information is stored in a structured CPR (380), which may be a data warehouse. Such a knowledge-base-driven data miner is not suited to building a churn prediction model from customers' behavioural characteristics before they leave the network.
The BP neural network model is a commonly used prediction model (see Fig. 1). Its operating principle is as follows: a training pattern is presented to the input layer and propagated forward through the hidden layer via the connection weights until the network output is obtained. Each neuron in the network computes the weighted sum of its inputs and passes the result through a nonlinear activation function.
Once the output of the output layer has been obtained, it is compared with the target value of the corresponding training pattern, and an error value is computed with a predefined network error function. If the error is below the tolerance, training is finished; otherwise the connection weights of the network are adjusted by the gradient descent method.
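For reference, a minimal Python/NumPy sketch of this standard BP training cycle (forward pass, error computation, gradient-descent weight correction) for a single training pattern; the single hidden layer, sigmoid activation and learning rate here are assumptions of the sketch, not values taken from the patent:

import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def bp_step(x, d, w_hidden, w_out, lr=0.1):
    # Forward pass: the pattern x is propagated through the connection weights.
    v = sigmoid(x @ w_hidden)                 # hidden-layer output
    y = sigmoid(v @ w_out)                    # network output
    err = d - y                               # desired output minus actual output
    # Backward pass: gradient-descent correction of the connection weights.
    delta_out = err * y * (1.0 - y)           # classical error propagation factor
    delta_hid = (delta_out @ w_out.T) * v * (1.0 - v)
    w_out += lr * np.outer(v, delta_out)
    w_hidden += lr * np.outer(x, delta_hid)
    return 0.5 * np.sum(err ** 2)             # error value for this pattern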
Direct use of the existing BP algorithm is intuitive, simple and easy to implement, but in practical applications it has the following problems:
1. Slow convergence, which limits further application of the method.
2. Local minima, which can trap the training so that the error cannot be reduced further, making the model unusable for predicting telecom customer churn.
Summary of the invention
The object of the invention is to overcome the two shortcomings above by proposing a method with fast convergence and a small post-training error, for use in predicting telecom customer churn.
The technical solution of the invention is: the model is built with the improved method. The error propagation factor is (desired output − actual output) × f(previous batch error), where f(u) = 1/(1 + exp(−λu)). The weight adjustment algorithm is Δw(t) = [S(t)/(S(t−1) − S(t))]·Δw(t−1), where S(t) and S(t−1) are the values at the current and previous step of the partial derivative of the error with respect to the weight, ∂E/∂w. The data are normalized with the formula x' = 0.8(x − x_min)/(x_max − x_min) + 0.1. The model is trained with these rules; the post-training error is small, and the trained model is then used for prediction.
A telecom customer churn prediction method based on an improved neural-network algorithm builds the model with the improved method, that is, modelling by combining the neural network with a rate-constant model. First, the BP neural network method is used to establish the customer churn model. The error propagation factor (desired output − actual output) × f(previous batch error) is then adopted, where f(u) = 1/(1 + e^(−λu)) and λ is a constant. The weight adjustment algorithm is Δw(t) = [S(t)/(S(t−1) − S(t))]·Δw(t−1), where S(t) and S(t−1) are the values at the current and previous step of the partial derivative of the error with respect to the weight, ∂E/∂w. The data are normalized with the formula x' = 0.8(x − x_min)/(x_max − x_min) + 0.1, where x is an input parameter, x_min is its minimum value, x_max its maximum value, and x' the normalized input parameter, and the model is trained accordingly. A correction factor is then added to the model; its value is obtained with an optimization algorithm from the actual operating and analysis data of the industrial reactor, and the final model is obtained.
The invention builds the model with a neural network and chooses the connection weights between the neurons as the independent variables of the model.
The customer attribute values after normalization are used as the input of the model; the normalization formula is:
x' = 0.8(x − x_min)/(x_max − x_min) + 0.1, where x is an input parameter, x_min is its minimum value, x_max its maximum value, and x' the normalized input parameter. The invention adjusts the weights with a second-order method:
Δw(t) = [S(t)/(S(t−1) − S(t))]·Δw(t−1), to which a correction factor is added. The error propagation factor involved in the weight adjustment is (desired output − actual output) × f(previous batch error).
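For illustration, a minimal Python sketch of these two modified rules; it assumes S(t) denotes the current partial derivative of the error with respect to a single weight, and the constant λ and the small guard term eps are implementation choices, not values given by the patent:

import numpy as np

def propagation_factor(prev_batch_error, lam=1.0):
    # f(E) = 1 / (1 + exp(-lambda * E)); for E > 0 the value lies in (0.5, 1)
    # and grows with the previous batch error.
    return 1.0 / (1.0 + np.exp(-lam * prev_batch_error))

def output_delta(desired, actual, prev_batch_error, lam=1.0):
    # Error propagation factor of the invention:
    # (desired output - actual output) * f(previous batch error).
    return (desired - actual) * propagation_factor(prev_batch_error, lam)

def weight_step(s_now, s_prev, delta_w_prev, eps=1e-12):
    # Second-order style adjustment: Delta w(t) = S(t) / (S(t-1) - S(t)) * Delta w(t-1).
    # eps guards against a vanishing denominator (an added assumption, not from the patent).
    return s_now / (s_prev - s_now + eps) * delta_w_prev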
The effect of the invention is shown by the following experiments:

Experiment      Training iterations   Time          Error
Experiment 1    12045                 4 min 8 s     0.000999939
Experiment 2    1169                  28 s          0.000992806
Experiment 3    186                   4 s           0.000985977

Notes: Experiment 1 uses the traditional BP algorithm; Experiment 2 uses the new error propagation factor; Experiment 3 uses the new error propagation factor together with the new weight adjustment algorithm.
Demonstration:
According to the gradient-descent character of the BP algorithm, the weight correction is proportional to the partial derivative of the error with respect to the weight, that is:

Δw3[i][j] ∝ ∂E/∂w3[i][j]

The larger the weight correction, the faster the convergence, so consider:

∂E/∂w3[i][j] = (∂E/∂ε)·(∂ε/∂y[j])·(∂y[j]/∂yo[j])·(∂yo[j]/∂w3[i][j])
             = −ε·f′(yo[j])·v[i] = −ε·y[j]·(1 − y[j])·v[i]

It can be seen that the weight correction Δw3[i][j] is proportional to ε·y[j]·(1 − y[j])·v[i]. The quantity ε·y[j]·(1 − y[j]) is also back-propagated, layer by layer, to the input layer in the form of the error propagation factor. Clearly, the larger ε·y[j]·(1 − y[j]) is, the larger the weight correction and the faster the convergence. In the BP neural network model adopted here, the input layer has 21 nodes, corresponding to attributes relevant to customer churn such as customer type, time on the network, number of service suspensions and average telephone charges; the hidden layer has l nodes (l = 2 to 20); and the output layer has 1 node.
Analysis:
1. In general the learning rate should be related to the output error of the system: in the early phase of learning, when the error is large, the algorithm should use a larger learning rate to speed up convergence, while in the later phase, when the error is small, it should use a smaller learning rate to avoid instability of the network. But y[j]·(1 − y[j]) does not change with the error, so this requirement is not well satisfied.
2. Consider the function y[j]·(1 − y[j]): y takes values in (0, 1), and the function reaches its maximum of 0.25 at y = 0.5. That is, when the output values of the training samples are close to 0.5, the weight adjustment step is largest and convergence is comparatively fast; but in practice most sample outputs are close to the two extremes, 0 and 1, which makes convergence slow.
3. The factor ε·y[j]·(1 − y[j]) refers only to the current error and not to the sample batch error of the previous training pass. The previous batch sample error can also be taken into account as a parameter.
Conclusion:
Taking the three factors above together, the previous batch sample error E is chosen as the independent variable of a function of the form f(E), which replaces f′(yo[j]) as the error propagation factor. Examining f(E) shows that for E > 0 its range is (0.5, 1) and it is monotonically increasing: the larger the previous training error, the larger f(E), and hence, indirectly, the larger the weight adjustment. In addition, this function does not suffer from problem 2 above: the error propagation factor no longer depends on whether the sample outputs are near 0.5.
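As a quick numerical check of these properties, a short Python snippet (λ = 1.0 is an assumed value of the constant):

import math

def f(E, lam=1.0):
    return 1.0 / (1.0 + math.exp(-lam * E))

for E in (0.001, 0.01, 0.1, 1.0):
    print(E, round(f(E), 4))
# The printed values increase with E and stay strictly between 0.5 and 1 for E > 0.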
Description of drawings
Fig. 1 is a schematic diagram of a general neural network structure.
Fig. 2 is a schematic diagram of the neural network structure of the invention.
Embodiment
In Fig. 2 the number of inputs x is N0, the number of input-layer nodes is N1, the number of hidden-layer nodes is N2, and the number of output-layer nodes is N3. The input is x[N0], the output of the input layer is u[N1], the output of the hidden layer is v[N2], the output of the output layer is y[N3], and the desired output is d[N3].
The inputs of the nodes are xo[N0], uo[N1], vo[N2], yo[N3].
The weights between the layers are: w1[N0][N1], w2[N1][N2], w3[N2][N3].
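For illustration, a sketch of how these quantities might be laid out in code; the sizes (21 inputs, 21 input-layer nodes, 10 hidden nodes, 1 output node) are example values consistent with the topology described above, and the uniform random initialization is an assumption:

import numpy as np

N0, N1, N2, N3 = 21, 21, 10, 1     # inputs, input-layer, hidden-layer, output-layer nodes (example sizes)
rng = np.random.default_rng(0)

w1 = rng.uniform(-0.5, 0.5, size=(N0, N1))   # weights between the inputs and the input layer
w2 = rng.uniform(-0.5, 0.5, size=(N1, N2))   # weights between the input layer and the hidden layer
w3 = rng.uniform(-0.5, 0.5, size=(N2, N3))   # weights between the hidden layer and the output layer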
Embodiment
Step 1: build the model with the improved method.
Step 2: prepare the data. Because of the requirements of the neural network model, the data are normalized; the normalization formula is x' = 0.8(x − x_min)/(x_max − x_min) + 0.1.
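For illustration, a minimal Python sketch of this normalization applied column-wise to a matrix of customer attributes; the column-wise treatment and the handling of constant columns are assumptions of the sketch:

import numpy as np

def normalize(data):
    # x' = 0.8 * (x - x_min) / (x_max - x_min) + 0.1, which maps every attribute
    # into [0.1, 0.9]; attributes that never vary are simply set to 0.5 here.
    x_min = data.min(axis=0)
    x_max = data.max(axis=0)
    span = np.where(x_max > x_min, x_max - x_min, 1.0)
    scaled = 0.8 * (data - x_min) / span + 0.1
    return np.where(x_max > x_min, scaled, 0.5)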
Step 3: train the model.
After 231 training passes the training error of the model is 0.000934405, which satisfies the required error precision.
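For illustration, a sketch of how the training loop could be driven until the error precision requirement is met; step_fn is an assumed callable that runs one pass over the training batch (using the rules sketched earlier) and returns the batch error:

def train(step_fn, tol=0.001, max_epochs=20000):
    prev_error = 1.0                      # initial "previous batch error" (an assumption)
    for epoch in range(1, max_epochs + 1):
        error = step_fn(prev_error)       # one pass over all training samples
        if error < tol:                   # e.g. the embodiment reports 0.000934405 after 231 passes
            return epoch, error
        prev_error = error
    return max_epochs, error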
Step 4: use the model to make predictions. Step 5: analyse the results (see the table below).
                        Not churned (customers)   Churned (customers)
Actual                  2628                      195
Predicted               2125                      698
Predicted correctly     2109                      179

Prediction success rate:
= (correctly predicted non-churners + correctly predicted churners) / total number of customers
= (2109 + 179) / 2823
= 81%
Churn prediction coverage rate:
= number of customers predicted to churn / total number of customers
= 698 / 2823
= 24.7%
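The two rates follow directly from the counts in the table; as a quick check in Python (counts copied from the table above):

correct_stay = 2109      # non-churners predicted correctly
correct_churn = 179      # churners predicted correctly
predicted_churn = 698    # customers the model flagged as churning
total = 2628 + 195       # total customers = 2823

success_rate = (correct_stay + correct_churn) / total   # about 0.81, i.e. 81%
coverage = predicted_churn / total                      # about 0.247, i.e. 24.7%
print(f"prediction success rate: {success_rate:.1%}")
print(f"churn prediction coverage: {coverage:.1%}")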

Claims (6)

1. A telecom customer churn prediction method based on an improved neural-network algorithm, characterized in that a BP neural network is first used to establish the customer churn model, and the error propagation factor (desired output − actual output) × f(previous batch error) is then adopted, where f(u) = 1/(1 + e^(−λu)) and λ is a constant; the weight adjustment algorithm is Δw(t) = [S(t)/(S(t−1) − S(t))]·Δw(t−1), where S(t) and S(t−1) are the values at the current and previous step of the partial derivative of the error with respect to the weight, ∂E/∂w; the data are normalized with the formula x' = 0.8(x − x_min)/(x_max − x_min) + 0.1, where x is an input parameter, x_min is its minimum value, x_max its maximum value, and x' the normalized input parameter; the model is trained accordingly; a correction factor is then added to the model, its value being obtained with an optimization algorithm from the actual operating and analysis data of the industrial reactor, and the final model is obtained.
2. The telecom customer churn prediction method based on an improved neural-network algorithm according to claim 1, characterized in that the model is built with a neural network and the connection weights between the neurons are chosen as the independent variables of the model.
3. The telecom customer churn prediction method based on an improved neural-network algorithm according to claim 1, characterized in that the customer attribute values after normalization are used as the input of the model, the normalization formula being x' = 0.8(x − x_min)/(x_max − x_min) + 0.1, where x is an input parameter, x_min is its minimum value, x_max its maximum value, and x' the normalized input parameter.
4. The telecom customer churn prediction method based on an improved neural-network algorithm according to claim 1, characterized in that the weights are adjusted with the second-order method Δw(t) = [S(t)/(S(t−1) − S(t))]·Δw(t−1), to which a correction factor is added.
5. The telecom customer churn prediction method based on an improved neural-network algorithm according to claim 1, characterized in that the error propagation factor involved in the weight adjustment is (desired output − actual output) × f(previous batch error).
6. The method according to claim 1, characterized in that in the BP neural network model the input layer has 21 nodes, corresponding to attributes relevant to customer churn including customer type, time on the network, number of service suspensions and average telephone charges; the hidden layer has l nodes (l = 2 to 20); and the output layer has 1 node.
CNA2006100857689A 2006-06-30 2006-06-30 Telecommunication customer loss forecasting method based on neural-network improved algorithm Pending CN1897573A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNA2006100857689A CN1897573A (en) 2006-06-30 2006-06-30 Telecommunication customer loss forecasting method based on neural-network improved algorithm

Publications (1)

Publication Number Publication Date
CN1897573A true CN1897573A (en) 2007-01-17

Family

ID=37609953

Family Applications (1)

Application Number Title Priority Date Filing Date
CNA2006100857689A Pending CN1897573A (en) Telecommunication customer loss forecasting method based on neural-network improved algorithm

Country Status (1)

Country Link
CN (1) CN1897573A (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102663006A (en) * 2012-03-19 2012-09-12 华为软件技术有限公司 Method and apparatus for data screening
CN102663006B (en) * 2012-03-19 2014-07-09 华为软件技术有限公司 Method and apparatus for data screening
CN102737285A (en) * 2012-06-15 2012-10-17 北京理工大学 Back propagation (BP) neural network-based appropriation budgeting method for scientific research project
CN104598987A (en) * 2014-12-16 2015-05-06 南京华苏科技股份有限公司 Method for predicating offline tendency and probability of mobile subscriber through learning and network effects of social networking services
CN104598987B (en) * 2014-12-16 2018-08-21 南京华苏科技有限公司 A method of using the study in social networks mobile subscriber's off-network tendency and probability are predicted with network effects
CN105989407A (en) * 2015-02-12 2016-10-05 中国人民解放军信息工程大学 Neural network based short wave median field intensity prediction system, method and device
CN108712279A (en) * 2018-04-27 2018-10-26 中国联合网络通信集团有限公司 The off-grid prediction technique of user and device
CN108712279B (en) * 2018-04-27 2021-08-17 中国联合网络通信集团有限公司 User off-network prediction method and device
CN108876034A (en) * 2018-06-13 2018-11-23 重庆邮电大学 A kind of improved Lasso+RBF neural network ensemble prediction model
CN108876034B (en) * 2018-06-13 2021-09-14 重庆邮电大学 Improved Lasso + RBF neural network combination prediction method

Similar Documents

Publication Publication Date Title
US20200311558A1 (en) Generative Adversarial Network-Based Optimization Method And Application
CN1897573A (en) Telecommunication customer loss forecasting method based on neural-network improved algorithm
Shen et al. Forecasting time series of inhomogeneous Poisson processes with application to call center workforce management
US7499897B2 (en) Predictive model variable management
US7730003B2 (en) Predictive model augmentation by variable transformation
US8165853B2 (en) Dimension reduction in predictive model development
US8170841B2 (en) Predictive model validation
US7725300B2 (en) Target profiling in predictive modeling
US7933762B2 (en) Predictive model generation
US7562058B2 (en) Predictive model management using a re-entrant process
CN102469103B (en) Trojan event prediction method based on BP (Back Propagation) neural network
Bhattacharyya Direct marketing performance modeling using genetic algorithms
CN1751288A (en) Horizontal enterprise planning in accordance with an enterprise planning model
CN106126607A (en) A kind of customer relationship towards social networks analyzes method
Tong et al. The research of customer loyalty improvement in telecom industry based on NPS data mining
CN112200375B (en) Prediction model generation method, prediction model generation device, and computer-readable medium
CN115511186A (en) Prediction management method, device and equipment for deep learning training duration
CN1920865A (en) Extendable strategy based key clients evaluation method for third city logistic companies
CN110310038A (en) Appraisal procedure, device, equipment and the readable storage medium storing program for executing of model or strategy
Ballin et al. Optimization of sampling strata with the SamplingStrata package
CN1517949A (en) Determination method of artificial nerve network meaule parameter for cvomprehensive evaluation of human physique
CN1779710A (en) Service level contract support device
Prat Job separation under uncertainty and the wage distribution
Kajan et al. Software engineering framework for digital service-oriented ecosystem
CN111080456B (en) ERP project auxiliary decision support system and method based on environmental effect feedback

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication
EE01 Entry into force of recordation of patent licensing contract

Assignee: LIAN Technology (Nanjing) Co., Ltd.

Assignor: Linkage System Integration Co., Ltd.

Contract fulfillment period: 2009.6.23 to 2027.8.30 contract change

Contract record no.: 2009320001548

Denomination of invention: Telecommunication customer loss forecasting method based on neural-network improved algorithm

License type: exclusive license

Record date: 2009.8.17

LIC Patent licence contract for exploitation submitted for record

Free format text: EXCLUSIVE LICENSE; TIME LIMIT OF IMPLEMENTING CONTRACT: 2009.6.23 TO 2027.8.30; CHANGE OF CONTRACT

Name of requester: LIANCHUANG SCIENCE ( NANJING ) CO., LTD.

Effective date: 20090817