CN116415989A - Gigabit potential customer prediction method, gigabit potential customer prediction device, computer equipment and storage medium - Google Patents

Gigabit potential customer prediction method, gigabit potential customer prediction device, computer equipment and storage medium Download PDF

Info

Publication number
CN116415989A
CN116415989A
Authority
CN
China
Prior art keywords
gigabit
layer
data
potential customer
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310377985.9A
Other languages
Chinese (zh)
Inventor
苟昱辰
周钰
张菁菁
王孝天
王帅兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chinaccs Information Industry Co ltd
Original Assignee
Chinaccs Information Industry Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chinaccs Information Industry Co ltd filed Critical Chinaccs Information Industry Co ltd
Priority to CN202310377985.9A priority Critical patent/CN116415989A/en
Publication of CN116415989A publication Critical patent/CN116415989A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0201Market modelling; Market analysis; Collecting market data
    • G06Q30/0202Market predictions or forecasting for commercial activities
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • G06Q50/40
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Data Mining & Analysis (AREA)
  • Finance (AREA)
  • Development Economics (AREA)
  • Accounting & Taxation (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Marketing (AREA)
  • Game Theory and Decision Science (AREA)
  • Economics (AREA)
  • General Business, Economics & Management (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a gigabit potential customer prediction method and device, computer equipment and a storage medium, which relate to the field of intelligent marketing applications for operators and their customers. The technical scheme is as follows: the gigabit potential customer prediction method comprises the steps of acquiring communication data of all customers of an operator; screening, according to business logic, significant indexes with strong correlation to obtain a plurality of sample data sets; carrying out data cleaning and normalization processing on the communication data; and training the data through a constructed convolutional neural network model to obtain a trained convolutional neural network model. The beneficial effects of the invention are as follows: for the gigabit upgrade field, a deep learning algorithm from artificial intelligence is introduced, data processing and standardization are realized, features are extracted with a one-dimensional convolutional neural network, classification training and prediction are performed with a fully connected network, potential customer tendencies are mined, data prediction accuracy is improved, and the degree to which gigabit potential customers are mined is improved.

Description

Gigabit potential customer prediction method, gigabit potential customer prediction device, computer equipment and storage medium
Technical Field
The invention relates to the field of intelligent marketing applications for operators and their customers, in particular to a gigabit potential customer prediction method and device, computer equipment and a storage medium.
Background
With the informatization development of operators, customer upgrade scenarios have become more and more complex, data volumes have grown from the TB level to the PB level, and data storage has migrated from relational databases to big data platforms. Big data is closely tied to artificial intelligence and to customers. In order to mine potential gigabit upgrade customers and improve the marketing success rate, historical data of existing customers need to be collected and processed, while the data processing and the algorithm need to be optimized in view of data availability and efficiency requirements.
The method is based on operator data, including a relational database and a big data Hadoop warehouse, and realizes the process from business input, data learning, feature engineering and data modeling to target output.
Disclosure of Invention
The invention aims to provide a gigabit potential customer prediction method, a gigabit potential customer prediction device, computer equipment and a storage medium.
The invention is realized by the following measures: the gigabit potential customer prediction method is characterized by comprising the steps of acquiring communication data of all customers in an operator;
according to business logic, screening significant indexes with larger correlation to obtain a plurality of sample data sets;
carrying out data cleaning and normalization processing on the communication data;
and constructing a convolutional neural network model and training the data through the model to obtain a trained convolutional neural network model.
The sample dataset comprises positive samples, negative samples and prediction samples;
the specific definition of a positive sample is a customer whose history shows a clear upgrade tendency and who has successfully upgraded; the specific definition of a negative sample is a customer who has explicitly refused and does not wish to participate in the upgrade; prediction samples are not explicitly defined, and model prediction is required to give the result.
The convolutional neural network model comprises 5 layers, namely: an input layer, a C1 one-dimensional convolution layer, a C2 one-dimensional convolution layer, an S3 fully connected layer and an output layer;
the input layer transmits the preprocessed parameters to the C1 convolution layer for convolution operation.
For each sample in the dataset or each batch of samples in each iteration, the following is performed:
forward propagation: gradually calculating the output of each layer of network from the network input layer to the network output layer;
back propagation: calculating the error of the output layer based on the cost function and reversely and gradually spreading the error to the first hidden layer, thereby obtaining residual errors of all layers;
calculating the gradient: calculating the gradient of the network weight and the bias;
updating weights: the weights and biases of the network are updated.
The method also comprises the step of inputting the parameters of the verification set into the trained neural network for verification, and specifically comprises the following steps:
the output layer outputs a one-dimensional vector, which is compared with the expected output to determine the hyperparameters of the model;
and the test sample data are input into the trained neural network for testing; the output layer outputs a two-dimensional vector; the predicted result of the input is calculated according to the activation function in the network and compared with the ideal output result; after multiple tests, the prediction error is counted to evaluate the generalization capability of the model; and if the error meets the requirement, a gigabit potential customer prediction model based on a convolutional neural network algorithm is generated.
The activation functions of the input layer, the C1 one-dimensional convolution layer and the C2 one-dimensional convolution layer in the convolution neural network are all ReLU functions during training;
the fully connected layer is connected to the output layer through a linear activation function; the output layer has 2 nodes in total and finally outputs a two-dimensional row vector or column vector, wherein each dimension represents a predicted category.
The convolutional neural network model also comprises a pooling layer, and the pooling layer takes the maximum value in a section of adjacent region H as the final output of the region.
The gigabit potential customer prediction device is characterized by comprising an acquisition module, a processing module, a training module and a prediction module, wherein the acquisition module is used for acquiring communication data of all customers in an operator and screening out a plurality of data sets;
the processing module is used for cleaning and normalizing the data;
the training module is used for training the data by utilizing a deep learning model CNN algorithm to obtain a convolutional neural network model;
and the prediction module is used for performing gigabit potential customer prediction on the customer to be tested based on the trained gigabit potential customer prediction model.
A computer device, comprising: a processor and a memory, the processor for executing a gigabit potential customer prediction program stored in the memory to implement the gigabit potential customer prediction method of any of claims 1-7.
A storage medium storing one or more programs executable by one or more processors to implement the gigabit potential customer prediction method of any one of claims 1-7.
The technical scheme provided by the embodiment of the invention has the following beneficial effects: for the gigabit upgrade field, a deep learning algorithm from artificial intelligence is introduced, data processing and standardization are realized, features are extracted with a one-dimensional convolutional neural network, classification training and prediction are performed with a fully connected network, potential customer tendencies are mined, data prediction accuracy is improved, and the degree to which gigabit potential customers are mined is improved.
Drawings
For a clearer description of the technical solutions of the present invention, the drawings used in the embodiments will be briefly described below, and it is obvious that the drawings listed below are only some embodiments of the present invention, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of a gigabit potential customer prediction method provided by an embodiment of the present invention;
FIG. 2 is an overall flow chart of model training according to an embodiment of the present invention;
FIG. 3 is an overall flow chart of model prediction according to an embodiment of the present invention;
FIG. 4 is a block diagram of a convolutional neural network algorithm according to an embodiment of the present invention;
FIG. 5 is a dimension structure diagram of a convolutional neural network algorithm according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a gigabit potential customer prediction device according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a computer device according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the following examples in order to make the objects, technical solutions and advantages of the present invention more apparent. Of course, the specific embodiments described herein are for purposes of illustration only and are not intended to limit the invention.
Embodiment one:
referring to fig. 1-5, a gigabit potential customer prediction method is characterized by comprising step S1, obtaining communication data of all customers in an operator.
Firstly, business understanding and the modeling approach are sorted out.
Secondly, a wide feature table is established according to business requirements, sorting out whether the data meets the requirements and organizing the modeling structure.
Modeling here is the training process of a classifier whose purpose is to distinguish whether a user has an upgrade tendency. Modeling mainly starts from the communication data of customers, and the data are trained with a deep learning CNN algorithm to obtain the target model.
And S2, screening significant indexes with larger correlation according to service logic to obtain a plurality of sample data sets.
The significant indicators with larger correlation comprise a broadband time-interval use indicator, a historical behavior indicator, a DPI attribute indicator, a client product structure indicator, a broadband quality difference indicator, a base station indicator and the like.
The sample data set comprises a positive sample, a negative sample and a prediction sample;
The specific definition of a positive sample is a customer whose history shows a clear upgrade tendency and who has successfully upgraded; the specific definition of a negative sample is a customer who has explicitly refused and does not wish to participate in the upgrade; prediction samples are not explicitly defined, and model prediction is required to give the result.
And step S3, carrying out data cleaning and normalization processing on the communication data. The method comprises the following steps: the incoming data is required to be cleaned and processed to form clean data meeting the algorithm requirements.
Before modeling, the data quality must be fully checked, and abnormal values, outliers and missing values must be corrected and filled in to ensure the purity of the modeling data. Missing-value processing: categorical variables are filled with the mode, and continuous variables are filled with 0, the mean or the median. Outlier processing: for continuous variables, the cap method is used to truncate extreme values above the 85% quantile. Multicollinearity processing: factor variables with multicollinearity are eliminated through correlation analysis.
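As a minimal pandas sketch of this cleaning step (the column lists, the choice of median fill and the 0.85 quantile cap are illustrative assumptions rather than values taken from the operator data warehouse):

import pandas as pd

def clean_features(df: pd.DataFrame, categorical_cols: list, continuous_cols: list) -> pd.DataFrame:
    """Illustrative cleaning: fill missing values and cap extreme values above the 85% quantile."""
    df = df.copy()
    for col in categorical_cols:
        # categorical variables: fill missing values with the mode
        df[col] = df[col].fillna(df[col].mode().iloc[0])
    for col in continuous_cols:
        # continuous variables: fill missing values (median here; 0 or the mean are also possible)
        df[col] = df[col].fillna(df[col].median())
        # cap method: clip extreme values above the 85% quantile
        df[col] = df[col].clip(upper=df[col].quantile(0.85))
    return df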
The correlation analysis uses the Pearson correlation coefficient to calculate feature correlation and identify highly correlated features; in feature engineering, only one feature from each highly correlated feature group is retained (feature compression). The Pearson correlation coefficient is used to calculate the feature correlation of the normalized classification variables, as given by the first formula:
ρ_{x,y} = cov(x,y) / (σ_x · σ_y)    (1)
wherein ρ_{x,y} is the Pearson correlation coefficient between the two variables, cov(x,y) is the covariance of the two variables, and σ_x and σ_y are their standard deviations; feature compression of the classification variables is performed based on the Pearson correlation coefficient.
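A sketch of this Pearson-based feature compression with pandas might look as follows (the 0.9 threshold and the rule of dropping the later column of each correlated pair are assumptions for illustration):

import pandas as pd

def compress_correlated_features(df: pd.DataFrame, threshold: float = 0.9) -> pd.DataFrame:
    """Keep only one feature from each highly correlated feature group (Pearson correlation)."""
    corr = df.corr(method="pearson").abs()
    cols = list(corr.columns)
    to_drop = set()
    for i in range(len(cols)):
        if cols[i] in to_drop:
            continue
        for j in range(i + 1, len(cols)):
            if corr.iloc[i, j] > threshold:
                to_drop.add(cols[j])  # drop the later feature of each highly correlated pair
    return df.drop(columns=list(to_drop))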
Further, the large number of features needs to be normalized, derived variables need to be processed, near-duplicate features need to be removed, and features with no influence need to be eliminated.
Normalization removes the dimensional units of the features, derived-variable processing mines potential features, near-duplicate feature removal deletes highly similar features, and eliminating features with no influence keeps useless features out of the model.
Normalization:
y = (x - minValue) / (maxValue - minValue)
wherein: x is the current dimension value, minValue is the minimum value in the dimension, maxValue is the maximum value in the dimension, and y is the normalized value.
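This min-max normalization corresponds to sklearn's MinMaxScaler, sketched here on dummy values:

from sklearn.preprocessing import MinMaxScaler
import numpy as np

X = np.array([[10.0], [25.0], [40.0]])      # example values of one dimension
X_norm = MinMaxScaler().fit_transform(X)    # (x - minValue) / (maxValue - minValue)
print(X_norm.ravel())                       # [0.  0.5 1. ]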
And S4, constructing a convolutional neural network model, and training data through the model to obtain a trained convolutional neural network model.
The convolutional neural network model comprises 5 layers, namely: an input layer, a C1 one-dimensional convolution layer, a C2 one-dimensional convolution layer, an S3 fully connected layer and an output layer;
the input layer transmits the preprocessed parameters to the C1 convolution layer for convolution operation.
For each sample in the dataset or each batch of samples in each iteration, the following steps are performed (see the PyTorch sketch after this list):
forward propagation: gradually calculating the output of each layer of network from the network input layer to the network output layer;
back propagation: calculating the error of the output layer based on the cost function and reversely and gradually spreading the error to the first hidden layer, thereby obtaining residual errors of all layers;
calculating the gradient: calculating the gradient of the network weight and the bias;
updating weights: the weights and biases of the network are updated.
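A minimal PyTorch sketch of these four steps, using a placeholder network and random data purely to show the forward/backward/gradient/update sequence (all shapes and hyperparameters are assumptions):

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 2))  # placeholder network
criterion = nn.CrossEntropyLoss()                                       # cost function
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.randn(64, 32)          # one batch of 64 samples with 32 features
y = torch.randint(0, 2, (64,))   # binary labels: upgrade tendency or not

for epoch in range(10):
    optimizer.zero_grad()
    out = model(x)               # forward propagation: compute each layer's output
    loss = criterion(out, y)     # error of the output layer under the cost function
    loss.backward()              # back propagation: residuals and gradients of all layers
    optimizer.step()             # update the weights and biases of the network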
The method also comprises the step of inputting the parameters of the verification set into the trained neural network for verification, and specifically comprises the following steps:
the output layer outputs a one-dimensional vector, which is compared with the expected output to determine the hyperparameters of the model;
and the test sample data are input into the trained neural network for testing; the output layer outputs a two-dimensional vector; the predicted result of the input is calculated according to the activation function in the network and compared with the ideal output result; after multiple tests, the prediction error is counted to evaluate the generalization capability of the model; and if the error meets the requirement, a gigabit potential customer prediction model based on a convolutional neural network algorithm is generated.
The activation functions of the input layer, the C1 one-dimensional convolution layer and the C2 one-dimensional convolution layer in the convolution neural network all adopt ReLU functions during training;
the fully connected layer is connected to the output layer through a linear activation function; the output layer has 2 nodes in total and finally outputs a two-dimensional row vector or column vector, wherein each dimension represents a predicted category.
One-dimensional convolution layer: the convolution layer mainly consists of a number of convolution kernels with local perception and parameter sharing characteristics; the features of the input data are extracted by performing convolution operations, which allows multiple kinds of features to be learned while reducing the number of parameters and the amount of computation. The input of a 1D CNN is a one-dimensional vector, so the convolution kernel is one-dimensional, and the one-dimensional convolution operation is shown in formula (3):
x_k^t = b_k^t + Σ_{i=1}^{N_{t-1}} conv1D(w_{ik}^{t-1}, s_i^{t-1})    (3)
wherein: x_k^t and b_k^t are the input and the bias of the k-th neuron in the t-th layer, respectively; w_{ik}^{t-1} is the convolution kernel between the i-th neuron of the (t-1)-th layer and the k-th neuron of the t-th layer; s_i^{t-1} is the output of the i-th neuron of the (t-1)-th layer; N_{t-1} is the number of neurons in the (t-1)-th layer; and conv1D is the one-dimensional convolution operation.
After the convolution calculation, an activation function needs to be introduced in order to increase the nonlinearity of the neural network model. The rectified linear unit (ReLU) function accelerates the convergence of the network and prevents the gradient from vanishing, as expressed in formula (4):
ReLU(x) = max(0, x)    (4)
Thus, formula (5) gives the final output of each neuron in the convolution layer:
s_k^t = ReLU(x_k^t)    (5)
pooling layer: after the convolution layer, a pooling layer is typically employed to speed up computation, reduce computation costs, and prevent overfitting problems, and to maintain translational invariance of the features. The usual pooling has max-pooling and mean-pooling, and the study uses max-pooling, i.e. takes the maximum value in a section of adjacent region H as the final output of that region, as shown in equation (6) below.
Figure SMS_6
Dropout layer: dropout means that during deep-learning training, a neural network unit is temporarily dropped from the network with a certain probability.
Fully connected layer: the input of the fully connected layer is a one-dimensional vector obtained by flattening the multidimensional feature vectors of the convolution and pooling layers, and the output of each fully connected layer is calculated by the following formula:
a_j^{t+1} = Σ_i (w_{ji}^{t+1} · a_i^t) + b_j^{t+1}
wherein: a_j^{t+1} is the activation value of the j-th neuron in the (t+1)-th layer; a_i^t is the activation value of the i-th neuron in the t-th layer; w_{ji}^{t+1} is the weight between the j-th neuron of the (t+1)-th layer and the i-th neuron of the t-th layer; and b_j^{t+1} is the bias from the neurons of the t-th layer to the j-th neuron of the (t+1)-th layer.
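Putting the layers described above together, a minimal PyTorch sketch of such a network might look like this; the kernel sizes, channel counts, dropout rate and the input length of 32 features are illustrative assumptions, since the patent does not give concrete hyperparameters:

import torch
import torch.nn as nn

class GigabitCNN(nn.Module):
    # sketch: input -> C1 Conv1d -> C2 Conv1d -> max pooling -> dropout -> fully connected -> 2-node output
    def __init__(self, n_features: int = 32):
        super().__init__()
        self.c1 = nn.Conv1d(in_channels=1, out_channels=16, kernel_size=3, padding=1)
        self.c2 = nn.Conv1d(in_channels=16, out_channels=32, kernel_size=3, padding=1)
        self.pool = nn.MaxPool1d(kernel_size=2)           # maximum value of each adjacent region
        self.dropout = nn.Dropout(p=0.5)                  # temporarily drops units during training
        self.fc = nn.Linear(32 * (n_features // 2), 2)    # fully connected layer to 2 output nodes

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, n_features), one one-dimensional input vector per customer
        x = torch.relu(self.c1(x))
        x = torch.relu(self.c2(x))
        x = self.pool(x)
        x = self.dropout(x)
        x = x.flatten(start_dim=1)                        # flatten to a one-dimensional vector
        return self.fc(x)                                 # linear activation to the output layer

model = GigabitCNN()
logits = model(torch.randn(4, 1, 32))                     # -> tensor of shape (4, 2)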
The programming language used for modeling is Python, and the libraries used are Pandas, Numpy, Sklearn and Pytorch. A model is built on the training set, its accuracy and effectiveness are evaluated on the test set through model evaluation indexes such as accuracy, recall rate or the ROC curve, and the best model is saved. Finally, the saved optimal model is used to predict and output results for the data to be predicted.
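A sketch of this evaluation step with scikit-learn, on dummy labels and probabilities (the metrics follow the evaluation indexes named above):

from sklearn.metrics import accuracy_score, recall_score, roc_auc_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]                   # dummy test-set labels
y_prob = [0.9, 0.2, 0.7, 0.4, 0.1, 0.3, 0.8, 0.6]   # dummy predicted upgrade probabilities
y_pred = [1 if p >= 0.5 else 0 for p in y_prob]

print("accuracy:", accuracy_score(y_true, y_pred))
print("recall:", recall_score(y_true, y_pred))
print("ROC AUC:", roc_auc_score(y_true, y_prob))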
According to the method, based on the data stored in the operator data warehouse, the output content includes the significant indexes of the customer's gigabit upgrade, including broadband time-of-day usage indexes, historical behavior indexes, DPI attribute indexes, customer product structure indexes, broadband quality-difference indexes, base station indexes and the like; the gigabit upgrade propensity probability is calculated through the trained gigabit potential customer prediction model, and if the propensity probability is greater than a set threshold, the user is marked as a potential upgrade user.
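The scoring and thresholding described here can be sketched as follows, where softmax turns the two output nodes into an upgrade propensity probability; the 0.5 threshold is an assumed value, since the patent only requires "a set threshold":

import torch

THRESHOLD = 0.5  # assumed value; the patent only requires "a set threshold"

def flag_potential_upgraders(logits: torch.Tensor) -> torch.Tensor:
    # logits: (n_customers, 2) raw model outputs -> boolean mask of potential upgrade users
    probs = torch.softmax(logits, dim=1)[:, 1]   # probability of the "will upgrade" class
    return probs > THRESHOLD

mask = flag_potential_upgraders(torch.tensor([[0.2, 1.5], [2.0, -1.0]]))
print(mask)  # tensor([ True, False])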
Embodiment two:
Referring to fig. 6, the present embodiment provides a gigabit potential customer prediction device for the gigabit potential customer prediction method of the first embodiment, which includes an obtaining module 501, configured to obtain communication data of all customers in an operator and screen out multiple data sets;
the processing module 502 is used for cleaning and normalizing the data;
training the data by using a deep learning model CNN algorithm to obtain a convolutional neural network model by a training module 503;
and the prediction module 504 is configured to perform gigabit potential customer prediction for the customer to be tested based on the trained gigabit potential customer prediction model.
The detailed description of each module is referred to the corresponding related description of the method embodiment, and is not repeated herein.
The gigabit potential customer prediction device provided by the embodiment of the invention is used for executing the gigabit potential customer prediction method provided by the above embodiment; the implementation mode and the principle are the same, and details can be found in the related description of the method embodiment, which are not repeated here.
Embodiment III:
referring to fig. 7, this embodiment provides a computer apparatus for a gigabit potential customer prediction method in the first embodiment or a customer churn early warning attribution device in the second embodiment, including: a processor 601 and a memory 602, the processor is configured to execute a gigabit potential customer prediction program stored in the memory to implement the gigabit potential customer prediction method. Wherein the processor 601 and the memory 602 may be connected by a bus or otherwise. In fig. 7, connection via a bus is taken as an example.
The processor 601 may be a central processing unit (Central Processing Unit, CPU). The processor 601 may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or combinations thereof.
The memory 602 is used as a non-transitory computer readable storage medium for storing non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the methods provided in the embodiments of the present invention. The processor 601 executes various functional applications of the processor and data processing, i.e. implements the methods of the method embodiments described above, by running non-transitory software programs, instructions, and modules stored in the memory 602.
The memory 602 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, at least one application program required for a function; the storage data area may store data created by the processor 601, etc. In addition, the memory 602 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, memory 602 may optionally include memory located remotely from processor 601, such remote memory being connectable to processor 601 through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
One or more modules are stored in the memory 602 that, when executed by the processor 601, perform the methods of the method embodiments described above.
The specific details of the computer device may be correspondingly understood by referring to the corresponding related descriptions and effects in the above method embodiments, which are not repeated herein.
The foregoing is only illustrative of the present invention and is not to be construed as limiting thereof, but rather as various modifications, equivalent arrangements, improvements, etc., within the spirit and principles of the present invention.

Claims (10)

1. The gigabit potential customer prediction method is characterized by comprising the steps of acquiring communication data of all customers in an operator;
according to business logic, screening significant indexes with larger correlation to obtain a plurality of sample data sets;
carrying out data cleaning and normalization processing on the communication data;
and constructing a convolutional neural network model and training the data through the model to obtain a trained convolutional neural network model.
2. The gigabit potential customer prediction method of claim 1, wherein the sample dataset comprises positive samples, negative samples and prediction samples;
the specific definition of a positive sample is a customer whose history shows a clear upgrade tendency and who has successfully upgraded; the specific definition of a negative sample is a customer who has explicitly refused and does not wish to participate in the upgrade; prediction samples are not explicitly defined, and model prediction is required to give the result.
3. The gigabit potential customer prediction method of claim 1, wherein the convolutional neural network model comprises 5 layers, namely: an input layer, a C1 one-dimensional convolution layer, a C2 one-dimensional convolution layer, an S3 fully connected layer and an output layer;
the input layer transmits the preprocessed parameters to the C1 convolution layer for convolution operation.
4. A gigabit potential customer prediction method according to claim 3, wherein for each sample or each batch of samples in the dataset in each iteration the following is performed:
forward propagation: gradually calculating the output of each layer of network from the network input layer to the network output layer;
back propagation: calculating the error of the output layer based on the cost function and reversely and gradually spreading the error to the first hidden layer, thereby obtaining residual errors of all layers;
calculating the gradient: calculating the gradient of the network weight and the bias;
updating weights: the weights and biases of the network are updated.
5. The gigabit potential customer prediction method of claim 4, further comprising inputting verification set parameters into a trained neural network for verification, in particular:
the output layer outputs a one-dimensional vector, which is compared with the expected output to determine the hyperparameters of the model;
and the test sample data are input into the trained neural network for testing; the output layer outputs a two-dimensional vector; the predicted result of the input is calculated according to the activation function in the network and compared with the ideal output result; after multiple tests, the prediction error is counted to evaluate the generalization capability of the model; and if the error meets the requirement, a gigabit potential customer prediction model based on a convolutional neural network algorithm is generated.
6. The gigabit potential customer prediction method of claim 5, wherein the input layer, the C1 one-dimensional convolutional layer, and the C2 one-dimensional convolutional layer in the convolutional neural network all employ ReLU functions for activation functions during training;
the fully connected layer is connected to the output layer through a linear activation function; the output layer has 2 nodes in total and finally outputs a two-dimensional row vector or column vector, wherein each dimension represents a predicted category.
7. A gigabit potential customer prediction method in accordance with claim 3, wherein the convolutional neural network model further comprises a pooling layer that takes the maximum value within a section of the adjacent region H as the final output of that region.
8. The gigabit potential customer prediction device is characterized by comprising an acquisition module, a processing module, a training module and a prediction module, wherein the acquisition module is used for acquiring communication data of all customers in an operator and screening out a plurality of data sets;
the processing module is used for cleaning and normalizing the data;
the training module is used for training the data by utilizing a deep learning model CNN algorithm to obtain a convolutional neural network model;
and the prediction module is used for performing gigabit potential customer prediction on the customer to be tested based on the trained gigabit potential customer prediction model.
9. A computer device, comprising: a processor and a memory, the processor for executing a gigabit potential customer prediction program stored in the memory to implement the gigabit potential customer prediction method of any of claims 1-7.
10. A storage medium storing one or more programs executable by one or more processors to implement the gigabit potential customer prediction method of any one of claims 1-7.
CN202310377985.9A 2023-04-10 2023-04-10 Gigabit potential customer prediction method, gigabit potential customer prediction device, computer equipment and storage medium Pending CN116415989A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310377985.9A CN116415989A (en) 2023-04-10 2023-04-10 Gigabit potential customer prediction method, gigabit potential customer prediction device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310377985.9A CN116415989A (en) 2023-04-10 2023-04-10 Gigabit potential customer prediction method, gigabit potential customer prediction device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116415989A true CN116415989A (en) 2023-07-11

Family

ID=87054350

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310377985.9A Pending CN116415989A (en) 2023-04-10 2023-04-10 Gigabit potential customer prediction method, gigabit potential customer prediction device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116415989A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117098171A (en) * 2023-08-11 2023-11-21 武汉博易讯信息科技有限公司 FTTR network user identification method, system, computer equipment and readable medium
CN117098171B (en) * 2023-08-11 2024-03-29 武汉博易讯信息科技有限公司 FTTR network user identification method, system, computer equipment and readable medium

Similar Documents

Publication Publication Date Title
US11334795B2 (en) Automated and adaptive design and training of neural networks
CN109359725B (en) Training method, device and equipment of convolutional neural network model and computer readable storage medium
US20230048405A1 (en) Neural network optimization method and apparatus
CN107292097B (en) Chinese medicine principal symptom selection method based on feature group
CN114037844A (en) Global rank perception neural network model compression method based on filter characteristic diagram
CN113095370B (en) Image recognition method, device, electronic equipment and storage medium
US20220076101A1 (en) Object feature information acquisition, classification, and information pushing methods and apparatuses
CN112699941B (en) Plant disease severity image classification method, device, equipment and storage medium
US10956825B1 (en) Distributable event prediction and machine learning recognition system
US11151463B2 (en) Distributable event prediction and machine learning recognition system
CN116415989A (en) Gigabit potential customer prediction method, gigabit potential customer prediction device, computer equipment and storage medium
CN117061322A (en) Internet of things flow pool management method and system
CN113987236B (en) Unsupervised training method and unsupervised training device for visual retrieval model based on graph convolution network
US10643092B2 (en) Segmenting irregular shapes in images using deep region growing with an image pyramid
CN110533109A (en) A kind of storage spraying production monitoring data and characteristic analysis method and its device
CN114202417A (en) Abnormal transaction detection method, apparatus, device, medium, and program product
WO2024078112A1 (en) Method for intelligent recognition of ship outfitting items, and computer device
US10776923B2 (en) Segmenting irregular shapes in images using deep region growing
Wendelberger et al. Monitoring Deforestation Using Multivariate Bayesian Online Changepoint Detection with Outliers
EP4109374A1 (en) Data processing method and device
CN112381215B (en) Self-adaptive search space generation method and device oriented to automatic machine learning
WO2022262603A1 (en) Method and apparatus for recommending multimedia resources, device, storage medium, and computer program product
Baram et al. Forecasting by density shaping using neural networks
CN115953584A (en) End-to-end target detection method and system with learnable sparsity
CN116484111A (en) Multi-scale collaborative filtering recommendation method and system based on width learning system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination