CN111275268A - Pricing process efficiency prediction method, device, equipment and storage medium - Google Patents


Info

Publication number
CN111275268A
Authority
CN
China
Prior art keywords
network model
neural network
process efficiency
sample data
parameters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010123415.3A
Other languages
Chinese (zh)
Inventor
刘帅
丛新法
侯青军
杨通军
娄晓东
韩伟
宋世超
徐令瀚
隋宇晖
陈丽
郝军
王磊
刘晓伟
王翠玲
韩冰
杨陈学璋
刘增泽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China United Network Communications Group Co Ltd
Original Assignee
China United Network Communications Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China United Network Communications Group Co Ltd filed Critical China United Network Communications Group Co Ltd
Priority to CN202010123415.3A
Publication of CN111275268A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/049 Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06N3/08 Learning methods
    • G06Q50/40

Abstract

The application provides a method, a device, equipment and a storage medium for predicting rating process efficiency. The method comprises the following steps: receiving a plurality of influence parameters input by a user, wherein the influence parameters are parameters that influence rating process efficiency in a call ticket charging system; inputting each influence parameter into a trained neural network model to obtain a predicted value of the rating process efficiency output by the neural network model, wherein the predicted value is used for adjusting the call ticket charging system; and displaying the predicted value. The rating process efficiency under multiple influence parameters can be predicted through the pre-trained neural network model, which improves the prediction accuracy of the rating process efficiency.

Description

Pricing process efficiency prediction method, device, equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a storage medium for predicting rating process efficiency.
Background
The rating function is the most important part of the call ticket charging system in the telecommunications industry: its processing efficiency directly affects the stability of the whole call ticket charging system, and it is also the part that most strongly influences user perception. However, while the call ticket charging system operates, the processing efficiency of the rating process is often affected by various factors, which reduces processing efficiency, creates a backlog of call tickets, and affects every link of later account management and control as well as user perception. Therefore, accurately predicting the rating process efficiency of the call ticket charging system is important.
Generally, the various parameters influencing rating process efficiency are checked by staff, and the rating process efficiency of the running call ticket charging system is predicted from manual experience. The staff can then optimize and adjust the call ticket charging system according to the predicted rating process efficiency, so as to ensure the stability and operating efficiency of the call ticket charging system.
However, the accuracy of rating process efficiency predicted from manual experience is low.
Disclosure of Invention
The embodiments of the application provide a method, a device, equipment and a storage medium for predicting rating process efficiency, aiming to solve the problem that current predictions of rating process efficiency have low accuracy.
In a first aspect, an embodiment of the present application provides a method for predicting rating process efficiency, including:
receiving a plurality of influence parameters input by a user, wherein the influence parameters are parameters that influence rating process efficiency in a call ticket charging system;
inputting each influence parameter into a trained neural network model to obtain a predicted value of the rating process efficiency output by the neural network model, wherein the predicted value is used for adjusting the call ticket charging system;
and displaying the predicted value.
In one possible embodiment, the method further comprises:
collecting a plurality of sample data, wherein each sample data comprises a plurality of influence parameter samples and a corresponding rating process efficiency value;
constructing the neural network model;
and training the neural network model through a plurality of sample data to obtain the trained neural network model.
In one possible embodiment, collecting a plurality of sample data comprises:
acquiring, while the call ticket charging system operates, the various influence parameters and the rating process efficiency at multiple moments;
and forming one sample datum from the various influence parameters and the rating process efficiency at the same moment, to obtain the plurality of sample data.
In one possible embodiment, training the neural network model with a plurality of sample data comprises:
dividing a plurality of sample data into a training set and a test set;
and training the network parameters of the neural network model through the training set, and testing the trained neural network model through the testing set.
In one possible embodiment, the neural network model is a long short-term memory (LSTM) network model.
In one possible embodiment, the receiving of the plurality of influence parameters input by the user includes:
displaying a parameter configuration interface;
receiving a plurality of influence parameters input by the user on the parameter configuration interface, wherein the influence parameters comprise at least one of the following:
the CPU memory value, the CPU load, the call ticket backlog, and the network load.
In a second aspect, an embodiment of the present application provides an apparatus for predicting rating process efficiency, including:
a receiving module, used for receiving a plurality of influence parameters input by a user, wherein the influence parameters are parameters that influence rating process efficiency in a call ticket charging system;
a processing module, used for inputting each influence parameter into the trained neural network model to obtain a predicted value of the rating process efficiency output by the neural network model, wherein the predicted value is used for adjusting the call ticket charging system;
and the display module is used for displaying the predicted value.
In a possible embodiment, the apparatus further comprises a training module;
the training module is configured to:
collecting a plurality of sample data, wherein each sample data comprises a plurality of influence parameter samples and a corresponding rating process efficiency value;
constructing the neural network model;
and training the neural network model through a plurality of sample data to obtain the trained neural network model.
In one possible embodiment, the training module is configured to:
acquiring, while the call ticket charging system operates, the various influence parameters and the rating process efficiency at multiple moments;
and forming one sample datum from the various influence parameters and the rating process efficiency at the same moment, to obtain the plurality of sample data.
In one possible embodiment, the training module is configured to:
dividing a plurality of sample data into a training set and a test set;
and training the network parameters of the neural network model through the training set, and testing the trained neural network model through the testing set.
In one possible embodiment, the neural network model is a long-short term memory network model.
In a possible implementation, the receiving module is configured to:
displaying a parameter configuration interface;
receiving a plurality of influence parameters input by the user on the parameter configuration interface, wherein the influence parameters comprise at least one of the following:
the CPU memory value, the CPU load, the call ticket backlog, and the network load.
In a third aspect, an embodiment of the present application provides a device for predicting rating process efficiency, including: at least one processor and a memory;
the memory stores computer-executable instructions;
the at least one processor executes the computer-executable instructions stored in the memory, causing the at least one processor to perform the rating process efficiency prediction method described above in the first aspect and its various possible embodiments.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium in which computer-executable instructions are stored; when a processor executes the computer-executable instructions, the rating process efficiency prediction method according to the first aspect and its various possible implementations is implemented.
According to the method, device, equipment and storage medium for predicting rating process efficiency, a plurality of influence parameters input by a user are received, wherein the influence parameters are parameters that influence rating process efficiency in a call ticket charging system; each influence parameter is input into the trained neural network model to obtain a predicted value of the rating process efficiency output by the neural network model, the predicted value is used for adjusting the call ticket charging system, and the predicted value is displayed. Because the rating process efficiency under the plurality of influence parameters can be predicted through the pre-trained neural network model, the prediction accuracy of the rating process efficiency is improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to these drawings without inventive exercise.
Fig. 1 is a schematic structural diagram of a rating process efficiency prediction system according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a rating process efficiency prediction method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a rating process efficiency prediction method according to another embodiment of the present application;
fig. 4 is a block diagram of a rating process efficiency prediction system according to another embodiment of the present application;
fig. 5 is a schematic structural diagram of a rating process efficiency prediction device according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a rating process efficiency prediction device according to another embodiment of the present application;
fig. 7 is a schematic hardware structure diagram of a rating process efficiency prediction device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Fig. 1 is a schematic structural diagram of a rating process efficiency prediction system according to an embodiment of the present application. As shown in fig. 1, the rating process efficiency prediction system provided in this embodiment includes a call ticket charging device 11 and a rating process efficiency prediction device 12. The call ticket charging device 11 is a device that operates the call ticket charging system, and the rating process efficiency prediction device 12 is a device that predicts rating process efficiency. The call ticket charging device 11 may be a desktop computer, a server, etc.; the rating process efficiency prediction device 12 may be a mobile phone, a portable computer, a desktop computer, or the like; neither is limited herein. The call ticket charging device 11 can operate the call ticket charging system, record at different times both the parameters that influence rating process efficiency and the rating process efficiency itself, and then send the recorded influence parameters at the different times with the corresponding rating process efficiencies to the rating process efficiency prediction device 12.
The rating process efficiency prediction device 12 may form training samples from the influence parameters at different times and the corresponding rating process efficiencies, and train the neural network model. When the trained neural network model receives multiple influence parameters input by a user, it predicts the corresponding rating process efficiency and displays it, so that the user can view the prediction and simulate the call ticket charging system under various working conditions, thereby optimizing and adjusting the call ticket charging system and ensuring its stability and operating efficiency.
Fig. 2 is a schematic flow chart of a rating process efficiency prediction method according to an embodiment of the present application. As shown in fig. 2, the method includes:
s201, receiving a plurality of influence parameters input by a user, wherein the influence parameters are parameters influencing rating process efficiency in a dialogue list charging system.
In this embodiment, during operation of the call ticket charging system, various factors may affect the rating process efficiency, and the numerical values representing these factors are used as the influence parameters of the rating process efficiency. Optionally, the influence parameters include, but are not limited to, at least one of: the memory value of a Central Processing Unit (CPU), the CPU load, the call ticket backlog, and the network load. The user can input a plurality of influence parameters into the rating process efficiency prediction device.
Optionally, displaying a parameter configuration interface; and receiving a plurality of influence parameters input on the parameter configuration interface by a user.
In this embodiment, the rating process efficiency prediction device may display a parameter configuration interface on a screen; for example, the parameter configuration interface may include an input control for each influence parameter. The user can enter each influence parameter required for the prediction into the corresponding input control. Alternatively, the user may edit the influence parameters into a configuration file and upload it through the parameter configuration interface, and the rating process efficiency prediction device parses the configuration file to obtain each influence parameter.
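As a minimal sketch of the configuration-file route, assuming a simple "name=value" text format (the description does not fix a file format, and the parameter names below are illustrative):

```python
def parse_influence_config(text):
    # Parse 'name=value' lines into a dict of influence parameters,
    # skipping blank lines and '#' comments (assumed conventions).
    params = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, value = line.partition("=")
        params[name.strip()] = float(value)
    return params

cfg = """
# influence parameters for one prediction run
cpu_load=0.72
cpu_memory=0.55
ticket_backlog=15300
network_load=0.40
"""
print(parse_influence_config(cfg))
```

The prediction device would feed the resulting dictionary, in a fixed parameter order, to the trained model.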
S202, inputting each influence parameter into a trained neural network model to obtain a predicted value of the rating process efficiency output by the neural network model, wherein the predicted value is used for adjusting the call ticket charging system.
In this embodiment, the neural network model may be trained in advance on the collected training samples; each influence parameter input by the user is then fed into the trained neural network model, which predicts and outputs the predicted value of the rating process efficiency. Optionally, the neural network model may be a Long Short-Term Memory (LSTM) network model. For example, a three-layer neural network structure may be adopted: the first layer is an LSTM layer with 128 neurons, the second layer is an LSTM layer with 256 neurons, and the third layer is a fully-connected layer. The model may use Mean Squared Error (MSE) as the loss function and Adam as the optimizer.
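The three-layer structure described here can be sketched with tf.keras; this is one plausible reading, not the patented implementation. The single time step is an assumption, and the 74-feature input width is taken from the worked example later in the description:

```python
import tensorflow as tf

TIME_STEPS = 1   # assumed window length; not specified in the description
N_FEATURES = 74  # number of index items in the worked example

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(128, return_sequences=True,
                         input_shape=(TIME_STEPS, N_FEATURES)),  # first LSTM layer
    tf.keras.layers.LSTM(256),  # second LSTM layer
    tf.keras.layers.Dense(1),   # fully-connected output layer
])
model.compile(loss="mse", optimizer="adam")  # MSE loss, Adam optimizer
```

With this shape, one prediction consumes a (1, TIME_STEPS, N_FEATURES) array of influence parameters and yields one efficiency value.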
And S203, displaying the predicted value.
In this embodiment, the rating process efficiency prediction device may display the predicted value output by the neural network model on a display screen, so that the user can optimize and adjust the call ticket charging system according to the predicted rating process efficiency, thereby ensuring the stability and operating efficiency of the call ticket charging system.
In the embodiment of the application, a plurality of influence parameters input by a user are received, wherein the influence parameters are parameters that influence rating process efficiency in a call ticket charging system; each influence parameter is input into the trained neural network model to obtain a predicted value of the rating process efficiency output by the neural network model, the predicted value is used for adjusting the call ticket charging system, and the predicted value is displayed. Because the rating process efficiency under the plurality of influence parameters can be predicted through the pre-trained neural network model, the prediction accuracy of the rating process efficiency is improved.
Fig. 3 is a flowchart illustrating a rating process efficiency prediction method according to another embodiment of the present application. This embodiment describes in detail a specific implementation of training the neural network model. As shown in fig. 3, the method includes:
S301, collecting a plurality of sample data, wherein each sample data comprises a plurality of influence parameter samples and a corresponding rating process efficiency value.
In this embodiment, the rating process efficiency prediction device may collect data from the call ticket charging system during operation and process it into standard index item data. An index item generally comprises several Keys and one Value: for example, for a host CPU index item, the Keys are the host IP, the domain to which the host belongs, and the host type, and the Value is the average CPU load of the host. The acquired index item data can be stored in InfluxDB, a high-performance time-series database dedicated to time-series data and well suited to the index access pattern of this scenario. The rating process efficiency prediction device can then extract the index data from InfluxDB and generate the sample data from it.
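One way to render such a Key/Value index item for InfluxDB is its line protocol, where the Keys become tags and the Value becomes a field; the measurement name, tag names, and values below are illustrative assumptions, not taken from the patent:

```python
def to_line_protocol(measurement, keys, value, ts_ns):
    # InfluxDB line protocol: measurement,tag1=v1,tag2=v2 field=value timestamp
    tags = ",".join(f"{k}={v}" for k, v in sorted(keys.items()))
    return f"{measurement},{tags} value={value} {ts_ns}"

point = to_line_protocol(
    "host_cpu",
    {"host_ip": "10.0.0.1", "domain": "billing", "host_type": "x86"},
    0.73,                 # Value: average CPU load of the host
    1582800000000000000,  # nanosecond timestamp
)
print(point)
```

Storing each index item as a tagged, timestamped point is what makes the later per-moment extraction efficient.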
Optionally, S301 may include:
acquiring, while the call ticket charging system operates, the various influence parameters and the rating process efficiency at multiple moments;
and forming one sample datum from the various influence parameters and the rating process efficiency at the same moment, to obtain the plurality of sample data.
In this embodiment, the influence parameters and the rating process efficiency collected at the same moment are combined into one sample datum, yielding a plurality of sample data.
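Pairing same-moment measurements into samples can be sketched as follows; the record layout and field names are illustrative assumptions, not fixed by this description:

```python
def build_samples(param_records, efficiency_records):
    # Index the rating process efficiency measurements by timestamp.
    efficiency_by_time = {r["time"]: r["efficiency"] for r in efficiency_records}
    samples = []
    for rec in param_records:
        t = rec["time"]
        # Keep only moments where both the influence parameters and the
        # rating process efficiency were collected.
        if t in efficiency_by_time:
            samples.append({"inputs": rec["params"],
                            "target": efficiency_by_time[t]})
    return samples

params = [
    {"time": "2020-02-27T10:00", "params": [0.62, 0.41, 1200, 0.35]},
    {"time": "2020-02-27T10:05", "params": [0.70, 0.45, 1500, 0.38]},
]
effs = [{"time": "2020-02-27T10:00", "efficiency": 0.93}]
print(build_samples(params, effs))
```

Each resulting sample is one (influence parameters, efficiency) pair for training.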
S302, constructing the neural network model.
S303, training the neural network model through a plurality of sample data to obtain the trained neural network model.
In this embodiment, a neural network model may be constructed and then trained on the plurality of sample data to obtain the trained neural network model, so that the rating process efficiency can subsequently be predicted with the trained model.
Optionally, a plurality of sample data may be divided into a training set and a test set; and training the network parameters of the neural network model through the training set, and testing the trained neural network model through the testing set.
For example, 80% of all sample data may be used as the training set and the remaining 20% as the test set. The data in the training set and the test set can then be dimension-converted, i.e. uniformly transformed into the data layout accepted by the LSTM model, such as [samples, time steps, features]. The model trained on the training set can be saved as an h5 model file and tested on the test set; if the error rate of the model on the test set does not meet the requirement, the h5 model file is deleted, the number of neural network layers and neurons is adjusted, and the model is trained again on the training set until the error rate meets the requirement.
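The 80/20 split and the dimension conversion can be sketched with numpy; a single time step per sample is an assumption for illustration:

```python
import numpy as np

def split_and_reshape(X, y, train_frac=0.8, time_steps=1):
    # Chronological 80/20 split, then reshape inputs to the
    # [samples, time steps, features] layout the LSTM model accepts.
    cut = int(X.shape[0] * train_frac)
    X_train = X[:cut].reshape(-1, time_steps, X.shape[1])
    X_test = X[cut:].reshape(-1, time_steps, X.shape[1])
    return X_train, y[:cut], X_test, y[cut:]

X = np.arange(100 * 74, dtype=float).reshape(100, 74)  # 100 moments, 74 index items
y = np.arange(100, dtype=float).reshape(100, 1)
X_tr, y_tr, X_te, y_te = split_and_reshape(X, y)
print(X_tr.shape, X_te.shape)
```

A chronological rather than random split is assumed here because the data is a time series.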
S304, receiving a plurality of influence parameters input by a user, wherein the influence parameters are parameters influencing the rating process efficiency in the dialogue list charging system.
In this embodiment, S304 is similar to S201 in the embodiment of fig. 2, and is not described here again.
S305, inputting each influence parameter into the trained neural network model to obtain a predicted value of the rating process efficiency output by the neural network model, wherein the predicted value is used for adjusting the call bill charging system.
In this embodiment, S305 is similar to S202 in the embodiment of fig. 2, and is not described herein again.
And S306, displaying the predicted value.
In this embodiment, S306 is similar to S203 in the embodiment of fig. 2, and is not described herein again.
The following describes a specific embodiment. Fig. 4 is a block diagram of a rating process efficiency prediction system according to another embodiment of the present application. As shown in fig. 4, the rating process efficiency prediction system may be divided into a data acquisition module, a data processing module, a data storage module, a model training module, and a data interface module; the modules communicate mainly through REST interfaces, and their main functions are as follows:
A data acquisition module: mainly used for collecting the various influence parameters related to rating process efficiency, such as host CPU usage, host memory occupation, network load, and the rating call ticket backlog.
A data processing module: reads data from a specified Kafka topic, converts the basic data into index item data in a fixed format according to rules, and sends the index item data through a REST interface to the data storage module to be stored in the database.
A data storage module: the method is mainly used for storing the index data into the InfluxDB.
A model training module: the core function of the whole equipment; based on the LSTM model and big-data-processing Python libraries such as pandas and numpy, it trains the neural network model by reading the data stored in InfluxDB.
A data interface module: mainly used for interacting with the front end; based on Django and Spring Boot, it uses the trained neural network model to simulate and predict the rating process efficiency under the current conditions according to the various influence parameters provided by the front-end user, and returns the result to the front end.
A front-end display module: mainly used for interacting with the user; it receives the various influence parameters input by the user, sends them to the data interface module, and then feeds the result back to the user.
The processing flow of the rating process efficiency prediction method is as follows:
First, the various parameters influencing rating process efficiency are sorted out, such as the CPU and memory of the host where the rating process runs, the call ticket backlog, the network load, the SDFS host CPU load, and the fast cube host load_average.
Second, the data acquisition module collects the corresponding data and processes it into standard index item data; an index item generally comprises several Keys and one Value. For example, for a host CPU index item, the Keys are the host IP, the host domain, and the host type (x86 or Power), and the Value is the average CPU load of the host.
Third, the acquired index item data is stored in InfluxDB, a high-performance time-series database dedicated to time-series data and well suited to the index access pattern of this scenario.
Fourth, data processing and neural network training are performed. First, the index item data is taken out of InfluxDB; the values of all index items at a given moment, arranged in a fixed order, serve as the input, and the rating process efficiency at the same moment serves as the output. Since 74 indices are used in this example, the input matrix is a 74xN two-dimensional matrix and the output matrix is a 1xN two-dimensional matrix. The input matrix is then normalized; the purpose of this step is to remove the unit restrictions of the data and convert it into dimensionless pure numerical values, so that indices with different units or orders of magnitude can be compared and weighted, facilitating the later neural network training.
Step five, dividing a training and testing set: and dividing the processed input and output into 80% of data as a training set, and using the rest 20% of data as a test set.
Step six, dimension conversion: the training set and the test set are uniformly converted into the 3D layout accepted by the LSTM model, namely [samples, time steps, features]; after the conversion is finished, neural network modeling begins.
Step seven, establishing the neural network model: this example adopts a three-layer neural network structure, in which the first layer is an LSTM layer with 128 neurons, the second layer is an LSTM layer with 256 neurons, and the third layer is a fully-connected layer whose output space dimension equals the dimension of the training set output matrix. The model uses MSE as the loss function and Adam as the optimizer.
Step eight, training and testing the neural network: after the model is built, training and testing begin. The trained network model is saved as an h5 model file; if the error rate of the model on the test set does not meet the requirement, the h5 model file is deleted, the number of neural network layers and neurons is adjusted, and training is repeated until the error rate meets the requirement.
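The train-test-adjust loop of this step can be sketched as follows; `build_model`, `train`, `evaluate` and `save` are placeholders standing in for the real Keras routines, and the candidate layer plans and error values are purely illustrative:

```python
import os
import tempfile

def train_until_acceptable(build_model, train, evaluate, save, model_path,
                           max_error=0.001,
                           layer_plans=((128, 256), (256, 256), (256, 512))):
    for plan in layer_plans:
        model = build_model(plan)         # adjust layer/neuron counts per plan
        train(model)
        save(model, model_path)           # e.g. model.save(path) writes the h5 file
        if evaluate(model) <= max_error:  # error rate meets the requirement
            return model, plan
        os.remove(model_path)             # delete the rejected h5 model file
    raise RuntimeError("no configuration met the error-rate requirement")

# Stub demonstration: the second candidate plan "passes" the test.
path = os.path.join(tempfile.gettempdir(), "rating_lstm.h5")
errors = {(128, 256): 0.05, (256, 256): 0.0005, (256, 512): 0.01}
model, plan = train_until_acceptable(
    build_model=lambda p: {"layers": p},
    train=lambda m: None,
    evaluate=lambda m: errors[m["layers"]],
    save=lambda m, pth: open(pth, "w").close(),
    model_path=path,
)
print(plan)
```

Deleting the rejected h5 file before retraining keeps only a model that satisfies the error requirement on disk.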
Step nine, the trained model is loaded into the data interface layer; the various influence parameters transmitted by the foreground interface serve as input, and the result computed by the neural network is returned to the foreground.
Step ten, the foreground interface receives the values of the various influence parameters set by the user, transmits them to the data interface layer, obtains the data processed by the data interface layer, and displays it to the user in real time.
This embodiment finally realizes a stable and efficient rating efficiency simulation system for the charging system; it can simulate and predict the rating efficiency of the charging system in real time from the various index items, keeps the error rate below 0.1%, and provides strong data and technical support for the optimization and emergency schemes of the production environment.
Traditional means of predicting rating process efficiency still rely on manual experience and simple statistical scripts; this approach is too limited, depends heavily on work experience and business knowledge, and yields predictions that are inaccurate and untimely. This scheme performs real-time prediction of the rating process efficiency of a telecommunications call ticket charging system based on the LSTM model: the processing efficiency of the rating process is predicted automatically from the various parameters that influence it, realizing real-time prediction of the rating program processing efficiency of the call ticket charging system. A user can view the rating efficiency under a given state in real time by changing the values of the influence parameters, with a system error within 0.1%. This can help operation and maintenance personnel simulate in advance the impact of the various emergencies the call ticket charging and rating system may encounter in production, so as to prepare emergency schemes for those situations; it can also help them identify at the source the factor with the largest influence on the call ticket charging and rating system, so that targeted optimization and adjustment can be made, the system can operate in its optimal state, and the stability and operating efficiency of the whole charging system are improved.
Fig. 5 is a schematic structural diagram of a device for predicting rating process efficiency according to an embodiment of the present application. As shown in fig. 5, the rating process efficiency predicting apparatus 50 includes: a receiving module 501, a processing module 502 and a display module 503.
A receiving module 501, configured to receive a plurality of influence parameters input by a user, where the influence parameters are parameters that influence rating process efficiency in a call ticket charging system;
a processing module 502, configured to input each influence parameter into a trained neural network model, to obtain a predicted value of rating process efficiency output by the neural network model, where the predicted value is used to adjust the call ticket charging system;
and a display module 503, configured to display the predicted value.
The apparatus receives a plurality of influence parameters input by a user, where the influence parameters are parameters that influence rating process efficiency in a call ticket charging system; inputs each influence parameter into a trained neural network model to obtain a predicted value of the rating process efficiency output by the model, where the predicted value is used to adjust the call ticket charging system; and displays the predicted value. Because the rating process efficiency under the given influence parameters is predicted by a pre-trained neural network model, the accuracy of the prediction is improved.
Fig. 6 is a schematic structural diagram of a device for predicting rating process efficiency according to another embodiment of the present application. As shown in fig. 6, the rating process efficiency predicting apparatus 50 provided in this embodiment may further include, on the basis of the apparatus of the embodiment shown in fig. 5, a training module 504.
Optionally, the training module 504 is configured to:
collecting a plurality of sample data, wherein each sample data comprises a plurality of influence parameter samples and a corresponding rating process efficiency value;
constructing the neural network model;
and training the neural network model through a plurality of sample data to obtain the trained neural network model.
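Claims 4 and 10 further divide the collected sample data into a training set and a test set before training. A minimal sketch of such a split, using only the standard library and hypothetical sample values:

```python
import random

def split_samples(samples, train_ratio=0.8, seed=42):
    """Shuffle the collected samples and split them into a training set
    and a test set. The 80/20 ratio and fixed seed are assumptions."""
    samples = list(samples)
    random.Random(seed).shuffle(samples)
    cut = int(len(samples) * train_ratio)
    return samples[:cut], samples[cut:]

# Hypothetical samples: (influence-parameter tuple, observed efficiency).
samples = [((0.5, 0.6, 1000 + t, 0.3), 5000.0 - 2.0 * t) for t in range(100)]
train_set, test_set = split_samples(samples)
```

The training set would then be used to fit the network parameters and the test set to check the trained model, as the claims describe.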
Optionally, the training module 504 is configured to:
while the call ticket charging system is running, acquiring the various influence parameters and the rating process efficiency at a plurality of moments;
and combining the various influence parameters and the rating process efficiency at the same moment into one sample datum, thereby obtaining a plurality of sample data.
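Forming a sample from the parameters and the efficiency recorded at the same moment amounts to a join on timestamp. A minimal sketch, where the field names, timestamps, and values are illustrative assumptions rather than data from the patent:

```python
def build_samples(param_series, efficiency_series):
    """Pair the influence parameters and the rating process efficiency
    recorded at the same moment into (features, target) samples."""
    return [(params, efficiency_series[ts])
            for ts, params in sorted(param_series.items())
            if ts in efficiency_series]

# Hypothetical collected series keyed by timestamp.
param_series = {
    "2020-02-27 10:00": {"cpu_mem": 0.62, "cpu_load": 0.55, "backlog": 1200, "net_load": 0.40},
    "2020-02-27 10:05": {"cpu_mem": 0.64, "cpu_load": 0.58, "backlog": 1350, "net_load": 0.42},
    "2020-02-27 10:10": {"cpu_mem": 0.66, "cpu_load": 0.61, "backlog": 1500, "net_load": 0.45},
}
efficiency_series = {
    "2020-02-27 10:00": 5200.0,  # e.g. call tickets rated per second
    "2020-02-27 10:05": 5100.0,
    # no efficiency reading at 10:10 -> that moment yields no sample
}
samples = build_samples(param_series, efficiency_series)
```

Only moments present in both series produce a sample, so a missing efficiency reading simply drops that moment.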
Optionally, the neural network model is a long short-term memory (LSTM) network model.
Optionally, the receiving module 501 is configured to:
displaying a parameter configuration interface;
receiving a plurality of influence parameters input by a user on the parameter configuration interface, wherein the influence parameters comprise at least one of the following:
the memory usage of the CPU, the load of the CPU, the call ticket backlog, and the network load.
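Before being fed to the model, parameters with such different scales are typically normalized to a common range. A minimal min-max scaling sketch; the parameter names and bounds below are assumptions for illustration, not values from the patent:

```python
def to_feature_vector(config, bounds):
    """Scale each influence parameter into [0, 1] using assumed
    min/max bounds before it is fed to the neural network model."""
    return [(config[name] - lo) / (hi - lo) for name, (lo, hi) in bounds.items()]

# Hypothetical bounds for the four influence parameters.
bounds = {
    "cpu_memory":     (0.0, 1.0),       # fraction of memory in use
    "cpu_load":       (0.0, 1.0),       # normalized CPU load
    "ticket_backlog": (0.0, 100000.0),  # backlogged call tickets
    "network_load":   (0.0, 1.0),       # normalized network load
}
config = {"cpu_memory": 0.62, "cpu_load": 0.55,
          "ticket_backlog": 1500.0, "network_load": 0.40}
features = to_feature_vector(config, bounds)
```

The resulting vector is what the parameter configuration interface would hand to the trained model for prediction.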
The rating process efficiency prediction device provided in the embodiments of the present application can be used to carry out the above method embodiments; the implementation principles and technical effects are similar and are not repeated here.
Fig. 7 is a schematic hardware structure diagram of a rating process efficiency prediction device according to an embodiment of the present application. As shown in fig. 7, the rating process efficiency prediction device 70 provided in this embodiment includes: at least one processor 701 and a memory 702. The rating process efficiency prediction device 70 further includes a communication component 703. The processor 701, the memory 702, and the communication component 703 are connected by a bus 704.
In a particular implementation, the at least one processor 701 executes the computer-executable instructions stored in the memory 702, causing the at least one processor 701 to perform the rating process efficiency prediction method described above.
For the specific implementation process of the processor 701, reference may be made to the above method embodiments; the implementation principles and technical effects are similar and are not repeated here.
In the embodiment shown in fig. 7, it should be understood that the processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the present application may be embodied directly in a hardware processor for execution, or executed by a combination of hardware and software modules within the processor.
The memory may comprise high-speed RAM, and may also include non-volatile memory (NVM), such as at least one magnetic disk memory.
The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, the buses in the figures of the present application are not limited to only one bus or one type of bus.
The present application also provides a computer-readable storage medium in which computer-executable instructions are stored; when a processor executes the computer-executable instructions, the rating process efficiency prediction method described above is implemented.
The readable storage medium may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks. Readable storage media can be any available media that can be accessed by a general purpose or special purpose computer.
An exemplary readable storage medium is coupled to the processor such that the processor can read information from, and write information to, the readable storage medium. Of course, the readable storage medium may also be an integral part of the processor. The processor and the readable storage medium may reside in an application-specific integrated circuit (ASIC). Of course, the processor and the readable storage medium may also reside as discrete components in the device.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (14)

1. A method for predicting rating process efficiency, characterized by comprising the following steps:
receiving a plurality of influence parameters input by a user, wherein the influence parameters are parameters influencing rating process efficiency in a call ticket charging system;
inputting each influence parameter into a trained neural network model to obtain a predicted value of rating process efficiency output by the neural network model, wherein the predicted value is used for adjusting the call ticket charging system;
and displaying the predicted value.
2. The method of claim 1, further comprising:
collecting a plurality of sample data, wherein each sample data comprises a plurality of influence parameter samples and a corresponding rating process efficiency value;
constructing the neural network model;
and training the neural network model through a plurality of sample data to obtain the trained neural network model.
3. The method of claim 2, wherein collecting a plurality of sample data comprises:
while the call ticket charging system is running, acquiring the various influence parameters and the rating process efficiency at a plurality of moments;
and combining the various influence parameters and the rating process efficiency at the same moment into one sample datum, thereby obtaining a plurality of sample data.
4. The method of claim 2, wherein training the neural network model with a plurality of sample data comprises:
dividing a plurality of sample data into a training set and a test set;
and training the network parameters of the neural network model through the training set, and testing the trained neural network model through the testing set.
5. The method of claim 1, wherein the neural network model is a long short-term memory (LSTM) network model.
6. The method of any one of claims 1-5, wherein receiving a plurality of user input impact parameters comprises:
displaying a parameter configuration interface;
receiving a plurality of influence parameters input by a user on the parameter configuration interface, wherein the influence parameters comprise at least one of the following:
the memory usage of the CPU, the load of the CPU, the call ticket backlog, and the network load.
7. An apparatus for predicting efficiency of a rating process, comprising:
the receiving module is used for receiving a plurality of influence parameters input by a user, wherein the influence parameters are parameters influencing rating process efficiency in a call ticket charging system;
the processing module is used for inputting each influence parameter into the trained neural network model to obtain a predicted value of the rating process efficiency output by the neural network model, wherein the predicted value is used for adjusting the call ticket charging system;
and the display module is used for displaying the predicted value.
8. The apparatus of claim 7, further comprising a training module;
the training module is configured to:
collecting a plurality of sample data, wherein each sample data comprises a plurality of influence parameter samples and a corresponding rating process efficiency value;
constructing the neural network model;
and training the neural network model through a plurality of sample data to obtain the trained neural network model.
9. The apparatus of claim 8, wherein the training module is configured to:
while the call ticket charging system is running, acquiring the various influence parameters and the rating process efficiency at a plurality of moments;
and combining the various influence parameters and the rating process efficiency at the same moment into one sample datum, thereby obtaining a plurality of sample data.
10. The apparatus of claim 8, wherein the training module is configured to:
dividing a plurality of sample data into a training set and a test set;
and training the network parameters of the neural network model through the training set, and testing the trained neural network model through the testing set.
11. The apparatus of claim 7, wherein the neural network model is a long short-term memory (LSTM) network model.
12. The apparatus according to any one of claims 7-11, wherein the receiving module is configured to:
displaying a parameter configuration interface;
receiving a plurality of influence parameters input by a user on the parameter configuration interface, wherein the influence parameters comprise at least one of the following:
the memory usage of the CPU, the load of the CPU, the call ticket backlog, and the network load.
13. A rating process efficiency prediction device, characterized by comprising: at least one processor and a memory;
the memory stores computer-executable instructions;
the at least one processor executing the computer-executable instructions stored by the memory causes the at least one processor to perform the rating process efficiency prediction method of any of claims 1-6.
14. A computer-readable storage medium having computer-executable instructions stored thereon, which when executed by a processor, implement the rating process efficiency prediction method according to any one of claims 1 to 6.
CN202010123415.3A 2020-02-27 2020-02-27 Pricing process efficiency prediction method, device, equipment and storage medium Pending CN111275268A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010123415.3A CN111275268A (en) 2020-02-27 2020-02-27 Pricing process efficiency prediction method, device, equipment and storage medium


Publications (1)

Publication Number Publication Date
CN111275268A true CN111275268A (en) 2020-06-12

Family

ID=70999669

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010123415.3A Pending CN111275268A (en) 2020-02-27 2020-02-27 Pricing process efficiency prediction method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111275268A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111914013A (en) * 2020-08-13 2020-11-10 傲普(上海)新能源有限公司 Data management method, system, terminal and medium based on pandas database and InfluxDB database

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5602964A (en) * 1993-05-21 1997-02-11 Autometric, Incorporated Automata networks and methods for obtaining optimized dynamically reconfigurable computational architectures and controls
CN105279692A (en) * 2015-11-17 2016-01-27 中国建设银行股份有限公司 Financial information technology system performance prediction method and apparatus
CN107301466A (en) * 2016-04-15 2017-10-27 中国移动通信集团四川有限公司 To business load and resource distribution and the Forecasting Methodology and forecasting system of property relationship
CN109766244A (en) * 2019-01-04 2019-05-17 中国银行股份有限公司 A kind of distributed system CPU method for detecting abnormality, device and storage medium
CN110445939A (en) * 2019-08-08 2019-11-12 中国联合网络通信集团有限公司 The prediction technique and device of capacity resource


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
QIAO Fei, "Design and Development Technology and Application of Virtual Teaching Experiments", Tongji University Press, 31 December 2013 *
WEI Hongchun, "Information System Analysis and Design", Xidian University Press, 30 September 2018 *
XU Jian, "Research and Application of a Load Model for a Billing System", Computer Engineering *
WEI Pengcheng, "Research on Data Acquisition Technology for Big Data Applications", China Atomic Energy Press, 31 December 2019 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111914013A (en) * 2020-08-13 2020-11-10 傲普(上海)新能源有限公司 Data management method, system, terminal and medium based on pandas database and InfluxDB database
CN111914013B (en) * 2020-08-13 2023-02-28 傲普(上海)新能源有限公司 Data management method, system, terminal and medium based on pandas database and InfluxDB database

Similar Documents

Publication Publication Date Title
CN109949290B (en) Pavement crack detection method, device, equipment and storage medium
CN110264270B (en) Behavior prediction method, behavior prediction device, behavior prediction equipment and storage medium
CN111833583B (en) Training method, device, equipment and medium for power data anomaly detection model
CN112633316A (en) Load prediction method and device based on boundary estimation theory
CN111680841A (en) Short-term load prediction method and system based on principal component analysis and terminal equipment
CN112215408A (en) Rail transit passenger flow volume prediction method and device
CN109670073A (en) A kind of information conversion method and device, interaction auxiliary system
CN111461283A (en) Automatic iteration operation and maintenance method, system, equipment and storage medium of AI model
CN111275268A (en) Pricing process efficiency prediction method, device, equipment and storage medium
CN111385601B (en) Video auditing method, system and equipment
CN114781650A (en) Data processing method, device, equipment and storage medium
CN111210332A (en) Method and device for generating post-loan management strategy and electronic equipment
CN112533270B (en) Base station energy-saving processing method and device, electronic equipment and storage medium
CN109117352B (en) Server performance prediction method and device
CN109858548A (en) The judgment method and device of abnormal power consumption, storage medium, communication terminal
CN112801315A (en) State diagnosis method and device for power secondary equipment and terminal
CN111400964A (en) Fault occurrence time prediction method and device
CN114970357A (en) Energy-saving effect evaluation method, system, device and storage medium
CN113381417B (en) Three-phase load unbalance optimization method, device and terminal for power distribution network area
CN115577927A (en) Important power consumer electricity utilization safety assessment method and device based on rough set
CN111611117B (en) Hard disk fault prediction method, device, equipment and computer readable storage medium
CN107066337A (en) Determination method, device, equipment and the storage medium of equilibrium of stock
CN114971053A (en) Training method and device for online prediction model of network line loss rate of low-voltage transformer area
CN114529042A (en) Abandoned number user prediction method and device and electronic equipment
CN114282881A (en) Depreciation measuring and calculating method and device, storage medium and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200612