CN116232923A - Model training method and device and network traffic prediction method and device


Info

Publication number
CN116232923A
Authority
CN
China
Prior art keywords
model
parameters
training
neural network
parameter
Prior art date
Legal status
Pending
Application number
CN202211666559.9A
Other languages
Chinese (zh)
Inventor
刘派
马星粟
杨鑫
杨本艳
双程
赵静
赵煜
高允翔
Current Assignee
China United Network Communications Group Co Ltd
Original Assignee
China United Network Communications Group Co Ltd
Priority date
Application filed by China United Network Communications Group Co Ltd filed Critical China United Network Communications Group Co Ltd
Priority to CN202211666559.9A priority Critical patent/CN116232923A/en
Publication of CN116232923A publication Critical patent/CN116232923A/en


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14 Network analysis or design
    • H04L41/147 Network analysis or design for predicting network behaviour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14 Network analysis or design
    • H04L41/145 Network analysis or design involving simulating, designing, planning or modelling of a network
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00 Arrangements for monitoring or testing data switching networks
    • H04L43/08 Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L43/0876 Network utilisation, e.g. volume of load or congestion level
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W24/00 Supervisory, monitoring or testing arrangements
    • H04W24/06 Testing, supervising or monitoring using simulated traffic

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Environmental & Geological Engineering (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Traffic Control Systems (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The present invention relates to the field of communications technologies, and in particular to a model training method and apparatus and a network traffic prediction method and apparatus, which can predictively allocate network resources for a venue for which no guarantee request has been made. The method comprises the following steps: determining a first training parameter and a second training parameter, where the first training parameter is a network traffic parameter of at least one cell in a first preset time period and the second training parameter is a network traffic parameter of the at least one cell in a second preset time period; inputting the first training parameter and the second training parameter into the current neural network model for iterative training until the loss value of the current neural network model meets a preset condition; and determining the current neural network model as a network traffic prediction model. The network traffic prediction model is used for predicting the network traffic parameter of a target cell in a target time period according to the input historical network traffic parameters of the target cell. The method and apparatus are used in model training and network traffic prediction.

Description

Model training method and device and network traffic prediction method and device
Technical Field
The present disclosure relates to the field of communications technologies, and in particular, to a model training method and apparatus, and a network traffic prediction method and apparatus.
Background
In the carrier allocation method of the related art, an operator first determines the time and place of a large-scale event, the network side issues a guarantee request for the venue, and first-line operation and maintenance personnel then allocate resources to the venue at the appointed time. How to predict the network traffic of a cell in advance is therefore a technical problem to be solved.
Disclosure of Invention
The application provides a model training method and device, and a network traffic prediction method and device, which can predict the network traffic of a cell.
In order to achieve the above purpose, the present application adopts the following technical solutions:
In a first aspect, the present application provides a model training method, the method comprising: determining a first training parameter and a second training parameter, where the first training parameter is a network traffic parameter of at least one cell in a first preset time period and the second training parameter is a network traffic parameter of the at least one cell in a second preset time period; inputting the first training parameter and the second training parameter into the current neural network model for iterative training until the loss value of the current neural network model meets a preset condition, where the loss value represents the error between the second training parameter and the output result, and the output result is the network traffic parameter of the at least one cell in the second preset time period predicted by the current neural network model according to the first training parameter and the second training parameter; and determining the current neural network model as a network traffic prediction model, which is used for predicting the network traffic parameter of a target cell in a target time period according to the input historical network traffic parameters of the target cell.
With reference to the first aspect, in one possible implementation manner, inputting the first training parameter and the second training parameter into the current neural network model for iterative training until the loss value of the current neural network model meets the preset condition includes: step 1, inputting the first training parameter and the second training parameter into the current neural network model, and determining the output result of the current neural network model; step 2, determining the loss value of the current neural network model according to the output result and the second training parameter; step 3, determining whether the loss value of the current neural network model meets the preset condition; step 4, if the preset condition is met, taking the current neural network model as the network traffic prediction model; step 5, if the preset condition is not met, adjusting the model parameters of the current neural network model according to the loss value; and step 6, taking the adjusted neural network model as the current neural network model, and iteratively executing steps 1 to 6 until the loss value of the current neural network model meets the preset condition.
With reference to the first aspect, in one possible implementation manner, the network traffic parameters include h parameters; the output layer of the current neural network model includes h output layer nodes; the h output layer nodes are in one-to-one correspondence with the h parameters; the i-th output layer node is used for outputting the i-th parameter predicted by the current neural network model; h and i are positive integers. Determining the loss value of the current neural network model according to the output result and the second training parameter includes: determining a loss value for each of the h output layer nodes, where the loss value of the i-th node is determined according to the output result of the i-th node and the i-th parameter in the second training parameter; the sum of the loss values of all the output layer nodes is the loss value of the current neural network model.
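The per-node loss computation described above can be sketched in plain Python. This is a hypothetical illustration, not the patent's implementation; it assumes the per-node loss is an absolute error, consistent with the mean absolute error loss function mentioned later in the description:

```python
def node_losses(outputs, targets):
    """Per-output-node loss: the absolute error between the i-th predicted
    parameter and the i-th parameter of the second training parameter."""
    return [abs(o - t) for o, t in zip(outputs, targets)]

def model_loss(outputs, targets):
    """Loss value of the current neural network model: the sum of the
    loss values of all h output layer nodes."""
    return sum(node_losses(outputs, targets))
```

For example, with two output nodes predicting 0.8 and 0.5 against targets 0.7 and 0.6, the model loss is the sum of the two per-node errors, about 0.2.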
With reference to the first aspect, in one possible implementation manner, the hidden layer of the current neural network model includes q hidden layer nodes; the h-th hidden layer node of the q hidden layer nodes includes a model parameters; the a model parameters are in one-to-one correspondence with a output layer nodes; q and a are positive integers. Adjusting the model parameters of the current neural network model according to the loss value includes: adjusting the value of the j-th model parameter of the h-th hidden layer node to the sum of the current value of the j-th model parameter of the h-th hidden layer node and the loss value of the j-th output layer node, where j is a positive integer.
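Taken literally, the adjustment rule above adds the loss value of the corresponding output layer node to each model parameter of a hidden layer node. A minimal sketch (hypothetical; a practical implementation would typically scale the correction by a learning rate and use a signed error rather than a raw loss):

```python
def adjust_hidden_node(params, output_node_losses):
    """Adjust the a model parameters of one hidden layer node: the new
    value of the j-th parameter is its current value plus the loss value
    of the j-th output layer node (one-to-one correspondence)."""
    return [p + loss for p, loss in zip(params, output_node_losses)]
```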
In a second aspect, the present application provides a network traffic prediction method, the method comprising: acquiring the network traffic parameter of a target cell in a third preset time period; and inputting the network traffic parameter of the target cell in the third preset time period into a network traffic prediction model to determine the network traffic parameter of the target cell in a fourth preset time period, where the network traffic prediction model is trained according to the model training method of the first aspect.
With reference to the second aspect, in one possible implementation manner, the network traffic parameter of the target cell in the fourth preset time period includes a network traffic parameter for each time interval. After determining the network traffic parameter of the target cell in the fourth preset time period, the method further comprises: determining a target cell whose network traffic parameter in the fourth preset time period is greater than a first preset threshold and less than a second preset threshold as a capacity-reduction cell; determining a target cell whose network traffic parameter in the fourth preset time period is greater than the second preset threshold as a capacity-expansion cell; and allocating the to-be-allocated carrier resources of the capacity-reduction cell to the capacity-expansion cell.
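The threshold-based cell classification above can be sketched as follows (a hypothetical illustration; the cell names and threshold values are invented for the example):

```python
def classify_cells(predicted, low, high):
    """Split cells by their predicted network traffic parameter:
    low < value < high  -> capacity-reduction cell (has spare carriers)
    value > high        -> capacity-expansion cell (needs extra carriers)
    """
    reduction = [cell for cell, v in predicted.items() if low < v < high]
    expansion = [cell for cell, v in predicted.items() if v > high]
    return reduction, expansion
```

For instance, with predicted PRB utilizations {"A": 0.3, "B": 0.6, "C": 0.9} and thresholds 0.5 and 0.8, cell B is a capacity-reduction cell and cell C is a capacity-expansion cell, so spare carriers of B would be allocated to C.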
In a third aspect, the present application provides a model training apparatus, the apparatus comprising a processing unit. The processing unit is configured to determine a first training parameter and a second training parameter, where the first training parameter is a network traffic parameter of at least one cell in a first preset time period and the second training parameter is a network traffic parameter of the at least one cell in a second preset time period. The processing unit is further configured to input the first training parameter and the second training parameter into the current neural network model for iterative training until the loss value of the current neural network model meets a preset condition, where the loss value represents the error between the second training parameter and the output result, and the output result is the network traffic parameter of the at least one cell in the second preset time period predicted by the current neural network model according to the first training parameter and the second training parameter. The processing unit is further configured to determine the current neural network model as a network traffic prediction model, which is used for predicting the network traffic parameter of a target cell in a target time period according to the input historical network traffic parameters of the target cell.
With reference to the third aspect, in one possible implementation manner, the processing unit is specifically configured to perform the following steps: step 1, inputting the first training parameter and the second training parameter into the current neural network model, and determining the output result of the current neural network model; step 2, determining the loss value of the current neural network model according to the output result and the second training parameter; step 3, determining whether the loss value of the current neural network model meets the preset condition; step 4, if the preset condition is met, taking the current neural network model as the network traffic prediction model; step 5, if the preset condition is not met, adjusting the model parameters of the current neural network model according to the loss value; and step 6, taking the adjusted neural network model as the current neural network model, and iteratively executing steps 1 to 6 until the loss value of the current neural network model meets the preset condition.
With reference to the third aspect, in one possible implementation manner, the network traffic parameters include h parameters; the output layer of the current neural network model includes h output layer nodes; the h output layer nodes are in one-to-one correspondence with the h parameters; the i-th output layer node is used for outputting the i-th parameter predicted by the current neural network model; h and i are positive integers. The processing unit is further configured to determine a loss value for each of the h output layer nodes, where the loss value of the i-th node is determined according to the output result of the i-th node and the i-th parameter in the second training parameter; the sum of the loss values of all the output layer nodes is the loss value of the current neural network model.
With reference to the third aspect, in one possible implementation manner, the hidden layer of the current neural network model includes q hidden layer nodes; the h-th hidden layer node of the q hidden layer nodes includes a model parameters; the a model parameters are in one-to-one correspondence with a output layer nodes; q and a are positive integers. The processing unit is further configured to adjust the value of the j-th model parameter of the h-th hidden layer node to the sum of the current value of the j-th model parameter of the h-th hidden layer node and the loss value of the j-th output layer node, where j is a positive integer.
In a fourth aspect, the present application provides a network traffic prediction apparatus, the apparatus comprising a processing unit and an acquisition unit. The acquisition unit is configured to acquire the network traffic parameter of a target cell in a third preset time period. The processing unit is configured to input the network traffic parameter of the target cell in the third preset time period into a network traffic prediction model and determine the network traffic parameter of the target cell in a fourth preset time period, where the network traffic prediction model is trained according to the model training method of the first aspect.
With reference to the fourth aspect, in one possible implementation manner, the network traffic parameter of the target cell in the fourth preset time period includes a network traffic parameter for each time interval. The processing unit is further configured to: determine a target cell whose network traffic parameter in the fourth preset time period is greater than a first preset threshold and less than a second preset threshold as a capacity-reduction cell; determine a target cell whose network traffic parameter in the fourth preset time period is greater than the second preset threshold as a capacity-expansion cell; and allocate the to-be-allocated carrier resources of the capacity-reduction cell to the capacity-expansion cell.
In a fifth aspect, the present application provides an electronic device comprising a processor and a communication interface. The communication interface is coupled to the processor, and the processor is configured to run a computer program or instructions to implement the model training method as described in the first aspect and any one of its possible implementations.
In a sixth aspect, the present application provides an electronic device comprising a processor and a communication interface. The communication interface is coupled to the processor, and the processor is configured to run a computer program or instructions to implement the network traffic prediction method as described in the second aspect and any one of its possible implementations.
In a seventh aspect, the present application provides a computer readable storage medium having instructions stored therein that, when run on a terminal, cause the terminal to perform a model training method as described in any one of the possible implementations of the first aspect and the first aspect.
In an eighth aspect, the present application provides a computer readable storage medium having instructions stored therein which, when run on a terminal, cause the terminal to perform a network traffic prediction method as described in any one of the possible implementations of the second aspect and the second aspect.
In this application, the names of the above-mentioned electronic devices do not constitute limitations on the devices or function modules themselves, and in actual implementation, these devices or function modules may appear under other names. Insofar as the function of each device or function module is similar to the present application, it is within the scope of the claims of the present application and the equivalents thereof.
These and other aspects of the present application will be more readily apparent from the following description.
Based on the above technical solutions, according to the model training method provided by the embodiments of the present application, the model training device first determines a first training parameter and a second training parameter, inputs them into the current neural network model for iterative training until the loss value of the current neural network model meets a preset condition, and then determines the current neural network model as a network traffic prediction model; the network traffic prediction model can predict the network traffic parameter of a target cell in a target time period according to the input historical network traffic parameters of the target cell. The method can therefore be deployed without requiring the operator to determine the time and place of events in advance or to issue guarantee requests.
Drawings
Fig. 1 is a schematic structural diagram of an electronic device provided in the present application;
Fig. 2 is a flowchart of a model training method provided in the present application;
Fig. 3 is a flowchart of another model training method provided in the present application;
Fig. 4 is a flowchart of a network traffic prediction method provided in the present application;
Fig. 5 is a schematic structural diagram of a model training device provided in the present application;
Fig. 6 is a schematic structural diagram of a network traffic prediction device provided in the present application;
Fig. 7 is a schematic structural diagram of another electronic device provided in the present application.
Detailed Description
The model training method and device and the network traffic prediction method and device provided by the embodiment of the application are described in detail below with reference to the accompanying drawings.
The term "and/or" herein merely describes an association relationship between associated objects, meaning that three relationships may exist; for example, A and/or B may represent: A exists alone, both A and B exist, and B exists alone.
The terms "first" and "second" and the like in the description and in the drawings are used for distinguishing between different objects or for distinguishing between different processes of the same object and not for describing a particular sequential order of objects.
Furthermore, references to the terms "comprising" and "having" and any variations thereof in the description of the present application are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed but may optionally include other steps or elements not listed or inherent to such process, method, article, or apparatus.
It should be noted that, in the embodiments of the present application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
Fig. 1 is a schematic structural diagram of an electronic device provided in an embodiment of the present application, where the electronic device may be a model training device or a network traffic prediction device. As shown in fig. 1, the electronic device 100 comprises at least one processor 101, a communication line 102, and at least one communication interface 104, and may further comprise a memory 103. The processor 101, the memory 103, and the communication interface 104 may be connected through a communication line 102.
The processor 101 may be a central processing unit (central processing unit, CPU), an application specific integrated circuit (application specific integrated circuit, ASIC), or one or more integrated circuits configured to implement embodiments of the present application, such as: one or more digital signal processors (digital signal processor, DSP), or one or more field programmable gate arrays (field programmable gate array, FPGA).
Communication line 102 may include a pathway for communicating information between the aforementioned components.
The communication interface 104, for communicating with other devices or communication networks, may use any transceiver-like device, such as ethernet, radio access network (radio access network, RAN), wireless local area network (wireless local area networks, WLAN), etc.
The memory 103 may be, but is not limited to, a read-only memory (ROM) or other type of static storage device that can store static information and instructions, a random access memory (RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact disc, laser disc, digital versatile disc, Blu-ray disc, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
In one possible design, the memory 103 may exist independently of the processor 101, i.e. the memory 103 may be a memory external to the processor 101. In this case, the memory 103 may be connected to the processor 101 through the communication line 102 and used for storing execution instructions or application program code, and the processor 101 controls the execution to implement the methods provided in the embodiments described below. In yet another possible design, the memory 103 may be integrated with the processor 101, i.e. the memory 103 may be an internal memory of the processor 101; for example, the memory 103 may be a cache used to temporarily store some data, instruction information, and the like.
As one implementation, the processor 101 may include one or more CPUs, such as CPU0 and CPU1 in fig. 1. As another implementation, the electronic device 100 may include multiple processors, such as the processor 101 and the processor 107 in fig. 1. As yet another implementation, the electronic device 100 may also include an output device 105 and an input device 106.
From the foregoing description of the embodiments, it will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of functional modules is illustrated, and in practical application, the above-described functional allocation may be implemented by different functional modules according to needs, i.e. the internal structure of the network node is divided into different functional modules to implement all or part of the functions described above. The specific working processes of the above-described system, module and network node may refer to the corresponding processes in the foregoing method embodiments, which are not described herein.
At present, the related art proposes a carrier adaptive allocation method for base station equipment, and a base station. The method first predicts the number of carriers required by each cell; then determines the cells to which carriers are to be allocated according to the predicted number of carriers required by each cell, the allocable carrier resources, and the principle of satisfying the cells requiring the largest number of additional carriers; and finally allocates the carriers to the cells to be allocated carriers.
The carrier allocation method of the related art mainly comprises: determining the time and place of a large-scale event; the network side issuing a guarantee request for the venue; first-line operation and maintenance personnel allocating the network resources required by the venue to the access network equipment where the venue is located at the appointed time; and the operation and maintenance personnel releasing or restoring the network resources of the venue after the event ends.
In order to solve the problems in the related art, an embodiment of the present application provides a model training method as shown in fig. 2. The method comprises the following steps: the model training device determines a first training parameter and a second training parameter, inputs the first training parameter and the second training parameter into the current neural network model for iterative training until the loss value of the current neural network model meets a preset condition, and then determines the current neural network model as a network traffic prediction model.
As shown in fig. 2, a flowchart of a model training method provided in an embodiment of the present application is shown, where the model training method provided in the embodiment of the present application may be applied to an electronic device shown in fig. 1, and the model training method provided in the embodiment of the present application may be implemented by the following steps.
Step 201, the model training device determines a first training parameter and a second training parameter.
The first training parameter is a network traffic parameter of at least one cell in a first preset time period, and the second training parameter is a network traffic parameter of the at least one cell in a second preset time period.
In one possible implementation, the data required by the model training device may be synchronized in real time from the wireless performance data acquisition system and the engineering parameter data acquisition system into the Hadoop Distributed File System (HDFS) of the source database, by means of file transfer protocol (FTP) synchronization or message subscription. The main data include base station hourly data, base station engineering parameter information, cell geographic information and other wireless-network-side data, for example physical uplink shared channel (PUSCH) utilization, physical downlink shared channel (PDSCH) utilization, physical downlink control channel (PDCCH) utilization, average number of effective radio resource control (RRC) connections, physical resource block (PRB) utilization, and number of perceived users.
In the training parameter construction process, the model training device screens approximately 28 days of data from the HDFS of the source database as training samples and, combining them with the geographic distribution information of the base stations, constructs the first training parameter and the second training parameter.
It should be noted that the network traffic parameter may be cell KPI data: the first training parameter may be the cell PRB utilization, uplink traffic and downlink traffic from 7 to 28 days ago, and the second training parameter may be the cell PRB utilization, uplink traffic and downlink traffic of the most recent 7 days.
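The windowing above (days 8 to 28 as the first training parameter, the most recent 7 days as the second) can be sketched as follows. This is a hypothetical illustration assuming one KPI record per day, oldest first:

```python
def build_training_parameters(daily_kpi):
    """Split at least 28 days of per-day KPI records (oldest first) into
    the two training parameters: records from 7 to 28 days ago form the
    first training parameter (21 days), the most recent 7 days form the
    second training parameter."""
    assert len(daily_kpi) >= 28, "need at least 28 days of samples"
    first = daily_kpi[-28:-7]   # 7 to 28 days ago
    second = daily_kpi[-7:]     # most recent 7 days
    return first, second
```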
Step 202, the model training device inputs the first training parameter and the second training parameter into the current neural network model for iterative training until the loss value of the current neural network model meets the preset condition.
The loss value represents the error between the second training parameter and the output result; the output result is the network traffic parameter of the at least one cell in the second preset time period predicted by the current neural network model according to the first training parameter and the second training parameter.
In one possible implementation manner, step 202 may be implemented as follows: the model training device inputs the first training parameter and the second training parameter into the current neural network model and determines the output result; the model training device then calculates the loss value according to the output result and the second training parameter, and if the loss value is greater than the preset value, the model training device adjusts the model parameters and continues the iterative training until the loss value meets the preset condition.
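The predict, compare, adjust loop of step 202 can be sketched generically. This is a hypothetical skeleton, not the patent's implementation; the toy stand-in model, learning rate 0.1 and threshold are invented for the demonstration:

```python
def iterative_training(params, predict, loss_fn, adjust, threshold, max_iters=10000):
    """Sketch of step 202: predict an output, compute the loss value, and
    adjust the model parameters until the loss meets the preset condition."""
    loss = float("inf")
    for _ in range(max_iters):
        output = predict(params)         # determine the output result
        loss = loss_fn(output)           # loss vs. the second training parameter
        if loss <= threshold:            # preset condition met: stop training
            break
        params = adjust(params, output)  # otherwise adjust the model parameters
    return params, loss

# Toy stand-in model: learn a scalar weight w so that w * 2.0 approximates 1.0.
weight, final_loss = iterative_training(
    0.0,
    predict=lambda w: w * 2.0,
    loss_fn=lambda out: abs(out - 1.0),
    adjust=lambda w, out: w + 0.1 * (1.0 - out),  # signed-error correction
    threshold=1e-3,
)
```

The loop terminates once the loss value falls below the threshold, with the learned weight near 0.5.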
Optionally, the parameters input into the current neural network model may further include time dimension information and geographic dimension information.
For example, the input data structure of the first training parameter and the second training parameter may be [batch_size x in_seq_len] (sequence information), and the input data structure of the time dimension information and the geographic dimension information may be [batch_size x n_features] (time, city, etc.). Correspondingly, the output data structure of the current neural network model may be [batch_size x out_seq_len], and the output data result is the cell index trend prediction result of a single city for the next 48 hours.
Optionally, the activation function in the model adopts the ReLU function to prevent gradient vanishing; Batch Normalization is adopted to assist rapid parameter adjustment so that training converges quickly; the loss function adopts the Mean Absolute Error function; residual connections are introduced into the model to improve its robustness; a learning rate warm-up and decay mechanism is introduced to reduce oscillation during training; and Dropout and early stopping are introduced to prevent the model from over-fitting.
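As a concrete illustration of two of these components, the following is a minimal NumPy sketch (not the patent's actual implementation) of the ReLU activation and the Mean Absolute Error loss function named above; array shapes follow the [batch_size x out_seq_len] convention:

```python
import numpy as np

def relu(x: np.ndarray) -> np.ndarray:
    """ReLU activation: zeroes negative pre-activations to limit vanishing gradients."""
    return np.maximum(0.0, x)

def mae_loss(pred: np.ndarray, target: np.ndarray) -> float:
    """Mean Absolute Error over a [batch_size x out_seq_len] prediction."""
    return float(np.mean(np.abs(pred - target)))

# Toy batch: 2 samples, 48 output steps (the 48-hour horizon in the example above).
pred = np.zeros((2, 48))
target = np.ones((2, 48))
print(relu(np.array([-1.0, 2.0])))  # [0. 2.]
print(mae_loss(pred, target))       # 1.0
```

The warm-up, decay, Dropout, and early-stopping mechanisms would sit in the training loop rather than in these pointwise functions.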
And 203, determining the current neural network model as a network flow prediction model by the model training device.
The network flow prediction model is used for predicting the network flow parameters of the target cell in the target time period according to the input historical network flow parameters of the target cell.
In one possible implementation manner, the implementation process of the step 203 may be: and under the condition that the loss value meets the preset condition, the model training device judges that the current neural network model is a network flow prediction model.
The scheme at least brings the following beneficial effects:
Based on the above technical solution, according to the model training method provided by the embodiment of the present application, the model training device first determines a first training parameter and a second training parameter, inputs them into the current neural network model for iterative training until the loss value of the current neural network model meets a preset condition, and then determines the current neural network model to be the network traffic prediction model. The network traffic prediction model can predict the network traffic parameter of a target cell in a target time period according to the input historical network traffic parameter of that cell, so the method can be deployed without operators having to determine the time and place of activities in advance and submit requests.
In a possible implementation manner, as shown in fig. 3 in conjunction with fig. 2, the step 202 of inputting the first training parameter and the second training parameter into the current neural network model for iterative training until the loss value of the current neural network model meets the preset condition may be specifically implemented by the following steps 301 to 306, which are described in detail below:
step 301, the model training device inputs the first training parameter and the second training parameter into the current neural network model, and determines an output result of the current neural network model.
In one possible implementation, the hidden layer of the current neural network model includes q hidden layer nodes; the h-th hidden layer node of the q hidden layer nodes comprises a model parameters; the a model parameters correspond one-to-one with the a output layer nodes; q and a are positive integers.
As an example, the q hidden layer nodes may be q hidden layer neurons. First, the model training apparatus determines the cell PRB utilization, uplink traffic, and downlink traffic of approximately 21 days, together with the cell PRB utilization, uplink traffic, and downlink traffic of approximately 7 days, as an input vector matrix, and inputs this matrix into the neural network model. Specifically, the output results of the q hidden layer neurons may be determined by Equation 1.
The output results of the q hidden layer neurons satisfy the following equation 1:
B_q = f( Σ_{i=1}^{d} (v_{iq} · x_i) + θ_q )    (Equation 1)
wherein B_q represents the output result of the q-th hidden layer neuron; d represents the number of input vectors; v_{iq} represents the weight of the i-th input vector from the input layer to the q-th hidden layer neuron; x_i represents the i-th input vector; θ_q represents the bias variable of the q-th hidden layer neuron; and f(·) is the activation function.
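The hidden layer computation of Equation 1 can be sketched in NumPy as follows; the names v, x, and theta mirror the symbols in the text, while the choice of ReLU for the activation f follows the activation described earlier and is an assumption, not something the text fixes:

```python
import numpy as np

def hidden_layer_output(x, v, theta):
    """B_q = f( Σ_i v_iq * x_i + θ_q ), computed for all q hidden neurons at once.

    x:     input vector of length d
    v:     weight matrix of shape (d, q); v[i, q] is the input-to-hidden weight v_iq
    theta: bias vector of length q
    """
    z = x @ v + theta          # weighted sum plus bias, per hidden neuron
    return np.maximum(0.0, z)  # f(·) = ReLU (assumed)

x = np.array([1.0, 2.0])                  # d = 2 input components
v = np.array([[1.0, -1.0], [0.5, 0.5]])   # q = 2 hidden neurons
theta = np.array([0.0, -2.0])
print(hidden_layer_output(x, v, theta))   # [2. 0.]
```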
Optionally, the input vector matrix may further include time dimension information and geographic dimension information.
As one possible implementation, the network traffic parameters include h parameters; the output layer of the current neural network model comprises h output layer nodes; the h output layer nodes correspond one-to-one with the h parameters; the i-th output layer node is used for outputting the i-th parameter predicted by the current neural network model; h and i are positive integers.
In yet another example, the model training apparatus determines the output result of the current neural network model according to the determined output results of the q hidden layer neurons, and may specifically be implemented by equation 2.
Y_h = f( Σ_{q=1}^{Q} (w_{qh} · B_q) + θ_h )    (Equation 2)
wherein Y_h represents the output result of the h-th output layer node of the current neural network model; Q represents the number of hidden layer neurons; w_{qh} represents the weight from the q-th hidden layer neuron to the h-th output layer node; B_q represents the output result of the q-th hidden layer neuron; and θ_h represents the bias variable of the h-th output layer neuron.
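A companion sketch for Equation 2, computing every output node at once. An identity (linear) output activation is assumed here, which is common for regression targets such as traffic volumes; the text does not pin f down for the output layer:

```python
import numpy as np

def output_layer(B, w, theta):
    """Y_h = Σ_q w_qh * B_q + θ_h for every output node h (identity activation assumed).

    B:     hidden layer outputs, length Q
    w:     weight matrix of shape (Q, h); w[q, h] is the hidden-to-output weight w_qh
    theta: output bias vector of length h
    """
    return B @ w + theta

B = np.array([2.0, 0.0])                  # Q = 2 hidden neuron outputs
w = np.array([[1.0, 0.5, 0.0],            # h = 3 output nodes, e.g. PRB
              [1.0, 1.0, 1.0]])           # utilization, uplink, downlink
theta = np.array([0.0, 0.0, 1.0])
print(output_layer(B, w, theta))          # [2. 1. 1.]
```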
And 302, the model training device determines a loss value of the current neural network model according to the output result and the second training parameter.
As a possible implementation manner, the model training device determines a loss value of each output node in the h output layer nodes; the loss value of the ith node is determined according to the output result of the ith node and the ith parameter in the second training parameters; the sum of the loss values of each output node is the loss value of the current neural network model.
As an example, the network traffic parameters include PRB utilization, uplink traffic, and downlink traffic; the output layer of the current neural network model comprises a first output layer node, a second output layer node, and a third output layer node; the first output layer node is used for outputting the PRB utilization predicted by the current neural network model; the second output layer node is used for outputting the uplink traffic predicted by the current neural network model; and the third output layer node is used for outputting the downlink traffic predicted by the current neural network model.
Further, the model training device determines the loss value E_1 of the first output layer node of the current neural network model according to the output result of the first output layer node and the PRB utilization in the second training parameter. For example: if the output result of the first output layer node is y_1 and the PRB utilization in the second training parameter is t_1, the loss value E_1 of the first output layer node can be calculated by Equation 3:

E_1 = (1/2)(y_1 − t_1)^2    (Equation 3)
the model training device determines a loss value E of the second output layer node of the current neural network model according to the output result of the second output layer node and the uplink flow of the second training parameter 2
The model training device determines the loss value E_3 of the third output layer node of the current neural network model according to the output result of the third output layer node and the downlink traffic in the second training parameter.
Similarly, the loss value E_2 of the second output layer node and the loss value E_3 of the third output layer node can both be calculated by the corresponding Equation 3.
The model training device determines the sum of the loss value of the first output layer node, the loss value of the second output layer node, and the loss value of the third output layer node as the loss value E of the current neural network model. For example: E = E_1 + E_2 + E_3.
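The per-node loss and its sum can be sketched as follows. The 1/2 factor follows the reconstructed Equation 3 and is the conventional choice that simplifies the gradient; treat it as an assumption:

```python
def node_loss(y: float, t: float) -> float:
    """E_i = 1/2 * (y_i - t_i)^2 for one output node (reconstructed Equation 3)."""
    return 0.5 * (y - t) ** 2

# Predicted vs. target values for the three nodes: PRB utilization,
# uplink traffic, downlink traffic (illustrative numbers only).
preds = [0.6, 1.2, 3.0]
targets = [0.5, 1.0, 3.0]

# E = E_1 + E_2 + E_3: sum of the per-node losses.
E = sum(node_loss(y, t) for y, t in zip(preds, targets))
print(round(E, 6))  # 0.025
```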
Step 303, the model training device determines whether the loss value of the current neural network model meets a preset condition.
Step 304, if the preset condition is met, the model training device takes the current neural network model as the network traffic prediction model.
In one possible implementation, if the loss value of the current neural network model is smaller than the preset value, the model training device determines the current neural network model to be the network traffic prediction model.
Step 305, if the preset condition is not met, the model training device adjusts the model parameters of the current neural network model according to the loss value.
In one possible implementation, the model training apparatus adjusts the value of the j-th model parameter of the h-th hidden layer node to: the sum of the current value of the j-th model parameter of the h-th hidden layer node and the loss value of the j-th output layer node; j is a positive integer.
In combination with the example in step 302, if the loss value of the current neural network model is greater than or equal to the preset value, the model training apparatus first adjusts weight values from h output layer nodes to q hidden layer nodes in the current neural network model according to the loss value.
The model training device determines the weight deviation from the first output node to the h-th hidden layer node, and the weight deviation satisfies: ΔW_1h = E_1 · a, wherein a is the learning rate.
When adjusting the weight from the first output node to the h-th hidden layer node, the model training device adds the weight deviation ΔW_1h to the previous weight W_1h to determine the adjusted weight W_1h' from the first output node to the h-th hidden layer node. In the same way, the model training device determines the weight deviation from the first output node to each hidden layer node and readjusts the weight W_1h' from the first output node to each hidden layer node.
It will be appreciated that the model training apparatus adjusts the weight from each output node to each hidden layer node in the same manner as described above.
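The weight adjustment described above (ΔW_1h = E_1 · a, then W_1h' = W_1h + ΔW_1h) can be sketched as follows. Note that this mirrors the update rule exactly as stated in the text, which is a simplification relative to full backpropagation, where the local gradient would also enter the deviation:

```python
def adjust_weight(w_old: float, loss: float, lr: float) -> float:
    """W' = W + ΔW, with ΔW = loss value * learning rate (as stated in the text)."""
    delta_w = loss * lr
    return w_old + delta_w

w = 0.8          # current weight W_1h (illustrative)
E1 = 0.005       # loss value of the first output node
a = 0.1          # learning rate
print(round(adjust_weight(w, E1, a), 4))  # 0.8005
```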
And 306, the model training device iteratively executes 301, 302, 303, 304, 305 and 306 by taking the current neural network model with the parameters adjusted as the current neural network model until the loss value of the current neural network model meets the preset condition.
In one possible implementation, after the model training apparatus adjusts the weights from each output node to each hidden layer node, it iteratively performs steps 301, 302, 303, 304, 305, and 306. If the loss value of the current neural network model is still greater than or equal to the preset value, the model training apparatus adjusts the weights from each hidden layer node to each input layer node in the same manner as in step 305, and again iteratively performs steps 301, 302, 303, 304, 305, and 306 until the loss value meets the preset condition.
The foregoing describes in detail how the model training device inputs the first training parameters and the second training parameters into the current neural network model for iterative training until the loss value of the current neural network model meets the preset condition.
In a possible implementation manner, as shown in fig. 2 and in connection with fig. 4, after determining that the current neural network model is the network traffic prediction model in step 203, how the network traffic prediction device predicts and allocates the network traffic may be implemented specifically by the following steps 401 to 405.
Step 401, the network traffic predicting device obtains the network traffic parameter of the target cell in a third preset time period.
For example, the network traffic parameters in the third preset time period may be data such as the PRB utilization, uplink traffic, and downlink traffic of the target cell in the 28 days before the current time.
Step 402, the network traffic prediction device inputs the network traffic parameter of the target cell in the third preset time period to the network traffic prediction model, and determines the network traffic parameter of the target cell in the fourth preset time period.
The network traffic prediction model is a network traffic prediction model trained according to the model methods of the steps 201-203 and the steps 301-306.
The fourth preset time period is, for example, 48 hours in the future.
The network traffic prediction device inputs the PRB utilization, uplink traffic, downlink traffic, and other data of the target cell in the third preset time period into the network traffic prediction model. According to Equation 1 in step 301, the network traffic prediction model determines the output result of its hidden layer; then, according to Equation 2, it determines the output result of its output layer; finally, the network traffic prediction model outputs the PRB utilization, uplink traffic, and downlink traffic of the target cell for the next 48 hours.
Step 403, the network traffic prediction device determines that the target cell whose network traffic parameter in the fourth preset time period is greater than the first preset threshold and less than the second preset threshold is a volume-reduced cell.
As an example, according to the predicted PRB utilization of the target cell over the next forty-eight hours, the network traffic prediction device screens cells whose utilization falls between a lower limit of 30% and an upper limit of 70% as cells whose capacity can be reduced.
The number of carriers that can be allocated by the capacity reduction cell is determined according to the uplink flow and the downlink flow of the capacity reduction cell.
Step 404, the network traffic prediction device determines that the target cell whose network traffic parameter is greater than the second preset threshold value in the fourth preset time period is a capacity expansion cell.
As an example, according to the predicted PRB utilization of the target cell over the next forty-eight hours, the network traffic prediction device screens cells whose utilization exceeds 70% as cells requiring capacity expansion.
The number of carriers required for the capacity expansion cell is determined according to the uplink traffic and the downlink traffic of the capacity expansion cell.
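The screening in steps 403 and 404 can be sketched as follows: cells are classified by their predicted PRB utilization over the next 48 hours against the 30% and 70% thresholds from the examples above. The carrier counts, which additionally depend on uplink and downlink traffic, are omitted from this sketch:

```python
def classify_cell(prb_utilization: float) -> str:
    """Return the capacity action suggested for one cell (thresholds from the examples)."""
    if prb_utilization > 0.70:
        return "expand"            # above the second preset threshold
    if 0.30 < prb_utilization <= 0.70:
        return "reduce"            # between the first and second preset thresholds
    return "keep"                  # at or below the lower limit

# Illustrative predicted utilizations for three cells.
cells = {"cell_a": 0.85, "cell_b": 0.55, "cell_c": 0.20}
print({name: classify_cell(u) for name, u in cells.items()})
# {'cell_a': 'expand', 'cell_b': 'reduce', 'cell_c': 'keep'}
```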
And 405, the network flow prediction device allocates the carrier resources to be allocated of the capacity-reduction cell to the capacity-expansion cell.
In combination with the examples in step 403 and step 404, after the network traffic prediction device determines the capacity-expansion cells and the capacity-reduction cells, it schedules the execution time of the configuration task, for example between 8:00 and 9:00 a.m., and, in combination with the address information of the base stations, allocates the carriers of the capacity-reduction cells to nearby cells requiring capacity expansion.
The network traffic prediction device is provided with a task executor. When the task executor detects that the current time is the task execution time, it fills in the command template in combination with the base station configuration information and sends a capacity-reduction command to the OMC; if the returned result is normal, it executes the next step; otherwise, it rolls back this step, notifies network management personnel through the system SMS interface, and finishes.
Correspondingly, when executing this step, the network traffic prediction device fills in the command template in combination with the equipment information and sends a capacity-expansion command to the OMC; if the returned result is normal, the operation finishes; otherwise, it rolls back this step and the previous step, notifies network management personnel again through the system SMS interface, and finishes.
In one possible implementation manner, to ensure that the predicted data is as accurate as possible, the network traffic prediction model periodically acquires the updated network traffic parameters of the target cell in the third preset time period, outputs the updated network traffic parameters of the target cell in the fourth preset time period, and then sequentially repeats steps 302-306.
The embodiment of the application may divide the functional modules or functional units of the electronic device according to the above method examples, for example, each functional module or functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated modules may be implemented in hardware, or in software functional modules or functional units. The division of the modules or units in the embodiments of the present application is merely a logic function division, and other division manners may be implemented in practice.
Fig. 5 is a schematic structural diagram of a model training device according to an embodiment of the present application, where the device includes: a processing unit 501; the processing unit 501 is configured to determine a first training parameter and a second training parameter; the first training parameters are network traffic parameters of at least one cell in a first preset time period, and the second training parameters are network traffic parameters of the at least one cell in a second preset time period; the processing unit 501 is further configured to input the first training parameter and the second training parameter into the current neural network model for iterative training until the loss value of the current neural network model meets a preset condition; the loss value is used for representing the error value between the second training parameter and the output result; the output result is the network traffic parameter of the at least one cell in the second preset time period, predicted by the current neural network model according to the first training parameter and the second training parameter; the processing unit 501 is further configured to determine the current neural network model to be a network traffic prediction model; the network traffic prediction model is used for predicting the network traffic parameter of a target cell in a target time period according to the input historical network traffic parameters of the target cell.
Optionally, the processing unit 501 is specifically configured to perform the following steps: step 1, inputting the first training parameter and the second training parameter into the current neural network model, and determining the output result of the current neural network model; step 2, determining the loss value of the current neural network model according to the output result and the second training parameter; step 3, determining whether the loss value of the current neural network model meets the preset condition; step 4, if yes, taking the current neural network model as the network traffic prediction model; step 5, if not, adjusting the model parameters of the current neural network model according to the loss value; and step 6, taking the parameter-adjusted current neural network model as the current neural network model, and iteratively performing steps 1, 2, 3, 4, 5, and 6 until the loss value of the current neural network model meets the preset condition.
Optionally, the network traffic parameters include h parameters; the output layer of the current neural network model comprises h output layer nodes; the h output layer nodes correspond one-to-one with the h parameters; the i-th output layer node is used for outputting the i-th parameter predicted by the current neural network model; the processing unit 501 is further configured to determine the loss value of each of the h output layer nodes; the loss value of the i-th node is determined according to the output result of the i-th node and the i-th parameter in the second training parameters; and the sum of the loss values of all output nodes is the loss value of the current neural network model.
Optionally, the hidden layer of the current neural network model includes q hidden layer nodes; the h-th hidden layer node of the q hidden layer nodes comprises l model parameters; the l model parameters correspond one-to-one with the l output layer nodes; the processing unit 501 is further configured to adjust the value of the j-th model parameter of the h-th hidden layer node to: the sum of the current value of the j-th model parameter of the h-th hidden layer node and the loss value of the j-th output layer node.
Optionally, the model training apparatus may further comprise a communication unit 502. The model training apparatus may communicate with other devices (e.g., network traffic prediction apparatus) through the communication unit 502.
Fig. 6 is a schematic structural diagram of a network traffic prediction device according to an embodiment of the present application, where the device includes: a processing unit 601 and an acquisition unit 602; an obtaining unit 602, configured to obtain a network traffic parameter of the target cell in a third preset time period; a processing unit 601, configured to input a network traffic parameter of the target cell in a third preset time period to the network traffic prediction model, and determine a network traffic parameter of the target cell in a fourth preset time period; the network traffic prediction model is a network traffic prediction model trained according to the model method of the first aspect.
Optionally, the network traffic parameters of the target cell in the fourth preset time period include network traffic parameters of each time interval; the processing unit 601 is further configured to determine that a target cell, in which the network traffic parameter in the fourth preset time period is greater than the first preset threshold and less than the second preset threshold, is a volume-reduced cell; determining a target cell with the network flow parameter larger than a second preset threshold value in a fourth preset time period as a capacity expansion cell; and allocating the carrier resources to be allocated of the capacity-reducing cell to the capacity-expanding cell.
When implemented in hardware, the communication unit 502 or the acquisition unit 602 in the embodiments of the present application are integrated on a communication interface, and the processing unit 501 or the processing unit 601 may be integrated on a processor. A specific implementation is shown in fig. 7.
Fig. 7 shows a further possible structural schematic of the electronic device involved in the above-described embodiments. The electronic device includes: a processor 702 and a communication interface 703. The processor 702 is configured to control and manage the actions of the electronic device, e.g., to perform the steps performed by the processing unit 501 or the processing unit 601 described above, and/or to perform other processes of the techniques described herein. The communication interface 703 is used to support communication of the electronic device with other network entities, for example, performing the steps performed by the communication unit 502 or the acquisition unit 602 described above. The electronic device may further comprise a memory 701 and a bus 704, the memory 701 being used for storing program codes and data of the electronic device.
Wherein the memory 701 may be a memory in an electronic device or the like, which may include a volatile memory such as a random access memory; the memory may also include non-volatile memory, such as read-only memory, flash memory, hard disk or solid state disk; the memory may also comprise a combination of the above types of memories.
The processor 702 may be implemented or executed with the various exemplary logic blocks, modules, and circuits described in connection with this disclosure. The processor may be a central processing unit, a general purpose processor, a digital signal processor, an application specific integrated circuit, a field programmable gate array or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. Which may implement or perform the various exemplary logic blocks, modules, and circuits described in connection with this disclosure. The processor may also be a combination that performs the function of a computation, e.g., a combination comprising one or more microprocessors, a combination of a DSP and a microprocessor, etc.
Bus 704 may be an extended industry standard architecture (Extended Industry Standard Architecture, EISA) bus or the like. The bus 704 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in fig. 7, but not only one bus or one type of bus.
From the foregoing description of the embodiments, it will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of functional modules is illustrated, and in practical application, the above-described functional allocation may be implemented by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to implement all or part of the functions described above. The specific working processes of the above-described systems, devices and units may refer to the corresponding processes in the foregoing method embodiments, which are not described herein.
Embodiments of the present application provide a computer program product comprising instructions which, when run on a computer, cause the computer to perform the model training method or the network traffic prediction method in the method embodiments described above.
The embodiment of the application also provides a computer readable storage medium, in which instructions are stored, which when executed on a computer, cause the computer to execute the model training method or the network traffic prediction method in the method flow shown in the method embodiment.
The computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access Memory (Random Access Memory, RAM), a Read-Only Memory (ROM), an erasable programmable Read-Only Memory (Erasable Programmable Read Only Memory, EPROM), a register, an optical fiber, a portable compact disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing, or any other form of computer readable storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an application specific integrated circuit (Application Specific Integrated Circuit, ASIC). In the context of the present application, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
Since the electronic device, the computer readable storage medium, and the computer program product in the embodiments of the present invention can be applied to the above-mentioned method, the technical effects that can be obtained by the method can also refer to the above-mentioned method embodiments, and the embodiments of the present invention are not described herein again.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, e.g., the partitioning of elements is merely a logical functional partitioning, and there may be additional partitioning in actual implementation, e.g., multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not implemented. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some interface, indirect coupling or communication connection of devices or units, electrical, mechanical, or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The foregoing is merely a specific embodiment of the present application, but the protection scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered in the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (14)

1. A model training method applied to a predictive model, the method comprising:
determining a first training parameter and a second training parameter; the first training parameters are network flow parameters of at least one cell in a first preset time period, and the second training parameters are network flow parameters of the at least one cell in a second preset time period;
inputting the first training parameters and the second training parameters into a current neural network model for iterative training until the loss value of the current neural network model meets a preset condition; wherein the loss value is used for representing the error value between the second training parameter and an output result; the output result is a network flow parameter of the at least one cell in the second preset time period, which is obtained by the current neural network model according to the first training parameter and the second training parameter;
Determining the current neural network model as a network flow prediction model; the network flow prediction model is used for predicting the network flow parameters of the target cell in a target time period according to the input historical network flow parameters of the target cell.
2. The method according to claim 1, wherein inputting the first training parameter and the second training parameter into a current neural network model for iterative training until a loss value of the current neural network model meets a preset condition, comprises:
step 1, inputting the first training parameters and the second training parameters into a current neural network model, and determining the output result of the current neural network model;
step 2, determining a loss value of the current neural network model according to the output result and the second training parameter;
step 3, determining whether the loss value of the current neural network model meets the preset condition;
step 4, if yes, using the current neural network model as the network flow prediction model;
step 5, if not, adjusting model parameters of the current neural network model according to the loss value;
and step 6, taking the parameter-adjusted current neural network model as the current neural network model, and iteratively performing steps 1 to 6 until the loss value of the current neural network model meets the preset condition.
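The iterative procedure of steps 1 to 6 can be sketched as a plain training loop. This is a minimal illustration only: the model interface (`forward`, `adjust`), the absolute-error loss, the convergence threshold, and the iteration cap are assumptions, not details from the claims.

```python
import numpy as np

def train_until_converged(model, first_params, second_params,
                          loss_threshold=1e-3, max_iters=1000):
    """Iterative training loop following steps 1-6 of claim 2."""
    for _ in range(max_iters):
        # Step 1: feed both training parameter sets into the current model.
        output = model.forward(np.concatenate([first_params, second_params]))
        # Step 2: the loss measures the error between the output result
        # and the second training parameters (the ground-truth traffic).
        loss = float(np.sum(np.abs(output - second_params)))
        # Steps 3-4: if the loss meets the preset condition, the current
        # model becomes the network traffic prediction model.
        if loss < loss_threshold:
            break
        # Steps 5-6: otherwise adjust the model parameters and iterate.
        model.adjust(loss)
    return model
```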
3. The method of claim 2, wherein the network traffic parameters comprise h parameters; the output layer of the current neural network model comprises h output layer nodes; the h output layer nodes are in one-to-one correspondence with the h parameters; the i-th output layer node is used for outputting the i-th parameter predicted by the current neural network model; h and i are positive integers;
and the determining a loss value of the current neural network model according to the output result and the second training parameters comprises:
determining a loss value of each of the h output layer nodes; wherein the loss value of the i-th node is determined according to the output result of the i-th node and the i-th parameter in the second training parameters; and
taking the sum of the loss values of the output nodes as the loss value of the current neural network model.
4. The method according to claim 3, wherein the hidden layer of the current neural network model comprises q hidden layer nodes; the h-th hidden layer node of the q hidden layer nodes comprises a model parameters; the a model parameters are in one-to-one correspondence with a output layer nodes; q and a are positive integers;
and the adjusting model parameters of the current neural network model according to the loss value comprises:
adjusting the value of the j-th model parameter of the h-th hidden layer node to: the sum of the current value of the j-th model parameter of the h-th hidden layer node and the loss value of the j-th output layer node; j is a positive integer.
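Claims 3 and 4 read, in combination, as a per-output-node loss followed by an additive parameter update. A rough sketch under assumptions (absolute error as the per-node loss measure, which the claims leave unspecified, and a `(q, a)` array of hidden-layer parameters) might look like:

```python
import numpy as np

def per_node_losses(outputs, targets):
    # Claim 3: the i-th loss is determined from the i-th output node's
    # result and the i-th parameter of the second training parameters.
    # Absolute error is an assumption; the claim leaves the measure open.
    return np.abs(outputs - targets)

def adjust_hidden_params(hidden_params, node_losses):
    # Claim 4: the j-th model parameter of each hidden-layer node becomes
    # its current value plus the loss of the j-th output-layer node.
    # hidden_params has shape (q, a): q hidden nodes, a parameters each.
    return hidden_params + node_losses[np.newaxis, :]

outputs = np.array([1.0, 2.0, 3.0])
targets = np.array([1.5, 2.0, 2.0])
losses = per_node_losses(outputs, targets)   # per-node losses [0.5, 0.0, 1.0]
model_loss = losses.sum()                    # claim 3: their sum is the model loss
new_params = adjust_hidden_params(np.zeros((2, 3)), losses)
```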
5. A method for predicting network traffic, comprising:
acquiring network traffic parameters of a target cell in a third preset time period; and
inputting the network traffic parameters of the target cell in the third preset time period into a network traffic prediction model, and determining network traffic parameters of the target cell in a fourth preset time period; wherein the network traffic prediction model is trained according to the model training method of any one of claims 1-4.
6. The method of claim 5, wherein the network traffic parameters of the target cell in the fourth preset time period comprise a network traffic parameter for each time interval;
and after the determining the network traffic parameters of the target cell in the fourth preset time period, the method further comprises:
determining a target cell whose network traffic parameter in the fourth preset time period is greater than a first preset threshold and less than a second preset threshold as a capacity-reduction cell;
determining a target cell whose network traffic parameter in the fourth preset time period is greater than the second preset threshold as a capacity-expansion cell; and
allocating the to-be-allocated carrier resources of the capacity-reduction cell to the capacity-expansion cell.
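The threshold logic of claim 6 amounts to partitioning cells by their predicted traffic and moving spare carriers from mid-load cells to overloaded ones. A minimal sketch, in which the cell names, dict layout, and threshold values are illustrative assumptions:

```python
def classify_cells(predicted, low, high):
    """Claim 6 sketch: a cell whose predicted traffic lies strictly
    between the two thresholds sheds spare carriers (capacity-reduction);
    a cell above the upper threshold receives them (capacity-expansion)."""
    shrink = [cell for cell, v in predicted.items() if low < v < high]
    expand = [cell for cell, v in predicted.items() if v > high]
    return shrink, expand

# Hypothetical predicted traffic per cell for the fourth time period.
predicted = {"cell-A": 30.0, "cell-B": 80.0, "cell-C": 5.0}
shrink, expand = classify_cells(predicted, low=10.0, high=60.0)
```

Cells below the lower threshold fall into neither class here; the claim only assigns roles to the two classes above.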
7. A model training apparatus, the apparatus comprising: a processing unit;
the processing unit is configured to determine first training parameters and second training parameters; wherein the first training parameters are network traffic parameters of at least one cell in a first preset time period, and the second training parameters are network traffic parameters of the at least one cell in a second preset time period;
the processing unit is further configured to input the first training parameters and the second training parameters into a current neural network model for iterative training until a loss value of the current neural network model meets a preset condition; wherein the loss value represents an error value between the second training parameters and an output result; the output result is the network traffic parameters of the at least one cell in the second preset time period obtained by the current neural network model from the first training parameters and the second training parameters; and
the processing unit is further configured to determine the current neural network model as a network traffic prediction model; wherein the network traffic prediction model is used for predicting network traffic parameters of a target cell in a target time period according to input historical network traffic parameters of the target cell.
8. The apparatus according to claim 7, wherein the processing unit is specifically configured to perform the steps of:
step 1, inputting the first training parameters and the second training parameters into a current neural network model, and determining the output result of the current neural network model;
step 2, determining a loss value of the current neural network model according to the output result and the second training parameter;
step 3, determining whether the loss value of the current neural network model meets the preset condition;
step 4, if yes, using the current neural network model as the network flow prediction model;
step 5, if not, adjusting model parameters of the current neural network model according to the loss value;
and step 6, taking the parameter-adjusted current neural network model as the current neural network model, and iteratively performing steps 1 to 6 until the loss value of the current neural network model meets the preset condition.
9. The apparatus of claim 8, wherein the network traffic parameters comprise h parameters; the output layer of the current neural network model comprises h output layer nodes; the h output layer nodes are in one-to-one correspondence with the h parameters; the i-th output layer node is used for outputting the i-th parameter predicted by the current neural network model; h and i are positive integers;
the processing unit is further configured to determine a loss value of each of the h output layer nodes; wherein the loss value of the i-th node is determined according to the output result of the i-th node and the i-th parameter in the second training parameters; and
the sum of the loss values of the output nodes is the loss value of the current neural network model.
10. The apparatus of claim 9, wherein the hidden layer of the current neural network model comprises q hidden layer nodes; the h-th hidden layer node of the q hidden layer nodes comprises a model parameters; the a model parameters are in one-to-one correspondence with a output layer nodes; q and a are positive integers;
the processing unit is further configured to adjust the value of the j-th model parameter of the h-th hidden layer node to: the sum of the current value of the j-th model parameter of the h-th hidden layer node and the loss value of the j-th output layer node; j is a positive integer.
11. A network traffic prediction apparatus, characterized by comprising: a processing unit and an acquiring unit;
the acquiring unit is configured to acquire network traffic parameters of a target cell in a third preset time period; and
the processing unit is configured to input the network traffic parameters of the target cell in the third preset time period into a network traffic prediction model and determine network traffic parameters of the target cell in a fourth preset time period; wherein the network traffic prediction model is trained according to the model training method of any one of claims 1-4.
12. The apparatus of claim 11, wherein the network traffic parameters of the target cell in the fourth preset time period comprise a network traffic parameter for each time interval;
the processing unit is further configured to determine a target cell whose network traffic parameter in the fourth preset time period is greater than the first preset threshold and less than the second preset threshold as a capacity-reduction cell;
determine a target cell whose network traffic parameter in the fourth preset time period is greater than the second preset threshold as a capacity-expansion cell; and
allocate the to-be-allocated carrier resources of the capacity-reduction cell to the capacity-expansion cell.
13. An electronic device, comprising: a processor and a communication interface; the communication interface is coupled to the processor, and the processor is configured to run a computer program or instructions to implement the model training method or the network traffic prediction method according to any one of claims 1-6.
14. A computer-readable storage medium having instructions stored therein which, when executed by a processor, cause the processor to perform the model training method or the network traffic prediction method according to any one of claims 1-6.
CN202211666559.9A 2022-12-23 2022-12-23 Model training method and device and network traffic prediction method and device Pending CN116232923A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211666559.9A CN116232923A (en) 2022-12-23 2022-12-23 Model training method and device and network traffic prediction method and device

Publications (1)

Publication Number Publication Date
CN116232923A true CN116232923A (en) 2023-06-06

Family

ID=86575798

Country Status (1)

Country Link
CN (1) CN116232923A (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109951357A (en) * 2019-03-18 2019-06-28 西安电子科技大学 Network application recognition methods based on multilayer neural network
CN110213784A (en) * 2019-07-05 2019-09-06 中国联合网络通信集团有限公司 A kind of method for predicting and device
CN110621026A (en) * 2019-02-18 2019-12-27 北京航空航天大学 Base station flow multi-time prediction method
CN111200531A (en) * 2020-01-02 2020-05-26 国网冀北电力有限公司信息通信分公司 Communication network traffic prediction method and device and electronic equipment
CN111260122A (en) * 2020-01-13 2020-06-09 重庆首讯科技股份有限公司 Method and device for predicting traffic flow on expressway
CN111294227A (en) * 2018-12-10 2020-06-16 中国移动通信集团四川有限公司 Method, apparatus, device and medium for neural network-based traffic prediction
CN111327441A (en) * 2018-12-14 2020-06-23 中兴通讯股份有限公司 Traffic data prediction method, device, equipment and storage medium
US20200403913A1 (en) * 2019-06-21 2020-12-24 Beijing University Of Posts And Telecommunications Network Resource Scheduling Method, Apparatus, Electronic Device and Storage Medium
WO2021188022A1 (en) * 2020-03-17 2021-09-23 Telefonaktiebolaget Lm Ericsson (Publ) Radio resource allocation
CN113497717A (en) * 2020-03-19 2021-10-12 中国移动通信有限公司研究院 Network flow prediction method, device, equipment and storage medium
WO2022161599A1 (en) * 2021-01-26 2022-08-04 Telefonaktiebolaget Lm Ericsson (Publ) Training and using a neural network for managing an environment in a communication network


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
S. Xu, "Real-time scheduling and power allocation using deep neural networks", IEEE, 31 December 2019 (2019-12-31) *
Ding Zhang; Yu Decheng: "Research on the application of recurrent-neural-network-based wireless communication indicator prediction", Shandong Communication Technology, no. 01, 15 March 2020 (2020-03-15) *
Zhang Jie; Bai Guangwei; Sha Xinlei; Zhao Wentian; Shen Hang: "A mobile network traffic prediction model based on spatio-temporal features", Computer Science, no. 12, 31 December 2019 (2019-12-31) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117432414A (en) * 2023-12-20 2024-01-23 中煤科工开采研究院有限公司 Method and system for regulating and controlling top plate frosted jet flow seam formation
CN117432414B (en) * 2023-12-20 2024-03-19 中煤科工开采研究院有限公司 Method and system for regulating and controlling top plate frosted jet flow seam formation

Similar Documents

Publication Publication Date Title
US5809423A (en) Adaptive-Dynamic channel assignment organization system and method
US5404574A (en) Apparatus and method for non-regular channel assignment in wireless communication networks
CN110839184B (en) Method and device for adjusting bandwidth of mobile fronthaul optical network based on flow prediction
Chen et al. Multiuser computation offloading and resource allocation for cloud–edge heterogeneous network
CN111132190A (en) Base station load early warning method and device
JP2000078651A (en) Method for dynamically allocating carrier in radio packet network accompanied by reuse of the carrier
CN113472844B (en) Edge computing server deployment method, device and equipment for Internet of vehicles
CN111601327B (en) Service quality optimization method and device, readable medium and electronic equipment
WO2019129169A1 (en) Electronic apparatus and method used in wireless communications, and computer readable storage medium
KR102668157B1 (en) Apparatus and method for dynamic resource allocation in cloud radio access networks
CN113407249B (en) Task unloading method facing to position privacy protection
CN116232923A (en) Model training method and device and network traffic prediction method and device
Das et al. A novel load balancing scheme for the tele-traffic hot spot problem in cellular networks
CN113076177A (en) Dynamic migration method of virtual machine in edge computing environment
Saffar et al. Pricing and rate optimization of cloud radio access network using robust hierarchical dynamic game
KR102056894B1 (en) Dynamic resource orchestration for fog-enabled industrial internet of things networks
Ansari et al. Token based distributed dynamic channel allocation in wireless communication network
Jiang et al. A channel borrowing scheme for TDMA cellular communication systems
WO2023220975A1 (en) Method, apparatus and system for managing network resources
Casares-Giner et al. Performance model for two-tier mobile wireless networks with macrocells and small cells
Touati et al. Queue-based model approximation for inter-cell coordination with service differentiation
Lin et al. Channel assignment for GSM half-rate and full-rate traffic
Chousainov et al. Performance evaluation of a C-RAN supporting a mixture of random and quasi-random traffic
WO2023003275A1 (en) Method and system for dynamic beamwidth management in the wireless communication systems
Skulysh et al. Traffic aggregation nodes placement for virtual EPC

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination