WO2021057245A1 - Bandwidth prediction method and apparatus, electronic device, and storage medium - Google Patents

Bandwidth prediction method and apparatus, electronic device, and storage medium

Info

Publication number
WO2021057245A1
WO2021057245A1 (PCT/CN2020/105292, CN2020105292W)
Authority
WO
WIPO (PCT)
Prior art keywords
network state
network
bandwidth
state sequence
sequence
Prior art date
Application number
PCT/CN2020/105292
Other languages
English (en)
French (fr)
Inventor
周超
Original Assignee
北京达佳互联信息技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京达佳互联信息技术有限公司 filed Critical 北京达佳互联信息技术有限公司
Priority to EP20867417.6A priority Critical patent/EP4037256A4/en
Publication of WO2021057245A1 publication Critical patent/WO2021057245A1/zh
Priority to US17/557,445 priority patent/US11374825B2/en

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14 Network analysis or design
    • H04L41/147 Network analysis or design for predicting network behaviour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/088 Non-supervised learning, e.g. competitive learning
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14 Network analysis or design
    • H04L41/145 Network analysis or design involving simulating, designing, planning or modelling of a network
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00 Arrangements for monitoring or testing data switching networks
    • H04L43/08 Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L43/0876 Network utilisation, e.g. volume of load or congestion level
    • H04L43/0894 Packet rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/048 Activation functions

Definitions

  • This application relates to the field of network technology, and in particular to a bandwidth prediction method, device, electronic equipment, and storage medium.
  • Bandwidth prediction is the basis of network optimization, and accurate bandwidth prediction is particularly important for network congestion control and application-layer rate adaptation. Generally speaking, bandwidth prediction is based on historical network state data and is performed through some form of modeling; prediction methods of this type rest on the assumption that the historical network state data are known.
  • One common prediction scheme at present is bandwidth prediction based on the historical bandwidth mean, that is, the average of the bandwidth sampling points within a certain period of history is taken as the bandwidth value at the next moment; however, if the bandwidth sampling points within that period are missing, this scheme cannot work. Another common scheme is regression prediction based on historical bandwidth, that is, a regression prediction model is fitted to the bandwidth sampling points within a certain period of history to obtain the law of the bandwidth curve and predict the bandwidth at the next moment; however, when historical bandwidth sampling points are missing, the accuracy of the regression model is seriously affected.
  • In summary, in a practical system the historical network state data can be missing for various reasons, and performing bandwidth prediction on such incomplete historical data easily makes the prediction inaccurate. This application provides a bandwidth prediction method, device, electronic device, and storage medium to at least solve the problem in the related art of inaccurate bandwidth prediction caused by missing historical network state data.
  • The technical solution of this application is as follows:
  • According to a first aspect of the embodiments of this application, a bandwidth prediction method is provided, including:
  • acquiring valid network states at multiple time points in a historical time period, and setting the missing network states at the multiple time points to a preset value;
  • generating a first network state sequence from the network states at the multiple time points in chronological order;
  • inputting the first network state sequence into a trained autoencoder, and using the feature information between missing network states and real network states in the autoencoder to convert the missing network states in the first network state sequence into predicted network states, to obtain a second network state sequence output by the autoencoder, where the network states at the multiple time points include the valid network states and/or the missing network states;
  • predicting the network bandwidth according to the second network state sequence.
  • a bandwidth prediction device including:
  • a collection unit configured to acquire valid network states at multiple time points in a historical time period and set the missing network states at the multiple time points to a preset value;
  • a generating unit configured to generate a first network state sequence from the network states at the multiple time points in chronological order;
  • an adjustment unit configured to input the first network state sequence into a trained autoencoder and to use the feature information between missing network states and real network states in the autoencoder to convert the missing network states in the first network state sequence into predicted network states, obtaining a second network state sequence output by the autoencoder, where the network states at the multiple time points include the valid network states and/or the missing network states;
  • a prediction unit configured to predict the network bandwidth according to the second network state sequence.
  • an electronic device including:
  • a processor, and a memory for storing instructions executable by the processor;
  • wherein the processor is configured to execute the instructions to implement the bandwidth prediction method according to any one of the first aspect of the embodiments of this application.
  • According to a fourth aspect, a non-volatile readable storage medium is provided; when the instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to execute the bandwidth prediction method according to any one of the first aspect of the embodiments of this application.
  • According to a fifth aspect, a computer program product is provided; when the computer program product runs on an electronic device, the electronic device is caused to execute the bandwidth prediction method of the first aspect of the embodiments of this application or any method that the first aspect may involve.
  • Fig. 1 is a schematic diagram showing a download situation when watching a video according to an exemplary embodiment
  • Fig. 2 is a schematic diagram showing a bandwidth prediction method according to an exemplary embodiment
  • Fig. 3 is a schematic diagram showing a network state in a historical period of time according to an exemplary embodiment
  • Fig. 4 is a schematic diagram showing another network state in a historical period of time according to an exemplary embodiment
  • Fig. 5A is a schematic diagram showing a discretization of sub-periods according to an exemplary embodiment
  • Fig. 5B is a schematic diagram showing another discretization of a sub-period according to an exemplary embodiment
  • Fig. 6A is a schematic diagram showing a bandwidth prediction system according to an exemplary embodiment
  • Fig. 6B is a schematic diagram showing a structure of an autoencoder according to an exemplary embodiment
  • Fig. 7 is a schematic diagram showing a bandwidth curve according to an exemplary embodiment
  • Fig. 8 is a flowchart showing a complete method for bandwidth prediction according to an exemplary embodiment
  • Fig. 9 is a block diagram showing a bandwidth prediction device according to an exemplary embodiment
  • Fig. 10 is a block diagram showing an electronic device according to an exemplary embodiment.
  • The term "electronic device" in the embodiments of this application refers to a device composed of electronic components such as integrated circuits, transistors, and electron tubes that functions by means of electronic technology (including software), including electronic computers and computer-controlled robots, numerical control or program control systems, and the like.
  • The term "autoencoder" in the embodiments of this application refers to an unsupervised learning model that does not require labeled training samples. It generally consists of a three-layer network in which the number of neurons in the input layer equals the number of neurons in the output layer, and the number of neurons in the hidden layer is smaller than in the input and output layers.
  • During network training, each training sample produces a new signal at the output layer as it passes through the network, and the goal of learning is to make the output signal as similar as possible to the input signal.
  • After the autoencoder is trained, it can be viewed as two parts: the input layer and the hidden layer, which can be used to compress a signal; and the hidden layer and the output layer, which can restore the compressed signal.
  • The term "timeslot (Timeslot, TS)" in the embodiments of this application is the minimum unit of information transfer in circuit switching, a portion of the serially multiplexed timeslot information dedicated to a single channel, that is, a time slice in time division multiplexing (TDM).
  • The term "bandwidth" in the embodiments of this application generally refers to the width of the frequency band occupied by a signal; when used to describe a channel, bandwidth refers to the maximum frequency band of a signal that can effectively pass through the channel. For analog signals, bandwidth (also called frequency width) is measured in hertz (Hz). For digital signals, bandwidth refers to the amount of data the link can carry per unit of time. Since digital signals are transmitted by modulating analog signals, the bandwidth of a digital channel is generally described directly by the baud rate or symbol rate in order to distinguish it from analog bandwidth.
  • The sigmoid function in the embodiments of this application is often used as a threshold function of neural networks because it is monotonically increasing and its inverse function is also monotonically increasing; it maps variables to values between 0 and 1.
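  • As a quick illustration of the activation functions mentioned above (not part of the original disclosure; the function names below are generic, not names used by the application), the following Python snippet shows how sigmoid maps values into (0, 1) and tanh into (-1, 1):

```python
import math

def sigmoid(z):
    # Maps any real value into the open interval (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def tanh(z):
    # Maps any real value into the open interval (-1, 1).
    return math.tanh(z)

print(sigmoid(0.0), sigmoid(5.0), sigmoid(-5.0))  # 0.5, ~0.993, ~0.007
print(tanh(0.0), tanh(2.0), tanh(-2.0))           # 0.0, ~0.964, ~-0.964
```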
  • In an actual network, the collection of the network state depends on the actual transmission state; for example, when a file is being downloaded, the download speed can be treated approximately as the current download bandwidth. However, in many scenarios the network does not transmit data continuously, and when no data is being transmitted the state of the network cannot be known; that is, the record of the historical network state is missing.
  • a typical scenario is to watch a short video, as shown in Figure 1:
  • The user starts watching a short video at time t0, and by t1 the video has been downloaded; during t0 to t1, information such as the network bandwidth and delay can be obtained.
  • The user's viewing may then continue until t2, at which point the video has just finished playing; during t1 to t2 the network downloads no data, so no network state information can be obtained.
  • During t2 to t3 the user may keep watching the video, but it is read from the local cache and replayed, so the network again downloads no data.
  • Finally, before the next viewing, the user may remain idle for a while, i.e., t3 to t4. Overall, in the period t0 to t4, data is downloaded only during t0 to t1 and the network state can be known; during t1 to t4 the network state cannot be known. Estimating the bandwidth at time t4 under such missing historical network states is one of the challenges of network prediction.
  • This application therefore proposes a bandwidth prediction scheme based on an automatic coding machine: when historical network states are missing, the coding machine is used to fill in the missing network states, so that accurate bandwidth prediction can be performed even with incomplete historical network state data. The automatic coding machine here may also take the form of an autoencoder.
  • Fig. 2 is a flowchart of a bandwidth prediction method according to an exemplary embodiment. As shown in Fig. 2, the method includes the following steps.
  • In step S21, valid network states at multiple time points in a historical time period are obtained, and the missing network states at the multiple time points are set to a preset value.
  • In step S22, a first network state sequence is generated from the network states at the multiple time points in chronological order.
  • In step S23, the first network state sequence is input into a trained autoencoder, and the feature information between missing network states and real network states in the autoencoder is used to convert the missing network states in the first network state sequence into predicted network states, obtaining a second network state sequence output by the autoencoder; the network states at the multiple time points include the valid network states and/or the missing network states.
  • In step S24, the network bandwidth is predicted according to the second network state sequence.
  • The first network state sequence is generated in chronological order: the network state of an earlier time point is placed earlier in the sequence, and the network state of a later time point is placed later. For example, the network state at 12:01 precedes the network state at 12:02.
  • With the above scheme, the missing network states are filled in based on the autoencoder. In the embodiments of this application, the collected network states are not used directly to predict the network bandwidth; instead, the missing network states among the collected time points are first set to a preset value, a first network state sequence is generated from the network states at the multiple time points, and this sequence is processed by the autoencoder. The autoencoder can predict the parts of the first network state sequence that were set to the preset value, so the second network state sequence it outputs contains no missing network states, and predicting the network bandwidth from the second network state sequence is therefore more accurate.
  • In the embodiments of this application, a preset value of 0 is used as an example for the detailed description; setting the missing network states to the preset value can thus be regarded as zero padding.
  • In the embodiments of this application, a network state set is first built by collecting network states over different periods: a statistical time interval is set, and the network state is collected at that interval. Taking the download speed as the network state, the statistical time interval can be set to 1 s, the download speed is recorded once per second, and the recorded data form the network state set.
  • In an optional implementation, the moments at which the network state is missing can be determined from the data in the network state set (moments at which no download speed was collected are missing), and the historical time period is divided into multiple continuous sub-periods in which sub-periods with valid network states and sub-periods with missing network states alternate, as shown in Figure 3, where "download" denotes downloading and "idle" denotes no download: ts1~te1, ts2~te2, ts3~te3, and ts4~te4 denote sub-periods in which the network state is not missing, while te1~ts2, te2~ts3, and te3~ts4 denote sub-periods in which the network state is missing.
  • For example, using s(ts, te) to denote the network state (here the historical bandwidth) over the interval from ts to te, the network state of the historical time period is S = [s(ts1, te1), s(ts2, te2), ..., s(tsn, ten)], a sequence of network states; the sub-periods are not continuous in time (tsi >= te(i-1)), and the gaps between them are the parts with missing network states. The value of s(ts, te) is the average of the network states collected in the interval from ts to te.
  • When the network state is the download speed and the collection interval is 1 second (i.e., the current download speed is collected once per second), then if ts is 12:00 and te is 12:15, the download speed is collected 900 times in the 15 minutes from 12:00 to 12:15, and s(ts, te) is the average of those 900 download-speed samples.
  • Suppose that no download takes place in the 15 minutes after 12:15, so that no download speed is collected in that sub-period; this sub-period is then a sub-period with a missing network state, its network state can be set to the preset value, and the network states of the time points belonging to this sub-period are also the preset value. If downloading resumes from 12:30 to 12:32, the download speed is collected 120 times in that sub-period, and the network state of that sub-period is the average of the 120 collected download speeds.
  • In an optional implementation, the multiple continuous sub-periods need not strictly alternate between valid-network-state sub-periods and missing-network-state sub-periods. For example, ts1~te1 or te1~ts2 in Figure 3 can be further subdivided into multiple continuous sub-periods; alternatively, as shown in Figure 4, ts1~te1, ts2~te2, te2~ts3, and ts4~te4 denote sub-periods in which the network state is not missing, while te1~ts2, ts3~te3, and te3~ts4 denote sub-periods with missing network states.
  • In the embodiments of this application, in order to sample the entire historical network state at discrete, equal intervals in the time dimension, each sub-period can be divided into approximately equal intervals according to a set time interval, and the multiple time points are determined from the division result.
  • In an optional implementation, any sub-period is divided by the time slot TS to obtain multiple time points; for example, the time points determined after dividing ts1~te1 are ts1+TS, ts1+2*TS, ts1+3*TS, ..., ts1+k1*TS.
  • In an optional implementation, the network state at a time point with a valid network state is determined as follows: the network state of the sub-period to which the time point belongs is taken from the network state set, and that sub-period's network state is set as the network state of the time point. For example, the sub-period to which the time point 12:02:30 belongs is 12:02 to 12:03, so the network state of that minute is set as the network state of the time point 12:02:30.
  • In other words, for each valid sub-period, s(tsi+TS) = s(tsi+2*TS) = ... = s(tsi+ki*TS) = s(tsi, tei), where s(t) denotes the network state at time t (or the network state over the interval from t-TS to t) and ki is the largest integer satisfying tsi+ki*TS <= tei.
  • Fig. 5A is a schematic diagram of the discretization of a sub-period according to an embodiment of this application: ts1~te1 is divided by the time slot such that ts1+13TS < te1 and ts1+14TS > te1, so k1 = 13 and s(ts1+TS) = s(ts1+2*TS) = ... = s(ts1+13*TS) = s(ts1, te1). Fig. 5B is a schematic diagram of the discretization of another sub-period: ts2~te2 is divided such that ts2+11TS < te2 and ts2+12TS > te2, so k2 = 11 and s(ts2+TS) = s(ts2+2*TS) = ... = s(ts2+11*TS) = s(ts2, te2).
  • It should be noted that whether s(t) denotes the network state at time t or the network state over the interval from t-TS to t, the network state of s(t) is equivalent to the network state of the sub-period to which t belongs.
  • In an optional implementation, if tsi+ki*TS < tei, the moment tei is treated specially and also taken as a time point. For example, if ts1~te1 is 1.1 s and is divided every 200 ms, it can be divided into five 200 ms segments and one 100 ms segment, finally yielding six time points: ts1+200, ts1+400, ts1+600, ts1+800, ts1+1000, and te1, where the network states at these six time points are all equal to the network state of the sub-period ts1~te1.
  • In the embodiments of this application, the network state at a time point with a missing network state is set to the preset value; for example, the missing part is zero padded, i.e., the network state at such a time point is set to 0. The network states of the multiple time points are then sorted in chronological order to generate the first network state sequence; for example, the network state sequence of the historical time period ts1~ten can be expressed as S = [s(ts1+TS), s(ts1+2*TS), ..., s(ts1+k1*TS), 0, 0, ..., s(ts2+TS), ...].
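  • The following Python sketch (an illustration with hypothetical helper names, not code from the application) shows one way to build such a zero-padded first network state sequence from download-speed samples: each valid sub-period contributes its mean speed, every sub-period is discretized at interval TS, and time points falling in missing sub-periods take the preset value 0.

```python
from statistics import mean

TS = 200        # time-slot length in milliseconds
PRESET = 0.0    # preset value used for missing network states

def subperiod_state(samples):
    # s(ts, te): average of the download-speed samples collected in [ts, te].
    return mean(samples)

def first_state_sequence(subperiods):
    """subperiods: list of (ts, te, samples) with ts/te in milliseconds;
    samples is [] when the sub-period's network state is missing."""
    sequence = []
    for ts, te, samples in subperiods:
        state = subperiod_state(samples) if samples else PRESET
        k = (te - ts) // TS                           # largest k with ts + k*TS <= te
        n_points = k if ts + k * TS == te else k + 1  # keep te as an extra time point
        sequence.extend([state] * max(n_points, 1))
    return sequence

# Example: a 1100 ms download window, 600 ms idle, then a 400 ms download window.
S = first_state_sequence([
    (0, 1100, [2.1, 1.9, 2.0]),   # valid: mean download speed 2.0 -> 6 time points
    (1100, 1700, []),             # missing: zero padded -> 3 time points
    (1700, 2100, [3.0, 3.2]),     # valid: mean download speed 3.1 -> 2 time points
])
print(S)  # [2.0]*6 + [0.0]*3 + [3.1]*2
```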
  • In an optional implementation, the missing part of the network bandwidth is predicted by an automatic coding machine that includes an encoder and a decoder, as shown in Fig. 6A. The zero-padded network state sequence S is input to the encoder in the form of a vector or a matrix. In general, because of the missing network states, the dimension of S is relatively high. After passing through the encoder, a signal S' is obtained whose dimension is much lower than that of S; S' then enters the decoder, which performs dimensionality-increase processing to obtain S'', whose dimension is the same as that of S. The purpose is to predict the historical network states that are missing (replaced with zeros) in S. Going from S to S'' is the workflow of an automatic coding machine; in an actual implementation, a classic, mature coding machine can be used for model training.
  • It should be noted that the encoder and decoder in the automatic coding machine can also be implemented as neural networks; in that case both the encoder and the decoder in Fig. 6A are designed as neural networks, for example the encoder as a dimensionality-reduction neural network and the decoder as a dimensionality-increase neural network. Going from S to S' is the dimensionality-reduction process, from high dimension to low dimension; going from S' to S'' is the dimensionality-increase process, from low dimension back to high dimension, with S'' having the same dimension as S. Here increasing or reducing the dimension refers to increasing or decreasing the dimensionality of a vector.
  • For example, the originally input first network state sequence is a zero-padded vector of length 100000; after the dimensionality-reduction neural network, another vector of dimension only 100 is obtained, and a dimensionality-increase neural network then restores the dimension to 100000.
  • Optionally, when both the encoder and the decoder in Fig. 6A are designed as neural networks, after the first network state sequence is input to the autoencoder in the form of a vector, the dimensionality-reduction neural network in the autoencoder performs dimensionality-reduction processing on the first network state sequence to obtain an intermediate network state sequence; the dimensionality-increase neural network that follows the dimensionality-reduction neural network in the autoencoder then performs dimensionality-increase processing on the intermediate network state sequence, and the sequence obtained after the dimensionality-increase processing is taken as the second network state sequence.
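  • As a minimal sketch of such a structure (assuming PyTorch; the class name StateAutoencoder, layer sizes, and code dimension are illustrative assumptions, not the network disclosed in the application), the encoder can be a dimensionality-reducing fully connected stack and the decoder a dimensionality-increasing one, so that the output S'' has the same dimension as the input S:

```python
import torch
import torch.nn as nn

class StateAutoencoder(nn.Module):
    """Maps a zero-padded network state sequence S (dim N) to a low-dimensional
    code S' and back to a reconstructed sequence S'' with the same dimension as S."""
    def __init__(self, seq_len: int, code_dim: int = 100):
        super().__init__()
        self.encoder = nn.Sequential(        # dimensionality-reduction network: S -> S'
            nn.Linear(seq_len, 512), nn.ReLU(),
            nn.Linear(512, code_dim), nn.ReLU(),
        )
        self.decoder = nn.Sequential(        # dimensionality-increase network: S' -> S''
            nn.Linear(code_dim, 512), nn.ReLU(),
            nn.Linear(512, seq_len),
        )

    def forward(self, s):
        code = self.encoder(s)               # intermediate (low-dimensional) sequence
        return self.decoder(code)            # second network state sequence S''

# Usage: a batch of zero-padded sequences of length 1024.
model = StateAutoencoder(seq_len=1024, code_dim=100)
s = torch.rand(8, 1024)
print(model(s).shape)  # torch.Size([8, 1024]) -- same dimension as the input
```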
  • The parameters or functions of the dimensionality-increase neural network and the dimensionality-reduction neural network are determined during training.
  • In the embodiments of this application, the autoencoder determines the parameters of the neural network, such as the weights W and the biases b, through training; that is, in the training phase the parameters are adjusted continuously over repeated training rounds. After each round, the original first network state sequence S input to the autoencoder is compared with the second network state sequence S'' to determine whether the output of the autoencoder is correct and how large the error is; training is repeated and stops when S'' is as consistent with S as possible.
  • An optional implementation is that the autoencoder is trained in the following manner:
  • according to the collected network states at multiple time points, a third network state sequence is generated in chronological order; the valid network states at one or more time points in the third network state sequence are set to the preset value to obtain a fourth network state sequence; and the fourth network state sequence is used as the input feature while the third network state sequence is used as the output feature to train the autoencoder.
  • During training, the difference between the output sequence and the real sequence is used to iterate the model parameters continuously and find the internal relations among the values, that is, the feature information between the missing network states and the real network states. In actual use, the output values are computed with the trained model.
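  • A hedged sketch of that training procedure (PyTorch assumed; the masking ratio, optimizer, and learning rate are hypothetical choices, not values from the application): valid states in the third sequence are randomly replaced by the preset value 0 to form the fourth sequence, and the autoencoder is trained so that its output approaches the original third sequence.

```python
import torch
import torch.nn as nn

def train_autoencoder(model, third_sequences, epochs=50, mask_ratio=0.2, lr=1e-3):
    """third_sequences: tensor [num_samples, seq_len] of fully observed state sequences."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        # Build the fourth sequence: set some valid states to the preset value 0.
        mask = torch.rand_like(third_sequences) < mask_ratio
        fourth_sequences = third_sequences.masked_fill(mask, 0.0)
        # Reconstruct and compare with the real (third) sequence.
        output = model(fourth_sequences)
        loss = loss_fn(output, third_sequences)
        optimizer.zero_grad()
        loss.backward()      # error back propagation (BP)
        optimizer.step()
    return model

# model = StateAutoencoder(seq_len=1024)                     # from the earlier sketch
# model = train_autoencoder(model, torch.rand(256, 1024))    # hypothetical training data
```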
  • After training, the autoencoder in the embodiments of this application can predict the missing network states. The autoencoder has an internal hidden layer h that produces a code representing the input; the network can be regarded as consisting of an encoder represented by h = f(x) (which can be understood as the encoding layers performing dimensionality reduction) and a decoder r = g(h) that generates the reconstruction (the decoding layers performing dimensionality increase). The autoencoder is a neural network with three or more layers that encodes the input representation X into a new representation Y and then decodes Y back to X. It is an unsupervised learning algorithm that uses the error back propagation (BP) algorithm to train the network so that the output equals the input.
  • As shown in Fig. 6B, which is a schematic structural diagram of an autoencoder according to an embodiment of this application, the neural network includes an input layer, a hidden layer, and an output layer.
  • On the left, x1, x2, x3, ... are the input values of the operation and 1 represents the intercept. Each arrow line from an input to the middle circle carries a weight W (the number of weights equals the number of inputs plus the one intercept). The circle in the middle represents the activation function f(·); common choices are the sigmoid function (value range 0 to 1) and the hyperbolic tangent function tanh (value range -1 to 1). The final h(x) represents the output:
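  • The expression itself appears only as an image placeholder in the source text; assuming the standard single-neuron form it refers to (three inputs plus one intercept, as drawn in Fig. 6B), it can be written as:

$$ h_{W,b}(x) \;=\; f\big(W^{\top}x + b\big) \;=\; f\Big(\sum_{i=1}^{3} W_i\,x_i + b\Big) $$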
  • In the autoencoder, except for the final output layer, the data of every layer contains an additional bias node, namely the intercept; as shown in Fig. 6B, the intercepts of the input layer and the hidden layer are both 1. Every straight line with an arrow represents a weight (except those after the output layer), and the circles in the middle and on the right represent the activation applied to the weighted sum of all of their direct inputs.
  • In the embodiments of this application, training the autoencoder model mainly consists in finding patterns in the actual network state values at the time points whose network state is not missing and using them to predict the network states at the time points whose network state is missing; through repeated training, the prediction result is made as accurate as possible.
  • In the embodiments of this application, when the input X is the signal S, the output Y represents S'', where x1, x2, x3, ... represent the network states at the time points of the input network state sequence and the corresponding outputs represent the network states at the time points of the second network state sequence. For example, the input S contains x1 to x6 with the values 2, 3, 2, 0, 0, 1, and the output S'' contains the corresponding values 2, 3.01, 1.90, 1.50, 2.50, 1. The part of the input without missing network states is 2, 3, 2, 1, and the corresponding output is 2, 3.01, 1.90, 1; the output for the part with missing network states is 1.50, 2.50, i.e., 1.50 and 2.50 are the values predicted by the autoencoder for the missing network states.
  • In an optional implementation, the network state at the next moment is predicted from the second network state sequence S''; if the network state represents bandwidth, the network bandwidth at the next moment can be predicted from the second network state sequence.
  • Because the second network state sequence S'' is a complete historical state sequence, any existing bandwidth prediction method can subsequently be used to predict the bandwidth at the next moment, where the next moment refers to the moment immediately following the historical time period.
  • For example, if the historical time period is 12:00:00 to 12:15:00 and the download speed (network state) is collected once per second, the next moment refers to 12:15:01, and the download speed (network state) at 12:15:01 can be predicted from the network state sequence corresponding to the historical time period 12:00:00 to 12:15:00.
  • In the embodiments of this application, there are many ways to predict the network bandwidth from the second network state sequence; several are listed below. Prediction method 1: predict the network bandwidth according to the mean value of the network states in the second network state sequence.
  • For example, if the second network state sequence S'' is (1,2,3,4,5,6,7,8,9,10), and the time interval set when determining the time points is 200 ms, with the sequence corresponding to the historical time period 12:01 to 12:03, then the bandwidth at the next moment, i.e., at 12:04, is (1+2+3+4+5+6+7+8+9+10)/10 = 5.5.
  • More precisely, 5.5 is the prediction of the bandwidth at 12:03 plus 200 ms; since an equivalence is used when determining the network states of the time points while generating the network bandwidth sequence, the network state at 12:04 can be understood as equivalent to the network state at 12:03 plus 200 ms.
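  • A one-line sketch of prediction method 1 (illustrative only; the function name is hypothetical):

```python
def predict_bandwidth_mean(second_sequence):
    # Prediction method 1: the bandwidth at the next moment is the mean of
    # the states in the second network state sequence S''.
    return sum(second_sequence) / len(second_sequence)

print(predict_bandwidth_mean([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]))  # 5.5
```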
  • Prediction method 2: fit the network states in the second network state sequence with a regression prediction model to obtain a bandwidth curve, and predict the network bandwidth from the bandwidth curve.
  • Specifically, the network states at the time points of the sequence are fitted with a regression prediction model, and the fitted bandwidth curve is used to predict the network bandwidth at the next moment. Suppose the bandwidth curve is y = f(x), as shown in Fig. 7, where x represents the time and f(x) is a quadratic function; to predict the network bandwidth at some moment, that moment is taken as x and substituted into the expression, and the resulting y is the predicted bandwidth.
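  • A minimal sketch of prediction method 2, assuming NumPy and a quadratic polynomial fit y = f(x) as in Figure 7 (the actual regression model used by the application is not specified here):

```python
import numpy as np

def predict_bandwidth_regression(second_sequence, next_time, degree=2):
    # Fit y = f(x) over the time indices of S'' with a quadratic polynomial,
    # then evaluate the fitted bandwidth curve at the next moment.
    x = np.arange(len(second_sequence))
    coeffs = np.polyfit(x, second_sequence, degree)
    return float(np.polyval(coeffs, next_time))

s2 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
print(predict_bandwidth_regression(s2, next_time=len(s2)))  # bandwidth at the next time point
```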
  • Prediction method 3: input the second network state sequence into a bandwidth prediction neural network model to predict the network bandwidth.
  • In an optional implementation, the bandwidth prediction neural network model includes a convolutional neural network and a fully connected neural network, both used for feature extraction. The second network state sequence is first input into the convolutional neural network for feature extraction; the result of that feature extraction is then input into the fully connected neural network for a further round of feature extraction, and the network bandwidth is predicted through the fully connected neural network.
  • As shown in Fig. 6A, S'' is passed through convolutional neural networks (CNNs) for feature extraction and then through fully connected (FC) neural networks, which output the predicted network bandwidth.
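  • A hedged PyTorch sketch of prediction method 3 (layer counts, kernel widths, and the class name BandwidthPredictor are illustrative assumptions, not the architecture disclosed in the application): the second network state sequence is passed through a 1-D convolutional stage for feature extraction and then through fully connected layers that output the predicted bandwidth.

```python
import torch
import torch.nn as nn

class BandwidthPredictor(nn.Module):
    """CNN feature extraction over S'' followed by fully connected layers
    that output the predicted bandwidth at the next moment."""
    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(16),     # local averaging / secondary extraction
        )
        self.fc = nn.Sequential(
            nn.Linear(32 * 16, 64), nn.ReLU(),
            nn.Linear(64, 1),             # predicted bandwidth
        )

    def forward(self, s2):                # s2: [batch, seq_len]
        x = self.cnn(s2.unsqueeze(1))     # [batch, 32, 16]
        return self.fc(x.flatten(1)).squeeze(1)   # [batch]

predictor = BandwidthPredictor()
print(predictor(torch.rand(8, 1024)).shape)       # torch.Size([8])
```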
  • In general, the basic structure of a CNN includes two kinds of layers. The first is the feature extraction layer: the input of each neuron is connected to a local receptive field of the previous layer, and the local feature is extracted; once a local feature has been extracted, its positional relation to other features is also determined. The second is the feature mapping layer: each computing layer of the network is composed of multiple feature maps, each feature map is a plane, and all neurons on the plane have equal weights. The feature mapping structure uses a sigmoid function with a small influence-function kernel as the activation function of the convolutional neural network, so that the feature maps are shift-invariant. In addition, because the neurons on a mapping plane share weights, the number of free parameters of the network is reduced. Each convolutional layer in the convolutional neural network is followed by a computing layer for local averaging and secondary extraction; this characteristic double feature-extraction structure reduces the feature resolution.
  • In the embodiments of this application, a fully connected neural network is one in which, for layer n-1 and layer n, every node in layer n-1 is connected to all nodes in layer n; that is, when each node in layer n performs its computation, the input of its activation function is the weighted sum of all nodes in layer n-1.
  • It should be noted that the structure of the bandwidth prediction neural network listed in the embodiments of this application is only an example; any neural network that can realize bandwidth prediction is applicable to the embodiments of this application.
  • In the embodiments of this application, the discontinuous parts with missing network states are zero padded, the network state sequence is then generated by dividing the time into slots, the autoencoder is used to fill in the network states of the missing parts, and a learning-based method is then used for the final bandwidth prediction, specifically by attaching a convolutional neural network and a fully connected neural network after the autoencoder, where the convolutional neural network and the fully connected neural network may each have one or more layers, depending on the service's requirements for network depth and complexity.
  • When historical network states are missing, the autoencoder is used to complete them, and the convolutional neural network and the fully connected neural network are then used to estimate the bandwidth; this adapts to various transmission environments and improves the accuracy of bandwidth prediction.
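  • Putting the pieces together, an end-to-end inference pass under the above assumptions (reusing the hypothetical StateAutoencoder and BandwidthPredictor sketches from earlier, which are not the disclosed implementation) might look like:

```python
import torch

def predict_next_bandwidth(first_sequence, autoencoder, predictor):
    """first_sequence: zero-padded first network state sequence (list of floats);
    its length must match the sequence length the two models were built for."""
    s = torch.tensor(first_sequence, dtype=torch.float32).unsqueeze(0)  # [1, seq_len]
    with torch.no_grad():
        s2 = autoencoder(s)        # fill in missing states -> second sequence S''
        bandwidth = predictor(s2)  # CNN + fully connected network -> next-moment bandwidth
    return bandwidth.item()

# Hypothetical usage: bw = predict_next_bandwidth(padded_sequence, model, predictor)
```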
  • Fig. 8 is a flow chart showing a complete method for bandwidth prediction according to an exemplary embodiment, which specifically includes the following steps:
  • In step S81, the historical time period is divided into multiple continuous sub-periods according to whether the network state is missing, where sub-periods with valid network states and sub-periods with missing network states alternate among the multiple continuous sub-periods.
  • In step S82, for any sub-period, the time points in the sub-period are determined according to a set time interval.
  • In step S83, the network states at multiple time points in the historical time period are determined according to the network states of the sub-periods to which the time points belong, and the network states at time points with missing network states are set to the preset value, namely zero.
  • In step S84, a first network state sequence is generated in chronological order from the network states at the multiple time points in the historical time period.
  • In step S85, the network state sequence is processed by the autoencoder for dimensionality reduction and dimensionality increase, and the second network state sequence output by the autoencoder is determined.
  • In step S86, the network bandwidth is predicted according to the second network state sequence.
  • It should be noted that in step S86 there are many ways to predict the network bandwidth from the second network state sequence, such as the mean of the network state sequence, the bandwidth curve obtained from the regression prediction model, or the bandwidth prediction neural network model listed in the above embodiments; the specific processes of the different prediction methods are similar to the above embodiments and are not repeated here.
  • Fig. 9 is a block diagram showing a bandwidth prediction device according to an exemplary embodiment. Referring to Fig. 9, the device includes a collection unit 900, a generating unit 901, an adjustment unit 902, and a prediction unit 903.
  • the collection unit 900 is configured to perform acquisition of valid network status at multiple time points in the historical time period, and set the missing network status at the multiple time points as a preset value;
  • the generating unit 901 is configured to generate a first network state sequence from the network states at the multiple time points in chronological order;
  • the adjustment unit 902 is configured to input the first network state sequence into the trained autoencoder and to use the feature information between missing network states and real network states in the autoencoder to convert the missing network states in the first network state sequence into predicted network states, obtaining the second network state sequence output by the autoencoder; the network states at the multiple time points include the valid network states and/or the missing network states;
  • the prediction unit 903 is configured to perform prediction of the network bandwidth according to the second network state sequence.
  • In an optional implementation, the generating unit 901 is further configured to: before the network state sequence is generated in chronological order from the network states at multiple time points in the historical time period, divide the historical time period into multiple continuous sub-periods according to the network state, where sub-periods with valid network states and sub-periods with missing network states alternate among the multiple continuous sub-periods; and, for any sub-period, determine the time points in the sub-period according to the set time interval.
  • In an optional implementation, the generating unit 901 is further configured to determine the valid network state at a time point in the following manner: the network state of the sub-period to which the time point belongs is set as the valid network state of the time point.
  • the adjustment unit 902 is specifically configured to execute:
  • after the first network state sequence is input to the autoencoder in the form of a vector, dimensionality-reduction processing is performed on the first network state sequence according to the dimensionality-reduction function in the feature information to obtain an intermediate network state sequence, where dimensionality-reduction processing means reducing the dimension of the first network state sequence;
  • dimensionality-increase processing is performed on the intermediate network state sequence according to the dimensionality-increase function in the feature information to obtain the second network state sequence, where dimensionality-increase processing means increasing the dimension of the intermediate network state sequence.
  • the prediction unit 903 is specifically configured to execute:
  • predict the network bandwidth according to the mean value of the network states in the second network state sequence; or fit the network states in the second network state sequence with a regression prediction model to obtain a bandwidth curve and predict the network bandwidth from the bandwidth curve; or input the second network state sequence into the bandwidth prediction neural network model to predict the network bandwidth.
  • the bandwidth prediction neural network model includes a convolutional neural network and a fully connected neural network
  • the prediction unit 903 is specifically configured to execute:
  • the second network state sequence is input into the convolutional neural network for feature extraction; the result of the feature extraction by the convolutional neural network is input into the fully connected neural network, and the network bandwidth is predicted through the fully connected neural network.
  • Fig. 10 is a block diagram showing an electronic device 1000 according to an exemplary embodiment.
  • the device includes: a processor 1010; and a memory 1020 for storing instructions executable by the processor 1010;
  • wherein the processor 1010 is configured to execute:
  • obtaining valid network states at multiple time points in a historical time period, and setting the missing network states at the multiple time points to a preset value; generating a first network state sequence from the network states at the multiple time points in chronological order; inputting the first network state sequence into a trained autoencoder, and using the feature information between missing network states and real network states in the autoencoder to convert the missing network states in the first network state sequence into predicted network states, obtaining a second network state sequence output by the autoencoder, where the network states at the multiple time points include the valid network states and/or the missing network states;
  • predicting the network bandwidth according to the second network state sequence.
  • Optionally, the processor 1010 is further configured to: before the network state sequence is generated in chronological order from the network states at multiple time points in the historical time period, divide the historical time period into multiple continuous sub-periods according to the network state, where sub-periods with valid network states and sub-periods with missing network states alternate; and, for any sub-period, determine the time points in the sub-period according to the set time interval.
  • Optionally, the processor 1010 is further configured to determine the valid network state at a time point in the following manner: the network state of the sub-period to which the time point belongs is set as the valid network state of the time point.
  • processor 1010 is specifically configured to execute:
  • after the first network state sequence is input to the autoencoder in the form of a vector, dimensionality-reduction processing is performed on the first network state sequence according to the dimensionality-reduction function in the feature information to obtain an intermediate network state sequence, where dimensionality-reduction processing means reducing the dimension of the first network state sequence;
  • dimensionality-increase processing is performed on the intermediate network state sequence according to the dimensionality-increase function in the feature information to obtain the second network state sequence, where dimensionality-increase processing means increasing the dimension of the intermediate network state sequence.
  • processor 1010 is specifically configured to execute:
  • predict the network bandwidth according to the mean value of the network states in the second network state sequence; or fit the network states in the second network state sequence with a regression prediction model to obtain a bandwidth curve and predict the network bandwidth from the bandwidth curve; or input the second network state sequence into the bandwidth prediction neural network model to predict the network bandwidth.
  • the bandwidth prediction neural network model includes a convolutional neural network and a fully connected neural network
  • the processor 1010 is specifically configured to execute:
  • the second network state sequence is input into the convolutional neural network for feature extraction; the result of the feature extraction by the convolutional neural network is input into the fully connected neural network, and the network bandwidth is predicted through the fully connected neural network.
  • In an exemplary embodiment, a storage medium including instructions is also provided, for example, the memory 1020 including instructions; the instructions can be executed by the processor 1010 of the electronic device 1000 to complete the foregoing method.
  • Optionally, the storage medium may be a non-transitory computer-readable storage medium; for example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
  • The embodiments of the present application also provide a computer program product which, when run on an electronic device, causes the electronic device to execute any one of the bandwidth prediction methods described in the embodiments of the present application, or any method that any of those bandwidth prediction methods may involve.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Environmental & Geological Engineering (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

This application relates to a bandwidth prediction method and apparatus, an electronic device, and a storage medium, in the field of network technology, for solving the problem of inaccurate bandwidth prediction caused by missing historical network state data. The method of this application includes: obtaining valid network states at multiple time points in a historical time period, and setting the missing network states at the multiple time points to a preset value; generating a first network state sequence from the network states at the multiple time points in chronological order; using the feature information between missing network states and real network states in an autoencoder to convert the missing network states in the first network state sequence into predicted network states, obtaining a second network state sequence output by the autoencoder; and predicting the network bandwidth according to the second network state sequence. Because the missing parts of the network state are predicted by the autoencoder and the bandwidth at the next moment is predicted from the prediction result, the prediction is more accurate.

Description

带宽预测方法、装置、电子设备及存储介质
相关申请的交叉引用
本申请要求在2019年09月23日提交中国专利局、申请号为201910898740.4、申请名称为“带宽预测方法、装置、电子设备及存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及网络技术领域,尤其涉及一种带宽预测方法、装置、电子设备及存储介质。
背景技术
带宽预测是网络优化的基础,准确的带宽预测,对网络拥塞控制,应用层的码率自适应尤为重要。一般而言,带宽预测都是基于历史的网络状态数据,通过一定的建模进行预测,这类预测方式都是建立在网络历史状态数据已知的假设下进行的。
目前常见的一种预测方案为基于历史带宽均值的带宽预测,即历史一定时间内的带宽采样点的均值,作为下一时刻的带宽值。然而历史一定时间内的带宽采样点如果缺失,该方案无法生效;常见的另一种预测方案为:基于历史带宽进行回归预测的方案,即基于历史一定时间内的带宽采样点,通过回归预测模型进行拟合,得到带宽曲线的规律,进行预测下一时刻的带宽。然而当历史带宽采样点出现缺失时,回归模型的准确度会严重受影响。
综上所述,在实际系统中,由于各种原因,网络历史状态数据会出现缺失,在这种不完备的网络历史状态数据下,如果进行带宽预测,容易使得带宽预测不准确。
发明内容
本申请提供一种带宽预测方法、装置、电子设备及存储介质,以至少解决相关技术中由于网络历史状态数据缺失造成带宽预测不准确的问题。本申请的技术方案如下:
根据本申请实施例的第一方面,提供一种带宽预测方法,包括:
获取历史时间段中多个时间点中有效的网络状态,并将所述多个时间点中缺失的网络状态设置为预设值;
将所述多个时间点的网络状态按照时间先后顺序生成第一网络状态序列;
将所述第一网络状态序列输入已训练好的自编码器,利用所述自编码器中缺失的网络状态和真实的网络状态之间的特征信息,将所述第一网络状态序列中缺失的网络状态转换为预测的网络状态,得到所述自编码器输出的第二网络状态序列;其中所述多个时间点的网络状态包括所述有效的网络状态和/或所述缺失的网络状态;
根据所述第二网络状态序列对网络带宽进行预测。
根据本申请实施例的第二方面,提供一种带宽预测装置,包括:
采集单元,被配置为执行获取历史时间段中多个时间点中有效的网络状态,并将所述多个时间点中缺失的网络状态设置为预设值;
生成单元,被配置为执行将所述多个时间点的网络状态按照时间先后顺序生成第一网络状态序列;
调整单元,被配置为执行将所述第一网络状态序列输入已训练好的自编码器,利用所述自编码器中缺失的网络状态和真实的网络状态之间的特征信息,将所述第一网络状态序列中缺失的网络状态转换为预测的网络状态,得到所述自编码器输出的第二网络状态序列;其中所述多个时间点的网络状态包括所述有效的网络状态和/或所述缺失的网络状态;
预测单元,被配置为执行根据所述第二网络状态序列对网络带宽进行预测。
根据本申请实施例的第三方面,提供一种电子设备,包括:
处理器;
用于存储所述处理器可执行指令的存储器;
其中,所述处理器被配置为执行所述指令,以实现本申请实施例第一方面中任一项所述的带宽预测方法。
根据本申请实施例的第四方面,提供一种非易失性可读存储介质,当所述存储介质中的指令由电子设备的处理器执行时,使得所述电子设备能够执行本申请实施例第一方面中任一项所述的带宽预测方法。
根据本申请实施例的第五方面,提供一种计算机程序产品,当所述计算机程序产品在电子设备上运行时,使得所述电子设备执行实现本申请实施例上述第一方面以及第一方面任一可能涉及的方法。
附图说明
图1是根据一示例性实施例示出的一种观看视频时下载情况的示意图;
图2是根据一示例性实施例示出的一种带宽预测方法的示意图;
图3是根据一示例性实施例示出的一种一段历史时间内的网络状态示意图;
图4是根据一示例性实施例示出的另一种一段历史时间内的网络状态示意图;
图5A是根据一示例性实施例示出的一种子时段离散化的示意图;
图5B是根据一示例性实施例示出的另一种子时段离散化的示意图;
图6A是根据一示例性实施例示出的一种带宽预测的系统的示意图;
图6B是根据一示例性实施例示出的一种自编码器结构的示意图;
图7是根据一示例性实施例示出的一种带宽曲线的示意图;
图8是根据一示例性实施例示出的一种带宽预测的完整方法的流程图;
图9是根据一示例性实施例示出的一种带宽预测装置的框图;
图10是根据一示例性实施例示出的一种电子设备的框图。
具体实施方式
下面对文中出现的一些词语进行解释:
1、本申请实施例中术语“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。字符“/”一般表示前后关联对象是一种“或”的关系。
2、本申请实施例中术语“电子设备”是指由集成电路、晶体管、电子管等电子元器件组成,应用电子技术(包括)软件发挥作用的设备,包括电子计算机以及由电子计算机控制的机器人、数控或程控系统等。
3、本申请实施例中术语“自编码器(Autodencoder)”属于非监督学习,不需要对训练样本进行标记,一般由三层网络组成,其中输入层神经元数量与输出层神经元数量相等,隐含层神经元数量少于输入层和输出层。在网络训练期间,对每个训练样本,经过网络会在输出层产生一个新的信号,网络学习的目的就是使输出信号与输入信号尽量相似。自编码器训练结束之后,其可以由两部分组成,首先是输入层和隐含层,可以用这个网络来对信号进行压缩;其次是隐含层和输出层,可以将压缩的信号进行还原。
4、本申请实施例中术语“时隙(Timeslot,TS)”是电路交换汇总信息传送的最小单位,专用于某一个单个通道的时隙信息的串行自复用的一个部分,是时分复用模式(Testing Data Management,TDM)中的一个时间片。
5、本申请实施例中术语“带宽”通常指信号所占据的频带宽度;在被用来描述信道时,带宽是指能够有效的通过该信道的信号的最大频带宽度。对于模拟信号而言,带宽又称为频宽,以赫兹(Hz)为单位。对于数字信号而言,带宽是指单位时间内链路能够通过的数据量。由于数字信号的传输是通过模拟信号的调制完成的,为了与模拟带宽进行区分,数字信道的带宽一般直接用波特率或符号率来描述。
6、本申请实施例中术语“Sigmoid函数”由于其单增以及反函数单增等性质,常被用作神经网络的阈值函数,将变量映射到0,1之间。
本申请实施例描述的应用场景是为了更加清楚的说明本申请实施例的技 术方案,并不构成对于本申请实施例提供的技术方案的限定,本领域普通技术人员可知,随着新应用场景的出现,本申请实施例提供的技术方案对于类似的技术问题,同样适用。其中,在本申请的描述中,除非另有说明,“多个”的含义。
例如,在实际网络中,网络状态的收集依赖于实际传输的状态,例如当下载一个文件,可以近似的将下载速度等效为当前的下载带宽。然而,存在很多场景,网络并不是一直不停的传输数据,在网络没有进行数据传输时,无法获知网络的状态,也就是对网络历史状态的记录出现了缺失。
一个典型的场景就是观看短视频,如图1所示:用户在t0时刻开始观看一个短视频,在t0-t1时刻,视频被下载完成,这段时间可以获取网络的带宽、延迟等信息。然后,用户的观看行为可能持续到t2,此时该视频刚好播放完成,但在t1~t2这段区间,网络并没有下载数据,因此无法获取网络的任何状态信息;在t2~t3时刻,用户可能继续观看该视频,但该视频是从本地缓存读取并进行轮播,这段时间区间,网络也没有下载数据。最后,在下一次观看前,用户可能还会停留一段时间,即t3~t4。总体而言,在t0~t4这段时间,只有t0~t1有数据下载,可以获知网络的状态,t1~t4这段时间,无法获知网络的状态,在这种缺失历史网络状态的前提下,如果估计t4时刻的带宽,则是网络预测面临的挑战之一。
因而本申请提出一种基于自动编码机的带宽预测方案,在历史网络状态缺失的情况下,通过自动编码机填补缺失的网络状态,在网络历史状态数据的情况下,进行准确的带宽预测,其中的自动编码机也可以是自编码器的形式。
图2是根据一示例性实施例示出的一种带宽预测方法的流程图,如图2所示,包括以下步骤。
在步骤S21中,获取历史时间段中多个时间点中有效的网络状态,并将所述多个时间点中缺失的网络状态设置为预设值;
在步骤S22中,将所述多个时间点的网络状态按照时间先后顺序生成第 一网络状态序列;
在步骤S23中,将所述第一网络状态序列输入已训练好的自编码器,利用所述自编码器中缺失的网络状态和真实的网络状态之间的特征信息,将所述第一网络状态序列中缺失的网络状态转换为预测的网络状态,得到所述自编码器输出的第二网络状态序列;其中所述多个时间点的网络状态包括所述有效的网络状态和/或所述缺失的网络状态;
在步骤S24中,根据所述第二网络状态序列对网络带宽进行预测。
其中,生成第一网络状态序列时是按照时间先后顺序,时间在前的时间点的网络状态在第一网络状态序列中的位置也在前,时间在后的时间点的网络状态在第一网络状态序列中的位置在后。例如,时间点12:01的网络状态在时间点12:02的网络状态之前。
通过上述方案,基于自编码器对缺失的网络状态进行填补,由于本申请实施例中对网络带宽进行预测时并未直接采用采集到的网络状态,而是将采集到的多个时间点中缺失的网络状态预先设置为了预设值,根据多个时间点的网络状态生成了第一网络状态序列,将第一网络状态序列通过自编码器进行了处理,通过自编码器可以实现对第一网络状态序列中设置为预设值的部分的预测,因而自编码器输出的第二网络状态序列中不会存在缺失网络状态的情况,所以采用第二网络状态序列进行网络带宽的预测更加准确。
在本申请实施例中以预设值为0为例进行详细介绍,将缺失的网络状态设置为预设值可以看作是补零。
在本申请实施例中,首先需要通过采集不同时段的网络状态构成网络状态集合,设置一个用于统计网络状态的统计时间间隔,并根据设置的统计时间间隔采集网络状态,以网络状态表示下载速度为例,则可设置统计时间间隔为1s,每隔一秒统计一次下载速度,最终由统计的数据组成网络状态集合。
在一种可选的实施方式中,根据网络状态集合中的数据则可确定是否缺失网络状态的时刻,其中没有采集到下载速度的时刻为缺失网络状态的时刻,进而将历史时间段划分为多个连续的子时段,其中连续的多个子时段中有效 的网络状态的子时段和缺失网络状态的子时段相互交替,如图3所示,其中download表示下载,idle表示空闲,其中:ts 1~te 1、ts 2~te 2、ts 3~te 3、ts 4~te 4表示未缺失网络状态的子时段,te 1~ts 2、te 2~ts 3、te 3~ts 4表示缺失网络状态的子时段。
例如,用符号s(ts,te)表示ts~te时刻这段区间的网络状态,这里的网络状态指历史带宽,则历史时间段(指一段历史区间)的网络状态为S=[s(ts 1,te 1),s(ts 2,te 2),...,s(ts n,te n)],是一系列网络状态构成的序列。
其中,对于
Figure PCTCN2020105292-appb-000001
都有ts i≥te i-1,即在时间上并不是连续的,而中间不连续的部分就是缺失网络状态的部分,对于
Figure PCTCN2020105292-appb-000002
都有
Figure PCTCN2020105292-appb-000003
这些区间的网络状态是缺失的。
在一种可选的实施方式中,s(ts,te)的取值为ts时刻至te时刻这段时间区间内所采集到的网络状态的均值,当网络状态为下载速度时,假设采集下载速度的时间间隔为1秒,即每隔1秒采集一次当前的下载速度,例如ts为12:00,te为12:15,则在12:00~12:15这15分钟内共采集下载速度900次,在s(ts,te)为采集的这900次下载速度的平均值。
假设,在12:15之后开始15分钟内未下载,即该子时段内未采集到下载速度,因而该子时段为缺失网络状态的子时段,因而可将子时段的网络状态设置为预设值,属于该子时段的时间点的网络状态也为预设值。
若在12:30~12:32内又重新开始下载,则在该子时段内共采集下载速度120次,该子时段的网络状态为采集到的120次下载速度的平均值。
在一种可选的实施方式中,多个连续的子时段也可以不是有效的网络状态的子时段和缺失网络状态的子时段相互交替的形式,例如将图3中的ts 1~te 1或te 1~ts 2都可以继续细分为多个连续的子时段;或者如图4所示,其中ts 1~te 1、ts 2~te 2、te 2~ts 3、ts 4~te 4表示未缺失网络状态的子时段,te 1~ts 2、ts 3~te 3、te 3~ts 4表示缺失网络状态的子时段。
在本申请实施例中,为了使整个历史网络状态在时间维度上实现离散等 间隔的采样,可以按照设定的时间间隔将子时段近似等间隔划分,根据划分结果确定多个时间点。
在一种可选的实施方式中,针对任意一个子时段按时隙TS划分,将子时段按TS划分得到多个时间点,例如:将ts 1~te 1划分后确定的时间点为ts 1+TS、ts 1+2*TS、ts 1+3*TS、…、ts 1+k 1*TS。
在一种可选的实施方式中,通过下列方式确定有效的网络状态的时间点的网络状态:
从网络状态集合中确定具有有效的网络状态的时间点所属的子时段的网络状态;将所属的子时段的网络状态设置为该时间点的网络状态。比如时间点12时02分30秒所属的子时段为12时02分至12时03分,则将这一分钟的网络状态设置为时间点12时02分30秒的网络状态。
例如,对于
Figure PCTCN2020105292-appb-000004
都有s(ts i,te i)可以等效为:
s(ts i+TS)=s(ts i+2*TS)=...=s(ts i+k i*TS)=s(ts i,te i)
其中,s(t)表示t时刻的网络状态,或者s(t)代表了t~TS到t时刻这段区间的网络状态,其中ki为满足ts i+k i*TS≤te i的最大整数。
图5A为本申请实施例示出的一种子时段离散化的示意图,将ts 1~te 1按时隙划分为如图5A所示,ts 1+13TS<te 1,ts 1+14TS>te 1,因而k 1=13:
s(ts 1+TS)=s(ts 1+2*TS)=...=s(ts 1+13*TS)=s(ts 1,te 1);
图5B为本申请实施例示出的另一种子时段离散化的示意图,将ts 2~te 2按时隙划分为如图5B所示,ts 2+11TS<te 2,ts 2+12TS>te 2,因而k 2=11:
s(ts 2+TS)=s(ts 2+2*TS)=...=s(ts 2+11*TS)=s(ts 2,te 2)。
需要说明的是,无论s(t)表示t时刻的网络状态还是t~TS到t时刻这段区间的网络状态,s(t)的网络状态都等效为s(t)所属的子时段的网络状态。
在一种可选的实施方式中,若ts i+k i*TS<te i,则将te 1时刻做特殊处理,将te 1时刻也作为一个时间点,例如ts 1~te 1为1.1s,假设每200ms划分 一次的话,则可划分为5个200ms以及1个100ms,最终得到6个时间点分别为ts 1+200,ts 1+400,ts 1+600,ts 1+800,ts 1+1000,te 1,其中这6个时间点的网络状态都等于ts 1~te 1这个子时段的网络状态。
在本申请实施例中,将缺失网络状态的时间点的网络状态设置为预设值,例如对缺失网络状态的部分补零,将缺失网络状态的时间点的网络状态设置为0。通过将多个时间点的网络状态按照时间先后的顺序排序后生成第一网络状态序列,例如历史时间段ts 1~te n这段时间的网络状态序列可表示为:
S=[s(ts 1+TS),s(ts 1+2*TS),...,s(ts 1+k 1*TS),0,0,...,s(ts 2+TS),...]
在一种可选的实施方式中,通过自动编码机对缺失部分的网络带宽进行预测,其中自动编码机包括编码器和解码器,如图6A所示,将补零后得到的网络状态序列S作为一个向量或者是矩阵等的形式输入给编码器(Encoder)。一般情况下,因为有缺失的网络状态,因此,S的维度会比较高。经过编码器后,得到信号S’,S’相比S而言,其维度会低很多;紧接着,S’进入解码器(Decoder)进行升维处理得到S”,S”的维度与S一致,其目的为将S中缺失的(用零代替)的历网络状态预测出来。从S到S”是一个自动编码机的工作流程,在实际实施时,可以采用经典的成熟的编码机进行模型训练。
需要说明的是,自动编码机中的编码器和解码器也可以通过神经网络的形式实现,将图6A中的编码器和解码器均设计为神经网络,例如编码器可以设计为降维神经网络,解码器设计为升维神经网络从S到S’是降维过程,从高维到低维;再从S’到S”是升维过程,即从低维回到高维,S”与S的维度一致。其中升维或降维指的是增大或减小向量的维度。
例如,原始输入的第一网络状态序列是一个带零的向量,长度为100000,经过降维神经网络,得到另一个向量,维度只有100,再继续通过一个升维神经网络,将维度恢复为100000。
可选的,将图6A中的编码器和解码器均设计为神经网络时,将第一网络状态序列以向量的形式输入自编码器后,通过自编码器中的降维神经网络对第一网络状态序列进行降维处理得到中间网络状态序列;然后再通过自编码 器中降维神经网络之后的升维神经网络对中间网络状态序列进行升维处理,并将升维处理后的网络状态序列作为第二网络状态序列。
其中,升维神经网络和降维神经网络中的参数或者函数是在训练的时候确定的。
在本申请的实施例中,自编码器需要通过训练的方式确定神经网络的参数,例如权重W、偏置b等,也就是在训练阶段,通过不断调整参数进行反复训练。其中每次训练之后,通过将输入自编码器的原始的第一网络状态序列S与第二网络状态序列S”进行对比,就知道自编码器输出对不对,以及误差有多大,然后反复训练,当S”尽可能与S保持一致时停止训练。
一种可选的实施方式为,自编码器是通过以下方式进行训练的:
根据采集到的多个时间点的网络状态,按照时间先后顺序生成第三网络状态序列;将第三网络状态序列中至少一个时间点的有效的网络状态设置为预设值后得到第四网络状态序列;将第四网络状态序列作为输入特征,将第三网络状态序列作为输出特征对自编码器进行训练。
在训练的时候,通过训练时输出序列与真实序列的差异,不断迭代模型参数,找到各个值之间的内在联系,也就是缺失的网络状态和真实的网络状态之间的特征信息。在实际用的时候,输出的值就是用模型算出来的。
在本申请实施例中的自编码器,经过训练后可以实现将缺失的网络状态预测出来。自编码器内部有一个隐含层h,可以产生编码(code)表示输入。该网络可以看作由两部分组成:一个由函数h=f(x)表示的编码器(可以理解为进行降维处理的编码层)和一个生成重构的解码器r=g(h)(可以理解为进行升维处理的解码层)。自编码器是一个3层或者大于3层的神经网络,将输入表达X编码为一个新的表达Y,然后再将Y解码回X。这是一个非监督学习算法,使用误差反向传播(Error Back Propagation,BP)算法来训练网络使得输出等于输入。
如图6B所示,为本申请实施例示出的一种自编码器的结构示意图,该神经网络包括输入层、隐含层以及输出层。其中左边的x 1,x 2,x 3,…是运算输 入值,1代表截距,接着每个由输入到中间的那个圆圈的各条箭头线都附有权重W(长度等于输入个数+截距个数1),中间的圆圈代表激活函数f(·),通常有s激活函数(sigmoid,取值范围0~1)和双曲正切函数(tanh,取值范围-1~1)。最后的h(x)代表输出:
Figure PCTCN2020105292-appb-000005
在自编码器中除了最后一层输出层,每一层的数据都包含额外的一项偏置结点,也就是截距,如图6B所示,输入层和隐含层的截距都为1。每一条带箭头的直线都代表一个权重(输出层后的除外)。中间和右侧的圈都代表对他的所有直接输入数据的加权和的激活处理。
在本申请实施例中,对自编码器模型进行训练的过程中,主要是根据未确实网络状态的时间点的实际网络状态的数值寻找规律,对缺失网络状态的时间点的网络状态进行预测,通过反复训练使得预测结果尽可能准确。
在本申请实施例中,当输入X为信号S时,输出Y表示S”,其中,x 1,x 2,x 3,…则可表示输入的网络状态序列中时间点的网络状态,
Figure PCTCN2020105292-appb-000006
则可表示第二网络状态序列中时间点的网络状态,例如,输入的S中包含x1~x6,分别为:2,3,2,0,0,1;输出的S”中包含
Figure PCTCN2020105292-appb-000007
分别为:2,3.01,1.90,1.50,2.50,1,其中未缺失网络状态的部分为2,3,2,1,未缺失网络状态的部分输出的S为2,3.01,1.90,1,缺失网络状态的部分的输出为1.50,2.50,即1.50,2.50是通过自编码器得到的对缺失网络状态的部分的预测值。
需要说明的是,上述列举的自编码器的结构只是一种距离说明,具体可参考实际情况调整等。
在一种可选的实施方式中,根据第二网络状态序列S”对下一时刻的网络状态进行预测,若网络状态表示带宽,则可根据第二网络状态序列对下一时刻的网络带宽进行预测。
由于从第二网络状态序列S”是一个完整的历史状态序列,因而后续可以采用任何已有的带宽预测方式,预测出下一时刻的带宽,其中下一时刻指针对历史时间段的下一时刻。
例如,历史时间段为12:00:00~12:15:00,若每隔一秒采集一次下载速度(网络状态),则下一时刻指12:15:01,根据历史时间段12:00:00~12:15:00对应的网络状态序列则可预测12:15:01的下载速度(网络状态)。
在本申请实施例中,根据所述第二网络状态序列对网络带宽进行预测的方式有很多种,下面列举几种:
预测方式一、根据所述第二网络状态序列中的网络状态的均值对所述网络带宽进行预测。
例如,第二网络状态序列S”为(1,2,3,4,5,6,7,8,9,10),假设确定时间点时设置的时间间隔为200ms,该网络状态序列所对应的历史时间段为12:01~12:03,则下一时刻的带宽指12:04时刻的带宽为:
(1+2+3+4+5+6+7+8+9+10)/10=5.5。
需要说明的是,更加准确地描述的话5.5表示的是12时3分200毫秒时刻的带宽的预测结果,由于在生成网络带宽序列过程中确定时间点的网络状态时有用到等效的方式,因而可以理解为12时4分的网络状态等效为12时3分200毫秒的网络状态。
预测方式二、通过回归预测模型对第二网络状态序列中的网络状态进行拟合得到带宽曲线,并根据带宽曲线对网络带宽进行预测。
具体的,将网络状态序列中各时间点的网络状态通过回归预测模型进行拟合,利用拟合得到带宽曲线都下一时刻的网络带宽进行预测,假设带宽曲线的表达式为y=f(x),如图7所示,其中x表示时刻,f(x)是一个二次函数,当需要预测某一时刻的网络带宽时,则将该时刻的时间看作x,代入表达式中求y,求得的y即预测得到的带宽。
预测方式三、将第二网络状态序列输入带宽预测神经网络模型对网络带宽进行预测。
在一种可选的事实方式中,带宽预测神经网络模型包括卷积神经网络和全连接神经网络,这两个神经网络都用于特征提取,首先将第二网络状态序列输入卷积神经网络进行特征提取;之后将卷积神经网络特征提取后的结果输入全连接神经网络进行再一次的特征提取,通过全连接神经网络对网络带宽进行预测。
如图6A所示,其中S”经过卷积神经网络(Convolutional Neural Networks,CNNs)进行特征提取之后,再通过全连接(Fully Connected,FCs)神经网络进行特征提取输出预测的网络带宽。
一般地,CNN的基本结构包括两层,其一为特征提取层,每个神经元的输入与前一层的局部接受域相连,并提取该局部的特征。一旦该局部特征被提取后,它与其它特征间的位置关系也随之确定下来;其二是特征映射层,网络的每个计算层由多个特征映射组成,每个特征映射是一个平面,平面上所有神经元的权值相等。特征映射结构采用影响函数核小的sigmoid函数作为卷积神经网络的激活函数,使得特征映射具有位移不变性。此外,由于一个映射面上的神经元共享权值,因而减少了网络自由参数的个数。卷积神经网络中的每一个卷积层都紧跟着一个用来求局部平均与二次提取的计算层,这种特有的两次特征提取结构减小了特征分辨率。
在本申请实施例中,全连接神经网络指全连接的神经网络,对n-1层和n层而言,n-1层的任意一个节点,都和第n层所有节点有连接。即第n层的每个节点在进行计算的时候,激活函数的输入是n-1层所有节点的加权。
需要说明的是,本申请实施例中所列举的带宽预测神经网络的结构也只是一种举例说明,任何一种可以实现带宽预测的神经网络都适用于本申请实施例。
在本申请实施例中,对不连续的缺失网络状态的部分进行补零,之后通过时隙的划分实现网络状态序列的生成,利用自编码器对缺失网络状态的部分的网络状态进行填补,之后利用学习的方式进行最终的带宽预测,具体通过在自编码器的后面接上一个卷积神经网络和一个全连接神经网络,其中卷 积神经网络和全连接神经网络,可以是一层或多层,取决于业务对网络深度和复杂度的要求。在历史网络状态缺失的情况下,通过自编码器进行补全,然后利用卷积神经网络和全连接神经网络进行带宽估计,能适应各种传输环境,提高带宽预测准确率。
图8是根据一示例性实施例示出的一种带宽预测的完整方法流程图,具体包括以下步骤:
在步骤S81中,将历史时间段按照是否缺失网络状态划分为多个连续的子时段,其中多个连续的子时段中有效的网络状态的子时段和缺失网络状态的子时段相互交替;
在步骤S82中,针对任意一个子时段,按照设定的时间间隔确定子时段中的时间点;
在步骤S83中,根据时间点所属的子时段的网络状态确定历史时间段中的多个时间点的网络状态,并将缺失网络状态的时间点的网络状态为预设值设置为零;
在步骤S84中,根据历史时间段中的多个时间点的网络状态,按照时间先后顺序生成第一网络状态序列;
在步骤S85中,将网络状态序列通过自编码器进行降维及升维处理后,确定自编码器输出的第二网络状态序列;
在步骤S86中,根据第二网络状态序列对网络带宽进行预测。
需要说明的是,在步骤S86中,根据第二网络状态序列对网络带宽进行预测的方式有很多种,例如上述实施例中列举的根据网络状态序列的均值或回归预测模型得到的带宽曲线或者带宽预测神经网络模型等,不同的额预测方式的具体过程与上述实施例类似,重复之处不再赘述。
图9是根据一示例性实施例示出的一种带宽预测装置框图。参照图9,该装置包括采集单元900,生成单元901,调整单元902和预测单元903。
采集单元900,被配置为执行获取历史时间段中多个时间点中有效的网络状态,并将多个时间点中缺失的网络状态设置为预设值;
生成单元901,被配置为执行获取历史时间段中多个时间点中有效的网络状态,并将多个时间点中缺失的网络状态设置为预设值;
调整单元902,被配置为执行将第一网络状态序列输入已训练好的自编码器,利用自编码器中缺失的网络状态和真实的网络状态之间的特征信息,将第一网络状态序列中缺失的网络状态转换为预测的网络状态,得到自编码器输出的第二网络状态序列;其中多个时间点的网络状态包括有效的网络状态和/或缺失的网络状态;
预测单元903,被配置为执行根据第二网络状态序列对网络带宽进行预测。
在一种可选的实施方式中,生成单元901还被配置为执行:
在根据历史时间段中的多个时间点的网络状态,按照时间先后顺序生成网络状态序列之前,将历史时间段按照网络状态划分为多个连续的子时段,其中多个连续的子时段中有效的网络状态的子时段和缺失网络状态的子时段相互交替;
针对任意一个子时段,按照设定的时间间隔确定子时段中的时间点。
在一种可选的实施方式中,生成单元901还被配置为执行通过下列方式确定时间点的有效的网络状态:
将时间点所属的子时段的网络状态设置为时间点的有效的网络状态。
在一种可选的实施方式中,调整单元902具体被配置为执行:
将第一网络状态序列以向量的形式输入自编码器后,根据特征信息中的降维函数对第一网络状态序列进行降维处理,得到中间网络状态序列,其中降维处理表示降低第一网络状态序列的维度;
根据特征信息中的升维函数对中间网络状态序列进行升维处理,得第二网络状态序列,其中升维处理表示增大中间网络状态序列的维度。
在一种可选的实施方式中,预测单元903具体被配置为执行:
根据第二网络状态序列中的网络状态的均值对网络带宽进行预测;或
通过回归预测模型对第二网络状态序列中的网络状态进行拟合得到带宽 曲线,并根据带宽曲线对网络带宽进行预测;或
将第二网络状态序列输入带宽预测神经网络模型对网络带宽进行预测。
在一种可选的实施方式中,带宽预测神经网络模型包括卷积神经网络和全连接神经网络;
预测单元903具体被配置为执行:
将第二网络状态序列输入卷积神经网络进行特征提取;
将卷积神经网络特征提取后的结果输入全连接神经网络,通过全连接神经网络对网络带宽进行预测。
关于上述实施例中的装置,其中各个单元执行请求的具体方式已经在有关该方法的实施例中进行了详细描述,此处将不做详细阐述说明。
图10是根据一示例性实施例示出的一种电子设备1000的框图,该装置包括:
处理器1010;
用于存储处理器1010可执行指令的存储器1020;
其中,处理器1010被配置为执行:
获取历史时间段中多个时间点中有效的网络状态,并将多个时间点中缺失的网络状态设置为预设值;
将多个时间点的网络状态按照时间先后顺序生成第一网络状态序列;
将第一网络状态序列输入已训练好的自编码器,利用自编码器中缺失的网络状态和真实的网络状态之间的特征信息,将第一网络状态序列中缺失的网络状态转换为预测的网络状态,得到自编码器输出的第二网络状态序列;其中多个时间点的网络状态包括有效的网络状态和/或缺失的网络状态;
根据第二网络状态序列对网络带宽进行预测。
可选的,处理器1010还被配置为执行:
在根据历史时间段中的多个时间点的网络状态,按照时间先后顺序生成网络状态序列之前,将历史时间段按照网络状态划分为多个连续的子时段,其中多个连续的子时段中有效的网络状态的子时段和缺失网络状态的子时段 相互交替;
针对任意一个子时段,按照设定的时间间隔确定子时段中的时间点。
可选的,处理器1010还被配置为执行通过下列方式确定时间点的有效的网络状态:
将时间点所属的子时段的网络状态设置为时间点的有效的网络状态。
可选的,处理器1010具体被配置为执行:
将第一网络状态序列以向量的形式输入自编码器后,根据特征信息中的降维函数对第一网络状态序列进行降维处理,得到中间网络状态序列,其中降维处理表示降低第一网络状态序列的维度;
根据特征信息中的升维函数对中间网络状态序列进行升维处理,得第二网络状态序列,其中升维处理表示增大中间网络状态序列的维度。
可选的,处理器1010具体被配置为执行:
根据第二网络状态序列中的网络状态的均值对网络带宽进行预测;或
通过回归预测模型对第二网络状态序列中的网络状态进行拟合得到带宽曲线,并根据带宽曲线对网络带宽进行预测;或
将第二网络状态序列输入带宽预测神经网络模型对网络带宽进行预测。
可选的,带宽预测神经网络模型包括卷积神经网络和全连接神经网络;
处理器1010具体被配置为执行:
将第二网络状态序列输入卷积神经网络进行特征提取;
将卷积神经网络特征提取后的结果输入全连接神经网络,通过全连接神经网络对网络带宽进行预测。
在示例性实施例中,还提供了一种包括指令的存储介质,例如包括指令的存储器1020,上述指令可由电子设备1000的处理器1010执行以完成上述方法。可选地,存储介质可以是非临时性计算机可读存储介质,例如,所述非临时性计算机可读存储介质可以是ROM、随机存取存储器(RAM)、CD~ROM、磁带、软盘和光数据存储设备等。
本申请实施例还提供一种计算机程序产品,当所述计算机程序产品在电 子设备上运行时,使得所述电子设备执行实现本申请实施例上述任意一项带宽预测方法或任意一项带宽预测方法任一可能涉及的方法。
本领域技术人员在考虑说明书及实践这里公开的发明后,将容易想到本申请的其它实施方案。本申请旨在涵盖本申请的任何变型、用途或者适应性变化,这些变型、用途或者适应性变化遵循本申请的一般性原理并包括本申请未公开的本技术领域中的公知常识或惯用技术手段。说明书和实施例仅被视为示例性的,本申请的真正范围和精神由下面的权利要求指出。
应当理解的是,本申请并不局限于上面已经描述并在附图中示出的精确结构,并且可以在不脱离其范围进行各种修改和改变。本申请的范围仅由所附的权利要求来限制。

Claims (14)

  1. A bandwidth prediction method, comprising:
    obtaining valid network states at multiple time points in a historical time period, and setting the missing network states at the multiple time points to a preset value;
    generating a first network state sequence from the network states at the multiple time points in chronological order;
    inputting the first network state sequence into a trained autoencoder, and using feature information between missing network states and real network states in the autoencoder to convert the missing network states in the first network state sequence into predicted network states, to obtain a second network state sequence output by the autoencoder, wherein the network states at the multiple time points comprise the valid network states and/or the missing network states;
    predicting a network bandwidth according to the second network state sequence.
  2. The bandwidth prediction method according to claim 1, further comprising, before the step of generating the network state sequence in chronological order from the network states at the multiple time points in the historical time period:
    dividing the historical time period into multiple continuous sub-periods according to the network state, wherein sub-periods with valid network states and sub-periods with missing network states alternate among the multiple continuous sub-periods;
    for any sub-period, determining the time points in the sub-period according to a set time interval.
  3. The bandwidth prediction method according to claim 2, wherein the valid network state at a time point is determined in the following manner:
    setting the network state of the sub-period to which the time point belongs as the valid network state of the time point.
  4. The bandwidth prediction method according to claim 1, wherein the step of inputting the first network state sequence into the trained autoencoder, using the feature information between missing network states and real network states in the autoencoder to convert the missing network states in the first network state sequence into real network states, and obtaining the second network state sequence output by the autoencoder comprises:
    after inputting the first network state sequence into the autoencoder in the form of a vector, performing dimensionality-reduction processing on the first network state sequence according to a dimensionality-reduction function in the feature information to obtain an intermediate network state sequence, wherein the dimensionality-reduction processing means reducing the dimension of the first network state sequence;
    performing dimensionality-increase processing on the intermediate network state sequence according to a dimensionality-increase function in the feature information to obtain the second network state sequence, wherein the dimensionality-increase processing means increasing the dimension of the intermediate network state sequence.
  5. The bandwidth prediction method according to claim 1, wherein the step of predicting the network bandwidth according to the second network state sequence comprises:
    predicting the network bandwidth according to the mean value of the network states in the second network state sequence; or
    fitting the network states in the second network state sequence with a regression prediction model to obtain a bandwidth curve, and predicting the network bandwidth according to the bandwidth curve; or
    inputting the second network state sequence into a bandwidth prediction neural network model to predict the network bandwidth.
  6. The bandwidth prediction method according to claim 5, wherein the bandwidth prediction neural network model comprises a convolutional neural network and a fully connected neural network;
    the step of inputting the second network state sequence into the bandwidth prediction neural network model to predict the network bandwidth comprises:
    inputting the second network state sequence into the convolutional neural network for feature extraction;
    inputting the result of the feature extraction by the convolutional neural network into the fully connected neural network, and predicting the network bandwidth through the fully connected neural network.
  7. A bandwidth prediction apparatus, comprising:
    a collection unit configured to obtain valid network states at multiple time points in a historical time period and set the missing network states at the multiple time points to a preset value;
    a generating unit configured to generate a first network state sequence from the network states at the multiple time points in chronological order;
    an adjustment unit configured to input the first network state sequence into a trained autoencoder and use feature information between missing network states and real network states in the autoencoder to convert the missing network states in the first network state sequence into predicted network states, to obtain a second network state sequence output by the autoencoder, wherein the network states at the multiple time points comprise the valid network states and/or the missing network states;
    a prediction unit configured to predict a network bandwidth according to the second network state sequence.
  8. An electronic device, comprising:
    a processor;
    a memory for storing instructions executable by the processor;
    wherein the processor is configured to execute:
    obtaining valid network states at multiple time points in a historical time period, and setting the missing network states at the multiple time points to a preset value;
    generating a first network state sequence from the network states at the multiple time points in chronological order;
    inputting the first network state sequence into a trained autoencoder, and using feature information between missing network states and real network states in the autoencoder to convert the missing network states in the first network state sequence into predicted network states, to obtain a second network state sequence output by the autoencoder, wherein the network states at the multiple time points comprise the valid network states and/or the missing network states;
    predicting a network bandwidth according to the second network state sequence.
  9. The electronic device according to claim 8, wherein the processor is further configured to execute:
    before the network state sequence is generated in chronological order from the network states at the multiple time points in the historical time period, dividing the historical time period into multiple continuous sub-periods according to the network state, wherein sub-periods with valid network states and sub-periods with missing network states alternate among the multiple continuous sub-periods;
    for any sub-period, determining the time points in the sub-period according to a set time interval.
  10. The electronic device according to claim 8, wherein the processor is further configured to determine the valid network state at a time point in the following manner:
    setting the network state of the sub-period to which the time point belongs as the valid network state of the time point.
  11. The electronic device according to claim 8, wherein the processor is specifically configured to execute:
    after inputting the first network state sequence into the autoencoder in the form of a vector, performing dimensionality-reduction processing on the first network state sequence according to a dimensionality-reduction function in the feature information to obtain an intermediate network state sequence, wherein the dimensionality-reduction processing means reducing the dimension of the first network state sequence;
    performing dimensionality-increase processing on the intermediate network state sequence according to a dimensionality-increase function in the feature information to obtain the second network state sequence, wherein the dimensionality-increase processing means increasing the dimension of the intermediate network state sequence.
  12. The electronic device according to claim 8, wherein the processor is specifically configured to execute:
    predicting the network bandwidth according to the mean value of the network states in the second network state sequence; or
    fitting the network states in the second network state sequence with a regression prediction model to obtain a bandwidth curve, and predicting the network bandwidth according to the bandwidth curve; or
    inputting the second network state sequence into a bandwidth prediction neural network model to predict the network bandwidth.
  13. The electronic device according to claim 12, wherein the bandwidth prediction neural network model comprises a convolutional neural network and a fully connected neural network;
    the processor is specifically configured to execute:
    inputting the second network state sequence into the convolutional neural network for feature extraction;
    inputting the result of the feature extraction by the convolutional neural network into the fully connected neural network, and predicting the network bandwidth through the fully connected neural network.
  14. A storage medium, wherein, when instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to execute the bandwidth prediction method according to any one of claims 1 to 6.
PCT/CN2020/105292 2019-09-23 2020-07-28 带宽预测方法、装置、电子设备及存储介质 WO2021057245A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20867417.6A EP4037256A4 (en) 2019-09-23 2020-07-28 BANDWIDTH PREDICTION METHOD AND APPARATUS, ELECTRONIC DEVICE AND STORAGE MEDIA
US17/557,445 US11374825B2 (en) 2019-09-23 2021-12-21 Method and apparatus for predicting bandwidth

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910898740.4 2019-09-23
CN201910898740.4A CN110474815B (zh) 2019-09-23 2019-09-23 带宽预测方法、装置、电子设备及存储介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/557,445 Continuation US11374825B2 (en) 2019-09-23 2021-12-21 Method and apparatus for predicting bandwidth

Publications (1)

Publication Number Publication Date
WO2021057245A1 true WO2021057245A1 (zh) 2021-04-01

Family

ID=68516643

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/105292 WO2021057245A1 (zh) 2019-09-23 2020-07-28 带宽预测方法、装置、电子设备及存储介质

Country Status (4)

Country Link
US (1) US11374825B2 (zh)
EP (1) EP4037256A4 (zh)
CN (1) CN110474815B (zh)
WO (1) WO2021057245A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114039870A (zh) * 2021-09-27 2022-02-11 河海大学 基于深度学习的蜂窝网络中视频流应用实时带宽预测方法
CN114389975A (zh) * 2022-02-08 2022-04-22 北京字节跳动网络技术有限公司 网络带宽预估方法、装置、系统、电子设备及存储介质
CN117560531A (zh) * 2024-01-11 2024-02-13 淘宝(中国)软件有限公司 一种带宽探测方法、装置、电子设备及存储介质

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110474815B (zh) * 2019-09-23 2021-08-13 北京达佳互联信息技术有限公司 带宽预测方法、装置、电子设备及存储介质
CN111277870B (zh) * 2020-03-05 2022-09-30 广州市百果园信息技术有限公司 带宽预测方法、装置、服务器及存储介质
CN111405319B (zh) * 2020-03-31 2021-07-23 北京达佳互联信息技术有限公司 带宽确定方法、装置、电子设备和存储介质
CN111698262B (zh) * 2020-06-24 2021-07-16 北京达佳互联信息技术有限公司 带宽确定方法、装置、终端及存储介质
CN112994944B (zh) * 2021-03-03 2023-07-25 上海海洋大学 一种网络状态预测方法
CN113489645B (zh) * 2021-07-08 2022-08-19 北京中交通信科技有限公司 一种基于卫星通信的数据链路聚合方法和路由器、服务器
CN114118567A (zh) * 2021-11-19 2022-03-01 国网河南省电力公司经济技术研究院 一种基于双通路融合网络的电力业务带宽预测方法
CN115577769B (zh) * 2022-10-10 2024-05-31 国网湖南省电力有限公司 一种基于双向神经网络自回归模型的量测数据拟合方法
CN116743635B (zh) * 2023-08-14 2023-11-07 北京大学深圳研究生院 一种网络预测与调控方法及网络调控系统
CN117041074B (zh) * 2023-10-10 2024-02-27 联通在线信息科技有限公司 Cdn带宽预测方法、装置、电子设备及存储介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101827014A (zh) * 2009-03-03 2010-09-08 华为技术有限公司 一种切换控制方法、网关服务器及通信网络
EP2701408A1 (en) * 2012-08-24 2014-02-26 La Citadelle Inzenjering d.o.o. Method and apparatus for managing a wireless network
CN109800483A (zh) * 2018-12-29 2019-05-24 北京城市网邻信息技术有限公司 一种预测方法、装置、电子设备和计算机可读存储介质
CN110400010A (zh) * 2019-07-11 2019-11-01 新华三大数据技术有限公司 预测方法、装置、电子设备以及计算机可读存储介质
CN110474815A (zh) * 2019-09-23 2019-11-19 北京达佳互联信息技术有限公司 带宽预测方法、装置、电子设备及存储介质
CN110798365A (zh) * 2020-01-06 2020-02-14 支付宝(杭州)信息技术有限公司 基于神经网络的流量预测方法及装置
CN111327441A (zh) * 2018-12-14 2020-06-23 中兴通讯股份有限公司 一种流量数据预测方法、装置、设备及存储介质

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
ATE336780T1 (de) * 1999-11-23 2006-09-15 Texas Instruments Inc Verschleierungsverfahren bei verlust von sprachrahmen
CN104050319B (zh) * 2014-06-13 2017-10-10 浙江大学 一种实时在线验证复杂交通控制算法的方法
CN104064023B (zh) * 2014-06-18 2016-12-07 银江股份有限公司 一种基于时空关联的动态交通流预测方法
US10491754B2 (en) * 2016-07-22 2019-11-26 Level 3 Communications, Llc Visualizing predicted customer bandwidth utilization based on utilization history
US10592368B2 (en) * 2017-10-26 2020-03-17 International Business Machines Corporation Missing values imputation of sequential data
US10846888B2 (en) * 2018-09-26 2020-11-24 Facebook Technologies, Llc Systems and methods for generating and transmitting image sequences based on sampled color information
CN109728939B (zh) * 2018-12-13 2022-04-26 杭州迪普科技股份有限公司 一种网络流量检测方法及装置
US20210012902A1 (en) * 2019-02-18 2021-01-14 University Of Notre Dame Du Lac Representation learning for wearable-sensor time series data
CN109785629A (zh) * 2019-02-28 2019-05-21 北京交通大学 一种短时交通流量预测方法

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101827014A (zh) * 2009-03-03 2010-09-08 华为技术有限公司 一种切换控制方法、网关服务器及通信网络
EP2701408A1 (en) * 2012-08-24 2014-02-26 La Citadelle Inzenjering d.o.o. Method and apparatus for managing a wireless network
CN111327441A (zh) * 2018-12-14 2020-06-23 中兴通讯股份有限公司 一种流量数据预测方法、装置、设备及存储介质
CN109800483A (zh) * 2018-12-29 2019-05-24 北京城市网邻信息技术有限公司 一种预测方法、装置、电子设备和计算机可读存储介质
CN110400010A (zh) * 2019-07-11 2019-11-01 新华三大数据技术有限公司 预测方法、装置、电子设备以及计算机可读存储介质
CN110474815A (zh) * 2019-09-23 2019-11-19 北京达佳互联信息技术有限公司 带宽预测方法、装置、电子设备及存储介质
CN110798365A (zh) * 2020-01-06 2020-02-14 支付宝(杭州)信息技术有限公司 基于神经网络的流量预测方法及装置

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4037256A4 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114039870A (zh) * 2021-09-27 2022-02-11 河海大学 基于深度学习的蜂窝网络中视频流应用实时带宽预测方法
CN114389975A (zh) * 2022-02-08 2022-04-22 北京字节跳动网络技术有限公司 网络带宽预估方法、装置、系统、电子设备及存储介质
CN114389975B (zh) * 2022-02-08 2024-03-08 北京字节跳动网络技术有限公司 网络带宽预估方法、装置、系统、电子设备及存储介质
CN117560531A (zh) * 2024-01-11 2024-02-13 淘宝(中国)软件有限公司 一种带宽探测方法、装置、电子设备及存储介质
CN117560531B (zh) * 2024-01-11 2024-04-05 淘宝(中国)软件有限公司 一种带宽探测方法、装置、电子设备及存储介质

Also Published As

Publication number Publication date
US20220116281A1 (en) 2022-04-14
US11374825B2 (en) 2022-06-28
CN110474815B (zh) 2021-08-13
EP4037256A1 (en) 2022-08-03
CN110474815A (zh) 2019-11-19
EP4037256A4 (en) 2022-11-02

Similar Documents

Publication Publication Date Title
WO2021057245A1 (zh) 带宽预测方法、装置、电子设备及存储介质
CN112668128B (zh) 联邦学习系统中终端设备节点的选择方法及装置
Retherford A theory of marital fertility transition
CN111629380B (zh) 面向高并发多业务工业5g网络的动态资源分配方法
CN102469103B (zh) 基于bp神经网络的木马事件预测方法
WO2020228796A1 (en) Systems and methods for wireless signal configuration by a neural network
CN110460880B (zh) 基于粒子群和神经网络的工业无线流媒体自适应传输方法
US11424963B2 (en) Channel prediction method and related device
CN112529153B (zh) 基于卷积神经网络的bert模型的微调方法及装置
CN113469325B (zh) 一种边缘聚合间隔自适应控制的分层联邦学习方法、计算机设备、存储介质
CN113518250B (zh) 一种多媒体数据处理方法、装置、设备及可读存储介质
CN106850289B (zh) 结合高斯过程与强化学习的服务组合方法
CN112733043B (zh) 评论推荐方法及装置
CN116225696B (zh) 用于流处理系统的算子并发度调优方法及装置
WO2023045565A1 (zh) 网络管控方法及其系统、存储介质
CN115481748A (zh) 一种基于数字孪生辅助的联邦学习新鲜度优化方法与系统
CN114039870A (zh) 基于深度学习的蜂窝网络中视频流应用实时带宽预测方法
CN111343006A (zh) 一种cdn峰值流量预测方法、装置及存储介质
CN116016223B (zh) 一种数据中心网络数据传输优化方法
CN114500561B (zh) 电力物联网网络资源分配决策方法、系统、设备及介质
CN117950879B (zh) 自适应云服务器分布方法、装置及计算机设备
CN115396328A (zh) 一种网络指标预测方法、装置及电子设备
CN116385059A (zh) 行为数据预测模型的更新方法、装置、设备及存储介质
CN117675917A (zh) 面向边缘智能的自适应缓存更新方法、装置及边缘服务器
CN116306801A (zh) 基于重要性的数据采集存储方法、装置、设备及介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20867417

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2020867417

Country of ref document: EP

Effective date: 20220425