CN108449286B - Network bandwidth resource allocation method and device - Google Patents


Info

Publication number
CN108449286B
Authority
CN
China
Prior art keywords
bandwidth
service data
onu
request
hidden layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810172531.7A
Other languages
Chinese (zh)
Other versions
CN108449286A (en
Inventor
张丽佳
忻向军
刘博
张琦
田清华
王拥军
刘铭
任建新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Posts and Telecommunications
Original Assignee
Beijing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications
Priority to CN201810172531.7A
Publication of CN108449286A
Application granted
Publication of CN108449286B
Legal status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/70: Admission control; Resource allocation
    • H04L 47/83: Admission control; Resource allocation based on usage prediction
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/14: Network analysis or design
    • H04L 41/145: Network analysis or design involving simulating, designing, planning or modelling of a network
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/14: Network analysis or design
    • H04L 41/147: Network analysis or design for predicting network behaviour
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/14: Network analysis or design
    • H04L 41/142: Network analysis or design using statistical or mathematical methods

Abstract

The embodiment of the invention provides a network bandwidth resource allocation method and device, applied to a controller in an SDN network architecture, comprising the following steps: using the acquired first bandwidth request information sent by the ONU in the previous period as the input of a preset neural network prediction model to acquire the predicted request bandwidth data of the ONU in the current period; and sending first bandwidth authorization information to the ONU according to the predicted request bandwidth data of the ONU in the current period. Because the service data in the first bandwidth request information sent by the ONU in the previous period is input into the neural network prediction model corresponding to its service data identification, the request bandwidth data of the ONU in the current period is predicted and the size of each kind of service data at the ONU can be predicted in a targeted manner before authorization information is issued to the ONU, which improves the accuracy of network bandwidth resource allocation.

Description

Network bandwidth resource allocation method and device
Technical Field
The present invention relates to the field of network resource management technologies, and in particular, to a method and an apparatus for allocating network bandwidth resources.
Background
An SDN (Software Defined Network) includes a server (Ser), a controller (CON), optical network units (ONUs), and an optical line terminal (OLT). A current network bandwidth resource allocation method in an SDN works as follows: an ONU receives service data sent by different user terminals and sends bandwidth request information to the CON according to the bandwidth the service data requires. After receiving the bandwidth request information, the CON allocates the fixed bandwidth configured for each priority class of service data, in order from the highest priority to the lowest, and sends bandwidth authorization information to the ONU, where the bandwidth authorization information contains the fixed bandwidth set for each class of service data. After receiving the bandwidth authorization information, the ONU transmits its data to the OLT according to that information, and the OLT forwards the data to the server.
First, under this allocation method the bandwidth that the CON allocates to the lowest-priority service data can be far smaller than the bandwidth that data actually requires, so low-priority service data may not be transmitted at all.
Second, the CON allocates a fixed bandwidth per priority class, while the service data transmitted by the user terminals, and hence the bandwidth it requires, changes in real time. The fixed bandwidth allocated according to the priority of each class of service data therefore often differs greatly from the bandwidth that service data actually needs. For both reasons, the accuracy of bandwidth resource allocation in the prior-art method is low.
Disclosure of Invention
The embodiment of the invention aims to provide a method and a device for allocating network bandwidth resources, so as to improve the accuracy of network bandwidth resource allocation. The specific technical scheme is as follows:
in a first aspect, an embodiment of the present invention provides a method for allocating network bandwidth resources, which is applied to a controller in a Software Defined Network (SDN) architecture, and includes:
using the acquired first bandwidth request information sent by the ONU in the previous period as the input of a preset neural network prediction model, and acquiring the predicted request bandwidth data of the ONU in the current period through the neural network prediction model; the first bandwidth request information includes: the service data identification and the request bandwidth data corresponding to the service data identification;
the neural network prediction models correspond to the service data identifications one to one; each neural network prediction model comprises: the mathematical operation relation between the request bandwidth corresponding to the service data identification and the predicted request bandwidth data of the ONU; the predicted request bandwidth data comprises: the service data identification and the predicted request bandwidth data corresponding to the service data identification;
sending first bandwidth authorization information to the ONU according to the predicted request bandwidth data of the ONU in the current period;
wherein the first bandwidth authorization information comprises: the service data identification, the predicted request bandwidth data corresponding to the service data identification, and the time slot size of the transmission channel occupied by the ONU in the transmission period.
Optionally, the preset neural network prediction model is obtained by pre-training through the following steps:
dividing all second bandwidth request information sent by the ONU and received before the current period into an occurred data set and a prediction set according to a preset time sequence and step length;
the occurred data set and the prediction set each comprise: the ONU identification, the service data identification, and the request bandwidth data corresponding to the service data identification;
wherein one ONU identification corresponds to three service data identifications;
normalizing the request bandwidth data corresponding to the service data in the occurred data set and the prediction set by using a preset normalization algorithm;
taking the request bandwidth data corresponding to the service data identification in the occurred data set after normalization as the input of an initial neural network prediction model;
wherein the initial neural network prediction model comprises preset parameters;
taking the request bandwidth corresponding to the service data in the prediction set as the training target of the initial neural network prediction model;
adjusting the parameters of the initial neural network prediction model according to the error function of the output layer;
and taking the initial neural network prediction model with the adjusted parameters as the preset neural network prediction model.
Optionally, the initial neural network prediction model is:
$$y_j = f\left(\sum_i W_{ij} x_i - \theta_j\right)$$
$$y_k = f\left(\sum_j W_{jk} y_j - \theta_k\right)$$
$$O_m = f\left(\sum_k W_{km} y_k - \theta_m\right)$$
$$f(x) = \frac{1}{1 + e^{-x}}$$
wherein f(x) represents the transfer function of each layer; x_i is the input of the input layer; y_j is the output of hidden layer 1; y_k is the output of hidden layer 2; O_m is the output of the output layer; W_ij is the weight from the input layer to hidden layer 1 and θ_j its threshold; W_jk is the weight from hidden layer 1 to hidden layer 2 and θ_k its threshold; W_km is the weight from hidden layer 2 to the output layer and θ_m its threshold.
Optionally, the adjusting, according to a preset error function, each parameter of the initial neural network prediction model includes:
adjusting the weight between each layer in the initial neural network prediction model according to the error function of the output layer and the weight adjusting function between each layer;
adjusting the threshold value between each layer in the initial neural network prediction model according to the error function of the output layer and the threshold value adjusting function between each layer;
the error function is:
$$E = \frac{1}{2}\sum_m (R_m - O_m)^2$$
$$\delta_m = (R_m - O_m)\,O_m(1 - O_m)$$
$$\delta_k = y_k(1 - y_k)\sum_m \delta_m W_{km}$$
$$\delta_j = y_j(1 - y_j)\sum_k \delta_k W_{jk}$$
wherein R_m represents the size of the requested bandwidth data corresponding to the service data identifier in the actual bandwidth request information of the ONU; O_m represents the size of the requested bandwidth data corresponding to the service data identifier in the predicted bandwidth request information; δ_m, δ_k and δ_j denote the error terms of the output layer, hidden layer 2 and hidden layer 1, respectively;
the weight adjustment function from the input layer to the hidden layer 1 is: wij(n+1)=Wij(n)+ηjδjxi
The weight adjustment function from hidden layer 1 to hidden layer 2 is: wjk(n+1)=Wjk(n)+ηkδkyj
The weight adjustment function from the hidden layer 2 to the output layer is: wkm(n+1)=Wkm(n)+ηmδmyk
The threshold adjustment function for the input layer to hidden layer 1 is: thetaj(n+1)=θj(n)+λjδj
The threshold adjustment function for hidden layer 1 to hidden layer 2 is: thetak(n+1)=θk(n)+λkδk
The threshold adjustment function from hidden layer 2 to output layer is: thetam(n+1)=θm(n)+λmδm
wherein η_j represents the weight learning rate from the input layer to hidden layer 1, η_k the weight learning rate from hidden layer 1 to hidden layer 2, and η_m the weight learning rate from hidden layer 2 to the output layer; n denotes the period and takes positive integer values; W_ij(n) and W_ij(n+1) are the weights from the input layer to hidden layer 1 before and after adjustment; W_jk(n) and W_jk(n+1) are the weights from hidden layer 1 to hidden layer 2 before and after adjustment; W_km(n) and W_km(n+1) are the weights from hidden layer 2 to the output layer before and after adjustment; λ_j, λ_k and λ_m represent the threshold learning rates from the input layer to hidden layer 1, from hidden layer 1 to hidden layer 2, and from hidden layer 2 to the output layer, respectively; θ_j(n) and θ_j(n+1) are the thresholds from the input layer to hidden layer 1 before and after adjustment; θ_k(n) and θ_k(n+1) are the thresholds from hidden layer 1 to hidden layer 2 before and after adjustment; θ_m(n) and θ_m(n+1) are the thresholds from hidden layer 2 to the output layer before and after adjustment.
Optionally, the step of using the obtained first bandwidth request information sent by the ONU in the previous period as an input of a preset neural network prediction model includes:
normalizing the acquired first bandwidth request information sent by the ONU in the previous period;
and taking the normalized first bandwidth request information as the input of a preset neural network prediction model.
Optionally, sending first bandwidth authorization information to the ONU according to the predicted request bandwidth data of the ONU in the current period includes:
performing reverse normalization processing on the predicted request bandwidth data of the ONU in the current period by using a preset reverse normalization algorithm;
taking the prediction request bandwidth data of the ONU in the current period after the reverse normalization processing as target prediction request bandwidth data;
judging whether the size of the requested bandwidth data corresponding to each service data identifier in the target prediction requested bandwidth data exceeds a preset bandwidth threshold corresponding to each service data identifier;
if the size of the requested bandwidth data corresponding to a service data identifier exceeds the preset bandwidth threshold corresponding to that service data identifier, marking the requested bandwidth data corresponding to that service data identifier as a heavy load;
if the size of the requested bandwidth data corresponding to a service data identifier does not exceed the preset bandwidth threshold corresponding to that service data identifier, marking the requested bandwidth data corresponding to that service data identifier as a light load;
sorting the requested bandwidth data corresponding to the service data identifiers in the target predicted request bandwidth data according to the preset priorities of the service data and the load marks corresponding to the service data identifiers, to obtain a sorting result of the requested bandwidth data corresponding to the service data identifiers;
wherein the sorting result of the requested bandwidth data corresponding to the service data identifiers is: the light load with the highest priority, the heavy load with the highest priority, the light load with the second priority, the light load with the lowest priority, the heavy load with the second priority, and the heavy load with the lowest priority;
and sending first bandwidth authorization information to the ONU according to the sequencing result of the request bandwidth data corresponding to the service data identifier.
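The load-marking and sorting steps above can be sketched as follows. The service identifiers, priorities, and thresholds are illustrative assumptions, and `GRANT_ORDER` simply encodes the sequence listed in the text; none of these names come from the patent itself.

```python
# Grant ordering for per-service predicted requests (illustrative sketch).
GRANT_ORDER = [
    ("high", "light"), ("high", "heavy"),
    ("mid", "light"), ("low", "light"),
    ("mid", "heavy"), ("low", "heavy"),
]

def tag_load(predicted, thresholds):
    # Mark each service's predicted request heavy or light against its
    # per-service preset bandwidth threshold
    return {
        sid: "heavy" if bw > thresholds[sid] else "light"
        for sid, bw in predicted.items()
    }

def grant_order(predicted, priorities, thresholds):
    # Sort service identifiers by (priority, load) per GRANT_ORDER
    loads = tag_load(predicted, thresholds)
    rank = {key: i for i, key in enumerate(GRANT_ORDER)}
    return sorted(predicted, key=lambda sid: rank[(priorities[sid], loads[sid])])

# Example: three services per ONU (e.g. voice, video, data)
predicted = {"voice": 3.0, "video": 9.0, "data": 2.0}
priorities = {"voice": "high", "video": "mid", "data": "low"}
thresholds = {"voice": 5.0, "video": 5.0, "data": 5.0}
order = grant_order(predicted, priorities, thresholds)
```

Here "voice" is a high-priority light load, "data" a low-priority light load, and "video" a mid-priority heavy load, so the grant order becomes voice, data, video.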
Optionally, after the step of sending the first bandwidth authorization information to the ONU according to the predicted request bandwidth data of the ONU in the current period, the method includes:
acquiring bandwidth data transmitted by an OLT in a current period as target transmission bandwidth data;
the target transmission bandwidth data is bandwidth data transmitted to the OLT by the ONU according to the received first bandwidth authorization information;
and updating the preset neural network prediction model according to the target transmission bandwidth data.
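The feedback step above might be sketched as below. The helper `train_step`, the bandwidth range `lo`/`hi`, and every other name here are hypothetical assumptions: the idea is only that the bandwidth the ONU actually transmitted is normalized the same way as the training data and fed back as a new training target.

```python
def feedback_update(train_step, window, transmitted, lo, hi):
    """Feed the target transmission bandwidth (what the ONU actually sent
    to the OLT in the current period) back into the prediction model as a
    fresh training sample. `train_step` is assumed to be a training step
    taking a normalized input window and a normalized target."""
    target = (transmitted - lo) / (hi - lo)   # same normalization as training
    return train_step(window, target)

# Dummy train_step that just records the normalized target it was given
seen = {}
def dummy_step(window, target):
    seen["target"] = target
    return target

feedback_update(dummy_step, [0.1, 0.2], transmitted=75.0, lo=0.0, hi=100.0)
```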
Optionally, after the step of sending the first bandwidth authorization information to the ONU according to the predicted request bandwidth data of the ONU in the current period, the method further includes:
sending second bandwidth authorization information to the ONU according to second bandwidth request information and the target transmission bandwidth data which are sent by the ONU in the next period and received in the current period;
wherein the second bandwidth authorization information includes: the service data identification, the predicted request bandwidth data corresponding to the service data identification, and the time slot size of the transmission channel occupied by the ONU in the transmission period.
In a second aspect, the present embodiment provides a network bandwidth resource allocation apparatus, which is applied to a controller in a software defined network SDN architecture, and includes:
the bandwidth prediction module is used for taking the acquired first bandwidth request information sent by the ONU in the previous period as the input of a preset neural network prediction model and acquiring the prediction request bandwidth data of the ONU in the current period through the neural network prediction model;
the first bandwidth request information includes: the service data identification and the request bandwidth data corresponding to the service data identification;
the neural network prediction models correspond to the service data identifications one to one; each neural network prediction model comprises: the mathematical operation relation between the request bandwidth corresponding to the service data identification and the predicted request bandwidth data of the ONU; the predicted request bandwidth data comprises: the service data identification and the predicted request bandwidth data corresponding to the service data identification;
the bandwidth authorization module is used for sending first bandwidth authorization information to the ONU according to the predicted request bandwidth data of the ONU in the current period;
wherein the first bandwidth authorization information comprises: the service data identification, the predicted request bandwidth data corresponding to the service data identification, and the time slot size of the transmission channel occupied by the ONU in the transmission period.
In another aspect of the present invention, there is also provided an electronic device, including a processor, a communication interface, a memory and a communication bus, where the processor, the communication interface, and the memory complete communication with each other through the communication bus;
a memory for storing a computer program;
and the processor is used for implementing any one of the above network bandwidth resource allocation methods when executing the program stored in the memory.
In yet another aspect of the present invention, there is also provided a computer-readable storage medium having stored therein instructions, which when run on a computer, cause the computer to perform a method of network bandwidth resource allocation as described in any one of the above.
In yet another aspect of the present invention, the present invention further provides a computer program product containing instructions, which when run on a computer, causes the computer to execute any one of the above described network bandwidth resource allocation methods.
The embodiment of the invention provides a network bandwidth resource allocation method and device, applied to a controller in an SDN network architecture. The acquired first bandwidth request information sent by the ONU in the previous period is used as the input of a preset neural network prediction model to acquire the predicted request bandwidth data of the ONU in the current period; the first bandwidth request information includes: the service data identification and the request bandwidth data corresponding to the service data identification; the neural network prediction models correspond to the service data identifications one to one; each neural network prediction model comprises: the mathematical operation relation between the request bandwidth corresponding to the service data identification and the predicted request bandwidth data of the ONU; the predicted request bandwidth data comprises: the service data identification and the predicted request bandwidth data corresponding to the service data identification. First bandwidth authorization information is then sent to the ONU according to the predicted request bandwidth data of the ONU in the current period; the first bandwidth authorization information comprises: the service data identification, the predicted request bandwidth data corresponding to the service data identification, and the time slot size of the transmission channel occupied by the ONU in the transmission period.
According to the scheme, the service data in the acquired first bandwidth request information sent by the ONU in the previous period is input into the neural network prediction model corresponding to the service data identification, the ONU prediction request bandwidth data in the current period is predicted, the size of each service data in the ONU can be predicted in a targeted manner, and then authorization information is issued to the ONU, so that the accuracy of network bandwidth resource allocation can be improved. Of course, not all of the advantages described above need to be achieved at the same time in the practice of any one product or method of the invention.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a network bandwidth resource allocation method according to an embodiment of the present invention;
FIG. 2 is a flow chart of training a neural network model according to an embodiment of the present invention;
fig. 3 is a flowchart illustrating sending a first bandwidth grant message according to an embodiment of the present invention;
fig. 4 is a structural diagram of a network bandwidth resource allocation apparatus according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention aims to solve the problem that the prior-art method of allocating fixed bandwidths according to the priorities of different service data achieves low accuracy of bandwidth resource allocation. It can be understood that different types of service data flow through the network, and different service data require different request bandwidths. Therefore, in this embodiment, the bandwidth request information is fed into the neural network model corresponding to each type of service data, the bandwidth request information of the current period is predicted, and bandwidth authorization information is issued to the ONU.
As shown in fig. 1, a method for allocating network bandwidth resources provided in an embodiment of the present invention is applied to a controller in a Software Defined Network (SDN) architecture, and includes the following steps:
s101, using the acquired first bandwidth request information sent by the ONU in the previous period as the input of a preset neural network prediction model, and acquiring the prediction request bandwidth data of the ONU in the current period through the neural network prediction model;
the first bandwidth request information includes: the service data identification and the request bandwidth data corresponding to the service data identification;
the neural network prediction models correspond to the service data identifications one to one; each neural network prediction model comprises: the mathematical operation relation between the request bandwidth corresponding to the service data identification and the predicted request bandwidth data of the ONU; the predicted request bandwidth data includes: the service data identification and the predicted request bandwidth data corresponding to the service data identification;
S102, sending first bandwidth authorization information to the ONU according to the predicted request bandwidth data of the ONU in the current period;
wherein the first bandwidth authorization information comprises: the service data identification, the prediction request bandwidth data corresponding to the service data identification and the time slot size of the transmission channel occupied by the ONU in the transmission period.
In this embodiment, before the ONU sends its request bandwidth data for the current period, the controller predicts the ONU's request bandwidth data for that period and issues the bandwidth authorization information in advance. This saves the round-trip time of sending request bandwidth data and receiving bandwidth authorization information, so the real-time performance of bandwidth allocation is improved. In addition, because neural network models corresponding one to one to the service data identifications are used as the bandwidth prediction models, every kind of service data can be transmitted, and the bandwidth of each kind of service data in the current period's request bandwidth information is predicted in a targeted manner before the authorization information is issued. This embodiment can therefore improve the accuracy of bandwidth resource allocation.
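The per-period flow of S101 and S102 might be sketched as below. The helper names, the per-service callable models, and the dictionary layout of the grant message are all illustrative assumptions, not the patent's actual message format.

```python
def controller_period(prev_requests, models, slot_size):
    """One controller period: S101 (predict) then S102 (grant).

    prev_requests maps each service data identifier to the bandwidth it
    requested in the previous period; models maps each identifier to its
    own prediction model (here just a callable), following the
    one-model-per-service design described above."""
    # S101: per-service prediction of this period's request bandwidth
    predicted = {sid: models[sid](bw) for sid, bw in prev_requests.items()}
    # S102: first bandwidth authorization information sent down to the ONU
    return {"predicted_request_bandwidth": predicted, "slot_size": slot_size}

# Illustrative run with trivial "models" that scale the last request by 10%
grant = controller_period(
    {"voice": 2.0, "video": 8.0, "data": 1.5},
    {sid: (lambda bw: bw * 1.1) for sid in ("voice", "video", "data")},
    slot_size=125e-6,
)
```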
Optionally, as shown in fig. 2, in this embodiment the preset neural network prediction model is trained in advance on the bandwidth request information acquired before the current period, which saves time when subsequently predicting the request bandwidth data of the ONU in the current period. The preset neural network prediction model is obtained by pre-training through the following steps:
S201, dividing all second bandwidth request information sent by the ONU and received before the current period into an occurred data set and a prediction set according to a preset time sequence and step length;
wherein the occurred data set and the prediction set each comprise: the ONU identification, the service data identification, and the request bandwidth data corresponding to the service data identification; one ONU identification corresponds to three service data identifications;
The preset time sequence orders the data from earliest to latest according to system time; the step length is set empirically, and in practice can be set to 100.
For example, assume there are sufficient records for ONU 1; with a step length of 100 they can be divided into groups (for example, 100 groups). The first group consists of the 1st to 100th records together with the 101st record: the 1st to 100th records form the occurred data set, and the 101st record forms the prediction set.
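The grouping described above can be sketched as a sliding window, assuming the records are already ordered by system time; the function name and `step` default are illustrative.

```python
def split_windows(records, step=100):
    """Split a time-ordered request sequence into (occurred, target) groups.

    Each group pairs `step` consecutive records (the occurred data set)
    with the next record (the prediction set), mirroring the
    records 1-100 / record 101 example above."""
    return [
        (records[i:i + step], records[i + step])
        for i in range(len(records) - step)
    ]

# 101 records with step 100 yield exactly one group
groups = split_windows(list(range(101)), step=100)
```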
S202, normalizing the request bandwidth data corresponding to the service data in the occurred data set and the prediction set by using a preset normalization algorithm;
S203, taking the request bandwidth data corresponding to the service data identification in the occurred data set after normalization as the input of an initial neural network prediction model;
wherein the initial neural network prediction model comprises: presetting various parameters; the initial neural network prediction model is as follows:
$$y_j = f\left(\sum_i W_{ij} x_i - \theta_j\right)$$
$$y_k = f\left(\sum_j W_{jk} y_j - \theta_k\right)$$
$$O_m = f\left(\sum_k W_{km} y_k - \theta_m\right)$$
$$f(x) = \frac{1}{1 + e^{-x}}$$
wherein f(x) represents the transfer function of each layer; x_i is the input of the input layer; y_j is the output of hidden layer 1; y_k is the output of hidden layer 2; O_m is the output of the output layer; W_ij is the weight from the input layer to hidden layer 1 and θ_j its threshold; W_jk is the weight from hidden layer 1 to hidden layer 2 and θ_k its threshold; W_km is the weight from hidden layer 2 to the output layer and θ_m its threshold.
The initial neural network model of this embodiment uses the structure given above; however, any neural network model can be used as the initial model, with the preset neural network prediction model then trained on the occurred data set. This embodiment is not limited in this respect.
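A minimal NumPy sketch of the forward pass defined by the formulas above, assuming a sigmoid transfer function; the layer sizes and random weights are illustrative only.

```python
import numpy as np

def sigmoid(x):
    # Transfer function f(x) = 1 / (1 + e^(-x)) used by every layer
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, W_ij, theta_j, W_jk, theta_k, W_km, theta_m):
    """Forward pass of the two-hidden-layer prediction model:
    y_j = f(sum_i W_ij x_i - theta_j)   (hidden layer 1)
    y_k = f(sum_j W_jk y_j - theta_k)   (hidden layer 2)
    O_m = f(sum_k W_km y_k - theta_m)   (output layer)
    """
    y_j = sigmoid(W_ij @ x - theta_j)
    y_k = sigmoid(W_jk @ y_j - theta_k)
    o_m = sigmoid(W_km @ y_k - theta_m)
    return y_j, y_k, o_m

# Illustrative sizes: a 100-sample request window in, one prediction out
rng = np.random.default_rng(0)
n_in, n_h1, n_h2, n_out = 100, 16, 8, 1
W_ij = rng.normal(scale=0.1, size=(n_h1, n_in))
W_jk = rng.normal(scale=0.1, size=(n_h2, n_h1))
W_km = rng.normal(scale=0.1, size=(n_out, n_h2))
theta_j, theta_k, theta_m = np.zeros(n_h1), np.zeros(n_h2), np.zeros(n_out)

x = rng.random(n_in)          # normalized requested-bandwidth window in [0, 1]
_, _, prediction = forward(x, W_ij, theta_j, W_jk, theta_k, W_km, theta_m)
```

Because the transfer function is a sigmoid, the prediction falls in (0, 1), matching the normalized request bandwidth range discussed below.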
S204, taking the request bandwidth corresponding to the service data in the prediction set as a training target of the initial neural network prediction model;
s205, adjusting each parameter of the initial neural network prediction model according to the error function of the output layer;
and S206, taking the initial neural network prediction model after the parameters are adjusted as a preset neural network prediction model.
It will be appreciated that the normalization step maps the requested bandwidth data to values between 0 and 1, so that the output of the neural network prediction model also lies between 0 and 1. Normalizing the requested bandwidth data makes subsequent data processing simpler and speeds up the training of the neural network model. In this embodiment, all second bandwidth request information sent by the ONU and received before the current period is divided into an occurred data set and a prediction set, and the neural network model corresponding to each service data identifier is trained on the occurred data set in a targeted manner, which improves the accuracy of the per-service neural network models.
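One possible choice of the "preset normalization algorithm" and its inverse is plain min-max scaling; the patent does not specify the algorithm, and the bandwidth range `lo`/`hi` here is an assumption.

```python
def normalize(values, lo, hi):
    # Min-max normalization to [0, 1]; lo/hi are the assumed minimum and
    # maximum requested bandwidths seen in the occurred data set
    return [(v - lo) / (hi - lo) for v in values]

def denormalize(values, lo, hi):
    # Inverse normalization applied to model outputs before issuing grants
    return [v * (hi - lo) + lo for v in values]

raw = [10.0, 55.0, 100.0]
norm = normalize(raw, lo=10.0, hi=100.0)
```

The inverse transform corresponds to the "reverse normalization" step applied to the predicted request bandwidth data before the first bandwidth authorization information is sent.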
Optionally, S205 may be implemented according to the following steps:
the method comprises the following steps: adjusting the weight between each layer in the initial neural network prediction model according to the error function of the output layer and the weight adjusting function between each layer;
step two: adjusting the threshold value between each layer in the initial neural network prediction model according to the error function of the output layer and the threshold value adjusting function between each layer;
wherein the error function is:
E = (1/2) Σ_m (R_m − O_m)²

δ_m = (R_m − O_m) · O_m · (1 − O_m)

δ_k = y_k · (1 − y_k) · Σ_m δ_m · W_km

δ_j = y_j · (1 − y_j) · Σ_k δ_k · W_jk
wherein R_m represents the size of the requested bandwidth data corresponding to the service data identifier in the actual bandwidth request information of the ONU, and O_m represents the size of the requested bandwidth data corresponding to the service data identifier in the predicted bandwidth request information;
the weight adjustment function from the input layer to hidden layer 1 is: W_ij(n+1) = W_ij(n) + η_j · δ_j · x_i

The weight adjustment function from hidden layer 1 to hidden layer 2 is: W_jk(n+1) = W_jk(n) + η_k · δ_k · y_j

The weight adjustment function from hidden layer 2 to the output layer is: W_km(n+1) = W_km(n) + η_m · δ_m · y_k

The threshold adjustment function from the input layer to hidden layer 1 is: θ_j(n+1) = θ_j(n) + λ_j · δ_j

The threshold adjustment function from hidden layer 1 to hidden layer 2 is: θ_k(n+1) = θ_k(n) + λ_k · δ_k

The threshold adjustment function from hidden layer 2 to the output layer is: θ_m(n+1) = θ_m(n) + λ_m · δ_m
Wherein η_j represents the weight learning rate from the input layer to hidden layer 1, η_k represents the weight learning rate from hidden layer 1 to hidden layer 2, and η_m represents the weight learning rate from hidden layer 2 to the output layer; n represents the period and takes positive integer values; W_ij(n) represents the weight from the input layer to hidden layer 1 before adjustment and W_ij(n+1) the weight after adjustment; W_jk(n) represents the weight from hidden layer 1 to hidden layer 2 before adjustment and W_jk(n+1) the weight after adjustment; W_km(n) represents the weight from hidden layer 2 to the output layer before adjustment and W_km(n+1) the weight after adjustment; λ_j represents the threshold learning rate from the input layer to hidden layer 1, λ_k the threshold learning rate from hidden layer 1 to hidden layer 2, and λ_m the threshold learning rate from hidden layer 2 to the output layer; θ_j(n) represents the threshold from the input layer to hidden layer 1 before adjustment and θ_j(n+1) the threshold after adjustment; θ_k(n) represents the threshold from hidden layer 1 to hidden layer 2 before adjustment and θ_k(n+1) the threshold after adjustment; θ_m(n) represents the threshold from hidden layer 2 to the output layer before adjustment and θ_m(n+1) the threshold after adjustment.
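One training step under these update rules can be sketched in Python as follows. This is an illustrative sketch, not the patent's implementation: the sigmoid transfer function, the layer sizes, the random initialization, and the use of a single shared weight learning rate eta and threshold learning rate lam are assumptions; the delta terms follow standard backpropagation for a sigmoid network, and the weight and threshold updates follow the adjustment functions stated above.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class BPNet:
    """Four-layer network: input -> hidden 1 -> hidden 2 -> output (a sketch)."""

    def __init__(self, n_in, n_h1, n_h2, n_out, eta=0.5, lam=0.5):
        rnd = random.Random(0)
        mk = lambda r, c: [[rnd.uniform(-0.5, 0.5) for _ in range(c)] for _ in range(r)]
        self.W_ij = mk(n_in, n_h1)   # input -> hidden 1 weights
        self.W_jk = mk(n_h1, n_h2)   # hidden 1 -> hidden 2 weights
        self.W_km = mk(n_h2, n_out)  # hidden 2 -> output weights
        self.th_j = [0.0] * n_h1     # thresholds theta_j
        self.th_k = [0.0] * n_h2     # thresholds theta_k
        self.th_m = [0.0] * n_out    # thresholds theta_m
        self.eta = eta               # weight learning rate (eta_j = eta_k = eta_m assumed)
        self.lam = lam               # threshold learning rate (lambda_* assumed equal)

    def forward(self, x):
        self.x = x
        self.y_j = [sigmoid(sum(x[i] * self.W_ij[i][j] for i in range(len(x))) - self.th_j[j])
                    for j in range(len(self.th_j))]
        self.y_k = [sigmoid(sum(self.y_j[j] * self.W_jk[j][k] for j in range(len(self.y_j))) - self.th_k[k])
                    for k in range(len(self.th_k))]
        self.O_m = [sigmoid(sum(self.y_k[k] * self.W_km[k][m] for k in range(len(self.y_k))) - self.th_m[m])
                    for m in range(len(self.th_m))]
        return self.O_m

    def train_step(self, x, target):
        O = self.forward(x)
        # Output-layer error terms for a sigmoid transfer function.
        d_m = [(target[m] - O[m]) * O[m] * (1.0 - O[m]) for m in range(len(O))]
        d_k = [self.y_k[k] * (1.0 - self.y_k[k]) *
               sum(d_m[m] * self.W_km[k][m] for m in range(len(d_m)))
               for k in range(len(self.y_k))]
        d_j = [self.y_j[j] * (1.0 - self.y_j[j]) *
               sum(d_k[k] * self.W_jk[j][k] for k in range(len(d_k)))
               for j in range(len(self.y_j))]
        # W(n+1) = W(n) + eta * delta * layer input, per the adjustment functions.
        for i in range(len(x)):
            for j in range(len(d_j)):
                self.W_ij[i][j] += self.eta * d_j[j] * x[i]
        for j in range(len(self.y_j)):
            for k in range(len(d_k)):
                self.W_jk[j][k] += self.eta * d_k[k] * self.y_j[j]
        for k in range(len(self.y_k)):
            for m in range(len(d_m)):
                self.W_km[k][m] += self.eta * d_m[m] * self.y_k[k]
        # theta(n+1) = theta(n) + lambda * delta, following the stated rules.
        for j in range(len(d_j)):
            self.th_j[j] += self.lam * d_j[j]
        for k in range(len(d_k)):
            self.th_k[k] += self.lam * d_k[k]
        for m in range(len(d_m)):
            self.th_m[m] += self.lam * d_m[m]
        # Squared error E = 1/2 * sum_m (R_m - O_m)^2
        return 0.5 * sum((target[m] - O[m]) ** 2 for m in range(len(O)))
```

Repeating `train_step` over the normalized generated data set, with the prediction set as the training target, corresponds to S203 through S206.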
Optionally, in step S101, the obtained first bandwidth request information sent by the ONU in the previous period is used as an input of a preset neural network prediction model, and the method includes the following steps:
the method comprises the following steps: normalizing the acquired first bandwidth request information sent by the ONU in the previous period;
step two: and taking the normalized first bandwidth request information as the input of a preset neural network prediction model.
It is understood that the normalization method used in this embodiment is the same as prior-art normalization processing and will not be described in detail here.
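As a concrete illustration of these two steps and of the inverse mapping used later on the model's output, min-max scaling is one common choice. The patent does not fix a particular normalization formula, so the scaling range of 0 to 1000 M and the function names below are assumptions:

```python
# Min-max normalization sketch; the range [bw_min, bw_max] is a hypothetical
# per-service maximum, not a value taken from the patent.
def normalize(requested_bw, bw_min=0.0, bw_max=1000.0):
    """Map requested-bandwidth values (in M) into [0, 1]."""
    return [(b - bw_min) / (bw_max - bw_min) for b in requested_bw]

def denormalize(scaled, bw_min=0.0, bw_max=1000.0):
    """Inverse mapping applied to the model's output (as in S301 and S302)."""
    return [s * (bw_max - bw_min) + bw_min for s in scaled]
```

For example, `normalize([100.0])` yields values suitable as model input, and `denormalize` recovers bandwidth sizes in M from the model's 0-to-1 output.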
Optionally, as shown in fig. 3, S102 may be implemented according to the following steps:

S301, performing inverse normalization processing on the predicted request bandwidth data of the ONU in the current period by using a preset inverse normalization algorithm;

S302, taking the inverse-normalized predicted request bandwidth data of the ONU in the current period as the target prediction request bandwidth data;

S303, judging whether the size of the request bandwidth data corresponding to each service data identifier in the target prediction request bandwidth data exceeds the preset bandwidth threshold corresponding to that service data identifier;
The preset bandwidth threshold corresponding to a service data identifier is set empirically, for example by calculating the average size of all service data of that type actually transmitted by the network over a certain time period.
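The empirical threshold just described, an average of actually transmitted sizes over an observation window, can be sketched as follows; the field names and data layout are illustrative assumptions:

```python
from collections import defaultdict

def preset_thresholds(samples):
    """Compute per-service thresholds as the mean transmitted size.

    samples: iterable of (service_id, transmitted_size_in_M) pairs collected
    over the observation window.
    """
    totals = defaultdict(float)
    counts = defaultdict(int)
    for svc, size in samples:
        totals[svc] += size
        counts[svc] += 1
    return {svc: totals[svc] / counts[svc] for svc in totals}
```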
S304, if the size of the request bandwidth data corresponding to the service data identifier exceeds the preset bandwidth threshold corresponding to that service data identifier, setting the identifier of the request bandwidth data corresponding to the service data identifier as a heavy load;

S305, if the size of the request bandwidth data corresponding to the service data identifier does not exceed the preset bandwidth threshold corresponding to that service data identifier, setting the identifier of the request bandwidth data corresponding to the service data identifier as a light load;

S306, sequencing the request bandwidth data corresponding to the service data identifiers in the target prediction request bandwidth data according to the preset priority of the service data and the request bandwidth data identifier corresponding to each service data identifier, to obtain a sequencing result of the request bandwidth data corresponding to the service data identifiers;
the sequencing result of the request bandwidth data corresponding to the service data identifier comprises: the light load with the highest priority, the heavy load with the highest priority, the light load with the second priority, the light load with the lowest priority, the heavy load with the second priority and the heavy load with the lowest priority;
S307, sending the first bandwidth authorization information to the ONU according to the sequencing result of the request bandwidth data corresponding to the service data identifiers.
For example, the target prediction request bandwidth data includes three service data types: Expedited Forwarding (EF), Assured Forwarding (AF), and Best Effort (BE). EF-type service data has the highest priority, AF-type service data has the second priority, and BE-type service data has the lowest priority.
Assume the request bandwidth data corresponding to the EF-type service data in the target prediction request bandwidth data is 100 M while the preset bandwidth threshold corresponding to EF-type service data is 80 M; the request bandwidth data corresponding to the EF-type service data identifier is therefore marked as a heavy load. If the request bandwidth data corresponding to the AF-type service data is 50 M and the preset bandwidth threshold corresponding to AF-type service data is 60 M, the request bandwidth data corresponding to the AF-type service data identifier is marked as a light load. If the request bandwidth data corresponding to the BE-type service data is 70 M and the preset bandwidth threshold corresponding to BE-type service data is 60 M, the request bandwidth data corresponding to the BE-type service data identifier is marked as a heavy load. The sequencing result of the request bandwidth data corresponding to the service data identifiers in the target prediction request bandwidth data is then: EF heavy load, AF light load, BE heavy load, and the first bandwidth authorization information is sent to the ONU according to this sequencing result.
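The light/heavy marking and sequencing of S303 through S307 can be sketched in Python as follows. The threshold values encode the example above, the slot table encodes the six-position sequencing order stated earlier (highest-priority light load, highest-priority heavy load, second-priority light load, lowest-priority light load, second-priority heavy load, lowest-priority heavy load), and the function and variable names are illustrative assumptions:

```python
# Preset per-service thresholds in M, taken from the worked example.
THRESHOLD_M = {"EF": 80, "AF": 60, "BE": 60}

# Sequencing slots: priority-interleaved order as stated in the description.
SLOT = {("EF", "light"): 0, ("EF", "heavy"): 1,
        ("AF", "light"): 2, ("BE", "light"): 3,
        ("AF", "heavy"): 4, ("BE", "heavy"): 5}

def order_requests(predicted_m):
    """predicted_m: {service_id: denormalized predicted request size in M}.

    Returns (service_id, size, load_tag) tuples in grant order.
    """
    tagged = [(svc, bw, "heavy" if bw > THRESHOLD_M[svc] else "light")
              for svc, bw in predicted_m.items()]
    tagged.sort(key=lambda t: SLOT[(t[0], t[2])])
    return tagged
```

Applied to the worked example, `order_requests({"EF": 100, "AF": 50, "BE": 70})` yields EF heavy load, AF light load, BE heavy load, matching the sequencing result above.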
The output of the neural network model is the normalized predicted request bandwidth data of the ONU in the current period. In this embodiment, therefore, the predicted request bandwidth data output by the neural network is first denormalized; the request bandwidth data corresponding to each service data identifier is then marked as a light or heavy load, sequenced according to the priority and the request bandwidth data identifier corresponding to each service data identifier, and the first bandwidth authorization information is sent. This ensures that each type of service data in the ONU is transmitted normally and avoids service data failing to be transmitted because of a lower priority. At the same time, because this embodiment considers both the priority of the service data and the size of the request bandwidth data corresponding to each service data identifier when sending the first bandwidth authorization information to allocate bandwidth resources, the accuracy of bandwidth resource allocation is improved.
Optionally, after the step of S102, the method for allocating network bandwidth resources in this embodiment further includes:
the method comprises the following steps: acquiring the bandwidth data transmitted to the OLT in the current period as target transmission bandwidth data;
the target transmission bandwidth data is bandwidth data transmitted to the OLT by the ONU according to the received first bandwidth authorization information;
step two: and updating a preset neural network prediction model according to the target transmission bandwidth data.
In this embodiment, the bandwidth data transmitted to the OLT in the current period is used as the target transmission bandwidth data to update the preset neural network prediction model, making the model's predictions more accurate. The neural network prediction model is updated once per period, and using the updated model to predict the bandwidth data requested by the ONU offers stronger real-time performance.
Optionally, after the step of S102, the method for allocating network bandwidth resources in this embodiment further includes:
sending second bandwidth authorization information to the ONU according to the second bandwidth request information for the next period received from the ONU in the current period and the target transmission bandwidth data;
wherein the second bandwidth authorization information includes: the service data identification and the prediction request bandwidth data corresponding to the service data identification and the time slot size of the transmission channel which can be occupied by the ONU in the transmission period.
It can be understood that the controller continuously receives bandwidth request information sent by the ONU within one transmission period; during the process of receiving the first bandwidth request information of the current period and sending the first bandwidth authorization information, the controller also receives the second bandwidth request information for the next period sent by the ONU. According to the second bandwidth request information for the next period received in the current period and the target transmission bandwidth data, the controller adjusts and sends the second bandwidth authorization information to the ONU in real time, to prepare for data transmission in the next period.
For example, assume that the predicted request bandwidth data corresponding to a service data identifier in the bandwidth request information sent by the ONU in the current period is small, so the time slot in the transmission channel that the ONU can occupy is correspondingly small; the first bandwidth authorization information is sent to the ONU, and the ONU transmits bandwidth data to the OLT according to the first bandwidth authorization information. If the controller then receives, in the current period, second bandwidth request information for the next period in which the bandwidth corresponding to the service data is larger, the controller can adjust the bandwidth authorization information in real time, increasing the predicted request bandwidth data corresponding to the service identifier in the second bandwidth authorization information and the time slot of the transmission channel occupied by the ONU in the transmission period. This reduces the backlog of data at the ONU and improves the data transmission efficiency of the ONU.
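The real-time grant adjustment described above can be sketched as follows. This is an illustration only: the patent states that the grant grows when the next period's request grows, but gives no formula, so the proportional scaling policy, the slot cap, and all names below are assumptions rather than the patent's algorithm.

```python
def adjust_grant(prev_slot_us, next_request_m, transmitted_m, max_slot_us=125.0):
    """Return the time-slot size (microseconds) for the second bandwidth grant.

    prev_slot_us:   slot granted in the first bandwidth authorization
    next_request_m: requested bandwidth (M) in the next period's request
    transmitted_m:  target transmission bandwidth data (M) of the current period
    """
    if next_request_m > transmitted_m:
        # Scale the slot in proportion to the growth of the request,
        # bounded by the transmission-channel capacity (hypothetical cap).
        scale = next_request_m / max(transmitted_m, 1e-9)
        return min(prev_slot_us * scale, max_slot_us)
    return prev_slot_us
```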
As shown in fig. 4, a network bandwidth resource allocation apparatus provided in an embodiment of the present invention is applied to a controller in a software defined network SDN architecture, and includes:
the bandwidth prediction module 401 is configured to use the obtained first bandwidth request information sent by the ONU in the previous period as an input of a preset neural network prediction model, and obtain prediction request bandwidth data of the ONU in the current period through the neural network prediction model;
wherein the first bandwidth request information includes: the service data identification and the request bandwidth data corresponding to the service data identification;
the neural network prediction model corresponds to the service data identification one by one; the neural network prediction model comprises: the mathematical operation relation between the request bandwidth corresponding to the service data identification and the ONU predicted request bandwidth data; predicting the requested bandwidth data includes: service data identification and prediction request bandwidth data corresponding to the service data identification;
a bandwidth authorization module 402, configured to send first bandwidth authorization information to an ONU according to bandwidth data of a prediction request of the ONU in a current period;
wherein the first bandwidth authorization information comprises: the service data identification and the prediction request bandwidth data corresponding to the service data identification and the time slot size of the transmission channel which can be occupied by the ONU in the transmission period.
Optionally, the bandwidth prediction module 401 includes: a model training submodule;
the model training submodule comprises:
the set dividing subunit is used for dividing all the second bandwidth request information sent by the ONU and received before the current period into a generated data set and a prediction set according to a preset time sequence and step length;
wherein the generated data set and the prediction set each comprise: the ONU identifier, the service data identifier, and the request bandwidth data corresponding to the service data identifier;

wherein one ONU identifier corresponds to three service data identifiers;
the normalization subunit is used for performing normalization processing on the request bandwidth data corresponding to the service data in the generated data set and the prediction set by using a preset normalization algorithm;
the model input subunit is used for taking the request bandwidth data corresponding to the service data identifier in the generated data set after the normalization processing as the input of the initial neural network prediction model;
wherein the initial neural network prediction model comprises: presetting various parameters;
the model target subunit is used for taking the request bandwidth corresponding to the service data in the prediction set as a training target of the initial neural network prediction model;
the parameter adjusting subunit is used for adjusting each parameter of the initial neural network prediction model according to the error function of the output layer;
and the prediction model subunit is used for taking the initial neural network prediction model after the parameters are adjusted as a preset neural network prediction model.
Optionally, the prediction model subunit is specifically configured to: taking the initial neural network prediction model after adjusting various parameters as a preset neural network prediction model;
the initial neural network prediction model is as follows:
f(x) = 1 / (1 + e^(−x))

y_j = f( Σ_i W_ij · x_i − θ_j )

y_k = f( Σ_j W_jk · y_j − θ_k )

O_m = f( Σ_k W_km · y_k − θ_m )
wherein f(x) represents the transfer function of each layer; x_i is the input of the input layer; the output of hidden layer 1 is y_j; the output of hidden layer 2 is y_k; the output of the output layer is O_m; the weight from the input layer to hidden layer 1 is W_ij with threshold θ_j; the weight from hidden layer 1 to hidden layer 2 is W_jk with threshold θ_k; and the weight from hidden layer 2 to the output layer is W_km with threshold θ_m.
Optionally, the parameter adjusting subunit is specifically configured to:
adjusting the weight between each layer in the initial neural network prediction model according to the error function of the output layer and the weight adjusting function between each layer;
adjusting the threshold value between each layer in the initial neural network prediction model according to the error function of the output layer and the threshold value adjusting function between each layer;
the error function is:
E = (1/2) Σ_m (R_m − O_m)²

δ_m = (R_m − O_m) · O_m · (1 − O_m)

δ_k = y_k · (1 − y_k) · Σ_m δ_m · W_km

δ_j = y_j · (1 − y_j) · Σ_k δ_k · W_jk
wherein R_m represents the size of the requested bandwidth data corresponding to the service data identifier in the actual bandwidth request information of the ONU, and O_m represents the size of the requested bandwidth data corresponding to the service data identifier in the predicted bandwidth request information;
the weight adjustment function from the input layer to hidden layer 1 is: W_ij(n+1) = W_ij(n) + η_j · δ_j · x_i

The weight adjustment function from hidden layer 1 to hidden layer 2 is: W_jk(n+1) = W_jk(n) + η_k · δ_k · y_j

The weight adjustment function from hidden layer 2 to the output layer is: W_km(n+1) = W_km(n) + η_m · δ_m · y_k

The threshold adjustment function from the input layer to hidden layer 1 is: θ_j(n+1) = θ_j(n) + λ_j · δ_j

The threshold adjustment function from hidden layer 1 to hidden layer 2 is: θ_k(n+1) = θ_k(n) + λ_k · δ_k

The threshold adjustment function from hidden layer 2 to the output layer is: θ_m(n+1) = θ_m(n) + λ_m · δ_m
Wherein η_j represents the weight learning rate from the input layer to hidden layer 1, η_k represents the weight learning rate from hidden layer 1 to hidden layer 2, and η_m represents the weight learning rate from hidden layer 2 to the output layer; n represents the period and takes positive integer values; W_ij(n) represents the weight from the input layer to hidden layer 1 before adjustment and W_ij(n+1) the weight after adjustment; W_jk(n) represents the weight from hidden layer 1 to hidden layer 2 before adjustment and W_jk(n+1) the weight after adjustment; W_km(n) represents the weight from hidden layer 2 to the output layer before adjustment and W_km(n+1) the weight after adjustment; λ_j represents the threshold learning rate from the input layer to hidden layer 1, λ_k the threshold learning rate from hidden layer 1 to hidden layer 2, and λ_m the threshold learning rate from hidden layer 2 to the output layer; θ_j(n) represents the threshold from the input layer to hidden layer 1 before adjustment and θ_j(n+1) the threshold after adjustment; θ_k(n) represents the threshold from hidden layer 1 to hidden layer 2 before adjustment and θ_k(n+1) the threshold after adjustment; θ_m(n) represents the threshold from hidden layer 2 to the output layer before adjustment and θ_m(n+1) the threshold after adjustment.
Optionally, the prediction model subunit is specifically configured to:
normalizing the acquired first bandwidth request information sent by the ONU in the previous period;
and taking the normalized first bandwidth request information as the input of a preset neural network prediction model.
Optionally, the bandwidth authorization module 402 is specifically configured to:
performing inverse normalization processing on the predicted request bandwidth data of the ONU in the current period by using a preset inverse normalization algorithm;
taking the prediction request bandwidth data of the ONU in the current period after the reverse normalization processing as target prediction request bandwidth data;
judging whether the size of the requested bandwidth data corresponding to each service data identifier in the target prediction requested bandwidth data exceeds a preset bandwidth threshold corresponding to each service data identifier;
if the size of the requested bandwidth data corresponding to the service data identifier exceeds the preset bandwidth threshold corresponding to that service data identifier, setting the identifier of the request bandwidth data corresponding to the service data identifier as a heavy load;

if the size of the requested bandwidth data corresponding to the service data identifier does not exceed the preset bandwidth threshold corresponding to that service data identifier, setting the identifier of the request bandwidth data corresponding to the service data identifier as a light load;
according to the preset priority of the service data and the request bandwidth data identification corresponding to the service data identification, sequencing the request bandwidth data corresponding to the service data identification in the target prediction request bandwidth data to obtain a sequencing result of the request bandwidth data corresponding to the service data identification;
the sequencing result of the request bandwidth data corresponding to the service data identifier comprises: the light load with the highest priority, the heavy load with the highest priority, the light load with the second priority, the light load with the lowest priority, the heavy load with the second priority and the heavy load with the lowest priority;
and sending first bandwidth authorization information to the ONU according to the sequencing result of the request bandwidth data corresponding to the service data identifier.
Optionally, the bandwidth authorization module 402 includes:
the model updating submodule is used for acquiring the bandwidth data transmitted to the OLT in the current period as the target transmission bandwidth data;
the target transmission bandwidth data is bandwidth data transmitted to the OLT by the ONU according to the received first bandwidth authorization information;
and updating a preset neural network prediction model according to the target transmission bandwidth data.
Optionally, the bandwidth authorization module 402 includes:
the bandwidth authorization sub-module is used for sending second bandwidth authorization information to the ONU according to second bandwidth request information and target transmission bandwidth data which are sent by the ONU in the next period and received in the current period;
wherein the second bandwidth authorization information includes: the service data identification and the prediction request bandwidth data corresponding to the service data identification and the time slot size of the transmission channel which can be occupied by the ONU in the transmission period.
An embodiment of the present invention further provides an electronic device, as shown in fig. 5, which includes a processor 501, a communication interface 502, a memory 503 and a communication bus 504, where the processor 501, the communication interface 502 and the memory 503 complete mutual communication through the communication bus 504,
a memory 503 for storing a computer program;
the processor 501, when executing the program stored in the memory 503, implements the following steps:
the acquired first bandwidth request information sent by the ONU in the previous period is used as the input of a preset neural network prediction model, and the prediction request bandwidth data of the ONU in the current period is acquired through the neural network prediction model; the first bandwidth request information includes: the service data identification and the request bandwidth data corresponding to the service data identification;
the neural network prediction model corresponds to the service data identification one by one; the neural network prediction model comprises: the mathematical operation relation between the request bandwidth corresponding to the service data identification and the ONU predicted request bandwidth data; predicting the requested bandwidth data includes: service data identification and prediction request bandwidth data corresponding to the service data identification;
sending first bandwidth authorization information to the ONU according to the bandwidth data of the predicted request of the ONU in the current period;
wherein the first bandwidth authorization information comprises: the service data identification and the prediction request bandwidth data corresponding to the service data identification and the time slot size of the transmission channel which can be occupied by the ONU in the transmission period.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; but may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component.
In yet another embodiment of the present invention, a computer-readable storage medium is further provided, which stores instructions that, when executed on a computer, cause the computer to perform a network bandwidth resource allocation method as described in any of the above embodiments.
In yet another embodiment of the present invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform a method of network bandwidth resource allocation as described in any of the above embodiments.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the invention to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website site, computer, server, or data center to another website site, computer, server, or data center via wired (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the apparatus embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.

Claims (9)

1. A network bandwidth resource allocation method is applied to a controller in a Software Defined Network (SDN) architecture, and the method comprises the following steps:
the acquired first bandwidth request information sent by the ONU in the previous period is used as the input of a preset neural network prediction model, and the prediction request bandwidth data of the ONU in the current period is acquired through the neural network prediction model; the first bandwidth request information includes: the service data identification and the request bandwidth data corresponding to the service data identification;
the neural network prediction model corresponds to the service data identification one by one; the neural network prediction model comprises: the mathematical operation relation between the request bandwidth corresponding to the service data identification and the ONU predicted request bandwidth data; the predicting requested bandwidth data comprises: the service data identification and the prediction request bandwidth data corresponding to the service data identification;
sending first bandwidth authorization information to the ONU according to the bandwidth data of the predicted request of the ONU in the current period;
wherein the first bandwidth authorization information comprises: the service data identifier, the prediction request bandwidth data corresponding to the service data identifier, and the time slot size of the transmission channel that the ONU can occupy in the transmission period;
sending second bandwidth authorization information to the ONU according to second bandwidth request information and target transmission bandwidth data sent by the ONU in the next period received in the current period;
wherein the target transmission bandwidth data is the bandwidth data transmitted to the OLT in the current period, that is, the bandwidth data transmitted by the ONU to the OLT according to the received first bandwidth authorization information; and the second bandwidth authorization information comprises: the service data identifier, the prediction request bandwidth data corresponding to the service data identifier, and the time slot size of the transmission channel that the ONU can occupy in the transmission period.
2. The method of claim 1, wherein the predetermined neural network prediction model is pre-trained by:
dividing all second bandwidth request information sent by the ONU and received before the current period into a generated data set and a prediction set according to a preset time sequence and step length;
the generated data set and the prediction set each comprise: an ONU identification, a service data identification and the request bandwidth data corresponding to the service data identification;
wherein one ONU identification corresponds to three service data identifications;
normalizing the request bandwidth data corresponding to the service data identifications in the generated data set and the prediction set by using a preset normalization algorithm;
taking the request bandwidth data corresponding to the service data identifications in the generated data set after normalization processing as the input of an initial neural network prediction model;
wherein the initial neural network prediction model comprises: various preset parameters;
taking the request bandwidth data corresponding to the service data identifications in the prediction set as the training target of the initial neural network prediction model;
adjusting various parameters of the initial neural network prediction model according to the error function of the output layer;
and taking the initial neural network prediction model after each parameter is adjusted as a preset neural network prediction model.
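A minimal sketch of the claimed data preparation, assuming min-max normalization and a sliding window whose step length fixes the input size; `normalize`, `make_windows` and the toy series are illustrative, not from the patent:

```python
# Split a per-service request-bandwidth history into (input window, target)
# pairs after min-max normalization, as in the claimed training scheme.

def normalize(series):
    lo, hi = min(series), max(series)
    return [(x - lo) / (hi - lo) for x in series]   # scale to [0, 1]

def make_windows(series, step):
    """Pair each length-`step` input window with the next value as target."""
    return [(series[i:i + step], series[i + step])
            for i in range(len(series) - step)]

series = [2.0, 4.0, 3.0, 5.0, 6.0]                  # requested bandwidth history
pairs = make_windows(normalize(series), step=2)
print(len(pairs))  # 3
```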
3. The method of claim 2, wherein the initial neural network prediction model is:
f(x) = 1/(1 + e^(-x))

y_j = f( Σ_i W_ij·x_i − θ_j )

y_k = f( Σ_j W_jk·y_j − θ_k )

O_m = f( Σ_k W_km·y_k − θ_m )

wherein f(x) represents the transfer function of each layer; x_i is the input of the input layer; y_j is the output of hidden layer 1; y_k is the output of hidden layer 2; O_m is the output of the output layer; W_ij is the weight from the input layer to hidden layer 1 and θ_j the corresponding threshold; W_jk is the weight from hidden layer 1 to hidden layer 2 and θ_k the corresponding threshold; W_km is the weight from hidden layer 2 to the output layer and θ_m the corresponding threshold.
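Assuming a sigmoid transfer function f (an assumption: the claim names f but the published equations are image placeholders here), the claim-3 forward pass with two hidden layers and per-neuron thresholds can be sketched as:

```python
import math

# Three-layer BP forward pass: sigmoid transfer, thresholds subtracted
# inside f. List layouts below are illustrative.

def f(x):
    return 1.0 / (1.0 + math.exp(-x))            # transfer function

def layer(inputs, weights, thresholds):
    # weights[i][j] connects input i to neuron j: out_j = f(sum_i w_ij*x_i - theta_j)
    return [f(sum(w[j] * x for w, x in zip(weights, inputs)) - thresholds[j])
            for j in range(len(thresholds))]

def forward(x, w1, th1, w2, th2, w3, th3):
    y1 = layer(x, w1, th1)                       # hidden layer 1
    y2 = layer(y1, w2, th2)                      # hidden layer 2
    return layer(y2, w3, th3)                    # output layer O_m

out = forward([1.0], [[0.0]], [0.0], [[0.0]], [0.0], [[0.0]], [0.0])
print(out)  # [0.5]: with all-zero weights every layer outputs f(0)
```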
4. The method of claim 3, wherein adjusting parameters of the initial neural network prediction model according to the error function of the output layer comprises:
adjusting the weight between each layer in the initial neural network prediction model according to the error function of the output layer and the weight adjusting function between each layer;
adjusting the threshold value between each layer in the initial neural network prediction model according to the error function of the output layer and the threshold value adjusting function between each layer;
the error function is:
E = (1/2)·Σ_m (R_m − O_m)²

δ_m = (R_m − O_m)·O_m·(1 − O_m)

δ_k = y_k·(1 − y_k)·Σ_m δ_m·W_km

δ_j = y_j·(1 − y_j)·Σ_k δ_k·W_jk

wherein R_m represents the size of the request bandwidth data corresponding to the service data identification in the actual bandwidth request information of the ONU; O_m represents the size of the request bandwidth data corresponding to the service data identification in the predicted bandwidth request information;
the weight adjustment function from the input layer to the hidden layer 1 is: wij(n+1)=Wij(n)+ηjδjxi
The weight adjustment function from hidden layer 1 to hidden layer 2 is: wjk(n+1)=Wjk(n)+ηkδkyj
The weight adjustment function from the hidden layer 2 to the output layer is: wkm(n+1)=Wkm(n)+ηmδmyk
The threshold adjustment function for the input layer to hidden layer 1 is: thetaj(n+1)=θj(n)+λjδj
The threshold adjustment function for hidden layer 1 to hidden layer 2 is: thetak(n+1)=θk(n)+λkδk
The threshold adjustment function from hidden layer 2 to output layer is: thetam(n+1)=θm(n)+λmδm
Wherein, ηjInput layer to hidden layer 1 weight learning Rate, ηkRepresenting the weight learning rate of hidden layer 1 to hidden layer 2, ηmRepresents the learning rate of hidden layer 2 to output layer; n represents a period, and a positive integer is taken; wij(n) represents the weight, W, of the input layer before the adaptation to the hidden layer 1ij(n +1) represents the adjusted weight from the input layer to the hidden layer 1, Wjk(n) represents the weight before adjustment of hidden layer 1 to hidden layer 2, Wjk(n +1) represents implicitAdjusted weights, W, from layer 1 to hidden layer 2km(n +1) represents the weight before adjustment from hidden layer 2 to output layer, Wkm(n) represents the weight adjusted from the hidden layer 2 to the output layer; lambda [ alpha ]jThreshold learning rate, λ, of input layer to hidden layer 1kRepresenting a threshold learning rate, λ, from hidden layer 1 to hidden layer 2mRepresenting threshold learning rate, θ, from hidden layer 2 to output layerj(n) represents the threshold, θ, before the input layer is adjusted to the hidden layer 1j(n +1) represents the adjusted threshold, θ, for input layer to hidden layer 1k(n) represents the threshold, θ, before the alignment of hidden layer 1 to hidden layer 2k(n +1) represents the adjusted threshold, θ, from hidden layer 1 to hidden layer 2m(n +1) represents the threshold before adjustment of the hidden layer 2 to the output layer, θm(n) represents the adjusted threshold from hidden layer 2 to output layer.
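The claim-4 adjustment functions are simple additive corrections of the form W(n+1) = W(n) + η·δ·input and θ(n+1) = θ(n) + λ·δ; a sketch with illustrative list layouts and numbers:

```python
# One adjustment step for a layer's weights and thresholds, as in claim 4.

def update_weights(w, deltas, inputs, eta):
    # w[i][j] connects input i to neuron j
    return [[w[i][j] + eta * deltas[j] * inputs[i]
             for j in range(len(deltas))] for i in range(len(inputs))]

def update_thresholds(theta, deltas, lam):
    return [theta[j] + lam * deltas[j] for j in range(len(theta))]

w = update_weights([[0.5]], deltas=[0.2], inputs=[1.0], eta=0.1)
theta = update_thresholds([0.1], deltas=[0.2], lam=0.1)
print(round(w[0][0], 4), round(theta[0], 4))  # 0.52 0.12
```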
5. The method of claim 1,
the method for using the obtained first bandwidth request information sent by the ONU in the previous period as the input of the preset neural network prediction model comprises the following steps:
normalizing the acquired first bandwidth request information sent by the ONU in the previous period;
and taking the normalized first bandwidth request information as the input of a preset neural network prediction model.
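The claim says only "a preset normalization algorithm"; min-max scaling with a matching inverse (the reverse normalization of claim 6) is a common choice and is what this sketch assumes. The bounds `lo`/`hi` are invented:

```python
# Min-max normalization of request bandwidths before prediction, with the
# inverse used to recover real bandwidth values afterwards.

def normalize(values, lo, hi):
    return [(v - lo) / (hi - lo) for v in values]

def denormalize(values, lo, hi):
    return [v * (hi - lo) + lo for v in values]   # inverse transform

reqs = [10.0, 25.0, 40.0]                         # requested bandwidths
norm = normalize(reqs, lo=0.0, hi=50.0)
print(norm)  # [0.2, 0.5, 0.8]
```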
6. The method of claim 5, wherein the sending a first bandwidth grant message to the ONU according to the predicted requested bandwidth data of the ONU for the current period comprises:
performing reverse normalization processing on the predicted request bandwidth data of the ONU in the current period by using a preset reverse normalization algorithm;
taking the prediction request bandwidth data of the ONU in the current period after the reverse normalization processing as target prediction request bandwidth data;
judging whether the size of the requested bandwidth data corresponding to each service data identifier in the target prediction requested bandwidth data exceeds a preset bandwidth threshold corresponding to each service data identifier;
if the size of the request bandwidth data corresponding to a service data identifier exceeds the preset bandwidth threshold corresponding to that service data identifier, marking the request bandwidth data corresponding to the service data identifier as a heavy load;
if the size of the request bandwidth data corresponding to a service data identifier does not exceed the preset bandwidth threshold corresponding to that service data identifier, marking the request bandwidth data corresponding to the service data identifier as a light load;
sorting the request bandwidth data corresponding to the service data identifiers in the target prediction request bandwidth data according to the preset priority of the service data and the load mark of the request bandwidth data corresponding to each service data identifier, to obtain a sorting result of the request bandwidth data corresponding to the service data identifiers;
wherein the sorting result of the request bandwidth data corresponding to the service data identifiers includes: the highest-priority light load, the highest-priority heavy load, the second-priority light load, the lowest-priority light load, the second-priority heavy load and the lowest-priority heavy load;
and sending first bandwidth authorization information to the ONU according to the sequencing result of the request bandwidth data corresponding to the service data identifier.
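A rough sketch of the claim-6 classify-and-sort step. The service names, priorities and thresholds are invented; for simplicity this version serves all light loads before all heavy loads within priority order, which only approximates the exact sequence recited in the claim:

```python
# Classify each service's predicted request as heavy or light against a
# per-service threshold, then order grants by load class and priority.

PRIORITY = {"voice": 0, "video": 1, "data": 2}    # 0 = highest priority

def classify(predicted, thresholds):
    """Mark each service's predicted request as heavy or light load."""
    return {s: ("heavy" if mb > thresholds[s] else "light")
            for s, mb in predicted.items()}

def grant_order(predicted, thresholds):
    load = classify(predicted, thresholds)
    # sort: light before heavy, then by priority
    return sorted(predicted, key=lambda s: (load[s] == "heavy", PRIORITY[s]))

pred = {"voice": 1.0, "video": 9.0, "data": 3.0}
thr = {"voice": 2.0, "video": 5.0, "data": 5.0}
print(grant_order(pred, thr))  # ['voice', 'data', 'video']
```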
7. The method of claim 1, wherein after the step of sending the first bandwidth authorization information to the ONU according to the predicted request bandwidth data of the ONU in the current period, the method further comprises:
acquiring the bandwidth data received by the OLT in the current period as target transmission bandwidth data;
and updating the preset neural network prediction model according to the target transmission bandwidth data.
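The claim-7 feedback step amounts to recording what the ONU actually transmitted this period so the predictor can be retrained on it later; a trivial sketch with an illustrative history structure:

```python
# Append the actually transmitted bandwidth to the per-service history
# that the prediction model is (re)trained on.

def record_transmitted(history, service_id, transmitted_mb):
    history.setdefault(service_id, []).append(transmitted_mb)
    return history

history = {"video": [6.0, 7.5]}                   # past periods
record_transmitted(history, "video", 8.0)         # target transmission data
print(history["video"])  # [6.0, 7.5, 8.0]
```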
8. A network bandwidth resource allocation apparatus applied to a controller in a Software Defined Network (SDN) architecture, the apparatus comprising:
the bandwidth prediction module is used for taking the acquired first bandwidth request information sent by the ONU in the previous period as the input of a preset neural network prediction model and acquiring the prediction request bandwidth data of the ONU in the current period through the neural network prediction model;
the first bandwidth request information includes: the service data identification and the request bandwidth data corresponding to the service data identification;
the neural network prediction models are in one-to-one correspondence with the service data identifications; each neural network prediction model comprises: a mathematical operation relation between the request bandwidth corresponding to the service data identification and the predicted request bandwidth data of the ONU; the predicted request bandwidth data comprises: the service data identification and the prediction request bandwidth data corresponding to the service data identification;
the bandwidth authorization module is used for sending first bandwidth authorization information to the ONU according to the bandwidth data of the predicted request of the ONU in the current period;
wherein the first bandwidth authorization information comprises: the service data identification, the prediction request bandwidth data corresponding to the service data identification and the time slot size of the transmission channel occupied by the ONU in the transmission period are obtained;
the bandwidth authorization module comprises:
the bandwidth authorization sub-module is used for sending second bandwidth authorization information to the ONU according to the second bandwidth request information for the next period, received from the ONU in the current period, and the target transmission bandwidth data;
the target transmission bandwidth data is the bandwidth data received by the OLT in the current period, that is, the bandwidth data transmitted by the ONU to the OLT according to the received first bandwidth authorization information; the second bandwidth authorization information includes: the service data identification, the prediction request bandwidth data corresponding to the service data identification, and the time slot size of the transmission channel occupied by the ONU in the transmission period.
9. An electronic device, characterized by comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with one another through the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of any of claims 1 to 7 when executing a program stored in the memory.
CN201810172531.7A 2018-03-01 2018-03-01 Network bandwidth resource allocation method and device Active CN108449286B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810172531.7A CN108449286B (en) 2018-03-01 2018-03-01 Network bandwidth resource allocation method and device

Publications (2)

Publication Number Publication Date
CN108449286A CN108449286A (en) 2018-08-24
CN108449286B true CN108449286B (en) 2020-07-03

Family

ID=63193462

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810172531.7A Active CN108449286B (en) 2018-03-01 2018-03-01 Network bandwidth resource allocation method and device

Country Status (1)

Country Link
CN (1) CN108449286B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109598378A (en) * 2018-11-30 2019-04-09 平安医疗健康管理股份有限公司 Medical insurance prediction technique, device, equipment and computer readable storage medium
CN111491312B (en) * 2019-01-28 2023-07-25 中国移动通信有限公司研究院 Method and equipment for predicting allocation, acquisition and training of wireless resources and neural network
CN111586501A (en) * 2019-02-19 2020-08-25 中兴通讯股份有限公司 Data transmission method and device, AP, ONU PON, networking and storage medium
CN110198280A (en) * 2019-05-28 2019-09-03 华南理工大学 A kind of SDN link allocation method based on BP neural network
CN110493072A (en) * 2019-07-11 2019-11-22 网宿科技股份有限公司 Bandwidth filtering method, device, server and storage medium based on deep learning
US11005689B2 (en) 2019-07-11 2021-05-11 Wangsu Science & Technology Co., Ltd. Method and apparatus for bandwidth filtering based on deep learning, server and storage medium
CN110535803B (en) * 2019-09-03 2021-06-25 西南交通大学 Passive optical network uplink transmission receiving end demodulation method
CN111565323B (en) * 2020-03-23 2022-11-08 视联动力信息技术股份有限公司 Flow control method and device, electronic equipment and storage medium
CN114501353B (en) * 2020-10-23 2024-01-05 维沃移动通信有限公司 Communication information sending and receiving method and communication equipment
CN112367708B (en) * 2020-10-30 2023-05-26 新华三技术有限公司 Network resource allocation method and device
CN112532459A (en) * 2020-12-07 2021-03-19 郑州师范学院 Bandwidth resource adjusting method, device and equipment
CN114866145B (en) * 2021-01-20 2024-02-09 上海诺基亚贝尔股份有限公司 Method, apparatus, device and computer readable medium for optical communication
CN114244767B (en) * 2021-11-01 2023-09-26 北京邮电大学 Link minimum end-to-end delay routing algorithm based on load balancing
CN115002215B (en) * 2022-04-11 2023-12-05 北京邮电大学 Cloud government enterprise oriented resource allocation model training method and resource allocation method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1697348A (en) * 2004-05-14 2005-11-16 上海贝尔阿尔卡特股份有限公司 Dynamic bandwidth allocation method and device in multiple operation types, and optical line terminal
CN101512970A (en) * 2005-04-15 2009-08-19 新泽西理工学院 Dynamic bandwidth allocation and service differentiation for broadband passive optical networks
WO2011020918A1 (en) * 2009-08-21 2011-02-24 Telefonaktiebolaget L M Ericsson (Publ) Method for soft bandwidth limiting in dynamic bandwidth allocation
CN105933064A (en) * 2016-07-05 2016-09-07 北京邮电大学 Dynamic bandwidth allocation method and apparatus
CN106209687A (en) * 2016-07-12 2016-12-07 重庆邮电大学 A kind of hybrid multiplex efficient distribution method of PON global resource

Also Published As

Publication number Publication date
CN108449286A (en) 2018-08-24

Similar Documents

Publication Publication Date Title
CN108449286B (en) Network bandwidth resource allocation method and device
CN111444009A (en) Resource allocation method and device based on deep reinforcement learning
CN109768879B (en) Method and device for determining target service server and server
CN110519183B (en) Node speed limiting method and device, electronic equipment and storage medium
US10708195B2 (en) Predictive scheduler
CN111970762B (en) Spectrum allocation method and device and electronic equipment
CN109144730B (en) Task unloading method and device under small cell
CN112882809A (en) Method and device for determining computing terminal of driving task and computer equipment
CN109041236B (en) Wireless resource allocation method and device for services with different weights
CN108805332B (en) Feature evaluation method and device
CN107357649B (en) Method and device for determining system resource deployment strategy and electronic equipment
CN114077483A (en) Data resource scheduling method, server, system and storage medium
CN116820769A (en) Task allocation method, device and system
US9514289B2 (en) License management methods
CN113472591B (en) Method and device for determining service performance
CN111181875A (en) Bandwidth adjusting method and device
CN114675845A (en) Information age optimization method and device, computer equipment and storage medium
CN113269339A (en) Method and system for automatically creating and distributing network appointment tasks
CN113055199A (en) Gateway access method and device and gateway equipment
CN111130933A (en) Page flow estimation method and device and computer readable storage medium
CN110381168B (en) Prediction period determining method, prediction content pushing method, device and system
JP7384214B2 (en) Analysis processing device, system, method and program
CN111917657B (en) Method and device for determining flow transmission strategy
CN112306701B (en) Service fusing method, device, equipment and storage medium
CN111404729B (en) Edge cloud cooperative system management method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant