CN113055762A - Traffic prediction method and device, and bandwidth allocation method and device - Google Patents

Traffic prediction method and device, and bandwidth allocation method and device

Info

Publication number
CN113055762A
Authority
CN
China
Prior art keywords: bandwidth, traffic, information, prediction information, cont
Legal status: Pending
Application number
CN201911380635.8A
Other languages
Chinese (zh)
Inventor
王硕
Current Assignee
Sanechips Technology Co Ltd
Original Assignee
Sanechips Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Sanechips Technology Co Ltd filed Critical Sanechips Technology Co Ltd
Priority to CN201911380635.8A priority Critical patent/CN113055762A/en
Priority to PCT/CN2020/138728 priority patent/WO2021129687A1/en
Publication of CN113055762A publication Critical patent/CN113055762A/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04Q SELECTING
    • H04Q11/00 Selecting arrangements for multiplex systems
    • H04Q11/0001 Selecting arrangements for multiplex systems using optical switching
    • H04Q11/0062 Network aspects
    • H04Q11/0067 Provisions for optical access or distribution networks, e.g. Gigabit Ethernet Passive Optical Network (GE-PON), ATM-based Passive Optical Network (A-PON), PON-Ring
    • H04Q2011/0079 Operation or maintenance aspects
    • H04Q2011/0083 Testing; Monitoring
    • H04Q2011/0086 Network resource allocation, dimensioning or optimisation

Abstract

The invention provides a traffic prediction method and device, and a bandwidth allocation method and device. The traffic prediction method includes: determining second traffic prediction information according to first cache information, first traffic information, and second cache information. The first cache information is used for indicating the buffer occupancy state of a transmission container T-cont in a first dynamic bandwidth allocation DBA period, the first traffic information is used for indicating the traffic sent by the T-cont in the first DBA period, the second cache information is used for indicating the buffer occupancy state of the T-cont in a second DBA period, the second traffic prediction information is used for indicating a predicted value of the OLT for the second traffic information, and the second traffic information is the traffic sent by the T-cont in the second DBA period. The invention solves the problem in the related art that the OLT cannot accurately predict the bandwidth required by the ONU, so that the OLT can accurately predict the bandwidth required by the ONU.

Description

Traffic prediction method and device, and bandwidth allocation method and device
Technical Field
The present invention relates to the field of communications, and in particular to a traffic prediction method and apparatus, and a bandwidth allocation method and apparatus.
Background
In a Passive Optical Network (PON) system, an Optical Line Terminal (OLT) generally allocates upstream bandwidth to each Optical Network Unit (ONU) by Dynamic Bandwidth Allocation (DBA); specifically, the OLT performs bandwidth prediction for each ONU according to the buffer status or traffic monitoring reported by the ONU. Since the bandwidth of the whole link is fixed, the bandwidth required by each ONU needs to be predicted accurately in practical use so that bandwidth is used efficiently and not wasted.
In practice, the OLT generally predicts the bandwidth required by a user in one of two ways: Status Reporting (SR), in which the ONU reports its buffer occupancy to the OLT and the OLT performs bandwidth prediction based on the report; and Traffic Monitoring (TM), in which the OLT predicts the bandwidth according to the monitored proportion of GEM frames. In the actual prediction process, the status-report approach always lags to some extent, so the OLT cannot react quickly to changes in the ONU buffer, which leads to large delay in practical use. For the traffic-monitoring approach, if the user traffic information is distorted, the monitored information may not reflect the bandwidth actually needed, which may cause the OLT to grant an inaccurate bandwidth and thus waste bandwidth.
In view of the above problem that the OLT cannot accurately predict the bandwidth required by the ONU, no effective solution has yet been proposed in the related art.
Disclosure of Invention
Embodiments of the present invention provide a traffic prediction method and apparatus, and a bandwidth allocation method and apparatus, so as to at least solve the problem in the related art that an OLT cannot accurately predict the bandwidth required by an ONU.
According to an embodiment of the present invention, there is provided a traffic prediction method applied to an optical line terminal OLT, the method including:
determining second traffic prediction information from: first cache information, first traffic information, second cache information;
the first buffer information is used to indicate a buffer occupancy state of a transmission container T-cont in a first dynamic bandwidth allocation DBA cycle, the first traffic information is used to indicate traffic sent by the T-cont in the first DBA cycle, the second buffer information is used to indicate a buffer occupancy state of the T-cont in a second DBA cycle, the second traffic prediction information is used to indicate a predicted value of the OLT for the second traffic information, and the second traffic information is the traffic sent by the T-cont in the second DBA cycle.
According to another embodiment of the present invention, there is further provided a bandwidth allocation method applied to an OLT, the method includes the traffic prediction method described in the foregoing embodiment, and the bandwidth allocation method includes:
determining first bandwidth prediction information according to the second traffic prediction information, wherein the first bandwidth prediction information is used for indicating a predicted value of a bandwidth required by the OLT for the T-cont;
and allocating the bandwidth for the T-cont according to at least the first bandwidth prediction information.
According to another embodiment of the present invention, there is also provided a traffic prediction apparatus applied to an optical line terminal OLT, including:
a prediction module to determine second traffic prediction information from: first cache information, first traffic information, second cache information;
the first buffer information is used to indicate a buffer occupancy state of a transmission container T-cont in a first dynamic bandwidth allocation DBA cycle, the first traffic information is used to indicate traffic sent by the T-cont in the first DBA cycle, the second buffer information is used to indicate a buffer occupancy state of the T-cont in a second DBA cycle, the second traffic prediction information is used to indicate a predicted value of the OLT for the second traffic information, and the second traffic information is the traffic sent by the T-cont in the second DBA cycle.
According to another embodiment of the present invention, there is further provided a bandwidth allocation apparatus applied to an OLT, the apparatus including the traffic prediction apparatus in the foregoing embodiment, the bandwidth allocation apparatus including:
a determining module, configured to determine first bandwidth prediction information according to the second traffic prediction information, where the first bandwidth prediction information is used to indicate a predicted value of a bandwidth required by the OLT for the T-cont;
and the allocation module is used for allocating the bandwidth for the T-cont at least according to the first bandwidth prediction information.
According to another embodiment of the present invention, a computer-readable storage medium is also provided, in which a computer program is stored, wherein the computer program is configured to perform the steps of any of the above-described method embodiments when executed.
According to another embodiment of the present invention, there is also provided an electronic device, including a memory in which a computer program is stored and a processor configured to execute the computer program to perform the steps in any of the above method embodiments.
With the present invention, the second traffic prediction information can be determined from the first cache information, the first traffic information and the second cache information, where the first cache information indicates the buffer occupancy state of a transmission container T-cont in a first dynamic bandwidth allocation DBA period, the first traffic information indicates the traffic sent by the T-cont in the first DBA period, the second cache information indicates the buffer occupancy state of the T-cont in a second DBA period, the second traffic prediction information indicates a predicted value of the OLT for the second traffic information, and the second traffic information is the traffic sent by the T-cont in the second DBA period. The invention therefore solves the problem in the related art that the OLT cannot accurately predict the bandwidth required by the ONU, so that the OLT can accurately predict the bandwidth required by the ONU, ensure stable traffic transmission between the OLT and the ONU, and avoid bandwidth waste.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
Fig. 1 is a flowchart of a traffic prediction method according to an embodiment of the present invention;
Fig. 2 is a flowchart of a DBA according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of traffic prediction according to an embodiment of the present invention;
Fig. 4 is a flowchart of a bandwidth allocation method according to an embodiment of the present invention;
Fig. 5 is a flowchart of bandwidth adjustment in a bandwidth allocation process according to an embodiment of the present invention;
Fig. 6 is a flowchart of bandwidth allocation in a bandwidth allocation process according to an embodiment of the present invention;
Fig. 7 is a block diagram of a traffic prediction apparatus according to an embodiment of the present invention;
Fig. 8 is a block diagram of a bandwidth allocation apparatus according to an embodiment of the present invention.
Detailed Description
The invention will be described in detail hereinafter with reference to the accompanying drawings in conjunction with embodiments. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
Example 1
This embodiment provides a traffic prediction method, which is applied to an optical line terminal OLT. Fig. 1 is a flowchart of the traffic prediction method according to this embodiment of the present invention; as shown in Fig. 1, the method in this embodiment includes:
S102, determining second traffic prediction information according to the following objects: first cache information, first traffic information, second cache information;
the first buffer information is used for indicating a buffer occupancy state of a transmission container T-cont in a first dynamic bandwidth allocation DBA period, the first traffic information is used for indicating traffic sent by the T-cont in the first DBA period, the second buffer information is used for indicating a buffer occupancy state of the T-cont in a second DBA period, the second traffic prediction information is used for indicating a predicted value of the OLT for the second traffic information, and the second traffic information is the traffic sent by the T-cont in the second DBA period.
It should be further noted that the first DBA period and the second DBA period are two consecutive DBA periods, and the first DBA period immediately precedes the second DBA period in time. The traffic prediction method in this embodiment may predict the traffic forwarded by the T-cont in the current DBA cycle, i.e. determine the second traffic prediction information, according to the buffer occupancy state of the T-cont in the previous DBA cycle, the traffic forwarded by the T-cont in the previous DBA cycle, and the buffer occupancy state reported by the T-cont in the current DBA cycle.
In an optional embodiment, in step S102, determining the second traffic prediction information according to the above objects includes:
determining remaining cache information according to the first cache information and the first traffic information, wherein the remaining cache information is used for indicating the remaining cache of the T-cont at the end of the first DBA cycle;
and determining the second traffic prediction information according to the remaining cache information and the second cache information.
It should be further noted that the remaining cache of the T-cont at the end of the first DBA cycle, indicated by the remaining cache information, may be obtained by subtracting the traffic sent by the T-cont in the first DBA cycle from the buffer occupancy state reported by the T-cont in the first DBA cycle; the part of that buffer occupancy state left at the end of the first DBA cycle determines the remaining cache information.
With the traffic prediction method in this embodiment, the second traffic prediction information can be determined according to the following objects: first cache information, first traffic information, second cache information; the first buffer information is used for indicating a buffer occupancy state of a transmission container T-cont in a first dynamic bandwidth allocation DBA period, the first traffic information is used for indicating traffic sent by the T-cont in the first DBA period, the second buffer information is used for indicating a buffer occupancy state of the T-cont in a second DBA period, the second traffic prediction information is used for indicating a predicted value of the OLT for the second traffic information, and the second traffic information is the traffic sent by the T-cont in the second DBA period. Therefore, the traffic prediction method in this embodiment can solve the problem that the OLT cannot accurately predict the bandwidth required by the ONU in the related art, so that the OLT can accurately predict the bandwidth required by the ONU, thereby ensuring stable traffic transmission between the OLT and the ONU, and avoiding possible bandwidth waste.
Specifically, in this embodiment, the OLT predicts the traffic uploaded by the T-cont in the current DBA cycle not only from the buffer occupancy state reported by the T-cont in the current cycle, but also from the buffer occupancy state reported by the T-cont in the previous DBA cycle and the traffic uploaded by the T-cont in that cycle. In this way the OLT avoids the poor prediction accuracy caused by the lag of the status reports from the T-cont, and can accurately predict the traffic uploaded by the T-cont in each DBA cycle. Consequently, the OLT can accurately determine the bandwidth required by the ONU during bandwidth allocation, ensuring stable traffic transmission between the OLT and the ONU and avoiding bandwidth waste.
To further illustrate the traffic prediction method in this embodiment, the following describes it by way of a specific example.
Specific example 1
In this example, the buffer occupancy status reported by each T-cont is denoted R(t), the traffic reported by each T-cont is denoted PM(t), and the number of IDLE (empty) frames inserted by each T-cont is denoted IDLE(t), where t is the DBA period.
The DBA performs traffic prediction for each T-cont and, on that basis, bandwidth prediction for bandwidth allocation, and generates the BWMAP, so that the OLT downstream frame can be sent to the ONU. Fig. 2 is a flowchart of the DBA according to an embodiment of the present invention; the functions implemented by the DBA are shown in Fig. 2.
Fig. 3 is a schematic diagram of traffic prediction according to an embodiment of the present invention; the traffic prediction process in this example is shown in Fig. 3. Specifically, taking the t-th DBA cycle as the current cycle, after the previous DBA cycle ends and the T-cont has sent its traffic, the remaining cache of the T-cont is R(t-1) - PM(t-1). Assuming the traffic entering the T-cont in the current cycle is PM(t), the traffic of the current cycle can be predicted as PM(t) = R(t) - (R(t-1) - PM(t-1)). Thus, the traffic input of the user in the current cycle can be predicted from R(t) reported by the upstream dynamic bandwidth report DBRu of the current cycle together with R(t-1) and PM(t-1) of the previous cycle, and on this basis the bandwidth of the user in the current cycle can be predicted accurately.
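As a minimal illustration of the formula above, the following Python sketch computes the predicted traffic of the current cycle from R(t), R(t-1) and PM(t-1). The function name and the non-negative clamp at the end are assumptions added for the sketch; they are not taken from the patent.

```python
def predict_traffic(r_curr, r_prev, pm_prev):
    """Predict the traffic entering a T-cont in the current DBA cycle.

    r_curr  -- buffer occupancy R(t) reported via DBRu in the current cycle
    r_prev  -- buffer occupancy R(t-1) reported in the previous cycle
    pm_prev -- traffic PM(t-1) actually sent by the T-cont in the previous cycle
    """
    remaining = r_prev - pm_prev   # buffer left over at the end of cycle t-1
    est = r_curr - remaining       # PM(t) = R(t) - (R(t-1) - PM(t-1))
    return max(est, 0)             # clamp to zero: an added assumption, not in the patent
```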
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
Example 2
This embodiment provides a bandwidth allocation method, which is applied to an OLT; the bandwidth allocation method in this embodiment includes the traffic prediction method of Embodiment 1. Fig. 4 is a flowchart of the bandwidth allocation method according to an embodiment of the present invention; as shown in Fig. 4, the bandwidth allocation method in this embodiment includes:
S202, determining first bandwidth prediction information according to the second traffic prediction information, wherein the first bandwidth prediction information is used for indicating a predicted value of the bandwidth required by the OLT for the T-cont;
s204, allocating bandwidth for the T-cont at least according to the first bandwidth prediction information.
It should be further noted that, in the bandwidth allocation method in this embodiment, on the basis of the second traffic prediction information obtained by the traffic prediction method in embodiment 1, the bandwidth required by T-cont is predicted, that is, the first bandwidth prediction information in step S202 is obtained, and the bandwidth is allocated for T-cont by using the first bandwidth prediction information.
By the bandwidth allocation method in the embodiment, the bandwidth required by the ONU can be accurately predicted in the bandwidth allocation process, so that the influence of other factors of the whole link on bandwidth prediction is reduced, and more accurate bandwidth allocation to the ONU is realized.
In an optional embodiment, in step S204, allocating a bandwidth for T-cont according to at least the first bandwidth prediction information includes:
determining the traffic change state in the T-cont according to the first traffic prediction information and the second traffic prediction information; wherein the first traffic prediction information is used for indicating a predicted value of the OLT for the first traffic information in the first DBA period;
and adjusting the first bandwidth prediction information according to the traffic change state to determine second bandwidth prediction information, and allocating the bandwidth for the T-cont according to the second bandwidth prediction information.
It should be further noted that, in this optional embodiment, the traffic change state in the T-cont is determined from the first traffic prediction information and the second traffic prediction information, so that, on the basis of the first bandwidth prediction information obtained from the second traffic prediction information, it is further judged whether the first bandwidth prediction information matches the traffic change of the T-cont; the second bandwidth prediction information is then determined accordingly, providing more accurate bandwidth allocation for the T-cont.
In an optional embodiment, determining the traffic change state in the T-cont according to the first traffic prediction information and the second traffic prediction information includes:
determining the traffic change state according to the relationship between a preset difference threshold and the difference between the first traffic prediction information and the second traffic prediction information.
It should be further noted that this optional embodiment determines the traffic change state in the T-cont from the difference between the first traffic prediction information and the second traffic prediction information. Specifically, when the difference between them is too large, i.e. greater than or equal to the difference threshold, the traffic change state in the T-cont is determined to be an unstable state; conversely, when the difference is small, i.e. smaller than the difference threshold, the traffic change in the T-cont is considered small and the traffic change state is determined to be a stable state.
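A minimal sketch of this difference-threshold check; the function name and the threshold value are illustrative assumptions, not taken from the patent:

```python
DIFF_THRESHOLD = 1024  # preset difference threshold; the value is purely illustrative

def is_traffic_stable(est_prev, est_curr, threshold=DIFF_THRESHOLD):
    """Return True when the change between two cycles' predictions is below the threshold."""
    return abs(est_curr - est_prev) < threshold
```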
In an optional embodiment, the determining the traffic change state in T-cont according to the first traffic prediction information and the second traffic prediction information includes:
determining the traffic change state according to the relationship between the first traffic prediction information and a preset minimum value and the relationship between the second traffic prediction information and a preset maximum value; or,
determining the traffic change state according to the relationship between the second traffic prediction information and the preset minimum value and the relationship between the first traffic prediction information and the preset maximum value.
It should be further noted that in this optional embodiment the traffic change state in the T-cont is determined by comparing the first traffic prediction information and the second traffic prediction information with the preset minimum and maximum values. Specifically, when the smaller of the two prediction values is below the minimum value and the other is above the maximum value, the traffic change in the T-cont is considered large and the traffic change state is determined to be an unstable state; conversely, when the smaller of the two is above the minimum value and the other is below the maximum value, the traffic change in the T-cont is considered small and the traffic change state is determined to be a stable state.
In an optional embodiment, the adjusting the first bandwidth prediction information according to the traffic change status to determine the second bandwidth prediction information includes:
and in the case that the traffic change state in the T-cont is a stable state, determining the second bandwidth prediction information according to the first bandwidth prediction information, or the first bandwidth prediction information and the first traffic information.
It should be further noted that, in this optional embodiment, once the traffic change state in the T-cont is determined to be a stable state, the second bandwidth prediction information may either be determined directly from the first bandwidth prediction information (i.e. the first bandwidth prediction information is taken as the second bandwidth prediction information), or be obtained by adjusting the first bandwidth prediction information according to the first traffic information so that it differs from the first bandwidth prediction information.
In an optional embodiment, the determining the second bandwidth prediction information according to the first bandwidth prediction information or the first bandwidth prediction information and the first traffic information includes:
acquiring the number of IDLE frames in the T-cont;
determining the first bandwidth prediction information as the second bandwidth prediction information in the case that the number of IDLE frames is less than or equal to a prediction threshold; or,
under the condition that the number of IDLE frames is greater than the prediction threshold, determining the second bandwidth prediction information according to the first bandwidth prediction information and the first traffic information; wherein the second bandwidth prediction information is greater than the first bandwidth prediction information;
wherein the prediction threshold is determined based on the first traffic information.
It should be further noted that, in this optional embodiment, the first bandwidth prediction information is either taken directly as the second bandwidth prediction information or adjusted according to the first traffic information, depending on the number of IDLE frames in the T-cont. Specifically, when the number of IDLE frames is less than or equal to the prediction threshold, i.e. the proportion of IDLE frames in the T-cont is small, the first bandwidth prediction information can be taken directly as the second bandwidth prediction information; when the number of IDLE frames is greater than the prediction threshold, i.e. the proportion of IDLE frames in the T-cont is large, the bandwidth actually required by the T-cont may be greater than that indicated by the first bandwidth prediction information, so second bandwidth prediction information greater than the first bandwidth prediction information is determined from the first bandwidth prediction information and the first traffic information.
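A hedged sketch of this choice is shown below. It assumes the prediction threshold has the form β × PM(t-1), i.e. is derived from the first traffic information, and reuses the coefficient values quoted later in Specific Example 2; all names are illustrative.

```python
def choose_second_bandwidth(first_bw_pred, idle_count, pm_prev, beta=0.0625, lam=0.25):
    """Derive the second bandwidth prediction from the first one and the IDLE-frame count.

    The prediction threshold is taken here as beta * pm_prev, i.e. derived from the
    first traffic information; beta and lam follow the coefficient values quoted in
    Specific Example 2 and are otherwise illustrative.
    """
    prediction_threshold = beta * pm_prev
    if idle_count <= prediction_threshold:
        return first_bw_pred                 # few IDLE frames: keep the prediction as is
    return first_bw_pred + lam * pm_prev     # many IDLE frames: issue a larger prediction
```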
In an optional embodiment, the adjusting the first bandwidth prediction information according to the traffic change status to determine the second bandwidth prediction information includes:
determining the second bandwidth prediction information according to the first bandwidth prediction information and the second cache information in the case that the traffic change state in the T-cont is an unstable state; wherein the second bandwidth prediction information is greater than the first bandwidth prediction information.
It should be further noted that, when the traffic change state in the T-cont is an unstable state, the traffic of the T-cont in the second DBA period may differ greatly from that in the first DBA period, so the second traffic prediction information determined by the traffic prediction method may no longer be accurate enough for bandwidth allocation. In this optional embodiment, the second bandwidth prediction information is therefore determined from the first bandwidth prediction information together with the second cache information, so that the bandwidth required by the T-cont can still be predicted and allocated accurately.
In an optional embodiment, the allocating the bandwidth to the T-cont according to the second bandwidth prediction information includes:
allocating bandwidth for the plurality of T-cont according to the second bandwidth prediction information corresponding to each T-cont, wherein the bandwidth comprises at least one of the following:
fixed bandwidth, guaranteed bandwidth, best effort bandwidth, non-guaranteed bandwidth.
In an optional embodiment, the allocating bandwidths to the plurality of T-cont according to the second bandwidth prediction information corresponding to each T-cont further includes:
and in the situation that the residual bandwidth exists, distributing the residual bandwidth for the plurality of T-cont according to the relation between the residual bandwidth and the sum of the request traffic of the plurality of T-cont.
In an optional embodiment, the allocating the remaining bandwidth to the plurality of T-cont includes:
under the condition that the residual bandwidth is smaller than the sum of the requested traffic, distributing the residual bandwidth to the plurality of T-cont according to a preset first weight; or,
and determining a second weight among the plurality of T-cont according to the request traffic of each T-cont in case that the residual bandwidth is greater than or equal to the sum of the request traffic, and allocating the residual bandwidth for the plurality of T-cont according to the second weight.
It should be further noted that, in this optional embodiment, the manner of allocating the residual bandwidth is determined by the relationship between the residual bandwidth and the sum of the traffic requested by the plurality of T-cont. Specifically, in the case that the residual bandwidth is smaller than the sum of the requested traffic, the residual bandwidth is allocated to the T-cont according to a preset first weight, such as the priority configured when each T-cont was established; in the case that the residual bandwidth is greater than or equal to the sum of the requested traffic, a second weight among the T-cont is determined from the traffic requested by each T-cont, and the residual bandwidth is allocated according to that weight. Allocating the residual bandwidth in this way effectively reduces the ONU buffer occupancy and thus reduces the delay.
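A hedged sketch of this residual-bandwidth distribution is given below. The function and parameter names are illustrative, and the zero guards are assumptions added for the sketch rather than part of the patent.

```python
def allocate_remaining_bandwidth(remaining, requests, preset_weights):
    """Distribute the residual bandwidth among T-conts.

    remaining      -- bandwidth still unallocated after the fixed/guaranteed/best-effort grants
    requests       -- dict: T-cont id -> requested (predicted) bandwidth
    preset_weights -- dict: T-cont id -> preset first weight (e.g. configured priority)
    """
    total_request = sum(requests.values())
    if remaining <= 0 or total_request == 0:
        return {tc: 0 for tc in requests}     # guard added for the sketch
    if remaining < total_request:
        # Requests exceed the residual bandwidth: share it by the preset first weights
        weight_sum = sum(preset_weights.values())
        return {tc: remaining * preset_weights[tc] / weight_sum for tc in requests}
    # Otherwise each T-cont's own request value serves as the second weight
    return {tc: remaining * requests[tc] / total_request for tc in requests}
```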
To further illustrate the bandwidth allocation method in this embodiment, the following describes it by way of a specific example.
Specific example 2
In this example, the buffer occupancy status reported by each T-cont is denoted R(t), the traffic reported by each T-cont is denoted PM(t), and the number of IDLE (empty) frames inserted by each T-cont is denoted IDLE(t), where t is the DBA period.
S1, acquiring the DBRu, PM(t) and IDLE(t) reported by each T-cont in each DBA period, and performing traffic prediction according to them; the DBRu contains the buffer occupancy state R(t) of the T-cont;
S2, calculating the predicted traffic of the current DBA cycle, EST_C = R(t) - (R(t-1) - PM(t-1)), and obtaining the predicted traffic EST_P saved in the previous DBA cycle (it can be calculated and obtained in the previous DBA cycle). These two values are compared with two preset thresholds, an upper threshold T1 and a lower threshold T2, so as to judge the traffic change state of the T-cont from the predicted traffic of the current and previous DBA cycles. Fig. 5 is a flowchart of bandwidth adjustment in the bandwidth allocation process according to an embodiment of the present invention; the above judgment and adjustment of EST_C is shown in Fig. 5. The judgment proceeds as follows:
when EST_C > T1 and EST_P < T2 (i.e. EST_C is much greater than EST_P), or EST_P > T1 and EST_C < T2 (i.e. EST_P is much greater than EST_C), the traffic of the T-cont is judged to be in an unstable state, and the processing of step S4 is performed; otherwise, the traffic of the T-cont is judged to be in a stable state, and step S3 is executed;
S3, when the traffic of the T-cont is in a stable state, the trend of the T-cont traffic can be further distinguished as follows:
if IDLE > β × PM (β is a preset coefficient, which may be 0.0625 in this example), the predicted bandwidth may be slightly small and needs to be adjusted upward a little; specifically, the adjusted predicted bandwidth may be EST_C + λ × PM(t-1) (λ is a preset coefficient, which may be 0.25 in this example); otherwise, the issued bandwidth is sufficient, and the EST_C calculated in S2 can be issued directly as the determined predicted bandwidth;
S4, the traffic of the T-cont is in a changing state, so the influence of the DBRu report needs to be taken into account to predict the bandwidth again; the re-predicted bandwidth may be EST_C + α × R(t) (α is a preset coefficient, which may be 0.125 in this example);
S5, saving the finally determined EST_C, R(t) and PM(t) of the current DBA period for traffic prediction and bandwidth allocation in the next DBA period;
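Steps S2 to S4 can be sketched as follows, treating T1 and T2 as the preset upper and lower thresholds and reusing the example coefficient values α = 0.125, β = 0.0625 and λ = 0.25. The sketch assumes the PM in the IDLE comparison of S3 refers to PM(t-1); the function and variable names are illustrative.

```python
ALPHA = 0.125    # coefficient for the unstable-state re-prediction (S4)
BETA = 0.0625    # IDLE-frame ratio coefficient (S3)
LAMBDA = 0.25    # upward adjustment coefficient in the stable state (S3)

def adjust_predicted_bandwidth(est_c, est_p, r_t, pm_prev, idle, t1, t2):
    """Adjust the predicted bandwidth EST_C of the current DBA cycle (steps S2 to S4).

    est_c   -- predicted traffic of the current cycle, R(t) - (R(t-1) - PM(t-1))
    est_p   -- predicted traffic saved from the previous cycle
    r_t     -- buffer occupancy R(t) reported via DBRu in the current cycle
    pm_prev -- traffic PM(t-1) sent in the previous cycle
    idle    -- number of IDLE frames inserted by the T-cont, IDLE(t)
    t1, t2  -- preset upper and lower thresholds
    """
    unstable = (est_c > t1 and est_p < t2) or (est_p > t1 and est_c < t2)
    if unstable:
        # S4: the traffic is changing, so fold the DBRu report back into the prediction
        return est_c + ALPHA * r_t
    # S3: the traffic is stable; check whether too many IDLE frames were inserted
    if idle > BETA * pm_prev:
        return est_c + LAMBDA * pm_prev      # predicted bandwidth was slightly small
    return est_c                             # issued bandwidth is sufficient
```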
Fig. 6 is a flowchart of bandwidth allocation in the bandwidth allocation process according to an embodiment of the present invention; the bandwidth allocation flow of S6 to S8 described below is shown in Fig. 6.
S6, allocating the fixed bandwidth and the guaranteed bandwidth;
S7, allocating the non-guaranteed bandwidth and the best-effort bandwidth; if bandwidth remains after this allocation, proceed to S8, otherwise end directly;
S8, allocating the remaining bandwidth; the remaining bandwidth needs to be compared with the sum of all request values: if the sum of the request values is larger than the remaining bandwidth, the remaining bandwidth is distributed according to the preset weights; if the sum of the request values is smaller than the remaining bandwidth, the remaining bandwidth is distributed using the request value of each T-cont as its weight.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
Example 3
The present embodiment provides a traffic prediction apparatus, which is applied to an optical line terminal OLT and is configured to implement the foregoing embodiments and preferred embodiments; what has already been described is not repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the apparatus described in the following embodiments is preferably implemented in software, an implementation in hardware, or a combination of software and hardware, is also possible and contemplated. Fig. 7 is a block diagram of the traffic prediction apparatus according to an embodiment of the present invention; as shown in Fig. 7, the traffic prediction apparatus in this embodiment includes:
a prediction module 302 for determining second traffic prediction information from: first cache information, first traffic information, second cache information;
the first buffer information is used for indicating a buffer occupancy state of a transmission container T-cont in a first dynamic bandwidth allocation DBA period, the first traffic information is used for indicating traffic sent by the T-cont in the first DBA period, the second buffer information is used for indicating a buffer occupancy state of the T-cont in a second DBA period, the second traffic prediction information is used for indicating a predicted value of the OLT for the second traffic information, and the second traffic information is the traffic sent by the T-cont in the second DBA period.
It should be further explained that the remaining optional embodiments and technical effects of the traffic prediction apparatus in this embodiment correspond to those of the traffic prediction method in Embodiment 1, and are therefore not described again here.
In an optional embodiment, the determining the second traffic prediction information according to the following objects includes:
determining remaining cache information according to the first cache information and the first traffic information, wherein the remaining cache information is used for indicating the remaining cache of the T-cont at the end of the first DBA cycle;
and determining the second traffic prediction information according to the remaining cache information and the second cache information.
It should be noted that, the above modules may be implemented by software or hardware, and for the latter, the following may be implemented, but not limited to: the modules are all positioned in the same processor; alternatively, the modules are respectively located in different processors in any combination.
Example 4
The present embodiment provides a bandwidth allocation apparatus, which is applied to an OLT, and is used to implement the foregoing embodiments and preferred embodiments, which have already been described and are not described again. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated. The bandwidth allocation apparatus in this embodiment includes the traffic prediction apparatus in embodiment 3, and fig. 8 is a block diagram of a structure of the bandwidth allocation apparatus according to an embodiment of the present invention, as shown in fig. 8, the bandwidth allocation apparatus in this embodiment includes:
a determining module 402, configured to determine first bandwidth prediction information according to the second traffic prediction information, where the first bandwidth prediction information is used to indicate a predicted value of a bandwidth required by the OLT for T-cont;
an allocating module 406, configured to allocate a bandwidth for T-cont according to at least the first bandwidth prediction information.
It should be further explained that the remaining optional embodiments and technical effects of the bandwidth allocation apparatus in this embodiment all correspond to the bandwidth allocation method in embodiment 2, and therefore, no further description is provided herein.
In an optional embodiment, the allocating the bandwidth for the T-cont according to at least the first bandwidth prediction information includes:
determining the traffic change state in the T-cont according to the first traffic prediction information and the second traffic prediction information; the first traffic prediction information is used for indicating a predicted value of the OLT for the first traffic information in the first DBA period;
and adjusting the first bandwidth prediction information according to the traffic change state to determine second bandwidth prediction information, and allocating the bandwidth for the T-cont according to the second bandwidth prediction information.
In an optional embodiment, the determining the traffic change state in T-cont according to the first traffic prediction information and the second traffic prediction information includes:
and determining the traffic change state according to the relationship between a preset difference threshold and the difference between the first traffic prediction information and the second traffic prediction information.
In an optional embodiment, the determining the traffic change state in T-cont according to the first traffic prediction information and the second traffic prediction information includes:
determining the traffic change state according to the relationship between the first traffic prediction information and a preset minimum value and the relationship between the second traffic prediction information and a preset maximum value; or,
determining the traffic change state according to the relationship between the second traffic prediction information and the preset minimum value and the relationship between the first traffic prediction information and the preset maximum value.
In an optional embodiment, the adjusting the first bandwidth prediction information according to the traffic change status to determine the second bandwidth prediction information includes:
and in the case that the traffic change state in the T-cont is a stable state, determining the second bandwidth prediction information according to the first bandwidth prediction information, or the first bandwidth prediction information and the first traffic information.
In an optional embodiment, the determining the second bandwidth prediction information according to the first bandwidth prediction information or the first bandwidth prediction information and the first traffic information includes:
acquiring the number of IDLE frames in the T-cont;
determining the first bandwidth prediction information as the second bandwidth prediction information in the case that the number of IDLE frames is less than or equal to a prediction threshold; or,
under the condition that the number of IDLE frames is greater than the prediction threshold, determining the second bandwidth prediction information according to the first bandwidth prediction information and the first traffic information; wherein the second bandwidth prediction information is greater than the first bandwidth prediction information;
wherein the prediction threshold is determined based on the first traffic information.
In an optional embodiment, the adjusting the first bandwidth prediction information according to the traffic change status to determine the second bandwidth prediction information includes:
under the condition that the flow change state in the T-cont is an unstable state, determining second bandwidth prediction information according to the first bandwidth prediction information and the second cache information; wherein the second bandwidth prediction information is greater than the first bandwidth prediction information.
In an optional embodiment, the allocating the bandwidth to the T-cont according to the second bandwidth prediction information includes:
allocating bandwidth for the plurality of T-cont according to the second bandwidth prediction information corresponding to each T-cont, wherein the bandwidth comprises at least one of the following:
fixed bandwidth, guaranteed bandwidth, best effort bandwidth, non-guaranteed bandwidth.
In an optional embodiment, the allocating bandwidths to the plurality of T-cont according to the second bandwidth prediction information corresponding to each T-cont further includes:
and in the situation that the residual bandwidth exists, distributing the residual bandwidth for the plurality of T-cont according to the relation between the residual bandwidth and the sum of the request traffic of the plurality of T-cont.
In an optional embodiment, the allocating the remaining bandwidth to the plurality of T-cont includes:
under the condition that the residual bandwidth is smaller than the sum of the requested traffic, distributing the residual bandwidth to the plurality of T-cont according to a preset first weight; or,
and determining a second weight among the plurality of T-cont according to the request traffic of each T-cont in case that the residual bandwidth is greater than or equal to the sum of the request traffic, and allocating the residual bandwidth for the plurality of T-cont according to the second weight.
It should be noted that, the above modules may be implemented by software or hardware, and for the latter, the following may be implemented, but not limited to: the modules are all positioned in the same processor; alternatively, the modules are respectively located in different processors in any combination.
Example 5
Embodiments of the present invention also provide a computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to perform the steps of any of the above-mentioned method embodiments when executed.
Alternatively, in this embodiment, the computer-readable storage medium may be configured to store a computer program for executing the method steps recited in the above embodiments:
optionally, in this embodiment, the computer-readable storage medium may include, but is not limited to: various media capable of storing computer programs, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Example 6
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Alternatively, in this embodiment, the processor may be configured to execute the method steps recited in the above embodiments through a computer program.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments and optional implementation manners, and this embodiment is not described herein again.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general purpose computing device, they may be centralized on a single computing device or distributed across a network of multiple computing devices, and alternatively, they may be implemented by program code executable by a computing device, such that they may be stored in a storage device and executed by a computing device, and in some cases, the steps shown or described may be performed in an order different than that described herein, or they may be separately fabricated into individual integrated circuit modules, or multiple ones of them may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention should be included in the protection scope of the present invention.

Claims (16)

1. A traffic prediction method applied to an Optical Line Terminal (OLT), characterized by comprising the following steps:
determining second traffic prediction information from: first cache information, first traffic information, second cache information;
the first buffer information is used to indicate a buffer occupancy state of a transmission container T-cont in a first dynamic bandwidth allocation DBA cycle, the first traffic information is used to indicate traffic sent by the T-cont in the first DBA cycle, the second buffer information is used to indicate a buffer occupancy state of the T-cont in a second DBA cycle, the second traffic prediction information is used to indicate a predicted value of the OLT for the second traffic information, and the second traffic information is the traffic sent by the T-cont in the second DBA cycle.
2. The method of claim 1, wherein determining the second traffic prediction information comprises:
determining remaining cache information according to the first cache information and the first traffic information, wherein the remaining cache information is used for indicating remaining cache of the T-cont when the first DBA cycle is ended;
and determining second flow prediction information according to the residual cache information and the second cache information.
3. A bandwidth allocation method applied to an OLT, the method comprising the traffic prediction method of claim 1 or 2, the bandwidth allocation method comprising:
determining first bandwidth prediction information according to the second traffic prediction information, wherein the first bandwidth prediction information is used for indicating a predicted value of a bandwidth required by the OLT for the T-cont;
and allocating the bandwidth for the T-cont according to at least the first bandwidth prediction information.
4. The method of claim 3, wherein said allocating bandwidth for said T-cont according to at least said first bandwidth prediction information comprises:
determining a traffic change state in the T-cont according to the first traffic prediction information and the second traffic prediction information; wherein the first traffic prediction information is used to indicate a prediction value of the OLT on the first traffic information in the first DBA period;
and adjusting the first bandwidth prediction information according to the traffic change state to determine second bandwidth prediction information, and allocating the bandwidth to the T-cont according to the second bandwidth prediction information.
5. The method of claim 4, wherein determining the traffic change status in the T-cont from the first traffic prediction information and the second traffic prediction information comprises:
and determining the traffic change state according to the relationship between a preset difference threshold and the difference between the first traffic prediction information and the second traffic prediction information.
6. The method of claim 4, wherein determining the traffic change status in the T-cont from the first traffic prediction information and the second traffic prediction information comprises:
determining the traffic change state according to the relationship between the first traffic prediction information and a preset minimum value and the relationship between the second traffic prediction information and a preset maximum value; or,
determining the traffic change state according to the relationship between the second traffic prediction information and a preset minimum value and the relationship between the first traffic prediction information and a preset maximum value.
7. The method of claim 4, wherein the adjusting the first bandwidth prediction information to determine second bandwidth prediction information according to the traffic change status comprises:
and determining the second bandwidth prediction information according to the first bandwidth prediction information or the first bandwidth prediction information and the first traffic information when the traffic change state in the T-cont is a steady state.
8. The method of claim 7, wherein determining the second bandwidth prediction information according to the first bandwidth prediction information or the first bandwidth prediction information and the first traffic information comprises:
acquiring the number of IDLE frames in the T-cont;
determining the first bandwidth prediction information as the second bandwidth prediction information in a case that the number of IDLE frames is less than or equal to a prediction threshold; or,
determining the second bandwidth prediction information according to the first bandwidth prediction information and the first traffic information in a case that the number of IDLE frames is greater than the prediction threshold; wherein the second bandwidth prediction information is greater than the first bandwidth prediction information;
wherein the prediction threshold is determined from the first traffic information.
9. The method of claim 4, wherein the adjusting the first bandwidth prediction information to determine second bandwidth prediction information according to the traffic change status comprises:
determining the second bandwidth prediction information according to the first bandwidth prediction information and the second cache information under the condition that the traffic change state in the T-cont is an unstable state; wherein the second bandwidth prediction information is greater than the first bandwidth prediction information.
10. The method according to any one of claims 4 to 9, wherein said allocating bandwidth for said T-cont according to said second bandwidth prediction information comprises:
allocating a bandwidth to the plurality of T-cont according to the second bandwidth prediction information corresponding to each T-cont, where the bandwidth includes at least one of:
fixed bandwidth, guaranteed bandwidth, best effort bandwidth, non-guaranteed bandwidth.
11. The method according to claim 10, wherein said allocating bandwidth for a plurality of said T-cont according to said second bandwidth prediction information corresponding to each of said T-cont further comprises:
and under the condition that the residual bandwidth exists, distributing the residual bandwidth for the plurality of T-cont according to the relation between the residual bandwidth and the sum of the request traffic of the plurality of T-cont.
12. The method of claim 11, wherein said allocating the remaining bandwidth for the plurality of T-cont comprises:
under the condition that the residual bandwidth is smaller than the sum of the requested traffic, distributing the residual bandwidth for the T-cont according to a preset first weight; or,
under the condition that the residual bandwidth is greater than or equal to the sum of the requested traffic, determining a second weight among the T-cont according to the requested traffic of each T-cont, and distributing the residual bandwidth for the T-cont according to the second weight.
13. A traffic prediction device applied to an Optical Line Terminal (OLT), the device comprising:
a prediction module to determine second traffic prediction information from: first cache information, first traffic information, second cache information;
the first buffer information is used to indicate a buffer occupancy state of a transmission container T-cont in a first dynamic bandwidth allocation DBA cycle, the first traffic information is used to indicate traffic sent by the T-cont in the first DBA cycle, the second buffer information is used to indicate a buffer occupancy state of the T-cont in a second DBA cycle, the second traffic prediction information is used to indicate a predicted value of the OLT for the second traffic information, and the second traffic information is the traffic sent by the T-cont in the second DBA cycle.
14. A bandwidth allocation apparatus applied to an OLT, wherein the apparatus comprises the traffic prediction apparatus of claim 13, and the bandwidth allocation apparatus comprises:
a determining module, configured to determine first bandwidth prediction information according to the second traffic prediction information, where the first bandwidth prediction information is used to indicate a predicted value of a bandwidth required by the OLT for the T-cont;
and the allocation module is used for allocating the bandwidth for the T-cont at least according to the first bandwidth prediction information.
15. A computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to perform the method of any one of claims 1 to 2 and 3 to 12 when the computer program is executed.
16. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and the processor is configured to execute the computer program to perform the method of any one of claims 1 to 2 and 3 to 12.
CN201911380635.8A 2019-12-27 2019-12-27 Traffic prediction method and device, and bandwidth allocation method and device Pending CN113055762A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911380635.8A CN113055762A (en) 2019-12-27 2019-12-27 Traffic prediction method and device, and bandwidth allocation method and device
PCT/CN2020/138728 WO2021129687A1 (en) 2019-12-27 2020-12-23 Traffic prediction method and device and bandwidth allocation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911380635.8A CN113055762A (en) 2019-12-27 2019-12-27 Traffic prediction method and device, and bandwidth allocation method and device

Publications (1)

Publication Number Publication Date
CN113055762A (en) 2021-06-29

Family

ID=76506809

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911380635.8A Pending CN113055762A (en) Traffic prediction method and device, and bandwidth allocation method and device

Country Status (2)

Country Link
CN (1) CN113055762A (en)
WO (1) WO2021129687A1 (en)


Also Published As

Publication number Publication date
WO2021129687A1 (en) 2021-07-01


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination