WO2018149102A1 - Method and device for reducing transmission latency of high-priority data, and storage medium - Google Patents

Method and device for reducing transmission latency of high-priority data, and storage medium Download PDF

Info

Publication number
WO2018149102A1
Authority
WO
WIPO (PCT)
Prior art keywords
cache
data
priority
scheduling
index
Application number
PCT/CN2017/097325
Other languages
French (fr)
Chinese (zh)
Inventor
孙月 (Sun Yue)
胡达 (Hu Da)
钱晓东 (Qian Xiaodong)
肖洁 (Xiao Jie)
Original Assignee
深圳市中兴微电子技术有限公司 (Shenzhen ZTE Microelectronics Technology Co., Ltd.)
Application filed by 深圳市中兴微电子技术有限公司 (Shenzhen ZTE Microelectronics Technology Co., Ltd.)
Publication of WO2018149102A1 publication Critical patent/WO2018149102A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/24 Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L 47/28 Flow control; Congestion control in relation to timing considerations
    • H04L 47/283 Flow control; Congestion control in relation to timing considerations in response to processing delays, e.g. caused by jitter or round trip time [RTT]
    • H04L 47/50 Queue scheduling
    • H04L 47/62 Queue scheduling characterised by scheduling criteria
    • H04L 47/6215 Individual queue per QOS, rate or priority
    • H04Q SELECTING
    • H04Q 11/00 Selecting arrangements for multiplex systems
    • H04Q 11/0001 Selecting arrangements for multiplex systems using optical switching
    • H04Q 11/0062 Network aspects
    • H04Q 11/0067 Provisions for optical access or distribution networks, e.g. Gigabit Ethernet Passive Optical Network (GE-PON), ATM-based Passive Optical Network (A-PON), PON-Ring
    • H04Q 2011/0064 Arbitration, scheduling or medium access control aspects
    • H04Q 2011/0088 Signalling aspects

Definitions

  • The present invention relates to the field of network communication technologies, and in particular to a method, an apparatus, and a storage medium for reducing the transmission delay of high-priority data in a congested state in an optical network unit (ONU).
  • For an optical line terminal (OLT), bandwidth control is based on Static Bandwidth Allocation (SBA) and Dynamic Bandwidth Allocation (DBA) mechanisms.
  • For example, a Gigabit Passive Optical Network (GPON) device allocates bandwidth per Transmission Container (TCONT). A TCONT is a scheduling time slot of GPON time-division multiplexing; only data belonging to that time slot can be transmitted within it. The Logical Link Identifier (LLID) is the corresponding scheduling time slot of EPON time-division multiplexing, and the principle is the same as for GPON. With multiple TCONTs/LLIDs, bandwidth grants are polled among the TCONTs/LLIDs.
  • To ensure line-rate performance in this case, the traffic management (TM) egress module usually needs a large cache to store entire data packets; this reduces the overhead of MAC framing, improves bandwidth utilization, and prevents large numbers of IDLE frames from being transmitted while waiting for a whole packet.
  • However, the OLT allocates bandwidth per TCONT\LLID, not per priority queue, whereas the internal scheduling of the TM egress module is mostly based on priority queues, and the intermediate cache stores the data produced by that internal scheduling. Viewed end to end, two problems can occur. First, packets are written into the cache in order; once low-priority data (packets) has been scheduled, newly arriving high-priority data cannot be scheduled immediately (otherwise packet errors and reordering could occur) and must wait, so congestion arises, as shown in FIG. 1. Second, the time-slot scheduling behaviour of the ONU means the next scheduling opportunity for that data is more than one slot away. The delay of high-priority data is therefore amplified, degrading quality of service and user experience.
  • To solve this, embodiments of the present invention provide a method, an apparatus, and a storage medium for reducing the high-priority data transmission delay.
  • An embodiment of the present invention provides a method for reducing the high-priority data transmission delay, applied in an ONU. The method includes: storing the data of a packet in a cache according to descriptor information parsed from the packet; calculating the cache index of the current scheduling according to the current bandwidth information delivered by the OLT, a preset priority scheduling rule, and the state of the cache corresponding to the bandwidth information; and reading data from the cache corresponding to the cache index and finally transmitting it to the OLT.
  • In the above solution, when data is read from the cache corresponding to the cache index, the method further includes: recording the current state of all caches and the scheduling information of this round for subsequent data scheduling, the scheduling information including at least the currently scheduled data and the cache in which it is stored.
  • In the above solution, storing the data of the packet in the cache according to the descriptor information includes: storing the data indexed by the TCONT\LLID ID and the data priority ID in the descriptor information, so that data of different priorities under the same TCONT\LLID ID is written into different caches, a higher data priority corresponding to a larger priority ID.
  • In the above solution, calculating the cache index of the current scheduling according to the current bandwidth information delivered by the OLT, the preset priority scheduling rule, and the state of the cache corresponding to the bandwidth information, based on an absolute priority (SP) scheduling method, includes: determining the cache of the current scheduling according to the TCONT\LLID ID in the current bandwidth information and the cache index; selecting, from a mapping table of TCONT\LLID IDs to data priorities in the preset priority scheduling rule, the highest-priority data ID corresponding to the TCONT\LLID ID in the current bandwidth information; determining the priority ID corresponding to that data ID according to the descriptor information of the packet; and, when the cache corresponding to that priority ID is in a non-empty state, taking the index of that cache as the cache index of the current scheduling.
  • Alternatively, the calculation based on the SP scheduling method may include: determining the cache of the current scheduling according to the TCONT\LLID ID in the current bandwidth information and the cache index; determining which of the determined caches are non-empty; and, according to the preset priority scheduling rule, taking the cache index with the largest priority ID among the non-empty caches as the cache index of the current scheduling.
  • An embodiment of the present invention further provides an apparatus for reducing the high-priority data transmission delay, applied in an ONU. The apparatus includes:
  • a cache module configured to store the data of a packet in a cache according to descriptor information parsed from the packet, and to read data from the cache corresponding to the cache index of the current scheduling transmitted by the scheduling module and finally transmit it to the OLT;
  • a scheduling module configured to calculate the cache index of the current scheduling based on the SP scheduling method according to the current bandwidth information delivered by the OLT, the preset priority scheduling rule, and the state of the cache corresponding to the bandwidth information, and to send the cache index to the cache module.
  • In the above solution, when data is read from the cache corresponding to the cache index, the scheduling module is further configured to record the current state of all caches and the scheduling information of this round for subsequent data scheduling; the scheduling information includes at least the currently scheduled data and the cache in which it is stored.
  • In the above solution, the cache module includes:
  • a cache unit configured to store the data of the packet in the cache according to the descriptor information parsed from the packet;
  • a reading unit configured to read data from the cache corresponding to the cache index of the current scheduling transmitted by the scheduling module, and finally transmit it to the OLT.
  • In the above solution, the cache unit includes:
  • a descriptor cache unit configured to cache the descriptor information;
  • a data cache unit configured to store the data of the packet indexed by the TCONT\LLID ID and the data priority ID in the descriptor information held by the descriptor cache unit, so that data of different priorities under the same TCONT\LLID ID is written into different caches, a higher data priority corresponding to a larger priority ID.
  • In one embodiment, the scheduling module includes:
  • a first cache determining unit configured to determine the cache of the current scheduling according to the TCONT\LLID ID in the current bandwidth information and the cache index;
  • a mapping unit configured to select, from a mapping table of TCONT\LLID IDs to data priorities in the preset priority scheduling rule, the highest-priority data ID corresponding to the TCONT\LLID ID in the current bandwidth information;
  • a first index determining unit configured to determine the priority ID corresponding to the data ID according to the descriptor information of the packet, and, when the cache corresponding to that priority ID is in a non-empty state, to take the index of that cache as the cache index of the current scheduling.
  • In another embodiment, the scheduling module includes:
  • a second cache determining unit configured to determine the cache of the current scheduling according to the TCONT\LLID ID in the current bandwidth information and the cache index;
  • a state determining unit configured to determine which of the determined caches are non-empty;
  • a second index determining unit configured to take, according to the preset priority scheduling rule, the cache index with the largest priority ID among the non-empty caches as the cache index of the current scheduling.
  • Embodiments of the present invention also provide a computer storage medium storing a computer program configured to perform the above method of reducing a high priority data transmission delay.
  • According to the method, apparatus, and storage medium for reducing the high-priority data transmission delay provided by the embodiments of the present invention, the data of a packet is stored in a cache according to descriptor information parsed from the packet; the cache index of the current scheduling is calculated according to the current bandwidth information delivered by the OLT, the preset priority scheduling rule, and the state of the cache corresponding to the bandwidth information; and data is read from the cache corresponding to the cache index and finally transmitted to the OLT.
  • Based on the preset priority scheduling rule, the highest-priority data is transmitted first while lower-priority packets either do not pass or pass afterwards, which effectively reduces the delay of high-priority data and improves the user experience.
  • FIG. 1 is a schematic diagram of the delay experienced by high-priority data in the prior art;
  • FIG. 2 is a schematic structural diagram of an apparatus for reducing the high-priority data transmission delay in a congestion state in an ONU according to a scenario embodiment of the present invention;
  • FIG. 3 is a schematic flowchart of a method for reducing the high-priority data transmission delay in a congestion state in an ONU according to a scenario embodiment of the present invention;
  • FIG. 4 is a first schematic structural diagram of the egress data cache management module in a scenario embodiment of the present invention;
  • FIG. 5 is a schematic structural diagram of the egress data cache scheduling module in a scenario embodiment of the present invention;
  • FIG. 6 is a second schematic structural diagram of the egress data cache management module in a scenario embodiment of the present invention;
  • FIG. 7 is an implementation flowchart of a method for reducing the high-priority data transmission delay in a congestion state in an ONU according to an embodiment of the present invention;
  • FIG. 8 is a schematic structural diagram of an apparatus for reducing the high-priority data transmission delay in a congestion state in an ONU according to an embodiment of the present invention;
  • FIG. 9 is a schematic structural diagram of the cache module according to an embodiment of the present invention;
  • FIG. 10 is a schematic structural diagram of the cache unit according to an embodiment of the present invention;
  • FIG. 11 is a schematic structural diagram of the scheduling module according to an embodiment of the present invention;
  • FIG. 12 is a schematic structural diagram of the scheduling module according to another embodiment of the present invention.
  • Embodiments of the present invention provide a method and an apparatus for reducing the high-priority data transmission delay; here, high-priority data refers to high-priority data in a congestion state in an ONU.
  • An embodiment of the present invention provides a method for reducing the high-priority data transmission delay. As shown in FIG. 7, the method includes:
  • Step 701: store the data of the packet in a cache according to the descriptor information parsed from the packet;
  • Step 702: calculate the cache index of the current scheduling according to the current bandwidth information delivered by the OLT, the preset priority scheduling rule, and the state of the cache corresponding to the bandwidth information;
  • Step 703: read data from the cache corresponding to the cache index and finally transmit it to the OLT.
  • Here, the preset priority scheduling rule may be a priority scheduling rule pre-configured according to the needs of the user's services, i.e. the data to be scheduled preferentially can be set arbitrarily; alternatively, the preset priority scheduling rule may be to take the highest-priority data among the priorities of the data stored in the caches as the data to be scheduled preferentially.
  • The descriptor information may include the data priority parsed from the packet, bandwidth information (such as the TCONT\LLID ID), the data ID, and so on.
  • In this way, based on the preset priority scheduling rule, the highest-priority data is transmitted first while lower-priority packets either do not pass or pass afterwards, which effectively reduces the delay of high-priority data and improves the user experience. Meanwhile, since the delay of high-priority data is markedly reduced, the occupation of cache resources is reduced accordingly (no large cache is needed to store congested data), lowering the system's operating cost.
  • In the above solution, when data is read from the cache corresponding to the cache index, the method further includes: recording the current state of all caches and the scheduling information of this round for subsequent data scheduling, the scheduling information including at least the currently scheduled data and the cache in which it is stored.
  • In the above solution, storing the data of the packet in the cache according to the descriptor information includes: storing the data indexed by the TCONT\LLID ID and the data priority ID in the descriptor information, so that data of different priorities under the same TCONT\LLID ID is written into different caches; a higher data priority corresponds to a larger priority ID.
  • Here, the descriptor information may be stored separately from the data: the descriptor information is stored in a descriptor cache and the data in a data cache, and for the same data (queue) the data cache and the descriptor cache share the same cache index.
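  • As an illustration of this indexing scheme (not code from the patent), the following minimal C sketch composes a cache index from the TCONT\LLID ID and the priority ID, assuming the GPON maxima quoted later in this description (32 TCONTs, 8 priority queues each); the macro and function names are hypothetical.

```c
#include <stdio.h>

#define MAX_TCONT 32   /* maximum number of TCONTs in GPON mode (see below) */
#define MAX_PRI    8   /* maximum priority queues per TCONT (see below) */

/* Cache index = TCONT/LLID ID combined with the priority ID, so data of
 * different priorities under the same TCONT/LLID is written to different
 * caches. A higher priority corresponds to a larger priority ID. */
static unsigned cache_index(unsigned tcont_id, unsigned pri_id)
{
    return tcont_id * MAX_PRI + pri_id;
}

int main(void)
{
    /* Same TCONT, different priorities -> different caches; the descriptor
     * cache and the data cache of one queue share this same index. */
    printf("TCONT 3, pri 7 -> cache %u\n", cache_index(3, 7));
    printf("TCONT 3, pri 0 -> cache %u\n", cache_index(3, 0));
    return 0;
}
```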
  • In the above solution, calculating the cache index of the current scheduling according to the current bandwidth information delivered by the OLT, the preset priority scheduling rule, and the state of the cache corresponding to the bandwidth information, based on the SP scheduling method, includes: determining the cache of the current scheduling according to the TCONT\LLID ID in the current bandwidth information and the cache index; selecting, from the mapping table of TCONT\LLID IDs to data priorities in the preset priority scheduling rule, the highest-priority data ID corresponding to the TCONT\LLID ID in the current bandwidth information; determining the priority ID corresponding to that data ID from the descriptor information of the packet; and, when the cache corresponding to that priority ID is non-empty, taking its index as the cache index of the current scheduling.
  • Alternatively, the calculation may include: determining the cache of the current scheduling according to the TCONT\LLID ID in the current bandwidth information and the cache index; determining which of the determined caches are non-empty; and, according to the preset priority scheduling rule, taking the cache index with the largest priority ID among the non-empty caches as the cache index of the current scheduling.
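  • A minimal sketch of the second (SP) variant above, assuming the non-empty states of the eight priority caches under the granted TCONT\LLID ID are presented as a bitmap (an assumed representation; the names are illustrative):

```c
#include <stdio.h>

#define MAX_PRI 8

/* Strict-priority (SP) pick: scan from the highest priority ID downward and
 * return the first cache that is non-empty, or -1 if all are empty.
 * Bit i of `nonempty` is 1 when the cache for priority ID i holds data. */
static int sp_select(unsigned char nonempty)
{
    for (int pri = MAX_PRI - 1; pri >= 0; pri--)
        if (nonempty & (1u << pri))
            return pri;
    return -1; /* nothing schedulable under this TCONT/LLID */
}

int main(void)
{
    /* Priorities 1 and 5 hold data: SP schedules priority 5 first. */
    unsigned char state = (1u << 1) | (1u << 5);
    printf("scheduled priority ID: %d\n", sp_select(state));
    return 0;
}
```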
  • The embodiments of the present invention can thus flexibly designate, according to service requirements, which data is transmitted preferentially, or preferentially send high-priority data according to the priorities carried in the packets. The method is flexible and effectively reduces the data delay, in particular the delay of high-priority data under congestion, thereby guaranteeing quality of service and improving the user experience.
  • Embodiments of the present invention further provide a storage medium storing executable instructions which, when executed, perform any of the above steps for reducing the high-priority data transmission delay.
  • The foregoing storage medium may include, but is not limited to, a USB flash drive, a read-only memory (ROM), and a random access memory (RAM).
  • An embodiment of the present invention further provides an apparatus for reducing the high-priority data transmission delay, used to implement the foregoing embodiments and preferred implementations; what has already been described is not repeated.
  • As used below, the terms "module" and "unit" may be implemented as software, hardware, or a combination of both realizing a predetermined function. As shown in FIG. 8, the apparatus includes:
  • a cache module 81 configured to store the data of the packet in a cache according to the descriptor information parsed from the packet, and to read data from the cache corresponding to the cache index of the current scheduling transmitted by the scheduling module and finally transmit it to the OLT;
  • a scheduling module 82 configured to calculate the cache index of the current scheduling based on the SP scheduling method according to the current bandwidth information delivered by the OLT, the preset priority scheduling rule, and the state of the cache corresponding to the bandwidth information, and to send the cache index to the cache module.
  • Here, the preset priority scheduling rule may be a priority scheduling rule pre-configured according to the needs of the user's services, i.e. the data to be scheduled preferentially can be set arbitrarily; alternatively, the preset priority scheduling rule may be to take the highest-priority data among the priorities of the data stored in the caches as the data to be scheduled preferentially.
  • The descriptor information may include the data priority parsed from the packet, bandwidth information (such as the TCONT\LLID ID), the data ID, and so on.
  • In this way, based on the preset priority scheduling rule, the highest-priority data is transmitted first while lower-priority packets either do not pass or pass afterwards, which effectively reduces the delay of high-priority data and improves the user experience. Meanwhile, since the delay of high-priority data is markedly reduced, the occupation of cache resources is reduced accordingly (no large cache is needed to store congested data), lowering the system's operating cost.
  • When data is read from the cache corresponding to the cache index, the scheduling module 82 is further configured to record the current state of all caches and the scheduling information of this round for subsequent data scheduling; the scheduling information includes at least the currently scheduled data and the cache in which it is stored.
  • the cache module 81 includes:
  • a cache unit 811 configured to store the data of the packet in the cache according to the descriptor information parsed from the packet;
  • a reading unit 812 configured to read data from the cache corresponding to the cache index of the current scheduling transmitted by the scheduling module, and finally transmit it to the OLT.
  • the cache unit 811 includes:
  • a descriptor cache unit 8111 configured to cache the descriptor information;
  • a data cache unit 8112 configured to store the data of the packet indexed by the TCONT\LLID ID and the data priority ID in the descriptor information held by the descriptor cache unit 8111, so that data of different priorities under the same TCONT\LLID ID is written into different caches, a higher data priority corresponding to a larger priority ID.
  • In one embodiment, the scheduling module 82 may include:
  • a first cache determining unit 821 configured to determine the cache of the current scheduling according to the TCONT\LLID ID in the current bandwidth information and the cache index;
  • a mapping unit 822 configured to select, from a mapping table of TCONT\LLID IDs to data priorities in the preset priority scheduling rule, the highest-priority data ID corresponding to the TCONT\LLID ID in the current bandwidth information;
  • a first index determining unit 823 configured to determine the priority ID corresponding to the data ID according to the descriptor information of the packet, and, when the cache corresponding to that priority ID is in a non-empty state, to take the index of that cache as the cache index of the current scheduling.
  • In another embodiment, the scheduling module 82 may include:
  • a second cache determining unit 824 configured to determine the cache of the current scheduling according to the TCONT\LLID ID in the current bandwidth information and the cache index;
  • a state determining unit 825 configured to determine which of the determined caches are non-empty;
  • a second index determining unit 826 configured to take, according to the preset priority scheduling rule, the cache index with the largest priority ID among the non-empty caches as the cache index of the current scheduling.
  • The embodiments of the present invention can thus flexibly designate, according to service requirements, which data is transmitted preferentially, or preferentially send high-priority data according to the priorities carried in the packets. The method is flexible and effectively reduces the data delay, in particular the delay of high-priority data under congestion, thereby guaranteeing quality of service and improving the user experience.
  • This embodiment provides another apparatus for reducing the high-priority data transmission delay in a congestion state in an ONU, comprising the following modules:
  • an internal traffic management module A, a data cache management module B (corresponding to the cache module 81), a data scheduling management module C (corresponding to the scheduling module 82), and a bandwidth information management module D (the four modules are referred to below as A, B, C, and D), where:
  • the internal traffic management module A is configured to generate descriptor information for the packets inside the ONU;
  • the data cache management module B is configured to store the corresponding data content according to the descriptor information, writing it into the cache in order;
  • the data scheduling management module C is configured to receive the bandwidth information sent by the bandwidth information management module D and, according to the software-configured rule (the preset priority scheduling rule), to perform SP scheduling, calculating the cache index of the current scheduling and providing it to the data cache management module B;
  • based on this information (the cache index of the current scheduling), the data cache management module B reads the data out of the cache and provides it to the bandwidth information management module D, completing one full scheduling round.
  • The scheduling method based on the above apparatus may include the following steps.
  • First step: A is responsible for the ONU's internal traffic management, including QoS functions such as rate limiting, shaping, and scheduling, and finally provides queue descriptor information of different priorities to B.
  • Here, descriptors are pre-read for buffering; that is, even when the OLT has issued no bandwidth for sending, A pre-schedules a certain number of data packets to be written into B, keeping the read-out of B's cache continuous and reducing the generation of IDLE frames.
  • The index of the cache inside B is TCONT\LLID ID + priority ID; that is, packets of different priorities under the same TCONT\LLID ID are written into different cache spaces.
  • Second step: B reports the current cache status to D, indicating which caches under which TCONT\LLID IDs hold valid (non-empty) data that can be read; based on the bandwidth information currently delivered by the OLT and the cache status reported by B, D notifies C of the TCONT\LLID ID and other information that can be scheduled this time.
  • Third step: according to that TCONT\LLID ID, the pre-configured mapping between TCONT\LLID IDs and data priorities, and the queue (cache) status currently readable under the TCONT\LLID ID, C uses the SP method to calculate the highest-priority queue ID, which is transmitted to B as the cache index; B reads the corresponding data out to D according to this scheduling index, completing one scheduling round.
  • C also records the status of all priority-queue caches under all TCONT\LLID IDs together with the current scheduling information, so as to maintain the integrity of packet scheduling and prevent packet errors and out-of-order delivery.
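  • The three steps above amount to the round sketched below; the data layout and function names are an illustrative reconstruction of the A/B/C/D interaction, not code from the patent.

```c
#include <stdio.h>
#include <string.h>

#define MAX_TCONT 32
#define MAX_PRI    8

/* B: a fill count per (TCONT, priority) cache. */
struct cache_b {
    int depth[MAX_TCONT][MAX_PRI];
};

/* C: SP scheduling over the caches B reports as non-empty. */
static int schedule_c(const struct cache_b *b, int tcont)
{
    for (int pri = MAX_PRI - 1; pri >= 0; pri--)
        if (b->depth[tcont][pri] > 0)
            return pri;
    return -1;
}

int main(void)
{
    struct cache_b b;
    memset(&b, 0, sizeof b);

    /* A pre-schedules packets into B: TCONT 2 holds pri-0 and pri-6 data. */
    b.depth[2][0] = 3;
    b.depth[2][6] = 1;

    /* D receives a grant for TCONT 2 and notifies C; C computes the index. */
    int grant_tcont = 2;
    int pri = schedule_c(&b, grant_tcont);

    /* B reads from the selected cache and hands the data to D. */
    if (pri >= 0) {
        b.depth[grant_tcont][pri]--;
        printf("read TCONT %d, pri %d; %d fragment(s) left\n",
               grant_tcont, pri, b.depth[grant_tcont][pri]);
    }
    return 0;
}
```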
  • FIG. 2 is a schematic structural diagram of the apparatus for reducing the high-priority data transmission delay in a congestion state in an ONU. As shown in FIG. 2, the apparatus includes a descriptor scheduling module 201, an egress descriptor cache 202, an egress data cache management module 203, an egress data cache scheduling module 204, and a bandwidth information processing module 205. The descriptor scheduling module 201 corresponds to A above, the egress descriptor cache 202 and the egress data cache management module 203 together form B, the egress data cache scheduling module 204 corresponds to C, and the bandwidth information processing module 205 corresponds to D.
  • the descriptor scheduling module 201 provides schedulable descriptors to the egress descriptor cache 202, which buffers them;
  • the bandwidth information processing module 205 provides the currently scheduled TCONT ⁇ LLID information (TCONT ⁇ LLID ID) to the egress data cache scheduling module 204;
  • the classification configuration module 207 provides pre-configured priority information to the egress data cache scheduling module 204;
  • the egress data cache scheduling module 204, using the cache depth and backpressure information (i.e. the cache state) provided by the egress data cache management module 203 and the descriptor cache depth and corresponding backpressure information (i.e. the cache state) provided by the egress descriptor cache 202, combined with the TCONT\LLID information and the pre-configured priority information, internally outputs the address of the currently scheduled descriptor to the egress descriptor cache 202 and the egress data cache management module 203 in absolute-priority fashion, reads the corresponding descriptor and data out of the egress descriptor cache 202 and the egress data cache management module 203, and sends them to the bandwidth information processing module 205.
  • FIG. 3 is a schematic flowchart of a method for reducing a high-priority data transmission delay in a congestion state in an ONU according to the embodiment of the present invention. As shown in FIG. 3, the method includes:
  • S301: A packet enters the ONU.
  • S302: Determine, based on the empty indication signal of the egress descriptor cache 202, whether a descriptor can be written, and determine whether the packet has descriptors that can be pre-scheduled (i.e. whether there is a non-empty queue in the packet). If there is a non-empty queue, proceed to S303; otherwise, keep checking (repeat S302).
  • S303: Write the descriptors of the pre-schedulable queue into the corresponding egress descriptor cache 202.
  • The index of this cache is TCONT\LLID ID + PRI ID; that is, the priorities corresponding to different TCONT\LLID IDs are stored separately, and a lookup table indicates which priority queue corresponds to which TCONT\LLID ID.
  • The maximum number of TCONTs is 32, and the maximum number of priority queues per TCONT is 8.
  • The cache resources can be configured flexibly: one of several fixed mapping relationships can be selected, or the mapping can be configured by upper-layer software.
  • S304: According to the current bandwidth information (i.e. the TCONT\LLID ID that can be scheduled), the status information of all priority queues under that TCONT\LLID ID in the current egress descriptor cache, the currently saved TCONT\LLID ID and corresponding priority information, and the storage status information of the data cache (egress data cache management module) corresponding to the currently schedulable TCONT\LLID ID, perform one round of SP scheduling to determine the TCONT\LLID ID and corresponding priority (PRI) ID of the current scheduling.
  • In GPON mode the ONU schedules by fragments; that is, within a period of time, different fragments can be scheduled under different TCONTs, and an entire packet need not be read out at once. To preserve the integrity and continuity of a packet, the current scheduling status of all queues must be maintained: if a packet of one priority has already been scheduled under the current TCONT, packets of other priorities cannot be scheduled, to prevent out-of-order delivery. In this case the currently saved TCONT\LLID ID + PRI ID is used and S307 is performed; the remaining packets or new packets corresponding to that TCONT\LLID ID + PRI ID are then dispatched through S305, and execution continues with S306 for scheduling use.
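  • A minimal sketch of the per-TCONT out-of-order protection described in this step, assuming a simple "locked priority" record per TCONT (the encoding is an assumption, not from the patent):

```c
#include <stdio.h>

#define PRI_NONE (-1)

/* Per-TCONT record of the priority currently being fragmented, if any. */
struct tcont_state {
    int locked_pri;   /* PRI_NONE when no packet is mid-flight */
};

/* Returns the priority allowed to be scheduled now: the locked one while a
 * packet is still being read out in fragments, otherwise the SP winner. */
static int allowed_pri(const struct tcont_state *s, int sp_winner)
{
    return (s->locked_pri != PRI_NONE) ? s->locked_pri : sp_winner;
}

int main(void)
{
    struct tcont_state s = { .locked_pri = 2 };   /* pri 2 is mid-packet */
    /* Even though pri-7 data arrived, pri 2 keeps the slot to stay in order. */
    printf("scheduled pri: %d\n", allowed_pri(&s, 7));
    s.locked_pri = PRI_NONE;                      /* packet finished */
    printf("scheduled pri: %d\n", allowed_pri(&s, 7));
    return 0;
}
```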
  • S305: Write the packet data into the following module (the egress data cache management module 203) according to the descriptor information, performing fragmentation.
  • S306: The packet is stored in the corresponding egress buffer (a FIFO, in the egress data cache management module 203) under the current TCONT\LLID + PRI ID. The FIFO size is configurable, taking the configured maximum fragment length into account, and the cache index is TCONT\LLID ID + PRI ID; that is, the priority queues corresponding to different TCONT\LLID IDs are stored separately. A current mapping table corresponding to TCONT\LLID ID + PRI ID must be maintained for the judgment in S304. In addition, all priority-queue states under each TCONT\LLID ID are aggregated and reflected as one total schedulable state per TCONT\LLID ID, which is fed back to the passive optical network (PON) MAC.
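  • The aggregation at the end of S306 collapses the per-priority cache states of one TCONT\LLID into a single schedulable bit for the PON MAC. A minimal sketch, with an assumed bitmap layout:

```c
#include <stdio.h>

/* Collapse the per-priority non-empty bits of one TCONT/LLID into a single
 * "this TCONT/LLID has schedulable data" bit, as reported to the PON MAC. */
static unsigned tcont_schedulable(unsigned char pri_nonempty)
{
    return pri_nonempty != 0;
}

int main(void)
{
    printf("%u\n", tcont_schedulable(0x00)); /* all queues empty -> 0 */
    printf("%u\n", tcont_schedulable(0x22)); /* pri 1 and 5 non-empty -> 1 */
    return 0;
}
```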
  • S307: The read-out data packet is transmitted to the OLT, and the MAC module extracts the corresponding bandwidth information for the egress data cache scheduling module.
  • the classification configuration module 207 provides a classification configuration corresponding to each descriptor.
  • The configuration supports software configuration or automatic hardware identification (for example, based on the Differentiated Services Code Point (DSCP)). When software configuration is in effect, software can specify which cache channel under which TCONT/LLID ID has the highest priority, and this can be specified arbitrarily.
  • As shown in FIG. 4, unit 401 is the basic unit of minimum cache division. It includes an address generation module 403 (ADDR_GEN), configured to calculate the address for each update, and a cache control module 404 (FIFO_CTRL), configured to generate the empty and full signals according to the read and write enables. The start address and end address handled by the address generation module 403 are determined by cfg_mode, the flexible configuration mode mentioned for the classification configuration module 207. These units are followed by a dual-port, 1-write/1-read RAM 402 that stores the data; different 401 units generate different RAM read/write addresses, achieving cache sharing and dynamic partitioning.
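  • A hedged sketch of one 401 unit: an address generator walking a configurable [start, end) window of the shared RAM, with empty/full derived from a fill counter; the field and function names are illustrative, not from the patent.

```c
#include <stdio.h>

/* One minimal cache unit (the "401" above): ADDR_GEN walks a configurable
 * [start, end) window of the shared dual-port RAM, FIFO_CTRL derives the
 * empty/full signals from a fill counter. Overflow checks are omitted. */
struct fifo401 {
    unsigned start, end;      /* window set by cfg_mode-style configuration */
    unsigned wr, rd, fill;    /* write/read addresses and occupancy */
};

static void fifo_init(struct fifo401 *f, unsigned start, unsigned end)
{
    f->start = f->wr = f->rd = start;
    f->end = end;
    f->fill = 0;
}

static int fifo_full(const struct fifo401 *f)  { return f->fill == f->end - f->start; }
static int fifo_empty(const struct fifo401 *f) { return f->fill == 0; }

/* On write enable: emit the current address, then wrap within the window. */
static unsigned fifo_wr_addr(struct fifo401 *f)
{
    unsigned a = f->wr;
    f->wr = (f->wr + 1 == f->end) ? f->start : f->wr + 1;
    f->fill++;
    return a;
}

static unsigned fifo_rd_addr(struct fifo401 *f)
{
    unsigned a = f->rd;
    f->rd = (f->rd + 1 == f->end) ? f->start : f->rd + 1;
    f->fill--;
    return a;
}

int main(void)
{
    struct fifo401 f;
    unsigned ram[1024];               /* the shared dual-port RAM (402) */
    fifo_init(&f, 64, 72);            /* an 8-entry slice of the shared RAM */

    ram[fifo_wr_addr(&f)] = 42;       /* write one word into this slice */
    printf("empty=%d full=%d\n", fifo_empty(&f), fifo_full(&f));
    printf("data=%u\n", ram[fifo_rd_addr(&f)]);
    return 0;
}
```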
  • The implementations of the egress descriptor cache 202 and the egress data cache management module 203 here are FIFO-based, and the caches are partitioned by TCONT\LLID ID + pri ID; that is, the FIFOs of all pri IDs under one TCONT\LLID ID form one group, with one FIFO per pri ID.
  • The number of pris under each TCONT\LLID can be allocated flexibly, and these caches are partitioned dynamically. In GPON mode the maximum number of TCONTs is 32; the maximum number of LLIDs is 8; and the number of pris supported in GPON mode can vary with the number of configured TCONTs. With a total of a FIFOs, the number of FIFOs per TCONT is a/t (t being the number of TCONTs), and the number of FIFOs under each LLID is a/l (l being the number of LLIDs); that is, all TCONTs and LLIDs share the caches so as to maximize cache utilization. The number of supported pris can be chosen according to the number of caches, TCONTs, and LLIDs, and can be configured flexibly, for example as 8 or 4; in this embodiment the cache division is expressed by a software-written configuration, with the dynamic partitioning logic sizing the buffers according to the configuration mode.
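  • The sharing rule above reduces to simple division over the FIFO pool. A minimal sketch, assuming a total of a = 256 FIFOs (the pool size is hypothetical; the TCONT and LLID maxima are the ones quoted above):

```c
#include <stdio.h>

/* Dynamic partitioning: all TCONTs/LLIDs share the same pool of caches.
 * With a total of `a` FIFOs, each of `t` TCONTs gets a/t FIFOs (i.e. a/t
 * supported priorities), and each of `l` LLIDs gets a/l. */
int main(void)
{
    unsigned a = 256;                        /* assumed total FIFO count */
    unsigned t = 32, l = 8;                  /* maxima from this description */
    printf("FIFOs per TCONT: %u\n", a / t);  /* 8 priorities per TCONT */
    printf("FIFOs per LLID:  %u\n", a / l);  /* 32 per LLID */
    return 0;
}
```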
  • The structure of the egress data cache scheduling module 204 is shown in FIG. 5. The module consists of a mapping module 501 and a scheduling module 502. N ways of status information correspond to the N ways of caches, and each bit of the N-way information represents the state of one pri queue under a TCONT\LLID. According to the selected mapping relationship, the mapping module 501 maps the N ways of status information into N×M ways, ensuring that each TCONT\LLID has M ways of pri status information, so that the subsequent scheduling stage can use the same module structure for selection and can be extended flexibly according to the configuration. The N×M ways of cache status information, the N×M ways of egress descriptor status information, and the N×M ways of current scheduling status information are then ANDed way by way to obtain the input information of the scheduling module 502. Each group of M ways is scheduled on the SP principle: the highest-priority queue is scheduled first, until no data of that priority remains at the current moment, then the next-highest-priority queue is scheduled, and so on. The N sub-schedulers output the N ways of final scheduling information, which then enter a MUX module; according to the current scheduling read unit indicated by the PON MAC input, the MUX determines which way's information is output to the cache device so that the data is read.
  • An embodiment of the present disclosure further provides another structure for the egress data cache management module 203. Whereas the structure in FIG. 4 is based on a FIFO architecture, this embodiment implements a cache structure based on linked-list management, as shown in FIG. 6, comprising a linked-list control module 601, an address management module 602, a linked-list pointer storage module 603, and a data content storage module 604.
  • When 601 receives the descriptor of a packet to be written, it raises a free-address request to 602; 602 allocates a free address to 601 according to the occupancy of its internal addresses and saves the current address and the next-hop address into 603; according to the address returned by 602, 601 writes the current packet content into 604 for storage. When 601 receives the descriptor of a packet to be read, it raises the address request corresponding to that packet to 602; 602, according to the content of the head pointer, reads the saved next-hop address out of 603 and returns it to 601; after obtaining the address, 601 reads the packet out of 604.
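  • A compact sketch of this linked-list variant, with 602 handing out free addresses, 603 holding next-hop pointers, and 604 holding content; the sizes and names are assumptions for illustration.

```c
#include <stdio.h>

#define NADDR 8
#define NIL  (-1)

static int next_hop[NADDR];            /* 603: linked-list pointer storage   */
static int payload[NADDR];             /* 604: data content storage          */
static int free_head;                  /* 602: head of the free-address list */
static int q_head = NIL, q_tail = NIL; /* 601: one queue's head/tail         */

static void addr_init(void)
{
    for (int i = 0; i < NADDR - 1; i++) next_hop[i] = i + 1;
    next_hop[NADDR - 1] = NIL;
    free_head = 0;
}

/* 601 write path: request a free address from 602, link it behind the tail
 * in 603, store the content in 604. */
static void q_write(int data)
{
    int a = free_head;
    free_head = next_hop[a];
    next_hop[a] = NIL;
    payload[a] = data;
    if (q_tail != NIL) next_hop[q_tail] = a; else q_head = a;
    q_tail = a;
}

/* 601 read path: 602 follows the head pointer, 603 yields the next-hop
 * address, the content is read from 604 and the address is recycled. */
static int q_read(void)
{
    int a = q_head;
    int data = payload[a];
    q_head = next_hop[a];
    if (q_head == NIL) q_tail = NIL;
    next_hop[a] = free_head;           /* return the address to the free list */
    free_head = a;
    return data;
}

int main(void)
{
    addr_init();
    q_write(10);
    q_write(20);
    printf("%d %d\n", q_read(), q_read());
    return 0;
}
```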
  • Embodiments of the present invention can be provided as a method, a system, or a computer program product. Accordingly, the present invention can take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware. Moreover, the invention can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage and optical storage) containing computer-usable program code.
  • These computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device, and the instruction device implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • In summary, the technical solutions of the embodiments of the present invention transmit high-priority data preferentially based on a preset priority scheduling rule, with lower-priority packets either not passing or passing afterwards, which effectively reduces the delay of high-priority data and improves the user experience.
  • Meanwhile, since the delay of high-priority data is markedly reduced (in measurements, from 8 ms to about 500 us), the occupation of cache resources is reduced (no large cache is needed to store congested data), lowering the system's operating cost.

Abstract

Provided in embodiments of the present invention are a method and device for reducing a transmission latency of high-priority data, and a storage medium. The method comprises: storing, according to packet descriptor information obtained by parsing a packet, data of the packet correspondingly into a buffer; computing, according to current bandwidth information sent by an optical line terminal (OLT), a predetermined priority-based scheduling rule, and a buffer state corresponding to the bandwidth information, a buffer index of a current scheduling operation; and reading the data from the buffer corresponding to the buffer index, and transmitting the same to the OLT.

Description

Method and device for reducing high-priority data transmission delay, and storage medium
Cross-reference to related applications
This application is based on, and claims priority to, Chinese Patent Application No. 201710091051.3, filed on February 20, 2017, the entire contents of which are incorporated herein by reference.
Technical field
The present invention relates to the field of network communication technologies, and in particular to a method, an apparatus, and a storage medium for reducing the transmission delay of high-priority data in a congested state in an optical network unit (ONU).
Background
With the development of network application technologies, more and more data traffic is transmitted in networks. To guarantee the quality of service of network applications, guaranteeing priority and delay in network equipment is very important.
For optical line terminals (OLTs), bandwidth control is based on Static Bandwidth Allocation (SBA) and Dynamic Bandwidth Allocation (DBA) mechanisms. For example, Gigabit Passive Optical Network (GPON) devices allocate bandwidth per Transmission Container (TCONT). A TCONT is a scheduling time slot of GPON time-division multiplexing; only data belonging to that time slot can be transmitted within it. The Logical Link Identifier (LLID) is the scheduling time slot of EPON time-division multiplexing, and the principle is the same as for GPON. With multiple TCONTs/LLIDs, bandwidth grants are polled among the TCONTs/LLIDs. To guarantee line-rate performance in this case, the traffic management (TM) egress module usually needs a large cache to store entire data packets, which reduces MAC framing overhead, improves bandwidth utilization, and prevents large numbers of IDLE frames from being transmitted while waiting for a whole packet.
It can be seen that the OLT allocates bandwidth per TCONT\LLID, not per priority queue, whereas the internal scheduling of the TM egress module is mostly based on priority queues, and the intermediate cache stores the data produced after the TM egress module's internal scheduling. From an end-to-end scheduling perspective, the following problems may occur. First, packets are written into the cache in order; once low-priority data (packets) has been scheduled, newly arriving high-priority data cannot be scheduled immediately (to avoid packet errors and reordering) and must wait, so congestion occurs, as shown in FIG. 1. Second, the time-slot scheduling behaviour of the ONU means the next scheduling of the packet must wait more than one slot. The delay of high-priority data is therefore amplified, affecting quality of service, and the user experience is poor.
Summary
To solve the existing technical problems, embodiments of the present invention provide a method, an apparatus, and a storage medium for reducing the high-priority data transmission delay.
To achieve the above objective, the technical solutions of the embodiments of the present invention are implemented as follows.
An embodiment of the present invention provides a method for reducing the high-priority data transmission delay, applied in an ONU. The method includes:
storing the data of a packet in a cache according to descriptor information parsed from the packet;
calculating the cache index of the current scheduling according to the current bandwidth information delivered by the OLT, a preset priority scheduling rule, and the state of the cache corresponding to the bandwidth information;
reading data from the cache corresponding to the cache index and finally transmitting it to the OLT.
In the above solution, when data is read from the cache corresponding to the cache index, the method further includes:
recording the current state of all caches and the scheduling information of this round for subsequent data scheduling; the scheduling information includes at least the currently scheduled data and the cache in which it is stored.
In the above solution, storing the data of the packet in the cache according to the descriptor information includes:
storing the data of the packet indexed by the TCONT\LLID ID and the data priority ID in the descriptor information, so that data of different priorities under the same TCONT\LLID ID is written into different caches; a higher data priority corresponds to a larger priority ID.
In the above solution, calculating the cache index of the current scheduling according to the current bandwidth information delivered by the OLT, the preset priority scheduling rule, and the state of the cache corresponding to the bandwidth information, based on an absolute priority (SP) scheduling method, includes:
determining the cache of the current scheduling according to the TCONT\LLID ID in the current bandwidth information and the cache index;
selecting, from a mapping table of TCONT\LLID IDs to data priorities in the preset priority scheduling rule, the highest-priority data ID corresponding to the TCONT\LLID ID in the current bandwidth information;
determining the priority ID corresponding to the data ID according to the descriptor information of the packet;
when the cache corresponding to the priority ID is determined to be in a non-empty state, taking the index of that cache as the cache index of the current scheduling.
In the above solution, calculating the cache index of the current scheduling according to the current bandwidth information delivered by the OLT, the preset priority scheduling rule, and the state of the cache corresponding to the bandwidth information, based on the SP scheduling method, includes:
determining the cache of the current scheduling according to the TCONT\LLID ID in the current bandwidth information and the cache index;
determining which of the determined caches are in a non-empty state;
according to the preset priority scheduling rule, taking the cache index with the largest priority ID among the cache indices of the non-empty caches as the cache index of the current scheduling.
An embodiment of the present invention further provides an apparatus for reducing the high-priority data transmission delay, applied in an ONU. The apparatus includes:
a cache module configured to store the data of a packet in a cache according to descriptor information parsed from the packet, and to read data from the cache corresponding to the cache index of the current scheduling transmitted by the scheduling module and finally transmit it to the OLT;
a scheduling module configured to calculate the cache index of the current scheduling based on the SP scheduling method according to the current bandwidth information delivered by the OLT, the preset priority scheduling rule, and the state of the cache corresponding to the bandwidth information, and to send the cache index to the cache module.
In the above solution, when data is read from the cache corresponding to the cache index,
the scheduling module is further configured to record the current state of all caches and the scheduling information of this round for subsequent data scheduling; the scheduling information includes at least the currently scheduled data and the cache in which it is stored.
In the above solution, the cache module includes:
a cache unit configured to store the data of the packet in the cache according to the descriptor information parsed from the packet;
a reading unit configured to read data from the cache corresponding to the cache index of the current scheduling transmitted by the scheduling module, and finally transmit it to the OLT.
In the above solution, the cache unit includes:
a descriptor cache unit configured to cache the descriptor information;
a data cache unit configured to store the data of the packet indexed by the TCONT\LLID ID and the data priority ID in the descriptor information stored by the descriptor cache unit, so that data of different priorities under the same TCONT\LLID ID is written into different caches; a higher data priority corresponds to a larger priority ID.
In one embodiment, the scheduling module includes:
a first cache determining unit configured to determine the cache of the current scheduling according to the TCONT\LLID ID in the current bandwidth information and the cache index;
a mapping unit configured to select, from a mapping table of TCONT\LLID IDs to data priorities in the preset priority scheduling rule, the highest-priority data ID corresponding to the TCONT\LLID ID in the current bandwidth information;
a first index determining unit configured to determine the priority ID corresponding to the data ID according to the descriptor information of the packet, and, when the cache corresponding to the priority ID is determined to be in a non-empty state, to take the index of that cache as the cache index of the current scheduling.
In another embodiment, the scheduling module includes:
a second cache determining unit configured to determine the cache of the current scheduling according to the TCONT\LLID ID in the current bandwidth information and the cache index;
a state determining unit configured to determine which of the determined caches are in a non-empty state;
a second index determining unit configured to take, according to the preset priority scheduling rule, the cache index with the largest priority ID among the cache indices of the non-empty caches as the cache index of the current scheduling.
An embodiment of the present invention further provides a computer storage medium storing a computer program configured to perform the above method for reducing the high-priority data transmission delay.
According to the method, apparatus, and storage medium for reducing the high-priority data transmission delay provided by the embodiments of the present invention, the data of a packet is stored in a cache according to descriptor information parsed from the packet; the cache index of the current scheduling is calculated according to the current bandwidth information delivered by the OLT, a preset priority scheduling rule, and the state of the cache corresponding to the bandwidth information; and data is read from the cache corresponding to the cache index and finally transmitted to the OLT. Based on the preset priority scheduling rule, the embodiments of the present invention transmit high-priority data preferentially while lower-priority packets either do not pass or pass afterwards, which effectively reduces the delay of high-priority data and improves the user experience; meanwhile, since the delay of high-priority data is markedly reduced (in measurements, from 8 ms to about 500 us), the occupation of cache resources is reduced accordingly (no large cache is needed to store congested data), lowering the system's operating cost.
Brief description of the drawings
FIG. 1 is a schematic diagram of the delay experienced by high-priority data in the prior art;
FIG. 2 is a schematic structural diagram of an apparatus for reducing the high-priority data transmission delay in a congestion state in an ONU according to a scenario embodiment of the present invention;
FIG. 3 is a schematic flowchart of a method for reducing the high-priority data transmission delay in a congestion state in an ONU according to a scenario embodiment of the present invention;
FIG. 4 is a first schematic structural diagram of the egress data cache management module in a scenario embodiment of the present invention;
FIG. 5 is a schematic structural diagram of the egress data cache scheduling module in a scenario embodiment of the present invention;
FIG. 6 is a second schematic structural diagram of the egress data cache management module in a scenario embodiment of the present invention;
FIG. 7 is an implementation flowchart of a method for reducing the high-priority data transmission delay in a congestion state in an ONU according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of an apparatus for reducing the high-priority data transmission delay in a congestion state in an ONU according to an embodiment of the present invention;
FIG. 9 is a schematic structural diagram of the cache module according to an embodiment of the present invention;
FIG. 10 is a schematic structural diagram of the cache unit according to an embodiment of the present invention;
FIG. 11 is a schematic structural diagram of the scheduling module according to an embodiment of the present invention;
FIG. 12 is a schematic structural diagram of the scheduling module according to another embodiment of the present invention.
DETAILED DESCRIPTION
The present invention is described in detail below with reference to specific embodiments.
Embodiments of the present invention provide a method and apparatus for reducing the transmission latency of high-priority data; here, high-priority data refers to high-priority data in an ONU under congestion.
An embodiment of the present invention provides a method for reducing the transmission latency of high-priority data. As shown in FIG. 7, the method includes:
Step 701: storing the data in a packet in a corresponding cache according to the descriptor information parsed from the packet;
Step 702: calculating the cache index for the current scheduling round according to the current bandwidth information delivered by the OLT, a preset priority scheduling rule and the state of the cache corresponding to the bandwidth information;
Step 703: reading the data out of the cache corresponding to the cache index and finally transmitting it to the OLT.
Here, the preset priority scheduling rule may be a priority scheduling rule pre-configured according to the needs of the user service, i.e., the data to be scheduled first can be set arbitrarily; alternatively, the preset priority scheduling rule may be to take, among the priorities of the data stored in the caches, the data of the highest priority as the data to be scheduled first. The descriptor information may include the data priority parsed from the packet, bandwidth information (such as the TCONT\LLID ID), a data ID, and the like.
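Purely as an illustrative sketch and not part of the patent text, the descriptor information named here can be modeled as a small record; the field names and widths below are assumptions:

```c
#include <stdint.h>

/* Hypothetical layout of the descriptor information described above:
 * the parsed data priority, the bandwidth information (TCONT\LLID ID)
 * and a data ID. Field names and widths are illustrative assumptions. */
typedef struct {
    uint8_t  tcont_llid_id;  /* bandwidth information: which TCONT/LLID */
    uint8_t  pri_id;         /* data priority parsed from the packet */
    uint16_t data_id;        /* identifies the data (queue) */
} pkt_descriptor;
```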
Based on the preset priority scheduling rule, the embodiments of the present invention let high-priority data be transmitted first, while low-priority packets are blocked or transmitted afterwards, which effectively reduces the latency of high-priority data and improves the user experience; at the same time, since the latency of high-priority data is markedly reduced, the occupation of cache resources is reduced accordingly (no large cache is needed to store congested data), lowering system operating costs.
In an embodiment of the present invention, when the data is read out of the cache corresponding to the cache index, the method further includes:
recording the current state of all caches together with the scheduling information of the current round for use in subsequent data scheduling, the scheduling information of the current round including at least the data scheduled in this round and the cache in which it is stored.
In an embodiment of the present invention, storing the data in the packet in a corresponding cache according to the descriptor information includes:
storing the data in the packet using, as an index, the TCONT\LLID ID in the descriptor information together with the priority ID of the data, so that data of different priorities under the same TCONT\LLID ID is written into different caches; the higher the priority of the data, the larger the corresponding priority ID.
In practical applications, the descriptor information may be stored separately from the data: the descriptor information is kept in a descriptor cache and the data in a data cache, and for the same data (queue) the data cache and the descriptor cache share the same cache index.
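As a minimal sketch of this indexing rule (the constants and helper are assumptions based on the GPON figures given later, at most 32 TCONTs and 8 priority queues each), one flat index can address both the descriptor cache and the data cache of a queue:

```c
#include <stdint.h>

#define NUM_TCONT 32  /* assumed upper bound on TCONT\LLID IDs */
#define NUM_PRI    8  /* assumed priority queues per TCONT\LLID */

/* Flat cache index: data of different priorities under the same
 * TCONT\LLID ID lands in different caches, and a larger pri_id
 * means a higher priority. The same index addresses both the
 * descriptor cache and the data cache of one queue. */
static inline uint16_t cache_index(uint8_t tcont_llid_id, uint8_t pri_id)
{
    return (uint16_t)tcont_llid_id * NUM_PRI + pri_id;
}
```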
In one embodiment, calculating the cache index for the current scheduling round according to the current bandwidth information delivered by the OLT, the preset priority scheduling rule and the state of the cache corresponding to the bandwidth information, based on the SP scheduling method, includes:
determining the caches for the current scheduling round according to the TCONT\LLID ID in the current bandwidth information and the cache indexes;
selecting, from a mapping table between TCONT\LLID IDs and data priorities in the preset priority scheduling rule, the data ID of the highest priority corresponding to the TCONT\LLID ID in the current bandwidth information;
determining the priority ID corresponding to the data ID according to the descriptor information of the packet;
when the state of the cache corresponding to the priority ID is determined to be non-empty, determining the index of that cache as the cache index for the current scheduling round.
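A sketch of this first variant, assuming the preset rule is held as a per-TCONT\LLID table of priority IDs ordered from highest to lowest; the table layout, the names, and the fall-through to the next entry when a cache is empty are assumptions in the spirit of SP scheduling:

```c
#include <stdint.h>
#include <stdbool.h>

#define NUM_TCONT 32
#define NUM_PRI    8

/* Assumed encoding of the preset mapping table: for each TCONT\LLID
 * ID, the priority IDs listed from highest to lowest priority. */
static uint8_t pri_map[NUM_TCONT][NUM_PRI];

/* Non-empty flags of the caches, indexed as tcont * NUM_PRI + pri. */
static bool cache_nonempty[NUM_TCONT * NUM_PRI];

/* Returns the cache index for this round, or -1 if nothing is ready. */
int select_by_mapping_table(uint8_t tcont_llid_id)
{
    for (int rank = 0; rank < NUM_PRI; rank++) {
        /* highest-priority entry of the mapping table first */
        uint8_t pri_id = pri_map[tcont_llid_id][rank];
        int idx = tcont_llid_id * NUM_PRI + pri_id;
        if (cache_nonempty[idx])
            return idx;    /* non-empty: this is the scheduled index */
    }
    return -1;             /* all caches under this grant are empty */
}
```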
In another embodiment, calculating the cache index for the current scheduling round according to the current bandwidth information delivered by the OLT, the preset priority scheduling rule and the state of the cache corresponding to the bandwidth information, based on the SP scheduling method, includes:
determining the caches for the current scheduling round according to the TCONT\LLID ID in the current bandwidth information and the cache indexes;
identifying, among the determined caches, those whose state is non-empty;
determining, according to the preset priority scheduling rule, the cache index with the largest priority ID among the cache indexes corresponding to the non-empty caches as the cache index for the current scheduling round.
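A sketch of this second variant under the same assumed flat layout; since a larger priority ID means a higher priority, scanning downward from the largest ID realizes the rule directly:

```c
#include <stdint.h>
#include <stdbool.h>

#define NUM_TCONT 32
#define NUM_PRI    8

static bool cache_nonempty[NUM_TCONT * NUM_PRI];

/* Second variant: among the non-empty caches of the granted
 * TCONT\LLID ID, pick the one with the largest priority ID. */
int select_highest_nonempty(uint8_t tcont_llid_id)
{
    for (int pri_id = NUM_PRI - 1; pri_id >= 0; pri_id--) {
        int idx = tcont_llid_id * NUM_PRI + pri_id;
        if (cache_nonempty[idx])
            return idx;
    }
    return -1; /* no data buffered under this grant */
}
```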
It can be seen that the embodiments of the present invention can flexibly set the data to be transmitted first according to service needs, and can also transmit high-priority data first according to the priority of the data in the packets. The implementation is flexible and effectively reduces data latency, especially the latency of high-priority data under congestion, guaranteeing quality of service and improving the user experience.
An embodiment of the present invention further provides a storage medium storing executable instructions which, when executed, are configured to perform any step of the above method for reducing the transmission latency of high-priority data.
Optionally, in this embodiment, the storage medium may include, but is not limited to, various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk or an optical disc.
An embodiment of the present invention further provides an apparatus for reducing the transmission latency of high-priority data, used to implement the above embodiments and preferred implementations; what has already been described is not repeated. As used below, the terms "module" and "unit" may denote a combination of software and/or hardware implementing a predetermined function. As shown in FIG. 8, the apparatus includes:
a cache module 81 configured to store the data in a packet in a corresponding cache according to the descriptor information parsed from the packet, and to read data out of the cache corresponding to the cache index of the current scheduling round transmitted by the scheduling module and finally transmit it to the OLT;
a scheduling module 82 configured to calculate the cache index for the current scheduling round according to the current bandwidth information delivered by the OLT, the preset priority scheduling rule and the state of the cache corresponding to the bandwidth information, based on the SP scheduling method, and to send it to the cache module.
Here, the preset priority scheduling rule may be a priority scheduling rule pre-configured according to the needs of the user service, i.e., the data to be scheduled first can be set arbitrarily; alternatively, the preset priority scheduling rule may be to take, among the priorities of the data stored in the caches, the data of the highest priority as the data to be scheduled first. The descriptor information may include the data priority parsed from the packet, bandwidth information (such as the TCONT\LLID ID), a data ID, and the like.
Based on the preset priority scheduling rule, the embodiments of the present invention let high-priority data be transmitted first, while low-priority packets are blocked or transmitted afterwards, which effectively reduces the latency of high-priority data and improves the user experience; at the same time, since the latency of high-priority data is markedly reduced, the occupation of cache resources is reduced accordingly (no large cache is needed to store congested data), lowering system operating costs.
In an embodiment of the present invention, when data is read out of the cache corresponding to the cache index,
the scheduling module 82 is further configured to record the current state of all caches together with the scheduling information of the current round for use in subsequent data scheduling, the scheduling information of the current round including at least the data scheduled in this round and the cache in which it is stored.
In an embodiment of the present invention, as shown in FIG. 9, the cache module 81 includes:
a cache unit 811 configured to store the data in a packet in a corresponding cache according to the descriptor information parsed from the packet;
a reading unit 812 configured to read data out of the cache corresponding to the cache index of the current scheduling round transmitted by the scheduling module and finally transmit it to the OLT.
In an embodiment of the present invention, as shown in FIG. 10, the cache unit 811 includes:
a descriptor cache unit 8111 configured to cache the descriptor information;
a data cache unit 8112 configured to store the data in the packet using, as an index, the TCONT\LLID ID in the descriptor information stored by the descriptor cache unit together with the priority ID of the data, so that data of different priorities under the same TCONT\LLID ID is written into different caches; the higher the priority of the data, the larger the corresponding priority ID.
In one embodiment, as shown in FIG. 11, the scheduling module 82 may include:
a first cache determining unit 821 configured to determine the caches for the current scheduling round according to the TCONT\LLID ID in the current bandwidth information and the cache indexes;
a mapping unit 822 configured to select, from a mapping table between TCONT\LLID IDs and data priorities in the preset priority scheduling rule, the data ID of the highest priority corresponding to the TCONT\LLID ID in the current bandwidth information;
a first index determining unit 823 configured to determine the priority ID corresponding to the data ID according to the descriptor information of the packet and, when the state of the cache corresponding to the priority ID is determined to be non-empty, to determine the index of that cache as the cache index for the current scheduling round.
In another embodiment, as shown in FIG. 12, the scheduling module 82 may include:
a second cache determining unit 824 configured to determine the caches for the current scheduling round according to the TCONT\LLID ID in the current bandwidth information and the cache indexes;
a state determining unit 825 configured to identify, among the determined caches, those whose state is non-empty;
a second index determining unit 826 configured to determine, according to the preset priority scheduling rule, the cache index with the largest priority ID among the cache indexes corresponding to the non-empty caches as the cache index for the current scheduling round.
It can be seen that the embodiments of the present invention can flexibly set the data to be transmitted first according to service needs, and can also transmit high-priority data first according to the priority of the data in the packets. The implementation is flexible and effectively reduces data latency, especially the latency of high-priority data under congestion, guaranteeing quality of service and improving the user experience.
The present invention is described below with reference to a specific embodiment scenario.
This embodiment provides another apparatus for reducing the transmission latency of high-priority data under congestion in an ONU, comprising the following modules:
an internal traffic management module A, a data cache management module B (equivalent to the cache module 81), a data scheduling management module C (equivalent to the scheduling module 82) and a bandwidth information management module D (hereinafter referred to simply as A, B, C and D); wherein
the internal traffic management module A is configured to generate the descriptor information of packets inside the ONU and send it to the data cache management module B; the data cache management module B is configured to store the corresponding data content according to the descriptor information, writing it into the caches in turn; the data scheduling management module C is configured to receive the bandwidth information delivered by the bandwidth information management module D and, according to the software-configured rule (the preset priority scheduling rule), perform SP scheduling, calculate the cache index for the current scheduling round and provide it to the data cache management module B; the data cache management module B reads the data out of the cache according to this information (the cache index of the current scheduling round) and provides it to the bandwidth information management module D, completing one full scheduling cycle.
The scheduling method based on the above apparatus may include the following steps:
Step 1: A is responsible for traffic management inside the ONU, including QoS functions such as rate limiting, shaping and scheduling, and finally provides queue descriptor information of different priorities to B.
Here, to enable the ONU to reach line rate, descriptors need to be pre-read and cached, i.e., when the OLT has granted no bandwidth, A schedules a certain number of packets in advance and writes them into B, maintaining continuity when B's caches are read out and emulating the generation of IDLE frames. At this point the index of B's internal caches is TCONT\LLID ID + priority ID, i.e., packets of different priorities under the same TCONT\LLID ID are written into different cache spaces.
Step 2: B reports the current cache state to D, indicating which TCONT\LLID IDs have valid (non-empty) data in their corresponding caches available for reading; D combines the bandwidth information currently delivered by the OLT with the cache state reported by B and notifies C of the TCONT\LLID ID and other information that can be scheduled in this round.
Step 3: according to the TCONT\LLID ID and the pre-configured mapping between TCONT\LLID IDs and data priorities, C uses the SP approach, combined with the state of the queues available for reading under the current TCONT\LLID ID (the cache state), to calculate the highest-priority queue ID of this round, which is passed to B as the cache index; B reads the corresponding data out according to this scheduling index and hands it to D, completing one scheduling cycle.
In addition, C records the state of all priority queue caches under all TCONT\LLID IDs together with the current scheduling information, so as to keep packet scheduling intact and prevent corrupted or out-of-order packets.
FIG. 2 is a schematic structural diagram of the apparatus for reducing the transmission latency of high-priority data under congestion in an ONU in practical applications. As shown in FIG. 2, the apparatus includes: a descriptor scheduling module 201, an egress descriptor cache 202, an egress data cache management module 203, an egress data cache scheduling module 204 and a bandwidth information processing module 205; the descriptor scheduling module 201 corresponds to A above, the egress descriptor cache 202 and the egress data cache management module 203 together form B, the egress data cache scheduling module 204 corresponds to C, and the bandwidth information processing module 205 corresponds to D; wherein
the descriptor scheduling module 201 provides schedulable descriptors to the egress descriptor cache 202 for descriptor caching;
the bandwidth information processing module 205 provides the currently scheduled TCONT\LLID information (TCONT\LLID ID) to the egress data cache scheduling module 204;
the classification configuration module 207 provides pre-configured priority information to the egress data cache scheduling module 204;
the egress data cache scheduling module 204, according to the cache size and back-pressure information (i.e., the cache state) provided by the egress data cache management module 203 and the descriptor cache size and corresponding back-pressure information (i.e., the cache state) provided by the egress descriptor cache 202, combined with the TCONT\LLID information and the pre-configured priority information, internally uses absolute priority to output the currently scheduled descriptor address to the egress descriptor cache 202 and the egress data cache management module 203, reads the corresponding descriptor and data out of them and sends these to the bandwidth information processing module 205.
FIG. 3 is a schematic flowchart of the method for reducing the transmission latency of high-priority data under congestion in an ONU in the scenario embodiment. As shown in FIG. 3, the method includes:
S301: a packet enters the ONU;
S302: determining, according to the empty/full indication signal of the egress descriptor cache 202, whether a descriptor can be written, and determining whether the packet contains descriptors that can be pre-scheduled (i.e., whether there is a non-empty queue); if there is a non-empty queue, proceeding to S303; otherwise, continuing the check (executing S302);
S303: writing the descriptors of the queues that can be pre-scheduled into the corresponding egress descriptor cache 202, the cache index being TCONT\LLID ID + PRI ID, i.e., the priorities corresponding to different TCONT\LLID IDs are stored separately; a lookup table is needed here to indicate which cache corresponds to which priority queue under which TCONT\LLID ID; the number of TCONTs is at most 32, with at most 8 priority queues per TCONT; the cache resources can be configured flexibly, and the mapping table can flexibly select among several fixed mapping relationships or be configured by upper-layer software;
S304: performing one level of SP scheduling to determine the TCONT\LLID ID and the corresponding priority (PRI) ID of this round, according to the current bandwidth information (i.e., the TCONT\LLID ID that can be scheduled), the state information of all priority queues under that TCONT\LLID ID in the current egress descriptor cache, the currently saved TCONT\LLID ID and corresponding priority information that have already been scheduled, and the storage state information of the data cache (the egress data cache management module) corresponding to the TCONT\LLID ID that can currently be scheduled;
In GPON mode the ONU schedules on a fragment basis, i.e., within a period of time different fragments under different TCONTs can be scheduled, and a whole packet is not necessarily read out at once. To maintain packet integrity and continuity, the scheduling state information of all current queues must be maintained: if a packet of one priority has already been scheduled under the current TCONT, packets of other priorities must not be scheduled, preventing reordering. At this point, if the currently saved TCONT\LLID ID + PRI ID is obtained, S307 is executed; the remaining packets or new packets corresponding to that TCONT\LLID ID + PRI ID are then scheduled out via S305, and S306 continues to be executed, for use by the current scheduling round.
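The anti-reordering rule of S304 can be sketched as a small per-TCONT record (all names are assumptions): once a fragment of one priority is in flight under a TCONT, only that priority may be scheduled there until the packet completes:

```c
#include <stdint.h>
#include <stdbool.h>

#define NUM_TCONT 32

/* Per-TCONT in-flight state: which priority currently owns the TCONT. */
typedef struct {
    bool    packet_in_flight;  /* a packet is only partially read out  */
    uint8_t locked_pri_id;     /* its priority; others must wait       */
} tcont_sched_state;

static tcont_sched_state sched_state[NUM_TCONT];

/* May pri_id be scheduled under this TCONT in the current round? */
bool may_schedule(uint8_t tcont, uint8_t pri_id)
{
    if (!sched_state[tcont].packet_in_flight)
        return true;                                 /* TCONT is free */
    return sched_state[tcont].locked_pri_id == pri_id; /* finish first */
}

/* Update the record after a fragment has been read out. */
void note_fragment(uint8_t tcont, uint8_t pri_id, bool end_of_packet)
{
    sched_state[tcont].packet_in_flight = !end_of_packet;
    sched_state[tcont].locked_pri_id = pri_id;
}
```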
S305: writing the packet data into the subsequent module (the egress data cache management module 203) according to the descriptor information, for fragmentation processing;
S306: storing the packet in the egress cache (a FIFO, corresponding to the egress data cache management module 203) under the current TCONT\LLID + PRI ID; the FIFO size is configurable, and the configuration of the maximum fragment length must be taken into account at the same time; the cache index is TCONT\LLID ID + PRI ID, i.e., the priority queues corresponding to different TCONT\LLID IDs are stored separately; a mapping table between the current cache state and the corresponding TCONT\LLID ID + PRI ID must be maintained here for the judgment in S304; in addition, the states of all priority queues under each TCONT\LLID ID are converted into a TCONT\LLID state (i.e., the queue states of all priorities under that TCONT\LLID ID are aggregated and OR-ed together to reflect the overall schedulable state of that TCONT\LLID ID), which is fed back to the passive optical network (PON) MAC;
S307: reading the packet from the corresponding FIFO according to the TCONT\LLID ID;
which priority's packet under the TCONT\LLID ID is read at this point is decided by the SP scheduling of S304 above; for the PON MAC the interface timing is unchanged, the processing being handled inside the ONU;
S308: transmitting the read packet to the OLT; the MAC module extracts the corresponding bandwidth information and passes it to the egress data cache scheduling module.
Each module in FIG. 2 is described in detail below.
The classification configuration module 207 provides the classification configuration corresponding to each descriptor. This configuration can be set by software or recognized automatically by hardware. If the software configuration is valid, the software itself specifies which cache under which TCONT/LLID ID has the highest priority, and this can be specified arbitrarily. Each cache is represented by 1 bit, so at most 32x8 = 256 bits of registers are used to implement the classification configuration of all the caches. Alternatively, the hardware can perform automatic recognition based on the information carried in the packet descriptors generated by the descriptor scheduling module 201, because when the descriptor scheduling module 201 in the TM parses a packet header it also obtains the differentiated services code point (DSCP) value or related priority information of the packet, which can be used by subsequent modules for scheduling. In this way the classification configuration module 207 can flexibly provide the priority information corresponding to each cache for flexible scheduling.
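For illustration only, the 32x8 = 256 one-bit entries could be packed into eight 32-bit software-writable registers; the register layout and helper names below are assumptions:

```c
#include <stdint.h>
#include <stdbool.h>

/* 32 TCONT/LLID IDs x 8 priority queues = 256 configuration bits,
 * packed into eight 32-bit software-writable registers. */
static uint32_t cls_cfg[8];

/* One bit per cache: set or query whether software marked this
 * TCONT/LLID + priority queue as the highest-priority one. */
void cls_cfg_set(uint8_t tcont, uint8_t pri, bool value)
{
    uint16_t bit = (uint16_t)tcont * 8 + pri;   /* 0..255 */
    if (value)
        cls_cfg[bit / 32] |=  (1u << (bit % 32));
    else
        cls_cfg[bit / 32] &= ~(1u << (bit % 32));
}

bool cls_cfg_get(uint8_t tcont, uint8_t pri)
{
    uint16_t bit = (uint16_t)tcont * 8 + pri;
    return (cls_cfg[bit / 32] >> (bit % 32)) & 1u;
}
```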
One structure of the egress data cache management module 203 is shown in FIG. 4, where 401 is the smallest basic cache partition unit, comprising an address generation module 403 (ADDR_GEN) configured to calculate the address for each update, and a cache control module 404 (FIFO_CTRL) configured to generate the empty/full signals according to the read and write enables. The partitioning of the address generation module 403, that is, its start address and end address, is determined by cfg_mode, which is the flexible configuration mode mentioned for the classification configuration module 207. Behind it is a dual-port 1-write-1-read RAM 402 used to store the data; by using multiple 401 units to generate different RAM read/write addresses, cache sharing and dynamic partitioning are achieved.
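The behavior of one such 401 unit can be sketched as a ring buffer over a configurable [start, end) window of the shared RAM; the field and function names are assumptions:

```c
#include <stdint.h>
#include <stdbool.h>

/* One basic partition unit (401): ADDR_GEN keeps the pointers within
 * a [start, end) window of the shared RAM set by cfg_mode; FIFO_CTRL
 * derives the empty/full signals from the element count. */
typedef struct {
    uint32_t start, end;   /* window in the shared RAM, from cfg_mode */
    uint32_t wr, rd;       /* write / read addresses                  */
    uint32_t count;        /* occupancy, for the empty/full signals   */
} fifo_unit;

static bool fifo_full(const fifo_unit *f)  { return f->count == f->end - f->start; }
static bool fifo_empty(const fifo_unit *f) { return f->count == 0; }

/* Advance an address, wrapping at the end of this unit's window. */
static uint32_t addr_next(const fifo_unit *f, uint32_t a)
{
    return (a + 1 == f->end) ? f->start : a + 1;
}

/* Returns the RAM address to write, or -1 when full. */
int64_t fifo_push_addr(fifo_unit *f)
{
    if (fifo_full(f)) return -1;
    uint32_t a = f->wr;
    f->wr = addr_next(f, f->wr);
    f->count++;
    return a;
}

/* Returns the RAM address to read, or -1 when empty. */
int64_t fifo_pop_addr(fifo_unit *f)
{
    if (fifo_empty(f)) return -1;
    uint32_t a = f->rd;
    f->rd = addr_next(f, f->rd);
    f->count--;
    return a;
}
```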
In the embodiment of the present invention, the egress descriptor cache 202 and the egress data cache management module 203 are implemented on a FIFO basis. The caches are partitioned on the basis of TCONT\LLID ID + PRI ID, i.e., the cache FIFOs of all PRI IDs corresponding to one TCONT\LLID ID form a group, with one FIFO per PRI ID. To save resources, with the total number of caches unchanged, the number of priorities under each TCONT\LLID can be allocated flexibly, and the caches can be partitioned dynamically according to the number of TCONTs\LLIDs and the PON mode. In GPON mode the number of TCONTs is at most 32, whereas in EPON mode the number of LLIDs is at most 8. Assuming the number of caches is a, the number of priorities supported in GPON mode can vary with the configured number of TCONTs t: the number of FIFOs per TCONT is a/t, and for the same number of caches the number of FIFOs per LLID is a/l, where l is the number of LLIDs. All TCONTs and LLIDs share the caches so as to maximize cache utilization. The number of supported priorities can thus be chosen according to the number of caches and the number of TCONTs and LLIDs, and can be flexibly configured as {8, 4, 2, 1}; in this way the cache partitioning can be implemented by software writing a configuration table, i.e., the logic dynamically partitions the cache sizes according to the configuration mode.
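A sketch of the partition arithmetic described here; the encoding of the {8, 4, 2, 1} choice as a lookup is an assumption:

```c
#include <stdio.h>

/* Pick the number of priority queues per TCONT\LLID from the allowed
 * set {8, 4, 2, 1}, given the cache total and the TCONT/LLID count. */
static int pri_per_group(int total_caches, int groups)
{
    int per_group = total_caches / groups;   /* a/t in GPON, a/l in EPON */
    const int allowed[] = {8, 4, 2, 1};
    for (int i = 0; i < 4; i++)
        if (per_group >= allowed[i])
            return allowed[i];
    return 0;  /* not enough caches for even one queue per group */
}

int main(void)
{
    /* Example figures only: a = 128 caches shared by all groups. */
    printf("GPON, 32 TCONTs: %d pri queues each\n", pri_per_group(128, 32));
    printf("GPON, 16 TCONTs: %d pri queues each\n", pri_per_group(128, 16));
    printf("EPON,  8 LLIDs : %d pri queues each\n", pri_per_group(128, 8));
    return 0;
}
```

Run as-is, the example reports 4, 8 and 8 priority queues per group for the three configurations.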
The structure of the egress data cache scheduling module 204 is shown in FIG. 5. The module consists of a mapping module 501 and a scheduling module 502. For later extensibility and compatibility, assume N status lines correspond to N caches partitioned as above; each bit of this N-line information represents the state of one PRI queue under a TCONT\LLID. To support at most M PRI queues, different mapping relationships are selected according to the configuration: the mapping module 501 maps the N lines of status information into N×M lines, ensuring that every TCONT\LLID has M lines of PRI status information, so that the subsequent scheduling module can be built from identical blocks and can later be extended flexibly by configuration. The resulting N×M lines of cache status information, N×M lines of ingress descriptor status information and N×M lines of current scheduling status information are AND-ed bit by bit to finally obtain the input information of the scheduling module 502. Each group of M input lines is then scheduled on the SP principle (i.e., the highest-priority queue is scheduled first until it holds no data at the current moment, then the next-highest-priority queue, and so on); N sub-schedulers output N lines of final scheduling information, which then enter the MUX module, and according to the current scheduling read unit input by the PON MAC it is decided which line of scheduling information is output to the cache device so that the data is read out.
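The bit-level selection can be sketched as follows, taking M = 8 so that each TCONT\LLID group occupies one byte of status bits; the vector and function names are assumptions:

```c
#include <stdint.h>

#define N 32  /* TCONT\LLID groups; M = 8 status bits per group */

/* Bitwise AND of the three status vectors: a queue is eligible only
 * if its cache is non-empty, a descriptor is ready, and the current
 * scheduling state allows it. Bit m of group n = PRI queue m. */
void resolve_schedule(const uint8_t cache_st[N],
                      const uint8_t desc_st[N],
                      const uint8_t sched_st[N],
                      int8_t winner_pri[N] /* out: -1 = none */)
{
    for (int n = 0; n < N; n++) {
        uint8_t eligible = cache_st[n] & desc_st[n] & sched_st[n];
        winner_pri[n] = -1;
        for (int m = 7; m >= 0; m--) {      /* SP: highest bit first */
            if (eligible & (1u << m)) {
                winner_pri[n] = (int8_t)m;
                break;
            }
        }
    }
}

/* MUX stage: the PON MAC's current grant picks which group's winner
 * is forwarded to the cache device. */
int8_t mux_select(const int8_t winner_pri[N], uint8_t granted_group)
{
    return winner_pri[granted_group];
}
```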
In addition, this scenario embodiment provides another schematic structure of the egress data cache management module 203. The structure in FIG. 4 above is based on a FIFO architecture, whereas this embodiment implements the cache architecture based on linked-list management. As shown in FIG. 6, it comprises a linked-list control module 601, an address management module 602, a linked-list pointer storage module 603 and a data content storage module 604, wherein
when 601 receives the descriptor of a packet to be written, it simultaneously requests a free address from 602; 602 allocates a free address according to the availability of its internal addresses and returns it to 601, while writing the current address and the next-hop address into 603 for safekeeping; 601 then writes the content of the current packet into 604 at the address returned by 602. When 601 receives the descriptor of a packet to be read, it requests the address corresponding to the packet from 602; 602 reads the saved next-hop address out of 603 according to the content of the head pointer and returns it to 601; having obtained the address, 601 reads the packet out of 604.
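A compact sketch of this linked-list variant (array sizes and names are assumptions): 602 is modeled as a free list, 603 as a next-pointer array and 604 as the payload slots; a queue is a (head, tail) pair initialized to NIL:

```c
#include <stdint.h>

#define SLOTS 64          /* assumed number of buffer cells */
#define NIL   0xFFFF

static uint16_t next_ptr[SLOTS];   /* 603: linked-list pointer storage   */
static uint32_t payload[SLOTS];    /* 604: data content storage          */
static uint16_t free_head;         /* 602: head of the free-address list */

void buf_init(void)
{
    for (uint16_t i = 0; i < SLOTS; i++)
        next_ptr[i] = i + 1;
    next_ptr[SLOTS - 1] = NIL;
    free_head = 0;
}

/* Write path: request a free address, chain it behind the queue tail
 * in 603, and store the content in 604. */
int buf_enqueue(uint16_t *head, uint16_t *tail, uint32_t data)
{
    if (free_head == NIL) return -1;       /* no free address available */
    uint16_t a = free_head;
    free_head = next_ptr[a];               /* pop from the free list */
    next_ptr[a] = NIL;
    payload[a] = data;
    if (*head == NIL) *head = a;           /* first cell of the queue */
    else              next_ptr[*tail] = a; /* link behind the old tail */
    *tail = a;
    return 0;
}

/* Read path: follow the head pointer, read the cell out of 604,
 * and return its address to the free list. */
int buf_dequeue(uint16_t *head, uint16_t *tail, uint32_t *data)
{
    if (*head == NIL) return -1;           /* queue empty */
    uint16_t a = *head;
    *data = payload[a];
    *head = next_ptr[a];                   /* next-hop address from 603 */
    if (*head == NIL) *tail = NIL;
    next_ptr[a] = free_head;               /* recycle the address */
    free_head = a;
    return 0;
}
```

Compared with the fixed FIFO windows of FIG. 4, such a scheme lets any queue grow into whatever free cells remain, at the cost of storing one next-pointer per cell.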
Those skilled in the art should understand that embodiments of the present invention may be provided as a method, a system or a computer program product. Therefore, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems) and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to operate in a particular manner, such that the instructions stored in that computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be loaded onto a computer or another programmable data processing device, such that a series of operational steps is performed on the computer or other programmable device to produce computer-implemented processing, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
The above are only preferred embodiments of the present invention and are not intended to limit the protection scope of the present invention.
INDUSTRIAL APPLICABILITY
According to the technical solutions of the embodiments of the present invention, high-priority data can be transmitted first based on a preset priority scheduling rule, while low-priority packets are blocked or transmitted afterwards, which effectively reduces the latency of high-priority data and improves the user experience; at the same time, since the latency of high-priority data is markedly reduced (in measurements, from 8 ms to about 500 µs), the occupation of cache resources is reduced accordingly (no large cache is needed to store congested data), lowering system operating costs.

Claims (12)

  1. A method for reducing the transmission latency of high-priority data, applied in an optical network unit (ONU), the method comprising:
    storing the data in a packet in a corresponding cache according to the descriptor information parsed from the packet;
    calculating the cache index for the current scheduling round according to the current bandwidth information delivered by an optical line terminal (OLT), a preset priority scheduling rule and the state of the cache corresponding to the bandwidth information;
    reading the data out of the cache corresponding to the cache index and finally transmitting it to the OLT.
  2. The method according to claim 1, wherein, when the data is read out of the cache corresponding to the cache index, the method further comprises:
    recording the current state of all caches together with the scheduling information of the current round for use in subsequent data scheduling, the scheduling information of the current round comprising at least the data scheduled in this round and the cache in which it is stored.
  3. The method according to claim 1 or 2, wherein storing the data in the packet in a corresponding cache according to the descriptor information comprises:
    storing the data in the packet using, as an index, the transmission container (TCONT)\logical link identifier (LLID) ID in the descriptor information together with the priority ID of the data, so that data of different priorities under the same TCONT\LLID ID is written into different caches; wherein the higher the priority of the data, the larger the corresponding priority ID.
  4. The method according to claim 3, wherein calculating the cache index for the current scheduling round according to the current bandwidth information delivered by the optical line terminal (OLT), the preset priority scheduling rule and the state of the cache corresponding to the bandwidth information, based on the absolute priority (SP) scheduling method, comprises:
    determining the caches for the current scheduling round according to the TCONT\LLID ID in the current bandwidth information and the cache indexes;
    selecting, from a mapping table between TCONT\LLID IDs and data priorities in the preset priority scheduling rule, the data ID of the highest priority corresponding to the TCONT\LLID ID in the current bandwidth information;
    determining the priority ID corresponding to the data ID according to the descriptor information of the packet;
    when the state of the cache corresponding to the priority ID is determined to be non-empty, determining the index of that cache as the cache index for the current scheduling round.
  5. The method according to claim 3, wherein calculating the cache index for the current scheduling round according to the current bandwidth information delivered by the optical line terminal (OLT), the preset priority scheduling rule and the state of the cache corresponding to the bandwidth information, based on the absolute priority (SP) scheduling method, comprises:
    determining the caches for the current scheduling round according to the TCONT\LLID ID in the current bandwidth information and the cache indexes;
    identifying, among the determined caches, those whose state is non-empty;
    determining, according to the preset priority scheduling rule, the cache index with the largest priority ID among the cache indexes corresponding to the non-empty caches as the cache index for the current scheduling round.
  6. An apparatus for reducing the transmission latency of high-priority data, applied in an optical network unit (ONU), the apparatus comprising:
    a cache module configured to store the data in a packet in a corresponding cache according to the descriptor information parsed from the packet, and to read data out of the cache corresponding to the cache index of the current scheduling round transmitted by a scheduling module and finally transmit it to the OLT;
    the scheduling module configured to calculate the cache index for the current scheduling round according to the current bandwidth information delivered by an optical line terminal (OLT), a preset priority scheduling rule and the state of the cache corresponding to the bandwidth information, based on the absolute priority (SP) scheduling method, and to send it to the cache module.
  7. The apparatus according to claim 6, wherein, when data is read out of the cache corresponding to the cache index,
    the scheduling module is further configured to record the current state of all caches together with the scheduling information of the current round for use in subsequent data scheduling, the scheduling information of the current round comprising at least the data scheduled in this round and the cache in which it is stored.
  8. The apparatus according to claim 6 or 7, wherein the cache module comprises:
    a cache unit configured to store the data in a packet in a corresponding cache according to the descriptor information parsed from the packet;
    a reading unit configured to read data out of the cache corresponding to the cache index of the current scheduling round transmitted by the scheduling module and finally transmit it to the OLT.
  9. The apparatus according to claim 8, wherein the cache unit comprises:
    a descriptor cache unit configured to cache the descriptor information;
    a data cache unit configured to store the data in the packet using, as an index, the transmission container (TCONT)\logical link identifier (LLID) ID in the descriptor information stored by the descriptor cache unit together with the priority ID of the data, so that data of different priorities under the same TCONT\LLID ID is written into different caches; wherein the higher the priority of the data, the larger the corresponding priority ID.
  10. The apparatus according to claim 9, wherein the scheduling module comprises:
    a first cache determining unit configured to determine the caches for the current scheduling round according to the TCONT\LLID ID in the current bandwidth information and the cache indexes;
    a mapping unit configured to select, from a mapping table between TCONT\LLID IDs and data priorities in the preset priority scheduling rule, the data ID of the highest priority corresponding to the TCONT\LLID ID in the current bandwidth information;
    a first index determining unit configured to determine the priority ID corresponding to the data ID according to the descriptor information of the packet and, when the state of the cache corresponding to the priority ID is determined to be non-empty, to determine the index of that cache as the cache index for the current scheduling round.
  11. The apparatus according to claim 9, wherein the scheduling module comprises:
    a second cache determining unit configured to determine the caches for the current scheduling round according to the TCONT\LLID ID in the current bandwidth information and the cache indexes;
    a state determining unit configured to identify, among the determined caches, those whose state is non-empty;
    a second index determining unit configured to determine, according to the preset priority scheduling rule, the cache index with the largest priority ID among the cache indexes corresponding to the non-empty caches as the cache index for the current scheduling round.
  12. A storage medium storing computer-executable instructions configured to perform the method for reducing the transmission latency of high-priority data according to any one of claims 1 to 5.