WO2020029680A1 - Docsis map preemption protocol - Google Patents

Docsis map preemption protocol

Info

Publication number
WO2020029680A1
WO2020029680A1 (PCT/CN2019/090918)
Authority
WO
WIPO (PCT)
Prior art keywords
mini
interval
data
management message
slots
Prior art date
Application number
PCT/CN2019/090918
Other languages
French (fr)
Inventor
Sanjay Gupta
Lifan XU
Original Assignee
Huawei Technologies Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. filed Critical Huawei Technologies Co., Ltd.
Publication of WO2020029680A1 publication Critical patent/WO2020029680A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/28Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/2854Wide area networks, e.g. public data networks
    • H04L12/2856Access arrangements, e.g. Internet access
    • H04L12/2869Operational details of access network equipments
    • H04L12/2878Access multiplexer, e.g. DSLAM
    • H04L12/2879Access multiplexer, e.g. DSLAM characterised by the network type on the uplink side, i.e. towards the service provider network
    • H04L12/2885Arrangements interfacing with optical systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/28Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/2801Broadband local area networks

Definitions

  • the disclosure generally relates to reducing latency in Data Over Cable Service Interface Specification (DOCSIS) applications.
  • the Data Over Cable Service Interface Specification is an international telecommunications standard that permits the addition of high-bandwidth data transfer via existing cable TV (CATV) systems.
  • DOCSIS is widely used by cable operators to offer Internet access through an existing hybrid fiber coaxial infrastructure.
  • the DOCSIS system uses a series of Cable Modem Termination Systems (CMTS) , each serving multiple cable modems at a subscriber’s premises, to route network traffic between the cable modems and a wide area network.
  • Other devices that recognize and support DOCSIS include HDTVs and Web-enabled set-top boxes for televisions.
  • a device for managing data flow during an interval includes a non-transitory memory storage comprising instructions, and one or more processors in communication with the memory.
  • the one or more processors execute the instructions to: receive a new bandwidth request during the interval; generate a remap management message during the interval, with the remap management message remapping at least some of the mini-slots that were allocated for receipt of data in a data management message sent during the interval; and output the remap management message during the interval to re-allocate the mini-slots.
  • the device is connected to a plurality of endpoints and each endpoint of the plurality of endpoints receives the data management message and the remap management message.
  • the device is connected to a plurality of endpoints and wherein the data management message includes the mini-slots allocated for at least a subset of the plurality of endpoints.
  • the one or more processors execute the instructions to receive data via the mini-slots during the interval, with the mini-slots being re-allocated in the remap management message.
  • the mini-slots allocated for receipt of data (which were defined in the data management message sent at a beginning of the interval) define next mini-slots in a next interval.
  • the one or more processors execute the instructions to generate the remap management message responsive to the new bandwidth request, the new bandwidth request being based on low latency service demand in an endpoint.
  • the new bandwidth request comprises a low latency service bandwidth request.
  • the new bandwidth request comprises a low latency service bandwidth request, wherein the one or more processors execute the instructions to receive the new bandwidth request and generate the remap management message on a data channel having a lowest latency compared to other data channels connecting with the device.
  • the new bandwidth request comprises a priority bandwidth request.
  • a computer-implemented method for managing data flow during an interval comprising: generating a data management message comprising mini-slots allocated for receipt of data during the interval; receiving a new bandwidth request during the interval; generating a remap management message during the interval, with the remap management message remapping at least some of the mini-slots that were allocated in the data management message; and outputting the remap management message during the interval to re-allocate the mini-slots.
  • the device is connected to a plurality of endpoints.
  • Each data management message includes mini-slots allocated for at least a subset of the plurality of endpoints.
  • Each remap management message includes re-allocated mini-slots which were originally allocated to one or more endpoints of the subset of the plurality of endpoints.
  • the mini-slots allocated for receipt of data define next mini-slots in a next interval.
  • the mini-slots allocated for receipt of data (which mini-slots were defined in the data management message sent at a beginning of the interval) define mini-slots in a next interval.
  • the new bandwidth request comprises a low latency service bandwidth request.
  • the new bandwidth request is received and the remap management message is output on a data channel having a lowest latency compared to other data channels connecting with the device.
  • the new bandwidth request comprises a priority bandwidth request.
  • a non-transitory computer-readable medium storing computer instructions for managing data flow in an endpoint during an interval, that when executed by one or more processors, cause the one or more processors to perform the steps of: receiving a data management message during the interval, with the data management message comprising mini-slots allocated for receipt of data; receiving a low latency service bandwidth request during the interval; generating a re-allocation request during the interval for a new allocation of the mini-slots; and receiving a remap management message during the interval, with the remap management message re-allocating at least some of the mini-slots allocated by the data management message.
  • the mini-slots allocated for receipt of data define next mini-slots in a next interval.
  • the steps include transmitting data during the interval in the re-allocated mini-slots and in the next interval in the next mini-slots.
  • the remap management message includes a service identifier for a low latency service and an offset defining when data transmission must end.
  • the remap management message uses an interval usage code that indicates the remapped mini-slots.
  • the re-allocation request is output and the remap management message is received on a data channel having the lowest latency compared to other data channels.
  • a device for managing data flow during an interval includes a request reception means for receiving, before an end of a regular interval, a priority bandwidth request; a remap means for generating a remap management message remapping mini-slots allocated for receipt of data defined in a data management message sent during the interval; and an output means for outputting the remap management message during the interval to re-allocate the mini-slots.
  • the device further includes a message generation means for generating a data management message during the interval, with the data management message defining the mini-slots allocated for receipt of data during the interval.
  • the device further includes a data reception means for receiving data during the interval from the mini-slots allocated in the remap management message.
  • Embodiments of the present disclosure advantageously reduce network latency in DOCSIS systems.
  • the technology can allow a CMTS to provide guaranteed bandwidth resources more quickly to endpoints requiring low latency service flows by eliminating waits due to MAP message cycling.
  • FIG. 1 illustrates the basic DOCSIS architecture including a cable modem termination system and a plurality of cable modems.
  • FIG. 2 is a flowchart describing a prior art data transmission process of the DOCSIS protocol.
  • FIG. 3 is a timing diagram which illustrates the negotiation between a CMTS and a CM for data channels between the CMTS and an endpoint, such as a CM.
  • FIG. 4 is a flowchart describing a new data transmission process for use in accordance with the DOCSIS protocol.
  • FIG. 5 is a timing diagram which illustrates the negotiation between a CMTS and a CM for data channels using an rMAP message between the CMTS and a CM.
  • FIG. 6 is a flowchart illustrating a process which occurs in a CM to generate an rMAP message.
  • FIG. 7 is a block diagram illustrating channel assignments in a MAP message and remapped channel assignments in an rMAP message.
  • FIG. 8 is a table illustrating an rMAP Protocol Data Unit.
  • FIG. 9 illustrates a block diagram of a processing system that can be used to implement various embodiments of a CMTS.
  • FIGS. 10 and 11 are timing diagrams illustrating alternative negotiation timing between a CMTS and a CM for data channels using an rMAP message between the CMTS and a CM.
  • the present disclosure will now be described with reference to the figures, which in general relate to enhancement of the DOCSIS MAP functionality by implementing a “preemption MAP” , also referred to herein as a remap message.
  • the preemption MAP is a bandwidth allocation mechanism in DOCSIS systems that can re-appropriate a number of upstream data mini-slots defined in a previous MAP message before a second, periodic message is issued. This new mechanism enables faster media access for low-latency traffic, thus reducing the maximum latency incurred.
  • an endpoint bandwidth request waits for a next regularly occurring MAP (management message) cycle to be processed.
  • bandwidth allocations may be delayed by up to a time period equaling the MAP spacing (plus processing pipeline delays) .
  • the technology implements a preemption scheme for bandwidth allocation such that when a new request is received from a low latency service flow, a CMTS does not wait for the next MAP cycle, but instead immediately generates a preemption MAP PDU (rMAP) .
  • the technology remaps any appropriate number of mini-slots of the previous MAP to satisfy the low latency request.
  • the rMAP is sent on the downstream channel using smallest time interleaving.
  • the technology implements a dual-scheduling bandwidth allocation scheme, by using an overlaid scheduler on top of a periodic DOCSIS scheduler.
  • FIG. 1 illustrates a common Data Over Cable Service Interface Specification (DOCSIS) architecture.
  • the DOCSIS architecture consists of two primary components: 1) cable modems 120a –120n (or other DOCSIS enabled devices) located at a user endpoint, and 2) a cable modem termination system (CMTS) 110 operated by cable service providers.
  • the CMTS 110 manages one to hundreds of cable modems 120, and also communicates with a system network such as WAN 100.
  • DOCSIS defines a protocol for bi-directional signal exchange between these two components over cable.
  • Bandwidth management between a CMTS 110 and CMs 120 uses a request-grant arbitration mechanism.
  • Each CM 120 makes requests to the CMTS 110 for bandwidth and the CMTS 110 issues grants to each managed CM 120 using a MAP message.
  • Each MAP message contains an assignment of mini-slots (small, equal units of time) in the upstream channel shared by the CMs 120 using a Time Division Multiple Access (TDMA) format. Once the mini-slots are assigned, each CM 120 may transmit data during its assigned channel period.
  • An upstream transmission is described by a burst profile which specifies parameters for the upstream transmission.
  • a DOCSIS-compliant CMTS 110 can provide different upstream scheduling modes for different packet streams or applications through the concept of a service flow.
  • a service flow represents either an upstream or a downstream flow of data, which a service flow ID (SFID or SID) uniquely identifies.
  • Each service flow can have its own quality of service (QoS) parameters, for example, maximum throughput, minimum guaranteed throughput, and priority.
  • the cable modems 120 and CMTS 110 are able to direct the correct types of traffic into the appropriate service flows with the use of classifiers.
  • Classifiers are special filters, like access-lists, that match packet properties such as UDP and TCP port numbers to determine the appropriate service flow for packets to travel through.
  • FIGS. 2 and 3 illustrate the assignment process.
  • the upstream channel is divided into mini-slots of equal size.
  • the CMTS designates these mini-slots as either request mini-slots (RMS) or data mini-slots (DMS) .
  • When a CM 120 wishes to transmit data, it first uses an RMS to transmit a request protocol data unit (PDU) to the CMTS 110.
  • the CMTS 110 executes the scheduling algorithm and allocates DMSs to the CM 120.
  • the CMTS 110 notifies all the CMs 120 of the mini-slot allocation results in the upstream channels by means of a Bandwidth Allocation Map (MAP) message.
  • the MAP message informs each CM 120 when it will be given the opportunity to transmit its upstream data PDU.
  • the CM 120 waits until it receives the DMS assigned to it by the CMTS 110 before transmitting the data PDU to the CMTS 110.
  • the CMTS 110 sends out MAP1, which records the mini-slot allocation results for the period between t3 and t8.
  • MAP1 arrives at the CM 120 at time t2. If the CM 120 wishes to transmit data, it sends a request PDU (REQUEST) to the CMTS 110 via a RMS at time t4, which arrives at the CMTS 110 at time t5.
  • Following the resulting scheduling process, the CMTS 110 generates MAP2, which stores the mini-slot allocation results for the period between t8 and t11. The CMTS 110 then sends MAP2 through the downstream channels to all the CMs 120. The CM 120 receives MAP2 from the CMTS 110 at time t7. This message notifies the CM 120 of the position of the DMS reserved for its data PDU by the CMTS 110. The CM 120 transmits the data PDU in the assigned DMS when it arrives at time t9. The CM 120 data transmission is completed when the data PDU arrives at the CMTS 110 at time t10.
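The two-period delay implied by this exchange can be sketched numerically. The helper below is a simplified model under assumed timings (the 2 ms value is the typical MAP period cited in this disclosure); it is not part of the DOCSIS specification:

```python
# Simplified model of the classic DOCSIS request-grant latency.
# All timings are illustrative assumptions.

MAP_PERIOD_MS = 2.0  # typical MAP spacing

def grant_delay_ms(request_offset_ms: float,
                   map_period_ms: float = MAP_PERIOD_MS) -> float:
    """Delay from a CM's bandwidth request until its granted data
    mini-slot when the request must wait for the next periodic MAP.

    request_offset_ms: how far into the current MAP period the request
    PDU reaches the CMTS (0 <= request_offset_ms < map_period_ms).
    """
    # The request is only scheduled into the NEXT MAP message, and that
    # grant takes effect during the following MAP period.
    wait_for_next_map = map_period_ms - request_offset_ms
    return wait_for_next_map + map_period_ms

# A request arriving at the start of a period still waits ~2 full periods,
# while a preempting rMAP could serve it within the current period.
assert grant_delay_ms(0.0) == 4.0
assert grant_delay_ms(1.5) == 2.5
```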
  • the flowchart of FIG. 2 illustrates the MAP generation process occurring at regular periods or intervals (referred to in FIG. 3 as the “map period”), typically 2 milliseconds (ms) .
  • a request may be received and if so, at 220, an allocation algorithm in the CMTS 110 allocates mini-slot resources based on the request received during the MAP period.
  • a MAP message is generated and forwarded at 240. The method then repeats at the next period 250.
  • FIGS. 4 and 5 illustrate a novel technology using re-mapped mini-slots within a MAP period.
  • the CMTS 110 sends out MAP1, which records the mini-slot allocation results for the period between t3 and t8.
  • MAP1 arrives at the CM 120 at time t2.
  • Rather than the CM 120 sending a request during the subsequent MAP period, if the CM 120 wishes to transmit data for a low latency service (based on, for example, a low latency service bandwidth demand at the CM 120) , it sends a remap request PDU (LOW LATENCY or LL REQUEST) to the CMTS 110 at time Ta (before time t4 in FIG. 2) which arrives at the CMTS 110 at time Tb.
  • the LL request is a new, priority request for bandwidth during the MAP period.
  • Following a resulting remapping scheduling process, the CMTS 110 generates rMAP1, which stores the mini-slot allocation results for the period between t5 and t8.
  • the CMTS 110 sends MAP PDU2 through the downstream channels to all the CMs 120 at the end of the MAP PDU 1 period.
  • the rMAP process has improved the latency of the upstream data transmission by allowing the CM 120 to transmit the data PDU at time Te which arrives at the CMTS 110 at time Tf. This is accomplished by sending an rMAP message at time Tc, which arrives at time Td.
  • the technology thus grants low latency services low latency service bandwidth at a higher priority than other services which might be required by the CMs 120.
  • the bandwidth requested in a low latency request is a priority request such that bandwidth to be allocated to this request is prioritized over other, lower-priority service needs.
  • the delay due to the request grant process may be calculated as the time between the start of the MAP message and end of the data transmission.
  • this time (Tend – Tstart) requires two MAP periods, whereas in the framework of the current technology, low latency services are provided with a latency of one MAP period.
  • the flowchart of FIG. 4 illustrates the rMAP allocation and generation process.
  • a request may be received and if so, at 420, an allocation algorithm in the CMTS 110 allocates mini-slot resources based on the request(s) received during the MAP period.
  • a MAP message is generated and forwarded at 440. If at 450, a remap request is received, then at 460, an allocation algorithm in the CMTS 110 re-allocates mini-slot resources based on the remap request(s) .
  • the allocation algorithm used at 460 may use the same prioritization scheme as the allocation mechanism for the periodic MAP messages, but will account for the time period consumed between the issuance of the remap request and the delivery of the remap message to the CMs 120.
  • an rMAP message is generated and forwarded at 480 and the method then repeats at the next period 490.
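The allocation-and-remap flow of FIG. 4 can be sketched as a single-period event loop. The function and its stub allocators below are hypothetical names for illustration, not DOCSIS-defined interfaces:

```python
# Hypothetical sketch of one CMTS MAP period per FIG. 4: the periodic
# MAP is issued, then any remap request arriving mid-period is served
# immediately with an rMAP rather than waiting for the next cycle.
# Function and parameter names are illustrative, not DOCSIS APIs.

def run_map_period(pending_requests, remap_requests, allocate, reallocate, send):
    """One MAP period on the CMTS, paraphrasing steps 410-490 of FIG. 4."""
    # 410/420: allocate mini-slot resources for requests from last period.
    current = allocate(pending_requests)
    # 440: generate and forward the periodic MAP message.
    send("MAP", current)
    # 450-480: re-allocate for each remap request and forward an rMAP
    # within the same period (the re-allocator must leave pipeline time).
    for req in remap_requests:
        current = reallocate(current, req)
        send("rMAP", current)
    return current  # 490: repeat at the next period

# Tiny demonstration with stub allocators.
sent = []
final = run_map_period(
    pending_requests=["CM1", "CM2"],
    remap_requests=["CM2-LL"],
    allocate=lambda reqs: {i: sid for i, sid in enumerate(reqs)},
    reallocate=lambda alloc, req: {**alloc, 1: req},
    send=lambda kind, msg: sent.append(kind),
)
assert sent == ["MAP", "rMAP"]
assert final == {0: "CM1", 1: "CM2-LL"}
```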
  • FIG. 6 illustrates a process which occurs in a CM 120.
  • a MAP allocation will be active.
  • the process proceeds to steps 630 –660 to generate a low latency bandwidth request. If not, at 690 the CM 120 uses the existing allocation from the previous MAP message to forward data. If a low latency service request is made at 620, then at 630 an rMAP request is generated by the CM 120 and forwarded to the CMTS 110. If an rMAP is not received, the CM 120 continues to use the existing allocation as in 690.
  • an rMAP message, including a remapped allocation of mini-slots, is received and registered at 650.
  • data is sent on the remapped mini-slots and the method waits for the next MAP message in the next MAP period at 670.
  • FIG. 7 is a block diagram illustrating one embodiment of remapped mini-slots.
  • FIG. 7 illustrates MAP and rMAP messages MAP1, rMAP1 and MAP2 transmitted via the downstream channel, and the mini-slots (0 –25, 26 –n) allocated to the upstream channels for four CMs 120 (CM1 through CM4) , each CM 120 having two service flows (SFs) .
  • the MAP 1 message (arriving at time slot -4 in FIG. 7) defines the location of slots 0 –25, with various mini-slots allocated to CM1 –CM4.
  • CM1 is assigned mini-slots 0 –6 (710, 715)
  • CM2 is assigned mini-slots 7 –8 (720)
  • CM3 is assigned mini-slots 9 –19 (730, 740) and CM4 mini-slots 20 –25 (750) .
  • the CMTS 110 receives a low latency request at slot 2
  • the CMTS 110 immediately generates rMAP1 (for mini-slots 7 –19) at slot 3, which grants mini-slots to the low latency service flow by withdrawing the mini-slots allocated to CM2-SF1, CM3-SF1 and CM3-SF2 and reallocating them to CM2-SF2 (725) .
  • the delta (Δ) indicates the minimum reassignment delay, as rMAP1 cannot reassign mini-slots earlier than this period, to allow for pipeline and processing time in the CMs 120.
  • the rMAP messages are transmitted and requests received on the lowest latency downstream channel such that the rMAP is sent on the downstream channel with the lowest interleaving.
  • the CMTS 110 reserves enough time so that the endpoint’s low latency service flow can effectively respond to the start mini-slot allocated to it in the rMAP (i.e., a delta (Δ) in FIG. 7) .
  • the CMTS 110 acts on the rMAP PDU over other requests in conjunction with forwarding the rMAP over the lowest latency channel.
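The FIG. 7 remapping can be modeled with a small scheduler sketch using the mini-slot numbers from the figure. The `Scheduler` class, its method names, and the exact split of CM3's slots between its two service flows are assumptions for illustration:

```python
# Illustrative model of the FIG. 7 remapping, using the mini-slot
# numbers from the figure. The Scheduler class and method names are
# assumptions; a real CMTS scheduler is far more involved.

class Scheduler:
    def __init__(self):
        self.allocation = {}  # mini-slot number -> owner ("CMx-SFy")

    def build_map(self, grants):
        """Periodic MAP: grants is an ordered list of (owner, slot_count)."""
        self.allocation, slot = {}, 0
        for owner, count in grants:
            for _ in range(count):
                self.allocation[slot] = owner
                slot += 1
        return dict(self.allocation)

    def build_rmap(self, ll_owner, first_slot, last_slot):
        """Preemptive rMAP: withdraw slots first_slot..last_slot (chosen
        no earlier than the delta processing margin) and grant them to
        the low latency flow."""
        for slot in range(first_slot, last_slot + 1):
            self.allocation[slot] = ll_owner
        return {s: ll_owner for s in range(first_slot, last_slot + 1)}

sched = Scheduler()
# MAP1 of FIG. 7: CM1 gets slots 0-6, CM2 slots 7-8, CM3 slots 9-19
# (split across two service flows), CM4 slots 20-25.
sched.build_map([("CM1-SF1", 7), ("CM2-SF1", 2), ("CM3-SF1", 6),
                 ("CM3-SF2", 5), ("CM4-SF1", 6)])
# rMAP1 withdraws slots 7-19 and reallocates them to CM2-SF2.
sched.build_rmap("CM2-SF2", first_slot=7, last_slot=19)
assert sched.allocation[7] == sched.allocation[19] == "CM2-SF2"
assert sched.allocation[6] == "CM1-SF1" and sched.allocation[20] == "CM4-SF1"
```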
  • FIG. 8 is a table representing an exemplary PDU for an rMAP in accordance with the present technology.
  • the table of FIG. 8 illustrates six MAP Information Elements (IEs) which follow the DOCSIS SID, IUC, OFFSET format.
  • the SID is the Service Identifier of the cable modem, eMTA or other DOCSIS-based device.
  • the IUC is the particular burst descriptor for which the MAP grant is providing a timeslot or timeslots.
  • the offset is the time until the modem can transmit, and the modem must stop by the next SID’s offset.
  • IUC 0 indicates the overlapped mini-slots which have been withdrawn by CMTS.
  • the start offset is the start mini-slot number corresponding to the allocated start time of the rMAP, where the start mini-slot granted in the previous MAP overlapped with the first IE.
  • the end offset is the end mini-slot number corresponding to the allocated start time of the rMAP, where the end mini-slot granted in the previous MAP overlapped with the first IE.
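The SID/IUC/OFFSET Information Elements of FIG. 8 can be packed as in the conventional DOCSIS MAP IE layout, commonly described as a 32-bit word of SID (14 bits), IUC (4 bits), and offset (14 bits); the sketch below assumes that layout also applies to the rMAP IEs, and the SID value used is an arbitrary placeholder:

```python
# Sketch of packing the FIG. 8 IEs. The conventional DOCSIS MAP IE is
# a 32-bit word: SID (14 bits) | IUC (4 bits) | offset (14 bits); this
# sketch assumes the rMAP IEs reuse that layout.

def pack_ie(sid: int, iuc: int, offset: int) -> int:
    """Pack one SID/IUC/OFFSET Information Element into a 32-bit word."""
    assert 0 <= sid < (1 << 14) and 0 <= iuc < (1 << 4) and 0 <= offset < (1 << 14)
    return (sid << 18) | (iuc << 14) | offset

def unpack_ie(word: int):
    """Recover (sid, iuc, offset) from a packed 32-bit IE."""
    return (word >> 18) & 0x3FFF, (word >> 14) & 0xF, word & 0x3FFF

# Per the text, IUC 0 marks mini-slots withdrawn by the CMTS; the SID
# value here is an arbitrary placeholder.
withdrawn = pack_ie(sid=0x1234, iuc=0, offset=7)
assert unpack_ie(withdrawn) == (0x1234, 0, 7)
```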
  • FIG. 9 is a block diagram of a network device 900 that can be used to implement various embodiments of a CMTS in accordance with the present technology.
  • Specific network devices 900 may utilize all of the components shown, or only a subset of the components, and levels of integration may vary from device to device.
  • the network device 900 may contain multiple instances of a component, such as multiple processing units, processors, memories, transmitters, receivers, etc.
  • the network device 900 may comprise a processing unit 901 equipped with one or more input/output devices, such as network interfaces 950, input/output (I/O) interfaces 960, storage interfaces (not shown) , and the like.
  • the network device 900 may include a central processing unit (CPU) 910, a memory 920, a mass storage device 930, and an I/O interface 960 connected to a bus 970.
  • the bus 970 may be one or more of any type of several bus architectures including a memory bus or memory controller, a peripheral bus, or the like.
  • the CPU 910 may comprise any type of electronic data processor.
  • the memory 920 may comprise any type of system memory such as static random access memory (SRAM) , dynamic random access memory (DRAM) , synchronous DRAM (SDRAM) , read-only memory (ROM) , a combination thereof, or the like.
  • the memory 920 may include ROM for use at boot-up, and DRAM for program and data storage for use while executing programs.
  • the memory 920 is non-transitory.
  • the memory 920 includes: a CM resource scheduler 920A allocating upstream mini-slots for MAP messages; an rMAP resource scheduler 920B allocating upstream mini-slots for rMAP messages; a MAP generator 920C generating MAP messages responsive to each transmission request received from a CM during a regular MAP period; an rMAP generator 920D generating rMAP messages responsive to each low latency service request received from a CM; a MAP transmitter 920E outputting MAP messages on each MAP period; and an rMAP transmitter 920F outputting rMAP messages.
  • the mass storage device 930 may comprise any type of storage device configured to store data, programs, and other information and to make the data, programs, and other information accessible via the bus 970.
  • the mass storage device 930 may comprise, for example, one or more of a solid state drive, hard disk drive, a magnetic disk drive, an optical disk drive, or the like.
  • the network device 900 also includes one or more network interfaces 950, which may comprise wired links, such as an Ethernet cable or the like, and/or wireless links to access nodes or one or more networks 980.
  • the network interface 950 allows the network device 900 to communicate with remote units via the networks 980.
  • the network interface 950 may provide wireless communication via one or more transmitters/transmit antennas and one or more receivers/receive antennas.
  • the network device 900 is coupled to a local-area network or a wide-area network for data processing and communications with remote devices, such as other processing units, the Internet, remote storage facilities, or the like.
  • Certain embodiments of the present disclosure may advantageously reduce network latency in DOCSIS systems.
  • the technology can allow a CMTS to provide guaranteed bandwidth resources to endpoints requiring low latency service flows more quickly by eliminating waits due to MAP message cycling.
  • the network device 900 includes a message module generating a data management message comprising mini-slots allocated for receipt of data during the interval, a request reception module receiving a new bandwidth request during the interval, a remap module generating a remap management message during the interval, with the remap management message remapping at least some of the mini-slots that were allocated in the data management message, and an output module outputting the remap management message during the interval to re-allocate the mini-slots.
  • the network device 900 may include other or additional modules for performing any one of or combination of steps described in the embodiments. Further, any of the additional or alternative embodiments or aspects of the method, as shown in any of the figures or recited in any of the claims, are also contemplated to include similar modules.
  • FIG. 10 illustrates a further embodiment of the technology wherein, in response to a low latency service request, the rMAP message may map mini-slots extending beyond the end of the MAP period.
  • a low latency request and rMAP message may map slots which are not constrained within MAP1 slots.
  • the mini-slots defined in the rMAP2 message may be assigned to slots in the second MAP period (of MAP2) or subsequent MAP periods.
  • FIG. 11 illustrates another embodiment of the technology wherein multiple rMAP messages may be issued within a MAP period such that the second rMAP message (rMAP PDU 2) overwrites mini-slots defined by rMAP PDU 1. It should be recognized that the mini-slots defined by rMAP1 or rMAP2 may extend beyond the MAP period of MAP1. Although two rMAP messages are illustrated, more than two low latency requests and two rMAP messages may occur within a MAP period.
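The overwrite behavior of FIG. 11 can be illustrated with a simple mini-slot map; the owner labels and slot ranges below are hypothetical:

```python
# Simple illustration of FIG. 11: a second rMAP within the same MAP
# period overwrites mini-slots granted by the first. Owner labels and
# slot ranges are hypothetical.

allocation = {s: "CM3-SF1" for s in range(9, 20)}      # granted by MAP1
allocation.update({s: "LL-A" for s in range(9, 20)})   # rMAP PDU 1 preempts
allocation.update({s: "LL-B" for s in range(14, 20)})  # rMAP PDU 2 overwrites
assert allocation[10] == "LL-A" and allocation[16] == "LL-B"
```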
  • the computer-readable non-transitory media includes all types of computer readable media, including magnetic storage media, optical storage media, and solid state storage media and specifically excludes signals.
  • the software can be installed in and sold with the device. Alternatively the software can be obtained and loaded into the device, including obtaining the software via a disc medium or from any manner of network or distribution system, including, for example, from a server owned by the software creator or from a server not owned but used by the software creator.
  • the software can be stored on a server for distribution over the Internet, for example.
  • Computer-readable storage media excludes propagated signals per se, can be accessed by a computer and/or processor(s) , and includes volatile and non-volatile internal and/or external media that is removable and/or non-removable.
  • the various types of storage media accommodate the storage of data in any suitable digital format. It should be appreciated by those skilled in the art that other types of computer readable medium can be employed such as zip drives, solid state drives, magnetic tape, flash memory cards, flash drives, cartridges, and the like, for storing computer executable instructions for performing the novel methods (acts) of the disclosed architecture.
  • each process associated with the disclosed technology may be performed continuously and by one or more computing devices.
  • Each step in a process may be performed by the same or different computing devices as those used in other steps, and each step need not necessarily be performed by a single computing device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The disclosure relates to technology for managing data flow in a device which uses data management messages at intervals. An apparatus and method are provided to receive a new bandwidth request during the interval. A remap management message is generated during the interval, with the remap management message remapping at least some of the mini-slots that were allocated for receipt of data in a data management message sent during the interval. The device outputs the remap management message during the interval to re-allocate those mini-slots.

Description

DOCSIS MAP PREEMPTION PROTOCOL
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to and benefit of U.S. Provisional Application No. 62/715,092, filed on August 6, 2018, entitled “DOCSIS Preemption Protocol, ” which application is hereby incorporated by reference.
FIELD
The disclosure generally relates to reducing latency in Data Over Cable Service Interface Specification (DOCSIS) applications.
BACKGROUND
The Data Over Cable Service Interface Specification (DOCSIS) is an international telecommunications standard that permits the addition of high-bandwidth data transfer via existing cable TV (CATV) systems. DOCSIS is widely used by cable operators to offer Internet access through an existing hybrid fiber coaxial infrastructure. The DOCSIS system uses a series of Cable Modem Termination Systems (CMTS) , each serving multiple cable modems at a subscriber’s premises, to route network traffic between the cable modems and a wide area network. Other devices that recognize and support DOCSIS include HDTVs and Web-enabled set-top boxes for televisions.
Content providers are using ever higher bandwidth service offerings which require low latency performance in network data transmission. For example, services such as “4K” video streaming can require as much as 15 Mbps data rates and with newer, higher resolution video services on the horizon, more bandwidth at lower latencies will be required.
BRIEF SUMMARY
According to one aspect of the present disclosure, there is provided a device for managing data flow during an interval. The device includes a non-transitory memory storage comprising instructions, and one or more processors in communication with the memory. The one or more processors execute the instructions to: receive a new bandwidth request during the interval; generate a remap management message during the interval, with the remap management message remapping at least some of the mini-slots that were allocated for receipt of data in a data management message sent during the interval; and output the remap management message during the interval to re-allocate the mini-slots.
Optionally, in any of the preceding aspects, the device is connected to a plurality of endpoints and each endpoint of the plurality of endpoints receives the data management message and the remap management message. Optionally, in any of the preceding aspects the device is connected to a plurality of endpoints and wherein the data management message includes the mini-slots allocated for at least a subset of the plurality of endpoints. Optionally, in any of the preceding aspects the one or more processors execute the instructions to receive data via the mini-slots during the interval, with the mini-slots being re-allocated in the remap management message. Optionally, in any of the preceding aspects the mini-slots allocated for receipt of data (which were defined in the data management message sent at a beginning of the interval) define next mini-slots in a next interval. Optionally, in any of the preceding aspects wherein the one or more processors execute the instructions to generate the remap management message responsive to the new bandwidth request, the new bandwidth request being based on low latency service demand in an endpoint. Optionally, in any of the preceding aspects the new bandwidth request comprises a low latency service bandwidth request. Optionally, in any of the preceding aspects the new bandwidth request comprises a low latency service bandwidth request, wherein the one or more processors execute the instructions to receive the new bandwidth request and generate the remap management message on a data channel having a lowest latency compared to other data channels connecting with the device. Optionally, in any of the preceding aspects the new bandwidth request comprises a priority bandwidth request.
According to still one other aspect of the present disclosure, there is provided a computer-implemented method for managing data flow during an interval, comprising: generating a data management message comprising mini-slots allocated for receipt of data during the interval; receiving a new bandwidth request during the interval; generating a remap management message during the interval, with the remap management message remapping at least some of the mini-slots that were allocated in the data management message; and outputting the remap management message during the interval to re-allocate the mini-slots.
Optionally, in any of the preceding aspects, the device is connected to a plurality of endpoints. Each data management message includes mini-slots allocated for at least a subset of the plurality of endpoints. Each remap management message includes re-allocated mini-slots which were originally allocated to one or more endpoints of the subset of the plurality of endpoints. Optionally, in any of the preceding aspects the mini-slots allocated for receipt of data define next mini-slots in a next interval. Optionally, in any of the preceding aspects the mini-slots allocated for receipt of data (which mini-slots were defined in the data management message sent at a beginning of the interval) define mini-slots in a next interval. Optionally, in any of the preceding aspects the new bandwidth request comprises a low latency service bandwidth request. Optionally, in any of the preceding aspects where the new bandwidth request comprises a low latency service bandwidth request, the new bandwidth request is received and the remap management message is output on a data channel having a lowest latency compared to other data channels connecting with the device. Optionally, in any of the preceding aspects the new bandwidth request comprises a priority bandwidth request.
According to still one other aspect of the present disclosure, there is provided a non-transitory computer-readable medium storing computer instructions for managing data flow in an endpoint during an interval, that when executed by one or more processors, cause the one or more processors to perform the steps of: receiving a data management message during the interval, with the data management message comprising mini-slots allocated for receipt of data; receiving a low latency service bandwidth request during the interval; generating a re-allocation request during the interval for a new allocation of the mini-slots; and receiving a remap management message during the interval, with the remap management message re-allocating at least some of the mini-slots allocated by the data management message.
Optionally, in any of the preceding aspects the mini-slots allocated for receipt of data define next mini-slots in a next interval. Optionally, in any of the preceding aspects the steps include transmitting data during the interval in the re-allocated mini-slots and in the next interval in the next mini-slots. Optionally, in any of the preceding aspects the remap management message includes a service identifier for a low latency service and an offset defining when data transmission must end. Optionally, in any of the preceding aspects the remap management message uses an interval usage code that indicates the remapped mini-slots. Optionally, in any of the preceding aspects the re-allocation request is output and the remap management message is received on a data channel having a lowest latency compared to other data channels.
According to still one other aspect of the present disclosure, there is provided a device for managing data flow during an interval. The device includes a request reception means for receiving, before an end of a regular interval, a priority bandwidth request; a remap means for generating a remap management message remapping mini-slots allocated for receipt of data defined in a data management message sent during the interval; and an output means for outputting the remap management message during the interval to re-allocate the mini-slots.
Optionally, in any of the preceding aspects the device further includes a message generation means for generating a data management message during the interval, with the data management message defining the mini-slots allocated for receipt of data during the interval. Optionally, in any of the preceding aspects the device further includes a data reception means for receiving data during the interval from the mini-slots allocated in the remap management message.
Embodiments of the present disclosure advantageously reduce network latency in DOCSIS systems. The technology can allow a CMTS to provide guaranteed bandwidth resources more quickly to endpoints requiring low latency service flows by eliminating waits due to MAP message cycling.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter. The claimed subject matter is not limited to implementations that solve any or all disadvantages noted in the Background.
BRIEF DESCRIPTION OF THE DRAWINGS
Aspects of the present disclosure are illustrated by way of example and are not limited by the accompanying figures, in which like references indicate like elements.
FIG. 1 illustrates the basic DOCSIS architecture including a cable modem termination system and a plurality of cable modems.
FIG. 2 is a flowchart describing a prior art data transmission process of the DOCSIS protocol.
FIG. 3 is a timing diagram which illustrates the negotiation between a CMTS and a CM for data channels between the CMTS and an endpoint, such as a CM.
FIG. 4 is a flowchart describing a new data transmission process for use in accordance with the DOCSIS protocol.
FIG. 5 is a timing diagram which illustrates the negotiation between a CMTS and a CM for data channels using an rMAP message between the CMTS and a CM.
FIG. 6 is a flowchart illustrating a process which occurs in a CM to generate an rMAP message.
FIG. 7 is a block diagram illustrating channel assignments in a MAP message and remapped channel assignments in an rMAP message.
FIG. 8 is a table illustrating an rMAP Protocol Data Unit.
FIG. 9 illustrates a block diagram of a processing system that can be used to implement various embodiments of a CMTS.
FIGS. 10 and 11 are timing diagrams illustrating alternative negotiation timing between a CMTS and a CM for data channels using an rMAP message between the CMTS and a CM.
DETAILED DESCRIPTION
The present disclosure will now be described with reference to the figures, which in general relate to enhancement of the DOCSIS MAP functionality by implementing a "preemption MAP", also referred to herein as a remap message. The preemption MAP is a bandwidth allocation mechanism in DOCSIS systems that can re-appropriate a number of upstream data mini-slots defined in a previous MAP message before a second, periodic message is issued. This new mechanism enables faster media access for low-latency traffic, thus reducing the maximum latency incurred.
In present DOCSIS systems, an endpoint bandwidth request waits for the next regularly occurring MAP (management message) cycle to be processed. Hence, bandwidth allocations may be delayed by up to a time period equaling the MAP spacing (plus processing pipeline delays). The technology implements a preemption scheme for bandwidth allocation such that when a new request is received from a low latency service flow, a CMTS does not wait for the next MAP cycle, but instead immediately generates a preemption MAP PDU (rMAP). The technology remaps any appropriate number of mini-slots of the previous MAP to satisfy the low latency request. The rMAP is sent on the downstream channel using the smallest time interleaving. Thus, the technology implements a dual-scheduling bandwidth allocation scheme, using a scheduler overlaid on top of the periodic DOCSIS scheduler.
It is understood that the present embodiments of the disclosure may be implemented in many different forms and that claim scope should not be construed as being limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the inventive embodiment concepts to those skilled in the art. Indeed, the disclosure is intended to cover alternatives, modifications and equivalents of these embodiments, which are included within the scope and spirit of the disclosure as defined by the appended claims. Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a thorough understanding. However, it will be clear to those of ordinary skill in the art that the present embodiments of the disclosure may be practiced without such specific details.
FIG. 1 illustrates a common Data Over Cable Service Interface Specification (DOCSIS) architecture. The DOCSIS architecture consists of two primary components: 1) cable modems 120a–120n (or other DOCSIS enabled devices) located at user endpoints, and 2) a cable modem termination system (CMTS) 110 operated by cable service providers. The CMTS 110 manages one to hundreds of cable modems 120, and also communicates with a system network such as WAN 100. DOCSIS defines the protocol for bi-directional signal exchange between these two components over cable.
Bandwidth management between a CMTS 110 and CMs 120 uses a request-grant arbitration mechanism. Each CM 120 makes requests to the CMTS 110 for bandwidth, and the CMTS 110 issues grants to each managed CM 120 using a MAP message. Each MAP message contains an assignment of mini-slots (small, equal units of time) in the upstream channel shared by the CMs 120 using a Time Division Multiple Access (TDMA) format. Once the mini-slots are assigned, each CM 120 may transmit data during its assigned channel period. An upstream transmission is described by a burst profile which specifies parameters for the upstream transmission. A DOCSIS-compliant CMTS 110 can provide different upstream scheduling modes for different packet streams or applications through the concept of a service flow. A service flow represents either an upstream or a downstream flow of data, which a service flow ID (SFID or SID) uniquely identifies. Each service flow can have its own quality of service (QoS) parameters, for example, maximum throughput, minimum guaranteed throughput, and priority. In the case of upstream service flows, a scheduling mode can also be specified.
More than one upstream service flow can be defined for every cable modem 120 to accommodate different types of applications. For example, web and email can use one service flow, voice over IP (VoIP) can use another, and Internet gaming can use yet another service flow. To provide an appropriate type of service for each of these applications, the characteristics of these service flows may be different.
The cable modems 120 and CMTS 110 are able to direct the correct types of traffic into the appropriate service flows with the use of classifiers. Classifiers are special filters, like access lists, that match packet properties such as UDP and TCP port numbers to determine the appropriate service flow for packets to travel through.
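The classifier concept above can be sketched as a simple first-match filter. The following is an illustrative sketch only; the port numbers, SFID values, and field names are assumptions chosen for the example, not the DOCSIS classifier encoding.

```python
# Hypothetical classifiers: each matches packet properties (protocol and
# destination port here) and directs matching packets to a service flow ID
# (SFID). All values are illustrative, not taken from any real deployment.
CLASSIFIERS = [
    {"proto": "udp", "dst_port": 5060, "sfid": 101},  # e.g. VoIP signaling
    {"proto": "udp", "dst_port": 3074, "sfid": 102},  # e.g. Internet gaming
]
DEFAULT_SFID = 100  # best-effort service flow for web, email, etc.

def classify(packet: dict) -> int:
    """Return the SFID of the first classifier matching the packet."""
    for c in CLASSIFIERS:
        if packet["proto"] == c["proto"] and packet["dst_port"] == c["dst_port"]:
            return c["sfid"]
    return DEFAULT_SFID
```

A packet that matches no classifier falls through to the best-effort flow, mirroring how unclassified traffic is handled.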
FIGS. 2 and 3 illustrate the assignment process. As noted, the upstream channel is divided into mini-slots of equal size. The CMTS designates these mini-slots as either request mini-slots (RMS) or data mini-slots (DMS). When a CM 120 wishes to transmit data, it first uses an RMS to transmit a request protocol data unit (PDU) to the CMTS 110. When the CMTS 110 receives this request PDU, it executes the scheduling algorithm and allocates DMSs to the CM 120. Following completion of the scheduling process, the CMTS 110 notifies all the CMs 120 of the mini-slot allocation results in the upstream channels by means of a Bandwidth Allocation Map (MAP) message. The MAP message informs each CM 120 when it will be given the opportunity to transmit its upstream data PDU. The CM 120 waits until it receives the DMS assigned to it by the CMTS 110 before transmitting the data PDU to the CMTS 110. In FIG. 3, at time t1, the CMTS 110 sends out MAP1, which records the mini-slot allocation results for the period between t3 and t8. MAP1 arrives at the CM 120 at time t2. If the CM 120 wishes to transmit data, it sends a request PDU (REQUEST) to the CMTS 110 via an RMS at time t4, which arrives at the CMTS 110 at time t5. Following the resulting scheduling process, the CMTS 110 generates MAP2, which stores the mini-slot allocation results for the period between t8 and t11. The CMTS 110 then sends MAP2 through the downstream channels to all the CMs 120. The CM 120 receives MAP2 from the CMTS 110 at time t7. This message notifies the CM 120 of the position of the DMS reserved for its data PDU by the CMTS 110. The CM 120 transmits the data PDU in the assigned DMS when it arrives at time t9. The CM 120 data transmission is completed when the data PDU arrives at the CMTS 110 at time t10.
The flowchart of FIG. 2 illustrates the MAP generation process occurring at regular periods or intervals (referred to in FIG. 3 as the "map period"), typically 2 milliseconds (ms). At 200, for each period, at 210 a request may be received and if so, at 220, an allocation algorithm in the CMTS 110 allocates mini-slot resources based on the request received during the MAP period. At 230, a MAP message is generated, and it is forwarded at 240. The method then repeats at the next period 250.
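The periodic allocation at steps 210–240 can be sketched as follows. This is a minimal first-come-first-served sketch under stated assumptions: the grant data structure, the 26-slot period, and the policy of deferring requests that do not fit are illustrative choices, not the scheduling algorithm a real CMTS uses.

```python
# Minimal sketch of one MAP cycle at the CMTS: pending bandwidth requests
# (sid, number of mini-slots) are turned into mini-slot grants in arrival
# order, and the resulting grant list is the body of the MAP message.
def build_map(pending_requests, total_minislots=26):
    """Allocate mini-slots to requests in arrival order (step 220)."""
    grants, cursor = [], 0
    for sid, n_slots in pending_requests:
        if cursor + n_slots > total_minislots:
            break  # defer requests that do not fit to the next MAP period
        grants.append({"sid": sid, "start": cursor, "end": cursor + n_slots - 1})
        cursor += n_slots
    return grants  # step 230: the MAP message body to forward (step 240)
```

With the four-CM request pattern of FIG. 7 (7, 2, 11, and 6 mini-slots), this fills the 26-slot period exactly.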
FIGS. 4 and 5 illustrate a novel technology using re-mapped mini-slots within a MAP period.
In the present technology, in FIG. 5, at time t1, the CMTS 110 sends out MAP1, which records the mini-slot allocation results for the period between t3 and t8. MAP1 arrives at the CM 120 at time t2. Instead of the CM 120 sending a request during the subsequent MAP period, if the CM 120 wishes to transmit data for a low latency service (based on, for example, a low latency service bandwidth demand at the CM 120), it sends a remap request PDU (LOW LATENCY or LL REQUEST) to the CMTS 110 at time Ta (before time t4 in FIG. 3), which arrives at the CMTS 110 at time Tb. The LL request is a new, priority request for bandwidth during the MAP period. Following a resulting remapping scheduling process, the CMTS 110 generates rMAP1, which stores the mini-slot allocation results for the period between t5 and t8. The CMTS 110 sends MAP PDU2 through the downstream channels to all the CMs 120 at the end of the MAP PDU1 period. But, using the preemption scheme, the rMAP process has improved the latency of the upstream data transmission by allowing the CM 120 to transmit the data PDU at time Te, which arrives at the CMTS 110 at time Tf. This is accomplished by sending an rMAP message at time Tc, which arrives at time Td. The technology thus grants low latency services bandwidth at a higher priority than other services which might be required by the CMs 120. As such, the bandwidth requested in a low latency request is a priority request, such that bandwidth to be allocated to this request is prioritized over other, lower priority service needs.
During the normal request-grant process, the delay due to the request-grant process may be calculated as the time between the start of the MAP message and the end of the data transmission. In the current request-grant process, this time (T_end − T_start) requires two MAP periods, whereas in the framework of the current technology, low latency services are provided with a latency of one MAP period.
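Assuming the typical 2 ms MAP period noted above, and ignoring processing pipeline delays (both simplifications), the worst-case comparison works out as:

```python
# Worst-case request-grant latency under the assumptions stated above:
# a 2 ms MAP period and no pipeline/processing delay.
MAP_PERIOD_MS = 2.0

legacy_latency_ms = 2 * MAP_PERIOD_MS  # request waits for the next MAP cycle
rmap_latency_ms = 1 * MAP_PERIOD_MS    # rMAP grants within the current period

print(legacy_latency_ms, rmap_latency_ms)  # 4.0 2.0
```

The preemption scheme thus halves the worst-case grant latency in this simplified model.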
The flowchart of FIG. 4 illustrates the rMAP allocation and generation process. At 400, for each period, at 410 a request may be received and if so, at 420, an allocation algorithm in the CMTS 110 allocates mini-slot resources based on the request(s) received during the MAP period. At 430, a MAP message is generated, and it is forwarded at 440. If at 450, a remap request is received, then at 460, an allocation algorithm in the CMTS 110 re-allocates mini-slot resources based on the remap request(s).
The allocation algorithm used at 460 may use the same prioritization scheme as the allocation mechanism for the periodic MAP messages, but will account for the time period consumed between the issuance of the remap request and the delivery of the remap message to the CMs 120. At 470, an rMAP message is generated and forwarded at 480, and the method then repeats at the next period 490.
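The re-allocation at steps 450–480 can be sketched as below. This is a sketch under stated assumptions: the grant representation, the requirement that withdrawn grants be contiguous, and the fall-back when too few slots are preemptible are illustrative choices, not the actual CMTS algorithm.

```python
# Sketch of rMAP re-allocation: withdraw grants whose mini-slots start at or
# after `earliest` (the current slot plus the pipeline delay delta of FIG. 7)
# until `needed` mini-slots are freed, then grant the freed span to the low
# latency SID. Assumes withdrawn grants are contiguous, as in FIG. 7.
def remap(grants, ll_sid, needed, earliest):
    kept, withdrawn, freed = [], [], 0
    for g in sorted(grants, key=lambda g: g["start"]):
        if g["start"] >= earliest and freed < needed:
            withdrawn.append(g)                    # withdraw this grant
            freed += g["end"] - g["start"] + 1
        else:
            kept.append(g)                         # grant survives the rMAP
    if freed < needed:
        return grants, None  # not enough preemptible slots; leave MAP as-is
    ll_grant = {"sid": ll_sid, "start": withdrawn[0]["start"],
                "end": withdrawn[-1]["end"]}
    return kept + [ll_grant], ll_grant
```

With the FIG. 7 numbers (CM2's slots 7–8 and CM3's slots 9–19 preemptible), a 13-slot low latency request yields a grant spanning mini-slots 7–19, matching rMAP1 in the figure.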
FIG. 6 illustrates a process which occurs in a CM 120. During each MAP period at 600, at 610 a MAP allocation will be active. At 620, if a low latency service request is made in the modem, the process proceeds to steps 630–660 to generate a low latency bandwidth request. If not, at 690 the CM 120 uses the existing allocation from the previous MAP message to forward data. If a low latency service request is made at 620, then at 630 an rMAP request is generated by the CM 120 and forwarded to the CMTS 110. If an rMAP message is not received, the CM 120 continues to use the existing allocation as at 690. At 640, an rMAP message, including a remapped allocation of mini-slots, is received, and it is registered at 650. At 660, data is sent on the remapped mini-slots and the method waits for the next MAP message in the next MAP period at 670.
FIG. 7 is a block diagram illustrating one embodiment of remapped mini-slots. FIG. 7 illustrates MAP and rMAP messages MAP1, rMAP1 and MAP2 transmitted via the downstream channel, and the mini-slots (0–25, 26–n) allocated to the upstream channels for four CMs 120 (CM1 through CM4), each CM 120 having two service flows (SFs). The MAP1 message (arriving at time slot -4 in FIG. 7) defines the location of slots 0–25, with various mini-slots allocated to CM1–CM4. In FIG. 7, CM1 is assigned mini-slots 0–6 (710, 715), CM2 is assigned mini-slots 7–8 (720), CM3 is assigned mini-slots 9–19 (730, 740) and CM4 is assigned mini-slots 20–25 (750). When the CMTS 110 receives a low latency request at slot 2, the CMTS 110 immediately generates rMAP1 (for mini-slots 7–19) at slot 3, which grants mini-slots to the low latency service flow: it withdraws the mini-slots allocated to CM2-SF1, CM3-SF1 and CM3-SF2 and reallocates them to CM2-SF2 (725). The delta (Δ) indicates the minimum reassignment delay, as rMAP1 cannot reassign slots earlier than this period to allow for pipeline and processing time in the CMs 120.
In accordance with the technology, the rMAP messages are transmitted, and requests received, on the lowest latency downstream channel, such that the rMAP is sent on the downstream channel with the lowest interleaving. In accordance with the technology, the CMTS 110 reserves enough time so that the endpoint's low latency service flow can effectively respond to the start mini-slot allocated to it in the rMAP (i.e., the delta (Δ) in FIG. 7). The CMTS 110 prioritizes the rMAP PDU over other requests in conjunction with forwarding the rMAP over the lowest latency channel.
FIG. 8 is a table representing an exemplary PDU for an rMAP in accordance with the present technology. The table of FIG. 8 illustrates six MAP Information Elements (IEs) which follow the DOCSIS SID, IUC, OFFSET format. The SID is the Service Identifier of the cable modem, eMTA or other DOCSIS-based device. The IUC identifies the particular burst descriptor for which the MAP grant is providing a timeslot or timeslots. The offset is the time until the modem can transmit, and transmission must stop by the next SID's offset. In this example, IUC = 0 indicates the overlapped mini-slots which have been withdrawn by the CMTS. The start offset is the start mini-slot number corresponding to the allocated start time of the rMAP, where the start mini-slot granted in the previous MAP overlapped with the first IE. The end offset is the end mini-slot number corresponding to the allocated start time of the rMAP, where the end mini-slot granted in the previous MAP overlapped with the first IE.
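The SID/IUC/OFFSET IE layout can be illustrated with a simple bit-packing sketch. The 14/4/14-bit field widths below are assumptions for the example; they echo common DOCSIS MAP IE layouts but are not asserted to match any specific DOCSIS version's wire format.

```python
import struct

# Illustrative packing of one rMAP Information Element into a 32-bit word:
# assumed layout is a 14-bit SID, a 4-bit IUC, and a 14-bit mini-slot offset.
def pack_ie(sid, iuc, offset):
    """Pack one IE into a big-endian 32-bit word."""
    assert sid < (1 << 14) and iuc < (1 << 4) and offset < (1 << 14)
    word = (sid << 18) | (iuc << 14) | offset
    return struct.pack(">I", word)

def unpack_ie(data):
    """Recover (sid, iuc, offset) from a packed IE."""
    (word,) = struct.unpack(">I", data)
    return word >> 18, (word >> 14) & 0xF, word & 0x3FFF
```

For example, an IE withdrawing slots for SID 101 with IUC 0 (the withdrawn-slot code used in FIG. 8) and start offset 7 round-trips through `pack_ie`/`unpack_ie` unchanged.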
FIG. 9 is a block diagram of a network device 900 that can be used to implement various embodiments of a CMTS in accordance with the present technology. Specific network devices 900 may utilize all of the components shown, or only a subset of the components, and levels of integration may vary from device to device. Furthermore, the network device 900 may contain multiple instances of a component, such as multiple processing units, processors, memories, transmitters, receivers, etc. The network device 900 may comprise a processing unit 901 equipped with one or more input/output devices, such as network interfaces 950, input/output (I/O) interfaces 960, storage interfaces (not shown) , and the like. The network device 900 may include a central processing unit (CPU) 910, a memory 920, a mass storage device 930, and an I/O interface 960 connected to a bus 970. The bus 970 may be one or more of any type of several bus architectures including a memory bus or memory controller, a peripheral bus, or the like.
The CPU 910 may comprise any type of electronic data processor. The memory 920 may comprise any type of system memory such as static random access memory (SRAM), dynamic random access memory (DRAM), synchronous DRAM (SDRAM), read-only memory (ROM), a combination thereof, or the like. In an embodiment, the memory 920 may include ROM for use at boot-up, and DRAM for program and data storage for use while executing programs. In embodiments, the memory 920 is non-transitory. In one embodiment, the memory 920 includes: a CM resource scheduler 920A allocating upstream mini-slots for MAP messages; an rMAP resource scheduler 920B allocating upstream mini-slots for rMAP messages; a MAP generator 920C generating MAP messages responsive to each transmission request received from a CM during a regular MAP period; an rMAP generator 920D generating rMAP messages responsive to each low latency service request received from a CM; a MAP transmitter 920E outputting MAP messages in each MAP period; and an rMAP transmitter 920F outputting rMAP messages.
The mass storage device 930 may comprise any type of storage device configured to store data, programs, and other information and to make the data, programs, and other information accessible via the bus 970. The mass storage device 930 may comprise, for example, one or more of a solid state drive, hard disk drive, a magnetic disk drive, an optical disk drive, or the like.
The network device 900 also includes one or more network interfaces 950, which may comprise wired links, such as an Ethernet cable or the like, and/or wireless links to access nodes or one or more networks 980. The network interface 950 allows the network device 900 to communicate with remote units via the networks 980. For example, the network interface 950 may provide wireless communication via one or more transmitters/transmit antennas and one or more receivers/receive antennas. In an embodiment, the network device 900 is coupled to a local-area network or a wide-area network for data processing and communications with remote devices, such as other processing units, the Internet, remote storage facilities, or the like.
Certain embodiments of the present disclosure may advantageously reduce network latency in DOCSIS systems. The technology can allow a CMTS to provide guaranteed bandwidth resources to endpoints requiring low latency service flows more quickly by eliminating waits due to MAP message cycling.
In an example embodiment, the network device 900 includes a message module generating a data management message comprising mini-slots allocated for receipt of data during the interval, a request reception module receiving a new bandwidth request during the interval, a remap module generating a remap management message during the interval, with the remap management message remapping at least some of the mini-slots that were allocated in the data management message, and an output module outputting the remap management message during the interval to re-allocate the mini-slots. In some embodiments, the network device 900 may include other or additional modules for performing any one of or combination of the steps described in the embodiments. Further, any of the additional or alternative embodiments or aspects of the method, as shown in any of the figures or recited in any of the claims, are also contemplated to include similar modules.
FIG. 10 illustrates a further embodiment of the technology wherein, in response to a low latency service request, the rMAP message may map mini-slots extending beyond the end of the MAP period. As such, a low latency request and rMAP message may map slots which are not constrained within the MAP1 slots. The mini-slots defined in the rMAP2 message may be assigned to slots in the second MAP period (of MAP2) or subsequent MAP periods.
FIG. 11 illustrates another embodiment of the technology wherein multiple rMAP messages may be issued within a MAP period such that the second rMAP message (rMAP PDU 2) overwrites mini-slots defined by rMAP PDU 1. It should be recognized that the mini-slots defined by rMAP1 or rMAP2 may extend beyond the MAP period of MAP1. Although two rMAP messages are illustrated, more than two low latency requests and rMAP messages may occur within a MAP period.
It is understood that the present subject matter may be embodied in many different forms and should not be construed as being limited to the embodiments set forth herein. Rather, these embodiments are provided so that this subject matter will be thorough and complete and will fully convey the disclosure to those skilled in the art. Indeed, the subject matter is intended to cover alternatives, modifications and equivalents of these embodiments, which are included within the scope and spirit of the subject matter as defined by the appended claims. Furthermore, in the following detailed description of the present subject matter, numerous specific details are set forth in order to provide a thorough understanding of the present subject matter. However, it will be clear to those of ordinary skill in the art that the present subject matter may be practiced without such specific details.
Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatuses (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable instruction execution  apparatus, create a mechanism for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
The computer-readable non-transitory media includes all types of computer readable media, including magnetic storage media, optical storage media, and solid state storage media and specifically excludes signals. It should be understood that the software can be installed in and sold with the device. Alternatively the software can be obtained and loaded into the device, including obtaining the software via a disc medium or from any manner of network or distribution system, including, for example, from a server owned by the software creator or from a server not owned but used by the software creator. The software can be stored on a server for distribution over the Internet, for example.
Computer-readable storage media (which excludes propagated signals per se) can be accessed by a computer and/or processor(s), and includes volatile and non-volatile internal and/or external media that is removable and/or non-removable. For the computer, the various types of storage media accommodate the storage of data in any suitable digital format. It should be appreciated by those skilled in the art that other types of computer readable medium can be employed such as zip drives, solid state drives, magnetic tape, flash memory cards, flash drives, cartridges, and the like, for storing computer executable instructions for performing the novel methods (acts) of the disclosed architecture.
The terminology used herein is for the purpose of describing particular aspects only and is not intended to be limiting of the disclosure. As used herein, the singular forms "a" , "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising, " when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The description of the present disclosure has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the disclosure in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the disclosure. The aspects of the disclosure herein were chosen and described in order to best explain the principles of the disclosure and the practical application, and to enable others of ordinary skill in the art to  understand the disclosure with various modifications as are suited to the particular use contemplated.
For purposes of this document, each process associated with the disclosed technology may be performed continuously and by one or more computing devices. Each step in a process may be performed by the same or different computing devices as those used in other steps, and each step need not necessarily be performed by a single computing device.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (25)

  1. A device for managing data flow during an interval, comprising:
    a non-transitory memory storage comprising instructions; and
    one or more processors in communication with the memory, wherein the one or more processors execute the instructions to:
    receive a new bandwidth request during the interval;
    generate a remap management message during the interval, with the remap management message remapping at least some of the mini-slots that were allocated for receipt of data in a data management message sent during the interval; and
    output the remap management message during the interval to re-allocate the mini-slots.
  2. The device of claim 1 wherein the device is connected to a plurality of endpoints and each endpoint of the plurality of endpoints receives the data management message and the remap management message.
  3. The device of any of claims 1-2 wherein the device is connected to a plurality of endpoints and wherein the data management message includes the mini-slots allocated for at least a subset of the plurality of endpoints.
  4. The device of any of claims 1-3 wherein the one or more processors execute the instructions to receive data via the mini-slots during the interval, with the mini-slots being re-allocated in the remap management message.
  5. The device of any of claims 1-4 wherein the mini-slots allocated for receipt of data define next mini-slots in a next interval.
  6. The device of any of claims 1-5 wherein the one or more processors execute the instructions to generate the remap management message responsive to the new bandwidth request, the new bandwidth request being based on low latency service demand in an endpoint.
  7. The device of any of claims 1-5, with the new bandwidth request comprising a low latency service bandwidth request.
  8. The device of any of claims 1-5, with the new bandwidth request comprising a low latency service bandwidth request, wherein the one or more processors execute the instructions to receive the new bandwidth request and generate the remap management message on a data channel having a lowest latency compared to other data channels connecting with the device.
  9. The device of any of claims 1-5, with the new bandwidth request comprising a priority bandwidth request.
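Claims 1-9 describe a scheduler-side remap: mini-slots granted in a MAP-style data management message are partially re-allocated mid-interval when a new bandwidth request arrives. A minimal sketch of that remap step follows; the `Scheduler` and `MapMessage` names and structures are illustrative assumptions, not the on-wire DOCSIS message formats:

```python
# Hypothetical sketch of the remap step described in claims 1-9.
# Names and structures are illustrative; they are not taken from
# the DOCSIS specification.

from dataclasses import dataclass, field


@dataclass
class MapMessage:
    interval: int
    # mini-slot index -> service identifier (SID) of the endpoint
    # granted that transmit opportunity
    grants: dict[int, int] = field(default_factory=dict)


class Scheduler:
    def __init__(self, slots_per_interval: int):
        self.slots_per_interval = slots_per_interval

    def build_map(self, interval: int, requests: dict[int, int]) -> MapMessage:
        """Allocate mini-slots in order across the pending requests
        (the data management message of claim 1)."""
        m = MapMessage(interval)
        slot = 0
        for sid, count in requests.items():
            for _ in range(count):
                if slot >= self.slots_per_interval:
                    return m
                m.grants[slot] = sid
                slot += 1
        return m

    def remap(self, original: MapMessage, new_sid: int,
              slots_needed: int) -> MapMessage:
        """Preempt the last `slots_needed` granted mini-slots and
        re-allocate them to the new requester (the remap management
        message), leaving the original MAP untouched."""
        remapped = MapMessage(original.interval, dict(original.grants))
        for slot in sorted(remapped.grants, reverse=True)[:slots_needed]:
            remapped.grants[slot] = new_sid
        return remapped
```

Here the remap preempts the latest-scheduled mini-slots first, leaving earlier grants intact; an actual scheduler would choose which grants to preempt according to service flow priority rather than slot position.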
  10. A computer-implemented method for managing data flow in a device during an interval, comprising:
    generating a data management message comprising mini-slots allocated for receipt of data during the interval;
    receiving a new bandwidth request during the interval;
    generating a remap management message during the interval, with the remap management message remapping at least some of the mini-slots that were allocated in the data management message; and
    outputting the remap management message during the interval to re-allocate the mini-slots.
  11. The computer-implemented method of claim 10 wherein the device is connected to a plurality of endpoints and the data management message includes the mini-slots allocated for at least a subset of the plurality of endpoints.
  12. The computer-implemented method of any of claims 10-11 wherein the device is connected to the plurality of endpoints and the remap management message includes re-allocated mini-slots originally allocated to one or more endpoints of the subset of the plurality of endpoints.
  13. The computer-implemented method of any of claims 10-12 wherein the mini-slots allocated for receipt of data define next mini-slots in a next interval.
  14. The computer-implemented method of any of claims 10-13, with the new bandwidth request comprising a low latency service bandwidth request.
  15. The computer-implemented method of any of claims 10-13, with the new bandwidth request comprising a low latency service bandwidth request, wherein the new bandwidth request is received and the remap management message is output on a data channel having a lowest latency compared to other data channels connecting with the device.
  16. The computer-implemented method of any of claims 10-13, with the new bandwidth request comprising a priority bandwidth request.
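A key property of the method claims 10-16 is that the request, the remap generation, and the remap output all complete within the same interval. A toy feasibility check for that constraint follows; all timing values and names are made-up examples, not DOCSIS parameters:

```python
# Illustrative check that a mid-interval remap still fits inside the
# current MAP interval. All timing values are hypothetical examples.

def remap_fits(interval_us: int, request_arrival_us: int,
               remap_build_us: int, remap_tx_us: int) -> bool:
    """True if a remap triggered at `request_arrival_us` can be
    generated and transmitted before the interval ends."""
    return request_arrival_us + remap_build_us + remap_tx_us <= interval_us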
  17. A non-transitory computer-readable medium storing computer instructions for managing data flow in an endpoint during an interval, that, when executed by one or more processors, cause the one or more processors to perform the steps of:
    receiving a data management message during the interval, with the data management message comprising mini-slots allocated for receipt of data;
    receiving a low latency service bandwidth request during the interval;
    generating a re-allocation request during the interval for a new allocation of the mini-slots; and
    receiving a remap management message during the interval, with the remap management message re-allocating at least some of the mini-slots allocated by the data management message.
  18. The non-transitory computer-readable medium of claim 17 wherein the mini-slots allocated for receipt of data define next mini-slots in a next interval.
  19. The non-transitory computer-readable medium of any of claims 17-18 further including transmitting data during the interval in the re-allocated mini-slots and in the next interval in the next mini-slots.
  20. The non-transitory computer-readable medium of any of claims 17-19 wherein the remap management message includes a service identifier for a low latency service and an offset defining when data transmission must end.
  21. The non-transitory computer-readable medium of any of claims 17-20 wherein the remap management message uses an interval usage code that indicates the remapped mini-slots.
  22. The non-transitory computer-readable medium of any of claims 17-21 wherein the re-allocation request is outputted and the remap management message is received on a data channel having a lowest latency compared to other data channels.
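Claims 17-22 describe the endpoint side of the same exchange: the endpoint first learns its transmit opportunities from the data management message, then, after receiving a remap message, recomputes them before transmitting. A hypothetical sketch, where the grant dictionaries are assumed stand-ins for the decoded messages rather than the on-wire formats:

```python
# Hypothetical endpoint-side view of the remap exchange in claims 17-22.
# Message shapes are illustrative, not the on-wire DOCSIS formats.

def transmit_slots(grants: dict[int, int], my_sid: int) -> list[int]:
    """Return the mini-slot indices this endpoint may use, in order,
    given a decoded grant map (mini-slot index -> SID)."""
    return sorted(slot for slot, sid in grants.items() if sid == my_sid)


def apply_remap(grants: dict[int, int],
                remap_grants: dict[int, int]) -> dict[int, int]:
    """Overlay the remap message on the original MAP: remapped
    mini-slots replace the original grants, all others are kept."""
    merged = dict(grants)
    merged.update(remap_grants)
    return merged
```

An endpoint that lost a mini-slot to the remap simply stops seeing that slot in `transmit_slots`, while the low latency requester gains it, matching the re-allocation described in claim 17.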
  23. A device for managing data flow during an interval, comprising:
    a request reception means for receiving, before an end of a regular interval, a priority bandwidth request;
    a remap means for generating a remap management message remapping mini-slots allocated for receipt of data defined in a data management message sent during the interval; and
    an output means for outputting the remap management message during the interval to re-allocate the mini-slots.
  24. The device of claim 23, further comprising a message generation means for generating a data management message during the interval, with the data management message defining the mini-slots allocated for receipt of data during the interval.
  25. The device of any of claims 23-24, further comprising a data reception means for receiving data during the interval from the mini-slots allocated in the remap management message.
PCT/CN2019/090918 2018-08-06 2019-06-12 Docsis map preemption protocol WO2020029680A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862715092P 2018-08-06 2018-08-06
US62/715,092 2018-08-06

Publications (1)

Publication Number Publication Date
WO2020029680A1 true WO2020029680A1 (en) 2020-02-13

Family

ID=69413910

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/090918 WO2020029680A1 (en) 2018-08-06 2019-06-12 Docsis map preemption protocol

Country Status (1)

Country Link
WO (1) WO2020029680A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1964275A (en) * 2005-11-08 2007-05-16 ZTE Corporation A realization method to dynamically change maximum length of bandwidth request
US20100238950A1 (en) * 2001-10-31 2010-09-23 Juniper Networks, Inc. Use of group poll scheduling for broadband communication systems
CN101971571A (en) * 2008-03-14 2011-02-09 Cisco Technology, Inc. Modem timing offset compensation for line card redundancy failover
CN103220105A (en) * 2011-12-30 2013-07-24 Broadcom Corporation Convergence layer bonding over multiple carriers
WO2016172943A1 (en) * 2015-04-30 2016-11-03 Huawei Technologies Co., Ltd. Uplink bandwidth allocation method, device and system


Similar Documents

Publication Publication Date Title
US10715461B2 (en) Network control to improve bandwidth utilization and parameterized quality of service
US8397267B2 (en) Hi-split upstream design for DOCSIS
US8850509B2 (en) Multiple frequency channel data distribution
US8797854B2 (en) Scheduling for RF over fiber optic cable [RFoG]
US8266265B2 (en) Data transmission over a network with channel bonding
Lin et al. Allocation and scheduling algorithms for IEEE 802.14 and MCNS in hybrid fiber coaxial networks
US20020095684A1 (en) Methods, systems and computer program products for bandwidth allocation based on throughput guarantees
TWI478534B (en) Scheduling in a two-tier network
EP1298860B1 (en) Method and system for flexible channel association
US20070076766A1 (en) System And Method For A Guaranteed Delay Jitter Bound When Scheduling Bandwidth Grants For Voice Calls Via A Cable Network
US10075282B2 (en) Managing burst transmit times for a buffered data stream over bonded upstream channels
US7227871B2 (en) Method and system for real-time change of slot duration
US20180048586A1 (en) Upstream Bandwidth Allocation Method, Apparatus, and System
CN110710163B (en) Method and apparatus for transmitting upstream data in a cable network
WO2020029680A1 (en) Docsis map preemption protocol
US9270616B1 (en) Low-latency quality of service
US20130051443A1 (en) Solutions for upstream channel bonding
KR101097943B1 (en) Headend cable modem and method for verifying performance of upstream channel allocation algorithm in hfc networks
US9094232B2 (en) Allocation adjustment in network domains
KR100819118B1 (en) Method and apparatus for upstreaming data scheduling in cable modem termination system using channel-bonding
US9853909B2 (en) Methods and apparatus for traffic management in a communication network
Shrivastav A Network Simulator model of the DOCSIS protocol and a solution to the bandwidth-hog problem in Cable Networks.
Huang et al. Allocation and Scheduling Algorithms for IEEE 802.14 and MCNS in Hybrid Fiber Coaxial Networks

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 19847926; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase; Ref country code: DE
122 Ep: pct application non-entry in european phase
Ref document number: 19847926; Country of ref document: EP; Kind code of ref document: A1