WO2020083301A1 - Network slicing method, computer device and storage medium - Google Patents

Network slicing method, computer device and storage medium

Info

Publication number
WO2020083301A1
WO2020083301A1 (PCT/CN2019/112617, CN2019112617W)
Authority
WO
WIPO (PCT)
Prior art keywords
priority queue
priority
service
queue
network
Prior art date
Application number
PCT/CN2019/112617
Other languages
English (en)
French (fr)
Inventor
杨永欢
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司
Publication of WO2020083301A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08Configuration management of networks or network elements
    • H04L41/0893Assignment of logical groups to network elements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08Configuration management of networks or network elements
    • H04L41/0896Bandwidth or capacity management, i.e. automatically increasing or decreasing capacities
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W28/00Network traffic management; Network resource management
    • H04W28/16Central resource management; Negotiation of resources or communication parameters, e.g. negotiating bandwidth or QoS [Quality of Service]

Definitions

  • The present invention relates to the field of communications, and in particular to a network slicing method, a computer device, and a storage medium.
  • 5G services span multiple scenarios with differing characteristics. For example, autonomous driving requires low latency and bounded jitter, industrial control imposes strict reliability requirements, mobile Internet services focus on bandwidth, and IoT services must support a huge number of connections. Building a separate network for each service would be prohibitively expensive.
  • Network slicing technology builds an independent end-to-end logical network for each type of service within one physical network, with the slices logically isolated from one another on the control plane, forwarding plane, and operation plane. Slicing therefore reduces the cost of physical infrastructure while providing differentiated services, ensures that each service obtains bearing that matches its characteristics, and aids the secure management of equipment and storage resources.
  • the 5G bearer network is part of the 5G end-to-end business path.
  • Each bearer network slice is like an independent physical network.
  • Bearer-network slicing virtualizes the topological resources of the network (such as links, nodes, ports, and resources inside network elements) and organizes them on demand into multiple virtual networks, vNets (i.e., slice networks).
  • The forwarding plane can determine the slicing method according to service requirements, that is, slices are divided according to physical resources. Specifically, soft slicing schemes can be used, such as IP/MPLS-based tunnels/pseudowires and virtualization technologies based on VPN or VLAN, as can hard slicing schemes such as FlexE, OTN, and WDM multi-channel transport; hard and soft slicing schemes can also be mixed.
  • the hard slicing method ensures the isolation of services and low latency.
  • the soft slicing method supports bandwidth reuse of services.
  • However, existing slicing divides resources physically, which leads to coarse slicing granularity, low resource utilization, and bandwidth resources that cannot be adjusted dynamically.
  • The main purpose of the present invention is to propose a network slicing method, apparatus, computer device, and storage medium that overcome the coarse slicing granularity, low resource utilization, and non-adjustable bandwidth resources of prior-art slicing based on physical resources.
  • A method of network slicing comprises: configuring a network slice adapted to a service; associating the service with a corresponding priority queue based on the network slice; and scheduling the service associated with the priority queue based on the bandwidth resources of the priority queue.
  • An apparatus for network slicing includes: a configuration module for configuring a network slice adapted to a service; an association module for associating the service with a corresponding priority queue based on the network slice; and a scheduling module for scheduling the service associated with the priority queue based on the bandwidth resources of the priority queue.
  • a computer device including a processor and a memory
  • the memory is used to store computer instructions, and the processor is used to run the computer instructions stored in the memory to implement the above-mentioned network slicing method.
  • A computer-readable storage medium stores one or more programs, and the one or more programs can be executed by one or more processors to implement the above-mentioned network slicing method.
  • FIG. 1 is a flow block diagram of a network slicing method according to a first embodiment of the present invention;
  • FIG. 2 is a flow block diagram of a network slicing method according to a second embodiment of the present invention;
  • FIG. 3 is a flow block diagram of a network slicing method according to a third embodiment of the present invention;
  • FIG. 4 is a flow block diagram of a network slicing method according to a fourth embodiment of the present invention;
  • FIG. 5 is a flow block diagram of a network slicing method according to a fifth embodiment of the present invention;
  • FIG. 6 is a flow block diagram of a sixth embodiment of the present invention;
  • FIG. 7 is a simplified networking diagram of the bearer network in the sixth embodiment of the present invention;
  • FIG. 8 is a schematic diagram of the service flow of PE1 in the sixth embodiment of the present invention;
  • FIG. 9 shows the priority queues included in PE1 according to the sixth embodiment of the present invention;
  • FIG. 10 is a schematic structural diagram of a network slicing apparatus according to a seventh embodiment of the present invention.
  • a first embodiment of the present invention provides a method for network slicing.
  • the method includes: configuring a network slice adapted to a service; based on the network slice, associating the service with a corresponding priority queue; based on the The bandwidth resources of the priority queue schedule the services associated with the priority queue.
  • The service is associated with the corresponding priority queue, so that identical services are allocated to the priority queue of the same priority, and each priority queue possesses bandwidth resources at the bit rate corresponding to its priority; scheduling the services associated with the priority queues according to queue priority, i.e., priority scheduling, is thereby realized.
  • the service forwarding through the priority queue makes the service forwarding process have the advantages of finer granularity, higher utilization of bandwidth resources, and dynamic adjustment of bandwidth resources.
  • FIG. 1 is a flowchart of a network slicing method according to a first embodiment of the present invention.
  • the first embodiment of the present invention provides a network slicing method, which includes:
  • First, after an end-to-end service is established, a corresponding network slice needs to be configured for it, and the configured network slice is adapted to that service. That is, in this embodiment, different services can each be given a corresponding, adapted network slice.
  • The services include, but are not limited to, ultra-reliable low-latency communication (uRLLC), enhanced mobile broadband (eMBB), and massive machine-type communication (mMTC). For example, network slice 1 is configured for uRLLC, network slice 2 for eMBB, and network slice 3 for mMTC.
  • In this embodiment, the network slices are isolated from one another; a minimal mapping sketch follows.
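  • The following is a minimal, illustrative sketch (in Python, not part of the original disclosure) of the service-to-slice-to-queue mapping just described; the slice and queue names are assumptions used only for illustration.
```python
# Illustrative sketch: map each service type to its adapted network slice
# and to a priority-queue class, as described in this embodiment.
SERVICE_TO_SLICE = {
    "uRLLC": "slice1",   # ultra-reliable low-latency communication
    "eMBB":  "slice2",   # enhanced mobile broadband
    "mMTC":  "slice3",   # massive machine-type communication
}

SLICE_TO_QUEUE = {
    "slice1": "high",    # highest-priority queue
    "slice2": "medium",
    "slice3": "low",
}

def queue_for_service(service_type: str) -> str:
    """Return the priority-queue class a service is associated with via its slice."""
    return SLICE_TO_QUEUE[SERVICE_TO_SLICE[service_type]]

assert queue_for_service("uRLLC") == "high"
```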
  • The network slice adapted to the service is used to associate that service with the corresponding priority queue.
  • Identical services are thereby all associated with the priority queue of the same priority.
  • In this embodiment, the priority queue may draw traffic from multiple ports or only from a specific port.
  • each priority queue has a corresponding priority, and the priority of each priority queue is different.
  • the priority queue with a higher priority has a larger bandwidth resource.
  • uRLLC adapted for network slice 1 is associated with a higher priority queue
  • eMBB adapted for network slice 2 is associated with a medium priority queue
  • mMTC adapted for network slice 3 is associated with a lower priority queue.
  • the priority of the high priority queue is higher than the priority of the medium priority queue
  • the priority of the medium priority queue is higher than the priority of the low priority queue.
  • In this embodiment, a priority queue is a collection of zero or more elements, each carrying a priority. The operations performed on a priority queue are (1) search, (2) insert a new element, and (3) delete.
  • In general, the search operation finds the element with the highest priority, and the delete operation removes that element.
  • Elements with the same priority can be processed in first-in-first-out order or in any order; a minimal sketch of this abstract data type follows.
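  • As an illustration only (not part of the original disclosure), the priority-queue abstract data type described above can be sketched in Python with a binary heap; the first-in-first-out tie-break for equal priorities is one of the two options mentioned.
```python
import heapq
import itertools

class PriorityQueue:
    """Sketch of the priority-queue ADT described above: insert, search the
    highest-priority element, and delete it; equal priorities are served FIFO."""

    def __init__(self):
        self._heap = []                     # min-heap of (-priority, seq, item)
        self._seq = itertools.count()       # tie-breaker giving FIFO order

    def insert(self, item, priority: int) -> None:
        heapq.heappush(self._heap, (-priority, next(self._seq), item))

    def search(self):
        """Return (without removing) the element with the highest priority."""
        return self._heap[0][2] if self._heap else None

    def delete(self):
        """Remove and return the element with the highest priority."""
        return heapq.heappop(self._heap)[2] if self._heap else None

pq = PriorityQueue()
pq.insert("eMBB packet", priority=2)
pq.insert("uRLLC packet", priority=3)
assert pq.delete() == "uRLLC packet"
```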
  • S13 Scheduling the service associated with the priority queue based on the bandwidth resources of the priority queue.
  • the service associated with the priority queue is scheduled through the bandwidth resource to which each priority queue belongs.
  • Because a higher-priority queue is allotted a larger bandwidth resource, it is sufficient to associate the higher-priority service, through its corresponding network slice, with the priority queue of the matching level; the service is then scheduled against that queue's larger bandwidth resource, so the services associated with the priority queues are scheduled in order of queue priority.
  • In this way, identical services are allocated to the priority queue of the same priority, each priority queue possesses bandwidth resources at the bit rate corresponding to its priority, and priority-based scheduling of the services associated with the priority queues is realized.
  • the service forwarding through the priority queue makes the service forwarding process have the advantages of finer granularity, higher utilization of bandwidth resources, and dynamic adjustment of bandwidth resources.
  • FIG. 2 is a flowchart of a network slicing method according to a second embodiment of the present invention.
  • a second embodiment of the present invention provides a method for network slicing. The method includes:
  • First, after end-to-end services are established, the established services are classified, for example into uRLLC services, eMBB services, and mMTC services.
  • The types of services include, but are not limited to, uRLLC, eMBB, and mMTC.
  • a corresponding network slice is configured for each type of service, and the configured network slice is adapted to the corresponding service. That is, in this embodiment, corresponding and adapted network slices can be configured for different types of services, respectively.
  • network slice 1 for uRLLC
  • network slice 2 for eMBB
  • network slice 3 for mMTC.
  • each network slice is isolated from each other.
  • The network slice adapted to the service is used to associate that service with the corresponding priority queue.
  • Identical services are thereby all associated with the priority queue of the same priority.
  • In this embodiment, the priority queue may draw traffic from multiple ports or only from a specific port.
  • each priority queue has a corresponding priority, and the priority of each priority queue is different.
  • the priority queue with a higher priority has a larger bandwidth resource.
  • uRLLC adapted for network slice 1 is associated with a higher priority queue
  • eMBB adapted for network slice 2 is associated with a medium priority queue
  • mMTC adapted for network slice 3 is associated with a lower priority queue.
  • the priority of the high priority queue is higher than the priority of the medium priority queue
  • the priority of the medium priority queue is higher than the priority of the low priority queue.
  • In this embodiment, a priority queue is a collection of zero or more elements, each carrying a priority. The operations performed on a priority queue are (1) search, (2) insert a new element, and (3) delete.
  • In general, the search operation finds the element with the highest priority, and the delete operation removes that element.
  • Elements with the same priority can be processed in first-in-first-out order or in any order.
  • S24 Scheduling the service associated with the priority queue based on the bandwidth resources of the priority queue
  • the service associated with the priority queue is scheduled through the bandwidth resource to which each priority queue belongs.
  • Because a higher-priority queue is allotted a larger bandwidth resource, it is sufficient to associate the higher-priority service, through its corresponding network slice, with the priority queue of the matching level; the service is then scheduled against that queue's larger bandwidth resource, so the services associated with the priority queues are scheduled in order of queue priority.
  • In this way, identical services are allocated to the priority queue of the same priority, each priority queue possesses bandwidth resources at the bit rate corresponding to its priority, and priority-based scheduling of the services associated with the priority queues is realized.
  • the service forwarding through the priority queue makes the service forwarding process have the advantages of finer granularity, higher utilization of bandwidth resources, and dynamic adjustment of bandwidth resources.
  • FIG. 3 is a flowchart of a network slicing method according to a third embodiment of the present invention.
  • a third embodiment of the present invention provides a method for network slicing. The method includes:
  • First, after end-to-end services are established, the established services are classified, for example into uRLLC services, eMBB services, and mMTC services.
  • The types of services include, but are not limited to, uRLLC, eMBB, and mMTC.
  • a corresponding network slice is configured for each type of service, and the configured network slice is adapted to the corresponding service. That is, in this embodiment, corresponding and adapted network slices can be configured for different types of services, respectively.
  • network slice 1 for uRLLC
  • network slice 2 for eMBB
  • network slice 3 for mMTC.
  • each network slice is isolated from each other.
  • S33: Associate the network slice and the priority queue that have the same priority; configure the service in the network slice adapted to the service, so as to associate the service with the corresponding priority queue.
  • The service configured in the network slice is thereby associated with the corresponding priority queue.
  • That is, the network slice adapted to the service is used to associate that service with the corresponding priority queue.
  • the same services are all associated with the priority queue of the same priority.
  • S34 Scheduling the service associated with the priority queue based on the bandwidth resources of the priority queue.
  • the service associated with the priority queue is scheduled through the bandwidth resource to which each priority queue belongs.
  • Because a higher-priority queue is allotted a larger bandwidth resource, it is sufficient to associate the higher-priority service, through its corresponding network slice, with the priority queue of the matching level; the service is then scheduled against that queue's larger bandwidth resource, so the services associated with the priority queues are scheduled in order of queue priority.
  • In this way, identical services are allocated to the priority queue of the same priority, each priority queue possesses bandwidth resources at the bit rate corresponding to its priority, and priority-based scheduling of the services associated with the priority queues is realized.
  • the service forwarding through the priority queue makes the service forwarding process have the advantages of finer granularity, higher utilization of bandwidth resources, and dynamic adjustment of bandwidth resources.
  • FIG. 4 is a flowchart of a network slicing method according to a fourth embodiment of the present invention. As shown in FIG. 4, a fourth embodiment of the present invention provides a method for network slicing. The method includes:
  • First, after end-to-end services are established, the established services are classified, for example into uRLLC services, eMBB services, and mMTC services.
  • The types of services include, but are not limited to, uRLLC, eMBB, and mMTC.
  • a corresponding network slice is configured for each type of service, and the configured network slice is adapted to the corresponding service. That is, in this embodiment, corresponding and adapted network slices can be configured for different types of services, respectively.
  • network slice 1 for uRLLC
  • network slice 2 for eMBB
  • network slice 3 for mMTC.
  • each network slice is isolated from each other.
  • S43: Associate the network slice and the priority queue that have the same priority; configure the service in the network slice adapted to the service, so as to associate the service with the corresponding priority queue.
  • The service configured in the network slice is thereby associated with the corresponding priority queue.
  • That is, the network slice adapted to the service is used to associate that service with the corresponding priority queue.
  • the same services are all associated with the priority queue of the same priority.
  • After the service is associated with the corresponding priority queue, a corresponding bandwidth resource is configured for the priority queue based on its priority, for example by using a dual token bucket algorithm. That is, the higher the priority of the queue, the larger the bandwidth resource configured for it.
  • the priority queue includes a high priority queue, a medium priority queue, and a low priority queue
  • the priority of the high priority queue is higher than that of the medium priority queue Priority
  • the priority of the medium priority queue is higher than the priority of the low priority queue.
  • When the priority queue is a high-priority queue, a committed information rate (CIR) bandwidth is configured for it; when the priority queue is a medium-priority queue, a CIR and an excess information rate (EIR) bandwidth are configured for it; and when the priority queue is a low-priority queue, an EIR is configured for it.
  • CIR: Committed Information Rate; EIR: Excess Information Rate. A dual token bucket sketch follows.
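  • The following is an illustrative sketch, under assumed rates and burst sizes, of a dual token bucket of the kind mentioned above and of how CIR-only, CIR+EIR, and EIR-only configurations could be expressed; it is not taken from the original disclosure.
```python
import time

class DualTokenBucket:
    """Sketch of a dual token bucket: a committed bucket filled at CIR and an
    excess bucket filled at EIR (rates in bytes/s, burst sizes in bytes).
    A queue class without a CIR (or without an EIR) simply sets that rate to 0."""

    def __init__(self, cir: float, cbs: float, eir: float, ebs: float):
        self.cir, self.cbs = cir, cbs
        self.eir, self.ebs = eir, ebs
        self.c_tokens, self.e_tokens = cbs, ebs
        self.last = time.monotonic()

    def _refill(self) -> None:
        now = time.monotonic()
        elapsed = now - self.last
        self.c_tokens = min(self.cbs, self.c_tokens + self.cir * elapsed)
        self.e_tokens = min(self.ebs, self.e_tokens + self.eir * elapsed)
        self.last = now

    def color(self, packet_len: int) -> str:
        """Green = within CIR, yellow = within EIR, red = exceeds both."""
        self._refill()
        if packet_len <= self.c_tokens:
            self.c_tokens -= packet_len
            return "green"
        if packet_len <= self.e_tokens:
            self.e_tokens -= packet_len
            return "yellow"
        return "red"

# Illustrative per-class configuration: CIR only for the high-priority queue,
# CIR + EIR for the medium-priority queue, EIR only for the low-priority queue.
buckets = {
    "high":   DualTokenBucket(cir=50e6, cbs=64_000, eir=0,    ebs=0),
    "medium": DualTokenBucket(cir=30e6, cbs=64_000, eir=20e6, ebs=64_000),
    "low":    DualTokenBucket(cir=0,    cbs=0,      eir=20e6, ebs=64_000),
}
```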
  • S45 Based on the configured bandwidth resources, schedule the service associated with the priority queue.
  • the service associated with the priority queue is scheduled through the bandwidth resource. Thereby, the priority queue-based priority is implemented, and the services associated with the priority queue are scheduled.
  • the service associated with the priority queue is scheduled through the bandwidth resource to which each priority queue belongs.
  • the priority queue includes a high priority queue, a medium priority queue, and a low priority queue
  • the priority of the high priority queue is higher than that of the medium priority queue Priority
  • the priority of the medium priority queue is higher than the priority of the low priority queue.
  • When the associated priority queues include the high-priority queue, the services associated with the high-priority queue are scheduled according to the CIR; when the associated priority queues include the medium-priority queue and do not include the high-priority queue, the services associated with the medium-priority queue are scheduled according to the CIR and EIR; and when the associated priority queues include only the low-priority queue, the services associated with the low-priority queue are scheduled according to the EIR.
  • Because a higher-priority queue is allotted a larger bandwidth resource, it is sufficient to associate the higher-priority service, through its corresponding network slice, with the priority queue of the matching level; the service is then scheduled against that queue's larger bandwidth resource, so the services associated with the priority queues are scheduled in order of queue priority.
  • In this way, identical services are allocated to the priority queue of the same priority, each priority queue possesses bandwidth resources at the bit rate corresponding to its priority, and priority-based scheduling of the services associated with the priority queues is realized.
  • the service forwarding through the priority queue makes the service forwarding process have the advantages of finer granularity, higher utilization of bandwidth resources, and dynamic adjustment of bandwidth resources.
  • FIG. 5 is a flowchart of a network slicing method according to a fifth embodiment of the present invention. As shown in FIG. 5, a fifth embodiment of the present invention provides a method for network slicing. The method includes:
  • First, after end-to-end services are established, the established services are classified, for example into uRLLC services, eMBB services, and mMTC services.
  • The types of services include, but are not limited to, uRLLC, eMBB, and mMTC.
  • a corresponding network slice is configured for each type of service, and the configured network slice is adapted to the corresponding service. That is, in this embodiment, corresponding and adapted network slices can be configured for different types of services, respectively.
  • network slice 1 for uRLLC
  • network slice 2 for eMBB
  • network slice 3 for mMTC.
  • each network slice is isolated from each other.
  • S53: Associate the network slice and the priority queue that have the same priority; configure the service in the network slice adapted to the service, so as to associate the service with the corresponding priority queue.
  • The service configured in the network slice is thereby associated with the corresponding priority queue.
  • That is, the network slice adapted to the service is used to associate that service with the corresponding priority queue.
  • the same services are all associated with the priority queue of the same priority.
  • Specifically, in this embodiment, each priority queue carries a priority identifier.
  • Alternatively, each priority queue may be marked with the priority identifier when the network slice and the priority queue having the same priority are associated, or when the service is configured in the network slice adapted to it.
  • In this embodiment, the specific content of the priority identifier and the moment at which it is set are not limited; they only need to meet the requirements of this embodiment.
  • The priority of a priority queue is determined according to its priority identifier, for example, whether the queue is a high-priority, medium-priority, or low-priority queue; a small mapping sketch follows.
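  • As a purely hypothetical illustration (the disclosure does not fix how the priority identifier is encoded), the CS1/CS2/CS3 markings used later in the sixth embodiment can stand in for the identifier:
```python
# Illustrative only: the priority identifier's encoding is not specified in the text;
# here the CS1/CS2/CS3 markings of the sixth embodiment stand in for it.
PRIORITY_ID_TO_QUEUE = {"CS1": "high", "CS2": "medium", "CS3": "low"}

def queue_class(priority_id: str) -> str:
    """Determine whether a queue is the high-, medium-, or low-priority queue."""
    return PRIORITY_ID_TO_QUEUE[priority_id]

assert queue_class("CS2") == "medium"
```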
  • After the service is associated with the corresponding priority queue, a corresponding bandwidth resource is configured for the priority queue based on its priority, for example by using a dual token bucket algorithm. That is, the higher the priority of the queue, the larger the bandwidth resource configured for it.
  • the priority queue includes a high priority queue, a medium priority queue, and a low priority queue
  • the priority of the high priority queue is higher than that of the medium priority queue Priority
  • the priority of the medium priority queue is higher than the priority of the low priority queue.
  • When the priority queue is a high-priority queue, a committed information rate (CIR) bandwidth is configured for it; when the priority queue is a medium-priority queue, a CIR and an excess information rate (EIR) bandwidth are configured for it; and when the priority queue is a low-priority queue, an EIR is configured for it.
  • CIR: Committed Information Rate; EIR: Excess Information Rate.
  • the service associated with the priority queue is scheduled through the bandwidth resource. Thereby, the priority queue-based priority is implemented, and the services associated with the priority queue are scheduled.
  • the service associated with the priority queue is scheduled through the bandwidth resource to which each priority queue belongs.
  • the priority queue includes a high priority queue, a medium priority queue, and a low priority queue
  • the priority of the high priority queue is higher than that of the medium priority queue Priority
  • the priority of the medium priority queue is higher than the priority of the low priority queue.
  • When the associated priority queues include the high-priority queue, the services associated with the high-priority queue are scheduled according to the CIR; when the associated priority queues include the medium-priority queue and do not include the high-priority queue, the services associated with the medium-priority queue are scheduled according to the CIR and EIR; and when the associated priority queues include only the low-priority queue, the services associated with the low-priority queue are scheduled according to the EIR.
  • Moreover, after the services associated with the high-priority queue have been scheduled according to the CIR, if the associated priority queues also include the medium-priority queue, the services associated with the medium-priority queue are scheduled according to the CIR and EIR; and after the services associated with the medium-priority queue have been scheduled according to the CIR and EIR, if the associated priority queues also include the low-priority queue, the services associated with the low-priority queue are scheduled according to the remaining EIR. A scheduling sketch that follows this order is given below.
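  • The following Python sketch illustrates one possible reading of this scheduling order (committed rates in priority order first, then excess rates, with leftover EIR bandwidth going to the low-priority queue); the port rate, demands, and CIR/EIR values are assumptions, not values from the disclosure.
```python
def schedule(port_rate: float, demand: dict, conf: dict) -> dict:
    """Grant bandwidth per queue: serve CIRs in priority order, then EIRs,
    handing any EIR bandwidth left after the medium queue to the low queue.
    Rates are in Mbit/s and purely illustrative."""
    remaining = port_rate
    granted = {q: 0.0 for q in demand}

    # 1. Committed bandwidth, highest priority first.
    for q in ("high", "medium", "low"):
        give = min(demand[q] - granted[q], conf[q].get("cir", 0.0), remaining)
        granted[q] += give
        remaining -= give

    # 2. Excess bandwidth: medium queue first, remainder to the low queue.
    for q in ("medium", "low"):
        give = min(demand[q] - granted[q], conf[q].get("eir", 0.0), remaining)
        granted[q] += give
        remaining -= give

    return granted

conf = {"high": {"cir": 50}, "medium": {"cir": 30, "eir": 20}, "low": {"eir": 20}}
print(schedule(port_rate=100, demand={"high": 60, "medium": 60, "low": 60}, conf=conf))
# -> high is capped at its CIR (50), medium gets CIR+EIR (30+20), low gets what is left (0)
```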
  • Because a higher-priority queue is allotted a larger bandwidth resource, it is sufficient to associate the higher-priority service, through its corresponding network slice, with the priority queue of the matching level; the service is then scheduled against that queue's larger bandwidth resource, so the services associated with the priority queues are scheduled in order of queue priority.
  • In this way, identical services are allocated to the priority queue of the same priority, each priority queue possesses bandwidth resources at the bit rate corresponding to its priority, and priority-based scheduling of the services associated with the priority queues is realized.
  • the service forwarding through the priority queue makes the service forwarding process have the advantages of finer granularity, higher utilization of bandwidth resources, and dynamic adjustment of bandwidth resources.
  • 5G services span multiple scenarios with differing characteristics. For example, autonomous driving requires low latency and bounded jitter, industrial control imposes strict reliability requirements, mobile Internet services focus on bandwidth, and IoT services must support a huge number of connections. Building a separate network for each service would be prohibitively expensive.
  • Network slicing technology builds an independent end-to-end logical network for each type of service within one physical network, with the slices logically isolated from one another on the control plane, forwarding plane, and operation plane. Slicing therefore reduces the cost of physical infrastructure while providing differentiated services, ensures that each service obtains bearing that matches its characteristics, and aids the secure management of equipment and storage resources.
  • the 5G bearer network is part of the 5G end-to-end service path.
  • Each bearer network slice is like an independent physical network.
  • Bearer-network slicing virtualizes the topological resources of the network (such as links, nodes, ports, and resources inside network elements) and organizes them on demand into multiple virtual networks, vNets (i.e., slice networks).
  • the forwarding plane can determine the slicing method according to business needs.
  • Soft slicing schemes can be used, such as IP/MPLS-based tunnels/pseudowires and virtualization technologies based on VPN, VLAN, and the like; hard slicing schemes can also be used, such as flexible Ethernet (FlexE), OTN, and WDM multi-channel transport; hard and soft slicing schemes can also be mixed.
  • the hard slicing method ensures business isolation security and low latency.
  • the soft slicing method supports bandwidth reuse of services.
  • FIG. 6 is a flow block diagram in the sixth embodiment of the present invention
  • FIG. 7 is a simplified networking diagram of the bearer network in the sixth embodiment of the present invention
  • FIG. 8 is a schematic diagram of the service trend of PE1 in the sixth embodiment of the present invention
  • FIG. 9 shows the priority queues included in PE1 in the sixth embodiment of the present invention.
  • PE1 and PE2 are edge devices
  • P is an intermediate device
  • CE1 and CE2 are client devices.
  • a sixth embodiment of the present invention provides a method for network slicing.
  • the method is applied to schedule services based on network slicing in PE1.
  • the method includes :
  • the S1 includes:
  • S11 The physical network of PE1 is divided into three types of network slices, and the network slices are isolated from each other.
  • one or more of the physical network of PE1, the physical network of PE2, and the physical network of P may also be divided into three types of network slices.
  • the three types of network slices include: network slice 1, network slice 2, and network slice 3.
  • the priority of network slice 1 is greater than that of network slice 2, and the priority of network slice 2 is greater than that of network slice 3.
  • port1, port2, and port3 are physical ports of the device, which are used to receive or establish services.
  • port4 is the physical port of the device and is used to output services.
  • the three types of services include: service 1, service 2, and service 3.
  • Service 1 is ultra-reliable low-latency communication (uRLLC), service 2 is enhanced mobile broadband (eMBB), and service 3 is massive machine-type communication (mMTC).
  • the priority of uRLLC is greater than that of eMBB, and the priority of eMBB is greater than that of mMTC.
  • the service 1, the service 2, and the service 3 correspond to the above-mentioned network slice 1, network slice 2, and network slice 3, respectively.
  • the S2 includes:
  • S21: Associate the network slice and the priority queue that have the same priority; configure the service in the network slice adapted to the service, so as to associate the service with the corresponding priority queue.
  • PE1 includes a high-priority queue, a medium-priority queue, and a low-priority queue, where the priority of the high-priority queue is greater than the priority of the medium-priority queue, the medium-priority queue The priority of is greater than the priority of the low priority queue.
  • The high-priority queue is first associated with network slice 1, the medium-priority queue with network slice 2, and the low-priority queue with network slice 3. Then uRLLC is configured in network slice 1, eMBB in network slice 2, and mMTC in network slice 3.
  • As a result, the high-priority queue is associated with uRLLC, the medium-priority queue with eMBB, and the low-priority queue with mMTC.
  • S22 Record the high priority queue as CS1, the medium priority queue as CS2, and the low priority queue as CS3;
  • the dual token bucket algorithm is used to set the bandwidth resources of queues at different levels, that is, CIR is configured for high-priority queues, CIR and EIR are configured for medium-priority queues, and EIR is configured for low-priority queues.
  • the priority queue is scheduled at the exit port 4 of PE1.
  • When the priority queues required by the detected service packets include the medium-priority queue, the services to be scheduled include the eMBB corresponding to that queue. In this case, even if the medium-priority queue coexists with queues of other levels, the eMBB corresponding to the medium-priority queue is scheduled first according to the CIR and EIR of the medium-priority queue; specifically, the eMBB is scheduled against a bandwidth resource equal to the maximum obtainable EIR plus the CIR of CS2.
  • After the CIR of the high-priority queue and the CIR and EIR of the medium-priority queue have been satisfied, the remaining bandwidth resources are used to schedule the mMTC corresponding to the low-priority queue; a dequeue sketch for the egress port follows.
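  • As an illustrative sketch only (not part of the original disclosure), the strict-priority dequeue order CS1 > CS2 > CS3 at the egress port could look as follows; rate limiting by CIR/EIR (see the earlier token-bucket sketch) would be applied separately.
```python
from collections import deque

# Illustrative egress queues at port4 of PE1, keyed by the CS1/CS2/CS3 markings.
queues = {"CS1": deque(), "CS2": deque(), "CS3": deque()}

def dequeue_next():
    """Return the next packet to transmit, in strict priority order CS1 > CS2 > CS3."""
    for cls in ("CS1", "CS2", "CS3"):
        if queues[cls]:
            return cls, queues[cls].popleft()
    return None

queues["CS2"].append("eMBB frame")
queues["CS1"].append("uRLLC frame")
assert dequeue_next() == ("CS1", "uRLLC frame")   # uRLLC always leaves the port first
```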
  • In addition, when this step is executed, the chip in the PE1 device also processes the queues in descending order of priority, so the processing delay of the services increases from the high-priority to the low-priority queue.
  • Furthermore, priority queues of the same level on multiple ports can be allocated to the same type of service in order to obtain greater bandwidth.
  • In this way, by means of the network slice adapted to the service, the service is associated with the corresponding priority queue, so that identical services are allocated to the priority queue of the same priority, and each priority queue possesses bandwidth resources at the bit rate corresponding to its priority.
  • Priority-based scheduling of the services associated with the priority queues is thus realized; that is, the scheduling capability of the priority queues is exploited so that, in the event of congestion, high-reliability services are guaranteed to be scheduled first while port bandwidth is used to the greatest extent.
  • the service forwarding through the priority queue makes the service forwarding process have the advantages of finer granularity, higher utilization of bandwidth resources, and dynamic adjustment of bandwidth resources.
  • FIG. 10 is a schematic structural diagram of a network slicing apparatus according to a seventh embodiment of the present invention.
  • As shown in FIG. 10, a seventh embodiment of the present invention provides an apparatus for network slicing.
  • The apparatus includes: a configuration module 110 for configuring a network slice adapted to a service; an association module 210 for associating the service with a corresponding priority queue based on the network slice; and a scheduling module 310 for scheduling the service associated with the priority queue based on the bandwidth resources of the priority queue.
  • the configuration module 110 is specifically configured to: classify the services; configure the adapted network slice for each type of the services.
  • the association module 210 includes: a first association unit 211 for associating the network slice and the priority queue with the same priority; a second association unit 212 for associating the service It is configured in the network slice adapted to the service to associate the service with the corresponding priority queue.
  • The scheduling module 310 includes: a bandwidth configuration unit 311 for configuring corresponding bandwidth resources for the priority queue based on the priority of the priority queue; and a resource scheduling unit 312 for scheduling the services associated with the priority queue based on the configured bandwidth resources.
  • the bandwidth configuration unit 311 is specifically configured to: when the priority queue is a high priority queue, configure CIR for the priority queue; when the priority queue is a medium priority queue In the case, CIR and EIR are configured for the priority queue; when the priority queue is a low priority queue, EIR is configured for the priority queue.
  • The resource scheduling unit 312 is specifically configured to: when the associated priority queues include a high-priority queue, schedule the services associated with the high-priority queue according to the CIR; when the associated priority queues include a medium-priority queue and do not include the high-priority queue, schedule the services associated with the medium-priority queue according to the CIR and EIR; and when the associated priority queues include only a low-priority queue, schedule the services associated with the low-priority queue according to the EIR.
  • The resource scheduling unit 312 is further configured to: after scheduling the services associated with the high-priority queue according to the CIR, if the associated priority queues further include a medium-priority queue, schedule the services associated with the medium-priority queue according to the CIR and EIR; and after scheduling the services associated with the medium-priority queue according to the CIR and EIR, if the associated priority queues further include a low-priority queue, schedule the services associated with the low-priority queue according to the remaining EIR.
  • the priority judgment method of the priority queue includes: marking the priority queue by a priority identifier; determining that the priority queue is a high priority queue and a medium priority according to the priority identifier Level queue, or low priority queue.
  • the scheduling module implements the priority based on the priority queue and schedules the services associated with the priority queue.
  • the service forwarding through the priority queue makes the service forwarding process have the advantages of finer granularity, higher utilization of bandwidth resources, and dynamic adjustment of bandwidth resources.
  • An eighth embodiment of the present invention provides a computer device, including a processor and a memory; the memory is used to store computer instructions, and the processor is used to run the computer instructions stored in the memory to implement the above-mentioned network slicing Methods.
  • The ninth embodiment of the present invention provides a computer-readable storage medium that stores one or more modules, and the one or more modules can be executed by one or more processors to implement the above-mentioned network slicing method.
  • The beneficial effects of the embodiments of the present invention are as follows: by means of the network slice adapted to the service, the service is associated with the corresponding priority queue, so that identical services are allocated to the priority queue of the same priority;
  • each priority queue possesses bandwidth resources at the bit rate corresponding to its priority, thereby realizing priority-based scheduling of the services associated with the priority queues.
  • the service forwarding through the priority queue makes the service forwarding process have the advantages of finer granularity, higher utilization of bandwidth resources, and dynamic adjustment of bandwidth resources.
  • The methods in the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware, although in many cases the former is the better implementation.
  • Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions to enable a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the method described in each embodiment of the present invention.

Abstract

The present invention discloses a network slicing method, a computer device, and a storage medium. The method comprises: configuring a network slice adapted to a service; associating the service with a corresponding priority queue based on the network slice; and scheduling the service associated with the priority queue based on the bandwidth resources of the priority queue.

Description

一种网络切片的方法、计算机设备及存储介质
相关申请的交叉引用
本申请基于申请号为201811229677.7、申请日为2018年10月22日的中国专利申请提出,并要求该中国专利申请的优先权,该中国专利申请的全部内容在此引入本申请作为参考。
技术领域
本发明涉及通信领域,尤其涉及一种网络切片的方法、计算机设备及存储介质。
背景技术
5G业务具有多场景、差异性的特点,比如:自动驾驶业务需要低延时和抖动保障,工业控制对可靠性要求苛刻,移动上网业务聚焦带宽,物联网业务要支持巨大的连接数量。如果为每种业务都建立一张网络,成本将非常高昂。网络切片技术,在一张物理网络中针对不同类型业务构建独立的端到端逻辑网络,网络切片之间在控制面、转发面、操作面上实现逻辑隔离。这样,通过切片技术可以降低物理成本,同时提供差异化的服务,保证每种业务都能根据其业务特点得到最佳承载要求,同时有助于设备和存储资源的安全管理。
5G承载网是5G端到端业务路径的一部分。每个承载网切片就象一个独立的物理网络。承载网网络切片是通过对网络的拓扑资源(如链路、节点、端口及网元内部资源)进行虚拟化,按需组织形成多个虚拟网络vNet(即切片网络)。转发面可根据业务需求确定切片方式,即:根据物理资源进行切片划分。具体的,可以采用软切片方案,如基于IP/MPLS的隧道/伪线,基于VPN、VLAN等的虚拟化技术;也可以采用硬切片方案,如灵活以太网技术FlexE、OTN技术、WDM的多传送通道等;也可以混合采用硬切片、软切片的方案,硬切片方式保证业务的隔离安全、低时延等需求,软切片方式支持业务的带宽复用。
但是,现有的根据物理资源进行切片划分,具有切片力度较大、资源利用率不高、及带宽资源不可动态调整的问题。
发明内容
本发明的主要目的在于提出一种网络切片的方法、装置、计算机设备及存储介质,其克服了现有技术中根据物理资源进行切片划分时,具有的切片力度较大、资源利用率不高、及带宽资源不可动态调整的问题。
根据本发明的第一个方面,提供了一种网络切片的方法,所述方法包括:配置与业务适配的网络切片;基于所述网络切片,将所述业务与对应的优先级队列进行关联;基于所述优先级队列的带宽资源,对所述优先级队列关联的所述业务进行调度。
根据本发明的第二个方面,提供了一种网络切片的装置,所述装置包括:配置模块,用于配置与业务适配的网络切片;关联模块,用于基于所述网络切片,将所述业务与对应的优先级队列进行关联;调度模块,用于基于所述优先级队列的带宽资源,对所述优先级队列关联的所述业务进行调度。
根据本发明的第三个方面,提供了一种计算机设备,包括处理器和存储器;
所述存储器用于存储计算机指令,所述处理器用于运行所述存储器存储的计算机指 令,以实现上述的一种网络切片的方法。
根据本发明的第四个方面,提供了一种计算机可读存储介质,所述计算机可读存储介质存储有一个或者多个程序,所述一个或者多个程序可被一个或者多个处理器执行,以实现上述的一种网络切片的方法。
附图说明
图1为本发明第一实施例一种网络切片的方法的流程框图;
图2为本发明第二实施例一种网络切片的方法的流程框图;
图3为本发明第三实施例一种网络切片的方法的流程框图;
图4为本发明第四实施例一种网络切片的方法的流程框图;
图5为本发明第五实施例一种网络切片的方法的流程框图;
图6为本发明第六实施例中的流程框图;
图7为本发明第六实施例中承载网的简化组网图;
图8为本发明第六实施例中PE1的业务走向示意图;
图9为本发明第六实施例PE1中包含的优先级队列。
图10为本发明第七实施例一种网络切片的装置的结构示意图
本发明目的的实现、功能特点及优点将结合实施例,参照附图做进一步说明。
具体实施方式
应当理解,此处所描述的具体实施例仅仅用以解释本发明,并不用于限定本发明。
在后续的描述中,使用用于表示元件的诸如“模块”、“部件”或“单元”的后缀仅为了有利于本发明的说明,其本身没有特定的意义。因此,“模块”、“部件”或“单元”可以混合地使用。
为了便于理解本发明实施例,下面通过几个具体实施例对本发明的实施过程进行详细的阐述。
本发明第一实施例提供一种网络切片的方法,所述方法包括:配置与业务适配的网络切片;基于所述网络切片,将所述业务与对应的优先级队列进行关联;基于所述优先级队列的带宽资源,对所述优先级队列关联的所述业务进行调度。
就此,通过将与业务适配的网络切片,将业务与对应的优先级队列进行关联,从而实现了将相同的业务分配至相同优先级的优先级队列中,而优先级队列因其优先级具备相应比特率的带宽资源,从而,实现了基于优先级队列的优先级,对优先级队列关联的业务进行调度,即优先级调度。而且,通过优先级队列进行业务转发,使得该业务转发过程具有颗粒度更加精细、带宽资源利用率更高、带宽资源可动态调整的优点。
图1为本发明第一实施例一种网络切片的方法的流程框图。根据图1所示,本发明第一实施例提供了一种网络切片的方法,所述方法包括:
S11:配置与业务适配的网络切片;
首先,在建立端到端的业务后,需要为该业务配置对应的网络切片,而且,该配置的网络切片与对应的业务适配。即:在本实施例中,可以实现为不同的业务分别配置相应且适配的网络切片。
其中,该业务包括但不限于:超可靠低时延通信uRLLC、增强型移动宽带eMBB、大规 模机器类通信mMTC。而且,为uRLLC配置网络切片1,为eMBB配置网络切片2,为mMTC配置网络切片3。
当然,在本实施例中,各个网络切片之间相互隔离。
S12:基于所述网络切片,将所述业务与对应的优先级队列进行关联;
在本实施例中,通过与业务适配的网络切片,将该业务与对应的优先级队列进行关联。从而实现了将相同的业务均关联至同一优先级的优先级队列中。
针对该优先级队列,在本实施例中,其可以是来自多个端口,也可只来自某个特定端口。
而且,每个优先级队列均具有对应的优先级,每个优先级队列的优先级均不相同,优先级越高的优先级队列,所属的带宽资源越大。
如:为网络切片1适配的uRLLC关联高等优先级队列,为网络切片2适配的eMBB关联中等优先级队列,为网络切片3适配的mMTC关联低等优先级队列。其中,高等优先级队列的优先级高于中等优先级队列的优先级,中等优先级队列的优先级高于低等优先级队列的优先级。
在本实施例中,该优先级队列是0个或多个元素的集合,每个元素都有一个优先权,对优先级队列执行的操作有(1)查找(2)插入一个新元素(3)删除一般情况下,查找操作用来搜索优先权最大的元素,删除操作用来删除该元素。对于优先权相同的元素,可按先进先出次序处理或按任意优先权进行。
S13:基于所述优先级队列的带宽资源,对所述优先级队列关联的所述业务进行调度。
在将网络切片适配的业务与对应的优先级队列关联后,在本实施例中,通过每个优先级队列所属的带宽资源,将该优先级队列关联的业务进行调度。
因为优先级队列的优先级越高,该优先级队列所属的带宽资源越大,所以,只需通过相应的网络切片,将优先级越高的业务与对应优先级级别的优先级队列进行关联,就可以根据该优先级队列的大带宽资源对该业务进行调度,从而实现了根据优先级队列的优先级的高低,将优先级队列关联的业务进行依次调度。
就此,通过将与业务适配的网络切片,将业务与对应的优先级队列进行关联,从而实现了将相同的业务分配至相同优先级的优先级队列中,而优先级队列因其优先级具备相应比特率的带宽资源,从而,实现了基于优先级队列的优先级,对优先级队列关联的业务进行调度。而且,通过优先级队列进行业务转发,使得该业务转发过程具有颗粒度更加精细、带宽资源利用率更高、带宽资源可动态调整的优点。
图2为本发明第二实施例一种网络切片的方法的流程框图。根据图2所示,本发明第二实施例提供了一种网络切片的方法,所述方法包括:
S21:对所述业务进行分类;
首先,在建立端到端的业务后,对该业务进行分类。
如:在建立端到端的业务后,将建立的业务进行分类,得到种类为uRLLC的业务、业务eMBB、mMTC业务。
当然,该业务的种类包括但不限于:uRLLC、源eMBB、mMTC。
S22:为每类所述业务配置适配的所述网络切片;
在得到经分类之后的业务后,为每类业务配置对应的网络切片,而且,该配置的网络切片与对应的业务适配。即:在本实施例中,可以实现为不同种类的业务分别配置相应且适配的网络切片。
如:为uRLLC配置网络切片1,为eMBB配置网络切片2,为mMTC配置网络切片3。
当然,在本实施例中,各个网络切片之间相互隔离。
S23:基于所述网络切片,将所述业务与对应的优先级队列进行关联;
在本实施例中,通过与业务适配的网络切片,将该业务与对应的优先级队列进行关联。从而实现了将相同的业务均关联至同一优先级的优先级队列中。
针对该优先级队列,在本实施例中,其可以是来自多个端口,也可只来自某个特定端口。
而且,每个优先级队列均具有对应的优先级,每个优先级队列的优先级均不相同,优先级越高的优先级队列,所属的带宽资源越大。
如:为网络切片1适配的uRLLC关联高等优先级队列,为网络切片2适配的eMBB关联中等优先级队列,为网络切片3适配的mMTC关联低等优先级队列。其中,高等优先级队列的优先级高于中等优先级队列的优先级,中等优先级队列的优先级高于低等优先级队列的优先级。
在本实施例中,该优先级队列是0个或多个元素的集合,每个元素都有一个优先权,对优先级队列执行的操作有(1)查找(2)插入一个新元素(3)删除一般情况下,查找操作用来搜索优先权最大的元素,删除操作用来删除该元素。对于优先权相同的元素,可按先进先出次序处理或按任意优先权进行。
S24:基于所述优先级队列的带宽资源,对所述优先级队列关联的所述业务进行调度;
在将网络切片适配的业务与对应的优先级队列关联后,在本实施例中,通过每个优先级队列所属的带宽资源,将该优先级队列关联的业务进行调度。
因为优先级队列的优先级越高,该优先级队列所属的带宽资源越大,所以,只需通过相应的网络切片,将优先级越高的业务与对应优先级级别的优先级队列进行关联,就可以根据该优先级队列的大带宽资源对该业务进行调度,从而实现了根据优先级队列的优先级的高低,将优先级队列关联的业务进行依次调度。
就此,通过将与业务适配的网络切片,将业务与对应的优先级队列进行关联,从而实现了将相同的业务分配至相同优先级的优先级队列中,而优先级队列因其优先级具备相应比特率的带宽资源,从而,实现了基于优先级队列的优先级,对优先级队列关联的业务进行调度。而且,通过优先级队列进行业务转发,使得该业务转发过程具有颗粒度更加精细、带宽资源利用率更高、带宽资源可动态调整的优点。
图3为本发明第三实施例一种网络切片的方法的流程框图。根据图3所示,本发明第三实施例提供了一种网络切片的方法,所述方法包括:
S31:对所述业务进行分类;
首先,在建立端到端的业务后,对该业务进行分类。
如:在建立端到端的业务后,将建立的业务进行分类,得到种类为uRLLC的业务、业务eMBB、mMTC业务。
当然,该业务的种类包括但不限于:uRLLC、源eMBB、mMTC。
S32:为每类所述业务配置适配的所述网络切片;
在得到经分类之后的业务后,为每类业务配置对应的网络切片,而且,该配置的网络切片与对应的业务适配。即:在本实施例中,可以实现为不同种类的业务分别配置相应且适配的网络切片。
如:为uRLLC配置网络切片1,为eMBB配置网络切片2,为mMTC配置网络切片3。
当然,在本实施例中,各个网络切片之间相互隔离。
S33:将具有相同优先级的所述网络切片及所述优先级队列进行关联;将所述业务配置在与所述业务适配的所述网络切片中,以将所述业务与对应的所述优先级队列进行关联;
在本实施例中,通过将具有相同优先级的所述网络切片及所述优先级队列进行关联,然后,将所述业务配置在与所述业务适配的所述网络切片中,从而,就实现了将配置于网络切片中的所述业务与对应的所述优先级队列进行关联。
即:在本实施例中,通过与业务适配的网络切片,将该业务与对应的优先级队列进行关联。从而实现了将相同的业务均关联至同一优先级的优先级队列中。
S34:基于所述优先级队列的带宽资源,对所述优先级队列关联的所述业务进行调度。
在将网络切片适配的业务与对应的优先级队列关联后,在本实施例中,通过每个优先级队列所属的带宽资源,将该优先级队列关联的业务进行调度。
因为优先级队列的优先级越高,该优先级队列所属的带宽资源越大,所以,只需通过相应的网络切片,将优先级越高的业务与对应优先级级别的优先级队列进行关联,就可以根据该优先级队列的大带宽资源对该业务进行调度,从而实现了根据优先级队列的优先级的高低,将优先级队列关联的业务进行依次调度。
就此,通过将与业务适配的网络切片,将业务与对应的优先级队列进行关联,从而实现了将相同的业务分配至相同优先级的优先级队列中,而优先级队列因其优先级具备相应比特率的带宽资源,从而,实现了基于优先级队列的优先级,对优先级队列关联的业务进行调度。而且,通过优先级队列进行业务转发,使得该业务转发过程具有颗粒度更加精细、带宽资源利用率更高、带宽资源可动态调整的优点。
图4为本发明第四实施例一种网络切片的方法的流程框图。根据图4所示,本发明第四实施例提供了一种网络切片的方法,所述方法包括:
S41:对所述业务进行分类;
首先,在建立端到端的业务后,对该业务进行分类。
如:在建立端到端的业务后,将建立的业务进行分类,得到种类为uRLLC的业务、业务eMBB、mMTC业务。
当然,该业务的种类包括但不限于:uRLLC、源eMBB、mMTC。
S42:为每类所述业务配置适配的所述网络切片;
在得到经分类之后的业务后,为每类业务配置对应的网络切片,而且,该配置的网络切片与对应的业务适配。即:在本实施例中,可以实现为不同种类的业务分别配置相应且适配的网络切片。
如:为uRLLC配置网络切片1,为eMBB配置网络切片2,为mMTC配置网络切片3。
当然,在本实施例中,各个网络切片之间相互隔离。
S43:将具有相同优先级的所述网络切片及所述优先级队列进行关联;将所述业务配置在与所述业务适配的所述网络切片中,以将所述业务与对应的所述优先级队列进行关联;
在本实施例中,通过将具有相同优先级的所述网络切片及所述优先级队列进行关联,然后,将所述业务配置在与所述业务适配的所述网络切片中,从而,就实现了将配置于网络切片中的所述业务与对应的所述优先级队列进行关联。
即:在本实施例中,通过与业务适配的网络切片,将该业务与对应的优先级队列进行关联。从而实现了将相同的业务均关联至同一优先级的优先级队列中。
S44:基于所述优先级队列的优先级,为所述优先级队列配置对应的带宽资源;
在将业务与对应的优先级队列进行关联后,基于优先级队列的优先级,为优先级队列配置对应的带宽资源。如:通过双令牌桶算法为优先级队列配置对应的带宽资源。即:若优先级队列的优先级越高,则为该优先级队列所配置的带宽资源越大。
具体的,若所述优先级队列包括:高等优先级队列、中等优先级队列、及低等优先级队列,而且,在本实施例中,高等优先级队列的优先级高于中等优先级队列的优先级,中等优先级队列的优先级高于低等优先级队列的优先级。
在本实施例中,在所述优先级队列为高优先级队列的情况下,为所述优先级队列配置承诺信息速率带宽CIR(Commited Information Rate,承诺信息速率);在所述优先级队列为中优先级队列的情况下,为所述优先级队列配置CIR和超额信息速率带宽EIR(Excess Information Rate,超额信息速率);在所述优先级队列为低优先级队列的情况下,为所述优先级队列配置EIR。
S45:基于配置的带宽资源,对所述优先级队列关联的所述业务进行调度。
在为优先级队列配置对应的带宽资源后,通过该带宽资源,将优先级队列关联的业务进行调度。从而实现了基于优先级队列的优先级,对优先级队列关联的业务进行调度。
在将网络切片适配的业务与对应的优先级队列关联后,在本实施例中,通过每个优先级队列所属的带宽资源,将该优先级队列关联的业务进行调度。
具体的,若所述优先级队列包括:高等优先级队列、中等优先级队列、及低等优先级队列,而且,在本实施例中,高等优先级队列的优先级高于中等优先级队列的优先级,中等优先级队列的优先级高于低等优先级队列的优先级。
在本实施例中,在关联的优先级队列中包括高优先级队列的情况下,依据所述CIR将所述高优先级队列关联的业务进行调度;在关联的优先级队列中包括中优先级队列且不包括所述高优先级队列的情况下,依据所述CIR和EIR将所述中优先级队列关联的业务进行调度;在关联的优先级队列中只包括低优先级队列的情况下,依据EIR对所述低优先级队列关联的业务进行调度。
因为优先级队列的优先级越高,该优先级队列所属的带宽资源越大,所以,只需通过相应的网络切片,将优先级越高的业务与对应优先级级别的优先级队列进行关联,就可以根据该优先级队列的大带宽资源对该业务进行调度,从而实现了根据优先级队列的优先级的高低,将优先级队列关联的业务进行依次调度。
就此,通过将与业务适配的网络切片,将业务与对应的优先级队列进行关联,从而实 现了将相同的业务分配至相同优先级的优先级队列中,而优先级队列因其优先级具备相应比特率的带宽资源,从而,实现了基于优先级队列的优先级,对优先级队列关联的业务进行调度。而且,通过优先级队列进行业务转发,使得该业务转发过程具有颗粒度更加精细、带宽资源利用率更高、带宽资源可动态调整的优点。
图5为本发明第五实施例一种网络切片的方法的流程框图。根据图5所示,本发明第五实施例提供了一种网络切片的方法,所述方法包括:
S51:对所述业务进行分类;
首先,在建立端到端的业务后,对该业务进行分类。
如:在建立端到端的业务后,将建立的业务进行分类,得到种类为uRLLC的业务、业务eMBB、mMTC业务。
当然,该业务的种类包括但不限于:uRLLC、源eMBB、mMTC。
S52:为每类所述业务配置适配的所述网络切片;
在得到经分类之后的业务后,为每类业务配置对应的网络切片,而且,该配置的网络切片与对应的业务适配。即:在本实施例中,可以实现为不同种类的业务分别配置相应且适配的网络切片。
如:为uRLLC配置网络切片1,为eMBB配置网络切片2,为mMTC配置网络切片3。
当然,在本实施例中,各个网络切片之间相互隔离。
S53:将具有相同优先级的所述网络切片及所述优先级队列进行关联;将所述业务配置在与所述业务适配的所述网络切片中,以将所述业务与对应的所述优先级队列进行关联;
在本实施例中,通过将具有相同优先级的所述网络切片及所述优先级队列进行关联,然后,将所述业务配置在与所述业务适配的所述网络切片中,从而,就实现了将配置于网络切片中的所述业务与对应的所述优先级队列进行关联。
即:在本实施例中,通过与业务适配的网络切片,将该业务与对应的优先级队列进行关联。从而实现了将相同的业务均关联至同一优先级的优先级队列中。
S54:确定优先级队列的优先级;
具体的,在本实施例中,每个优先级队列均携带有优先级标识。当然,也可以设置为:在将具有相同优先级的所述网络切片及所述优先级队列进行关联,通过该优先级标识对每个优先级队列进行标记、或在将所述业务配置在与所述业务适配的所述网络切片中时,通过该优先级标识对每个优先级队列进行标记。
在本实施例中,对优先级标识的具体内容及该优先级标识的设定时机并不限定,只需其满足本实施例的要求即可。
在本实施例中,依据所述优先级标识确定该优先级队列的优先级,如:确定所述优先级队列是高优先级队列、中优先级队列、或低优先级队列。
S55:基于所述优先级队列的优先级,为所述优先级队列配置对应的带宽资源;
在将业务与对应的优先级队列进行关联后,基于优先级队列的优先级,为优先级队列配置对应的带宽资源。如:通过双令牌桶算法为优先级队列配置对应的带宽资源。即:若优先级队列的优先级越高,则为该优先级队列所配置的带宽资源越大。
具体的,若所述优先级队列包括:高等优先级队列、中等优先级队列、及低等优先级队列,而且,在本实施例中,高等优先级队列的优先级高于中等优先级队列的优先级,中等优先级队列的优先级高于低等优先级队列的优先级。
在本实施例中,在所述优先级队列为高优先级队列的情况下,为所述优先级队列配置承诺信息速率带宽CIR(Commited Information Rate,承诺信息速率);在所述优先级队列为中优先级队列的情况下,为所述优先级队列配置CIR和超额信息速率带宽EIR(Excess Information Rate,超额信息速率);在所述优先级队列为低优先级队列的情况下,为所述优先级队列配置EIR。
S56:基于配置的带宽资源,对所述优先级队列关联的所述业务进行调度。
在为优先级队列配置对应的带宽资源后,通过该带宽资源,将优先级队列关联的业务进行调度。从而实现了基于优先级队列的优先级,对优先级队列关联的业务进行调度。
在将网络切片适配的业务与对应的优先级队列关联后,在本实施例中,通过每个优先级队列所属的带宽资源,将该优先级队列关联的业务进行调度。
具体的,若所述优先级队列包括:高等优先级队列、中等优先级队列、及低等优先级队列,而且,在本实施例中,高等优先级队列的优先级高于中等优先级队列的优先级,中等优先级队列的优先级高于低等优先级队列的优先级。
在本实施例中,在关联的优先级队列中包括高优先级队列的情况下,依据所述CIR将所述高优先级队列关联的业务进行调度;在关联的优先级队列中包括中优先级队列且不包括所述高优先级队列的情况下,依据所述CIR和EIR将所述中优先级队列关联的业务进行调度;在关联的优先级队列中只包括低优先级队列的情况下,依据EIR对所述低优先级队列关联的业务进行调度。
当然,在依据所述CIR将所述高优先级队列关联的业务进行调度后,若关联的优先级队列中还包括中优先级队列,则依据所述CIR和EIR将所述中优先级队列关联的业务进行调度;在依据所述CIR和EIR将所述中优先级队列关联的业务进行调度后,若关联的优先级队列中还包括低优先级队列,依据剩余的EIR对所述低优先级队列关联的业务进行调度。
因为优先级队列的优先级越高,该优先级队列所属的带宽资源越大,所以,只需通过相应的网络切片,将优先级越高的业务与对应优先级级别的优先级队列进行关联,就可以根据该优先级队列的大带宽资源对该业务进行调度,从而实现了根据优先级队列的优先级的高低,将优先级队列关联的业务进行依次调度。
就此,通过将与业务适配的网络切片,将业务与对应的优先级队列进行关联,从而实现了将相同的业务分配至相同优先级的优先级队列中,而优先级队列因其优先级具备相应比特率的带宽资源,从而,实现了基于优先级队列的优先级,对优先级队列关联的业务进行调度。而且,通过优先级队列进行业务转发,使得该业务转发过程具有颗粒度更加精细、带宽资源利用率更高、带宽资源可动态调整的优点。
为了更好说明本实施例所述方法的实施过程,下面结合一个具体应用示例,对本实施例所述方法进行说明。
5G业务具有多场景、差异性的特点,比如:自动驾驶业务需要低延时和抖动保障,工业控制对可靠性要求苛刻,移动上网业务聚焦带宽,物联网业务要支持巨大的连接数量。如果为每种业务都建立一张网络,成本将非常高昂。网络切片技术,在一张物理网络中针对不同类型业务构建独立的端到端逻辑网络,网络切片之间在控制面、转发面、操作面上 实现逻辑隔离。这样,通过切片技术可以降低物理成本,同时提供差异化的服务,保证每种业务都能根据其业务特点得到最佳承载要求,同时有助于设备和存储资源的安全管理。
而且,5G承载网是5G端到端业务路径的一部分。每个承载网切片就象一个独立的物理网络。承载网网络切片是通过对网络的拓扑资源(如链路、节点、端口及网元内部资源)进行虚拟化,按需组织形成多个虚拟网络vNet(即切片网络)。转发面可根据业务需求确定切片方式,可以采用软切片方案,如基于IP/MPLS的隧道/伪线,基于VPN、VLAN等的虚拟化技术;也可以采用硬切片方案,如灵活以太网技术F1exE、OTN技术、WDM的多传送通道等;也可以混合采用硬切片、软切片的方案,硬切片方式保证业务的隔离安全、低时延等需求,软切片方式支持业务的带宽复用。
图6为本发明第六实施例中的流程框图;图7为本发明第六实施例中承载网的简化组网图;图8为本发明第六实施例中PE1的业务走向示意图;图9为本发明第六实施例PE1中包含的优先级队列。
其中,根据图7所示,PE1及PE2均为边缘设备,P为中间设备,而CE1及CE2为客户设备。
针对上述问题,根据图6所示,本发明第六实施例提供了一种网络切片的方法,在本实施例中,该方法应用于基于PE1中的网络切片对业务进行调度,所述方法包括:
S1:配置三种业务及与该业务分别对应的网络切片;
具体的,该S1包括:
S11:将PE1的物理网络划分为三类网络切片,网络切片之间相互隔离。
当然,在本实施例中,也将可以PE1的物理网络、PE2的物理网络、P的物理网络中的一种或多种划分为三类网络切片。
该三类网络切片包括:网络切片1、网络切片2、及网络切片3。
网络切片1的优先级大于网络切片2的优先级,网络切片2的优先级大于网络切片3的优先级。
根据图8所示,port1、port2、及port3为设备的物理端口,其用于接收或建立业务。port4为设备的物理端口,用于输出业务。
S12:根据图8所示,建立端到端的业务;
在本实施例中,建立了端到端的三类业务,该三类业务包括:业务1、业务2、及业务3,在本实施例中,业务1为超可靠低时延通信uRLLC,业务2为增强型移动宽带eMBB,业务3为大规模机器类通信mMTC。
uRLLC的优先级大于eMBB的优先级,eMBB的优先级大于mMTC的优先级。
其中,该业务1、业务2、及业务3分别与上述的网络切片1、网络切片2、及网络切片3一一对应。
S2:将各业务与优先级队列关联;
具体的,该S2包括:
S21:将具有相同优先级的所述网络切片及所述优先级队列进行关联;将所述业务配置在与所述业务适配的所述网络切片中,以将所述业务与对应的所述优先级队列进行关联;
此外,根据图9所示,PE1中包括高优先级队列、中优先级队列、及低优先级队列,其中,该高优先级队列的优先级大于中优先级队列的优先级,中优先级队列的优先级大于低优先级队列的优先级。
在本实施例中,先将高优先级队列与网络切片1关联,将中优先级队列与网络切片2关联,将低优先级队列与网络切片3关联。然后,将uRLLC配置在网络切片1;将eMBB配置在网络切片2;将mMBB配置在网络切片3。从而,实现了将高优先级队列与uRLLC关联,将中优先级队列与eMBB关联,将低优先级队列与mMBB关联。
S22:将高等优先级队列记为CS1,中等优先级队列记为CS2,低等优先级队列记为CS3;
而且,利用双令牌桶算法设置各等级队列的带宽资源,即:为高优先级队列配置配置CIR,中优先级队列配置CIR和EIR,低优先级队列配置EIR。
S3:在出口进行优先级队列调度;
即:在PE1的出口port4进行优先级队列的调度。
针对该S3,在本实施例中,S3的具体步骤包括:
S31:如果在PE1的出口检测到业务报文中包含CS1时,即可确定在PE1的出口检测到了业务报文所需的优先级队列包括高优先级队列,而且,待调度的业务包括与该高优先级队列对应的uRLLC,在此情况下,即使同时存在高优先级队列和其他等级的队列,也优先按照高优先级队列的CIR将该高优先级队列对应的uRLLC调度出去;
在高优先级队列对应的uRLLC调度完成、或未检测到CS1时,如果按照出口调度规则,在PE1的出口检测到业务报文中包含CS2时,即可确定在PE1的出口检测到了业务报文所需的优先级队列包括中优先级队列,而且,待调度的业务包括与该中优先级队列对应的eMBB,在此情况下,即使同时存在中优先级队列和其他等级的队列,也优先按照中优先级队列的CIR及EIR将该中优先级队列对应的eMBB调度出去;具体的,按照带宽资源为“能够获得的最大EIR+CS2的CIR”对该eMBB进行调度。
在优先满足高优先级队列的CIR和中优先级队列的CIR&EIR后,剩余的带宽资源均用于对低优先级队列对应的mMBB进行调度。
此外,在本实施例中执行S3时,该PE1设备的芯片也根据优先级队列的优先级从高到低的顺序,对业务的处理时延也从小到大。
就此,将多个端口的相同等级的优先级队列分配给同一类业务,以获得更大的带宽。
就此,通过将与业务适配的网络切片,将业务与对应的优先级队列进行关联,从而实现了将相同的业务分配至相同优先级的优先级队列中,而优先级队列因其优先级具备相应比特率的带宽资源,从而,实现了基于优先级队列的优先级,对优先级队列关联的业务进行调度,即:利用了优先级队列的调度能力,在拥塞时优先保证了高可靠性业务的调度,并最大化利用了端口带宽。而且,通过优先级队列进行业务转发,使得该业务转发过程具有颗粒度更加精细、带宽资源利用率更高、带宽资源可动态调整的优点。
图10为本发明第七实施例一种网络切片的装置的结构示意图。根据图10所示,本发明第10实施例提供了提供了一种网络切片的装置,所述装置包括:配置模块110,用于配置与业务适配的网络切片;关联模块210,用于基于所述网络切片,将所述业务与对应的优先级队列进行关联;调度模块310,用于基于所述优先级队列的带宽资源,对所述优先级队列关联的所述业务进行调度。
可选的,所述配置模块110具体用于:对所述业务进行分类;为每类所述业务配置适配的所述网络切片。
可选的,所述关联模块210包括:第一关联单元211,用于将具有相同优先级的所述网络切片及所述优先级队列进行关联;第二关联单元212,用于将所述业务配置在与所述业务适配的所述网络切片中,以将所述业务与对应的所述优先级队列进行关联。
可选的,所述调度模块310包括:带宽配置单元311,用于基于所述优先级队列的优先级,为所述优先级队列配置对应的带宽资源;资源调度单元312,用于基于配置的带宽资源,对所述优先级队列关联的所述业务进行调度。
可选的,所述带宽配置单元311具体用于:在所述优先级队列为高优先级队列的情况下,为所述优先级队列配置CIR;在所述优先级队列为中优先级队列的情况下,为所述优先级队列配置CIR和EIR;在所述优先级队列为低优先级队列的情况下,为所述优先级队列配置EIR。
可选的,所述带宽配置单元312具体用于:在关联的优先级队列中包括高优先级队列的情况下,依据所述CIR将所述高优先级队列关联的业务进行调度;在关联的优先级队列中包括中优先级队列且不包括所述高优先级队列的情况下,依据所述CIR和EIR将所述中优先级队列关联的业务进行调度;在关联的优先级队列中只包括低优先级队列的情况下,依据EIR对所述低优先级队列关联的业务进行调度。
可选的,所述带宽配置单元312还用于:在依据所述CIR将所述高优先级队列关联的业务进行调度后,若关联的优先级队列中还包括中优先级队列,则依据所述CIR和EIR将所述中优先级队列关联的业务进行调度;在依据所述CIR和EIR将所述中优先级队列关联的业务进行调度后,若关联的优先级队列中还包括低优先级队列,依据剩余的EIR对所述低优先级队列关联的业务进行调度。可选的,所述优先级队列的优先级的判断方式包括:通过优先级标识对所述优先级队列进行标记;依据所述优先级标识确定所述优先级队列是高优先级队列、中优先级队列、或低优先级队列。
就此,通过配置模块为业务配置适配的网络切片,然后通过关联模块,将业务与对应的优先级队列进行关联,从而实现了将相同的业务分配至相同优先级的优先级队列中,而优先级队列因其优先级具备相应比特率的带宽资源,从而通过调度模块,实现了基于优先级队列的优先级,对优先级队列关联的业务进行调度。而且,通过优先级队列进行业务转发,使得该业务转发过程具有颗粒度更加精细、带宽资源利用率更高、带宽资源可动态调整的优点。
本发明第八实施例提供了一种计算机设备,包括处理器和存储器;所述存储器用于存储计算机指令,所述处理器用于运行所述存储器存储的计算机指令,以实现上述的一种网络切片的方法。
本发明第八实施例中的一种计算机设备所涉及的名词及实现原理具体可以参照本发明实施例中的第一至六实施例的一种网络切片的方法,在此不再赘述。
本发明第九实施例提供了一种计算机可读存储介质,所述计算机可读存储介质存储有一个或者多个模块,所述一个或者多个模块可被一个或者多个处理器执行,以实现上述的一种网络切片的方法。
本发明第九实施例中的一种计算机可读存储介质所涉及的名词及实现原理具体可以参照本发明实施例中的第一至六实施例的一种网络切片的方法,在此不再赘述。
根据本发明的实施例的有益效果如下:通过将与业务适配的网络切片,将业务与对应的优先级队列进行关联,从而实现了将相同的业务分配至相同优先级的优先级队列中,而优先级队列因其优先级具备相应比特率的带宽资源,从而,实现了基于优先级队列的优先级,对优先级队列关联的业务进行调度。而且,通过优先级队列进行业务转发,使得该业务转发过程具有颗粒度更加精细、带宽资源利用率更高、带宽资源可动态调整的优点。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。
上述本发明实施例序号仅仅为了描述,不代表实施例的优劣。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本发明的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端(可以是手机,计算机,服务器,空调器,或者网络设备等)执行本发明各个实施例所述的方法。
上面结合附图对本发明的实施例进行了描述,但是本发明并不局限于上述的具体实施方式,上述的具体实施方式仅仅是示意性的,而不是限制性的,本领域的普通技术人员在本发明的启示下,在不脱离本发明宗旨和权利要求所保护的范围情况下,还可做出很多形式,这些均属于本发明的保护之内。

Claims (10)

  1. A network slicing method, wherein the method comprises:
    configuring a network slice adapted to a service;
    associating the service with a corresponding priority queue based on the network slice; and
    scheduling the service associated with the priority queue based on bandwidth resources of the priority queue.
  2. The method according to claim 1, wherein configuring the network slice adapted to the service comprises:
    classifying the services; and
    configuring an adapted network slice for each class of the services.
  3. The method according to claim 1, wherein associating the service with the corresponding priority queue based on the network slice comprises:
    associating the network slice and the priority queue that have the same priority; and
    configuring the service in the network slice adapted to the service, so as to associate the service with the corresponding priority queue.
  4. The method according to claim 1, wherein scheduling the service associated with the priority queue based on the bandwidth resources of the priority queue comprises:
    configuring corresponding bandwidth resources for the priority queue based on the priority of the priority queue; and
    scheduling the service associated with the priority queue based on the configured bandwidth resources.
  5. The method according to claim 4, wherein
    configuring the corresponding bandwidth resources for the priority queue based on the priority of the priority queue comprises:
    configuring a committed information rate (CIR) bandwidth for the priority queue when the priority queue is a high-priority queue;
    configuring a CIR and an excess information rate (EIR) bandwidth for the priority queue when the priority queue is a medium-priority queue; and
    configuring an EIR for the priority queue when the priority queue is a low-priority queue.
  6. The method according to claim 5, wherein scheduling the service associated with the priority queue based on the configured bandwidth resources comprises:
    when the associated priority queues include a high-priority queue, scheduling the service associated with the high-priority queue according to the CIR;
    when the associated priority queues include a medium-priority queue and do not include the high-priority queue, scheduling the service associated with the medium-priority queue according to the CIR and EIR; and
    when the associated priority queues include only a low-priority queue, scheduling the service associated with the low-priority queue according to the EIR.
  7. The method according to claim 6, wherein scheduling the service associated with the priority queue based on the configured bandwidth resources further comprises:
    after scheduling the service associated with the high-priority queue according to the CIR, if the associated priority queues further include a medium-priority queue, scheduling the service associated with the medium-priority queue according to the CIR and EIR; and
    after scheduling the service associated with the medium-priority queue according to the CIR and EIR, if the associated priority queues further include a low-priority queue, scheduling the service associated with the low-priority queue according to the remaining EIR.
  8. The method according to any one of claims 5 to 7, wherein the priority of the priority queue is determined by:
    marking the priority queue with a priority identifier; and
    determining, according to the priority identifier, whether the priority queue is a high-priority queue, a medium-priority queue, or a low-priority queue.
  9. A computer device, comprising a processor and a memory, wherein
    the memory is configured to store computer instructions, and the processor is configured to run the computer instructions stored in the memory to implement the network slicing method according to any one of claims 1 to 8.
  10. A computer-readable storage medium, wherein the computer-readable storage medium stores one or more programs, and the one or more programs are executable by one or more processors to implement the network slicing method according to any one of claims 1 to 8.
PCT/CN2019/112617 2018-10-22 2019-10-22 一种网络切片的方法、计算机设备及存储介质 WO2020083301A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811229677.7A CN111082955A (zh) 2018-10-22 2018-10-22 一种网络切片的方法、计算机设备及存储介质
CN201811229677.7 2018-10-22

Publications (1)

Publication Number Publication Date
WO2020083301A1 true WO2020083301A1 (zh) 2020-04-30

Family

ID=70309746

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/112617 WO2020083301A1 (zh) 2018-10-22 2019-10-22 一种网络切片的方法、计算机设备及存储介质

Country Status (2)

Country Link
CN (1) CN111082955A (zh)
WO (1) WO2020083301A1 (zh)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112272108B (zh) * 2020-10-14 2022-09-27 中国联合网络通信集团有限公司 一种调度方法和装置
CN112491741B (zh) * 2020-10-19 2022-09-23 国网上海市电力公司 一种虚拟网资源分配方法、装置和电子设备
CN112423347B (zh) * 2020-11-02 2023-08-11 中国联合网络通信集团有限公司 QoS保障方法及装置
CN113296957B (zh) * 2021-06-18 2024-03-05 中国科学院计算技术研究所 一种用于动态分配片上网络带宽的方法及装置
CN114143831A (zh) * 2021-12-06 2022-03-04 中兴通讯股份有限公司 一种报文处理方法、客户前置设备及计算机可读存储介质
CN117439887A (zh) * 2022-07-15 2024-01-23 中兴通讯股份有限公司 数据调度方法、电子设备及存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102158420A (zh) * 2011-05-25 2011-08-17 杭州华三通信技术有限公司 一种基于优先队列的业务流量调度方法及其装置
CN104079501A (zh) * 2014-06-05 2014-10-01 深圳市邦彦信息技术有限公司 一种基于多优先级的队列调度方法
CN105577563A (zh) * 2015-12-22 2016-05-11 中国电子科技集团公司第三十二研究所 流量管理的方法
CN105591970A (zh) * 2015-08-31 2016-05-18 杭州华三通信技术有限公司 一种流量控制的方法和装置

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108632068B (zh) * 2017-03-22 2020-09-11 大唐移动通信设备有限公司 一种网络切片模板生成、网络切片模板应用方法和装置
CN107743100B (zh) * 2017-09-30 2020-11-06 重庆邮电大学 一种基于业务预测的在线自适应网络切片虚拟资源分配方法

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CHINA TELECOM ET AL.: "Discussion on Network Slice Priority", S5-185607, 3GPP TSG SA WG5 (TELECOM MANAGEMENT) MEETING #120, 24 August 2018 (2018-08-24), XP051544208, DOI: 20200103172549Y *
OPPO: "Discussion on IDLE/INACTIVE UE Mobility", R2-1700957, 3GPP TSG-RAN2#97, 4 February 2017 (2017-02-04), XP051211723 *
SAMSUNG: "Layer 2 Design to Support Multiple Service Verticals", R2-163802, 3GPP TSG-RAN WG2 MEETING #94, 13 May 2016 (2016-05-13), XP051095672 *

Also Published As

Publication number Publication date
CN111082955A (zh) 2020-04-28

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19877371

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 7.09.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19877371

Country of ref document: EP

Kind code of ref document: A1