CN111989979A - Method and system for controlling operation of a communication network to reduce latency - Google Patents

Method and system for controlling operation of a communication network to reduce latency

Info

Publication number
CN111989979A
CN111989979A CN201880092681.5A
Authority
CN
China
Prior art keywords
network
packet
type
information
network node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201880092681.5A
Other languages
Chinese (zh)
Inventor
S. Sharma
E. Grinshpun
A. Francini
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nokia Oyj
Nokia Solutions and Networks Oy
Original Assignee
Nokia Networks Oy
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nokia Networks Oy filed Critical Nokia Networks Oy
Publication of CN111989979A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 24/00 Supervisory, monitoring or testing arrangements
    • H04W 24/08 Testing, supervising or monitoring using real traffic
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04J MULTIPLEX COMMUNICATION
    • H04J 3/00 Time-division multiplex systems
    • H04J 3/02 Details
    • H04J 3/06 Synchronising arrangements
    • H04J 3/0635 Clock or time synchronisation in a network
    • H04J 3/0638 Clock or time synchronisation among nodes; Internode synchronisation
    • H04J 3/0658 Clock or time synchronisation among packet nodes
    • H04J 3/0661 Clock or time synchronisation among packet nodes using timestamps
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 56/00 Synchronisation arrangements
    • H04W 56/001 Synchronization between nodes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 76/00 Connection management
    • H04W 76/10 Connection setup
    • H04W 76/11 Allocation or use of connection identifiers

Abstract

The method comprises the following steps: transmitting request messages to at least one first network node, the request messages each comprising at least a sampling time window defining a duration and a network slice identifier identifying a designated network slice within the communication network; receiving a packet report from the at least one first network node, the packet report including latency information for packets processed by the at least one first network node during the sampling time window for the designated network slice; and controlling operation of the communication network based on the latency information.

Description

Method and system for controlling operation of a communication network to reduce latency
Technical Field
Example embodiments generally relate to a method and system for controlling the operation of a communication network by measuring and monitoring network latency. The method and system have applicability to packet switched networks, including 5 th generation wireless communication networks (5G networks).
Background
Communication networks constantly receive new service demands from various users, machines, industries, governments, and other organizations. In fifth generation wireless communication networks (5G networks), new services will be supported and enabled through dedicated, secure, customized end-to-end network slices. A network slice supports its associated services and ensures their traffic isolation in the shared physical infrastructure by means of dedicated virtualized network functions in the data and control planes.
Many new services and applications (e.g., Virtual Reality (VR), network-assisted autonomous control of vehicles and drones, network-assisted factory and city automation, telerobotic control, telesurgery, etc.) have strict end-to-end latency requirements while generating "bursty" network traffic (i.e., traffic with intervals of high load demand). End-to-end latency is expected to become an important Key Performance Indicator (KPI) in the Service Level Agreements (SLAs) associated with these services and their respective network slices.
Disclosure of Invention
At least one example embodiment includes a method of controlling operation of a communication network.
In one example embodiment, the method comprises: transmitting, by at least one first processor of a central node, request messages to at least one first network node, the request messages each including at least a sampling time window defining a duration and a network slice identifier identifying a designated network slice within the communication network; receiving, at the at least one first processor, a packet report from the at least one first network node, the packet report including latency information for packets processed by the at least one first network node during the sampling time window for the designated network slice; and controlling, by the at least one first processor, operation of the communication network based on the latency information.
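As an illustrative sketch, the request/report exchange described in this embodiment might be modeled with the following data structures. The field names (slice_id, window_start, etc.) are assumptions chosen for illustration and do not appear in the claims:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class RequestMessage:
    # Sent by the central node to a first network node.
    slice_id: str           # network slice identifier (may also encode UL/DL direction)
    window_start: float     # sampling time window start, on the shared network clock
    window_duration: float  # sampling time window duration, in seconds

@dataclass
class PacketRecord:
    # One record per packet traversing the sampling point during the window.
    packet_id: str          # unique packet identifier
    size_bytes: int         # packet size information
    timestamp: float        # time at which the record was created

@dataclass
class PacketReport:
    # Returned by the network node to the central node.
    node_id: str
    slice_id: str
    records: List[PacketRecord]
```

A node of the first type would fill `records` with per-packet identifier, size, and timestamp information, as the embodiments below describe.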
In one example embodiment, the at least one first network node comprises: at least one network node of a first type having a first link in the designated network slice, the first link having a termination endpoint within the communication network.
In one example embodiment, the at least one first network node comprises: at least one network node of a second type having a second link in the designated network slice, the second link having a termination endpoint outside the network slice.
In one example embodiment, the receiving of the packet report includes: receiving, from a network node of a first type, packet reports of the first type for a specified network slice, the packet reports of the first type each comprising: packet identifier information, packet size information, and timestamp information.
In one example embodiment, the at least one first network node comprises: at least one network node of a first type having a first link in the designated network slice, the first link having a terminating endpoint within the communication network, and the receiving of the packet report comprising: receiving a first type of packet report for the specified network slice from a first type of network node, the first type of packet report each including packet identifier information, packet size information, and timestamp information; and receiving, from the network nodes of the second type, packet reports of the second type for the specified network slice, the packet reports of the second type each including latency information for the network nodes of the second type.
In one example embodiment, the network slice identifier identifies a communication direction for the specified network slice, the direction being one of an uplink direction and a downlink direction.
In one example embodiment, the central node and the network nodes of the first type are synchronized to the same network clock for the communication network, and the request message to the at least one network node of the first type comprises: a start time defined by a sampling time window.
In one example embodiment, the at least one network node of the first type comprises: a downstream network node of a first type and an upstream network node of the first type, the transmission of the request message comprising: transmitting a first request message having a first sampling time window defining a first start time and a first duration to a downstream network node of the first type, and transmitting a second request message having a second sampling time window defining a second start time and a second duration to an upstream network node of the first type, the first duration being one of the same as the second duration and different from the second duration.
In one example embodiment, the receiving of the packet report includes: receiving a first packet report from a downstream network node of a first type, the first packet report comprising: a first set of packet identifier information associated with a first set of timestamp information, and receiving a second packet report from the upstream network node of the first type, the second packet report including a second set of packet identifier information associated with a second set of timestamp information.
In one example embodiment, the method further comprises: the latency information is calculated by matching identifier information between a first set of packet identifier information and a second set of packet identifier information to obtain a matching subset of identifier information, and determining a difference between a first portion of a first set of timestamp information and a second portion of a second set of timestamp information, the first portion and the second portion being associated with the matching subset of identifier information.
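The matching-and-differencing step of this embodiment can be sketched as follows. This is a minimal illustration that assumes each packet report has been reduced to a mapping from packet identifiers to timestamps; the function and variable names are hypothetical:

```python
def latency_samples(upstream: dict, downstream: dict) -> dict:
    """Compute one-way latency samples between two sampling points.

    `upstream` and `downstream` map packet identifiers to the timestamps
    recorded at the upstream and downstream network nodes. Only identifiers
    present in both reports (the matching subset) yield a latency sample,
    computed as the difference between the two timestamps for the same packet.
    """
    matching = upstream.keys() & downstream.keys()
    return {pid: downstream[pid] - upstream[pid] for pid in matching}

# Packets "a" and "b" are seen upstream; only "a" is also seen downstream,
# so only "a" produces a latency sample.
samples = latency_samples({"a": 1.000, "b": 1.010}, {"a": 1.004, "c": 1.030})
```

As the synchronization embodiment above notes, both nodes must share the same network clock for the timestamp difference to be meaningful.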
At least one other example embodiment relates to an example embodiment of a method of controlling operation of a communication network in a system comprising a central node and at least a first network node.
In one example embodiment, the method comprises: transmitting, by at least one first processor of a central node, request messages to at least one second processor of at least one first network node, the request messages each including at least a sampling time window defining a duration and a network slice identifier identifying a designated network slice within the communication network; creating, by the at least one second processor, a packet report upon receiving a request message, the packet report including latency information for packets processed by the at least one first network node during the sampling time window for the designated network slice; receiving, at the at least one first processor, the packet report from the at least one second processor; and controlling, by the at least one first processor, operation of the communication network based on the latency information.
In one example embodiment, the at least one first network node comprises: at least one network node of a first type having a first link in the designated network slice, the first link having a termination endpoint within the communication network.
In one example embodiment, the at least one first network node comprises: at least one network node of a second type having a second link in the designated network slice, the second link having a termination endpoint outside the network slice.
In one example embodiment, the receiving of the packet report includes: a first type of packet report is received from a first type of network node for a specified network slice, the first type of packet report each including packet identifier information, packet size information, and timestamp information.
In one example embodiment, the at least one first network node comprises: at least one network node of a first type having a first link in the designated network slice, the first link having a terminating endpoint within the communication network, and the receiving of the packet report comprising: receiving a first type of packet report for the specified network slice from a first type of network node, the first type of packet report each including packet identifier information, packet size information, and timestamp information; and receiving, from the network nodes of the second type, packet reports of the second type for the specified network slice, the packet reports of the second type each including latency information for the network nodes of the second type.
In one example embodiment, the network slice identifier identifies a communication direction for the specified network slice, the direction being one of an uplink direction and a downlink direction.
In one example embodiment, the central node and the network nodes of the first type are synchronized to the same network clock for the communication network, and the request message to the at least one network node of the first type comprises: a start time defined by a sampling time window.
In one example embodiment, the at least one network node of the first type comprises: a downstream network node of a first type and an upstream network node of the first type, the transmission of the request message comprising: transmitting a first request message having a first sampling time window defining a first start time and a first duration to a downstream network node of the first type, and transmitting a second request message having a second sampling time window defining a second start time and a second duration to an upstream network node of the first type, the first duration being one of the same as the second duration and different from the second duration.
In one example embodiment, the receiving of the packet report includes: receiving a first packet report from a downstream network node of a first type, the first packet report comprising: a first set of packet identifier information associated with a first set of timestamp information; and receiving a second packet report from an upstream network node of the first type, the second packet report comprising: a second set of packet identifier information associated with a second set of timestamp information.
In one example embodiment, the method further comprises: the latency information is calculated by matching identifier information between a first set of packet identifier information and a second set of packet identifier information to obtain a matching subset of identifier information, and determining a difference between a first portion of a first set of timestamp information and a second portion of a second set of timestamp information, the first portion and the second portion being associated with the matching subset of identifier information.
In one example embodiment, the creation of the packet report includes: calculating, by at least one second processor of the at least one second type of network node, Physical Resource Block (PRB) rate information and bearer (bearer) information, the bearer information comprising a quantization of a plurality of very active bearers, and determining, by the at least one second processor, latency information based on the PRB rate information and the bearer information.
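The text does not give a formula for deriving latency from PRB rate and bearer information, so the following is only one plausible estimator, stated purely as an assumption: each very active bearer receives an equal share of the cell's PRB rate, and latency is approximated as queue backlog divided by per-bearer throughput.

```python
def estimate_airlink_latency(prb_rate: float, bits_per_prb: float,
                             n_very_active_bearers: int,
                             avg_backlog_bits: float) -> float:
    """Hypothetical latency estimate for a second-type (air-interface) node.

    prb_rate: Physical Resource Blocks scheduled per second for the slice
    bits_per_prb: average bits carried per PRB
    n_very_active_bearers: count of very active bearers sharing the PRBs
    avg_backlog_bits: average per-bearer queue backlog, in bits
    """
    per_bearer_bps = (prb_rate * bits_per_prb) / max(n_very_active_bearers, 1)
    return avg_backlog_bits / per_bearer_bps
```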
At least one other example embodiment relates to a central node.
In one example embodiment, the central node comprises: a memory storing computer readable instructions; and at least one first processor configured to execute the computer readable instructions such that the at least one first processor is configured to: transmit request messages to the at least one first network node, the request messages each comprising at least a sampling time window defining a duration and a network slice identifier identifying a designated network slice within the communication network; receive a packet report from the at least one first network node, the packet report comprising latency information for packets processed by the at least one first network node during the sampling time window for the designated network slice; and control operation of the communication network based on the latency information.
At least one other example embodiment includes a system.
In one example embodiment, the system includes: a central node comprising a first memory storing first computer readable instructions and at least one first processor configured to execute the first computer readable instructions such that the at least one first processor is configured to transmit request messages to at least one second processor, the request messages each including at least a sampling time window defining a duration and a network slice identifier identifying a designated network slice within the communication network; and at least one first network node comprising a second memory storing second computer readable instructions and at least one second processor configured to execute the second computer readable instructions such that the at least one second processor is configured to create a packet report upon receipt of a request message, the packet report including latency information for packets processed by the at least one first network node during the sampling time window for the designated network slice. The at least one first processor is further configured to receive the packet report from the at least one second processor and to control operation of the communication network based on the latency information.
Drawings
Fig. 1 illustrates an architecture of a system of communication networks in accordance with an example embodiment;
FIG. 2 illustrates a central (control) node of a system in accordance with an example embodiment;
fig. 3 illustrates a first measurement node of a system in accordance with an example embodiment;
fig. 4 illustrates a second measurement node of the system according to an example embodiment;
fig. 5 illustrates an example of a system for latency measurement in a representative 5G communication network having multiple slices, in accordance with an example embodiment;
fig. 6 illustrates a system for latency measurement in a mobile Radio Access Network (RAN) communication network in accordance with an example embodiment;
fig. 7 illustrates the operation of a first measurement node in accordance with an example embodiment; and
fig. 8 illustrates a method of a central node in accordance with an example embodiment.
Detailed Description
While example embodiments are capable of various modifications and alternative forms, embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that there is no intention to limit example embodiments to the specific forms disclosed, but on the contrary, example embodiments are to cover all modifications, equivalents, and alternatives falling within the scope of the invention. Like numbers refer to like elements throughout the description of the figures.
Before discussing example embodiments in more detail, it is noted that some example embodiments are described as processes or methods, which are depicted as flow diagrams. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel, concurrently, or simultaneously. In addition, the order of the operations may be rearranged. These processes may terminate when their operations are complete, but may also have additional steps not included in the figure. These processes may correspond to methods, functions, procedures, subroutines, subprograms, etc.
The methods discussed below, some of which are illustrated by flowcharts, may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments to perform the necessary tasks may be stored in a machine or computer readable medium such as a storage medium, such as a non-transitory storage medium. The processor(s) may perform the necessary tasks.
Specific structural and functional details disclosed herein are merely representative for purposes of describing example embodiments. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein.
It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of example embodiments. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element or intervening elements may be present. In contrast, when an element is referred to as being "directly connected" or "directly coupled" to another element, there are no intervening elements present. Other words used to describe the relationship between elements (e.g., "between" versus "directly between...," adjacent "versus" directly adjacent, "etc.) should be interpreted in a similar manner.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes" and/or "including," when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be noted that, in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which example embodiments belong. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Portions of the example embodiments and corresponding detailed description are presented in terms of software, or algorithms and symbolic representations of operations on data bits within a computer memory. These descriptions and representations are the ones by which those of ordinary skill in the art effectively convey the substance of their work to others of ordinary skill in the art. An algorithm, as the term is used here, and as it is used generally, is conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of optical, electrical, or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
In the following description, the illustrative embodiments will be described with reference to acts and symbolic representations of operations (e.g., in the form of flowcharts) that may be implemented as program modules or functional processes including routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types and that may be implemented using existing hardware at existing network elements. Such existing hardware may include one or more Central Processing Units (CPUs), Digital Signal Processors (DSPs), application-specific integrated circuits, Field Programmable Gate Arrays (FPGAs), computers, and the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise, or as is apparent from the discussion, terms such as "processing" or "computing" or "calculating" or "determining" or "displaying" or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical, electronic quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
Note also that the software implemented aspects of the example embodiments are typically encoded on some form of program storage medium or implemented over some type of transmission medium. The program storage medium may be any non-transitory storage medium such as a magnetic medium, an optical medium, or a flash memory. Similarly, the transmission medium may be twisted wire pairs, coaxial cable, optical fiber, or some other suitable transmission medium known to the art. The example embodiments are not limited by these aspects of any given implementation.
General approach:
to ensure that network slices comply with Service Level Agreements (SLAs), 5G Service Providers (SPs) need reliable and cost-effective means to measure and monitor latency in real-time. Passive methods for latency measurement do not inject dedicated measurement traffic in the data plane and do not modify the header of user traffic. Service providers may benefit from such passive methods because they do not interfere with network performance and data usage based charging schemes. The relevant latency indicators include the end-to-end latency of each slice between the service endpoints, and the latency of traversing partial segments of the end-to-end network slice, such as individual virtual links, subslices, and virtual network functions. The latter has significant advantages in improving troubleshooting, root cause analysis, and the ability to implement corrective measures to preserve SLA guarantees. The present example embodiments include systems and methods for passive, accurate, and scalable measurement of latency for network slices.
There are limited methods for latency measurement, but they are either (a) inadequate to meet the new requirements associated with the 5G and network slicing concepts, or (b) are non-passive and intrusive in nature, and thus limited in applicability. The following is a representative set of such techniques:
I. Active measurement: Special round-trip packets may be injected into the network using conventional diagnostic tools, end hosts, or network nodes (a round-trip packet is a packet that travels from a first network endpoint to a second network endpoint and then back to the first endpoint; the data in the packet may be modified as the packet travels in each direction, and may also be modified by the second endpoint before transmission back to the first endpoint). Each returned round-trip packet contains the timestamp of the original packet, plus timestamps possibly added by the remote host and by intermediate nodes. The source analyzes the timestamps to infer latency information for the full and partial data paths. This approach is insufficient for 5G network slices and their strict SLA Key Performance Indicators (KPIs), at least for the following reasons:
A. The diagnostic tools involved are not passive: the injected packets increase the network load. Although the load increase is usually insignificant, in some cases it may disturb the performance of latency-sensitive applications and their KPI metrics.
B. New functions, dedicated to producing latency measurements and signaling the results back to the network, must be installed on mobile terminal devices; this requires the consent of the end user and is burdensome due to the sheer number of terminal devices.
C. In a network-slicing environment, the packet probes of a latency measurement utility do not undergo the same type of processing as the packets of the applications that each slice is designed to support. Measurements obtained from the probes may therefore misrepresent processing latency.
II. TCP header inspection: The round-trip time to a TCP receiver may be calculated, by an intermediate network node (not just the TCP source), from the sequence number in the header of a Transmission Control Protocol (TCP) data packet and the acknowledgement number in the header of the returned Acknowledgement (ACK) packet. This method applies only to applications that use TCP for end-to-end transport. Real-time applications with constrained latency requirements are unlikely to use TCP for end-to-end transmission, because of the delay added by TCP packet retransmissions. Transport protocols better tailored to the specific needs of each application may become commonplace, especially after the industry's introduction of the Quick UDP Internet Connections protocol (QUIC), which acts as a shim layer over the User Datagram Protocol (UDP) and allows the design of custom methods for network reliability and congestion control.
III. Deep Packet Inspection (DPI) or other methods for signature-based packet identification: An intermediate node may look deep into packets (both header and payload) to identify packet flows and individual packets, and store the arrival/departure data for each flow and each packet. Inter-node one-way latency samples may be computed by matching records collected at neighboring nodes for the same packet. This approach faces the following challenges:
A. A large number of packet arrival/departure records must be stored at each participating node and then transmitted for matching. Randomly sampling only a portion of the packets in transit is not feasible, because the latency calculation must compare the event times of the same packet at different nodes; if each node samples packets randomly and independently of the other nodes, the likelihood of sampling the same packet at different nodes is low.
B. The method is suitable only for measuring the latency of slice segments, not the end-to-end one-way latency of a multi-hop network slice data path.
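The sequence/acknowledgement pairing used by technique II can be sketched as below; this is a simplified illustration that assumes in-order delivery and ignores retransmissions and selective acknowledgements:

```python
def tcp_rtt_samples(data_events, ack_events):
    """Estimate round-trip time to a TCP receiver from packets observed
    at an intermediate node (or at the TCP source itself).

    data_events: list of (seq, payload_len, observe_time) for data segments
    ack_events:  list of (ack_number, observe_time) for returning ACKs,
                 in order of observation
    A data segment is paired with the first later ACK whose acknowledgement
    number covers seq + payload_len; the time difference is one RTT sample.
    """
    samples = []
    for seq, length, t_data in data_events:
        expected = seq + length
        for ack_no, t_ack in ack_events:
            if ack_no >= expected and t_ack >= t_data:
                samples.append(t_ack - t_data)
                break
    return samples
```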
The ability to configure oversubscription of shared network and computing resources across different slices is desirable for communication network operators, because it enables maximum utilization of the network and thus maximum return on infrastructure investment.
Accordingly, example embodiments provide a system and method comprising two types of distributed agents that coordinate latency measurements with a central node for a designated network slice, where a distributed agent may be associated either with a link internal to the communication network or with a link used for communication outside the communication network. The method and system may calculate (a) end-to-end slice latency and (b) the latency of individual segments of the slice.
Detailed embodiments:
example embodiments include a system and method for passively measuring packet latency in a network slice, both for end-to-end latency (between slice endpoints) and within an internal segment of the entire slice, where a segment may be a subnet of the slice, a link (physical or virtual) within the slice, a set of contiguous links within the slice, a network node associated with the slice or a well-defined network domain that is part of the slice, etc. Latency measurements may be generated by a centralized element (also referred to as a "central node" or "control node") that controls and coordinates periodic sampling by various elements or nodes in the communication network, and periodically collects and correlates sets of packet timestamp records received from different sampling points of the network. A packet timestamp record (also referred to as "timestamp information") may include a unique packet identifier and a timestamp associated with the time the record was created. During the record collection period, a record is created for each packet that traverses the sample point. A comparison of timestamps associated with the same packet identifier in records received from different points in the network yields a sample of the latency between the two points. The centralized element controls the duration of each recording collection period, the interval between collection periods, and the start time of the collection period at each point of the network involved in the process.
Some key elements of the example embodiments are summarized by the following attributes:
I. Network-based: example embodiments do not require terminal devices to participate in the measurements.
II. Passive: example embodiments do not involve the injection of probe packets or the modification of the headers of transported packets.
III. Scalable: the example embodiments rely on sampling the transported packets at intermittent intervals compatible with the processing, storage and transmission capacity of the node at which the sampling occurs.
IV. Accurate: example embodiments enable fine-tuning of the latency measurement process based on characteristics of the data path to which it is applied.
Structural embodiments are as follows:
Fig. 1 illustrates the architecture of a system 100 for a communication network 50 according to an example embodiment. The system 100 may provide latency measurements and thereby obtain latency information. System 100 may provide latency measurements (information) for a packet network that includes: a set of nodes {N-i} (10a, 10b, 10c, 20a), a set of internal links {IL-j} (12a, 12b, 12c), and a set of external links {EL-k} (22a). The nodes N-i may be packet switches, portions of packet switches, network functions, network capabilities, application servers or endpoints, or any other network element capable of receiving and transmitting packets, without limitation to scope or granularity. The nodes N-i may be instantiated as physically separate entities or as virtualized entities, possibly sharing underlying infrastructure resources with other virtualized network functions. An internal link IL-j connects two nodes within communication network 50. An external link EL-k connects a node of the communication network 50 with a node not belonging to the communication network 50. The wireless connection between a serving cell and a served end user device is an example of an external link 22a. The internal and external links may be physical links or virtual links.
The system 100 of the exemplary embodiment includes the following components:
A. Latency Measurement Engine (LME) 30: a centralized component ("central node", "control node", or latency measurement engine "LME") 30 having a processor 300, the processor 300 controlling the synchronization of the sampling (start and duration) at the different nodes (10a, 10b, 10c and 20a) to ensure that the node traversals of the same packet are time-stamped. Processor 300 of LME 30 collects and processes latency measurements originating from different points of network 50. There may typically be one such engine (LME 30) per operator network (e.g., physical network or virtual network), although it should be understood that multiple LMEs 30 may be deployed within the same infrastructure or communication network 50, wherein an LME 30 may be shared by multiple virtual networks or network slices.
B. Type 1 latency measurement agent (T1 LMA) — a first type of network node: distributed components (or network nodes/elements, designated as nodes 10a, 10b, and 10c in system 100) associated with the endpoints of internal links 12a. They collect time-synchronized measurement samples (measurement information of a first type) for one direction of a link 12a or for both directions of the link 12a, and transmit this information to the processor 300 of the LME 30. Not all internal links of the network need be coupled to a T1 LMA.
C. Type 2 latency measurement agent (T2 LMA) — a second type of network node: distributed components (or network nodes/elements, designated as node 20a in system 100) associated with the network-side endpoints of external links 22e (i.e., links whose second endpoint is outside of network 50 and therefore not included in the measurement system). An example of an external link 22e is a wireless access link, where the external endpoint is a mobile device (such as a user equipment) served by the network 50. The T2 LMA collects measurement samples (measurement information of a second type) for one direction of a link 22e or for both directions of the link 22e, and transmits the measurement information to the processor 300 of the LME 30. Not all external links of the network need be coupled to a T2 LMA.
D. LME 30 and all LMAs in network 50: these components are synchronized to a common time reference (i.e., a common "network clock"). Any known method of achieving this synchronization may be implemented, with the details of the synchronization and its implementation being outside the scope of the example embodiments currently described.
Structural example embodiments:
Fig. 2 illustrates a central (control) node 30 or LME 30 of system 100, according to an example embodiment. The node 30 includes: a network interface 304 for communicating with other nodes in the system 100, a signaling interface 306 (which may be considered a "backhaul"), and a memory storage 302. The node 30 further comprises a processor 300 that may control the operation of node 30. Some of these operations of the node 30 include: storing and retrieving information/data to and from memory 302, communicating signaling and information to other nodes in system 100 using interfaces 304/306, and carrying out processing based at least in part on computer readable instructions stored in a Latency Measurement Control Module (LMCM) 300a within processor 300. The computer readable instructions in LMCM 300a may provide instructions that cause processor 300 to carry out the method steps of node 30 commensurate with the steps described in the method example embodiments in this document. It should be understood that processor 300 also includes a Physical (PHY) layer (having different configuration modes), a Medium Access Control (MAC) layer (having different configuration modes), a Packet Data Convergence Protocol (PDCP) layer (having different configuration modes), a user plane layer (having different configuration modes), a scheduler, and a Radio Link Control (RLC) buffer, wherein these elements of processor 300 are not shown in the figure.
Fig. 3 illustrates a first measurement node 10a or T1 LMA of the system 100 in accordance with an example embodiment. The node 10a includes: a network interface 204 and a backhaul interface 206 for communicating with other nodes in the system 100, and a memory storage 202. The node 10a further comprises a processor 200 that may control the operation of node 10a. Some of these operations of node 10a include: storing and retrieving information/data to and from memory 202, communicating signaling and information to other nodes in system 100 using interfaces 204/206, and carrying out processing based at least in part on computer readable instructions stored in a latency measurement module type 1 (LMMT1) 200a within processor 200. The computer readable instructions in LMMT1 200a may provide instructions that cause processor 200 to carry out the method steps of node 10a commensurate with the steps described in the method example embodiments in this document. In an embodiment, processor 200 may include a Physical (PHY) layer (with different configuration modes), a Medium Access Control (MAC) layer (with different configuration modes), a Packet Data Convergence Protocol (PDCP) layer (with different configuration modes), a user plane layer (with different configuration modes), a scheduler, and Radio Link Control (RLC) buffers, wherein these elements of processor 200 are not shown in the figure.
Fig. 4 illustrates a second measurement node 20a or T2 LMA of the system 100 in accordance with an example embodiment. The node 20a includes: a network interface 404 and a backhaul interface 406 for communicating with other nodes in the system 100, and a memory storage 402. The node 20a further comprises a processor 400 that may control the operation of node 20a. Some of these operations of node 20a include: storing and retrieving information/data to and from memory 402, transmitting signaling and information to other nodes in system 100 using interfaces 404/406, and carrying out processing based at least in part on computer readable instructions stored in a latency measurement module type 2 (LMMT2) 400a within processor 400. The computer readable instructions in LMMT2 400a may provide instructions that cause the processor 400 to carry out the method steps of node 20a commensurate with the steps described in the method example embodiments in this document. In an embodiment, processor 400 may include a Physical (PHY) layer (with different configuration modes), a Medium Access Control (MAC) layer (with different configuration modes), a Packet Data Convergence Protocol (PDCP) layer (with different configuration modes), a user plane layer (with different configuration modes), a scheduler, and Radio Link Control (RLC) buffers, wherein these elements of processor 400 are not shown in the figure.
Use of network slices within the system:
The system 100 includes two types of distributed agents 10a/20a (the type 1 and type 2 latency measurement agents), which may be coupled with virtual network functions for collecting latency-related measurement samples relating to network slices, and a centralized Latency Measurement Engine (LME) 30. The processor 300 of the LME 30 coordinates the sampling operations and processes the agent data to calculate (a) the end-to-end slice latency and (b) the latency of individual segments of the slice.
In an embodiment, the method comprises:
I. Coordination, by the processor 300 of the LME 30, of the data sampling performed by the different agents 10a/20a.
II. A framework to control the sampling operations of the agents 10a/20a by continuously adapting the duration and frequency of the sampling periods, based on the reported sampling data collected at the processor 300 of the LME 30.
III. An algorithm, or instructions, for collecting the sample data and calculating latency at the processor 300 of the LME 30 based on the samples collected from the agents 10a/20a.
The example embodiments of the system and method make it possible to establish reliable solutions for latency measurement based on periodic sampling at vantage points within the network slice, and to achieve an "optimal trade-off" between the accuracy of the measurements and the signalling and processing burden they impose on the infrastructure of the communication network 50.
In an example embodiment, the communication network may be a 5G network 50a that is a packet network. Thus, Fig. 5 illustrates an instantiation of the system 100 for making latency measurements in a representative 5G communication network 50a having a plurality of slices 600, 602, 604, in accordance with an example embodiment. In particular, Fig. 5 illustrates instantiations of components of an example embodiment for three slices 600, 602, 604 sharing a common physical infrastructure. LMAs (nodes 20a, 20b, 10a, 10b, etc.) that belong to different slices 600, 602, 604 but are coupled with the terminals of the same physical link are logically independent, yet may be implemented as a single entity when associated with a network function shared by multiple slices 600, 602, 604. Each of the various LMA nodes 10a, 20a may be grouped into portions of the network 50a, where these portions may include, by way of example, a 5G gNB (base station) 606a, a 5G User Plane Function (UPF) 606b, a layer 3 (L3) router, and a layer 2 (L2) switch.
Fig. 5 illustrates that three network slices 600, 602, 604 provide data paths between respective application servers 600b, 602b, 604b and client devices (UEs) 600a, 602a, 604a. Each of the application servers 600b, 602b, 604b belongs to a different respective slice 600, 602, 604, where the application servers 600b, 602b, 604b may provide services or content to the respective UEs 600a, 602a, 604a. To this end, the links between the application server 600b and the respective L2 switch 606d ports are internal links, coupled with T1 LMAs 10q, 10r (T1A-m,n being the nth T1 LMA of slice m). Meanwhile, because the UE 600a is not within the slice boundary of the network 50a, the links between the 5G gNB 606a and the UE 600a are external links, so the 5G gNB 606a endpoints of those links are coupled with a T2 LMA 20a (T2A-p,q being the qth T2 LMA of slice p). Not all slices supported by the same physical infrastructure must be equipped with the latency measurement capabilities of the current example embodiment; in Fig. 5, only the top two slices 600, 602 are so equipped. A single LME 30 may control the individual LMAs 10a, 20a of the monitored and controlled slices 600, 602.
Example embodiments of the method
Operation of the type 1 LMA (node 10a):
The processor 200 of the T1 LMA (node 10a) may begin sampling the packets received by node 10a after node 10a receives a trigger (request) message from the LME 30, where the sampled packets carry a given network slice ID (NSID) specifying the identity of the designated slice. It should be noted that the nature and format of the NSID are not within the scope of the present example embodiments. Because node 10a is associated with a link termination that handles traffic in both directions, each traffic direction requires a different trigger, and thus the trigger may specify the traffic direction for sampling. The trigger includes the following set of items (representing the network slice ID, the sampling start time, the sampling end time, and the direction): <NSID, sample_start_time, sample_end_time, direction>. After receiving the trigger message, the processor 200 of node 10a processes all packets of the designated slice received during the designated sampling time window and transmitted in the designated direction. Note that the sampling time window (sample_start_time and sample_end_time) should be a future time period relative to the arrival time of the trigger message received by node 10a from the LME 30. When the processor 200 of node 10a processes a packet, it adds a packet report record to a running log for the current sampling period. The packet report record may contain at least the following: <packet_ID, packet_size, timestamp>, where packet_ID may be a unique signature identifier for the packet, packet_size is the length of the packet (which may be measured in bytes), and timestamp is the time at which the packet report record is generated, according to the time reference shared by the LME 30, node 10a, and the other LMAs of the slice being monitored.
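The trigger-driven sampling described above can be sketched as follows. The tuple-based packet representation, the SHA-1 packet signature, and the injectable clock are illustrative assumptions, not details from the document:

```python
import hashlib
import time

def handle_trigger(trigger, packet_stream, clock=time.time):
    """Collect <packet_ID, packet_size, timestamp> records for one
    sampling window. trigger = (nsid, start, end, direction);
    packet_stream yields (nsid, direction, payload_bytes) tuples as
    packets traverse the node."""
    nsid, start, end, direction = trigger
    log = []
    for pkt_nsid, pkt_dir, payload in packet_stream:
        now = clock()
        if now > end:
            break  # sampling window closed; log is then sent to the LME
        if now < start or pkt_nsid != nsid or pkt_dir != direction:
            continue  # outside window, wrong slice, or wrong direction
        packet_id = hashlib.sha1(payload).hexdigest()  # unique signature (assumed)
        log.append((packet_id, len(payload), now))
    return log
```

Only packets of the monitored slice and direction, seen inside the window, generate records.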
When the time reference reaches sample_end_time (i.e., the end of the sampling time window), the processor 200 of node 10a may stop creating packet report records and send the entire log accumulated over the sampling period to the LME 30.
Operation of the type 2 LMA (node 20a):
The T2 LMA (node 20a) is associated with a network endpoint of an external link (a link that communicates with nodes outside of network 50). An example of an external link is the wireless access link of a mobile wireless network, where the link may communicate with an end user device (user equipment). One example of the placement of the network endpoint of a wireless access link is the Media Access Control (MAC) layer of the Radio Access Network (RAN) protocol stack. The MAC layer includes a scheduler for access to the wireless medium in both the downlink (DL) and uplink (UL) directions. This section describes an instantiation of a node 20a associated with the radio link scheduler of a radio access network.
Fig. 6 illustrates a system 100a for making latency measurements in a mobile Radio Access Network (RAN) cell 50b in accordance with an example embodiment. The mobile RAN 50b is shared by three network slices 610, 620, 630 (with respective NSIDs represented as a, b, and c). The following description focuses on the operation of system 100a that is specific to the latency measurements for slice 610 (with NSID a). T1A-a,1 (node 10b) and T1A-a,2 (node 10a) are T1 LMAs associated with the endpoint of the link connecting the RAN cell 50b to the core network (T1A-a,1) and with the interface between the Radio Link Control (RLC) and MAC layers in the RAN protocol stack (T1A-a,2). These two T1 LMAs may be implemented in conjunction with the T1 LMAs serving other slices at the same data path points. T2A-a,1 (node 20a) is associated with the MAC scheduler and serves slice a (T2 LMAs for other slices may also be implemented together with it).
The processor 400 of the node 20a uses the information obtained from the MAC scheduler at each Transmission Time Interval (TTI) to calculate the latency contribution of the radio access link. This information may include the following data, where this description refers to the Downlink (DL) data direction (although this type of data would be the same for the Uplink (UL) data direction):
A. The aggregate Physical Resource Block (PRB) rate PRB_agg(a, Δt, DL) for all DL bearers of slice a, calculated over a time interval of duration Δt and (optionally) further averaged (or "smoothed").
B. The average PRB rate PRB_avg(a, Δt, DL) of a virtual Very Active (VA) DL bearer of slice a (where a "VA bearer" almost always has data available for wireless transmission). In one embodiment, PRB_avg(a, Δt, DL) can be calculated as described in U.S. Patent No. 9,794,825, issued October 17, 2017, the entire contents of which are incorporated herein by reference, and averaged over the time interval Δt.
C. The average number of VA DL bearers in the slice, NVA_avg(a, Δt, DL). In an embodiment, NVA_avg(a, Δt, DL) can be calculated as described in U.S. Patent No. 9,794,825, and averaged over the time interval Δt.
It should be noted that PRB_agg(a, Δt, DL) represents the average amount of DL cell resources assigned to slice a, and that PRB_avg(a, Δt, DL) and NVA_avg(a, Δt, DL) depend on PRB_agg(a, Δt, DL) and on application-dependent properties of the traffic flows in slice a.
The processor 400 of node 20a computes, for slice a over the time interval Δt, the DL radio link latency L(a, D, Δt, DL) of a D-bit data unit, where L(a, D, Δt, DL) is the time spent transmitting D bits at the average data rate obtained by a Very Active (VA) bearer of the slice during Δt. The same processor 400 calculates the UL radio link latency L(a, D, Δt, UL).
If C(a, Δt, DL) is the average number of useful bits per PRB allocated by the scheduler (not counting retransmissions) across all DL bearers of slice a during the time interval Δt (as in E. Grinshpun et al., "Long-term application-level wireless link quality prediction," 36th IEEE Sarnoff Symposium, September 2015, which is incorporated by reference in this document), then the processor 400 of node 20a calculates the DL radio link latency as follows:
L(a, D, Δt, DL) = D / (PRB_avg(a, Δt, DL) * C(a, Δt, DL))    (Equation 1)
The equation for UL latency is as follows:
L(a, D, Δt, UL) = D / (PRB_avg(a, Δt, UL) * C(a, Δt, UL))    (Equation 2)
In Equations 1 and 2, an average latency can be calculated independently by selecting D = D1 + D2, where D1 is the average IP packet size of the slice, and D2 is the average size of the buffer of packets accumulated before they are transmitted over the link (e.g., the average size of the Radio Link Control (RLC) buffer).
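Equations 1 and 2 can be sketched as follows, assuming (as an illustrative convention) that PRB_avg is expressed in PRBs per second and C in useful bits per PRB, so the result is in seconds; the function names are hypothetical:

```python
def radio_link_latency(d_bits, prb_avg, useful_bits_per_prb):
    """Equations 1 and 2: time to carry a D-bit data unit at the VA-bearer
    average PRB rate, with C useful bits delivered per PRB."""
    return d_bits / (prb_avg * useful_bits_per_prb)

def buffered_latency(d1_bits, d2_bits, prb_avg, useful_bits_per_prb):
    """D = D1 + D2: average IP packet size plus average buffer
    (e.g., RLC) backlog accumulated before transmission."""
    return radio_link_latency(d1_bits + d2_bits, prb_avg, useful_bits_per_prb)
```

The same function serves both directions; only the PRB_avg and C inputs differ between DL and UL.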
As described above, the latency calculation performed by the processor 400 of node 20a is not CPU-intensive and does not require storing large amounts of data. Thus, the calculation may be carried out continuously by the processor 400 of node 20a. In an embodiment, the processor 300 of the LME 30 triggers the transmission of the latest latency value with a data message carrying the following parameters: <NSID, D, Δt, direction>. The processor 300 of the LME 30 sets D and Δt based on the slice traffic SLA, using larger values of D and Δt for slices with higher expected traffic and smaller values for slices with lower expected traffic. The processor 400 of node 20a may also send its latency reports periodically, or when a configured (determined) threshold is exceeded.
LME 30 operation:
The processor 300 of the LME 30 receives the sample reports from nodes 10a and 20a and combines them to calculate the end-to-end latency of each monitored slice and of selected portions of the data path within each monitored slice. The processor 300 of the LME 30 sets the frequency of the sampling periods, and their start and end times, for each node 10a/20a it controls.
Fig. 7 illustrates the operation of a first measurement node (node 10 a) in accordance with an example embodiment. In particular, FIG. 7 illustrates an example of a node in a network 50c (similar to network 50 of FIG. 1), although the network 50c involves Virtualized Network Function (VNF) instances 50c1/50c 2. VNF X (50 c 1) and VNF Y (50 c 2) are composed of three slices 640, 650, 660 (with a pass-by abAndcthe corresponding NSID or "network slice identifier" of the representation). The following description focuses only on sheetsaThe processor 300 of the LME 30 controls the operation of three node ( node 10a, 10b, 10 c) instances (in particular node 10a/10b in VNF X and node 10c in VNF Y) for it). Can be sliced into slicesaIs instantiated as a stand-alone function or is instantiated in common with T1 LMAs of other slices associated with the same link endpoint.
The respective processors 200 of nodes 10a, 10b, 10c collect a set of consecutive packet report records based on the trigger messages they receive from the processors 300 of the LMEs 30 and send them to the LMEs 30 once the respective sampling periods reach their end times. The processor 300 of the LME 30 uses the set of packet report records to calculate the latency of a data path from VNF X to VNF Y, where the data path may include a single link (possibly virtual), or multiple links in series, possibly joining other VNF instances.
Scalability requires that the duration H of the T1 LMA sampling period be as short as possible. However, shortening the sampling period reduces the size of the intersection of the sets of packet identifiers collected at the endpoints of the latency measurement path. As the processor 300 of the LME 30 maintains the sets of collected packet report records, the processor 300 of the LME 30 may fine-tune both the duration and the start time of the sampling period at each measurement endpoint to increase the size of the intersection set.
In an embodiment, the processor 300 of the LME 30 controls the parameters of the sampling period when measuring latency between the endpoints of the measurement data path using the following method.
First, the processor 300 of the LME 30 must set the start and end times of the sampling period at the T1LMA of the measurement path endpoint. Is provided witht (a, 2, start)Is the start time of the sampling period at T1A-a, 2 (in VNF X), andt (a, 3, on) First) isThe start time of the sampling period at T1A-a, 3 (in VNF Y). The end time of the same period ist (a, 2, end) And t (a, 3, end). Thus, the duration of the sampling period at two T1 LMA's isH (a, 2) = t (a, 2, end) -t (a, 2, start),and isH (a, 3) = t (a, 3, end) -t (a, 3, start). Is provided withE (a, 2, 3) isExpected wait times from T1A-a, 2 to T1A-a, 3 based on the most recent measurements, thenOf S (a, 2, 3) -E (a, 2, 3)A small partFor example,S(a,2,3)Can be defined as the product of a configurable factor and the standard deviation of the same sampleAverage of its outputE(a,2,3))。The processor 300 of the LME 30 sets the start and end times of the downstream T1LMA as follows:
t(a,3,start) = t(a,2,start) + E(a,2,3) - S(a,2,3)
t(a,3,end) = t(a,3,start) + H(a,2) + 2 * S(a,2,3)    (Equation 3)
In this way, the sampling interval H(a,3) of the downstream T1 LMA (T1A-a,3) is longer than the interval H(a,2) of the upstream T1 LMA (T1A-a,2) by a duration of 2 * S(a,2,3).
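The window placement above — the downstream window shifted by the expected latency E(a,2,3) and widened by the margin S(a,2,3) on each side — can be sketched as follows; the function name is hypothetical:

```python
def downstream_window(t_up_start, t_up_end, expected_latency, margin):
    """Place the downstream T1 LMA sampling window: start the margin
    earlier than the expected arrival of the upstream window's first
    packets, and extend the duration by the margin on each side."""
    h_up = t_up_end - t_up_start
    t_down_start = t_up_start + expected_latency - margin
    t_down_end = t_down_start + h_up + 2 * margin
    return t_down_start, t_down_end
```

The widened window increases the chance that every packet sampled upstream is also observed downstream, enlarging the intersection of the two record sets.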
Next, the processor 300 of the LME 30 must use the sets of packet report records received from the two T1 LMAs to calculate the latency from T1A-a,2 to T1A-a,3.
Fig. 8 illustrates a method of the central node (LME) 30 in accordance with an example embodiment. In particular, Fig. 8 depicts a method of operation of the LME 30 in conjunction with T1 LMAs T1A-a,2 and T1A-a,3, including: processing the sets of packet report records to control the sampling parameters H and S and to calculate the average latency E. As set forth above, LMCM 300a, LMMT1 200a, and LMMT2 400a may include computer readable instructions for the respective processors 300, 200, and 400 (of the LME 30, node 10a, and node 20a) to carry out the steps of the method.
In step S600, the processor 300 of the LME 30 may begin the method by determining that latency measurements and latency control metrics are needed. The method can begin based on operator policies associated with network events (such as the addition or reconfiguration of a network slice, detected degradation of application experience, detected network congestion, etc.), on a regular periodic schedule, on commands from a parent node, or on manual commands from the network operator.
In step S602, the controller 300 transmits a sample trigger (request message) to a node in the network 50. The node may be node 10a within network 50 or the node may be node 20a with links extending outside network 50. Once the nodes 10a and/or 20a receive the request message, they measure and compile a packet report commensurate with the example embodiments described above.
In step S604, the processor 300 of the LME 30 receives a packet report from the node 10a and/or 20a, and in step S606, the processor 300 determines a match signature (as described above), wherein the determination may also involve determining a threshold value that quantifies the number of matches. The threshold value may be, for example, the total number of matches.
If, in step S606, the processor 300 determines that there are sufficient matches, then in step S608 the processor 300 may calculate latency information for the matches, or otherwise investigate the latency information. In particular, processor 300 may calculate latency and control parameters, where the latency may be the end-to-end latency, or the latency of slice segments within an end-to-end transmission (as described above). In an embodiment, instead of the processor 300 of the LME 30 determining the latency information, the processor 400 of node 20a may optionally determine the latency information and then send it to the LME 30; in that case, in step S608, the processor 300 may analyze the matched latency information. In an embodiment, in step S608, the processor 300 may control the operation of the network in response to the latency analysis. This can be achieved by: rerouting slices through different nodes, rerouting slices through different configurable layers (PHY layer, MAC layer, PDCP layer, user plane layer) of the same node and/or changing the settings of these layers, rerouting slices through different configurable modes of the scheduler and/or RLC of the same node and/or changing the settings of the scheduler and/or RLC, informing a network node to adjust its settings, increasing or decreasing network throughput, etc., where these actions controlling network operation may optionally be coordinated with other LMEs 30, with a central office of the network (not shown in the figures), or otherwise with multiple nodes within the network 50 or even nodes outside the network 50. After step S608, the method may be repeated (with the method returning to step S602).
If, in step S606, the processor 300 determines that there are not enough matches, then, in step S610, the processor 300 of the LME 30 may increase H(a,2) and S(a,2,3), with the goal of increasing the number of matches during another iteration of the method. After step S610, the method may be repeated (with the method returning to step S602).
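The decision in steps S606-S610 — compute latency samples when there are enough signature matches, otherwise widen the sampling parameters for the next iteration — can be sketched as follows; min_matches and the growth factor are illustrative assumptions, not values from the document:

```python
def lme_iteration(reports_up, reports_down, H, S, min_matches=10, growth=1.5):
    """One iteration of the LME loop. reports_* are lists of
    (packet_id, timestamp) pairs from the upstream and downstream
    T1 LMAs. Returns (latency samples, H, S); samples is None when
    there were too few matches and H and S have been increased."""
    down = dict(reports_down)
    matches = [(pid, down[pid] - ts) for pid, ts in reports_up if pid in down]
    if len(matches) >= min_matches:
        return matches, H, S          # step S608: analyze latency
    return None, H * growth, S * growth  # step S610: widen and retry
```

Each returned sample is the timestamp difference of one packet observed at both endpoints; the widened H and S feed the next round of trigger messages.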
The processor 300 of the LME 30 may disable the collection of packet report records from intermediate T1 LMAs when the latency measurements between the slice endpoints are stable, and reactivate it when the latency measurements increase or the sets of packet report records from the slice endpoints become severely misaligned (with little or no intersection).
The processor 300 of the LME 30 calculates the latency of each packet between two T1 LMAs by matching the packet identifiers (signatures) in the two sets and subtracting the corresponding timestamps. The processor 300 of the LME 30 accumulates the latency samples into combined metrics (e.g., a selected average) and further normalizes them. The type of normalization depends on the traffic properties of the slice flows. For example, for slices with continuously high traffic, normalization may be over a reference data unit size D; for small-volume bursty traffic, it may instead be over the number of packets processed during a time period Δt.
It should be appreciated that the nodes of the example embodiments described herein may be physical or virtual routers, switches, 4G wireless eNodeBs, SGWs, PGWs, MMEs, 5G wireless nodes (gNodeBs, UPFs), gateways, or other structural elements capable of performing the functions and method steps outlined in this document.
Although depicted and described herein with respect to embodiments in which, for example, programs and logic are stored within a data storage device and the memory is communicatively connected to the processor, it will be appreciated that such information may be stored in any other suitable manner (e.g., using any suitable number of memories, storage devices, or databases); any suitable arrangement using a memory, storage or database communicatively connected to any suitable arrangement of equipment; storing information in any suitable combination of memory(s), storage device(s), or internal or external database(s); or using any suitable number of externally accessible memories, storage devices or databases. As such, the term "data storage" referred to herein is intended to encompass all suitable combinations of memory(s), storage(s), and database(s).
The description and drawings merely illustrate the principles of example embodiments. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described or shown herein, embody the principles of the invention and are included within its spirit and scope. Moreover, all examples recited herein are principally intended expressly to be only for pedagogical purposes to aid the reader in understanding the principles of the invention and the concepts contributed by the inventor(s) to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions. Moreover, all statements herein reciting principles, aspects, and embodiments of the invention, as well as specific examples thereof, are intended to encompass equivalents thereof.
The functions of the various elements shown in the exemplary embodiments, including any functional blocks labeled as a "processor," may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. Moreover, explicit use of the term "processor" or "controller" should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation: digital Signal Processor (DSP) hardware, a network processor, an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), Read Only Memory (ROM) for storing software, Random Access Memory (RAM), and non-volatile storage. Other hardware, conventional and custom, may also be included.
Example embodiments may be utilized in conjunction with various telecommunications networks and systems, such as the following (where this is merely an example list): universal Mobile Telecommunications System (UMTS); global system for mobile communications (GSM); advanced Mobile Phone Service (AMPS) systems; narrow-band AMPS system (NAMPS); total Access Communication System (TACS); personal Digital Cellular (PDC) systems; the United States Digital Cellular (USDC) system; code Division Multiple Access (CDMA) systems described in EIA/TIA IS-95; high Rate Packet Data (HRPD) system, Worldwide Interoperability for Microwave Access (WiMAX); ultra Mobile Broadband (UMB); third generation partnership project LTE (3 GPP LTE); and 5G networks.
Having thus described the example embodiments, it will be apparent that they may be varied in many ways. Such variations are not to be regarded as a departure from the intended spirit and scope of the example embodiments, and all such modifications as would be obvious to one skilled in the art are intended to be included within the scope of the following claims.

Claims (23)

1. A method of controlling operation of a communication network, comprising:
transmitting, by at least one first processor of a central node, request messages to at least one first network node, the request messages each including at least a sampling time window defining a duration and a network slice identifier identifying a designated network slice within a communication network;
receiving, at the at least one first processor, a packet report from the at least one first network node, the packet report including latency information for packets processed by the at least one first network node during the sampling time window for a specified network slice; and
controlling, by the at least one first processor, operation of the communication network based on the latency information.
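The request/report exchange recited in claim 1 can be pictured with minimal message structures. The sketch below is illustrative only; every class and field name is an assumption for illustration, not drawn from the patent:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class LatencyRequest:
    """Request sent by the central node (claim 1); names are hypothetical."""
    slice_id: str           # network slice identifier (may also encode direction)
    window_start: float     # sampling time window start, seconds
    window_duration: float  # sampling time window duration, seconds


@dataclass
class PacketRecord:
    """One entry of a first-type packet report (claim 4)."""
    packet_id: int    # packet identifier information
    size_bytes: int   # packet size information
    timestamp: float  # timestamp information


@dataclass
class PacketReport:
    """Packet report returned by a network node for the designated slice."""
    slice_id: str
    records: List[PacketRecord] = field(default_factory=list)
```

Under this reading, the central node would send one `LatencyRequest` per slice (and direction), and each reporting node would answer with a `PacketReport` covering only packets it processed inside the sampling window.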
2. The method of claim 1, wherein the at least one first network node comprises: at least one network node of a first type having a first link in the designated network slice, the first link having a termination endpoint within the communication network.
3. The method of claim 1, wherein the at least one first network node comprises: at least one network node of a second type having a second link in the designated network slice, the second link having a termination endpoint outside the network slice.
4. The method of claim 2, wherein the receiving of the packet report comprises:
receiving, from a network node of a first type, packet reports of a first type for a specified network slice, the packet reports of the first type each comprising:
packet identifier information,
packet size information, and
timestamp information.
5. The method of claim 3, wherein the at least one first network node comprises: at least one network node of a first type having a first link in the designated network slice, the first link having a termination endpoint within the communication network, and
the receiving of the packet report includes:
receiving, from a network node of a first type, packet reports of a first type for a specified network slice, the packet reports of the first type each comprising:
packet identifier information,
packet size information, and
timestamp information; and
receiving, from the network nodes of the second type, packet reports of the second type for the specified network slice, the packet reports of the second type each including latency information for the network nodes of the second type.
6. The method of claim 1, wherein the network slice identifier identifies a communication direction for the specified network slice, the direction being one of an uplink direction and a downlink direction.
7. The method of claim 4, wherein the central node and the at least one network node of the first type are synchronized to the same network clock for the communication network, and
the request message to the at least one network node of the first type comprises: a start time for the sampling time window.
8. The method of claim 7, wherein the at least one network node of the first type comprises: a downstream network node of the first type and an upstream network node of the first type,
the transmission of the request message comprises:
transmitting a first request message having a first sampling time window defining a first start time and a first duration to the downstream network node of the first type, and
transmitting a second request message having a second sampling time window defining a second start time and a second duration to the upstream network node of the first type,
the first duration is one of the same as the second duration and different from the second duration.
9. The method of claim 8, wherein the receiving of the packet report comprises:
receiving a first packet report from the downstream network node of the first type, the first packet report comprising: a first set of packet identifier information associated with a first set of timestamp information, and
receiving a second packet report from the upstream network node of the first type, the second packet report comprising: a second set of packet identifier information associated with a second set of timestamp information.
10. The method of claim 9, further comprising:
calculating the latency information by:
matching identifier information between the first set of packet identifier information and the second set of packet identifier information to obtain a matching subset of identifier information; and
determining a difference between a first portion of the first set of timestamp information and a second portion of the second set of timestamp information, the first and second portions being associated with the matching subset of identifier information.
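Claims 9 and 10 describe matching packet identifiers between an upstream and a downstream report and taking the timestamp differences as per-packet latency. A minimal sketch of that computation, assuming each report has been reduced to a dict mapping packet identifier to timestamp (this representation and the function name are assumptions):

```python
def compute_latencies(upstream, downstream):
    """Match packet identifiers between the upstream and downstream reports
    and return per-packet latency as the downstream-minus-upstream timestamp
    difference. Both arguments map packet_id -> timestamp (seconds)."""
    matched = upstream.keys() & downstream.keys()  # matching subset of identifiers
    return {pid: downstream[pid] - upstream[pid] for pid in matched}


# Packets 1 and 4 are each seen by only one node, so they drop out.
up = {1: 0.010, 2: 0.012, 3: 0.020}
down = {2: 0.030, 3: 0.045, 4: 0.050}
lat = compute_latencies(up, down)  # latencies for packets 2 and 3 only
```

Packets captured by only one of the two nodes (here 1 and 4) fall outside the matching subset, which is precisely why the identifier match precedes the differencing step in the claims.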
11. A method of controlling operation of a communication network in a system comprising a central node and at least a first network node, the method comprising:
transmitting, by at least one first processor of a central node, request messages to at least one second processor of at least one first network node, the request messages each including at least a sampling time window defining a duration and a network slice identifier identifying a designated network slice within a communication network;
creating, by the at least one second processor, a packet report upon receiving the request message, the packet report including latency information for packets processed by the at least one first network node during the sampling time window for the designated network slice;
receiving, at the at least one first processor, a packet report from the at least one second processor; and
controlling, by the at least one first processor, operation of the communication network based on the latency information.
12. The method of claim 11, wherein the at least one first network node comprises: at least one network node of a first type having a first link in the designated network slice, the first link having a termination endpoint within the communication network.
13. The method of claim 11, wherein the at least one first network node comprises: at least one network node of a second type having a second link in the designated network slice, the second link having a termination endpoint outside the network slice.
14. The method of claim 12, wherein the receiving of the packet report comprises:
receiving, from a network node of a first type, packet reports of a first type for a specified network slice, the packet reports of the first type each comprising:
packet identifier information,
packet size information, and
timestamp information.
15. The method of claim 13, wherein the at least one first network node comprises: at least one network node of a first type having a first link in the designated network slice, the first link having a termination endpoint within the communication network, and
the receiving of the packet report includes:
receiving, from a network node of a first type, packet reports of a first type for a specified network slice, the packet reports of the first type each comprising:
packet identifier information,
packet size information, and
timestamp information; and
receiving, from the network nodes of the second type, packet reports of the second type for the specified network slice, the packet reports of the second type each including latency information for the network nodes of the second type.
16. The method of claim 11, wherein the network slice identifier identifies a communication direction for the designated network slice, the direction being one of an uplink direction and a downlink direction.
17. The method of claim 14, wherein the central node and the at least one network node of the first type are synchronized to the same network clock for the communication network, and
the request message to the at least one network node of the first type comprises: a start time for the sampling time window.
18. The method of claim 17, wherein the at least one network node of the first type comprises: a downstream network node of the first type and an upstream network node of the first type,
the transmission of the request message comprises:
transmitting a first request message having a first sampling time window defining a first start time and a first duration to the downstream network node of the first type, and
transmitting a second request message having a second sampling time window defining a second start time and a second duration to the upstream network node of the first type,
the first duration is one of the same as the second duration and different from the second duration.
19. The method of claim 18, wherein the receiving of the packet report comprises:
receiving a first packet report from the downstream network node of the first type, the first packet report comprising: a first set of packet identifier information associated with a first set of timestamp information, and
receiving a second packet report from the upstream network node of the first type, the second packet report comprising: a second set of packet identifier information associated with a second set of timestamp information.
20. The method of claim 19, further comprising:
calculating the latency information by:
matching identifier information between the first set of packet identifier information and the second set of packet identifier information to obtain a matching subset of identifier information; and
determining a difference between a first portion of the first set of timestamp information and a second portion of the second set of timestamp information, the first and second portions being associated with the matching subset of identifier information.
21. The method of claim 13, wherein the creation of a packet report comprises:
calculating, by the at least one second processor of the at least one network node of the second type, Physical Resource Block (PRB) rate information and bearer information, the bearer information including a quantification of a number of active bearers, and
determining, by the at least one second processor, latency information based on the PRB rate information and the bearer information.
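Claim 21 derives latency at a second-type node (one whose link terminates outside the slice, e.g. a radio link) from PRB rate information and an active-bearer count rather than from per-packet timestamps. The sketch below shows one way such an estimate could be formed; the even-sharing model and all parameter names are assumptions, not taken from the patent:

```python
def estimate_radio_latency(backlog_bytes_per_bearer, active_bearers,
                           prb_rate_per_s, bytes_per_prb):
    """Rough per-bearer queuing-latency estimate for a second-type node.

    Assumes the PRB budget is shared evenly among active bearers, so each
    bearer drains its backlog at (prb_rate_per_s * bytes_per_prb) /
    active_bearers bytes per second. This sharing model is an assumption
    made for illustration, not the patent's method.
    """
    if active_bearers == 0:
        return 0.0  # no active bearers, nothing queued
    per_bearer_rate = (prb_rate_per_s * bytes_per_prb) / active_bearers
    return backlog_bytes_per_bearer / per_bearer_rate
```

For example, with a 1500-byte backlog per bearer, 10 active bearers, 1000 PRBs/s, and 100 bytes carried per PRB, each bearer drains at 10 kB/s, giving an estimated latency of 0.15 s.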
22. A central node, comprising:
a memory storing computer readable instructions; and
at least one first processor configured to execute computer-readable instructions, such that the at least one first processor is configured to,
transmitting request messages to at least one first network node, the request messages each comprising at least a sampling time window defining a duration and a network slice identifier identifying a designated network slice in a communication network,
receiving a packet report from at least one first network node, the packet report including latency information for packets processed by the at least one first network node during a sampling time window for a specified network slice, and
controlling operation of the communication network based on the latency information.
23. A system, comprising:
a central node, which includes,
a first memory storing first computer readable instructions, and
at least one first processor configured to execute the first computer readable instructions, such that the at least one first processor is configured to,
transmitting request messages to at least one second processor, the request messages each including at least a sampling time window defining a duration and a network slice identifier identifying a designated network slice within a communication network; and
at least one first network node, comprising:
a second memory storing second computer readable instructions, and
at least one second processor configured to execute the second computer readable instructions, such that the at least one second processor is configured to,
creating a packet report upon receiving the request message, the packet report including latency information for packets processed by at least one first network node during a sampling time window for the specified network slice,
at least one first processor further configured to receive packet reports from at least one second processor and to control operation of the communication network based on the latency information.
CN201880092681.5A 2018-02-25 2018-02-25 Method and system for controlling operation of a communication network to reduce latency Pending CN111989979A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2018/019605 WO2019164517A1 (en) 2018-02-25 2018-02-25 Method and system for controlling an operation of a communication network to reduce latency

Publications (1)

Publication Number Publication Date
CN111989979A true CN111989979A (en) 2020-11-24

Family

ID=67687847

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880092681.5A Pending CN111989979A (en) 2018-02-25 2018-02-25 Method and system for controlling operation of a communication network to reduce latency

Country Status (4)

Country Link
US (1) US11212687B2 (en)
EP (1) EP3756413B1 (en)
CN (1) CN111989979A (en)
WO (1) WO2019164517A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11224486B2 (en) * 2018-08-22 2022-01-18 Verily Life Sciences Llc Global synchronization of user preferences
CN111770587B (en) * 2019-04-02 2023-11-28 华为技术有限公司 Data processing method, device and system
CN115426267A (en) * 2019-12-31 2022-12-02 华为技术有限公司 Method and device for acquiring network slice identifier
IT202100030458A1 (en) * 2021-12-01 2023-06-01 Telecom Italia Spa Transmission of a measurement result via a packet-switched communications network
CN114363146B (en) * 2021-12-30 2023-04-07 达闼机器人股份有限公司 Network detection method, system, device and storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070195797A1 (en) * 2006-02-23 2007-08-23 Patel Alpesh S Network device that determines application-level network latency by monitoring option values in a transport layer message
US20170079059A1 (en) * 2015-09-11 2017-03-16 Intel IP Corporation Slicing architecture for wireless communication
TW201715910A (en) * 2015-10-28 2017-05-01 英特爾Ip公司 Slice-based operation in wireless networks with end-to-end network slicing
WO2017140340A1 (en) * 2016-02-15 2017-08-24 Telefonaktiebolaget Lm Ericsson (Publ) Network nodes and methods performed therein for enabling communication in a communication network
US20170331785A1 (en) * 2016-05-15 2017-11-16 Lg Electronics Inc. Method and apparatus for supporting network slicing selection and authorization for new radio access technology
WO2017200978A1 (en) * 2016-05-16 2017-11-23 Idac Holdings, Inc. Security-based slice selection and assignment
CN107395388A (en) * 2016-05-17 2017-11-24 财团法人工业技术研究院 Network dicing method and the user equipment using methods described and base station
US20170367036A1 (en) * 2016-06-15 2017-12-21 Convida Wireless, Llc Network Slice Discovery And Selection
CN107682135A (en) * 2017-09-30 2018-02-09 重庆邮电大学 A kind of network slice adaptive virtual resource allocation method based on NOMA
WO2018030710A1 (en) * 2016-08-11 2018-02-15 Samsung Electronics Co., Ltd. Method and apparatus for scheduling uplink data in mobile communication system

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007143218A2 (en) * 2006-06-09 2007-12-13 The Directv Group, Inc. Presentation modes for various format bit streams
US9330119B2 (en) * 2013-04-11 2016-05-03 Oracle International Corporation Knowledge intensive data management system for business process and case management
US9794825B2 (en) 2014-11-06 2017-10-17 Alcatel-Lucent Usa Inc. System and method for determining cell congestion level
US10999012B2 (en) * 2014-11-07 2021-05-04 Strong Force Iot Portfolio 2016, Llc Packet coding based network communication
US11026165B2 (en) * 2016-01-11 2021-06-01 Telefonaktiebolaget Lm Ericsson (Publ) Radio network node, network node, database, configuration control node, and methods performed thereby
WO2017140356A1 (en) * 2016-02-17 2017-08-24 Nec Europe Ltd. A method for operating a wireless network, a wireless network and a management entity
US9961713B2 (en) * 2016-02-23 2018-05-01 Motorola Mobility Llc Procedures to support network slicing in a wireless communication system
US10893455B2 (en) * 2016-04-01 2021-01-12 Telefonaktiebolaget Lm Ericsson (Publ) Handover in a wireless communication network with network slices
CN112165725B (en) * 2016-06-15 2024-03-19 华为技术有限公司 Message processing method and device
US10470149B2 (en) 2016-07-27 2019-11-05 Lg Electronics Inc. Method and apparatus for performing MM attach and service request procedure for network slice based new radio access technology in wireless communication system
CN109474967B (en) * 2017-01-25 2019-11-19 华为技术有限公司 Communication means and communication device
JP2020506625A (en) * 2017-02-03 2020-02-27 ノキア ソリューションズ アンド ネットワークス オサケユキチュア Choosing sustainable services
US10567102B2 (en) * 2017-02-06 2020-02-18 Valens Semiconductor Ltd. Efficient double parity forward error correction on a communication network
CN111918273B (en) * 2017-02-27 2021-09-14 华为技术有限公司 Network slice selection method and device
US10986516B2 (en) * 2017-03-10 2021-04-20 Huawei Technologies Co., Ltd. System and method of network policy optimization
US11051210B2 (en) * 2017-04-28 2021-06-29 NEC Laboratories Europe GmbH Method and system for network slice allocation
EP3622744B1 (en) * 2017-05-10 2022-10-26 Nokia Solutions and Networks Oy Methods relating to network slice selection requests
US10716096B2 (en) * 2017-11-07 2020-07-14 Apple Inc. Enabling network slicing in a 5G network with CP/UP separation
WO2019120485A1 (en) * 2017-12-19 2019-06-27 Telefonaktiebolaget Lm Ericsson (Publ) Managing network slices in a communications network
US10785804B2 (en) * 2018-02-17 2020-09-22 Ofinno, Llc Bandwidth part configuration information


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"S1-152420_was_S1-152103_mobility_on_demand", 3GPP TSG-SA WG1 *
HUAWEI: "R3-161759 "RAN Support for Core Network Slicing"", 3GPP TSG-RAN WG3, no. 3 *

Also Published As

Publication number Publication date
EP3756413A1 (en) 2020-12-30
EP3756413B1 (en) 2023-04-12
WO2019164517A1 (en) 2019-08-29
US11212687B2 (en) 2021-12-28
US20200389804A1 (en) 2020-12-10
EP3756413A4 (en) 2021-11-03

Similar Documents

Publication Publication Date Title
US10349297B2 (en) Quality of user experience analysis
CN111989979A (en) Method and system for controlling operation of a communication network to reduce latency
US9237474B2 (en) Network device trace correlation
JP6553196B2 (en) Traffic flow monitoring
EP3235177B1 (en) Measurement coordination in communications
US10523534B2 (en) Method and apparatus for managing user quality of experience in network
EP3295612B1 (en) Uplink performance management
US20030225549A1 (en) Systems and methods for end-to-end quality of service measurements in a distributed network environment
US9407522B2 (en) Initiating data collection based on WiFi network connectivity metrics
US10015688B2 (en) Technique for monitoring data traffic
US10952091B2 (en) Quality of user experience analysis
US8879403B2 (en) Link microbenchmarking with idle link correction
US9635569B2 (en) Method and apparatus for measuring end-to-end service level agreement in service provider network
CN107371179B (en) Measurement result reporting method, measurement result receiving method, related equipment and system
KR20220123083A (en) Systems and methods for real-time monitoring and optimization of mobile networks
JP7213339B2 (en) Data transmission method and device
KR102126036B1 (en) An apparatus and method to measure and manage quality of experience in network system
Arvidsson et al. Transport bottlenecks of edge computing in 5G networks
JP2018032983A (en) Terminal device and communication monitoring method
JP2013232851A (en) Available band measuring apparatus, method and program
Aburkhiss et al. Measurement of GPRS Performance over Libyan GSM Networks: Experimental Results

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination