US11552872B2 - Systems and methods for automated remote network performance monitoring - Google Patents


Info

Publication number
US11552872B2
Authority
US
United States
Prior art keywords
network
traffic
virtual packet
random access
packet brokers
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US17/101,882
Other versions
US20220166701A1 (en)
Inventor
Scott FAGAN
Current Assignee
Verizon Patent and Licensing Inc
Original Assignee
Verizon Patent and Licensing Inc
Priority date
Filing date
Publication date
Application filed by Verizon Patent and Licensing Inc
Priority to US17/101,882
Assigned to VERIZON PATENT AND LICENSING INC. (assignment of assignors interest; see document for details). Assignors: FAGAN, SCOTT
Publication of US20220166701A1
Priority to US18/147,831
Application granted
Publication of US11552872B2
Legal status: Active

Classifications

    • H04L 43/028: Capturing of monitoring data by filtering
    • H04L 43/08: Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L 43/12: Network monitoring probes
    • H04L 47/827: Aggregation of resource allocation or reservation requests
    • H04W 24/02: Arrangements for optimising operational condition
    • H04L 41/0213: Standardised network management protocols, e.g. simple network management protocol [SNMP]
    • H04L 43/022: Capturing of monitoring data by sampling
    • H04L 43/062: Generation of reports related to network traffic
    • H04L 43/20: Monitoring or testing data switching networks where the monitoring system or the monitored elements are virtualised, abstracted or software-defined entities, e.g. SDN or NFV
    • H04W 24/10: Scheduling measurement reports; arrangements for measurement reports
    • H04W 74/0833: Non-scheduled or contention based access using a random access procedure

Definitions

  • Multi-access edge computing (MEC) devices are provided for monitoring cloud RANs (CRANs) and traffic aggregation points (TAPs) in networks.
  • a CRAN may be a cloud-native software solution for handling RAN functionality.
  • CRANs bring greater flexibility and versatility to both large-scale and centralized 5G network deployments.
  • FIGS. 1A-1F are diagrams of an example associated with automated remote network performance monitoring.
  • FIG. 2 is a diagram of an example environment in which systems and/or methods described herein may be implemented.
  • FIG. 3 is a diagram of example components of one or more devices of FIG. 2 .
  • FIG. 4 is a flowchart of an example process relating to automated remote network performance monitoring.
  • a user device may request access, via a network node, to a service provided via multi-access edge computing (MEC).
  • the network node may be associated with a wireless network that provides access to MEC.
  • the network node may facilitate a connection between the user device and a MEC node based on a geographical location of the user device and/or the MEC node. For example, the network node may receive the request including a domain name and convert the domain name into a network address (e.g., an internet protocol (IP) address) of the MEC node that is geographically closest to the user device.
  • MEC devices are provided for cloud radio access networks (CRANs) and traffic aggregation points (TAPs) in a network.
  • the TAPs may involve clusters of base stations for aggregating traffic.
  • a TAP may aggregate multiple CRANs.
  • a probe system may use probes to monitor performance indicators and customer performance issues at CRANs and TAPs.
  • the probes may perform key performance indicator (KPI) calculations and store related packets.
  • a session aggregation point (SAP) of the network may include a stack of probes, and may not be able to monitor traffic at all of the CRANs or TAPs. Without the functionality of such probes monitoring the CRANs and the TAPs, the network may suffer performance issues that contribute to wasted time, power, processing resources, and signaling resources.
  • a probe system may deploy virtual packet brokers (vPBs) on commercial off-the-shelf (COTS) hardware in CRANs and TAPs to enable network traffic filtering. Traffic may be filtered based on virtual internet protocol (VIP) addresses, subnets, or tuples.
  • the vPBs may be lightweight and inexpensive.
  • the vPBs may have no storage, may be more passive, and may not calculate key performance indicators (KPIs).
  • the vPBs may be COTS vPBs.
  • the vPBs may be software executing on COTS hardware or on a COTS packet broker.
  • the vPBs may also be virtual PBs executing on a server.
  • a TAP may be part of a visibility system, where traffic that is observed and copied remotely can be passed as visibility traffic.
  • the vPBs in the CRANs and TAPs may pass visibility traffic that is of interest on demand or according to an automated schedule to a SAP where probes are aggregated.
  • the vPBs may enable full-time simple network management protocol (SNMP) polling of some or all visibility traffic, including tenant traffic separation to determine bandwidth utilization in a multi-tenant environment.
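The SNMP polling mechanics are not detailed here, but the bandwidth-utilization arithmetic it feeds is standard: two successive interface octet-counter reads per tenant yield a utilization percentage. A minimal sketch, in which the tenant names, counter values, and link speed are illustrative assumptions:

```python
def utilization(prev_octets, curr_octets, interval_s, link_bps):
    """Percent utilization from two successive 64-bit octet-counter
    reads (counter wrap handled via modular arithmetic)."""
    delta = (curr_octets - prev_octets) % (1 << 64)
    return 100.0 * (delta * 8) / (interval_s * link_bps)

# Tenant separation: one counter pair per tenant (e.g., per tenant VLAN).
tenants = {"tenant-a": (1_000_000, 76_000_000),
           "tenant-b": (5_000_000, 5_750_000)}
interval, link = 60, 10_000_000_000  # 60 s poll interval on a 10 Gb/s link
usage = {t: utilization(p, c, interval, link) for t, (p, c) in tenants.items()}
```

The same calculation applies whether the counters come from full-time polling of all visibility traffic or from an on-demand capture window.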
  • the vPBs may enable CRAN and TAP MEC troubleshooting and packet capturing on demand.
  • network operators may eliminate the need to install probes at all CRAN and TAP locations. As a result, the probe system may enable the network to monitor and improve performance, which may cause the network to conserve processing resources and signaling resources. Network providers may also reduce costs.
  • FIGS. 1A-1F are diagrams of an example 100 associated with automated remote network performance monitoring.
  • example 100 includes clusters of base stations (e.g., gNBs), TAPs, and CRANs.
  • Each CRAN may include a network device 110 , an MEC device 120 , and a vPB 115 .
  • Multiple CRANs may be aggregated at a TAP, and TAPs may be aggregated at an SAP.
  • the vPBs 115 may be deployed on COTS devices.
  • Each TAP may also include an MEC device.
  • Example 100 also shows an SAP with a packet broker 125 and a stack of probes 130 .
  • the probe system 135 may obtain remote traffic from a large quantity of inexpensive vPBs 115 .
  • an optical tap may split some light on a fiber cable and provide a copy of traffic to a vPB 115 .
  • the probe system 135 may provide traffic information to the stack of probes 130 , which can perform calculations. For example, the vPB 115 may forward a copy of all of the traffic to the probes 130 , forward only MEC traffic, or filter traffic for a subnet or customer of interest.
  • the more expensive probes 130 may calculate KPIs, but the probes 130 may have limited capacity. However, the less expensive vPBs 115 may be left on full time or switched to an on-demand model, based on the capacity of the probes 130 . For example, if the probes 130 have capacity, the vPBs may be on all the time. If the probes 130 have more limited capacity, the vPBs may operate on demand and/or filter for traffic of interest.
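This capacity-driven choice between full-time and on-demand vPB operation can be expressed as a simple rule; the mode names and the headroom comparison below are illustrative assumptions, not terms from the patent:

```python
def vpb_mode(probe_capacity_gbps, offered_visibility_gbps):
    """Choose a vPB operating mode from probe headroom.

    If the probes can absorb all offered visibility traffic, leave the
    vPBs on full time; otherwise fall back to on-demand, filtered
    forwarding of traffic of interest.
    """
    if offered_visibility_gbps <= probe_capacity_gbps:
        return "full_time"
    return "on_demand_filtered"

# Probes with 40 Gb/s of capacity can absorb 25 Gb/s of visibility traffic.
mode = vpb_mode(probe_capacity_gbps=40, offered_visibility_gbps=25)
```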
  • Example 100 further shows a probe system 135 , which may include one or more devices that may control the vPBs.
  • FIG. 1A shows a probe system 135 that determines one or more parameters for filtering network traffic.
  • the probe system 135 may operate with a network that includes a plurality of vPBs provided for a plurality of CRANs and a plurality of TAPs.
  • the parameters may filter network traffic based on VIP addresses, virtual local area network (VLAN) identifiers, subnets, tuples (e.g., 5-tuples), a subset of traffic of interest, and/or SNMP information of the network traffic.
  • probe system 135 may provide the one or more parameters to the vPBs 115 , to cause the vPBs 115 to filter the network traffic to obtain network visibility traffic.
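As a concrete illustration of such filter parameters, the sketch below builds a packet predicate from a VIP address, a subnet, a VLAN, or a 5-tuple. The packet representation and field names are assumptions for illustration, not the patent's data model:

```python
import ipaddress

def make_filter(vip=None, subnet=None, vlan=None, five_tuple=None):
    """Return a predicate over packet-metadata dicts with keys
    src, dst, sport, dport, proto, and vlan."""
    net = ipaddress.ip_network(subnet) if subnet else None

    def matches(pkt):
        if vip is not None and pkt["dst"] != vip:
            return False
        if net is not None and ipaddress.ip_address(pkt["src"]) not in net:
            return False
        if vlan is not None and pkt.get("vlan") != vlan:
            return False
        if five_tuple is not None and (
            pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"], pkt["proto"]
        ) != five_tuple:
            return False
        return True

    return matches

# Copy only traffic sourced from a subnet of interest to the probes.
packets = [
    {"src": "10.1.2.3", "dst": "192.0.2.10", "sport": 36000,
     "dport": 2152, "proto": 17, "vlan": 100},
    {"src": "172.16.0.5", "dst": "192.0.2.10", "sport": 443,
     "dport": 51000, "proto": 6, "vlan": 200},
]
keep = make_filter(subnet="10.1.0.0/16")
visibility_traffic = [p for p in packets if keep(p)]
```

In practice the vPB would apply such a predicate to copies of packets received from an optical tap, forwarding only the matches toward the probes.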
  • the probe system 135 may receive data identifying the vPBs 115 , generate a user interface (UI) based on the data, and provide the user interface for display.
  • the vPBs 115 may be accessible via a single graphic UI.
  • An egressing visibility port may time out and be disabled after 24 hours to eliminate over-subscription of unified transport (UT) links or probe capacity.
  • probe system 135 may receive, from one or more probes of an SAP of the network, one or more metrics calculated based on the network visibility traffic by the vPBs 115 .
  • the metrics may be associated with an availability, latency, utilization, and/or jitter of a CRAN or a TAP.
  • the probe system 135 may determine one or more actions to be implemented based on the metrics. As shown by reference number 155 , the probe system 135 may cause the actions to be implemented in the network. In some implementations, the probe system 135 may dispatch a technician to service a network device associated with one of the CRANs or one of the TAPs, dispatch an autonomous vehicle to service a network device associated with a CRAN or a TAP, or order a replacement network device to replace a network device associated with a CRAN or TAP. The probe system 135 may also determine and cause other actions to be implemented in the network.
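The mapping from metrics to remediation can be sketched as a per-metric threshold check. The thresholds and the exact pairing of metric to action below are illustrative assumptions based on the actions listed above:

```python
def determine_actions(metrics, thresholds):
    """Map CRAN/TAP metrics to remediation actions (illustrative rules)."""
    actions = []
    if metrics["availability"] < thresholds["availability"]:
        actions.append("dispatch_technician")
    if metrics["latency_ms"] > thresholds["latency_ms"]:
        actions.append("order_replacement_device")
    if metrics["jitter_ms"] > thresholds["jitter_ms"]:
        actions.append("dispatch_autonomous_vehicle")
    return actions

# Availability and latency both miss their targets; jitter is fine.
actions = determine_actions(
    {"availability": 0.95, "latency_ms": 40, "jitter_ms": 2},
    {"availability": 0.99, "latency_ms": 30, "jitter_ms": 5},
)
```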
  • the probe system 135 may configure probes 130 for automated remote performance sampling.
  • the probe system 135 may cycle through (e.g., daily, hourly) performance sampling of CRANs and/or TAPs, based on a capacity of the probes 130 .
  • the probe system 135 may receive data identifying bandwidths (e.g., available backhaul bandwidths) of the vPBs 115 and the probes 130 .
  • the probe system 135 may determine a schedule for performance sampling based on the data identifying the bandwidths.
  • Probe capacity may be reserved for on-demand probing, including for multi-tenant environments.
  • Packet broker 125 may cause vPBs 115 to utilize optical taps in CRANs, TAPs, and/or an SAP between backhaul MEC entities and MEC tenants to determine bandwidth utilization.
  • the vPBs may receive copies of traffic (e.g., packets) from the optical tap (which creates copies), determine how to filter the traffic, and then send the filtered traffic to a probe 130 for KPI calculation. This may include full-time SNMP polling of all visibility traffic.
  • the probe system 135 may pass visibility traffic of interest on demand by opening up or throttling an egressing port to a unified transport link. That is, the probe system 135 may turn the vPBs on and off on demand, based on need.
  • the probe system 135 may cause the vPBs 115 to implement the schedule for the performance sampling. For example, some probes 130 may be scheduled for CRAN 1 for 24 hour periods on Monday, Thursday, and Sunday. Some probes 130 may be scheduled for CRAN 2 for 24 hour periods on Tuesday and Friday. Some probes 130 may be scheduled for CRAN 3 for 24 hour periods on Wednesday and Saturday.
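The rotation described above, a fixed pool of daily probe slots cycled through the CRANs, can be sketched as follows; the function and its parameters are assumptions for illustration, though with three CRANs and one slot per day it reproduces the Monday/Thursday/Sunday pattern in the example:

```python
from itertools import cycle

DAYS = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]

def weekly_schedule(crans, probe_slots_per_day):
    """Cycle probe capacity through CRANs, one 24-hour slot each.

    With fewer daily slots than CRANs, each CRAN ends up sampled on a
    rotating subset of days rather than every day.
    """
    schedule = {day: [] for day in DAYS}
    rotation = cycle(crans)
    for day in DAYS:
        for _ in range(probe_slots_per_day):
            schedule[day].append(next(rotation))
    return schedule

sched = weekly_schedule(["CRAN1", "CRAN2", "CRAN3"], probe_slots_per_day=1)
```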
  • the probe system 135 may receive, from the probes 130 , network performance data generated based on the vPBs 115 implementing the schedule for the performance sampling. Network performance data may be received only during scheduled time frames.
  • the probe system 135 may use the vPBs 115 to obtain network visibility traffic. As shown in FIG. 1E , and by reference number 180 , the probe system 135 may receive an instruction to resume obtaining the network visibility traffic. As shown by reference number 185 , the probe system 135 may provide, to the vPBs 115 and based on the instruction, the parameters for filtering the network traffic to the network visibility traffic. As shown by FIG. 1F , and by reference number 190 , the probe system 135 may receive, from the one or more probes, one or more additional metrics calculated based on the network visibility traffic by the vPBs 115 . The probe system may determine one or more additional actions to be implemented based on the one or more additional metrics and cause the one or more additional actions to be implemented in the network.
  • the probe system 135 may provide parameters that cause a first set of the vPBs 115 to not capture the network traffic, and cause a second set of the vPBs 115 to capture the network traffic, filter the network traffic to the network visibility traffic, and provide the network visibility traffic to the probes 130 .
  • FIG. 2 is a diagram of an example environment 200 in which systems and/or methods described herein may be implemented.
  • environment 200 may include a probe system 201 , which may include one or more elements of and/or may execute within a cloud computing system 202 .
  • the cloud computing system 202 may include one or more elements 203 - 213 , as described in more detail below.
  • environment 200 may include a probe system 201 , a cloud computing system 202 , a network 220 , one or more gNBs 105 , one or more network devices 110 , one or more vPBs 115 , one or more MECs 120 , multiple probes 130 , and a packet broker 125 .
  • Devices and/or elements of environment 200 may interconnect via wired connections and/or wireless connections.
  • the cloud computing system 202 includes computing hardware 203 , a resource management component 204 , a host operating system (OS) 205 , and/or one or more virtual computing systems 206 .
  • the resource management component 204 may perform virtualization (e.g., abstraction) of computing hardware 203 to create the one or more virtual computing systems 206 .
  • the resource management component 204 enables a single computing device (e.g., a computer, a server, and/or the like) to operate like multiple computing devices, such as by creating multiple isolated virtual computing systems 206 from computing hardware 203 of the single computing device. In this way, computing hardware 203 can operate more efficiently, with lower power consumption, higher reliability, higher availability, higher utilization, greater flexibility, and lower cost than using separate computing devices.
  • Computing hardware 203 includes hardware and corresponding resources from one or more computing devices.
  • computing hardware 203 may include hardware from a single computing device (e.g., a single server) or from multiple computing devices (e.g., multiple servers), such as multiple computing devices in one or more data centers.
  • computing hardware 203 may include one or more processors 207 , one or more memories 208 , one or more storage components 209 , and/or one or more networking components 210 . Examples of a processor, a memory, a storage component, and a networking component (e.g., a communication component) are described elsewhere herein.
  • the resource management component 204 includes a virtualization application (e.g., executing on hardware, such as computing hardware 203 ) capable of virtualizing computing hardware 203 to start, stop, and/or manage one or more virtual computing systems 206 .
  • the resource management component 204 may include a hypervisor (e.g., a bare-metal or Type 1 hypervisor, a hosted or Type 2 hypervisor, and/or the like) or a virtual machine monitor, such as when the virtual computing systems 206 are virtual machines 211 .
  • the resource management component 204 may include a container manager, such as when the virtual computing systems 206 are containers 212 .
  • the resource management component 204 executes within and/or in coordination with a host operating system 205 .
  • a virtual computing system 206 includes a virtual environment that enables cloud-based execution of operations and/or processes described herein using computing hardware 203 .
  • a virtual computing system 206 may include a virtual machine 211 , a container 212 , a hybrid environment 213 that includes a virtual machine and a container, and/or the like.
  • a virtual computing system 206 may execute one or more applications using a file system that includes binary files, software libraries, and/or other resources required to execute applications on a guest operating system (e.g., within the virtual computing system 206 ) or the host operating system 205 .
  • the probe system 201 may include one or more elements 203 - 213 of the cloud computing system 202 , may execute within the cloud computing system 202 , and/or may be hosted within the cloud computing system 202 . In some implementations, the probe system 201 may not be cloud-based (e.g., may be implemented outside of a cloud computing system) or may be partially cloud-based.
  • the probe system 201 may include one or more devices that are not part of the cloud computing system 202 , such as device 300 of FIG. 3 , which may include a standalone server or another type of computing device.
  • the probe system 201 may perform one or more operations and/or processes described in more detail elsewhere herein, such as for probe system 135 .
  • Network 220 includes one or more wired and/or wireless networks.
  • network 220 may include a cellular network, a public land mobile network (PLMN), a local area network (LAN), a wide area network (WAN), a private network, the Internet, and/or the like, and/or a combination of these or other types of networks.
  • the network 220 enables communication among the devices of environment 200 .
  • the network device 110 may be a network entity in a CRAN that handles functionality for the CRAN, including for a vPB and/or an MEC 120 .
  • the MEC 120 may facilitate moving computing of traffic and services from cloud computing system 202 closer to an edge of the network 220 .
  • the packet broker 125 may help to provide network packet data to the probe system 135 for analysis.
  • the number and arrangement of devices and networks shown in FIG. 2 are provided as an example. In practice, there may be additional devices and/or networks, fewer devices and/or networks, different devices and/or networks, or differently arranged devices and/or networks than those shown in FIG. 2 . Furthermore, two or more devices shown in FIG. 2 may be implemented within a single device, or a single device shown in FIG. 2 may be implemented as multiple, distributed devices. Additionally, or alternatively, a set of devices (e.g., one or more devices) of environment 200 may perform one or more functions described as being performed by another set of devices of environment 200 .
  • FIG. 3 is a diagram of example components of a device 300 , which may correspond to probe system 201 .
  • probe system 201 may include one or more devices 300 and/or one or more components of device 300 .
  • device 300 may include a bus 310 , a processor 320 , a memory 330 , a storage component 340 , an input component 350 , an output component 360 , and a communication component 370 .
  • Bus 310 includes a component that enables wired and/or wireless communication among the components of device 300 .
  • Processor 320 includes a central processing unit, a graphics processing unit, a microprocessor, a controller, a microcontroller, a digital signal processor, a field-programmable gate array, an application-specific integrated circuit, and/or another type of processing component.
  • Processor 320 is implemented in hardware, firmware, or a combination of hardware and software.
  • processor 320 includes one or more processors capable of being programmed to perform a function.
  • Memory 330 includes a random access memory, a read only memory, and/or another type of memory (e.g., a flash memory, a magnetic memory, and/or an optical memory).
  • Storage component 340 stores information and/or software related to the operation of device 300 .
  • storage component 340 may include a hard disk drive, a magnetic disk drive, an optical disk drive, a solid state disk drive, a compact disc, a digital versatile disc, and/or another type of non-transitory computer-readable medium.
  • Input component 350 enables device 300 to receive input, such as user input and/or sensed inputs.
  • input component 350 may include a touch screen, a keyboard, a keypad, a mouse, a button, a microphone, a switch, a sensor, a global positioning system component, an accelerometer, a gyroscope, and/or an actuator.
  • Output component 360 enables device 300 to provide output, such as via a display, a speaker, and/or one or more light-emitting diodes.
  • Communication component 370 enables device 300 to communicate with other devices, such as via a wired connection and/or a wireless connection.
  • communication component 370 may include a receiver, a transmitter, a transceiver, a modem, a network interface card, and/or an antenna.
  • Device 300 may perform one or more processes described herein.
  • A non-transitory computer-readable medium (e.g., memory 330 and/or storage component 340 ) may store a set of instructions (e.g., one or more instructions, code, software code, and/or program code).
  • Processor 320 may execute the set of instructions to perform one or more processes described herein.
  • execution of the set of instructions by one or more processors 320 causes the one or more processors 320 and/or the device 300 to perform one or more processes described herein.
  • hardwired circuitry may be used instead of or in combination with the instructions to perform one or more processes described herein.
  • implementations described herein are not limited to any specific combination of hardware circuitry and software.
  • Device 300 may include additional components, fewer components, different components, or differently arranged components than those shown in FIG. 3 . Additionally, or alternatively, a set of components (e.g., one or more components) of device 300 may perform one or more functions described as being performed by another set of components of device 300 .
  • FIG. 4 is a flowchart of an example process 400 associated with systems and methods for automated remote network performance monitoring.
  • one or more process blocks of FIG. 4 may be performed by a device (e.g., device of probe system 135 , device of probe system 201 ).
  • one or more process blocks of FIG. 4 may be performed by another device or a group of devices separate from or including the device, such as MEC 120 , network device 110 , packet broker 125 , and/or gNB 105 .
  • one or more process blocks of FIG. 4 may be performed by one or more components of device 300 , such as processor 320 , memory 330 , storage component 340 , input component 350 , output component 360 , and/or communication component 370 .
  • a network may include a plurality of virtual packet brokers.
  • the virtual packet brokers may be provided for a plurality of cloud radio access networks and a plurality of traffic aggregation points.
  • process 400 may include determining one or more parameters for filtering network traffic of the network (block 410 ).
  • the device may determine one or more parameters for filtering network traffic of a network that includes a plurality of virtual packet brokers provided for a plurality of cloud radio access networks and a plurality of traffic aggregation points, as described above.
  • each of the plurality of virtual packet brokers is deployed on a commercial off-the-shelf device.
  • process 400 may include providing the one or more parameters to the plurality of virtual packet brokers, to cause the plurality of virtual packet brokers to filter the network traffic to obtain network visibility traffic (block 420 ).
  • the device may provide the one or more parameters to the plurality of virtual packet brokers, to cause the plurality of virtual packet brokers to filter the network traffic to obtain network visibility traffic, as described above.
  • process 400 may include receiving, from one or more probes of a session aggregation point of the network, one or more metrics calculated based on the network visibility traffic by the plurality of virtual packet brokers (block 430 ).
  • the device may receive, from one or more probes of a session aggregation point of the network, one or more metrics calculated based on the network visibility traffic by the plurality of virtual packet brokers, as described above.
  • the one or more metrics include one or more of a metric associated with an availability of one of the plurality of cloud radio access networks or one of the plurality of traffic aggregation points, a metric associated with a latency of one of the plurality of cloud radio access networks or one of the plurality of traffic aggregation points, a metric associated with utilization of one of the plurality of cloud radio access networks or one of the plurality of traffic aggregation points, or a metric associated with jitter of one of the plurality of cloud radio access networks or one of the plurality of traffic aggregation points.
  • process 400 includes receiving data identifying bandwidths of the plurality of virtual packet brokers and the one or more probes, determining a schedule for performance sampling based on the data identifying the bandwidths, causing the plurality of virtual packet brokers to implement the schedule for the performance sampling, and receiving, from the one or more probes, network performance data generated based on the plurality of virtual packet brokers implementing the schedule for the performance sampling.
  • process 400 may include determining one or more actions to be implemented based on the one or more metrics (block 440 ).
  • the device may determine one or more actions to be implemented based on the one or more metrics, as described above.
  • process 400 includes receiving an instruction to resume obtaining the network visibility traffic, providing, to the plurality of virtual packet brokers and based on the instruction, the one or more parameters for filtering the network traffic to the network visibility traffic, receiving, from the one or more probes, one or more additional metrics calculated based on the network visibility traffic by the plurality of virtual packet brokers, determining one or more additional actions to be implemented based on the one or more additional metrics, and causing the one or more additional actions to be implemented in the network.
  • process 400 may include causing the one or more actions to be implemented in the network (block 450 ).
  • the device may cause the one or more actions to be implemented in the network, as described above.
  • process 400 includes determining one or more additional actions to be implemented based on the network performance data, and causing the one or more additional actions to be implemented in the network.
  • causing the one or more actions to be implemented comprises one or more of dispatching a technician to service a network device associated with one of the plurality of cloud radio access networks or one of the plurality of traffic aggregation points, dispatching an autonomous vehicle to service a network device associated with one of the plurality of cloud radio access networks or one of the plurality of traffic aggregation points, or ordering a replacement network device to replace a network device associated with one of the plurality of cloud radio access networks or one of the plurality of traffic aggregation points.
  • process 400 may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted in FIG. 4 . Additionally, or alternatively, two or more of the blocks of process 400 may be performed in parallel.
  • a centralized cloud platform may receive vPB bandwidth and probe bandwidth, and determine when to turn the vPBs on and off to focus on specific traffic. Deploying the vPBs and probes as described above yields significant cost savings and conserves processing and signaling resources.
  • the term “component” is intended to be broadly construed as hardware, firmware, or a combination of hardware and software. It will be apparent that systems and/or methods described herein may be implemented in different forms of hardware, firmware, and/or a combination of hardware and software. The actual specialized control hardware or software code used to implement these systems and/or methods is not limiting of the implementations. Thus, the operation and behavior of the systems and/or methods are described herein without reference to specific software code—it being understood that software and hardware can be used to implement the systems and/or methods based on the description herein.
  • satisfying a threshold may, depending on the context, refer to a value being greater than the threshold, greater than or equal to the threshold, less than the threshold, less than or equal to the threshold, equal to the threshold, not equal to the threshold, or the like.
  • “at least one of: a, b, or c” is intended to cover a, b, c, a-b, a-c, b-c, and a-b-c, as well as any combination with multiple of the same item.
  • the terms “has,” “have,” “having,” or the like are intended to be open-ended terms. Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise. Also, as used herein, the term “or” is intended to be inclusive when used in a series and may be used interchangeably with “and/or,” unless explicitly stated otherwise (e.g., if used in combination with “either” or “only one of”).

Abstract

In some implementations, a device may determine one or more parameters for filtering network traffic of a network that includes a plurality of virtual packet brokers provided for a plurality of cloud random access networks and a plurality of traffic aggregation points. The device may provide the one or more parameters to the plurality of virtual packet brokers, to cause the plurality of virtual packet brokers to filter the network traffic to obtain network visibility traffic. The device may receive, from one or more probes of a session aggregation point of the network, one or more metrics calculated based on the network visibility traffic by the plurality of virtual packet brokers. The device may determine one or more actions to be implemented based on the one or more metrics. The device may cause the one or more actions to be implemented in the network.

Description

BACKGROUND
Multi-access edge computing (MEC) devices are provided for monitoring cloud RANs (CRANs) and traffic aggregation points (TAPs) in networks. A CRAN may be a cloud-native software solution for handling RAN functionality. CRANs provide greater flexibility and versatility for both large-scale and centralized 5G network deployments.
BRIEF DESCRIPTION OF THE DRAWINGS
FIGS. 1A-1F are diagrams of an example associated with automated remote network performance monitoring.
FIG. 2 is a diagram of an example environment in which systems and/or methods described herein may be implemented.
FIG. 3 is a diagram of example components of one or more devices of FIG. 2 .
FIG. 4 is a flowchart of an example process relating to automated remote network performance monitoring.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
The following detailed description of example implementations refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements.
In some instances, a user device may request access, via a network node, to a service provided via multi-access edge computing (MEC). The network node may be associated with a wireless network that provides access to MEC. The network node may facilitate a connection between the user device and a MEC node based on a geographical location of the user device and/or the MEC node. For example, the network node may receive the request including a domain name and convert the domain name into a network address (e.g., an internet protocol (IP) address) of the MEC node that is geographically closest to the user device.
MEC devices are provided for cloud radio access networks (CRANs) and traffic aggregation points (TAPs) in a network. The TAPs may involve clusters of base stations for aggregating traffic. A TAP may aggregate multiple CRANs. A probe system may use probes to monitor performance indicators and customer performance issues at CRANs and TAPs. The probes may perform key performance indicator (KPI) calculations and store related packets. However, the quantity of CRANs and TAPs in a network may be large, and the cost of installing probes at all of the CRANs and TAPs may be too expensive. A session aggregation point (SAP) of the network may include a stack of probes, and may not be able to monitor traffic at all of the CRANs or TAPs. Without the functionality of such probes monitoring the CRANs and the TAPs, the network may suffer performance issues that contribute to wasted time, power, processing resources, and signaling resources.
In some implementations, a probe system may deploy virtual packet brokers (vPBs) on commercial off-the-shelf (COTS) hardware in CRANs and TAPs to enable network traffic filtering. Traffic may be filtered based on virtual internet protocol (VIP) addresses, subnets, or tuples. The vPBs may be lightweight and inexpensive. The vPBs may have no storage, may be largely passive, and may not calculate key performance indicators (KPIs). The vPBs may be COTS vPBs. The vPBs may be software executing on COTS hardware or on a COTS packet broker. The vPBs may also be virtual PBs executing on a server. A TAP may be part of a visibility system, where traffic that is observed and copied remotely can be passed as visibility traffic. The vPBs in the CRANs and TAPs may pass visibility traffic that is of interest, on demand or according to an automated schedule, to a SAP where probes are aggregated. The vPBs may enable full-time simple network management protocol (SNMP) polling of some or all visibility traffic, including tenant traffic separation to determine bandwidth utilization in a multi-tenant environment. The vPBs may enable CRAN and TAP MEC troubleshooting and packet capturing on demand. By deploying vPBs, network operators may eliminate the need to install probes at all CRAN and TAP locations. As a result, the probe system may enable the network to monitor and improve performance, which may cause the network to conserve processing resources and signaling resources. Network providers may also reduce costs.
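The subnet and tuple filtering described above can be sketched as a simple match function. This is an illustrative sketch only: the packet representation (a dictionary of addresses, ports, and protocol) and the function name are assumptions, as the description does not specify how a vPB represents copied traffic internally.

```python
import ipaddress

def matches_filter(packet, subnets=(), five_tuples=()):
    """Return True if a packet matches any configured subnet or 5-tuple.

    `packet` is a hypothetical dict carrying src/dst IPs, ports, and
    protocol; the vPB's actual internal representation is not specified.
    """
    src = ipaddress.ip_address(packet["src_ip"])
    dst = ipaddress.ip_address(packet["dst_ip"])
    # Subnet filtering: pass traffic whose source or destination falls
    # inside any subnet of interest (e.g., a customer's VIP range).
    for net in subnets:
        network = ipaddress.ip_network(net)
        if src in network or dst in network:
            return True
    # 5-tuple filtering: exact match on (src, dst, sport, dport, proto).
    tup = (packet["src_ip"], packet["dst_ip"],
           packet["src_port"], packet["dst_port"], packet["proto"])
    return tup in set(five_tuples)
```

A vPB configured this way would forward only matching packets toward the SAP, discarding the rest of the copied traffic.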
FIGS. 1A-1F are diagrams of an example 100 associated with automated remote network performance monitoring. As shown in FIGS. 1A-1F, example 100 includes clusters of base stations (e.g., gNBs), TAPs, and CRANs. Each CRAN may include a network device 110, an MEC device 120, and a vPB 115. Multiple CRANs may be aggregated at a TAP, and TAPs may be aggregated at an SAP. The vPBs 115 may be deployed on COTS devices. Each TAP may also include an MEC device. Example 100 also shows an SAP with a packet broker 125 and a stack of probes 130. The probe system 135 may obtain remote traffic from a large quantity of inexpensive vPBs 115. For example, an optical tap may split some light on a fiber cable and provide a copy of traffic to a vPB 115. The probe system 135 may provide traffic information to the stack of probes 130, which can perform calculations. For example, the vPB 115 may forward a copy of all of the traffic to the probes 130, forward only MEC traffic, or filter traffic for a subnet or customer of interest. The more expensive probes 130 may calculate KPIs, but the probes 130 may have limited capacity. However, the less expensive vPBs 115 may be left on full time or switched to an on-demand model, based on the capacity of the probes 130. For example, if the probes 130 have capacity, the vPBs may be on all the time. If the probes 130 have more limited capacity, the vPBs may operate on demand and/or filter for traffic of interest.
Example 100 further shows a probe system 135, which may include one or more devices that may control the vPBs.
FIG. 1A shows a probe system 135 that determines one or more parameters for filtering network traffic. The probe system 135 may operate with a network that includes a plurality of vPBs provided for a plurality of CRANs and a plurality of TAPs. The parameters may filter network traffic based on VIP addresses, virtual local area networks (VLANs), subnets, tuples (e.g., 5-tuples), a subset of traffic of interest, and/or SNMP data of the network traffic. As shown in FIG. 1A, and by reference number 140, probe system 135 may provide the one or more parameters to the vPBs 115, to cause the vPBs 115 to filter the network traffic to obtain network visibility traffic.
The probe system 135 may receive data identifying the vPBs 115, generate a user interface (UI) based on the data, and provide the user interface for display. The vPBs 115 may be accessible via a single graphic UI. An egressing visibility port may timeout and be disabled after 24 hours to eliminate over-subscription of unified transport (UT) links or probe capacity.
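The 24-hour egress port timeout above can be sketched as a small tracking structure. The class name, port identifiers, and epoch-second clock are illustrative assumptions; the description states only that an egressing visibility port may time out and be disabled after 24 hours.

```python
class EgressPortTimer:
    """Track egressing visibility ports and expire them after a TTL,
    so stale ports do not over-subscribe unified transport (UT) links
    or probe capacity."""

    TTL_SECONDS = 24 * 3600  # the 24-hour timeout described above

    def __init__(self):
        self._opened = {}  # port id -> open timestamp (epoch seconds)

    def open_port(self, port, now):
        """Record that a visibility port began egressing at time `now`."""
        self._opened[port] = now

    def expired(self, now):
        """Return ports that have been open for 24 hours or more and
        should therefore be disabled."""
        return [p for p, t in self._opened.items()
                if now - t >= self.TTL_SECONDS]
```

A controller would poll `expired()` periodically and disable any ports it returns.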
As shown by reference number 145, probe system 135 may receive, from one or more probes of an SAP of the network, one or more metrics calculated based on the network visibility traffic by the vPBs 115. The metrics may be associated with an availability, latency, utilization, and/or jitter of a CRAN or a TAP.
As shown by reference number 150, the probe system 135 may determine one or more actions to be implemented based on the metrics. As shown by reference number 155, the probe system 135 may cause the actions to be implemented in the network. In some implementations, the probe system 135 may dispatch a technician to service a network device associated with one of the CRANs or one of the TAPs, dispatch an autonomous vehicle to service a network device associated with a CRAN or a TAP, or order a replacement network device to replace a network device associated with a CRAN or TAP. The probe system 135 may also determine and cause other actions to be implemented in the network.
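The mapping from metrics to remediation actions can be sketched as a threshold check. The specific threshold values and the metric-to-action mapping below are assumptions for illustration; the description names dispatching a technician, dispatching an autonomous vehicle, and ordering a replacement device as possible actions without tying them to particular metrics.

```python
def select_actions(metrics, thresholds=None):
    """Map metric threshold violations to remediation actions.

    `metrics` holds values such as availability (fraction) and latency
    (milliseconds) for a CRAN or TAP. Thresholds are illustrative.
    """
    thresholds = thresholds or {"availability": 0.99, "latency_ms": 50.0}
    actions = []
    if metrics.get("availability", 1.0) < thresholds["availability"]:
        # A device that is frequently unavailable may need replacing.
        actions.append("order_replacement_device")
    if metrics.get("latency_ms", 0.0) > thresholds["latency_ms"]:
        # Degraded latency may warrant an on-site service visit.
        actions.append("dispatch_technician")
    return actions
```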
In some implementations, the probe system 135 may configure probes 130 for automated remote performance sampling. The probe system 135 may cycle through (e.g., daily, hourly) performance sampling of CRANs and/or TAPs, based on a capacity of the probes 130. As shown by FIG. 1C, and by reference number 160, the probe system 135 may receive data identifying bandwidths (e.g., available backhaul bandwidths) of the vPBs 115 and the probes 130. As shown by reference number 165, the probe system 135 may determine a schedule for performance sampling based on the data identifying the bandwidths.
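One way to turn the bandwidth data into a schedule is to decide which sites can be sampled concurrently without exceeding total probe capacity. The greedy admission below is a simplification offered as a sketch; the description says only that the schedule is based on the vPB and probe bandwidths, not how it is computed.

```python
def concurrent_sites(vpb_bandwidths_gbps, probe_capacity_gbps):
    """Greedily admit vPB visibility feeds within total probe capacity.

    Sorts sites by required visibility bandwidth (smallest first) and
    admits them until adding another site would exceed probe capacity.
    Sites left out would be deferred to a later sampling window.
    """
    admitted, used = [], 0.0
    for site, bw in sorted(vpb_bandwidths_gbps.items(), key=lambda kv: kv[1]):
        if used + bw <= probe_capacity_gbps:
            admitted.append(site)
            used += bw
    return admitted
```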
Probe capacity may be reserved for on-demand probing, including for multi-tenant environments. Packet broker 125 may cause vPBs 115 to utilize optical taps in CRANs, TAPs, and/or an SAP between backhaul MEC entities and MEC tenants to determine bandwidth utilization. The vPBs may receive copies of traffic (e.g., packets) from the optical tap (which creates copies), determine how to filter the traffic, and then send the filtered traffic to a probe 130 for KPI calculation. This may include full-time SNMP polling of all visibility traffic. The probe system 135 may pass visibility traffic of interest on demand by opening up or throttling an egressing port to a unified transport link. That is, the probe system 135 may turn the vPBs on and off on demand, based on need.
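The on-demand versus full-time operation described above (and in the discussion of FIGS. 1A-1F) can be sketched as a mode selector driven by probe utilization. The mode names and utilization thresholds are illustrative assumptions; the description states only that vPBs may be left on full time, filtered to traffic of interest, or switched on demand based on probe capacity.

```python
def vpb_mode(probe_utilization):
    """Choose a vPB forwarding mode from current probe utilization
    (a fraction between 0.0 and 1.0). Thresholds are illustrative."""
    if probe_utilization < 0.5:
        return "always_on"   # probes can absorb all visibility traffic
    if probe_utilization < 0.8:
        return "filtered"    # forward only the subset of interest
    return "on_demand"       # open the egress port only when requested
```

A centralized controller would re-evaluate the mode as probe load changes, throttling or opening egress ports accordingly.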
As shown by FIG. 1D, and by reference number 170, the probe system 135 may cause the vPBs 115 to implement the schedule for the performance sampling. For example, some probes 130 may be scheduled for CRAN 1 for 24-hour periods on Monday, Thursday, and Sunday. Some probes 130 may be scheduled for CRAN 2 for 24-hour periods on Tuesday and Friday. Some probes 130 may be scheduled for CRAN 3 for 24-hour periods on Wednesday and Saturday.
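The weekly pattern in the example above is exactly what a round-robin assignment of sites to days produces. The sketch below, with a hypothetical function name, reproduces it: with three CRANs rotated over a week, CRAN 1 lands on Monday, Thursday, and Sunday, CRAN 2 on Tuesday and Friday, and CRAN 3 on Wednesday and Saturday.

```python
def round_robin_schedule(sites, days):
    """Assign one site per day in rotation, cycling through the sites.

    Returns a mapping of each site to the list of days on which its
    vPBs egress visibility traffic toward the probes.
    """
    schedule = {site: [] for site in sites}
    for i, day in enumerate(days):
        schedule[sites[i % len(sites)]].append(day)
    return schedule
```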
As shown by reference number 175, the probe system 135 may receive, from the probes 130, network performance data generated based on the vPBs 115 implementing the schedule for the performance sampling. Network performance data may be received only during scheduled time frames.
In some implementations, the probe system 135 may use the vPBs 115 to obtain network visibility traffic. As shown in FIG. 1E, and by reference number 180, the probe system 135 may receive an instruction to resume obtaining the network visibility traffic. As shown by reference number 185, the probe system 135 may provide, to the vPBs 115 and based on the instruction, the parameters for filtering the network traffic to the network visibility traffic. As shown by FIG. 1F, and by reference number 190, the probe system 135 may receive, from the one or more probes, one or more additional metrics calculated based on the network visibility traffic by the vPBs 115. The probe system may determine one or more additional actions to be implemented based on the one or more additional metrics and cause the one or more additional actions to be implemented in the network.
In some implementations, the probe system 135 may provide parameters that cause a first set of the vPBs 115 to not capture the network traffic, and cause a second set of the vPBs 115 to capture the network traffic, filter the network traffic to the network visibility traffic, and provide the network visibility traffic to the probes 130.
FIG. 2 is a diagram of an example environment 200 in which systems and/or methods described herein may be implemented. As shown in FIG. 2 , environment 200 may include a probe system 201, which may include one or more elements of and/or may execute within a cloud computing system 202. The cloud computing system 202 may include one or more elements 203-213, as described in more detail below. As further shown in FIG. 2 , environment 200 may include a probe system 201, a cloud computing system 202, a network 220, one or more gNBs 105, one or more network devices 110, one or more vPBs 115, one or more MECs 120, multiple probes 130, and a packet broker 125. Devices and/or elements of environment 200 may interconnect via wired connections and/or wireless connections.
The cloud computing system 202 includes computing hardware 203, a resource management component 204, a host operating system (OS) 205, and/or one or more virtual computing systems 206. The resource management component 204 may perform virtualization (e.g., abstraction) of computing hardware 203 to create the one or more virtual computing systems 206. Using virtualization, the resource management component 204 enables a single computing device (e.g., a computer, a server, and/or the like) to operate like multiple computing devices, such as by creating multiple isolated virtual computing systems 206 from computing hardware 203 of the single computing device. In this way, computing hardware 203 can operate more efficiently, with lower power consumption, higher reliability, higher availability, higher utilization, greater flexibility, and lower cost than using separate computing devices.
Computing hardware 203 includes hardware and corresponding resources from one or more computing devices. For example, computing hardware 203 may include hardware from a single computing device (e.g., a single server) or from multiple computing devices (e.g., multiple servers), such as multiple computing devices in one or more data centers. As shown, computing hardware 203 may include one or more processors 207, one or more memories 208, one or more storage components 209, and/or one or more networking components 210. Examples of a processor, a memory, a storage component, and a networking component (e.g., a communication component) are described elsewhere herein.
The resource management component 204 includes a virtualization application (e.g., executing on hardware, such as computing hardware 203) capable of virtualizing computing hardware 203 to start, stop, and/or manage one or more virtual computing systems 206. For example, the resource management component 204 may include a hypervisor (e.g., a bare-metal or Type 1 hypervisor, a hosted or Type 2 hypervisor, and/or the like) or a virtual machine monitor, such as when the virtual computing systems 206 are virtual machines 211. Additionally, or alternatively, the resource management component 204 may include a container manager, such as when the virtual computing systems 206 are containers 212. In some implementations, the resource management component 204 executes within and/or in coordination with a host operating system 205.
A virtual computing system 206 includes a virtual environment that enables cloud-based execution of operations and/or processes described herein using computing hardware 203. As shown, a virtual computing system 206 may include a virtual machine 211, a container 212, a hybrid environment 213 that includes a virtual machine and a container, and/or the like. A virtual computing system 206 may execute one or more applications using a file system that includes binary files, software libraries, and/or other resources required to execute applications on a guest operating system (e.g., within the virtual computing system 206) or the host operating system 205.
Although the probe system 201 may include one or more elements 203-213 of the cloud computing system 202, may execute within the cloud computing system 202, and/or may be hosted within the cloud computing system 202, in some implementations, the probe system 201 may not be cloud-based (e.g., may be implemented outside of a cloud computing system) or may be partially cloud-based. For example, the probe system 201 may include one or more devices that are not part of the cloud computing system 202, such as device 300 of FIG. 3 , which may include a standalone server or another type of computing device. The probe system 201 may perform one or more operations and/or processes described in more detail elsewhere herein, such as for probe system 135.
Network 220 includes one or more wired and/or wireless networks. For example, network 220 may include a cellular network, a public land mobile network (PLMN), a local area network (LAN), a wide area network (WAN), a private network, the Internet, and/or the like, and/or a combination of these or other types of networks. The network 220 enables communication among the devices of environment 200. The network device 110 may be a network entity in a CRAN that handles functionality for the CRAN, including for a vPB and/or an MEC 120. The MEC 120 may facilitate moving computing of traffic and services from cloud computing system 202 closer to an edge of the network 220. The packet broker 125 may help to provide network packet data to the probe system 135 for analysis.
The number and arrangement of devices and networks shown in FIG. 2 are provided as an example. In practice, there may be additional devices and/or networks, fewer devices and/or networks, different devices and/or networks, or differently arranged devices and/or networks than those shown in FIG. 2 . Furthermore, two or more devices shown in FIG. 2 may be implemented within a single device, or a single device shown in FIG. 2 may be implemented as multiple, distributed devices. Additionally, or alternatively, a set of devices (e.g., one or more devices) of environment 200 may perform one or more functions described as being performed by another set of devices of environment 200.
FIG. 3 is a diagram of example components of a device 300, which may correspond to probe system 201. In some implementations, probe system 201 may include one or more devices 300 and/or one or more components of device 300. As shown in FIG. 3 , device 300 may include a bus 310, a processor 320, a memory 330, a storage component 340, an input component 350, an output component 360, and a communication component 370.
Bus 310 includes a component that enables wired and/or wireless communication among the components of device 300. Processor 320 includes a central processing unit, a graphics processing unit, a microprocessor, a controller, a microcontroller, a digital signal processor, a field-programmable gate array, an application-specific integrated circuit, and/or another type of processing component. Processor 320 is implemented in hardware, firmware, or a combination of hardware and software. In some implementations, processor 320 includes one or more processors capable of being programmed to perform a function. Memory 330 includes a random access memory, a read only memory, and/or another type of memory (e.g., a flash memory, a magnetic memory, and/or an optical memory).
Storage component 340 stores information and/or software related to the operation of device 300. For example, storage component 340 may include a hard disk drive, a magnetic disk drive, an optical disk drive, a solid state disk drive, a compact disc, a digital versatile disc, and/or another type of non-transitory computer-readable medium. Input component 350 enables device 300 to receive input, such as user input and/or sensed inputs. For example, input component 350 may include a touch screen, a keyboard, a keypad, a mouse, a button, a microphone, a switch, a sensor, a global positioning system component, an accelerometer, a gyroscope, and/or an actuator. Output component 360 enables device 300 to provide output, such as via a display, a speaker, and/or one or more light-emitting diodes. Communication component 370 enables device 300 to communicate with other devices, such as via a wired connection and/or a wireless connection. For example, communication component 370 may include a receiver, a transmitter, a transceiver, a modem, a network interface card, and/or an antenna.
Device 300 may perform one or more processes described herein. For example, a non-transitory computer-readable medium (e.g., memory 330 and/or storage component 340) may store a set of instructions (e.g., one or more instructions, code, software code, and/or program code) for execution by processor 320. Processor 320 may execute the set of instructions to perform one or more processes described herein. In some implementations, execution of the set of instructions, by one or more processors 320, causes the one or more processors 320 and/or the device 300 to perform one or more processes described herein. In some implementations, hardwired circuitry may be used instead of or in combination with the instructions to perform one or more processes described herein. Thus, implementations described herein are not limited to any specific combination of hardware circuitry and software.
The number and arrangement of components shown in FIG. 3 are provided as an example. Device 300 may include additional components, fewer components, different components, or differently arranged components than those shown in FIG. 3 . Additionally, or alternatively, a set of components (e.g., one or more components) of device 300 may perform one or more functions described as being performed by another set of components of device 300.
FIG. 4 is a flowchart of an example process 400 associated with systems and methods for automated remote network performance monitoring. In some implementations, one or more process blocks of FIG. 4 may be performed by a device (e.g., device of probe system 135, device of probe system 201). In some implementations, one or more process blocks of FIG. 4 may be performed by another device or a group of devices separate from or including the device, such as MEC 120, network device 110, packet broker 125, and/or gNB 105. Additionally, or alternatively, one or more process blocks of FIG. 4 may be performed by one or more components of device 300, such as processor 320, memory 330, storage component 340, input component 350, output component 360, and/or communication component 370.
A network may include a plurality of virtual packet brokers. The virtual packet brokers may be provided for a plurality of cloud random access networks and a plurality of traffic aggregation points. As shown in FIG. 4 , process 400 may include determining one or more parameters for filtering network traffic of the network (block 410). For example, the device may determine one or more parameters for filtering network traffic, of a network that includes a plurality of virtual packet brokers provided for a plurality of cloud random access networks and a plurality of traffic aggregation points, as described above. In some implementations, each of the plurality of virtual packet brokers is deployed on a commercial off-the-shelf device.
As further shown in FIG. 4 , process 400 may include providing the one or more parameters to the plurality of virtual packet brokers, to cause the plurality of virtual packet brokers to filter the network traffic to obtain network visibility traffic (block 420). For example, the device may provide the one or more parameters to the plurality of virtual packet brokers, to cause the plurality of virtual packet brokers to filter the network traffic to obtain network visibility traffic, as described above.
As further shown in FIG. 4 , process 400 may include receiving, from one or more probes of a session aggregation point of the network, one or more metrics calculated based on the network visibility traffic by the plurality of virtual packet brokers (block 430). For example, the device may receive, from one or more probes of a session aggregation point of the network, one or more metrics calculated based on the network visibility traffic by the plurality of virtual packet brokers, as described above. In some implementations, the one or more metrics include one or more of a metric associated with an availability of one of the plurality of cloud random access networks or one of the plurality of traffic aggregation points, a metric associated with a latency of one of the plurality of cloud random access networks or one of the plurality of traffic aggregation points, a metric associated with utilization of one of the plurality of cloud random access networks or one of the plurality of traffic aggregation points, or a metric associated with jitter of one of the plurality of cloud random access networks or one of the plurality of traffic aggregation points.
In some implementations, process 400 includes receiving data identifying bandwidths of the plurality of virtual packet brokers and the one or more probes, determining a schedule for performance sampling based on the data identifying the bandwidths, causing the plurality of virtual packet brokers to implement the schedule for the performance sampling, and receiving, from the one or more probes, network performance data generated based on the plurality of virtual packet brokers implementing the schedule for the performance sampling.
As further shown in FIG. 4 , process 400 may include determining one or more actions to be implemented based on the one or more metrics (block 440). For example, the device may determine one or more actions to be implemented based on the one or more metrics, as described above.
In some implementations, process 400 includes receiving an instruction to resume obtaining the network visibility traffic, providing, to the plurality of virtual packet brokers and based on the instruction, the one or more parameters for filtering the network traffic to the network visibility traffic, receiving, from the one or more probes, one or more additional metrics calculated based on the network visibility traffic by the plurality of virtual packet brokers, determining one or more additional actions to be implemented based on the one or more additional metrics, and causing the one or more additional actions to be implemented in the network.
As further shown in FIG. 4 , process 400 may include causing the one or more actions to be implemented in the network (block 450). For example, the device may cause the one or more actions to be implemented in the network, as described above. In some implementations, process 400 includes determining one or more additional actions to be implemented based on the network performance data, and causing the one or more additional actions to be implemented in the network.
In some implementations, causing the one or more actions to be implemented comprises one or more of dispatching a technician to service a network device associated with one of the plurality of cloud random access networks or one of the plurality of traffic aggregation points, dispatching an autonomous vehicle to service a network device associated with one of the plurality of cloud random access networks or one of the plurality of traffic aggregation points, or ordering a replacement network device to replace a network device associated with one of the plurality of cloud random access networks or one of the plurality of traffic aggregation points.
Although FIG. 4 shows example blocks of process 400, in some implementations, process 400 may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted in FIG. 4 . Additionally, or alternatively, two or more of the blocks of process 400 may be performed in parallel.
By using probe system 135, a centralized cloud platform may receive vPB bandwidth and probe bandwidth data, and determine when to turn the vPBs on and off to focus on specific traffic. Deploying the vPBs and probes as described above yields significant cost savings and conserves processing and signaling resources.
As used herein, the term “component” is intended to be broadly construed as hardware, firmware, or a combination of hardware and software. It will be apparent that systems and/or methods described herein may be implemented in different forms of hardware, firmware, and/or a combination of hardware and software. The actual specialized control hardware or software code used to implement these systems and/or methods is not limiting of the implementations. Thus, the operation and behavior of the systems and/or methods are described herein without reference to specific software code—it being understood that software and hardware can be used to implement the systems and/or methods based on the description herein.
As used herein, satisfying a threshold may, depending on the context, refer to a value being greater than the threshold, greater than or equal to the threshold, less than the threshold, less than or equal to the threshold, equal to the threshold, not equal to the threshold, or the like.
To the extent the aforementioned implementations collect, store, or employ personal information of individuals, it should be understood that such information shall be used in accordance with all applicable laws concerning protection of personal information. Additionally, the collection, storage, and use of such information can be subject to consent of the individual to such activity, for example, through well known “opt-in” or “opt-out” processes as can be appropriate for the situation and type of information. Storage and use of personal information can be in an appropriately secure manner reflective of the type of information, for example, through various encryption and anonymization techniques for particularly sensitive information.
Even though particular combinations of features are recited in the claims and/or disclosed in the specification, these combinations are not intended to limit the disclosure of various implementations. In fact, many of these features may be combined in ways not specifically recited in the claims and/or disclosed in the specification. Although each dependent claim listed below may directly depend on only one claim, the disclosure of various implementations includes each dependent claim in combination with every other claim in the claim set. As used herein, a phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. As an example, “at least one of: a, b, or c” is intended to cover a, b, c, a-b, a-c, b-c, and a-b-c, as well as any combination with multiple of the same item.
No element, act, or instruction used herein should be construed as critical or essential unless explicitly described as such. Also, as used herein, the articles “a” and “an” are intended to include one or more items, and may be used interchangeably with “one or more.” Further, as used herein, the article “the” is intended to include one or more items referenced in connection with the article “the” and may be used interchangeably with “the one or more.” Furthermore, as used herein, the term “set” is intended to include one or more items (e.g., related items, unrelated items, or a combination of related and unrelated items), and may be used interchangeably with “one or more.” Where only one item is intended, the phrase “only one” or similar language is used. Also, as used herein, the terms “has,” “have,” “having,” or the like are intended to be open-ended terms. Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise. Also, as used herein, the term “or” is intended to be inclusive when used in a series and may be used interchangeably with “and/or,” unless explicitly stated otherwise (e.g., if used in combination with “either” or “only one of”).
In the preceding specification, various example embodiments have been described with reference to the accompanying drawings. It will, however, be evident that various modifications and changes may be made thereto, and additional embodiments may be implemented, without departing from the broader scope of the invention as set forth in the claims that follow. The specification and drawings are accordingly to be regarded in an illustrative rather than restrictive sense.

Claims (20)

What is claimed is:
1. A method, comprising:
determining, by a device, one or more parameters for filtering network traffic of a network that includes a plurality of virtual packet brokers provided for a plurality of cloud random access networks and a plurality of traffic aggregation points,
wherein one of the plurality of cloud random access networks is a cloud-native solution for handling random access network functionality;
providing, by the device, the one or more parameters to the plurality of virtual packet brokers, to cause the plurality of virtual packet brokers to filter the network traffic to obtain network visibility traffic;
receiving, by the device and from one or more probes of a session aggregation point of the network, one or more metrics calculated based on the network visibility traffic by the plurality of virtual packet brokers,
wherein the one or more metrics include availability, latency, or utilization of the one of the plurality of cloud random access networks or one of the plurality of traffic aggregation points;
determining, by the device, one or more actions to be implemented based on the one or more metrics,
wherein the one or more actions to be implemented are associated with the one of the plurality of cloud random access networks or the one of the plurality of traffic aggregation points;
causing, by the device, the one or more actions to be implemented in the network;
receiving, by the device, data identifying bandwidths of the plurality of virtual packet brokers and the one or more probes,
wherein the plurality of virtual packet brokers utilize optical taps in the one of the plurality of cloud random access networks, the one of the plurality of traffic aggregation points, or the session aggregation point of the network to determine bandwidth utilization;
determining, by the device, a schedule for performance sampling based on the data identifying the bandwidths;
causing, by the device, the plurality of virtual packet brokers to implement the schedule for the performance sampling; and
receiving, by the device, from the one or more probes, network performance data generated based on the plurality of virtual packet brokers implementing the schedule for the performance sampling.
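Taken together, the steps of claim 1 describe a closed monitoring loop: push filter parameters to the virtual packet brokers, read probe metrics, decide on actions, and derive a sampling schedule from broker bandwidths. The following is a purely illustrative sketch of that flow; every class, function, and field name below is a hypothetical assumption for exposition, not language from the patent or any real implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Broker:
    """Hypothetical stand-in for a virtual packet broker."""
    name: str
    bandwidth_gbps: float
    filters: list = field(default_factory=list)
    schedule: dict = field(default_factory=dict)

def monitoring_cycle(brokers, probe_metrics, threshold=0.9):
    """One pass of the claimed loop: provide filter parameters,
    evaluate probe-reported metrics, and schedule sampling."""
    # Provide the filtering parameters to every broker so they
    # reduce raw traffic to network visibility traffic.
    params = ["match: subnet 10.0.0.0/8"]
    for b in brokers:
        b.filters = list(params)

    # Inspect metrics (here, availability) reported by the probes
    # and derive actions for under-performing sites.
    actions = [f"investigate {site}"
               for site, m in probe_metrics.items()
               if m["availability"] < threshold]

    # Determine a performance-sampling schedule from broker
    # bandwidths: brokers with less headroom sample less often.
    for b in brokers:
        interval = 60 if b.bandwidth_gbps >= 10 else 300
        b.schedule = {"sample_interval_s": interval}
    return actions

brokers = [Broker("vpb-cran-1", 10.0), Broker("vpb-tap-1", 1.0)]
metrics = {"cran-1": {"availability": 0.95}, "tap-1": {"availability": 0.80}}
actions = monitoring_cycle(brokers, metrics)
```

Under these assumed inputs, the low-availability site yields one action, and each broker carries a sampling interval proportional to its bandwidth headroom.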
2. The method of claim 1, further comprising:
determining one or more additional actions to be implemented based on the network performance data; and
causing the one or more additional actions to be implemented in the network.
3. The method of claim 1, further comprising:
receiving an instruction to resume obtaining the network visibility traffic;
providing, to the plurality of virtual packet brokers and based on the instruction, the one or more parameters for filtering the network traffic;
receiving, from the one or more probes, one or more additional metrics calculated based on the network visibility traffic by the plurality of virtual packet brokers;
determining one or more additional actions to be implemented based on the one or more additional metrics; and
causing the one or more additional actions to be implemented in the network.
4. The method of claim 3, wherein causing the one or more additional actions to be implemented comprises one or more of:
dispatching a technician to service a network device associated with the one of the plurality of cloud random access networks or one of the plurality of traffic aggregation points;
dispatching an autonomous vehicle to service a network device associated with the one of the plurality of cloud random access networks or one of the plurality of traffic aggregation points; or
ordering a replacement network device to replace a network device associated with the one of the plurality of cloud random access networks or one of the plurality of traffic aggregation points.
5. The method of claim 3, wherein the one or more additional metrics also include
a metric associated with jitter of one of the plurality of traffic aggregation points.
6. The method of claim 1, wherein each of the plurality of virtual packet brokers is deployed on a commercial off-the-shelf device.
7. The method of claim 1, wherein the one or more parameters cause a first set of the plurality of virtual packet brokers to not capture the network traffic, and cause a second set of the plurality of virtual packet brokers to capture the network traffic, filter the network traffic to the network visibility traffic, and provide the network visibility traffic to the one or more probes.
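Claim 7 partitions the brokers into a non-capturing set and a set that captures, filters, and forwards visibility traffic to the probes. That split can be pictured as a role assignment distributed alongside the parameters; the names below are illustrative assumptions only:

```python
def assign_roles(brokers, capture_set):
    """Partition brokers per claim 7: members of capture_set capture,
    filter, and forward visibility traffic; the rest stay passive."""
    roles = {}
    for name in brokers:
        if name in capture_set:
            roles[name] = "capture+filter+forward"
        else:
            roles[name] = "no-capture"
    return roles

roles = assign_roles(["vpb-1", "vpb-2", "vpb-3"], {"vpb-2", "vpb-3"})
```

Only the brokers named in the capture set contribute network visibility traffic; the others impose no capture load on their sites.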
8. A device, comprising:
one or more processors configured to:
determine one or more parameters for filtering network traffic of a network that includes a plurality of virtual packet brokers provided for a plurality of cloud random access networks and a plurality of traffic aggregation points,
wherein one of the plurality of cloud random access networks is a cloud-native solution for handling random access network functionality;
provide the one or more parameters to the plurality of virtual packet brokers, to cause the plurality of virtual packet brokers to filter the network traffic to obtain network visibility traffic;
receive, from one or more probes of a session aggregation point of the network, two or more metrics calculated based on the network visibility traffic by the plurality of virtual packet brokers,
wherein the two or more metrics include two or more of:
a metric associated with an availability of one of the plurality of cloud random access networks or one of the plurality of traffic aggregation points,
a metric associated with a latency of one of the plurality of cloud random access networks or one of the plurality of traffic aggregation points, or
a metric associated with utilization of the one of the plurality of cloud random access networks or one of the plurality of traffic aggregation points;
determine one or more actions to be implemented based on the two or more metrics,
wherein the one or more actions to be implemented are associated with the one of the plurality of cloud random access networks or the one of the plurality of traffic aggregation points;
cause the one or more actions to be implemented in the network;
receive data identifying bandwidths of the plurality of virtual packet brokers and the one or more probes,
wherein the plurality of virtual packet brokers utilize optical taps in the one of the plurality of cloud random access networks, the one of the plurality of traffic aggregation points, or the session aggregation point of the network to determine bandwidth utilization;
determine a schedule for performance sampling based on the data identifying the bandwidths;
cause the plurality of virtual packet brokers to implement the schedule for the performance sampling; and
receive, from the one or more probes, network performance data generated based on the plurality of virtual packet brokers implementing the schedule for the performance sampling.
9. The device of claim 8, wherein each of the plurality of cloud random access networks and each of the plurality of traffic aggregation points includes a mobile edge computing device.
10. The device of claim 8, wherein the one or more parameters include one or more of:
a parameter for filtering the network traffic based on virtual Internet protocol addresses of the network traffic,
a parameter for filtering the network traffic based on subnets of the network traffic,
a parameter for filtering the network traffic based on tuples of the network traffic, or
a parameter for filtering the network traffic based on a simple network management protocol.
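The parameter types of claim 10 (virtual IP addresses, subnets, tuples, and SNMP-based selection) correspond to familiar packet-matching predicates. A hedged sketch of the first three using only the standard library follows; the packet and parameter field names are assumptions, not defined in the patent:

```python
import ipaddress

def matches(packet, params):
    """Return True if a packet satisfies any configured filter
    parameter; `packet` is a dict of addresses, ports, and protocol."""
    src = ipaddress.ip_address(packet["src"])
    # Virtual-IP parameter: exact match on the source address.
    if any(src == ipaddress.ip_address(v)
           for v in params.get("virtual_ips", [])):
        return True
    # Subnet parameter: source address within a configured prefix.
    for subnet in params.get("subnets", []):
        if src in ipaddress.ip_network(subnet):
            return True
    # Tuple parameter: match on the (src, dst, sport, dport, proto)
    # five-tuple.
    tup = (packet["src"], packet["dst"], packet["sport"],
           packet["dport"], packet["proto"])
    return tup in {tuple(t) for t in params.get("tuples", [])}

params = {"virtual_ips": ["192.0.2.10"],
          "subnets": ["10.1.0.0/16"],
          "tuples": []}
pkt = {"src": "10.1.2.3", "dst": "198.51.100.7",
       "sport": 5060, "dport": 2152, "proto": "udp"}
```

With these assumed values, `pkt` matches via the subnet predicate. An SNMP-driven parameter would instead select traffic based on management-protocol data and is omitted here.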
11. The device of claim 8, wherein the one or more processors are further configured to:
receive data identifying the plurality of virtual packet brokers and the one or more probes;
generate a user interface based on the data identifying the plurality of virtual packet brokers and the one or more probes; and
provide the user interface for display.
12. The device of claim 8, wherein the one or more parameters cause the plurality of virtual packet brokers to:
filter the network traffic to the network visibility traffic, and
provide the network visibility traffic to the one or more probes.
13. The device of claim 8, wherein the one or more parameters cause a first set of the plurality of virtual packet brokers to not capture the network traffic, and cause a second set of the plurality of virtual packet brokers to capture the network traffic, filter the network traffic to the network visibility traffic, and provide the network visibility traffic to the one or more probes.
14. The device of claim 8, wherein each of the plurality of cloud random access networks is associated with a corresponding cluster of gNodeBs of the network.
15. A non-transitory computer-readable medium storing a set of instructions, the set of instructions comprising:
one or more instructions that, when executed by one or more processors of a device, cause the device to:
determine one or more parameters for filtering network traffic of a network that includes a plurality of virtual packet brokers provided for a plurality of cloud random access networks and a plurality of traffic aggregation points,
wherein one of the plurality of cloud random access networks is a cloud-native solution for handling random access network functionality;
provide the one or more parameters to the plurality of virtual packet brokers, to cause the plurality of virtual packet brokers to filter the network traffic to obtain network visibility traffic;
receive, from one or more probes of a session aggregation point of the network, one or more metrics calculated based on the network visibility traffic by the plurality of virtual packet brokers,
wherein the one or more metrics include availability, latency, or utilization of the one of the plurality of cloud random access networks or one of the plurality of traffic aggregation points;
receive data identifying the plurality of virtual packet brokers and the one or more probes;
generate a user interface based on the one or more metrics and the data identifying the plurality of virtual packet brokers and the one or more probes;
provide the user interface for display;
receive data identifying bandwidths of the plurality of virtual packet brokers and the one or more probes,
wherein the plurality of virtual packet brokers utilize optical taps in the one of the plurality of cloud random access networks, the one of the plurality of traffic aggregation points, or the session aggregation point of the network to determine bandwidth utilization;
determine a schedule for performance sampling based on the data identifying the bandwidths;
cause the plurality of virtual packet brokers to implement the schedule for the performance sampling; and
receive, from the one or more probes, network performance data generated based on the plurality of virtual packet brokers implementing the schedule for the performance sampling.
16. The non-transitory computer-readable medium of claim 15, wherein the one or more instructions further cause the device to:
determine one or more actions to be implemented based on the network performance data; and
cause the one or more actions to be implemented in the network.
17. The non-transitory computer-readable medium of claim 15, wherein the one or more instructions further cause the device to one or more of:
dispatch a technician to service a network device associated with the one of the plurality of cloud random access networks or one of the plurality of traffic aggregation points;
dispatch an autonomous vehicle to service a network device associated with the one of the plurality of cloud random access networks or one of the plurality of traffic aggregation points; or
order a replacement network device to replace a network device associated with the one of the plurality of cloud random access networks or one of the plurality of traffic aggregation points.
18. The non-transitory computer-readable medium of claim 15, wherein each of the plurality of cloud random access networks and each of the plurality of traffic aggregation points includes a mobile edge computing device.
19. The non-transitory computer-readable medium of claim 15, wherein the one or more parameters include one or more of:
a parameter for filtering the network traffic based on virtual Internet protocol addresses of the network traffic,
a parameter for filtering the network traffic based on subnets of the network traffic,
a parameter for filtering the network traffic based on tuples of the network traffic, or
a parameter for filtering the network traffic based on a simple network management protocol.
20. The non-transitory computer-readable medium of claim 15, wherein the one or more parameters cause a first set of the plurality of virtual packet brokers to not capture the network traffic, and cause a second set of the plurality of virtual packet brokers to capture the network traffic, filter the network traffic to the network visibility traffic, and provide the network visibility traffic to the one or more probes.
US17/101,882 2020-11-23 2020-11-23 Systems and methods for automated remote network performance monitoring Active US11552872B2 (en)


Publications (2)

Publication Number Publication Date
US20220166701A1 US20220166701A1 (en) 2022-05-26
US11552872B2 true US11552872B2 (en) 2023-01-10


Citations (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030174689A1 (en) * 2002-03-04 2003-09-18 Shozo Fujino GPRS network system
US20070076746A1 (en) * 2005-09-14 2007-04-05 Faska Thomas S Device, system, and method for transporting data using combined broadband and legacy network infrastructures
US20070192344A1 (en) * 2005-12-29 2007-08-16 Microsoft Corporation Threats and countermeasures schema
US20090287734A1 (en) * 2005-10-21 2009-11-19 Borders Kevin R Method, system and computer program product for comparing or measuring information content in at least one data stream
US20100050241A1 (en) * 2008-08-20 2010-02-25 Mei Yan Accessing memory device content using a network
US20110131645A1 (en) * 2009-11-30 2011-06-02 Johnson Robert A Load balancing and failover of gateway devices
US20110295925A1 (en) * 2010-03-30 2011-12-01 Jason Lieblich Systems and methods for selecting an alternative computing infrastructure
US20150016249A1 (en) * 2013-07-10 2015-01-15 Sunil Mukundan Edge-gateway multipath method and system
US20160316485A1 (en) * 2015-04-21 2016-10-27 Anoop Kumar Traffic scheduling system for wireless communication system
US20170054648A1 (en) * 2015-08-19 2017-02-23 Samsung Electronics Co., Ltd. Data transfer apparatus, data transfer controlling method and data stream
US20170149665A1 (en) * 2014-07-16 2017-05-25 Nec Europe Ltd. Method and system for managing flows in a network
US20170230211A1 (en) * 2016-02-04 2017-08-10 Charter Communications Operating, Llc System and method of analyzing cmts data streams
US20170244648A1 (en) * 2016-02-23 2017-08-24 Microsemi Storage Solutions (U.S.), Inc. Systems and methods for transportation of multiple constant bitrate data streams
US20170339022A1 (en) * 2016-05-17 2017-11-23 Brocade Communications Systems, Inc. Anomaly detection and prediction in a packet broker
US20180132227A1 (en) * 2016-11-04 2018-05-10 At&T Intellectual Property I, L.P. Asynchronous multi-point transmission schemes
US20180234882A1 (en) * 2017-02-10 2018-08-16 Electronics And Telecommunications Research Institute Operation method of communication node in communication network
US20190058725A1 (en) * 2017-05-18 2019-02-21 Qadium, Inc. Security management platforms for assessing threats to an internal network
US10243657B1 (en) * 2018-02-12 2019-03-26 Fujitsu Limited Efficient optical transport in radio access networks
US20200037354A1 (en) * 2018-07-30 2020-01-30 Samsung Electronics Co., Ltd. Method and apparatus for frame based equipment operation of nr unlicensed
US10572879B1 (en) * 2005-10-03 2020-02-25 Avaya Inc. Agent driven media-agnostic work item grouping and sharing over a consult medium
US20200145337A1 (en) * 2019-12-20 2020-05-07 Brian Andrew Keating Automated platform resource management in edge computing environments
US10693751B2 (en) * 2018-03-12 2020-06-23 Saudi Arabian Oil Company End-to-end IT service performance monitoring
US10785791B1 (en) * 2015-12-07 2020-09-22 Commscope Technologies Llc Controlling data transmission in radio access networks
US20200374751A1 (en) * 2019-05-20 2020-11-26 Commscope Technologies Llc Load-testing a cloud radio access network
US20200382396A1 (en) * 2019-05-28 2020-12-03 Servicenow, Inc. Data packet loss detection
US20200389386A1 (en) * 2019-06-05 2020-12-10 Nokia Solutions And Networks Oy Dynamic shared protection using redundant network paths
US20200396301A1 (en) * 2019-06-11 2020-12-17 At&T Intellectual Property I, L.P. Telecommunication network edge cloud interworking via edge exchange point
US20210144050A1 (en) * 2019-11-08 2021-05-13 Verizon Patent And Licensing Inc. Remote probing for failover
US20210226647A1 (en) * 2015-12-30 2021-07-22 Teraki Gmbh Method and system for obtaining and storing sensor data

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6374352B1 (en) * 1998-08-26 2002-04-16 Intel Corporation Temporary configuration with fall-back
US20030177048A1 (en) * 2001-04-02 2003-09-18 Ronald Jacobson Monitoring system and process for the food service industry
US6872516B2 (en) * 2003-04-16 2005-03-29 Advanced Breath Diagnostics, Llc Methods of producing carbon-13 labeled biomass
JP2005266933A (en) * 2004-03-16 2005-09-29 Fujitsu Ltd Storage management system and storage management method
US8547855B1 (en) * 2006-03-21 2013-10-01 Cisco Technology, Inc. Method and apparatus to schedule multiple probes for active or passive monitoring of networks
US7889710B2 (en) * 2006-09-29 2011-02-15 Rosemount Inc. Wireless mesh network with locally activated fast active scheduling of wireless messages
US8042100B2 (en) * 2007-08-27 2011-10-18 International Business Machines Corporation Methods, systems, and computer products for evaluating robustness of a list scheduling framework
US8907981B2 (en) * 2009-11-05 2014-12-09 International Business Machines Corporation Method and system for dynamic composing and creating 3D virtual devices
WO2011076737A1 (en) * 2009-12-21 2011-06-30 Koninklijke Kpn N.V. Content distribution system
US9852636B2 (en) * 2012-05-18 2017-12-26 International Business Machines Corproation Traffic event data source identification, data collection and data storage
US10667438B2 (en) * 2012-12-27 2020-05-26 Intel Corporation Maintenance prediction of electronic devices using periodic thermal evaluation
US9106565B2 (en) * 2013-01-04 2015-08-11 International Business Machines Corporation Loop avoidance for event-driven virtual link aggregation
US8903999B2 (en) * 2013-03-12 2014-12-02 Level 3 Communications, Llc Method and system for calculating and charting website performance
US9971619B2 (en) * 2014-10-15 2018-05-15 Keysight Technologies Singapore (Holdings) Pte Ltd Methods and systems for forwarding network packets within virtual machine host systems
US20180082066A1 (en) * 2016-09-16 2018-03-22 Microsoft Technology Licensing, Llc Secure data erasure in hyperscale computing systems
US10104407B2 (en) * 2016-10-13 2018-10-16 Cisco Technology, Inc. Method and device for network-assisted client-centric QoE optimization
US10348741B2 (en) * 2017-05-10 2019-07-09 Alcatel Lucent Method and apparatus for virtually tapping network traffic using a virtual packet broker
US10541901B2 (en) * 2017-09-19 2020-01-21 Keysight Technologies Singapore (Sales) Pte. Ltd. Methods, systems and computer readable media for optimizing placement of virtual network visibility components
US11552872B2 (en) * 2020-11-23 2023-01-10 Verizon Patent And Licensing Inc. Systems and methods for automated remote network performance monitoring




Legal Events

Date Code Title Description
AS Assignment

Owner name: VERIZON PATENT AND LICENSING INC., NEW JERSEY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:FAGAN, SCOTT;REEL/FRAME:054448/0903

Effective date: 20201118
