WO2023201079A1 - Management of redundant links - Google Patents

Management of redundant links

Info

Publication number
WO2023201079A1
WO2023201079A1 (PCT/US2023/018716)
Authority
WO
WIPO (PCT)
Prior art keywords
data center
distributed unit
network
virtualized distributed
virtualized
Application number
PCT/US2023/018716
Other languages
English (en)
Inventor
Dhaval Mehta
Sourabh Gupta
Gurpreet Sohi
Original Assignee
Dish Wireless L.L.C.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Priority claimed from US 17/974,983 (US20230337047A1)
Priority claimed from US 17/974,980 (US20230337046A1)
Priority claimed from US 17/974,977 (US20230336287A1)
Application filed by Dish Wireless L.L.C.
Publication of WO2023201079A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/06: Management of faults, events, alarms or notifications
    • H04L 41/0604: Management of faults, events, alarms or notifications using filtering, e.g. reduction of information by using priority, element types, position or time
    • H04L 41/0622: Management of faults, events, alarms or notifications using filtering based on time
    • H04L 41/0654: Management of faults, events, alarms or notifications using network fault recovery
    • H04L 41/0663: Performing the actions predefined by failover planning, e.g. switching to standby network elements
    • H04L 41/40: Arrangements for maintenance, administration or management of data switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities
    • H04L 45/00: Routing or path finding of packets in data switching networks
    • H04L 45/12: Shortest path evaluation
    • H04L 45/121: Shortest path evaluation by minimising delays
    • H04L 45/22: Alternate routing
    • H04L 45/24: Multipath
    • H04L 45/28: Routing or path finding of packets using route fault recovery

Definitions

  • a control plane may comprise a part of a network that controls how data packets are forwarded or routed. The control plane may be responsible for populating routing tables or forwarding tables to enable data plane functions.
  • a data plane (or forwarding plane) may comprise a part of a network that forwards and routes data packets based on control plane logic. Control plane logic may also identify packets to be discarded and packets to which a high quality of service should apply.
  • 5G networks may leverage the use of cyclic prefix orthogonal frequency-division multiplexing (CP-OFDM) to increase channel utilization and reduce interference, the use of multiple-input multiple-output (MIMO) antennas to increase spectral efficiency, and the use of millimeter wave spectrum (mmWave) operation to increase throughput and reduce latency in data transmission.
  • 5G wireless user equipment UE may communicate over both a lower frequency sub-6 GHz band between 410 MHz and 7125 MHz and a higher frequency mmWave band between 24.25 GHz and 52.6 GHz.
  • Although lower frequencies may provide a lower maximum bandwidth and lower data rates than higher frequencies, lower frequencies may provide higher spectral efficiency and greater range.
  • Although the mmWave spectrum may provide higher data rates, millimeter waves may not penetrate through objects, such as walls and glass, and may have a more limited range.
  • the radio access network components may include virtualized distributed units (VDUs) and virtualized centralized units (VCUs).
  • To satisfy a power requirement for the network slice, various components of the radio access network may need to be redeployed closer to core network components (e.g., at an edge data center).
  • the virtualized components of the radio access network may be dynamically reassigned to different layers within a data center hierarchy in order to satisfy changing latency requirements and/or power requirements for the network slice.
  • Redundant links may be automatically generated in response to server failures and/or link failures occurring within the data center hierarchy.
  • the virtualized network functions may be deployed across different data centers with varying electrical distances from user equipment and devices.
  • the user devices may include mobile computing devices, such as laptop computers and smartphones.
  • One or more of the virtualized network functions may be assigned to computing resources within a particular data center based on latency requirements, power requirements, and/or quality of service requirements for one or more network slices supported by the virtualized network functions.
  • a network slice may comprise an end-to-end logical communications network that extends from a user device to a data network.
  • a network slice may comprise a set of virtualized network functions.
  • the set of virtualized network functions may include a set of shared core network functions that are shared by two or more network slices.
  • the technical benefits of the systems and methods disclosed herein include increasing system availability, decreasing system downtime, reducing data communication latency, enabling real-time interactivity between user equipment and cloud-based services, increasing data rates such that user equipment (e.g., wireless electronic devices) and data networks may transmit and receive content more quickly, and reducing energy consumption of the computing and data storage resources required for providing a telecommunications infrastructure.
  • one or more processors (e.g., a virtual processor or a hardware processor) may be configured to determine a first failure rate for a first set of machines residing within a first data center layer.
  • the first data center layer includes a first router having a first redundant link between the first router and a third router residing within a third data center layer.
  • the one or more processors configured to detect that the first failure rate has exceeded a threshold failure rate and identify a second set of machines residing within a second data center layer in response to detection that the first failure rate has exceeded the threshold failure rate.
  • the second data center layer includes a second router.
  • the one or more processors configured to remove the first redundant link between the third router residing within the third data center layer and the first router in response to detection that the first failure rate has exceeded the threshold failure rate and add a second redundant link between the third router residing within the third data center layer and the second router.
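The claimed failover sequence above (detect that a failure rate has exceeded a threshold, remove the redundant link to the first router, add a redundant link to the second router) can be sketched as follows. The `Router` class, function name, and router names are hypothetical; the patent does not define a concrete API.

```python
from dataclasses import dataclass, field

@dataclass
class Router:
    """A router identified by name, tracking its redundant links by peer name."""
    name: str
    redundant_links: set = field(default_factory=set)

def failover_redundant_link(third_router: Router,
                            first_router: Router,
                            second_router: Router,
                            failure_rate: float,
                            threshold: float) -> bool:
    """If the first layer's failure rate exceeds the threshold, move the
    third router's redundant link from the first router to the second router.
    Returns True when a failover was performed."""
    if failure_rate <= threshold:
        return False  # first layer is healthy; keep the existing link
    third_router.redundant_links.discard(first_router.name)  # remove old link
    third_router.redundant_links.add(second_router.name)     # add new link
    return True
```

The sketch keeps only the link bookkeeping; in a real deployment the add/remove steps would correspond to reconfiguring routes between data center layers.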
  • one or more processors may be configured to determine a communication latency between a user device and a virtualized distributed unit deployed within a first data center layer.
  • the virtualized distributed unit may be configured to perform radio link control layer operations and medium access control layer operations.
  • the one or more processors may be configured to determine a latency requirement for communication between the user device and the virtualized distributed unit, detect that the communication latency is greater than the latency requirement for the communication between the user device and the virtualized distributed unit, determine a second data center layer for the virtualized distributed unit in response to detection that the communication latency is greater than the latency requirement for the communication between the user device and the virtualized distributed unit, and redeploy the virtualized distributed unit within the second data center layer.
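The latency-driven redeployment described above (detect a latency violation, determine a second data center layer, redeploy the virtualized distributed unit there) can be sketched as a small decision function. The layer names and the closest-first ordering are illustrative assumptions.

```python
def select_layer_for_vdu(measured_latency_ms: float,
                         latency_requirement_ms: float,
                         current_layer: str,
                         layers_by_proximity: list) -> str:
    """Return the data center layer the VDU should run in.
    `layers_by_proximity` is ordered from closest to the user device (lowest
    latency) outward. When the measured latency violates the requirement,
    step one layer closer to the user device; otherwise stay put."""
    if measured_latency_ms <= latency_requirement_ms:
        return current_layer  # requirement satisfied; no redeployment
    idx = layers_by_proximity.index(current_layer)
    return layers_by_proximity[max(0, idx - 1)]  # redeploy closer to the UE
```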
  • a telecommunications network may increase system availability by determining a first number of replica pods for a virtualized distributed unit that performs medium access control layer operations, detecting that the first number of replica pods is different than a number of pods running the virtualized distributed unit, and transmitting an instruction or a control signal to a replication controller to adjust the number of pods running the virtualized distributed unit to the first number of replica pods.
  • the method may further include determining a service availability for the virtualized distributed unit and adjusting the first number of replica pods for the virtualized distributed unit based on the service availability.
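A minimal sketch of the replica-pod reconciliation and availability-based adjustment described above. The instruction format sent to the replication controller and the availability target are hypothetical.

```python
def reconcile_replicas(desired: int, running: int):
    """Compare the desired replica pod count with the number of pods actually
    running the virtualized distributed unit; return the instruction for the
    replication controller, or None when no adjustment is needed."""
    if running == desired:
        return None
    return {"action": "scale", "target_replicas": desired}

def adjust_for_availability(base_replicas: int,
                            service_availability: float,
                            target_availability: float = 0.999) -> int:
    """Add a replica while the measured service availability for the
    virtualized distributed unit falls short of the target."""
    if service_availability < target_availability:
        return base_replicas + 1
    return base_replicas
```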
  • Figure 1A depicts an embodiment of a 5G network including a radio access network (RAN) and a core network.
  • Figures 1B-1C depict various embodiments of a radio access network and a core network for providing a communications channel (or channel) between user equipment and a data network.
  • Figure 1D depicts one embodiment of network functions interacting between user and control planes.
  • Figure 1E depicts another embodiment of network functions interacting between user and control planes.
  • Figure 1F depicts an embodiment of network slices sharing a set of shared core network functions.
  • Figures 1G-1H depict various embodiments of network slices after updates have been made based on changes to the network slice policy.
  • Figures 2A-2D depict various embodiments of a radio access network.
  • Figure 2E depicts an embodiment of a core network.
  • Figure 2F depicts an embodiment of a containerized environment that includes a container engine running on top of a host operating system.
  • Figures 3A-3D depict various embodiments of a 5G network comprising implementations of a radio access network and a core network with virtualized network functions arranged within a data center hierarchy.
  • Figure 4A depicts an embodiment of a data center hierarchy that includes a cell site, a passthrough edge data center (EDC), and a breakout EDC.
  • Figures 4B-4C depict various embodiments of an implementation of a data center hierarchy.
  • Figure 4D depicts an embodiment of a data center hierarchy implemented using a cloud-based compute and storage infrastructure.
  • Figure 4E depicts another embodiment of a data center hierarchy implemented using a cloud-based compute and storage infrastructure.
  • Figure 5 depicts an embodiment of two cell sites in communication with a local data center (LDC).
  • Figure 6A depicts a flowchart describing an embodiment of a process for running a user plane function for a core network.
  • Figure 6B depicts a flowchart describing an embodiment of a process for establishing network connections using a core network.
  • Figure 6C depicts a flowchart describing an embodiment of a process for establishing a network connection.
  • Figure 6D depicts a flowchart describing an embodiment of a process for adding and removing redundant links.
  • Figure 7A depicts one embodiment of a portion of a 5G network.
  • Figure 7B depicts one embodiment of the portion of the 5G network in Figure 7A with an additional communication path.
  • Figure 7C depicts one embodiment of a portion of a 5G network that includes a plurality of small cell structures.
  • Figure 7D depicts another embodiment of a portion of a 5G network that includes a plurality of small cell structures.
  • Figure 8A depicts a flowchart describing an embodiment of a process for deploying a distributed unit within a data center hierarchy.
  • Figure 8B depicts a flowchart describing an embodiment of a process for maintaining a distributed unit.
  • the radio access network components may include virtualized distributed units (VDUs) and virtualized centralized units (VCUs).
  • To satisfy a latency requirement for the network slice, various components of the radio access network, such as a VDU and/or a VCU, may need to be redeployed closer to user equipment (e.g., at a cell site).
  • To satisfy a power requirement for the network slice, various components of the radio access network may need to be redeployed closer to the core network components (e.g., at an edge data center).
  • various components of the radio access network may be dynamically reassigned to different layers within a data center hierarchy in order to satisfy changing latency requirements and power requirements for the network slice.
  • Technical benefits of dynamically assigning radio access network components and redundant links within the data center hierarchy include reduced downtime and increased system availability.
  • One technical issue with dynamically assigning radio access network components to computing resources (e.g., servers) may be increased power consumption due to the redeployment of virtualized components.
  • Technical issues with utilizing redundant links may include increased virtual infrastructure cost and increased power consumption to support the redundant links.
  • Technical benefits of dynamically assigning radio access network components to computing resources (e.g., servers) as changes in latency, power, availability, and/or quality of service requirements occur to network slices over time are that system performance may be increased, packet delay variation may be reduced, end-to-end latency may be reduced, and overall power consumption for implementing the network slices may be reduced.
  • a telecommunications link may refer to a communications channel that electrically connects two or more electronic devices.
  • a communications channel may refer to a wireless communications channel, a physical transmission medium (e.g., a wire or cable), or to a logical connection over a multiplexed medium (e.g., a radio channel).
  • the two or more electronic devices may include routers, servers, and computing devices.
  • the communications channel may allow data transmissions (e.g., data packets) to be exchanged between the two or more electronic devices.
  • a link may comprise a physical link or a virtual circuit that uses one or more physical links.
  • a redundant link may comprise a duplicate link between a router within a first layer of a data center hierarchy and one or more other routers within a second layer of the data center hierarchy.
  • the first layer of the data center hierarchy may correspond with a cell site layer and the second layer of the data center hierarchy may correspond with a local data center.
  • a redundant link may comprise a redundant link between two different data centers or between two server clusters located in different layers of the data center hierarchy that prevents a routing failure from being a single point of failure for a network connection. In some cases, the redundant link may provide load sharing between the two different data centers or between the server clusters.
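A redundant link, as described above, prevents a single routing failure from becoming a single point of failure for a network connection. One way to check that property is to verify reachability after removing each link in turn. The link and site names in this sketch are hypothetical.

```python
def survives_single_failure(links: list, src: str, dst: str) -> bool:
    """Return True if `src` can still reach `dst` after any single link
    failure, i.e. the redundant links remove the single point of failure.
    `links` is a list of undirected (node, node) pairs."""
    def reachable(edges):
        # Build an undirected adjacency map and do a depth-first search.
        adj = {}
        for a, b in edges:
            adj.setdefault(a, set()).add(b)
            adj.setdefault(b, set()).add(a)
        seen, stack = {src}, [src]
        while stack:
            for n in adj.get(stack.pop(), ()):
                if n not in seen:
                    seen.add(n)
                    stack.append(n)
        return dst in seen
    # Cut each link in turn and require that connectivity survives.
    return all(reachable([e for e in links if e != cut]) for cut in links)
```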
  • redundant links may be dynamically generated in response to server failures (e.g., due to hardware failures or virtual machine failures) and/or link failures that affect virtualized radio access network components.
  • In response to detecting that a failure rate for a first set of machines has exceeded a threshold failure rate (e.g., more than two failures over the past 24 hours), a redundant link to the first set of machines residing within the first data center layer may be removed or bypassed and a new redundant link to a different set of machines residing within a second data center layer may be generated or instantiated such that the new redundant link connects to a set of machines that have not exceeded the threshold failure rate.
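The threshold check above (e.g., more than two failures over the past 24 hours) amounts to counting failures inside a sliding time window. A minimal sketch, with hypothetical class and parameter names:

```python
from collections import deque

class FailureWindow:
    """Track failure timestamps and decide whether a set of machines has
    exceeded the threshold failure rate over a sliding window."""
    def __init__(self, window_hours: float = 24.0, max_failures: int = 2):
        self.window_s = window_hours * 3600.0
        self.max_failures = max_failures
        self.events = deque()  # timestamps (seconds) of recorded failures

    def record_failure(self, now: float) -> None:
        self.events.append(now)

    def exceeded(self, now: float) -> bool:
        # Drop failures that have aged out of the window, then compare.
        while self.events and now - self.events[0] > self.window_s:
            self.events.popleft()
        return len(self.events) > self.max_failures
```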
  • Application containers may allow applications to be bundled with their own libraries and configuration files, and then executed in isolation on a single operating system (OS) kernel.
  • a container may include the compiled code for an application (e.g., composed of microservices) along with the binaries and libraries necessary to execute the application.
  • a pod may refer to or comprise one or more containers with shared computing, storage, and networking resources.
  • a pod may be run on a node, which may comprise a virtual machine or a physical machine.
  • a plurality of nodes or machines may correspond with a cluster. Each pod may communicate with other pods running on the same node or other nodes in a cluster.
  • the number of replica pods for a virtualized distributed unit may be adjusted over time based on power requirements and system availability requirements.
  • the total number of replica pods across every virtualized distributed unit running within a server cluster (or a node cluster) may be set based on a maximum power requirement for the entire cluster.
  • the number of replica pods per virtualized distributed unit may be determined such that service availability for the virtualized distributed units with a high-availability configuration or high-availability requirement are satisfied first subject to a maximum power requirement for the server cluster (or node cluster) executing the virtualized distributed units.
  • the server cluster (or node cluster) may run the virtualized distributed units as containerized applications and may run the virtualized distributed units using a plurality of virtual machines or a plurality of physical machines.
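The HA-first replica allocation described above can be sketched as a greedy assignment under a cluster power budget. The assumption that every replica pod draws a fixed amount of power is an illustrative simplification, and the tuple format is hypothetical.

```python
def allocate_replicas(vdus: list, power_per_pod_w: float,
                      max_power_w: float) -> dict:
    """Grant each virtualized distributed unit its requested replica pods,
    serving high-availability VDUs first, without exceeding the cluster's
    maximum power budget. Each entry of `vdus` is a tuple of
    (name, requested_replicas, high_availability)."""
    budget = int(max_power_w // power_per_pod_w)  # pods the budget can power
    allocation = {}
    # Sort so high-availability VDUs are satisfied first (stable sort).
    ordered = sorted(vdus, key=lambda v: not v[2])
    for name, requested, _ha in ordered:
        granted = min(requested, budget)
        allocation[name] = granted
        budget -= granted
    return allocation
```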
  • various virtualized network functions for a network slice may be assigned to different computing resources (e.g., servers or virtual machines) across a data center hierarchy based on latency requirements, power requirements, and/or quality of service requirements.
  • the assignment of a user plane function to a particular server or to a machine (e.g., a real or virtual machine) within a particular data center layer of the data center hierarchy may be determined based on a maximum latency requirement for a network slice.
  • a server within a local data center may be selected for running the user plane function to ensure that a 2ms one-way latency from a mobile computing device to the server may be sustained.
  • a server within an edge data center may be selected for running a virtualized distributed unit if at least a 1ms one-way latency from a mobile computing device to the server may be obtained or sustained.
  • a server within an edge data center may be selected for running a user plane function if at least a 1ms one-way latency from a virtualized distributed unit to the user plane function may be obtained or sustained.
  • the server assignments of both a virtualized distributed unit and a user plane function associated with a network slice may change over time in order to satisfy latency, power, and quality of service requirements for the network slice.
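One way to read the placement rules above is: run each function in the most centralized data center layer whose one-way latency still meets the budget, conserving scarce edge resources. A sketch under that reading, with hypothetical layer names and latencies:

```python
def place_function(layers: list, max_one_way_latency_ms: float) -> str:
    """Pick the most centralized data center layer that can still sustain the
    required one-way latency. `layers` is ordered from edge to core as
    (layer_name, one_way_latency_ms) pairs."""
    eligible = [name for name, latency in layers
                if latency <= max_one_way_latency_ms]
    if not eligible:
        raise ValueError("no layer satisfies the latency requirement")
    return eligible[-1]  # furthest from the UE that still meets the budget
```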
  • a set of shared core network functions that are shared by two or more network slices may be identified based on latency requirements, power requirements, and/or quality of service requirements for the two or more network slices.
  • the set of shared core network functions may be identified based on a first latency requirement associated with a first network slice and a second latency requirement for a second network slice.
  • the set of shared core network functions may be identified based on a first power requirement associated with a first network slice and a second power requirement for a second network slice.
  • a first set of network functions for the first network slice may include the set of shared core network functions and a second set of network functions for the second network slice may include the same set of shared core network functions.
  • Data communications (e.g., data packets) for the first network slice and the second network slice may be carried by the set of shared core network functions.
  • one or more quality of service parameters associated with a network slice may be used to assign virtualized network functions for the network slice to computing resources within a data center hierarchy.
  • the computing resources may include hardware servers, virtual servers, real machines, and virtual machines.
  • One or more of the virtualized network functions may be implemented as containerized applications or microservices.
  • the one or more quality of service parameters may specify requirements for a bit rate, a bit error rate, a throughput, a packet loss, a maximum packet loss rate, a packet error rate, a packet delay variation, an end-to-end latency, a point-to-point latency between virtualized network functions, a network availability, and a network bandwidth associated with the network slice.
  • the point-to-point latency between two virtualized network functions may comprise a one-way data latency between a virtualized distributed unit and a user plane function.
  • quality of service parameters associated with the network slice may be updated (e.g., a maximum latency requirement may be relaxed or increased from 1ms to 5ms) causing a reassignment of the virtualized network functions for the network slice to different computing resources within the data center hierarchy.
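When a QoS parameter is relaxed (e.g., the latency bound increased from 1ms to 5ms), the placement can be recomputed and the virtualized network function moved only if a better layer becomes eligible. A sketch under the same illustrative edge-to-core layer model; names are hypothetical.

```python
from typing import Optional

def reassign_on_qos_update(layers: list, current_layer: str,
                           new_max_latency_ms: float) -> Optional[str]:
    """Recompute the best data center layer after a QoS update.
    `layers` is ordered edge-to-core as (layer_name, one_way_latency_ms)
    pairs. Returns the new layer to redeploy into, or None if the function
    should stay where it is."""
    eligible = [name for name, latency in layers
                if latency <= new_max_latency_ms]
    # Prefer the most centralized eligible layer; fall back to the edge.
    best = eligible[-1] if eligible else layers[0][0]
    return best if best != current_layer else None
```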
  • FIG. 1 A depicts an embodiment of a 5G network 102 including a radio access network (RAN) 120 and a core network 130.
  • the radio access network 120 may comprise a new-generation radio access network (NG-RAN) that uses the 5G new radio interface (NR).
  • the 5G network 102 connects user equipment (UE) 108 to the data network (DN) 180 using the radio access network 120 and the core network 130.
  • the data network 180 may comprise the Internet, a local area network (LAN), a wide area network (WAN), a private data network, a wireless network, a wired network, or a combination of networks.
  • the UE 108 may comprise an electronic device with wireless connectivity or cellular communication capability, such as a mobile phone or handheld computing device.
  • the UE 108 may comprise a 5G smartphone or a 5G cellular device that connects to the radio access network 120 via a wireless connection.
  • the UE 108 may comprise one of a number of UEs not depicted that are in communication with the radio access network 120.
  • the UEs may include mobile and non-mobile computing devices.
  • the UEs may include laptop computers, desktop computers, Internet-of-Things (IoT) devices, and/or any other electronic computing device that includes a wireless communications interface to access the radio access network 120.
  • the radio access network 120 includes a remote radio unit (RRU) 202 for wirelessly communicating with UE 108.
  • the remote radio unit (RRU) 202 may comprise a radio unit (RU) and may include one or more radio transceivers for wirelessly communicating with UE 108.
  • the remote radio unit (RRU) 202 may include circuitry for converting signals sent to and from an antenna of a base station into digital signals for transmission over packet networks.
  • the radio access network 120 may correspond with a 5G radio base station that connects user equipment to the core network 130.
  • the 5G radio base station may be referred to as a next generation Node B, a “gNodeB,” or a “gNB.”
  • a base station may refer to a network element that is responsible for the transmission and reception of radio signals in one or more cells to or from user equipment, such as UE 108.
  • the core network 130 may utilize a cloud-native service-based architecture (SBA) in which different core network functions (e.g., authentication, security, session management, and core access and mobility functions) are virtualized and implemented as loosely coupled independent services that communicate with each other, for example, using HTTP protocols and APIs.
  • a microservices-based architecture in which software is composed of small independent services that communicate over well-defined APIs may be used for implementing some of the core network functions.
  • control plane (CP) network functions for performing session management may be implemented as containerized applications or microservices.
  • a container-based implementation may offer improved scalability and availability over other approaches.
  • Network functions that have been implemented using microservices may store their state information using the unstructured data storage function (UDSF) that supports data storage for stateless network functions across the service-based architecture (SBA).
  • the primary core network functions may comprise the access and mobility management function (AMF), the session management function (SMF), and the user plane function (UPF).
  • the UPF (e.g., UPF 132) may perform packet processing including routing and forwarding, quality of service (QoS) handling, and packet data unit (PDU) session management.
  • the UPF may serve as an ingress and egress point for user plane traffic and provide anchored mobility support for user equipment.
  • the UPF 132 may provide an anchor point between the UE 108 and the data network 180 as the UE 108 moves between coverage areas.
  • the AMF may act as a single-entry point for a UE connection and perform mobility management, registration management, and connection management between a data network and UE.
  • the SMF may perform session management, user plane selection, and IP address allocation.
  • Other core network functions may include a network repository function (NRF) for maintaining a list of available network functions and providing network function service registration and discovery, a policy control function (PCF) for enforcing policy rules for control plane functions, an authentication server function (AUSF) for authenticating user equipment and handling authentication related functionality, a network slice selection function (NSSF) for selecting network slice instances, and an application function (AF) for providing application services.
  • Application-level session information may be exchanged between the AF and PCF (e.g., bandwidth requirements for QoS).
  • the PCF may dynamically decide if the user equipment should grant the requested access based on a location of the user equipment.
  • a network slice may comprise an independent end-to-end logical communications network that includes a set of logically separated virtual network functions.
  • Network slicing may allow different logical networks or network slices to be implemented using the same compute and storage infrastructure. Therefore, network slicing may allow heterogeneous services to coexist within the same network architecture via allocation of network computing, storage, and communication resources among active services.
  • the network slices may be dynamically created and adjusted over time based on network requirements. For example, some networks may require ultra-low-latency or ultra-reliable services.
  • components of the radio access network 120 may need to be deployed at a cell site or in a local data center (LDC) that is in close proximity to a cell site such that the latency requirements are satisfied (e.g., such that the one-way latency from the cell site to the DU component or CU component is less than 1.2ms).
  • the distributed unit (DU) and the centralized unit (CU) of the radio access network 120 may be co-located with the remote radio unit (RRU) 202.
  • the distributed unit (DU) and the remote radio unit (RRU) 202 may be co-located at a cell site and the centralized unit (CU) may be located within a local data center (LDC).
  • the 5G network 102 may provide one or more network slices, wherein each network slice may include a set of network functions that are selected to provide specific telecommunications services.
  • each network slice may comprise a configuration of network functions, network applications, and underlying cloud-based compute and storage infrastructure.
  • a network slice may correspond with a logical instantiation of a 5G network, such as an instantiation of the 5G network 102.
  • the 5G network 102 may support customized policy configuration and enforcement between network slices per service level agreements (SLAs) within the radio access network (RAN) 120.
  • User equipment, such as UE 108, may connect to multiple network slices at the same time (e.g., eight different network slices).
  • a PDU session, such as PDU session 104, may belong to only one network slice instance.
  • the 5G network 102 may dynamically generate network slices to provide telecommunications services for various use cases, such as the enhanced Mobile Broadband (eMBB), Ultra-Reliable Low-Latency Communication (URLLC), and massive Machine Type Communication (mMTC) use cases.
  • a cloud-based compute and storage infrastructure may comprise a networked computing environment that provides a cloud computing environment.
  • Cloud computing may refer to Internet-based computing, wherein shared resources, software, and/or information may be provided to one or more computing devices on-demand via the Internet (or other network).
  • the term “cloud” may be used as a metaphor for the Internet, based on the cloud drawings used in computer networking diagrams to depict the Internet as an abstraction of the underlying infrastructure it represents.
  • the core network 130 may include a plurality of network elements that are configured to offer various data and telecommunications services to subscribers or end users of user equipment, such as UE 108.
  • network elements include network computers, network processors, networking hardware, networking equipment, routers, switches, hubs, bridges, radio network controllers, gateways, servers, virtualized network functions, and network functions virtualization infrastructure.
  • a network element may comprise a real or virtualized component that provides wired or wireless communication network services.
  • Virtualization allows virtual hardware to be created and decoupled from the underlying physical hardware.
  • a virtualized component is a virtual router (or a vRouter).
  • Another example of a virtualized component is a virtual machine.
  • a virtual machine may comprise a software implementation of a physical machine.
  • the virtual machine may include one or more virtual hardware devices, such as a virtual processor, a virtual memory, a virtual disk, or a virtual network interface card.
  • the virtual machine may load and execute an operating system and applications from the virtual memory.
  • the operating system and applications used by the virtual machine may be stored using the virtual disk.
  • the virtual machine may be stored as a set of files including a virtual disk file for storing the contents of a virtual disk and a virtual machine configuration file for storing configuration settings for the virtual machine.
  • the configuration settings may include the number of virtual processors (e.g., four virtual CPUs), the size of a virtual memory, and the size of a virtual disk (e.g., a 64GB virtual disk) for the virtual machine.
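The configuration file contents described above might be modeled as a simple record; the field names and the memory size are illustrative assumptions, not a real hypervisor file format.

```python
from dataclasses import dataclass

@dataclass
class VirtualMachineConfig:
    """Illustrative contents of a virtual machine configuration file."""
    num_virtual_cpus: int   # e.g., four virtual CPUs
    virtual_memory_mb: int  # size of the virtual memory (assumed unit: MB)
    virtual_disk_gb: int    # e.g., a 64GB virtual disk

# The VM itself would additionally be stored as a virtual disk file holding
# the disk contents; only the configuration record is modeled here.
vm_config = VirtualMachineConfig(num_virtual_cpus=4,
                                 virtual_memory_mb=8192,
                                 virtual_disk_gb=64)
```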
  • a virtualized component is a software container or an application container that encapsulates an application’s environment.
  • applications and services may be run using virtual machines instead of containers in order to improve security.
  • a common virtual machine may also be used to run applications and/or containers for a number of closely related network services.
  • the 5G network 102 may implement various network functions, such as the core network functions and radio access network functions, using a cloud-based compute and storage infrastructure.
  • a network function may be implemented as a software instance running on hardware or as a virtualized network function.
  • Virtual network functions (VNFs) may comprise implementations of network functions as software processes or applications.
  • a virtual network function (VNF) may be implemented as a software process or application that is run using virtual machines (VMs) or application containers within the cloud-based compute and storage infrastructure.
  • Application containers or containers allow applications to be bundled with their own libraries and configuration files, and then executed in isolation on a single operating system (OS) kernel.
  • Application containerization may refer to an OS-level virtualization method that allows isolated applications to be run on a single host and access the same OS kernel.
  • Containers may run on bare-metal systems, cloud instances, and virtual machines.
  • Network functions virtualization may be used to virtualize network functions, for example, via virtual machines, containers, and/or virtual hardware that runs processor readable code or executable instructions stored in one or more computer-readable storage mediums (e.g., one or more data storage devices).
  • the core network 130 includes a user plane function (UPF) 132 for transporting IP data traffic (e.g., user plane traffic) between the UE 108 and the data network 180 and for handling packet data unit (PDU) sessions with the data network 180.
  • the UPF 132 may comprise an anchor point between the UE 108 and the data network 180.
  • the UPF 132 may be implemented as a software process or application running within a virtualized infrastructure or a cloud-based compute and storage infrastructure.
  • the 5G network 102 may connect the UE 108 to the data network 180 using a packet data unit (PDU) session 104, which may comprise part of an overlay network.
  • the PDU session 104 may utilize one or more quality of service (QoS) flows, such as QoS flows 105 and 106, to exchange traffic (e.g., data and voice traffic) between the UE 108 and the data network 180.
  • the one or more QoS flows may comprise the finest granularity of QoS differentiation within the PDU session 104.
  • the PDU session 104 may belong to a network slice instance through the 5G network 102.
  • an AMF that supports the network slice instance may be selected and a PDU session via the network slice instance may be established.
  • the PDU session 104 may be of type IPv4 or IPv6 for transporting IP packets.
  • the radio access network 120 may be configured to establish and release parts of the PDU session 104 that cross the radio interface.
  • the radio access network 120 may include a set of one or more remote radio units (RRUs) that includes radio transceivers (or combinations of radio transmitters and receivers) for wirelessly communicating with UEs.
  • the set of RRUs may correspond with a network of cells (or coverage areas) that provide continuous or nearly continuous overlapping service to UEs, such as UE 108, over a geographic area. Some cells may correspond with stationary coverage areas and other cells may correspond with coverage areas that change over time (e.g., due to movement of a mobile RRU).
  • the UE 108 may be capable of transmitting signals to and receiving signals from one or more RRUs within the network of cells over time.
  • One or more cells may correspond with a cell site.
  • the cells within the network of cells may be configured to facilitate communication between UE 108 and other UEs and/or between UE 108 and a data network, such as data network 180.
  • the cells may include macrocells (e.g., capable of reaching 18 miles) and small cells, such as microcells (e.g., capable of reaching 1.2 miles), picocells (e.g., capable of reaching 0.12 miles), and femtocells (e.g., capable of reaching 32 feet). Small cells may communicate through macrocells.
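The nominal cell ranges above can be captured in a small lookup; the function below is an illustrative sketch, not a radio planning model, and the numbers are simply the figures quoted in the text.

```python
# Nominal reach of each cell type, taken from the figures quoted above.
CELL_REACH_MILES = {
    "macrocell": 18.0,
    "microcell": 1.2,
    "picocell": 0.12,
    "femtocell": 32 / 5280,  # 32 feet converted to miles
}

def cell_types_covering(distance_miles: float) -> list[str]:
    """Cell types whose nominal reach covers a UE at the given distance."""
    return [cell for cell, reach in CELL_REACH_MILES.items()
            if reach >= distance_miles]
```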
  • Macrocells may transmit and receive radio signals using multiple-input multiple-output (MIMO) antennas that may be connected to a cell tower, an antenna mast, or a raised structure.
  • the UPF 132 may be responsible for routing and forwarding user plane packets between the radio access network 120 and the data network 180.
  • Uplink packets arriving from the radio access network 120 may use a general packet radio service (GPRS) tunneling protocol (or GTP tunnel) to reach the UPF 132.
  • the GPRS tunneling protocol for the user plane may support multiplexing of traffic from different PDU sessions by tunneling user data over the interface between the radio access network 120 and the UPF 132.
  • the UPF 132 may remove the packet headers belonging to the GTP tunnel before forwarding the user plane packets towards the data network 180. As the UPF 132 may provide connectivity towards other data networks in addition to the data network 180, the UPF 132 must ensure that the user plane packets are forwarded towards the correct data network.
  • Each GTP tunnel may belong to a specific PDU session, such as PDU session 104.
  • Each PDU session may be set up towards a specific data network name (DNN) that uniquely identifies the data network to which the user plane packets should be forwarded.
  • the UPF 132 may keep a record of the mapping between the GTP tunnel, the PDU session, and the DNN for the data network to which the user plane packets are directed.
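A minimal sketch of the mapping record described above, assuming hypothetical field names: the UPF records, per GTP tunnel, the owning PDU session and the DNN toward which de-tunneled user plane packets are forwarded.

```python
class UpfSessionTable:
    """Illustrative record kept by a UPF: GTP tunnel -> (PDU session, DNN)."""

    def __init__(self) -> None:
        self._by_tunnel: dict[int, tuple[int, str]] = {}

    def register(self, gtp_tunnel_id: int, pdu_session_id: int, dnn: str) -> None:
        # Each GTP tunnel belongs to one PDU session, and each PDU session
        # is set up towards one data network name (DNN).
        self._by_tunnel[gtp_tunnel_id] = (pdu_session_id, dnn)

    def forward_target(self, gtp_tunnel_id: int) -> str:
        # After stripping the GTP tunnel header, the user plane packet is
        # forwarded towards the data network identified by this DNN.
        _pdu_session, dnn = self._by_tunnel[gtp_tunnel_id]
        return dnn
```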
  • a QoS flow may correspond with a stream of data packets that have equal quality of service (QoS).
  • a PDU session may have multiple QoS flows, such as the QoS flows 105 and 106 that belong to PDU session 104.
  • the UPF 132 may use a set of service data flow (SDF) templates to map each downlink packet onto a specific QoS flow.
  • the UPF 132 may receive the set of SDF templates from a session management function (SMF), such as the SMF 133 depicted in Figure 1B, during setup of the PDU session 104.
  • the SMF may generate the set of SDF templates using information provided from a policy control function (PCF), such as the PCF 135 depicted in Figure 1C.
  • the UPF 132 may track various statistics regarding the volume of data transferred by each PDU session, such as PDU session 104, and provide the information to an SMF.
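The SDF-template classification described above can be sketched as a list of match predicates supplied by the SMF, tried in order against each downlink packet. The template representation and the port-based example rule are assumptions for illustration only.

```python
# Each SDF template is modeled as (match_predicate, qos_flow_id); the UPF
# maps a downlink packet onto the QoS flow of the first matching template.
def build_classifier(sdf_templates):
    def classify(packet: dict):
        for matches, qos_flow_id in sdf_templates:
            if matches(packet):
                return qos_flow_id
        return None  # no template matched
    return classify

# Hypothetical templates as received from an SMF during PDU session setup:
templates = [
    (lambda p: p.get("dst_port") == 5060, 105),  # e.g., voice traffic -> QoS flow 105
    (lambda p: True, 106),                       # catch-all -> default QoS flow 106
]
classify = build_classifier(templates)
```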
  • Figure 1B depicts an embodiment of a radio access network 120 and a core network 130 for providing a communications channel (or channel) between user equipment and data network 180.
  • the communications channel may comprise a pathway through which data is communicated between the UE 108 and the data network 180.
  • the user equipment in communication with the radio access network 120 includes UE 108, mobile phone 110, and mobile computing device 112.
  • the user equipment may include a plurality of electronic devices, including mobile computing devices and non-mobile computing devices.
  • the core network 130 includes network functions such as an access and mobility management function (AMF) 134, a session management function (SMF) 133, and a user plane function (UPF) 132.
  • the AMF may interface with user equipment and act as a single-entry point for a UE connection.
  • the AMF may interface with the SMF to track user sessions.
  • the AMF may interface with a network slice selection function (NSSF) (not depicted) to select network slice instances for user equipment, such as UE 108.
  • the AMF may be responsible for coordinating the handoff between coverage areas, whether the coverage areas are associated with the same radio access network or different radio access networks.
  • the UPF 132 may transfer downlink data received from the data network 180 to user equipment, such as UE 108, via the radio access network 120 and/or transfer uplink data received from user equipment to the data network 180 via the radio access network 120.
  • An uplink may comprise a radio link through which user equipment transmits data and/or control signals to the radio access network 120.
  • a downlink may comprise a radio link through which the radio access network 120 transmits data and/or control signals to the user equipment.
  • the radio access network 120 may be logically divided into a remote radio unit (RRU) 202, a distributed unit (DU) 204, and a centralized unit (CU) that is partitioned into a CU user plane portion CU-UP 216 and a CU control plane portion CU-CP 214.
  • the CU-UP 216 may correspond with the centralized unit for the user plane and the CU-CP 214 may correspond with the centralized unit for the control plane.
  • the CU-CP 214 may perform functions related to a control plane, such as connection setup, mobility, and security.
  • the CU-UP 216 may perform functions related to a user plane, such as user data transmission and reception functions. Additional details of radio access networks are described in reference to Figure 2A.
  • Decoupling control signaling in the control plane from user plane traffic in the user plane may allow the UPF 132 to be positioned closer to the edge of a network than the AMF 134. As closer geographic or topographic proximity may reduce the electrical distance, the electrical distance from the UPF 132 to the UE 108 may be less than the electrical distance from the AMF 134 to the UE 108.
  • the radio access network 120 may be connected to the AMF 134, which may allocate temporary unique identifiers, determine tracking areas, and select appropriate policy control functions (PCFs) for user equipment, via an N2 interface.
  • the N3 interface may be used for transferring user data (e.g., user plane traffic) from the radio access network 120 to the user plane function UPF 132 and may be used for providing low-latency services using edge computing resources.
  • the electrical distance from the UPF 132 (e.g., located at the edge of a network) to user equipment, such as UE 108, may impact the latency and performance of services provided to the user equipment.
  • the UE 108 may be connected to the SMF 133 via an N1 interface (not depicted), which may transfer UE information directly to the AMF 134.
  • the UPF 132 may be connected to the data network 180 via an N6 interface.
  • the N6 interface may be used for providing connectivity between the UPF 132 and other external or internal data networks (e.g., to the Internet).
  • the radio access network 120 may be connected to the SMF 133, which may manage UE context and network handovers between base stations, via the N2 interface.
  • the N2 interface may be used for transferring control plane signaling.
  • the RRU 202 may perform physical layer functions, such as employing orthogonal frequency-division multiplexing (OFDM) for downlink data transmission.
  • the DU 204 may be located at a cell site (or a cellular base station) and may provide real-time support for lower layers of the protocol stack, such as the radio link control (RLC) layer and the medium access control (MAC) layer.
  • the CU may provide support for higher layers of the protocol stack, such as the service data adaptation protocol (SDAP) layer, the packet data convergence control (PDCP) layer, and the radio resource control (RRC) layer.
  • the SDAP layer may comprise the highest L2 sublayer in the 5G NR protocol stack.
  • a radio access network may correspond with a single CU that connects to multiple DUs (e.g., 10 DUs), and each DU may connect to multiple RRUs (e.g., 18 RRUs).
  • a single CU may manage 10 different cell sites (or cellular base stations) and 180 different RRUs.
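The fan-out arithmetic above (one CU managing 10 DUs, each DU connecting 18 RRUs) reduces to a simple product; the counts below are just the example figures from the text.

```python
# Example fan-out from the text: one CU manages 10 DUs, each fronting 18 RRUs.
DUS_PER_CU = 10
RRUS_PER_DU = 18

def rrus_managed_by_cu(dus_per_cu: int = DUS_PER_CU,
                       rrus_per_du: int = RRUS_PER_DU) -> int:
    """Total RRUs reachable from a single CU."""
    return dus_per_cu * rrus_per_du
```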
  • the radio access network 120 or portions of the radio access network 120 may be implemented using multi-access edge computing (MEC) that allows computing and storage resources to be moved closer to user equipment. Allowing data to be processed and stored at the edge of a network that is located close to the user equipment may be necessary to satisfy low-latency application requirements.
  • the DU 204 and CU-UP 216 may be executed as virtual instances within a data center environment that provides single-digit millisecond latencies (e.g., less than 2ms) from the virtual instances to the UE 108.
  • FIG. 1C depicts an embodiment of a radio access network 120 and a core network 130 for providing a communications channel (or channel) between user equipment and data network 180.
  • the core network 130 includes UPF 132 for handling user data in the core network 130.
  • Data is transported between the radio access network 120 and the core network 130 via the N3 interface.
  • the data may be tunneled across the N3 interface (e.g., IP routing may be done on the tunnel header IP address instead of using end user IP addresses). This may allow for maintaining a stable IP anchor point even though UE 108 may be moving around a network of cells or moving from one coverage area into another coverage area.
  • the UPF 132 may connect to external data networks, such as the data network 180 via the N6 interface.
  • the data may not be tunneled across the N6 interface as IP packets may be routed based on end user IP addresses.
  • the UPF 132 may connect to the SMF 133 via the N4 interface.
  • the core network 130 includes a group of control plane functions 140 comprising SMF 133, AMF 134, PCF 135, NRF 136, AF 137, and NSSF 138.
  • the SMF 133 may configure or control the UPF 132 via the N4 interface.
  • the SMF 133 may control packet forwarding rules used by the UPF 132 and adjust QoS parameters for QoS enforcement of data flows (e.g., limiting available data rates).
  • multiple SMF/UPF pairs may be used to simultaneously manage user plane traffic for a particular user device, such as UE 108.
  • a set of SMFs may be associated with UE 108, wherein each SMF of the set of SMFs corresponds with a network slice.
  • the SMF 133 may control the UPF 132 on a per end user data session basis, in which the SMF 133 may create, update, and remove session information in the UPF 132.
  • the SMF 133 may select an appropriate UPF for a user plane path by querying the NRF 136 to identify a list of available UPFs and their corresponding capabilities and locations.
  • the SMF 133 may select the UPF 132 based on a physical location of the UE 108 and a physical location of the UPF 132 (e.g., corresponding with a physical location of a data center in which the UPF 132 is running). The SMF 133 may also select the UPF 132 based on a particular network slice supported by the UPF 132 or based on a particular data network that is connected to the UPF 132.
  • the ability to query the NRF 136 for UPF information eliminates the need for the SMF 133 to store and update the UPF information for every available UPF within the core network 130.
  • the SMF 133 may query the NRF 136 to identify a set of available UPFs for a packet data unit (PDU) session and acquire UPF information from a variety of sources, such as the AMF 134 or the UE 108.
  • the UPF information may include a location of the UPF 132, a location of the UE 108, the UPF’s dynamic load, the UPF’s static capacity among UPFs supporting the same data network, and the capability of the UPF 132.
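A hedged sketch of the selection logic described above: filter the NRF's candidate UPFs by supported network slice and data network, then prefer the candidate nearest the UE, breaking ties on dynamic load. Locations are abstracted to one-dimensional coordinates, and all field names are assumptions for illustration.

```python
def select_upf(candidates, ue_location, network_slice, dnn):
    """Pick a UPF for a PDU session from NRF-reported candidates."""
    # Keep only UPFs that support the requested slice and data network.
    eligible = [u for u in candidates
                if network_slice in u["slices"] and dnn in u["data_networks"]]
    if not eligible:
        return None
    # Prefer the UPF closest to the UE; break ties on dynamic load.
    return min(eligible,
               key=lambda u: (abs(u["location"] - ue_location), u["load"]))

candidates = [
    {"id": "upf-a", "location": 10, "load": 0.2,
     "slices": {"embb"}, "data_networks": {"internet"}},
    {"id": "upf-b", "location": 2, "load": 0.7,
     "slices": {"embb", "urllc"}, "data_networks": {"internet"}},
]
chosen = select_upf(candidates, ue_location=3, network_slice="embb", dnn="internet")
```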
  • the radio access network 120 may provide separation of the centralized unit for the control plane (CU-CP) 214 and the centralized unit for the user plane (CU-UP) 216 functionalities while supporting network slicing.
  • the CU-CP 214 may obtain resource utilization and latency information from the DU 204 and/or the CU-UP 216, and select a CU-UP to pair with the DU 204 based on the resource utilization and latency information in order to configure a network slice.
  • Network slice configuration information associated with the network slice may be provided to the UE 108 for purposes of initiating communication with the UPF 132 using the network slice.
  • Figure 1D depicts one embodiment of network functions interacting between user and control planes.
  • the logical connections between the network functions depicted in Figure 1D should not be interpreted as direct physical connections.
  • the RAN 120 is connected to the user plane function UPF 132 via interface N3.
  • the UPF 132 is connected to the data network DN 180 via the N6 interface.
  • the data network DN 180 may represent an edge computing network or resources, such as a mobile edge computing (MEC) network.
  • UE 108 connects to the AMF 134, which is responsible for authentication and authorization of access requests, as well as mobility management functions via the N1 interface.
  • the AMF 134 may communicate with other network functions through a service-based interface 144 using application programming interfaces (APIs).
  • the SMF 133 may comprise a network function that is responsible for the allocation and management of IP addresses that are assigned to the UE 108, as well as the selection of the UPF 132 for traffic associated with a particular PDU session for the UE 108.
  • the SMF 133 may also communicate with other network functions through the service-based interface 144 using application programming interfaces (APIs).
  • Each of the network functions NRF 136, PCF 135, UDSF 139, AF 137, NSSF 138, AMF 134, and SMF 133 may communicate with each other via the service-based interface 144 using application programming interfaces (APIs).
  • the unstructured data storage function (UDSF) 139 may provide service interfaces to store, update, read, and delete network function data.
  • network functions such as the PCF 135, SMF 133, and AMF 134 may remain stateless or primarily stateless.
  • Figure 1E depicts another embodiment of network functions interacting between user and control planes.
  • UPFs 132a-132b (also referred to as UPFs 132) are in communication with data networks (DNs) 180a-180b (also referred to as DNs 180).
  • a plurality of UPFs 132 may be connected in series between the RAN 120 and a plurality of DNs 180.
  • the RAN 120 may include gNBs 146a-146b (also referred to as gNBs 146). Each gNB 146 may comprise at least a DU 204, a CU-UP 216, and a CU-CP 214.
  • Each UPF 132a-132b may be associated with a PDU session, and may connect to a corresponding SMF 133a-133b over an N4 interface to receive session control information. If the UE 108 has multiple PDU sessions active, then each PDU session may be supported by a different UPF 132, each of which may be connected to an SMF 133 over an N4 interface. It should also be understood that any of the network functions may be virtualized within a network, and that the network itself may be provided as a network slice.
  • Figure 1F depicts an embodiment of network slices 122a and 122b (also referred to as network slices 122) sharing a set of shared core network functions 131.
  • the set of shared core network functions 131 includes AMF 134 and NSSF 138.
  • the radio access network (RAN) 120 may support differentiated handling of traffic between isolated network slices 122a and 122b for the UE 108.
  • the network slice selection function (NSSF) 138 within the shared core network functions 131 may support the selection of network slice instances to serve the UE 108. In some cases, network slice selection may be determined by the network (e.g., using either NSSF 138 or AMF 134) based on network slice policy.
  • the UE 108 may simultaneously connect to data networks 180a and 180b via the network slices 122a and 122b to support different latency requirements.
  • FIG. 1G depicts an embodiment of network slices 122a and 122b after updates have been made based on changes to the network slice policy.
  • the network slices 122a and 122b share a set of shared core network functions 131 that includes PCF 135 and NSSF 138.
  • Each network slice 122 includes an AMF 134, an SMF 133, and a UPF 132.
  • FIG. 1H depicts another embodiment of network slices 122a and 122b after updates have been made based on changes to the network slice policy.
  • the network slices 122a and 122b share a set of shared core network functions 131 that includes AMF 134, PCF 135, and NSSF 138.
  • Each network slice 122 includes a CU-UP 216, SMF 133, and a UPF 132; accordingly, network slice 122a includes CU-UP 216a, SMF 133a, and UPF 132a and network slice 122b includes CU-UP 216b, SMF 133b, and UPF 132b.
  • FIG. 2A depicts an embodiment of a radio access network 120.
  • the radio access network 120 includes virtualized CU units 220, virtualized DU units 210, remote radio units (RRUs) 202, and a RAN intelligent controller (RIC) 230.
  • the virtualized DU units 210 may comprise virtualized versions of distributed units (DUs) 204.
  • the distributed unit (DU) 204 may comprise a logical node configured to provide functions for the radio link control (RLC) layer, the medium access control (MAC) layer, and the physical (PHY) layer.
  • the virtualized CU units 220 may comprise virtualized versions of centralized units (CUs) comprising a centralized unit for the user plane CU-UP 216 and a centralized unit for the control plane CU-CP 214.
  • a centralized unit (CU) may comprise a logical node configured to provide functions for the radio resource control (RRC) layer, the packet data convergence control (PDCP) layer, and the service data adaptation protocol (SDAP) layer.
  • the centralized unit for the control plane CU-CP 214 may comprise a logical node configured to provide functions of the control plane part of the RRC and PDCP.
  • the centralized unit for the user plane CU-UP 216 may comprise a logical node configured to provide functions of the user plane part of the SDAP and PDCP. Virtualizing the control plane and user plane functions allows the centralized units (CUs) to be consolidated in one or more data centers on RAN-based open interfaces.
  • the remote radio units (RRUs) 202 may correspond with different cell sites.
  • a single DU may connect to multiple RRUs via a fronthaul interface 203.
  • the fronthaul interface 203 may provide connectivity between DUs and RRUs.
  • DU 204a may connect to 18 RRUs via the fronthaul interface 203.
  • a centralized unit (CU) may control the operation of multiple DUs via a midhaul F1 interface that comprises the F1-C and F1-U interfaces.
  • the F1 interface may support control plane and user plane separation, and separate the Radio Network Layer and the Transport Network Layer.
  • the centralized unit for the control plane CU-CP 214 may connect to ten different DUs within the virtualized DU units 210.
  • the centralized unit for the control plane CU-CP 214 may control ten DUs and 180 RRUs.
  • a single distributed unit (DU) 204 may be located at a cell site or in a local data center. Centralizing the distributed unit (DU) 204 at a local data center or at a single cell site location instead of distributing the DU 204 across multiple cell sites may result in reduced implementation costs.
  • the centralized unit for the control plane CU-CP 214 may host the radio resource control (RRC) layer and the control plane part of the packet data convergence control (PDCP) layer.
  • the E1 interface may separate the Radio Network Layer and the Transport Network Layer.
  • the CU-CP 214 terminates the E1 interface connected with the centralized unit for the user plane CU-UP 216 and the F1-C interface connected with the distributed units (DUs) 204.
  • the centralized unit for the user plane CU-UP 216 hosts the user plane part of the packet data convergence control (PDCP) layer and the service data adaptation protocol (SDAP) layer.
  • the CU-UP 216 terminates the E1 interface connected with the centralized unit for the control plane CU-CP 214 and the F1-U interface connected with the distributed units (DUs) 204.
  • the distributed units (DUs) 204 may handle the lower layers of the baseband processing up through the packet data convergence control (PDCP) layer of the protocol stack.
  • the interfaces F1-C and E1 may carry signaling information for setting up, modifying, relocating, and/or releasing a UE context.
  • the RAN intelligent controller (RIC) 230 may control the underlying RAN elements via the E2 interface.
  • the E2 interface connects the RAN intelligent controller (RIC) 230 to the distributed units (DUs) 204 and the centralized units CU-CP 214 and CU-UP 216.
  • the RAN intelligent controller (RIC) 230 may comprise a near-real time RIC.
  • a non-real-time RIC may comprise a logical node allowing non-real-time control rather than near-real-time control, and the near-real-time RIC 230 may comprise a logical node allowing near-real-time control and optimization of RAN elements and resources on the basis of information collected from the distributed units (DUs) 204 and the centralized units CU-CP 214 and CU-UP 216 via the E2 interface.
  • both a distributed unit (DU) 204 and a corresponding centralized unit CU-UP 216 may be implemented at a cell site.
  • a distributed unit (DU) 204 may be implemented at a cell site and the corresponding centralized unit CU-UP 216 may be implemented at a local data center (LDC).
  • both a distributed unit (DU) 204 and a corresponding centralized unit CU-UP 216 may be implemented at a local data center (LDC).
  • both a distributed unit (DU) 204 and a corresponding centralized unit CU-UP 216 may be implemented at a cell site, but the corresponding centralized unit CU-CP 214 may be implemented at a local data center (LDC).
  • a distributed unit (DU) 204 may be implemented at a local data center (LDC) and the corresponding centralized units CU-CP 214 and CU-UP 216 may be implemented at an edge data center (EDC).
  • network slicing operations may be communicated via the E1, F1-C, and F1-U interfaces of the radio access network 120.
  • CU-CP 214 may select the appropriate DU 204 and CU-UP 216 entities to serve a network slicing request associated with a particular service level agreement (SLA).
  • FIG. 2B depicts another embodiment of a radio access network 120.
  • the radio access network 120 includes hardware-level components and software-level components.
  • the hardware-level components include one or more processors 270, one or more memory 271, and one or more disks 272.
  • the software-level components include software applications, such as a RAN intelligent controller (RIC) 230, virtualized CU unit (VCU) 220, and virtualized DU unit (VDU) 210.
  • the software-level components may be run using the hardware-level components or executed using processor and storage components of the hardware-level components.
  • one or more of the RIC 230, VCU 220, and VDU 210 may be run using the processor 270, memory 271, and disk 272.
  • one or more of the RIC 230, VCU 220, and VDU 210 may be run using a virtual processor and a virtual memory that are themselves executed or generated using the processor 270, memory 271, and disk 272.
  • the software-level components also include virtualization layer processes, such as virtual machine 273, hypervisor 274, container engine 275, and host operating system 276.
  • the hypervisor 274 may comprise a native hypervisor (or bare-metal hypervisor) or a hosted hypervisor (or type 2 hypervisor).
  • the hypervisor 274 may provide a virtual operating platform for running one or more virtual machines, such as virtual machine 273.
  • a hypervisor may comprise software that creates and runs virtual machine instances.
  • Virtual machine 273 may include a plurality of virtual hardware devices, such as a virtual processor, a virtual memory, and a virtual disk.
  • the virtual machine 273 may include a guest operating system that has the capability to run one or more software applications, such as the RAN intelligent controller (RIC) 230.
  • the virtual machine 273 may run the host operating system 276 upon which the container engine 275 may run.
  • a virtual machine, such as virtual machine 273, may include one or more virtual processors.
  • a container engine 275 may run on top of the host operating system 276 in order to run multiple isolated instances (or containers) on the same operating system kernel of the host operating system 276.
  • Containers may perform virtualization at the operating system level and may provide a virtualized environment for running applications and their dependencies.
  • the container engine 275 may acquire a container image and convert the container image into running processes.
  • the container engine 275 may group containers that make up an application into logical units (or pods).
  • a pod may contain one or more containers and all containers in a pod may run on the same node in a cluster. Each pod may serve as a deployment unit for the cluster. Each pod may run a single instance of an application.
  • a "replica" may refer to a unit of replication employed by a computing platform to provision or deprovision resources. Some computing platforms may run containers directly and therefore a container may comprise the unit of replication. Other computing platforms may wrap one or more containers into a pod and therefore a pod may comprise the unit of replication.
  • a replication controller may be used to ensure that a specified number of replicas of a pod are running at the same time. If less than the specified number of pods are running (e.g., due to a node failure or pod termination), then the replication controller may automatically replace a failed pod with a new pod.
  • the number of replicas may be dynamically adjusted based on a prior number of node failures. For example, if it is detected that a prior number of node failures for nodes in a cluster running a particular network slice has exceeded a threshold number of node failures, then the specified number of replicas may be increased (e.g., increased by one). Running multiple pod instances and keeping the specified number of replicas constant may prevent users from losing access to their application in the event that a particular pod fails or becomes inaccessible.
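  • The replica-management behavior described above can be sketched as follows; the function names and threshold logic are illustrative assumptions rather than any specific platform's API:

```python
def desired_replicas(base_replicas: int, prior_node_failures: int,
                     failure_threshold: int) -> int:
    """Return the number of pod replicas to keep running.

    If the number of prior node failures observed for the cluster
    exceeds the threshold, request one extra replica so that a single
    pod failure does not leave users without access to the application.
    """
    if prior_node_failures > failure_threshold:
        return base_replicas + 1
    return base_replicas


def reconcile(running_pods: int, specified_replicas: int) -> int:
    """Replication-controller style reconciliation: return how many new
    pods must be started to restore the specified replica count."""
    return max(0, specified_replicas - running_pods)
```

In this sketch, a node failure that drops the running pod count below the specified count yields a positive `reconcile` result, prompting replacement pods to be started.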
  • a virtualized infrastructure manager (not depicted) may run on the radio access network (RAN) 120 in order to provide a centralized platform for managing a virtualized infrastructure for deploying various components of the radio access network (RAN) 120.
  • the virtualized infrastructure manager may manage the provisioning of virtual machines, containers, and pods.
  • the virtualized infrastructure manager may also manage a replication controller responsible for managing a number of pods.
  • the virtualized infrastructure manager may perform various virtualized infrastructure related tasks, such as cloning virtual machines, creating new virtual machines, monitoring the state of virtual machines, and facilitating backups of virtual machines.
  • FIG. 2C depicts an embodiment of the radio access network 120 of Figure 2B in which the virtualization layer includes a containerized environment 279.
  • the containerized environment 279 includes a container engine 275 for instantiating and managing application containers, such as container 277.
  • Containerized applications may comprise applications that run in isolated runtime environments (or containers).
  • the containerized environment 279 may include a container orchestration service for automating the deployments of containerized applications.
  • the container 277 may be used to deploy microservices for running network functions.
  • the container 277 may run DU components and/or CU components of the radio access network (RAN) 120.
  • the containerized environment 279 may be executed using hardware-level components or executed using processor and storage components of the hardware-level components.
  • the containerized environment 279 may be run using the processor 270, memory 271, and disk 272. In another example, the containerized environment 279 may be run using a virtual processor and a virtual memory that are themselves executed or generated using the processor 270, memory 271, and disk 272.
  • FIG. 2D depicts another embodiment of a radio access network 120.
  • the radio access network 120 includes hardware-level components and software-level components.
  • the hardware-level components include a plurality of machines (e.g., physical machines) that may be grouped together and presented as a single computing system or a cluster. Each machine of the plurality of machines may comprise a node in a cluster (e.g., a failover cluster).
  • the plurality of machines include machine 280 and machine 290.
  • the machine 280 includes a network interface 285, processor 286, memory 287, and disk 288 all in communication with each other.
  • Processor 286 allows machine 280 to execute computer readable instructions stored in memory 287 to perform processes described herein.
  • Processor 286 may include one or more processing units, such as one or more CPUs and/or one or more GPUs.
  • Memory 287 may comprise one or more types of memory (e.g., RAM, SRAM, DRAM, ROM, EEPROM, or Flash).
  • the disk 288 may comprise a hard disk drive and/or a solid-state drive.
  • the machine 290 includes a network interface 295, processor 296, memory 297, and disk 298 all in communication with each other.
  • Processor 296 allows machine 290 to execute computer readable instructions stored in memory 297 to perform processes described herein.
  • the plurality of machines may be used to implement a failover cluster.
  • the plurality of machines may be used to run one or more virtual machines or to execute or generate a containerized environment, such as the containerized environment 279 depicted in Figure 2C.
  • the software-level components include a RAN intelligent controller (RIC) 230, CU control plane (CU-CP) 214, CU user plane (CU-UP) 216, and distributed unit (DU) 204.
  • the software-level components may be run using a dedicated hardware server.
  • the software-level components may be run using a virtual machine running or containerized environment running on the plurality of machines.
  • the software-level components may be run from the cloud (e.g., the software-level components may be deployed using a cloud-based compute and storage infrastructure).
  • FIG. 2E depicts an embodiment of a core network 130.
  • the core network 130 includes implementation for core network functions UPF 132, SMF 133, and AMF 134.
  • the core network 130 may be used to provide Internet access for user equipment via a radio access network, such as the radio access network 120 in Figure 1C.
  • the AMF 134 may be configured to host various functions including SMF selection 252 and network slicing support 254.
  • the UPF 132 may be configured to host various functions including mobility anchoring 244, packet data unit (PDU) handling 242, and QoS handling for the user plane.
  • the SMF 133 may be configured to host various functions including UE IP address allocation and management 248, selection and control of user plane functions, and PDU session control 246.
  • the core network functions may be run using containers within the containerized environment 279 that includes a container engine 275 for instantiating and managing application containers, such as container 277.
  • the containerized environment 279 may be executed or generated using a plurality of machines as depicted in Figure 2D or may be executed or generated using hardware-level components, such as the processor 270, memory 271, and disk 272 depicted in Figure 2C.
  • FIG. 2F depicts an embodiment of a containerized environment 279 that includes a container engine 275 running on top of a host operating system 276.
  • the container engine 275 may manage or run containers 277 on the same operating system kernel of the host operating system 276.
  • the container engine 275 may acquire a container image and convert the container image into one or more running processes.
  • the container engine 275 may group containers that make up an application into logical units (or pods).
  • a pod may contain one or more containers and all containers in a pod may run on the same node in a cluster.
  • Each container 277 may include application code 278 and application dependencies 267, such as operating system libraries, required to run the application code 278.
  • Containers allow portability by encapsulating an application within a single executable package of software that bundles application code 278 together with the related configuration files, binaries, libraries, and dependencies required to run the application code 278.
  • Figure 3A depicts an embodiment of a 5G network comprising a radio access network 120 and a core network 130.
  • the radio access network 120 and the core network 130 allow user equipment UE 108 to transfer data to the data network 180 and/or to receive data from the data network 180.
  • the VDU 210 and VCU 220 components of the radio access network 120 may be implemented using different data centers within a data center hierarchy that includes a local data center (LDC) 304 that is a first electrical distance away from the cell site 302, a breakout edge data center (BEDC) 306 that is a second electrical distance greater than the first electrical distance away from the cell site 302, and a regional data center (RDC) 308 that is a third electrical distance greater than the second electrical distance away from the cell site 302.
  • the local data center (LDC) 304 may correspond with a first one-way latency from the cell site 302.
  • the breakout edge data center (BEDC) 306 may correspond with a second one-way latency greater than the first one-way latency from the cell site 302.
  • the regional data center (RDC) 308 may correspond with a third one-way latency greater than the second one-way latency from the cell site 302.
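  • The latency-ordered data center hierarchy described above can be modeled with a short sketch; the latency values and tier names used here are hypothetical placeholders, not figures from this disclosure:

```python
# Hypothetical one-way latencies (ms) from the cell site to each tier
# of the data center hierarchy; values are illustrative only.
DATA_CENTER_LATENCY_MS = {
    "LDC": 1.0,    # local data center, first (smallest) electrical distance
    "BEDC": 4.0,   # breakout edge data center, second electrical distance
    "RDC": 10.0,   # regional data center, third (largest) electrical distance
}

def placement_candidates(latency_budget_ms: float) -> list:
    """Return the data center tiers whose one-way latency from the cell
    site fits within the given latency budget, nearest tier first."""
    tiers = sorted(DATA_CENTER_LATENCY_MS.items(), key=lambda kv: kv[1])
    return [name for name, latency in tiers if latency <= latency_budget_ms]
```

A component with a tight latency budget would thus be constrained to the LDC, while a more tolerant component could be placed at the BEDC or RDC.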
  • the cell site 302 may include a cell tower or one or more remote radio units (RRUs) for sending and receiving wireless data transmissions.
  • the cell site 302 may correspond with a macrocell site or a small cell site, such as a microcell site.
  • a data center may refer to a networked group of computing and storage devices that may run applications and services.
  • the data center may include hardware servers, storage systems, routers, switches, firewalls, application-delivery controllers, cooling systems, and power subsystems.
  • a data center may refer to a collection of computing and storage resources provided by on-premises physical servers and/or virtual networks that support applications and services across pools of physical infrastructure.
  • a plurality of services may be connected together to provide a computing and storage resource pool upon which virtualized entities may be instantiated.
  • Multiple data centers may be interconnected with each other to form larger networks consisting of pooled computing and storage resources connected to each other by connectivity resources.
  • the connectivity resources may take the form of physical connections, such as Ethernet or optical communications links, and may include wireless communication channels as well. If two different data centers are connected by a plurality of different communication channels, the links may be combined together using various techniques including the formation of link aggregation groups (LAGs).
  • a link aggregation group (LAG) may comprise a logical interface that uses the link aggregation control protocol (LACP) to aggregate multiple connections at a single direct connect endpoint.
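  • The LAG behavior described above may be illustrated with a minimal model; the class and capacity values are illustrative assumptions, not an implementation of LACP itself:

```python
class LinkAggregationGroup:
    """Minimal model of a LAG: a logical interface that bundles several
    physical links between two data centers into one endpoint."""

    def __init__(self, link_capacities_gbps):
        # Capacity of each member link in the bundle, e.g. [10, 10, 40].
        self.links = list(link_capacities_gbps)

    @property
    def aggregate_capacity_gbps(self):
        # The logical interface exposes the pooled capacity of all
        # member links currently in the bundle.
        return sum(self.links)

    def remove_failed_link(self, capacity_gbps):
        # A LAG survives an individual link failure: traffic continues
        # over the remaining members at reduced aggregate capacity.
        self.links.remove(capacity_gbps)
```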
  • the VDU 210 is running within the local data center (LDC) 304 and the VCU 220 is running within the breakout edge data center (BEDC) 306.
  • the core network functions SMF 133, AMF 134, PCF 135, and NRF 136 are running within the regional data center (RDC) 308.
  • the user plane function UPF 132 is running within the breakout edge data center (BEDC) 306.
  • the breakout edge data center (BEDC) 306 may comprise an edge data center at an edge of a network managed by a cloud service provider.
  • Edge computing, including mobile edge computing, may refer to the arrangement of computing and associated storage resources at locations closer to the "edge" of a network in order to reduce data communication latency to and from user equipment (e.g., end user mobile phones).
  • Some technical benefits of positioning edge computing resources closer to UEs include low latency data transmissions (e.g., under 5ms), real-time (or near real-time) operations, reduced network backhaul traffic, and reduced energy consumption.
  • the edge computing resources may be located within on-premises data centers (on-prem), near or on cell towers, and at network aggregation points within the radio access networks and core networks.
  • Examples of applications and services that may be executed using edge computing include virtual network functions and 5G-enabled network services.
  • the virtual network functions may comprise software-based network functions that are executed using the edge computing resources.
  • a network slice may have a first configuration corresponding with a low-latency configuration in which a user plane function is deployed at a cell site and then subsequently be reconfigured to a second configuration corresponding with a low-power configuration in which the user plane function is redeployed at a breakout edge data center location.
  • the location of the UPF 132 places constraints on the transport network (not depicted) connecting the UPF 132 with the core network 130.
  • the transport network for the backhaul may either be minimized if the UPF is placed closer to the VCU 220 (or closer to the RAN edge) or maximized if the UPF is placed farther away from the VCU 220.
  • the applications and services running on the edge computing resources may communicate with a large number of UEs that may experience connectivity failures (e.g., due to battery life limitations or latency issues) over time.
  • the applications and services may utilize heartbeat tracking techniques to manage device connectivity to the UEs.
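  • The heartbeat tracking technique mentioned above may be sketched as follows; the UE identifiers and timeout value are illustrative assumptions:

```python
class HeartbeatTracker:
    """Track UE liveness via periodic heartbeats. A UE that has not
    been heard from within `timeout_s` is treated as disconnected,
    e.g., due to battery life limitations or latency issues."""

    def __init__(self, timeout_s: float):
        self.timeout_s = timeout_s
        self.last_seen = {}  # UE identifier -> timestamp of last heartbeat

    def heartbeat(self, ue_id: str, now: float) -> None:
        # Record a heartbeat received from the UE at the given time.
        self.last_seen[ue_id] = now

    def connected_ues(self, now: float) -> set:
        # A UE remains "connected" only while its most recent heartbeat
        # falls within the timeout window.
        return {ue for ue, t in self.last_seen.items()
                if now - t <= self.timeout_s}
```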
  • Figure 3B depicts an embodiment of the 5G network depicted in Figure 3A in which the VDU 210 has been moved to run at the cell site 302, the VCU 220 and the UPF 132 have been moved to run at the local data center (LDC) 304, and the SMF 133 and the AMF 134 have been moved to run in the breakout edge data center (BEDC) 306.
  • a virtualized network function may be moved from a first data center to a second data center within a data center hierarchy by transferring an application or program code for the virtualized network function from a first server within the first data center to a second server within the second data center.
  • a second virtual processor that is instantiated and run within the second data center may acquire instructions or program code associated with a virtualized network function prior to a first virtual processor that previously ran the virtualized network function within the first data center being deleted.
  • the shifting of network functions closer to the cell site 302 and/or closer to user equipment may have been performed in response to changes in a service level agreement (SLA) or a request to establish a lower-latency network connection from user equipment to a data network.
  • a service level agreement (SLA) may correspond with a service obligation in which penalties may apply if the SLA is violated.
  • SLA service metrics may include key performance indicators (KPIs), such as packet loss, latency, and guaranteed bit rate.
  • network slices may be reconfigured in order to satisfy traffic isolation requirements, end-to-end latency requirements (e.g., the round-trip time between two end points in a network slice), and throughput requirements for each slice of the network slices.
  • traffic isolation, end-to-end latency, and throughput requirements may vary as a function of a priority level assigned to a given network slice (e.g., whether a network slice has been assigned a high priority or a low priority).
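  • The priority-dependent slice requirements described above may be sketched as follows; the threshold values are illustrative assumptions, not requirements from this disclosure:

```python
# Illustrative per-priority slice requirements; the numbers below are
# assumed placeholders, not values from the source.
SLICE_REQUIREMENTS = {
    "high": {"max_rtt_ms": 10.0, "min_throughput_mbps": 100.0},
    "low":  {"max_rtt_ms": 50.0, "min_throughput_mbps": 10.0},
}

def slice_satisfies_sla(priority, measured_rtt_ms, measured_throughput_mbps):
    """Check a slice's measured end-to-end round-trip time and
    throughput against the requirements for its assigned priority
    level; a False result would trigger reconfiguration of the slice."""
    req = SLICE_REQUIREMENTS[priority]
    return (measured_rtt_ms <= req["max_rtt_ms"]
            and measured_throughput_mbps >= req["min_throughput_mbps"])
```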
  • a first data center and a second data center within a data center hierarchy may both have the same applications or program code stored thereon such that both data centers can run one or more of the same virtualized network functions.
  • a virtualized network function may be moved from the first data center to the second data center by transferring control or execution of the virtualized network function from the first data center to the second data center without transferring applications or program code.
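  • The control-transfer approach described above, in which both data centers already store the same program code, may be sketched as follows; the class and function names are hypothetical:

```python
class DataCenter:
    """Data center that already stores the program code for a set of
    virtualized network functions and can run any of them."""

    def __init__(self, name, stored_vnfs):
        self.name = name
        self.stored_vnfs = set(stored_vnfs)
        self.running_vnfs = set()


def transfer_execution(vnf: str, source: DataCenter, target: DataCenter):
    """Move a VNF by transferring control only: no program code is
    copied, because the target already stores the same application."""
    if vnf not in target.stored_vnfs:
        raise ValueError("target data center does not store this VNF")
    target.running_vnfs.add(vnf)      # start execution on the target...
    source.running_vnfs.discard(vnf)  # ...then release it on the source
```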
  • Figure 3C depicts an embodiment of the 5G network depicted in Figure 3B in which the VCU 220 has been partitioned such that the CU-CP 214 may run at the local data center (LDC) 304 and the CU-UP 216 may be moved to run at the cell site 302.
  • the cell site 302 may include computing and storage resources for running containerized applications.
  • Figure 3D depicts an embodiment of the 5G network depicted in Figure 3C in which the CU-CP 214 and the UPF 132 have been moved to run in the breakout edge data center (BEDC) 306, the VDU 212 and the CU-UP 216 have been moved to run at the local data center (LDC) 304, and the SMF 133 and the AMF 134 have been moved to run in the regional data center (RDC) 308.
  • Deploying the VDU 212 and the CU-UP 216 in the local data center (LDC) 304 may allow the VDU 212 to more efficiently support a number of cell sites including the cell site 302.
  • a data center hierarchy may include a plurality of data centers that span across different geographic regions.
  • a region may correspond with a large geographical area in which multiple data centers are deployed to provide different cloud services.
  • Each data center within the region may include a server cluster.
  • a server cluster (or cluster) may comprise a set of physical machines that are connected together via a network. The cluster may be used to process and store data and to run applications and services in a distributed manner. Applications and data associated with the applications may be replicated or mirrored over a plurality of machines within a cluster to improve fault tolerance.
  • Each machine in a cluster may comprise a node in the cluster.
  • the cluster may comprise a failover cluster.
  • Geo-redundancy may be achieved by running applications or services across two or more availability zones within the same region. Geo-redundancy may refer to the physical placement of servers or server clusters within geographically diverse data centers to safeguard against catastrophic events and natural disasters.
  • An availability zone may comprise a smaller geographical area that is smaller than the large geographical area of the region. Multiple availability zones may reside within a region. An availability zone may comprise one or more data centers with redundant power, networking, and connectivity within a region.
  • Each region may comprise a separate geographical area that does not overlap with any other regions.
  • a logical grouping of one or more data centers within a region may correspond with an availability zone.
  • Each region may include multiple availability zones that may comprise multiple isolated geographical areas within the region.
  • the data centers within the availability zones of a region may be physically isolated from each other inside the region to improve fault tolerance.
  • Each availability zone inside a geographical region may utilize its own power, cooling, and networking connections.
  • An application may be deployed across two or more availability zones in order to ensure high availability. In this case, if a first availability zone goes down (e.g., due to a power failure) within a geographical region, then the application may still be accessible and running within a second availability zone.
  • Each availability zone within the geographical region may be connected to each other with high bandwidth, low latency network connections to enable synchronous replication of applications and services across the two or more availability zones.
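  • Deploying an application across availability zones, as described above, may be sketched with a simple zone-selection routine; the zone names and status values are illustrative assumptions:

```python
def serving_zone(zone_status, deployment_order):
    """Return the first healthy availability zone hosting the
    application, so that traffic fails over to a second zone when the
    first zone goes down (e.g., due to a power failure)."""
    for zone in deployment_order:
        if zone_status.get(zone) == "up":
            return zone
    return None  # every zone in the deployment is unavailable
```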
  • a local zone may correspond with a small geographical region in which one or more data centers are deployed to provide low latency (e.g., single-digit millisecond latency) applications and services.
  • User equipment that is located within the small geographical region, or within a threshold distance (e.g., within two miles) of the small geographical region, may be provided with low latency services.
  • a data center within a local zone may allow a direct private connection to compute and storage resources without requiring access to the Internet.
  • the direct private connection may utilize fiber optic cables to allow a server within the local zone to privately connect to other data centers without requiring access to the Internet.
  • Figure 4A depicts an embodiment of a data center hierarchy that includes a cell site 302 in which servers 370 and virtual router 382 reside, passthrough EDC 305 in which servers 372 and virtual router 384 reside, and breakout EDC 306 in which servers 373 and grouping of virtual routers 362 reside.
  • a direct private connection 324 may be used to connect servers 370 at the cell site 302 with servers 373 within the breakout EDC 306.
  • a direct private connection 322 may be used to connect servers 372 at the passthrough EDC 305 with servers 373 within the breakout EDC 306.
  • the direct private connection 324 may include fiber-optic cables and may be used to establish or connect to a virtual private cloud hosted by the breakout EDC 306.
  • a data center may include one or more servers in communication with one or more storage devices.
  • the servers and data storage devices within a data center may be in communication with each other via a networking fabric connecting server data storage units within the data center to each other.
  • a “server” may refer to a hardware device that acts as the host in a client-server relationship or a software process that shares a resource with or performs work for one or more clients. Communication between computing devices in a client-server relationship may be initiated by a client sending a request to the server asking for access to a particular resource or for particular work to be performed. The server may subsequently perform the actions requested and send a response back to the client.
  • a 5G network implementation may comprise a logical hierarchical architecture consisting of national data centers (NDCs), regional data centers (RDCs), and breakout edge data centers (BEDCs). Each region may host one NDC and three RDCs. NDC functions may communicate with each other through a network transit hub (or transit gateway). The NDC may be used to host a nationwide global service, such as subscriber database, IP multimedia subsystem (IMS) for voice and video-based services, OSS (Operating Support System), and BSS (Billing Support System).
  • An NDC may be hosted in a region with a large geographical area that includes multiple availability zones for high availability.
  • High availability may be achieved by deploying two redundant network functions (NFs) in two separate availability zones. Failover within an availability zone can be recovered within the region without the need to route traffic to other regions. NFs may failover between availability zones within the same region.
  • the in-region networking uses underlay and overlay constructs to enable on-prem traffic to seamlessly flow to a standby NF in a secondary availability zone in the event that an active NF becomes unavailable.
  • Geo-Redundancy may be achieved by deploying two redundant NFs in two separate availability zones within the same region or in more than one region. This may be achieved by interconnecting all virtual private clouds (VPCs) via inter-region transit gateways and leveraging virtual routers (e.g., VPC routers) for overlay networking. In some cases, a virtual private cloud may span across multiple availability zones.
  • the overlay network may be built as a full-mesh enabling service continuity using the NFs deployed across NDCs in other regions during outage scenarios (e.g., BEDCs and RDCs within a first region may continue to function using an NDC in a second region if an outage occurs for an NDC in the first region).
  • Each availability zone may comprise one or more discrete data centers with redundant power, networking, and connectivity within a particular region. All availability zones within the particular region may be interconnected with high-bandwidth, low-latency networking over dedicated metro fiber providing high-throughput, low-latency networking between the availability zones. In at least one example, each availability zone within the particular region may be physically separated by at least a threshold distance (e.g., 100 miles) from each other to protect against power outages and natural disasters.
  • the RDCs across multiple availability zones may be interconnected using interregion transit gateways and virtual routers (e.g., VPC routers) within an overlay network.
  • An overlay network may comprise a virtual network of nodes and logical links that are built on top of an underlying existing network (or an underlay network).
  • BEDCs may be deployed within availability zones of a region.
  • BEDCs may be deployed in local zone (LZ) data centers (e.g., comprising small data centers that are close to major population centers that provide core cloud features for applications that require low latency connections). Deployment of NFs within local zone (LZ) data centers may allow the NFs to satisfy strict latency budgets.
  • the redundant network functions may comprise backup core network functions within a neighboring availability zone that will take over and service requests in the event of an availability zone failure.
  • in a 5G network, there may be at least one network slice assigned to a UE.
  • the 5G network slicing feature makes it possible to set up independent logical networks on a shared physical and virtual infrastructure.
  • a slice can, for example, ensure ultra-reliable low-latency communication (URLLC).
  • Each network slice may operate on specific tracking areas (TAs) served by a set of gNodeB base stations along with the access and mobility management function (AMF). This means that each network function can be placed in accordance with both the area and the service conveyed by the related slice.
  • One important aspect of network slicing orchestration is to map traffic from a single slice or group of slices to transport network resources that match the required end-to-end QoS for that slice or group of slices.
  • IP transport fabric may utilize virtual routers and segment routing with multiprotocol label switching (MPLS) for user plane traffic.
  • a network slice instance may extend end-to-end across a physical network.
  • a network slice instance may comprise one or more network slice subnet instances (NSSI) that may each be deployed by the download and instantiation of one or more virtual network functions.
  • a programmable network element (e.g., a programmable routing platform) may allow 100 virtual router instances to be configured.
  • Virtual router instances may also be configured and run using virtual servers.
  • Traffic from virtual routers may be encapsulated using generic routing encapsulation (GRE) tunnels, creating an overlay network.
  • the overlay network may utilize the intermediate system to intermediate system (IS-IS) routing protocol in conjunction with segment routing multi-protocol label switching (SR-MPLS) to distribute routing information and establish network reachability between the virtual routers.
  • multi-protocol border gateway protocol (MP-BGP) over GRE may be used to provide reachability from on-prem to the overlay network and reachability between different regions in the cloud.
  • a network slice may comprise an isolated end-to-end (E2E) virtualized network across all the network domains running on a shared physical infrastructure and may be controlled and managed independently.
  • Each network slice may comprise a collection of network resources in the form of multiple virtual network functions (VNFs) that are network capabilities implemented as software instances running on commodity servers or commercial off-the-shelf (COTS) hardware.
  • virtual network slices may be configured on-demand by downloading network resources into one or more existing network nodes or points of presence (PoP).
  • a point of presence (PoP) may comprise a demarcation point or access point at which two or more networks share a connection.
  • a PoP may include routers, switches, servers, and other devices necessary for network traffic (e.g., user plane traffic) to move between the two or more networks.
  • the virtual network slices may utilize the same shared physical network infrastructure in order to enable the end-to-end deployment of isolated network slices across different points of presence (PoPs) in a transport network.
  • each end-to-end network slice instance may include three network slice subnets corresponding with a core network, a transport network, and a radio access network.
  • the particular functionality of each network slice may be implemented by instantiating a virtual network function (VNF) associated with the particular functionality using one or more existing PoPs.
  • a PoP may have downloaded and instantiated one or more VNFs, with each VNF corresponding to a network slice. When a network slice is no longer required, then the corresponding VNF for the network slice may be deactivated or removed from the PoP.
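  • The per-slice VNF lifecycle at a PoP described above may be sketched as follows; the class and slice/image names are illustrative:

```python
class PointOfPresence:
    """PoP that instantiates one VNF per active network slice and
    removes the VNF when the slice is no longer required."""

    def __init__(self):
        self.vnfs = {}  # slice identifier -> instantiated VNF image

    def activate_slice(self, slice_id, vnf_image):
        # Download and instantiate the VNF implementing this slice's
        # particular functionality at this PoP.
        self.vnfs[slice_id] = vnf_image

    def deactivate_slice(self, slice_id):
        # The slice is no longer required: remove its VNF from the PoP.
        self.vnfs.pop(slice_id, None)
```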
  • FIG. 4B depicts an embodiment of an implementation of a data center hierarchy for a region.
  • FIG. 4C depicts an embodiment of an implementation of a data center hierarchy for a region.
  • a national data center (NDC) 310 may span across three availability zones 350a-350c. Within each availability zone 350 may reside a regional data center (RDC) 308.
  • Breakout edge data centers (BEDCs) 306a-306c reside within local zones 360a-360c. In some cases, passthrough edge data centers (PEDCs) that serve as aggregation points may be collocated with the breakout edge data centers (BEDC) 306a-306c.
  • a radio access network, such as the radio access network 120 in Figure 1B, may connect through a passthrough edge data center (PEDC) to a breakout edge data center (BEDC) 306 using two different direct private networking connections.
  • a direct private networking connection may provide direct connectivity from RAN DUs (on-prem) to local zones where cell sites are homed. Cell sites may be mapped to a particular local zone based on proximity to meet 5G RAN mid-haul latency expected between DU and CU.
  • a direct private networking connection may be used to make a private networking connection from a portion of the data center hierarchy into a data center owned by a cloud service provider.
  • the direct private networking connection may enable single-digit millisecond mid-haul connectivity between a radio access network and a breakout edge data center (BEDC).
  • FIG. 4D depicts an embodiment of a data center hierarchy implemented using a cloud-based compute and storage infrastructure.
  • the data center hierarchy includes multiple data center layers extending from a cell site layer (e.g., where an RRU resides). As the data center layers extend away from the cell site layer, the one-way latency for compute and storage resources may increase.
  • the data center layers include the local data center layer in which local data centers (LDCs) reside, the passthrough edge data center layer in which passthrough edge data centers (PEDCs) reside, the breakout edge data center layer in which breakout edge data centers (BEDCs) reside, the regional data center layer in which regional data centers (RDCs) reside, and the national data center layer in which national data centers (NDCs) reside.
  • the NDCs may house different server clusters for running regions 370a-370c.
  • region 370a may correspond with a first region (e.g., us-west-1)
  • region 370b may correspond with a second region (e.g., us-west-2)
  • region 370c may correspond with a third region (e.g., us-east-1).
  • Real and virtual routers within the data center layers may be connected together using an optical transport network (OTN) or high-speed pipes for RAN transport.
  • a virtual router 382 residing in the cell site layer may connect to a virtual router 384 residing in the passthrough edge data center layer via link 391.
  • the link 391 may comprise a high-speed link or an optical fiber link. Data may be transmitted over the link 391 using an optical transport network.
  • a virtual router 383 residing in the cell site layer may connect to a virtual router 386 residing in the local data center layer via link 394.
  • the link 394 may comprise a high-speed link or an optical fiber link.
  • the one-way latency between the virtual router 382 and the virtual router 384 may comprise a first time delay and the one-way latency between the virtual router 383 and the virtual router 386 may comprise a second time delay that is less than the first time delay.
  • Various network functions may run using compute and storage resources within the data center hierarchy.
  • a virtual network function may be run at various levels within the data center hierarchy.
  • a UPF, such as UPF 132 in Figure 1B, may be run within a local data center (LDC) of the local data center layer or within a breakout edge data center (BEDC) of the breakout edge data center layer.
  • a first redundant link 392 between the virtual router 382 and the virtual router 385 residing in the passthrough edge data center layer may allow applications running within the cell site layer to access data from either the virtual router 384 or the virtual router 385.
  • the first redundant link 392 allows applications running within the cell site layer with access to the virtual router 382 to receive data when a failure occurs to the virtual router 384, a failure occurs to the local zone 360a, or a failure occurs to the availability zone 350a.
  • a second redundant link 394 between the virtual router 383 and the virtual router 386 may allow applications running within the cell site layer with access to the virtual router 383 to receive data when a failure occurs to the virtual router 387.
  • a third redundant link 396 between the virtual router 388 and the virtual router 385 may allow applications running within the local data center layer with access to the virtual router 388 to receive data when a failure occurs to the local zone 360c or a failure occurs to the availability zone 350c.
  • the redundant links 392, 394, and 396 may be created or established for high priority users or sites.
  • the redundant links 392, 394, and 396 may be established or instantiated over time using virtual routers.
  • FIG. 4E depicts another embodiment of a data center hierarchy implemented using a cloud-based compute and storage infrastructure. As depicted, a new redundant link 398 between the virtual router 383 and the virtual router 388 residing in the local data center layer has been created. Also, a new redundant link 397 between the virtual router 388 and a virtual router residing in the breakout edge data center layer has been created. As depicted, servers 370-375 may reside within different layers of the data center hierarchy.
  • Figure 5 depicts an embodiment of cell sites 302a and 302b in communication with the local data center (LDC) 304. Each cell site 302 may include a tower structure 503 to which one or more remote radio units (RRUs) may be attached.
  • Each cell site 302 may include a cabinet 504 that holds computer hardware and storage resources in close proximity to the tower structures 503.
  • the cabinet 504a holds a router 506a and a hardware server 508a.
  • the cabinet 504b holds a router 506b, but does not hold a hardware server; therefore DU and CU components are not able to run locally at the cell site 302b.
  • the local data center (LDC) 304 includes a router 516 that is in communication with the router 506a at the cell site 302a.
  • the local data center (LDC) 304 also includes a router 517 that is in communication with the router 506b at the cell site 302b.
  • the local data center (LDC) 304 includes servers 520 and may include one or more redundant servers for facilitating failovers and hardware upgrades.
  • server 508a at cell site 302a may run containerized applications.
  • the server 508a may run one baseband pod in the DU for L1-L2 processing for all cells connected to cell site 302a.
  • a pod restart due to any failure could result in downtime for the entire cell site.
  • the DU application may be split into two pods to improve uptime and fault tolerance.
  • a multi-pod architecture may improve availability of services.
  • the server 508a may run containerized applications and microservices.
  • Microservices (or a microservice architecture) structures an application as a collection of small autonomous services that communicate through application programming interfaces (APIs).
  • An API may comprise a set of rules and protocols that define how applications connect to and communicate with each other.
  • a REST API may comprise an API that conforms to the design principles of the representational state transfer (REST) architectural style. REST APIs may be referred to as RESTful APIs.
  • REST APIs provide a flexible, lightweight way to integrate applications, and have emerged as the most common method for connecting components in microservices architectures.
  • REST APIs communicate via HTTP requests to perform standard database functions like creating, reading, updating, and deleting records (also known as CRUD) within a resource.
  • a creation operation may comprise a POST operation
  • a reading operation may comprise a GET operation
  • an updating operation may comprise a PUT operation
  • a delete operation may comprise a DELETE operation.
  • a REST API may use a GET request to retrieve a record, a POST request to create a record, a PUT request to update a record, and a DELETE request to delete a record.
  • when a client request is made via a RESTful API, it transfers a representation of the state of the resource to the requester or endpoint.
  • the state representation may be delivered in various formats, such as JSON (JavaScript Object Notation), HTML (HyperText Markup Language), or plain text. JSON is popular because it's readable by both humans and machines and is programming language-agnostic.
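The CRUD-to-HTTP-verb mapping described above can be sketched with a hypothetical in-memory resource store; the class and method names below are illustrative stand-ins for a RESTful service, not part of the disclosure.

```python
# Hypothetical in-memory resource store illustrating how REST CRUD
# operations map onto HTTP verbs: POST (create), GET (read),
# PUT (update), and DELETE (delete).
class ResourceStore:
    def __init__(self):
        self._records = {}
        self._next_id = 1

    def post(self, record):            # create a record
        record_id = self._next_id
        self._next_id += 1
        self._records[record_id] = dict(record)
        return record_id

    def get(self, record_id):          # read a record
        return self._records.get(record_id)

    def put(self, record_id, record):  # update an existing record
        if record_id not in self._records:
            return False
        self._records[record_id] = dict(record)
        return True

    def delete(self, record_id):       # delete a record
        return self._records.pop(record_id, None) is not None
```

In a microservices architecture, each autonomous service would expose these operations over HTTP rather than as local method calls.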
  • dynamic network slicing may be used to perform self-healing to compensate for a failure of a network node.
  • Self-healing may temporarily restore coverage by increasing power of neighboring cells to increase their coverage area.
  • Figure 6A depicts a flowchart describing one embodiment of a process for identifying a location within a data center hierarchy for running a user plane function of a core network.
  • the process of Figure 6A may be performed by a core network, such as the core network 130 in Figure 2E.
  • the process of Figure 6A may be performed using one or more virtual machines and/or one or more containerized applications.
  • the process of Figure 6A may be performed using a containerized environment, such as the containerized environment 279 in Figure 2E.
  • a latency requirement for a network connection to user equipment is acquired.
  • a first location of a distributed unit within a data center hierarchy is identified.
  • the distributed unit may correspond with the distributed unit DU 204 in Figure 1B.
  • the distributed unit may correspond with the virtualized distributed unit VDU 201 in Figure 3A that is located within a local data center LDC 304 within a data center hierarchy.
  • a second location within the data center hierarchy for running a user plane function is determined based on the latency requirement for the network connection to the user equipment and/or the first location of the distributed unit within the data center hierarchy.
  • the first location within the data center hierarchy and the second location within the data center hierarchy may correspond with the same location within the data center hierarchy. In other cases, the first location and the second location may correspond with different locations within the data center hierarchy.
  • the second location within the data center hierarchy for running a user plane function may correspond with a local data center, such as the LDC 304 in Figure 3B.
  • the second location within the data center hierarchy for running a user plane function may correspond with a breakout edge data center, such as the BEDC 306 in Figure 3A.
  • the user plane function may be subsequently moved to a location within the data center hierarchy that is closer to a cell site layer or closer to the location of a VDU.
  • the user plane function may correspond with UPF 132 in Figure 3A being moved from the BEDC 306 to the LDC 304 in Figure 3B.
  • the user plane function is run at the second location within the data center hierarchy or is executed within a data center located at the second location within the data center hierarchy.
  • one or more user plane packets are routed between a radio access network in communication with the user equipment and a data network using the user plane function.
  • the latency requirement for the network connection may comprise a one-way latency requirement from a mobile computing device to the user plane function. In other embodiments, the latency requirement for the network connection may comprise a round-trip latency requirement between a mobile computing device and a data network from which data is being transferred to the mobile computing device. In other embodiments, the latency requirement for the network connection may comprise a one-way latency requirement between an RRU and a DU in communication with the RRU of less than 160 microseconds. In other embodiments, the latency requirement for the network connection may comprise a one-way latency requirement between a DU and a CU in communication with the DU of less than 4 milliseconds.
  • Different virtualized network functions such as the user plane function and the session management function may be assigned to different locations within a data center hierarchy based on a latency requirement for a network connection to user equipment (e.g., for a particular network slice for a mobile computing device) and/or the location of the distributed unit within the data center hierarchy.
  • a user plane function may be assigned to a first data center within a data center hierarchy and a session management function that is paired with the user plane function may be assigned to a second data center within the data center hierarchy different from the first data center.
  • a latency requirement for a network connection to a mobile computing device is acquired, a location of a distributed unit in communication with a user plane function is identified, a data center location for running the user plane function is determined based on the latency requirement for the network connection to the mobile computing device and the location of the distributed unit, and an instruction to cause the user plane function to be run at the data center location is outputted.
  • the instruction may be transmitted to a server that resides at the data center location.
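The placement logic above (choose a data center for the UPF from a latency requirement and the DU's location) can be sketched as follows. The per-layer latency figures and the "prefer the layer farthest from the cell site that still meets the requirement" policy are invented assumptions for illustration, not values from the disclosure.

```python
# Assumed one-way latencies (ms) from the cell site to each data center
# layer; these numbers are placeholders, not figures from the patent.
LAYER_LATENCY_MS = {
    "LDC": 1.0,
    "BEDC": 4.0,
    "RDC": 10.0,
    "NDC": 30.0,
}
LAYER_ORDER = ["LDC", "BEDC", "RDC", "NDC"]  # edge -> core

def place_upf(latency_requirement_ms, du_layer):
    """Pick the layer farthest from the cell site that still meets the
    latency requirement, never closer to the edge than the DU itself."""
    candidates = [
        layer for layer in LAYER_ORDER
        if LAYER_LATENCY_MS[layer] <= latency_requirement_ms
        and LAYER_ORDER.index(layer) >= LAYER_ORDER.index(du_layer)
    ]
    return candidates[-1] if candidates else du_layer
```

A tighter latency requirement pulls the UPF toward the edge (e.g., from the BEDC down to the LDC), matching the UPF move described above.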
  • Figure 6B depicts a flowchart describing an embodiment of a process for establishing network connections using a core network.
  • the process of Figure 6B may be performed by a core network, such as the core network 130 in Figure 2E.
  • the process of Figure 6B may be performed using one or more virtual machines and/or one or more containerized applications.
  • the process of Figure 6B may be performed using a containerized environment, such as the containerized environment 279 in Figure 2E.
  • a first latency requirement for a first network connection to user equipment is acquired.
  • a second latency requirement for a second network connection to the user equipment is acquired.
  • the user equipment may comprise a mobile computing device.
  • the first latency requirement may comprise a one-way latency requirement to or from the user equipment.
  • the first latency requirement may comprise a round-trip latency requirement between the user equipment and a data network from which data is being transferred to the user equipment.
  • the first latency requirement may be greater than or less than the second latency requirement.
  • a set of shared core network functions is identified based on the first latency requirement and the second latency requirement.
  • the set of shared core network functions may correspond with the shared core network functions 131 in Figure 1H.
  • a first set of network functions for a first network slice is determined based on the first latency requirement.
  • a second set of network functions for a second network slice is determined based on the second latency requirement.
  • both the first set of network functions and the second set of network functions may include the set of shared core network functions.
  • the first set of network functions may correspond with the network functions within the slice 122a of Figure 1H and the shared core network functions 131 in Figure 1H and the second set of network functions may correspond with the network functions within the slice 122b of Figure 1H and the shared core network functions 131 in Figure 1H.
  • the first network connection to the user equipment (e.g., a mobile computing device) is established using the first set of network functions for the first network slice and the second network connection to the user equipment is established using the second set of network functions for the second network slice.
  • Both the first network connection and the second network connection may be concurrently established such that a mobile computing device may simultaneously connect to a data network using both the first network connection and the second network connection.
  • a placement of the first set of network functions within a data center hierarchy may be adjusted based on a quality of service parameter associated with the first network connection to the user equipment.
  • the placement of the first set of network functions may correspond with the location of a data center within the data center hierarchy in which the first set of network functions are executed.
  • the quality of service parameter may comprise a minimum network speed to user equipment or an end-to-end latency from the user equipment to a data network.
  • a set of network functions for a network slice may be identified based on a latency requirement for a network connection to user equipment.
  • the set of network functions may be updated based on an updated latency requirement for the network connection to the user equipment, which may in turn cause a network slice to be reconfigured based on the updated set of network functions.
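The shared-function identification in steps 622-630 can be sketched as a set intersection over per-slice function profiles. The function names follow 3GPP usage, but the profile contents and the latency cutoff below are invented examples, not the disclosed configuration.

```python
# Assumed per-profile network function sets; the mapping from latency
# requirement to profile is an illustrative example.
SLICE_PROFILES = {
    "low-latency":      {"AMF", "SMF", "UPF", "NSSF", "PCF"},
    "high-reliability": {"AMF", "SMF", "UPF", "NSSF", "PCF", "UDM"},
}

def profile_for(latency_requirement_ms):
    return "low-latency" if latency_requirement_ms <= 5.0 else "high-reliability"

def slice_functions(first_latency_ms, second_latency_ms):
    """Return the two slices' function sets plus the shared subset."""
    first = SLICE_PROFILES[profile_for(first_latency_ms)]
    second = SLICE_PROFILES[profile_for(second_latency_ms)]
    shared = first & second   # core functions both slices can reuse
    return first, second, shared
```

Both slices include the shared subset, so a single deployed instance of each shared function can serve both network connections concurrently.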
  • Figure 6C depicts a flowchart describing an embodiment of a process for establishing a network connection.
  • the process of Figure 6C may be performed by a core network, such as the core network 130 in Figure 2E, or a radio access network, such as the radio access network 120 in Figure 2C.
  • the process of Figure 6C may be performed using one or more virtual machines and/or one or more containerized applications.
  • the process of Figure 6C may be performed using a containerized environment, such as the containerized environment 279 in Figure 2E.
  • a set of quality of service parameters associated with a network connection to user equipment is acquired.
  • the set of quality of service parameters may include bit rate, bit error rate, throughput, packet loss, maximum packet loss rate, packet error rate, packet delay variation, end-to-end latency, network availability, jitter, and/or network bandwidth.
  • a set of network functions for establishing the network connection is identified.
  • the set of network functions may correspond with a set of virtualized network functions for a network slice, such as AMF 134a, SMF 133a, UPF 132a, NSSF 138, and PCF 135 depicted in Figure 1G.
  • the particular set of virtualized network functions for the network slice may be identified based on a network slice configuration (or use case) for the network slice, such as a high-reliability configuration or a low-latency configuration.
  • a data center location for running the set of network functions is determined based on the set of quality of service parameters or metrics.
  • the data center location may correspond with a local data center, such as the local data center LDC 304 in Figure 3C, and the set of network functions may include a user plane function, such as the user plane function UPF 132 in Figure 3C.
  • the set of network functions may be deployed using a containerized environment within the data center location.
  • the set of network functions is deployed within the containerized environment to establish the network connection in response to detection that the set of network functions may be deployed using the containerized environment.
  • the containerized environment may correspond with the containerized environment 279 in Figure 2E.
  • the determination of a data center location for running the set of network functions may be based on a latency requirement for the set of network functions. In other embodiments, the determination of a data center location for running the set of network functions may be based on a power requirement for the set of network functions, such as a maximum power requirement for the set of network functions. In one example, the maximum power requirement is associated with a maximum power consumption for computing resources executing the set of network functions (e.g., a server executing the set of network functions must consume less than 5W).
  • a set of network functions for establishing a network connection or that are associated with a network slice to establish a network connection may have a maximum power budget such that the total power consumed to execute the set of network functions across a data center hierarchy is restricted or limited.
  • a set of network functions for establishing a network connection or that are associated with a network slice to establish a network connection may have a maximum power budget per data center such that the power consumed to execute the set of network functions at each data center within a data center hierarchy is restricted or limited.
  • Each data center within a data center hierarchy may have a maximum power limit for network functions associated with a particular network slice.
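The data center selection in steps 642-648, combined with the per-data-center power limit described above, can be sketched as a filter over candidate data centers. The tuple layout and all numeric thresholds in this example are assumptions for illustration.

```python
def pick_data_center(candidates, max_latency_ms, power_per_nf_w, nf_count):
    """Return the first candidate data center whose one-way latency meets
    the QoS target and whose power draw for the network functions stays
    under that data center's limit.

    candidates: list of (name, one_way_latency_ms, power_limit_w),
    ordered from the edge outward.
    """
    for name, latency_ms, power_limit_w in candidates:
        estimated_power_w = power_per_nf_w * nf_count
        if latency_ms <= max_latency_ms and estimated_power_w <= power_limit_w:
            return name
    return None   # no data center satisfies both constraints
```

In the first test below, the LDC meets the latency target but its assumed power limit is exceeded, so the set of network functions lands at the BEDC instead.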
  • Figure 6D depicts a flowchart describing an embodiment of a process for adding and removing redundant links between routers.
  • the process of Figure 6D may be performed by a core network, such as the core network 130 in Figure 2E, or a radio access network, such as the radio access network 120 in Figure 2C.
  • the process of Figure 6D may be performed using one or more virtual machines and/or one or more containerized applications.
  • the process of Figure 6D may be performed using a containerized environment, such as the containerized environment 279 in Figure 2E.
  • a first failure rate corresponding with a first set of machines residing within a first data center layer is acquired.
  • the failure rate may comprise the number of virtual machines that have failed over a period of time (e.g., that failed over the past hour). In other cases, the failure rate may correspond with the number of virtual machines that are no longer responsive. In some cases, the failure rate may correspond with the number of physical servers that have had a software or hardware failure within a past period of time.
  • the first data center layer may include a first router (e.g., a virtual router or a physical router).
  • in step 674, it may be detected that the first failure rate has exceeded a threshold failure rate; for example, it may be detected that the first set of machines has had more than four failures within the past week.
  • in step 676, a second set of machines residing within a second data center layer is identified.
  • in step 678, a first redundant link between a third router residing within a third data center layer and the first router is removed in response to detection that the first failure rate has exceeded the threshold failure rate.
  • in step 680, a second redundant link is added between the third router residing within the third data center layer and the second router.
  • the first redundant link may be removed before adding the second redundant link.
  • the second set of machines residing in the second data center layer may be selected or identified as an end point for the second redundant link if it is detected that the second set of machines have not exceeded the threshold failure rate.
  • the third data center layer may correspond with a cell site layer
  • the first data center layer may correspond with a local data center layer
  • the second data center layer may correspond with a breakout edge data center layer
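The link add/remove process above can be sketched as retargeting a redundant link away from a failing data center layer. The data model (links as router-name pairs, failures as a per-target count) is an assumption for illustration, not the disclosed implementation.

```python
def retarget_redundant_link(links, source, failed_target, new_target,
                            failures, threshold):
    """links: set of (source, target) router pairs.
    failures: mapping of target -> failure count over the window.

    If the current target's failure count exceeds the threshold and the
    candidate target is healthy, remove the old redundant link and add
    one to the candidate (mirroring steps 678 and 680)."""
    if failures.get(failed_target, 0) <= threshold:
        return links                 # failure rate acceptable; no change
    if failures.get(new_target, 0) > threshold:
        return links                 # candidate is no healthier; keep link
    links = set(links)
    links.discard((source, failed_target))   # remove first redundant link
    links.add((source, new_target))          # add second redundant link
    return links
```

As noted above, the removal may be ordered before the addition so the link budget is never exceeded.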
  • FIG. 7A depicts one embodiment of a portion of a 5G network including user equipment UE 108 in communication with a small cell structure 701.
  • the small cell structure 701 may comprise a pole or a mini-tower structure to which one or more remote radio units (RRUs) may be attached.
  • the small cell structure 701 may simultaneously support wireless communications with numerous mobile computing devices not depicted.
  • a cabinet 504d may hold computer hardware and storage resources in close proximity to the small cell structure 701 in order to perform packet routing and data processing tasks.
  • the cabinet 504d may house a router 506d (e.g., a real hardware router or a virtual router) and a server 508d (e.g., a real machine or a virtual machine) that is running a virtualized distributed unit (VDU) 705.
  • the cabinet 504d may be within a wireless communication distance of a set of small cell structures including the small cell structure 701 and/or may have one or more wired connections to the set of small cell structures (e.g., network cabling located below a street connecting the small cell structure 701 to hardware within the cabinet 504d).
  • the small cell structure 701 may communicate with hardware within the cabinet 504d via communication path 703 (e.g., a wireless communication path or a radio link path).
  • the server 508d within the cabinet 504d may run DU and CU components.
  • the router 506d may exchange data or be in communication with hardware within the local data center LDC 304 via communication path 711.
  • the local data center LDC 304 includes a router 516 and a server 520a.
  • the small cell structure 701 may communicate with hardware at the cell site 302c via communication path 702.
  • the cell site 302c may include a cabinet 504c.
  • the hardware within the cabinet 504c may include a router 506c and a server 508c.
  • the router 506c may be in communication with hardware resources within the local data center LDC via communication path 712.
  • a first communication path from the small cell structure 701 may include communication path 702 and communication path 712. Data corresponding with a first network slice may traverse the first communication path.
  • a second communication path from the small cell structure 701 may include communication path 703 and communication path 711. Data corresponding with a second network slice different from the first network slice may traverse the second communication path.
  • Figure 7B depicts one embodiment of the portion of the 5G network in Figure 7A with an additional communication path that comprises a direct private connection 714 between the hardware resources within the cabinet 504c and the hardware resources within the local data center LDC 304. Moreover, the virtualized distributed unit VDU 705 is now running on server 520a within the local data center LDC 304.
  • the direct private connection 714 may include fiberoptic cables and may be used to establish or connect to a virtual private cloud hosted by the local data center LDC 304.
  • the user equipment UE 108 may be in communication with one or more data networks not depicted via a plurality of network slices.
  • a first network slice of the plurality of network slices may traverse communication paths 703 and 711. The first network slice may correspond with a low-latency configuration that demands a first latency requirement.
  • a second network slice of the plurality of network slices may traverse communication paths 702 and 712. The second network slice may correspond with a high-reliability configuration that demands a second latency requirement that is greater than the first latency requirement.
  • a third network slice of the plurality of network slices may traverse communication paths 702 and 714. The third network slice may correspond with a high-security configuration that demands a third latency requirement greater than the second latency requirement.
  • the assignment of the virtualized distributed unit VDU 705 to a particular server within a data center hierarchy may depend on the requirements of one or more network slices supported by the VDU 705.
  • the addition of the third network slice through the direct private connection 714 may cause the location of the VDU 705 to be moved from the server 508d to the server 520a.
  • the virtualized distributed unit VDU 705 may only be redeployed within the local data center LDC 304 if the first latency requirement between the virtualized distributed unit VDU 705 and the user equipment 108 would still be satisfied.
  • a second virtualized distributed unit not depicted may be instantiated within a server within the local data center to support the second network slice and the third network slice.
  • the server assignment for the virtualized distributed unit 705 may be determined based on the latency requirements of the network slices supported by the virtualized distributed unit 705. In some cases, the server assignment for the virtualized distributed unit 705 may be determined based on the maximum latency requirements of the network slices supported by the virtualized distributed unit 705 and/or quality of service requirements of the network slices supported by the virtualized distributed unit 705.
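The redeployment constraint above (the VDU may only move to a server that still satisfies the strictest latency requirement among its slices) can be sketched as a simple check. The latency figures in the test are invented for illustration.

```python
def vdu_can_move(server_latency_ms, slice_latency_requirements_ms):
    """Return True if a candidate server's one-way latency to the UE
    satisfies the strictest (smallest) latency requirement among all
    network slices supported by the VDU."""
    strictest = min(slice_latency_requirements_ms)
    return server_latency_ms <= strictest
```

If the check fails, the VDU stays put and, as described above, a second virtualized distributed unit may instead be instantiated in the local data center to support the less latency-sensitive slices.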
  • Figure 7C depicts one embodiment of a portion of a 5G network in which a plurality of small cell structures including small cell structure 701 and small cell structure 706 are in communication with hardware resources within a cabinet 504d.
  • the user equipment UE 108 may be in communication with a local data center LDC 304, a pass-through EDC 764, and a breakout EDC 766 via data communication paths 732.
  • the data communication paths 732 may include a number of links 742-746 over which data may be transferred between the hardware resources within the cabinet 504d and the local data center LDC 304, the pass-through EDC 764, and the breakout EDC 766.
  • the data communication paths 732 may include a primary low-latency link 742, a primary high-reliability link 743, a secondary high-reliability link 744, a primary high-security link 745, and a redundant link 746.
  • the redundant link 746 may comprise a duplicate link between the router 506d and a router within one of the data centers to which the data communication paths 732 connects.
  • Figure 7D depicts one embodiment of a portion of a 5G network in which a plurality of small cell structures including small cell structure 701, small cell structure 706, and small cell structure 707 are in communication with hardware resources within the cabinet 504d.
  • the routers 506d and 506e may comprise virtual routers and the servers 508d and 508e may comprise virtual servers or virtual machines.
  • the number of virtualized hardware resources within the cabinet 504d may be scaled up or down depending on the number of mobile computing devices supported by the small cell structures within the small cell area 722. In some cases, a maximum power requirement may be used to determine the maximum number of virtual servers that may be instantiated within the cabinet 504d.
  • the data communication paths 732 include redundant links 746-748.
  • redundant links between virtual routers within a data center hierarchy may be scaled up or down based on the number of high-reliability network slice configurations and/or the quality of service parameters associated with network slices supported by the virtual routers.
  • a data communication path for a network slice may be assigned a redundant link if the network slice has been configured with at least a minimum network speed and the network slice has experienced at least a threshold number of data errors.
  • a data communication path for a network slice may be assigned a redundant link if one or more routers and/or one or more servers supporting the network slice have experienced at least a threshold number of failures (e.g., at least two failures within the past 24 hours).
  • the total number of redundant links available for use within the data communications paths 732 may be set based on a power requirement for supporting the redundant links.
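The assignment rule above (a slice's path earns a redundant link when its configured speed and observed error count cross thresholds, capped by a power budget for redundant links) can be sketched as follows; the thresholds and per-link power figure are illustrative assumptions.

```python
def assign_redundant_links(slices, min_speed_mbps, error_threshold,
                           power_per_link_w, power_budget_w):
    """slices: list of (name, configured_speed_mbps, error_count).

    Assign a redundant link to each slice meeting both the minimum
    configured network speed and the data-error threshold, stopping
    once the redundant-link power budget would be exceeded."""
    assigned = []
    for name, speed, errors in slices:
        if speed >= min_speed_mbps and errors >= error_threshold:
            if (len(assigned) + 1) * power_per_link_w <= power_budget_w:
                assigned.append(name)
    return assigned
```

With a 5 W budget and 2 W per link in the test below, only two qualifying slices receive redundant links.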
  • Figure 8A depicts a flowchart describing an embodiment of a process for deploying a distributed unit within a data center hierarchy.
  • the process of Figure 8A may be performed by a core network, such as the core network 130 in Figure 2E, or a radio access network, such as the radio access network 120 in Figure 2C.
  • the process of Figure 8A may be performed using one or more virtual machines and/or one or more containerized applications.
  • the process of Figure 8A may be performed using a containerized environment, such as the containerized environment 279 in Figure 2E.
  • a communication latency between a user device (e.g., a mobile computing device) and a virtualized distributed unit deployed within a first data center layer is determined.
  • the communication latency may correspond with a one-way data latency between the user device and the virtualized distributed unit.
  • the user device may correspond with user equipment.
  • a location of the user device is acquired. In one embodiment, the location of the user device may comprise a GPS location.
  • a network slice configuration is acquired.
  • the network slice configuration may be associated with a low latency configuration or a high reliability configuration.
  • a network slice configuration may be associated with a minimum network bandwidth or a maximum data transfer latency between the user device and a data network.
  • a latency requirement for communication (e.g., data communication) between the user device and the virtualized distributed unit is determined based on the location of the user device and the network slice configuration.
  • in step 810, it is detected that the communication latency is greater than the latency requirement for the communication between the user device and the virtualized distributed unit.
  • in step 812, a location of a remote radio unit in communication with the mobile computing device is identified.
  • the location of the remote radio unit may correspond with a data center within a data center hierarchy. In one example, the location of the remote radio unit may correspond with a cell site or cell tower.
  • in step 814, a second data center layer for the virtualized distributed unit is determined based on the location of the remote radio unit and the network slice configuration.
  • the virtualized distributed unit is redeployed within the second data center layer.
  • the virtualized distributed unit may be transferred from the first data center layer to the second data center layer.
  • in step 818, the virtualized distributed unit is maintained within the second data center layer.
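The layer-selection logic described in steps 810 through 816 can be sketched in Python. This is a minimal illustration, not part of the disclosure: the layer names, the `NetworkSliceConfig` dataclass, and the one-layer-at-a-time move policy are assumptions made for the example.

```python
from dataclasses import dataclass

# Hypothetical data center layers, ordered from closest to the remote radio
# unit (cell site) to farthest (regional data center). The names are
# illustrative placeholders.
LAYERS = ["cell_site", "local_dc", "regional_dc"]

@dataclass
class NetworkSliceConfig:
    max_latency_ms: float  # maximum tolerated latency for the slice

def select_layer(measured_latency_ms: float,
                 current_layer: str,
                 slice_config: NetworkSliceConfig) -> str:
    """Choose a data center layer for a virtualized distributed unit (vDU).

    If the measured latency to the user device violates the latency
    requirement derived from the network slice configuration, move the vDU
    one layer closer to the remote radio unit; otherwise leave it in place.
    """
    index = LAYERS.index(current_layer)
    if measured_latency_ms > slice_config.max_latency_ms and index > 0:
        return LAYERS[index - 1]  # redeploy closer to the user device
    return current_layer
```

For example, a vDU in `regional_dc` whose measured latency of 9 ms exceeds a 5 ms low-latency slice requirement would be redeployed to `local_dc`, while a vDU already at the cell site stays put.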
  • One example of a process for maintaining a virtualized distributed unit is depicted in Figure 8B.
  • Figure 8B depicts a flowchart describing an embodiment of a process for maintaining a distributed unit.
  • the process of Figure 8B may be performed by a core network, such as the core network 130 in Figure 2E, or a radio access network, such as the radio access network 120 in Figure 2C.
  • the process of Figure 8B may be performed using one or more virtual machines and/or one or more containerized applications.
  • the process of Figure 8B may be performed using a containerized environment, such as the containerized environment 279 in Figure 2E or the containerized environment 279 in Figure 2C.
  • a number of remote radio units in communication with a virtualized distributed unit is determined.
  • the virtualized distributed unit may connect to at least ten different remote radio units.
  • a plurality of network slice configurations corresponding with a plurality of network slices supported by the virtualized distributed unit is acquired.
  • a threshold service availability for the virtualized distributed unit is determined based on the plurality of network slice configurations. In some cases, the service availability may correspond with a percentage of time that the virtualized distributed unit is available for operation or correspond with a particular system uptime. The threshold service availability may be set to the highest service availability required by the plurality of network slice configurations.
  • a first number of replica pods for the virtualized distributed unit is determined based on the number of remote radio units in communication with the virtualized distributed unit and the threshold service availability.
  • the first number of replica pods for the virtualized distributed unit may comprise the number of remote radio units in communication with the virtualized distributed unit.
  • in step 840, it is detected that the first number of replica pods is different than a number of pods running the virtualized distributed unit.
  • a first instruction to adjust the number of pods running the virtualized distributed unit to the first number of replica pods is transmitted.
  • in step 844, an uptime for the virtualized distributed unit is determined.
  • in step 846, the number of pods running the virtualized distributed unit is adjusted based on the uptime for the virtualized distributed unit.
  • a second instruction may be transmitted to a replication controller to increase the first number of replica pods for the virtualized distributed unit.
  • the first number of replica pods for the virtualized distributed unit may be reduced in response to detection that an uptime for the virtualized distributed unit is greater than a threshold uptime.
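The replica-count policy of steps 830 through 846 can be sketched as follows. This is an assumed illustration: the one-replica-per-radio-unit rule, the 0.999 availability cutoff, and the function names are not taken from the disclosure; only the general shape (highest required availability sets the threshold, and the pod count is reconciled toward the target) follows the steps above.

```python
def threshold_availability(slice_availabilities: list[float]) -> float:
    """The threshold service availability is the highest availability
    required by any network slice supported by the vDU."""
    return max(slice_availabilities)

def required_replicas(num_radio_units: int, threshold: float) -> int:
    """Derive a replica pod count for a virtualized distributed unit.

    A simple assumed policy: one replica per connected remote radio unit,
    plus one spare replica when a slice demands three-nines availability
    or better.
    """
    replicas = max(1, num_radio_units)
    if threshold >= 0.999:
        replicas += 1  # extra headroom for high-availability slices
    return replicas

def reconcile(desired_replicas: int, running_pods: int) -> int:
    """Return the pod-count adjustment (positive means scale up) that
    would be requested from a replication controller when the desired
    and running counts differ."""
    return desired_replicas - running_pods
```

A vDU serving ten remote radio units under a 99.95% slice would thus target eleven replica pods, and a replication controller would be instructed to add one pod if only ten are running.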
  • At least one embodiment of the disclosed technology includes determining a first failure rate corresponding with a first set of machines residing within a first data center layer.
  • the first data center layer includes a first router.
  • the method further comprises detecting that the first failure rate has exceeded a threshold failure rate and identifying a second set of machines residing within a second data center layer based on the threshold failure rate in response to detection that the first failure rate has exceeded the threshold failure rate.
  • the second data center layer includes a second router.
  • the method further comprises removing a first redundant link between a third router residing within a third data center layer and the first router in response to detection that the first failure rate has exceeded the threshold failure rate and adding a second redundant link between the third router residing within the third data center layer and the second router.
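The failure-rate-driven link migration described in this embodiment can be sketched as a pure function. The router names and the tuple representation of a redundant link are assumptions made for illustration.

```python
def rehome_redundant_link(first_layer_failure_rate: float,
                          threshold_failure_rate: float,
                          redundant_link: tuple[str, str],
                          second_layer_router: str) -> tuple[str, str]:
    """Re-home a redundant link away from a failing data center layer.

    redundant_link is (third_layer_router, first_layer_router). When the
    first layer's failure rate exceeds the threshold, the redundant link
    to the first router is removed and a new redundant link is added
    between the third-layer router and a router residing within a
    healthier second data center layer.
    """
    third_router, first_router = redundant_link
    if first_layer_failure_rate > threshold_failure_rate:
        return (third_router, second_layer_router)
    return redundant_link
```

If the first layer's failure rate stays at or below the threshold, the existing redundant link is left untouched.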
  • At least one embodiment of the disclosed technology includes determining a data transfer latency between a mobile computing device and a virtualized distributed unit deployed within a first data center layer, acquiring a latency requirement for communication between the mobile computing device and the virtualized distributed unit, detecting that the data transfer latency is greater than the latency requirement for the communication between the mobile computing device and the virtualized distributed unit, identifying a second data center layer for the virtualized distributed unit in response to detection that the data transfer latency is greater than the latency requirement for the communication between the mobile computing device and the virtualized distributed unit, terminating the virtualized distributed unit within the first data center layer, and deploying the virtualized distributed unit within the second data center layer such that a data transfer latency between the mobile computing device and the virtualized distributed unit deployed within the second data center layer is less than the data transfer latency between the mobile computing device and the virtualized distributed unit when the virtualized distributed unit was deployed within the first data center layer.
  • At least one embodiment of the disclosed technology includes determining a first number of replica pods for a virtualized distributed unit, detecting that the first number of replica pods is different than a number of pods running the virtualized distributed unit, and transmitting an instruction to a replication controller to adjust the number of pods running the virtualized distributed unit to the first number of replica pods.
  • At least one embodiment of the disclosed technology includes acquiring a latency requirement for a network connection to user equipment, determining a location within a data center hierarchy for running a user plane function based on the latency requirement for the network connection to the user equipment, routing one or more user plane packets between a radio access network in communication with the user equipment and a data network using the user plane function, and running the user plane function at the location within the data center hierarchy.
  • the method may further comprise identifying a location of a distributed unit (e.g., a virtualized distributed unit) in communication with the user plane function and determining the location within the data center hierarchy for running the user plane function based on the location of the distributed unit.
  • At least one embodiment of the disclosed technology includes determining a first latency requirement for a first network connection to user equipment, determining a second latency requirement for a second network connection to the user equipment, identifying a set of shared core network functions based on the first latency requirement and the second latency requirement, determining a first set of network functions for a first network slice based on the first latency requirement, and determining a second set of network functions for a second network slice based on the second latency requirement. Both the first set of network functions and the second set of network functions include the set of shared core network functions.
  • the method further comprises concurrently establishing the first network connection to the user equipment using the first set of network functions and the second network connection to the user equipment using the second set of network functions.
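The composition of per-slice function sets around a shared core can be sketched as follows. The function names (`amf`, `smf`, `edge_upf`, `regional_upf`) and the 10 ms cutoff are hypothetical; the embodiment only requires that both slices' function sets include the shared core network functions.

```python
def build_slice_functions(latency_requirement_ms: float,
                          shared_core_functions: frozenset[str]) -> frozenset[str]:
    """Compose a network slice's set of network functions.

    Every slice includes the shared core network functions; a
    slice-specific user plane function is then chosen according to the
    slice's latency requirement.
    """
    if latency_requirement_ms < 10.0:
        slice_specific = {"edge_upf"}      # low latency: UPF near the edge
    else:
        slice_specific = {"regional_upf"}  # relaxed latency: regional UPF
    return shared_core_functions | slice_specific
```

Two slices built this way for the same user equipment share their control plane functions while keeping distinct user plane placements, which is what allows the two network connections to be established concurrently.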
  • At least one embodiment of the disclosed technology includes acquiring a set of quality of service parameters associated with a network connection to user equipment, identifying a set of network functions for establishing the network connection, determining a data center location for running the set of network functions based on the set of quality of service parameters, detecting that the set of network functions may be deployed using a containerized environment within the data center location, and deploying the set of network functions within the containerized environment to establish the network connection in response to detection that the set of network functions may be deployed using the containerized environment.
  • the disclosed technology may be described in the context of computer-executable instructions being executed by a computer or processor.
  • the computer-executable instructions may correspond with portions of computer program code, routines, programs, objects, software components, data structures, or other types of computer-related structures that may be used to perform processes using a computer.
  • Computer program code used for implementing various operations or aspects of the disclosed technology may be developed using one or more programming languages, including an object-oriented programming language such as Java or C++, a functional programming language such as Lisp, a procedural programming language such as the “C” programming language or Visual Basic, or a dynamic programming language such as Python or JavaScript.
  • computer program code or machine-level instructions derived from the computer program code may execute entirely on an end user’s computer, partly on an end user’s computer, partly on an end user’s computer and partly on a remote computer, or entirely on a remote computer or server.
  • each step in a flowchart may correspond with a program module or portion of computer program code, which may comprise one or more computer-executable instructions for implementing the specified functionality.
  • the functionality noted within a step may occur out of the order noted in the figures. For example, two steps shown in succession may, in fact, be executed substantially concurrently, or the steps may sometimes be executed in the reverse order, depending upon the functionality involved. In some implementations, steps may be omitted and other steps added without departing from the spirit and scope of the present subject matter.
  • the functionality noted within a step may be implemented using hardware, software, or a combination of hardware and software.
  • the hardware may include microcontrollers, microprocessors, field programmable gate arrays (FPGAs), and electronic circuitry.
  • the term “processor” may refer to a real hardware processor or a virtual processor, unless expressly stated otherwise.
  • a virtual machine may include one or more virtual hardware devices, such as a virtual processor and a virtual memory in communication with the virtual processor.
  • a connection may be a direct connection or an indirect connection (e.g., via another part).
  • when an element is referred to as being connected to another element, the element may be directly connected to the other element or indirectly connected to the other element via intervening elements.
  • when an element is referred to as being directly connected to another element, there are no intervening elements between the element and the other element.
  • a “set” of objects may refer to a “set” of one or more of the objects.
  • the phrases “a first object corresponds with a second object” and “a first object corresponds to a second object” may refer to the first object and the second object being equivalent, analogous, or related in character or function.
  • the term “or” should be interpreted in the conjunctive and the disjunctive. A list of items linked with the conjunction “or” should not be read as requiring mutual exclusivity among the items, but rather should be read as “and/or” unless expressly stated otherwise.
  • the terms “at least one,” “one or more,” and “and/or,” as used herein, are open-ended expressions that are both conjunctive and disjunctive in operation.
  • the phrase “A and/or B” covers embodiments having element A alone, element B alone, or elements A and B taken together.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

Methods and apparatuses are described for improving telecommunication services through the intelligent deployment of radio access network components and redundant links within a data center hierarchy to satisfy latency, power, availability, and quality of service requirements for one or more network slices. The radio access network components may include virtualized distributed units (vDUs) and virtualized centralized units (vCUs). To satisfy a latency requirement for a network slice, various components of a radio access network may need to be redeployed closer to the user equipment. To satisfy a power requirement for the network slice, various components of the radio access network may need to be redeployed closer to core network components. Over time, the radio access network components may be dynamically reassigned to different layers within a data center hierarchy in order to satisfy changing latency requirements and power requirements for the network slice.
PCT/US2023/018716 2022-04-15 2023-04-14 Gestion de liaisons redondantes WO2023201079A1 (fr)

Applications Claiming Priority (12)

Application Number Priority Date Filing Date Title
US202263331643P 2022-04-15 2022-04-15
US202263331647P 2022-04-15 2022-04-15
US202263331645P 2022-04-15 2022-04-15
US63/331,647 2022-04-15
US63/331,643 2022-04-15
US63/331,645 2022-04-15
US17/974,983 US20230337047A1 (en) 2022-04-15 2022-10-27 Utilization of replicated pods with distributed unit applications
US17/974,977 2022-10-27
US17/974,983 2022-10-27
US17/974,980 2022-10-27
US17/974,980 US20230337046A1 (en) 2022-04-15 2022-10-27 Utilization of virtualized distributed units at cell sites
US17/974,977 US20230336287A1 (en) 2022-04-15 2022-10-27 Management of redundant links

Publications (1)

Publication Number Publication Date
WO2023201079A1 (fr)

Family

ID=86331109

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2023/018716 WO2023201079A1 (fr) 2022-04-15 2023-04-14 Gestion de liaisons redondantes

Country Status (1)

Country Link
WO (1) WO2023201079A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11095543B1 (en) * 2020-11-23 2021-08-17 Verizon Patent And Licensing Inc. Systems and methods for high availability and performance preservation for groups of network functions
US20210392477A1 (en) * 2020-06-11 2021-12-16 Verizon Patent And Licensing Inc. Wireless network policy manager for a service mesh
US11252655B1 (en) * 2020-12-10 2022-02-15 Amazon Technologies, Inc. Managing assignments of network slices

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LATIF AMMAR ET AL: "Telco Meets AWS Cloud: Deploying DISH's 5G Network in AWS Cloud | AWS for Industries", 27 February 2022 (2022-02-27), XP093056715, Retrieved from the Internet <URL:https://aws.amazon.com/blogs/industries/telco-meets-aws-cloud-deploying-dishs-5g-network-in-aws-cloud/> [retrieved on 20230622] *

Similar Documents

Publication Publication Date Title
US11368862B2 (en) Point-to-multipoint or multipoint-to-multipoint mesh self-organized network over WIGIG standards with new MAC layer
US11206551B2 (en) System and method for using dedicated PAL band for control plane and GAA band as well as parts of PAL band for data plan on a CBRS network
Moradi et al. SkyCore: Moving core to the edge for untethered and reliable UAV-based LTE networks
US11743953B2 (en) Distributed user plane functions for radio-based networks
US11888701B1 (en) Self-healing and resiliency in radio-based networks using a community model
Khan et al. The reconfigurable mobile network
US20230337046A1 (en) Utilization of virtualized distributed units at cell sites
US20230336439A1 (en) Private network connections for ran distributed units
US20230337047A1 (en) Utilization of replicated pods with distributed unit applications
US20230336287A1 (en) Management of redundant links
US20230336440A1 (en) Containerization of telecommunication network functions
US20230337125A1 (en) Dynamic virtual networks
US20230336430A1 (en) Decoupling of packet gateway control and user plane functions
JP7437569B2 (ja) 無線式ネットワーク向けの高可用性データ処理ネットワーク機能
Khorsandi et al. Adaptive function chaining for efficient design of 5G xhaul
WO2023201079A1 (fr) Gestion de liaisons redondantes
WO2023201077A1 (fr) Découplage de commande de passerelle de paquets et de fonctions de plan utilisateur
CN116074896A (zh) 多接入边缘计算切片
Moradi Software-driven and virtualized architectures for scalable 5G networks
US11838150B2 (en) Leveraging a virtual router to bridge between an underlay and an overlay
US20240147260A1 (en) Atomic deterministic next action manager
US11843537B2 (en) Telecommunication service provider controlling an underlay network in cloud service provider environment
US20240147259A1 (en) Repair atomic deterministic next action
US20240143384A1 (en) Atomic deterministic next action
US20240064042A1 (en) Leveraging a virtual router to bridge between an underlay and an overlay

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23722997

Country of ref document: EP

Kind code of ref document: A1