WO2022125296A1 - Mechanisms for enabling in-network computing services - Google Patents


Info

Publication number
WO2022125296A1
Authority
WO
WIPO (PCT)
Prior art keywords
service
network
rof
function
data
Application number
PCT/US2021/060270
Other languages
French (fr)
Inventor
Qian Li
Zongrui DING
Geng Wu
Original Assignee
Intel Corporation
Application filed by Intel Corporation filed Critical Intel Corporation
Priority to CN202180075376.7A priority Critical patent/CN116868556A/en
Publication of WO2022125296A1 publication Critical patent/WO2022125296A1/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 61/00: Network arrangements, protocols or services for addressing or naming
    • H04L 61/45: Network directories; Name-to-address mapping
    • H04L 61/4541: Directories for service discovery
    • H04L 61/4505: Using standardised directories; using standardised directory access protocols
    • H04L 61/4511: Using the domain name system [DNS]
    • H04L 61/50: Address allocation
    • H04L 61/5007: Internet protocol [IP] addresses
    • H04L 61/5014: Using dynamic host configuration protocol [DHCP] or bootstrap protocol [BOOTP]

Definitions

  • FIG. 1 illustrates an example workload distribution scheme used by cloud computing systems.
  • Embodiment 1 may address the long connection establishment time issue described above by enabling in-network DNS.
  • the CSF (e.g., the CSF of Figure 2) may be equipped with a DNS function.
  • An example procedure for embodiment 1 may operate in accordance with Figures 2 and 3 as follows:
  • the in-network service mesh selects a service instance for the device.
  • the selection decision can be based on factors such as location of the device, location of the service instance, load of the service instance, estimated service response time, etc.
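The selection factors above can be combined into a simple weighted cost, as in this illustrative sketch (the weights, the `ServiceInstance` fields, and the linear cost model are assumptions; the patent does not specify a formula):

```python
from dataclasses import dataclass

@dataclass
class ServiceInstance:
    name: str
    distance_km: float       # assumed stand-in for device/instance location
    load: float              # current load, 0.0 (idle) .. 1.0 (saturated)
    est_response_ms: float   # estimated service response time

def select_instance(instances, w_dist=0.2, w_load=0.5, w_resp=0.3):
    """Pick the instance with the lowest weighted cost over the factors
    named in the text: location, load, and estimated response time."""
    def cost(inst):
        # lower is better: near, lightly loaded, fast-responding instances win
        return (w_dist * inst.distance_km
                + w_load * inst.load * 100
                + w_resp * inst.est_response_ms)
    return min(instances, key=cost)

candidates = [
    ServiceInstance("edge-a", distance_km=2.0, load=0.8, est_response_ms=12.0),
    ServiceInstance("edge-b", distance_km=5.0, load=0.2, est_response_ms=8.0),
]
best = select_instance(candidates)
```

With the default weights the lightly loaded "edge-b" wins despite being farther away; changing the weights changes the trade-off.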
  • Option 1: A network controller- or DHCP server-managed association.
  • When a device first attaches to a network, the network controller or the DHCP server will assign a CSF for the device. The network controller or the DHCP server may then configure the device with the CSF address.
  • each service may send a notification message to the service registry when there is status change.
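The status-change notification above might be modeled by the following minimal registry sketch (the `ServiceRegistry` class and its method names are hypothetical; the text only requires that a service notify the registry when its status changes):

```python
class ServiceRegistry:
    """Tracks service status; services push a notification on status change."""
    def __init__(self):
        self.status = {}

    def notify(self, service_id, new_status):
        # record the change; return True only if the status actually changed
        changed = self.status.get(service_id) != new_status
        if changed:
            self.status[service_id] = new_status
        return changed

reg = ServiceRegistry()
reg.notify("image-recognition", "UP")
```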
  • Embodiment 2 may address the first two issues described above. Specifically, Embodiment 2 may resolve the latency related to computing service establishment time, as well as computing resource/service discovery/access inflexibility.
  • a service orchestration function may be introduced.
  • the service orchestration function (SOF) resides in both the device and the CSF.
  • the SOF may run on top of the device and/or CSF operating system (OS), and may be used to dynamically steer service execution between the device and the network.
  • Figure 6 provides an illustration of the SOF in the device and the network, and the interface between the two ends.
  • client applications on the device interact with the service orchestration function - client (SOF-C) to request services.
  • the SOF-C interacts with the device OS and a service orchestration function - network (SOF-N) to get information on service availability (locally and in network).
  • the SOF-C may then dynamically steer an application’s service request between local execution (e.g., on the device) and network execution.
    a. If execution is done locally (e.g., on the device), the SOF-C will send the service request to the local OS, e.g., via a system call.
    b. If execution is done in network, the SOF-C will send the service request to the CSF, e.g., via remote procedure call.
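Steps (a) and (b) above can be sketched as a small dispatch function; the preference for local execution, and the use of callables as stand-ins for the system call and the RPC to the CSF, are assumptions for illustration:

```python
def steer_request(request, local_available, network_available, rpc, syscall):
    """Sketch of SOF-C steering between local and network execution."""
    # (a) local execution via a system call when the service exists on-device
    if local_available:
        return syscall(request)
    # (b) otherwise execute in-network via a remote procedure call to the CSF
    if network_available:
        return rpc(request)
    raise RuntimeError("service unavailable locally and in network")
```

A real SOF-C would base `local_available`/`network_available` on the service-availability information it obtains from the device OS and the SOF-N.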
  • EMBODIMENT 4 DEVICE-NETWORK RESOURCE ORCHESTRATION
  • the Resource orchestration function - client (ROF-C) of the device may interact with the resource orchestration function - network (ROF-N) of the CSF via a device-network resource orchestration interface to get a list of network computing resources and capabilities the device can use. This can be done via one or more of the following approaches:
  • Option 1: The ROF-N broadcasts a list of one or more network computing resources and capabilities as system information.
  • the ROF-C exposes the computing resource list and capabilities (from either or both of Options 1 and 2) to the OS of the device.
  • the ROF-C sends a resource request to the ROF-N.
  • the ROF-C and ROF-N may establish a data path between the device and the network for access to the computing resource.
  • the OS may then use the computing resource via the data path.
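The ROF-C/ROF-N exchange described above (discover resources, request one, then use it over a data path) can be modeled roughly as follows. The class names mirror the text, but the method names and the boolean grant model are illustrative; a real implementation would also establish and tear down the data path:

```python
class ROF_N:
    """Network-side resource orchestration: lists and grants resources."""
    def __init__(self, resources):
        self.resources = resources   # e.g. {"gpu": 4, "cpu_cores": 64}

    def list_resources(self):
        return dict(self.resources)

    def grant(self, name, amount):
        # grant only if enough of the resource remains available
        if self.resources.get(name, 0) >= amount:
            self.resources[name] -= amount
            return True
        return False

class ROF_C:
    """Device-side: exposes network resources to the OS and requests them."""
    def __init__(self, rof_n):
        self.rof_n = rof_n

    def discover(self):
        return self.rof_n.list_resources()

    def request(self, name, amount):
        # a successful grant would be followed by data-path establishment
        return self.rof_n.grant(name, amount)

rof_n = ROF_N({"gpu": 4, "cpu_cores": 64})
rof_c = ROF_C(rof_n)
```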
  • FIGS 10-12 illustrate various systems, devices, and components that may implement aspects of disclosed embodiments.
  • the UE 1002 may additionally communicate with an AP 1006 via an over-the-air (OTA) connection.
  • the AP 1006 manages a WLAN connection, which may serve to offload some/all network traffic from the RAN 1004.
  • the connection between the UE 1002 and the AP 1006 may be consistent with any IEEE 802.11 protocol.
  • the UE 1002, RAN 1004, and AP 1006 may utilize cellular-WLAN aggregation/integration (e.g., LWA/LWIP).
  • Cellular-WLAN aggregation may involve the UE 1002 being configured by the RAN 1004 to utilize both cellular radio resources and WLAN resources.
  • One example implementation is a “CU/DU split” architecture where the ANs 1008 are embodied as a gNB-Central Unit (CU) that is communicatively coupled with one or more gNB-Distributed Units (DUs), where each DU may be communicatively coupled with one or more Radio Units (RUs) (also referred to as RRHs, RRUs, or the like) (see e.g., 3GPP TS 38.401 V16.1.0 (2020-03)).
  • the one or more RUs may be individual RSUs.
  • the CU/DU split may include an ng-eNB-CU and one or more ng-eNB- DUs instead of, or in addition to, the gNB-CU and gNB-DUs, respectively.
  • the ANs 1008 employed as the CU may be implemented in a discrete device or as one or more software entities running on server computers as part of, for example, a virtual network including a virtual Base Band Unit (BBU) or BBU pool, cloud RAN (CRAN), Radio Equipment Controller (REC), Radio Cloud Center (RCC), centralized RAN (C-RAN), virtualized RAN (vRAN), and/or the like (although these terms may refer to different implementation concepts). Any other type of architectures, arrangements, and/or configurations can be used.
  • the plurality of ANs may be coupled with one another via an X2 interface (if the RAN 1004 is an LTE RAN or Evolved Universal Terrestrial Radio Access Network (E-UTRAN) 1010) or an Xn interface (if the RAN 1004 is an NG-RAN 1014).
  • the X2/Xn interfaces which may be separated into control/user plane interfaces in some embodiments, may allow the ANs to communicate information related to handovers, data/context transfers, mobility, load management, interference coordination, etc.
  • the ANs of the RAN 1004 may each manage one or more cells, cell groups, component carriers, etc. to provide the UE 1002 with an air interface for network access.
  • the UE 1002 may be simultaneously connected with a plurality of cells provided by the same or different ANs 1008 of the RAN 1004.
  • the UE 1002 and RAN 1004 may use carrier aggregation to allow the UE 1002 to connect with a plurality of component carriers, each corresponding to a Pcell or Scell.
  • a first AN 1008 may be a master node that provides an MCG and a second AN 1008 may be a secondary node that provides an SCG.
  • the first/second ANs 1008 may be any combination of eNB, gNB, ng-eNB, etc.
  • the RAN 1004 may provide the air interface over a licensed spectrum or an unlicensed spectrum.
  • the nodes may use LAA, eLAA, and/or feLAA mechanisms based on CA technology with PCells/SCells.
  • Prior to accessing the unlicensed spectrum, the nodes may perform medium/carrier-sensing operations based on, for example, a listen-before-talk (LBT) protocol.
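A toy version of the LBT behaviour described above might look like the following (heavily simplified relative to the actual ETSI/3GPP channel-access categories; the slot counting and backoff bounds are assumptions):

```python
import random

def listen_before_talk(channel_busy, max_backoff_slots=8, max_attempts=10, seed=0):
    """Sense the medium before transmitting; back off randomly while busy.

    Returns the number of deferrals before the medium was found idle,
    or None if it was never observed idle within max_attempts."""
    rng = random.Random(seed)
    for attempt in range(max_attempts):
        if not channel_busy():
            return attempt          # medium idle: clear to transmit
        _backoff = rng.randrange(1, max_backoff_slots + 1)
        # a real node would wait `_backoff` slots here, sensing throughout
    return None                     # medium never observed idle: give up
```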
  • the UE 1002 or AN 1008 may be or act as a roadside unit (RSU), which may refer to any transportation infrastructure entity used for V2X communications.
  • An RSU may be implemented in or by a suitable AN or a stationary (or relatively stationary) UE.
  • An RSU implemented in or by: a UE may be referred to as a “UE-type RSU”; an eNB may be referred to as an “eNB-type RSU”; a gNB may be referred to as a “gNB-type RSU”; and the like.
  • an RSU is a computing device coupled with radio frequency circuitry located on a roadside that provides connectivity support to passing vehicle UEs.
  • the RSU may also include internal data storage circuitry to store intersection map geometry, traffic statistics, media, as well as applications/software to sense and control ongoing vehicular and pedestrian traffic.
  • the RSU may provide very low latency communications required for high speed events, such as crash avoidance, traffic warnings, and the like. Additionally or alternatively, the RSU may provide other cellular/WLAN communications services.
  • the components of the RSU may be packaged in a weatherproof enclosure suitable for outdoor installation, and may include a network interface controller to provide a wired connection (e.g., Ethernet) to a traffic signal controller or a backhaul network.
  • the RAN 1004 may be an E-UTRAN 1010 with one or more eNBs 1012.
  • the E-UTRAN 1010 provides an LTE air interface (Uu) with the following characteristics: SCS of 15 kHz; CP-OFDM waveform for DL and SC-FDMA waveform for UL; turbo codes for data and TBCC for control; etc.
  • the LTE air interface may rely on CSI-RS for CSI acquisition and beam management; PDSCH/PDCCH DMRS for PDSCH/PDCCH demodulation; and CRS for cell search and initial acquisition, channel quality measurements, and channel estimation for coherent demodulation/detection at the UE.
  • the LTE air interface may operate on sub-6 GHz bands.
  • the NG interface may be split into two parts, an NG user plane (NG-U) interface, which carries traffic data between the nodes of the NG-RAN 1014 and a UPF 1048 (e.g., N3 interface), and an NG control plane (NG-C) interface, which is a signaling interface between the nodes of the NG-RAN 1014 and an AMF 1044 (e.g., N2 interface).
  • the NG-RAN 1014 may provide a 5G-NR air interface (which may also be referred to as a Uu interface) with the following characteristics: variable SCS; CP-OFDM for DL, CP-OFDM and DFT-s-OFDM for UL; polar, repetition, simplex, and Reed-Muller codes for control and LDPC for data.
  • the 5G-NR air interface may rely on CSI-RS, PDSCH/PDCCH DMRS similar to the LTE air interface.
  • the 5G-NR air interface may not use a CRS, but may use PBCH DMRS for PBCH demodulation; PTRS for phase tracking for PDSCH; and tracking reference signal for time tracking.
  • the 5G-NR air interface may operate on FR1 bands that include sub-6 GHz bands or FR2 bands that include bands from 24.25 GHz to 52.6 GHz.
  • the 5G-NR air interface may include an SSB that is an area of a downlink resource grid that includes PSS/SSS/PBCH.
  • the 5G-NR air interface may utilize BWPs for various purposes.
  • BWP can be used for dynamic adaptation of the SCS.
  • the UE 1002 can be configured with multiple BWPs where each BWP configuration has a different SCS. When a BWP change is indicated to the UE 1002, the SCS of the transmission is changed as well.
  • Another use case example of BWP is related to power saving.
  • multiple BWPs can be configured for the UE 1002 with different amounts of frequency resources (e.g., PRBs) to support data transmission under different traffic loading scenarios.
  • a BWP containing a smaller number of PRBs can be used for data transmission with small traffic load while allowing power saving at the UE 1002 and in some cases at the gNB 1016.
  • a BWP containing a larger number of PRBs can be used for scenarios with higher traffic load.
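The power-saving rationale above can be sketched as picking the smallest configured BWP that still covers the offered load (the dictionary shape, the PRB counts, and the selection policy itself are assumptions for illustration; real BWP switching is signalled by the gNB):

```python
def select_bwp(bwp_configs, traffic_load_prbs):
    """Choose a BWP sized to the traffic load (illustrative policy)."""
    # candidates wide enough to carry the offered load
    fitting = [b for b in bwp_configs if b["n_prbs"] >= traffic_load_prbs]
    if not fitting:
        # load exceeds every configured BWP: fall back to the widest one
        return max(bwp_configs, key=lambda b: b["n_prbs"])
    # smallest sufficient BWP saves power at the UE (and sometimes the gNB)
    return min(fitting, key=lambda b: b["n_prbs"])

bwps = [{"id": 0, "n_prbs": 24}, {"id": 1, "n_prbs": 106}, {"id": 2, "n_prbs": 273}]
```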
  • the RAN 1004 is communicatively coupled to CN 1020 that includes network elements and/or network functions (NFs) to provide various functions to support data and telecommunications services to customers/subscribers (e.g., UE 1002).
  • the components of the CN 1020 may be implemented in one physical node or separate physical nodes.
  • NFV may be utilized to virtualize any or all of the functions provided by the network elements of the CN 1020 onto physical compute/storage resources in servers, switches, etc.
  • a logical instantiation of the CN 1020 may be referred to as a network slice, and a logical instantiation of a portion of the CN 1020 may be referred to as a network sub-slice.
  • the CN 1020 may be an LTE CN 1022 (also referred to as an Evolved Packet Core (EPC) 1022).
  • the EPC 1022 may include MME 1024, SGW 1026, SGSN 1028, HSS 1030, PGW 1032, and PCRF 1034 coupled with one another over interfaces (or “reference points”) as shown.
  • the NFs in the EPC 1022 are briefly introduced as follows.
  • the MME 1024 implements mobility management functions to track a current location of the UE 1002 to facilitate paging, bearer activation/deactivation, handovers, gateway selection, authentication, etc.
  • the SGW 1026 terminates an S1 interface toward the RAN 1010 and routes data packets between the RAN 1010 and the EPC 1022.
  • the SGW 1026 may be a local mobility anchor point for inter-RAN node handovers and also may provide an anchor for inter-3GPP mobility. Other responsibilities may include lawful intercept, charging, and some policy enforcement.
  • the SGSN 1028 tracks a location of the UE 1002 and performs security functions and access control.
  • the SGSN 1028 also performs inter-EPC node signaling for mobility between different RAT networks; PDN and S-GW selection as specified by MME 1024; MME 1024 selection for handovers; etc.
  • the S3 reference point between the MME 1024 and the SGSN 1028 enables user and bearer information exchange for inter-3GPP access network mobility in idle/active states.
  • the HSS 1030 includes a database for network users, including subscription-related information to support the network entities’ handling of communication sessions.
  • the HSS 1030 can provide support for routing/roaming, authentication, authorization, naming/addressing resolution, location dependencies, etc.
  • An S6a reference point between the HSS 1030 and the MME 1024 may enable transfer of subscription and authentication data for authenticating/authorizing user access to the EPC 1022.
  • the PGW 1032 may terminate an SGi interface toward a data network (DN) 1036 that may include an application (app)/content server 1038.
  • the PGW 1032 routes data packets between the EPC 1022 and the data network 1036.
  • the PGW 1032 is communicatively coupled with the SGW 1026 by an S5 reference point to facilitate user plane tunneling and tunnel management.
  • the PGW 1032 may further include a node for policy enforcement and charging data collection (e.g., a PCEF).
  • the SGi reference point may communicatively couple the PGW 1032 with the same or different data network 1036.
  • the PGW 1032 may be communicatively coupled with a PCRF 1034 via a Gx reference point.
  • the CN 1020 may be a 5GC 1040 including an AUSF 1042, AMF 1044, SMF 1046, UPF 1048, NSSF 1050, NEF 1052, NRF 1054, PCF 1056, UDM 1058, and AF 1060 coupled with one another over various interfaces as shown.
  • the NFs in the 5GC 1040 are briefly introduced as follows.
  • the AUSF 1042 stores data for authentication of UE 1002 and handles authentication-related functionality.
  • the AUSF 1042 may facilitate a common authentication framework for various access types.
  • the AMF 1044 allows other functions of the 5GC 1040 to communicate with the UE 1002 and the RAN 1004 and to subscribe to notifications about mobility events with respect to the UE 1002.
  • the AMF 1044 is also responsible for registration management (e.g., for registering UE 1002), connection management, reachability management, mobility management, lawful interception of AMF-related events, and access authentication and authorization.
  • the AMF 1044 provides transport for SM messages between the UE 1002 and the SMF 1046, and acts as a transparent proxy for routing SM messages.
  • AMF 1044 also provides transport for SMS messages between UE 1002 and an SMSF.
  • AMF 1044 interacts with the AUSF 1042 and the UE 1002 to perform various security anchor and context management functions.
  • AMF 1044 is a termination point of a RAN-CP interface, which includes the N2 reference point between the RAN 1004 and the AMF 1044.
  • the AMF 1044 is also a termination point of NAS (N1) signaling, and performs NAS ciphering and integrity protection.
  • AMF 1044 also supports NAS signaling with the UE 1002 over an N3IWF interface.
  • the N3IWF provides access to untrusted entities.
  • N3IWF may be a termination point for the N2 interface between the (R)AN 1004 and the AMF 1044 for the control plane, and may be a termination point for the N3 reference point between the (R)AN 1014 and the UPF 1048 for the user plane.
  • the N3IWF handles N2 signalling from the SMF 1046 (relayed by the AMF 1044) for PDU sessions and QoS, encapsulates/de-encapsulates packets for IPSec and N3 tunnelling, marks N3 user-plane packets in the uplink, and enforces QoS corresponding to N3 packet marking, taking into account QoS requirements associated with such marking received over N2.
  • N3IWF may also relay UL and DL control-plane NAS signalling between the UE 1002 and AMF 1044 via an N1 reference point between the UE 1002 and the AMF 1044, and relay uplink and downlink user-plane packets between the UE 1002 and UPF 1048.
  • the N3IWF also provides mechanisms for IPsec tunnel establishment with the UE 1002.
  • the AMF 1044 may exhibit an Namf service-based interface, and may be a termination point for an N14 reference point between two AMFs 1044 and an N17 reference point between the AMF 1044 and a 5G-EIR (not shown by Figure 10).
  • the SMF 1046 is responsible for SM (e.g., session establishment, tunnel management between UPF 1048 and AN 1008); UE IP address allocation and management (including optional authorization); selection and control of UP function; configuring traffic steering at UPF 1048 to route traffic to proper destination; termination of interfaces toward policy control functions; controlling part of policy enforcement, charging, and QoS; lawful intercept (for SM events and interface to LI system); termination of SM parts of NAS messages; downlink data notification; initiating AN specific SM information, sent via AMF 1044 over N2 to AN 1008; and determining SSC mode of a session.
  • SM refers to management of a PDU session
  • a PDU session or “session” refers to a PDU connectivity service that provides or enables the exchange of PDUs between the UE 1002 and the DN 1036.
  • the UPF 1048 acts as an anchor point for intra-RAT and inter-RAT mobility, an external PDU session point of interconnect to data network 1036, and a branching point to support multihomed PDU session.
  • the UPF 1048 also performs packet routing and forwarding, packet inspection, enforces user plane part of policy rules, lawfully intercept packets (UP collection), performs traffic usage reporting, perform QoS handling for a user plane (e.g., packet filtering, gating, UL/DL rate enforcement), performs uplink traffic verification (e.g., SDF-to-QoS flow mapping), transport level packet marking in the uplink and downlink, and performs downlink packet buffering and downlink data notification triggering.
  • UPF 1048 may include an uplink classifier to support routing traffic flows to a data network.
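The UL/DL rate enforcement and gating that the UPF performs can be illustrated with a classic token bucket. This is a generic traffic-policing sketch, not the 5G QoS model itself; the rates and the drop-on-empty policy are assumptions:

```python
class TokenBucket:
    """Toy rate enforcement: forward packets while tokens remain, else gate."""
    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s
        self.capacity = burst_bytes
        self.tokens = burst_bytes   # start with a full burst allowance
        self.last = 0.0

    def allow(self, now, packet_bytes):
        # refill tokens for the elapsed time, capped at the burst size
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True   # forward the packet
        return False      # gate (drop) the packet
```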
  • the NSSF 1050 selects a set of network slice instances serving the UE 1002.
  • the NSSF 1050 also determines allowed NSSAI and the mapping to the subscribed S-NSSAIs, if needed.
  • the NSSF 1050 also determines an AMF set to be used to serve the UE 1002, or a list of candidate AMFs 1044 based on a suitable configuration and possibly by querying the NRF 1054.
  • the selection of a set of network slice instances for the UE 1002 may be triggered by the AMF 1044 with which the UE 1002 is registered by interacting with the NSSF 1050; this may lead to a change of AMF 1044.
  • the NSSF 1050 interacts with the AMF 1044 via an N22 reference point; and may communicate with another NSSF in a visited network via an N31 reference point (not shown).
  • the NEF 1052 securely exposes services and capabilities provided by 3GPP NFs for third party, internal exposure/re-exposure, AFs 1060, edge computing or fog computing systems (e.g., an edge compute node), etc.
  • the NEF 1052 may authenticate, authorize, or throttle the AFs.
  • NEF 1052 may also translate information exchanged with the AF 1060 and information exchanged with internal network functions. For example, the NEF 1052 may translate between an AF-Service-Identifier and an internal 5GC information.
  • NEF 1052 may also receive information from other NFs based on exposed capabilities of other NFs. This information may be stored at the NEF 1052 as structured data, or at a data storage NF using standardized interfaces.
  • the stored information can then be re-exposed by the NEF 1052 to other NFs and AFs, or used for other purposes such as analytics.
  • the NRF 1054 supports service discovery functions, receives NF discovery requests from NF instances, and provides information of the discovered NF instances to the requesting NF instances. NRF 1054 also maintains information of available NF instances and their supported services.
  • the NRF 1054 also supports service discovery functions, wherein the NRF 1054 receives NF Discovery Request from NF instance or an SCP (not shown), and provides information of the discovered NF instances to the NF instance or SCP.
  • the PCF 1056 provides policy rules to control plane functions to enforce them, and may also support unified policy framework to govern network behavior.
  • the PCF 1056 may also implement a front end to access subscription information relevant for policy decisions in a UDR of the UDM 1058.
  • the PCF 1056 may exhibit an Npcf service-based interface.
  • the UDM 1058 handles subscription-related information to support the network entities’ handling of communication sessions, and stores subscription data of UE 1002. For example, subscription data may be communicated via an N8 reference point between the UDM 1058 and the AMF 1044.
  • the UDM 1058 may include two parts, an application front end and a UDR.
  • the UDR may store subscription data and policy data for the UDM 1058 and the PCF 1056, and/or structured data for exposure and application data (including PFDs for application detection, application request information for multiple UEs 1002) for the NEF 1052.
  • the Nudr service-based interface may be exhibited by the UDR to allow the UDM 1058, PCF 1056, and NEF 1052 to access a particular set of the stored data, as well as to read, update (e.g., add, modify), delete, and subscribe to notification of relevant data changes in the UDR.
  • the UDM may include a UDM-FE, which is in charge of processing credentials, location management, subscription management and so on. Several different front ends may serve the same user in different transactions.
  • the UDM-FE accesses subscription information stored in the UDR and performs authentication credential processing, user identification handling, access authorization, registration/mobility management, and subscription management.
  • the UDM 1058 may exhibit the Nudm service-based interface.
  • the AF 1060 provides application influence on traffic routing, provides access to the NEF 1052, and interacts with the policy framework for policy control.
  • the AF 1060 may influence UPF 1048 (re)selection and traffic routing.
  • the network operator may permit AF 1060 to interact directly with relevant NFs.
  • the AF 1060 may be used for edge computing implementations.
  • the 5GC 1040 may enable edge computing by selecting operator/3rd party services to be geographically close to a point that the UE 1002 is attached to the network. This may reduce latency and load on the network.
  • the data network (DN) 1036 may represent various network operator services, Internet access, or third party services that may be provided by one or more servers including, for example, application (app)/content server 1038.
  • the DN 1036 may be an operator-external public or private PDN, or an intra-operator packet data network, for example, for provision of IMS services.
  • the app server 1038 can be coupled to an IMS via an S-CSCF or the I-CSCF.
  • the DN 1036 may represent one or more local area DNs (LADNs), which are DNs 1036 (or DN names (DNNs)) that is/are accessible by a UE 1002 in one or more specific areas. Outside of these specific areas, the UE 1002 is not able to access the LADN/DN 1036.
  • the DN 1036 may be an Edge DN 1036, which is a (local) Data Network that supports the architecture for enabling edge applications.
  • the app server 1038 may represent the physical hardware systems/devices providing app server functionality and/or the application software resident in the cloud or at an edge compute node that performs server function(s).
  • the app/content server 1038 provides an edge hosting environment that provides support required for Edge Application Server's execution.
  • the 5GS can use one or more edge compute nodes to provide an interface and offload processing of wireless communication traffic.
  • the edge compute nodes may be included in, or co-located with, one or more RANs 1010, 1014.
  • the edge compute nodes can provide a connection between the RAN 1014 and UPF 1048 in the 5GC 1040.
  • the edge compute nodes can use one or more NFV instances instantiated on virtualization infrastructure within the edge compute nodes to process wireless connections to and from the RAN 1014 and UPF 1048.
  • the interfaces of the 5GC 1040 include reference points and service-based interfaces.
  • the reference points include: N1 (between the UE 1002 and the AMF 1044), N2 (between RAN 1014 and AMF 1044), N3 (between RAN 1014 and UPF 1048), N4 (between the SMF 1046 and UPF 1048), N5 (between PCF 1056 and AF 1060), N6 (between UPF 1048 and DN 1036), N7 (between SMF 1046 and PCF 1056), N8 (between UDM 1058 and AMF 1044), N9 (between two UPFs 1048), N10 (between the UDM 1058 and the SMF 1046), N11 (between the AMF 1044 and the SMF 1046), N12 (between AUSF 1042 and AMF 1044), N13 (between AUSF 1042 and UDM 1058), N14 (between two AMFs 1044; not shown), N15 (between PCF 1056 and AMF 1044 in case of a non-roaming scenario).
  • the service-based representation of Figure 10 represents NFs within the control plane that enable other authorized NFs to access their services.
  • the service-based interfaces include: Namf (SBI exhibited by AMF 1044), Nsmf (SBI exhibited by SMF 1046), Nnef (SBI exhibited by NEF 1052), Npcf (SBI exhibited by PCF 1056), Nudm (SBI exhibited by the UDM 1058), Naf (SBI exhibited by AF 1060), Nnrf (SBI exhibited by NRF 1054), Nnssf (SBI exhibited by NSSF 1050), Nausf (SBI exhibited by AUSF 1042).
  • NEF 1052 can provide an interface to edge compute nodes 1036x, which can be used to process wireless connections with the RAN 1014.
  • the system 1000 may include an SMSF, which is responsible for SMS subscription checking and verification, and relaying SM messages to/from the UE 1002 to/from other entities, such as an SMS-GMSC/IWMSC/SMS-router.
  • the SMSF may also interact with the AMF 1044 and UDM 1058 for a notification procedure that the UE 1002 is available for SMS transfer (e.g., set a UE-not-reachable flag, and notify the UDM 1058 when the UE 1002 is available for SMS).
  • the 5GS may also include an SCP (or individual instances of the SCP) that supports indirect communication (see e.g., 3GPP TS 23.501 section 7.1.1); delegated discovery (see e.g., 3GPP TS 23.501 section 7.1.1); message forwarding and routing to destination NF/NF service(s), communication security (e.g., authorization of the NF Service Consumer to access the NF Service Producer API) (see e.g., 3GPP TS 33.501), load balancing, monitoring, overload control, etc.; and discovery and selection functionality for UDM(s), AUSF(s), UDR(s), PCF(s) with access to subscription data stored in the UDR based on UE's SUPI, SUCI or GPSI (see e.g., 3GPP TS 23.501 section 6.3).
  • Load balancing, monitoring, overload control functionality provided by the SCP may be implementation specific.
  • the SCP may be deployed in a distributed manner. More than one SCP can be present in the communication path between various NF Services.
  • the SCP although not an NF instance, can also be deployed distributed, redundant, and scalable.
  • FIG 11 schematically illustrates a wireless network 1100 in accordance with various embodiments.
  • the wireless network 1100 may include a UE 1102 in wireless communication with an AN 1104.
  • the UE 1102 and AN 1104 may be similar to, and substantially interchangeable with, like-named components described with respect to Figure 10.
  • the UE 1102 may be communicatively coupled with the AN 1104 via connection 1106.
  • the connection 1106 is illustrated as an air interface to enable communicative coupling, and can be consistent with cellular communications protocols such as an LTE protocol or a 5G NR protocol operating at mmWave or sub-6 GHz frequencies.
  • the UE 1102 may include a host platform 1108 coupled with a modem platform 1110.
  • the host platform 1108 may include application processing circuitry 1112, which may be coupled with protocol processing circuitry 1114 of the modem platform 1110.
  • the application processing circuitry 1112 may run various applications for the UE 1102 that source/sink application data.
  • the application processing circuitry 1112 may further implement one or more layer operations to transmit/receive application data to/from a data network. These layer operations may include transport (for example, UDP) and Internet (for example, IP) operations.
  • the protocol processing circuitry 1114 may implement one or more of layer operations to facilitate transmission or reception of data over the connection 1106.
  • the layer operations implemented by the protocol processing circuitry 1114 may include, for example, MAC, RLC, PDCP, RRC and NAS operations.
  • the modem platform 1110 may further include digital baseband circuitry 1116 that may implement one or more layer operations that are “below” layer operations performed by the protocol processing circuitry 1114 in a network protocol stack. These operations may include, for example, PHY operations including one or more of HARQ acknowledgement (ACK) functions, scrambling/descrambling, encoding/decoding, layer mapping/de-mapping, modulation symbol mapping, received symbol/bit metric determination, multi-antenna port precoding/decoding, which may include one or more of space-time, space-frequency or spatial coding, reference signal generation/detection, preamble sequence generation and/or decoding, synchronization sequence generation/detection, control channel signal blind decoding, and other related functions.
  • the modem platform 1110 may further include transmit circuitry 1118, receive circuitry 1120, RF circuitry 1122, and RF front end (RFFE) 1124, which may include or connect to one or more antenna panels 1126.
  • the transmit circuitry 1118 may include a digital-to-analog converter, mixer, intermediate frequency (IF) components, etc.
  • the receive circuitry 1120 may include an analog-to-digital converter, mixer, IF components, etc.
  • the RF circuitry 1122 may include a low-noise amplifier, a power amplifier, power tracking components, etc.
  • RFFE 1124 may include filters (for example, surface/bulk acoustic wave filters), switches, antenna tuners, beamforming components (for example, phase-array antenna components), etc.
  • transmit/receive components may be specific to details of a specific implementation such as, for example, whether communication is TDM or FDM, in mmWave or sub-6 GHz frequencies, etc.
  • the transmit/receive components may be arranged in multiple parallel transmit/receive chains, may be disposed in the same or different chips/modules, etc.
  • the protocol processing circuitry 1114 may include one or more instances of control circuitry (not shown) to provide control functions for the transmit/receive components.
  • a UE 1102 reception may be established by and via the antenna panels 1126, RFFE 1124, RF circuitry 1122, receive circuitry 1120, digital baseband circuitry 1116, and protocol processing circuitry 1114.
  • the antenna panels 1126 may receive a transmission from the AN 1104 by receive-beamforming signals received by a plurality of antennas/antenna elements of the one or more antenna panels 1126.
  • the communication resources 1230 may include interconnection or network interface controllers, components, or other suitable devices to communicate with one or more peripheral devices 1204 or one or more databases 1206 or other network elements via a network 1208.
  • the communication resources 1230 may include wired communication components (e.g., for coupling via USB, Ethernet, Ethernet over GRE Tunnels, Ethernet over Multiprotocol Label Switching (MPLS), Ethernet over USB, Controller Area Network (CAN), Local Interconnect Network (LIN), DeviceNet, ControlNet, Data Highway+, PROFIBUS, or PROFINET, among many others), cellular communication components, NFC components, Bluetooth® (or Bluetooth® Low Energy) components, WiFi® components, and other communication components.
  • Example 5 includes the service registry function of examples 2-4 and/or some other example(s) herein, wherein service registration is achieved via a notification-based approach in which each service sends a notification message to the service registry when there is a status change.
  • Example 8 includes the method of example 7 and/or some other example(s) herein, wherein the device association with one of the in-network service mesh function instances comprises a network controller/DHCP server managed association.
  • when a device first attaches to a network, the network controller or the DHCP server will assign a CSF for the device. The network controller or the DHCP server then configures the device with the CSF address.
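The controller/DHCP-managed CSF assignment above can be sketched as follows. The class name, the round-robin assignment policy, and the addresses are illustrative assumptions only, not part of any standardized API.

```python
import itertools


class DHCPServer:
    """Network controller / DHCP server managing CSF association (sketch)."""
    def __init__(self, csf_addresses):
        # Assumed round-robin assignment over the known CSF instances.
        self._next_csf = itertools.cycle(csf_addresses)

    def attach(self, device):
        # On first attach: assign a CSF and configure the device with
        # the CSF address.
        device["csf_address"] = next(self._next_csf)
        return device


server = DHCPServer(["10.0.0.10", "10.0.0.11"])
dev = server.attach({"id": "device-1"})
```

Any policy (load- or location-based) could replace the round-robin cycle; the essential behavior is that the device leaves the attach procedure already configured with its CSF address.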
  • Example 10 includes an in-network service orchestration function - network (SOF-N) in the in-network computing service function and a service orchestration function - client (SOF-C), wherein the SOF-N and SOF-C are used to dynamically steer service execution between device and network.
  • Example 13 includes the method of examples 11-12 and/or some other example(s) herein, further comprising: if execution is done in the network, the SOF-C will send the service request to the CSF, e.g., via a remote procedure call.
  • Example 14 includes the method of examples 10-13 and/or some other example(s) herein, wherein the decision on where to execute the requested service can be made by the SOF-C or the SOF-N.
  • Example 18 includes a method of operating an SOF-N including the SOF-N of example 17 and/or some other examples herein, the method comprising: a resource orchestration function - client (ROF-C) interacts with a resource orchestration function - network (ROF-N) to get a list of network computing resources and capabilities the client can use; the ROF-C exposes the computing resource list and capabilities to the OS; when the OS requests a computing resource, the ROF-C sends a resource request to the ROF-N; the ROF-N decides whether to accept the ROF-C’s request and responds to the ROF-C’s request; and, if the resource request is accepted, the ROF-C and ROF-N establish a data path between the device and the network for access to the computing resource. The OS can then use the computing resource via the data path.
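The ROF-C/ROF-N exchange of Example 18 can be sketched as below. The class and method names (`ROF_N.request`, `ROF_C.acquire`, etc.), the capacity-based acceptance rule, and the `DataPath` object are assumptions made for illustration, not a standardized interface.

```python
from dataclasses import dataclass


@dataclass
class DataPath:
    """Data path between the device and the network for a granted resource."""
    device: str
    resource: str


class ROF_N:
    """Resource orchestration function - network (illustrative)."""
    def __init__(self, resources):
        self.resources = dict(resources)  # resource name -> free capacity

    def list_resources(self):
        # List of network computing resources the client can use.
        return list(self.resources)

    def request(self, name, amount):
        # Decide whether to accept the ROF-C's request and respond.
        if self.resources.get(name, 0) >= amount:
            self.resources[name] -= amount
            return True
        return False


class ROF_C:
    """Resource orchestration function - client (illustrative)."""
    def __init__(self, rof_n, device_id):
        self.rof_n = rof_n
        self.device_id = device_id

    def expose_to_os(self):
        # Expose the computing resource list and capabilities to the OS.
        return self.rof_n.list_resources()

    def acquire(self, name, amount=1):
        # Forward the OS's resource request; on acceptance, establish the
        # data path through which the OS can use the resource.
        if self.rof_n.request(name, amount):
            return DataPath(self.device_id, name)
        return None
```

In this sketch a rejected request simply yields `None`; a fuller implementation would carry a structured response so the OS can distinguish rejection from failure.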
  • Example 23 includes the method of examples 21-22 and/or some other example(s) herein, wherein the service response includes an assigned service instance address, an allocated computing resource, service execution results, and/or other information related to the selected service instance.
  • Example 31 includes the method of examples 27-30 and/or some other example(s) herein, wherein the OS can use the computing resource via the data path.
  • Example 35 may include the method of example 34 or some other example herein, wherein respective ones of the plurality of in-network service mesh functions are associated with different ones of a plurality of devices.
  • Example 37 may include the method of example 32 or some other example herein, wherein the indication of the service instance includes at least one of: an assigned service instance address, an allocated computing resource, and service execution results.
  • Example 38 may include the method of example 32 or some other example herein, wherein the device is associated with the in-network service mesh function through one or more of: a network-controller managed association, a dynamic host configuration protocol (DHCP) server managed association, and a device autonomous association.
  • Example 39 may include one or more non-transitory computer-readable media comprising instructions that, upon execution of the instructions by one or more processors of a device, are to cause a service orchestration function - client (SOF-C) of the device to: identify, from a client application of the device, a service request; identify, based on interaction with one or more of an operating system (OS) of the device and a service orchestration function - network (SOF-N) of a compute service function (CSF) to which the device is communicatively coupled, information related to availability of a service related to the service request; and steer, based on the information related to the availability of the service, the service request for local execution or network execution.
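A minimal sketch of the SOF-C steering decision follows. The inputs mirror the factors listed in Example 45 (service availability, network condition, device battery level, estimated execution time); the thresholds and the preference order are assumptions for illustration only.

```python
def steer(service_available_in_network, network_ok, battery_level,
          local_exec_time, network_exec_time):
    """Return 'network' or 'local' for a client's service request (sketch)."""
    # Without an available in-network service or a usable network
    # connection, the only option is local execution.
    if not service_available_in_network or not network_ok:
        return "local"
    # Assumed policy: prefer network execution when the device battery
    # is low, or when the network can execute the service faster.
    if battery_level < 0.2 or network_exec_time < local_exec_time:
        return "network"
    return "local"
```

Per Example 14, this decision could equally be made by the SOF-N; the SOF-C would then forward the inputs instead of evaluating the policy itself.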
  • Example 41 may include the one or more non-transitory computer-readable media of example 39 or some other example herein, wherein network execution relates to execution of the service by an electronic device to which the device is communicatively coupled.
  • Example 42 may include the one or more non-transitory computer-readable media of example 39 or some other example herein, wherein the steering is based on a determination made by the SOF-N based on the information related to availability of the service.
  • Example 44 may include the one or more non-transitory computer-readable media of example 39 or some other example herein, wherein the steering is based on a determination made by the SOF-C based on the information related to availability of the service.
  • Example 45 may include the one or more non-transitory computer-readable media of example 39 or some other example herein, wherein the information includes information related to one or more of: service availability, network condition, device battery level, and estimated service execution time.
  • Example 46 may include an apparatus comprising: one or more processors; one or more non-transitory computer-readable media comprising instructions that, upon execution of the instructions by the one or more processors, are to cause a resource orchestration function - client (ROF-C) to: identify, based on communication with a resource orchestration function - network (ROF-N) of a network, a plurality of resources of the network that are usable by the apparatus; provide an indication of the plurality of resources to an operating system (OS) of the apparatus; identify a request from the OS for access to a resource of the plurality of resources; transmit an indication of the request to the ROF-N; and identify, based on a response to the request received from the ROF-N, whether to establish a data path between the apparatus and the network such that the OS has access to the resource.
  • Example 47 may include the apparatus of example 46 or some other example herein, wherein the ROF-C is to identify the plurality of resources based on a list broadcasted by the ROF-N.
  • Example 48 may include the apparatus of example 46 or some other example herein, wherein the ROF-C is to identify the plurality of resources based on a response, by the ROF-N, to an inquiry transmitted to the ROF-N by the ROF-C.
  • Example 49 may include the apparatus of example 46 or some other example herein, wherein the resource is a computing resource or capability of an element of the network.
  • Example 50 may include the apparatus of example 46 or some other example herein, wherein the ROF-C is to establish the data path if the response to the request indicates acceptance by the ROF-N.
  • Example 52 may include an apparatus comprising means to perform one or more elements of a method described in or related to any of examples 1-51, or any other method or process described herein.
  • Example 53 may include one or more non-transitory computer-readable media comprising instructions to cause an electronic device, upon execution of the instructions by one or more processors of the electronic device, to perform one or more elements of a method described in or related to any of examples 1-51, or any other method or process described herein.
  • Example 54 may include an apparatus comprising logic, modules, or circuitry to perform one or more elements of a method described in or related to any of examples 1-51, or any other method or process described herein.
  • Example 55 may include a method, technique, or process as described in or related to any of examples 1-51, or portions or parts thereof.
  • Example 56 may include an apparatus comprising: one or more processors and one or more computer-readable media comprising instructions that, when executed by the one or more processors, cause the one or more processors to perform the method, technique, or process as described in or related to any of examples 1-51, or portions thereof.
  • Example 58 may include a datagram, packet, frame, segment, protocol data unit (PDU), or message as described in or related to any of examples 1-51, or portions or parts thereof, or otherwise described in the present disclosure.
  • Example 60 may include a signal encoded with a datagram, packet, frame, segment, protocol data unit (PDU), or message as described in or related to any of examples 1-51, or portions or parts thereof, or otherwise described in the present disclosure.
  • Example 62 may include a computer program comprising instructions, wherein execution of the program by a processing element is to cause the processing element to carry out the method, technique, or process as described in or related to any of examples 1-51, or portions thereof.
  • Example 65 may include a system for providing wireless communication as shown and described herein.
  • the phrase “A and/or B” means (A), (B), or (A and B).
  • the phrase “A, B, and/or C” means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B and C).
  • the description may use the phrases “in an embodiment,” or “In some embodiments,” which may each refer to one or more of the same or different embodiments.
  • the terms “comprising,” “including,” “having,” and the like, as used with respect to embodiments of the present disclosure are synonymous.
  • “memory” and/or “memory circuitry” as used herein refers to one or more hardware devices for storing data, including RAM, MRAM, PRAM, DRAM, and/or SDRAM, core memory, ROM, magnetic disk storage mediums, optical storage mediums, flash memory devices, or other machine-readable mediums for storing data.
  • computer-readable medium may include, but is not limited to, memory, portable or fixed storage devices, optical storage devices, and various other mediums capable of storing, containing or carrying instructions or data.
  • the information objects may include markup and/or source code documents such as HTML, XML, JSON, Apex®, CSS, JSP, MessagePack™, Apache® Thrift™, ASN.1, Google® Protocol Buffers (protobuf), or some other document(s)/format(s) such as those discussed herein.
  • An information object may have both a logical and a physical structure. Physically, an information object comprises one or more units called entities. An entity is a unit of storage that contains content and is identified by a name. An entity may refer to other entities to cause their inclusion in the information object. An information object begins in a document entity, which is also referred to as a root element (or "root"). Logically, an information object comprises one or more declarations, elements, comments, character references, and processing instructions, all of which are indicated in the information object (e.g., using markup).
  • radio access technology refers to the technology used for the underlying physical connection to a radio-based communication network.
  • communication protocol refers to a set of standardized rules or instructions implemented by a communication device and/or system to communicate with other devices and/or systems, including instructions for packetizing/depacketizing data, modulating/demodulating signals, implementation of protocol stacks, and/or the like.
  • Examples of wireless communication protocols that may be used in various embodiments include a Global System for Mobile Communications (GSM) radio communication technology, a General Packet Radio Service (GPRS) radio communication technology, an Enhanced Data Rates for GSM Evolution (EDGE) radio communication technology, and/or a Third Generation Partnership Project (3GPP) radio communication technology including, for example, 3GPP Fifth Generation (5G) or New Radio (NR), Universal Mobile Telecommunications System (UMTS), Freedom of Multimedia Access (FOMA), Long Term Evolution (LTE), LTE-Advanced (LTE Advanced), LTE Extra, LTE-A Pro, cdmaOne (2G), Code Division Multiple Access 2000 (CDMA 2000), Cellular Digital Packet Data (CDPD), Mobitex, Circuit Switched Data (CSD), High-Speed CSD (HSCSD), Wideband Code Division Multiple Access (W-CDMA), High Speed Packet Access (HSPA), HSPA Plus (HSPA+), Time Division-Code Division Multiple Access (TD-CDMA), Time Division-Synchronous Code Division Multiple Access (TD-SCDMA), among others.
  • the software code can be stored as computer- or processor-executable instructions or commands on a physical, non-transitory computer-readable medium.
  • suitable media include RAM, ROM, magnetic media such as a hard-drive or a floppy disk, or an optical medium such as a compact disk (CD) or DVD (digital versatile disk), flash memory, and the like, or any combination of such storage or transmission devices.

Abstract

Various embodiments herein provide techniques related to an in-network service mesh function of a compute service function (CSF). The in-network service mesh function may identify a device associated with the in-network service mesh function. The in-network service mesh function may further identify a service request received from the device and, in a service registry of the CSF based on the service request, a service instance for the device. The in-network service mesh function may further transmit an indication of the service instance. Other embodiments may be described or claimed.

Description

MECHANISMS FOR ENABLING IN-NETWORK COMPUTING SERVICES
CROSS REFERENCE TO RELATED APPLICATION
The present application claims priority to U.S. Provisional Patent Application No. 63/122,768, which was filed December 8, 2020.
FIELD
Various embodiments generally may relate to the field of wireless communications. For example, some embodiments may relate to in-network computing services.
BACKGROUND
In legacy client-server computing models, client devices may need to connect to application servers for computing services. However, such a connection may require a time-consuming connection procedure, which may introduce latency.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments will be readily understood by the following detailed description in conjunction with the accompanying drawings. To facilitate this description, like reference numerals designate like structural elements. Embodiments are illustrated by way of example and not by way of limitation in the figures of the accompanying drawings.
Figure 1 shows an example Innovative Optical and Wireless Network (IOWN) All Photonics Network (APN) and computing/data infrastructure architecture, in accordance with various embodiments.
Figure 2 illustrates an example computing service function (CSF) in a device and network, in accordance with various embodiments.
Figure 3 illustrates an example technique related to use of an in-network domain name system (DNS), in accordance with various embodiments.
Figure 4 schematically illustrates an example network, in accordance with various embodiments.
Figure 5 illustrates an example technique related to use of an in-network service mesh, in accordance with various embodiments.
Figure 6 schematically illustrates components of a network, in accordance with various embodiments.
Figure 7 illustrates an example technique related to device-network service orchestration, in accordance with various embodiments.
Figure 8 schematically illustrates an alternative example of a network, in accordance with various embodiments.
Figure 9 illustrates an example technique related to device-network resource orchestration.
Figure 10 illustrates an example network architecture 1000 according to various embodiments.
Figure 11 schematically illustrates a wireless network 1100 in accordance with various embodiments.
Figure 12 illustrates components of a computing device 1200 according to some example embodiments.
DETAILED DESCRIPTION
The following detailed description refers to the accompanying drawings. The same reference numbers may be used in different drawings to identify the same or similar elements. In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular structures, architectures, interfaces, techniques, etc. in order to provide a thorough understanding of the various aspects of various embodiments. However, it will be apparent to those skilled in the art having the benefit of the present disclosure that the various aspects of the various embodiments may be practiced in other examples that depart from these specific details. In certain instances, descriptions of well-known devices, circuits, and methods are omitted so as not to obscure the description of the various embodiments with unnecessary detail. For the purposes of the present document, the phrases “A or B” and “A/B” mean (A), (B), or (A and B).
Generally, communication and computing systems are designed to meet workload requirements and enable new types of workloads and applications. Typical computing workloads in today’s systems can mainly be categorized into the following two categories: client-server type workloads (e.g., online gaming, web services, etc.) and High Performance Computing (HPC) type workloads (e.g., weather forecast, scientific computing, etc.).
Client-server type workloads (e.g., online gaming, web services, etc.) are pre-partitioned between client and server during application development. A micro-services framework can be used to facilitate development and computing scaling. Client-server type workloads are usually less sensitive to communication and I/O delays among micro-services. HPC type workloads are typically run within a data center. Parallelization is managed by the data center OS. HPC workloads are sensitive to communication and I/O delays among computing nodes in the data center. The emergence of cyber-physical systems is expected to bring in the following types of workloads: sensing and cognition, and cyber-physical interactive systems. Sensing and cognition workloads may utilize or include local data collection, processing, and analysis. Sensing and cognition workloads are latency sensitive (e.g., fault detection, hazard alarms). Cyber-physical interactive system workloads utilize or include real-time interaction(s) between physical and cyber domains. Cyber-physical interactive system workloads are also latency sensitive.
The emerging new workloads require low-latency ubiquitous computing across devices, network, and cloud. Today’s typical cloud computing is done in edge/center clouds and uses an orchestrator running in the application layer to manage computing workload distribution. Figure 1 illustrates an example workload distribution scheme used by cloud computing systems.
Specifically, Figure 1 shows an example IOWN All Photonics Network (APN) and computing/data infrastructure architecture according to various embodiments. Figure 1 includes, inter alia, a data plane function in an IOWN APN. In the example of Figure 1, the data plane function interacts with an APN controller, the CSF, network transportation nodes (e.g., transponder, aggregation node, interchange node, etc.), devices/terminals, and an external cloud. In those interactions, the data plane function can serve as a data consumer or a data provider.
The data plane functions may have the following functionalities: (1) device identity, access and QoS policies, and configurations; (2) measurement and sensing data storage and sharing among NW and compute functions within the communication and computing infrastructure; (3) data processing and analytics; and (4) data exposure as services to external users.
Figure 2 illustrates an example computing service function (CSF) in a device and network according to various embodiments. Functionalities of the CSF include: (1) for in-network computing resources, the CSF provides functions such as service discovery, access control, charging, and the like; and (2) cloud computing services can also be registered in the CSF. Devices’ requests for cloud computing services can be directed by the CSF to proper computing locations.
Embodiments herein relate to mechanisms for enabling in-network computing and functionalities of the CSF. The embodiments herein may be implemented as part of an IOWN framework such as the IOWN Global Forum (GF) as part of the APN system. In particular, the present disclosure is related to addressing the following three issues identified in today’s communication and computing systems.
As a first issue, in a legacy client-server computing model, client devices (which may also be referred to herein as a “client” or a “device”) may need to connect to application servers for computing services. A typical procedure may include the following elements:
1) A client is configured with an internet service provider (ISP) domain name system (DNS) server internet protocol (IP) address
2) The client sends a domain name inquiry to the ISP DNS server
3) The ISP DNS server contacts a cloud service provider (CSP) authoritative DNS server for domain name resolution
4) The CSP authoritative DNS server selects the proper application server
5) The CSP may choose to launch a new application server in locations close to the client
6) The CSP authoritative DNS server responds to the ISP DNS server’s domain name inquiry
7) The ISP DNS server responds to the client’s domain name inquiry
8) The client connects to the assigned application server
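The eight-step procedure above can be sketched as a small simulation that counts the message exchanges needed before the first application packet can be sent. The server names and addresses are hypothetical; the point is the number of round trips incurred by the legacy model.

```python
def legacy_resolution(domain, csp_authoritative_dns):
    """Return the assigned server and the exchanges needed to reach it."""
    exchanges = ["client -> ISP DNS: inquiry",          # step 2
                 "ISP DNS -> CSP DNS: resolution"]      # step 3
    server = csp_authoritative_dns[domain]              # steps 4-5: CSP selects (or launches) a server
    exchanges += ["CSP DNS -> ISP DNS: response",       # step 6
                  "ISP DNS -> client: response",        # step 7
                  f"client -> {server}: connect"]       # step 8
    return server, exchanges


server, exchanges = legacy_resolution(
    "app.example.com", {"app.example.com": "198.51.100.7"})
```

Five message exchanges occur before application data flows; this accumulated connection-establishment time is the first-packet latency the embodiments herein target.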
As may be seen, the above elements may lead to a long connection establishment time due to back-and-forth interactions among the device, the DNS servers, and the CSP controllers. The long connection establishment time may cause long first-packet latency. For streaming type services, the long connection establishment time may be amortized by the long service time. However, for transactional services, the long connection establishment time could have a prominent effect on performance.
As a second issue, computing resources in legacy systems may be concentrated in data centers (including mega data centers and smaller edge data centers). Computing resources and services may be centrally managed by CSP resource orchestrators. The data center operation model is designed for hosting large applications (e.g., Amazon®, Facebook®, Google®, Tencent®, Twitter®, etc.) with stable traffic. However, as applications, services, and computing demands become more diverse, ubiquitous computing and flexible ways of accessing computing resources may be desired. The desire to improve energy efficiency for sustainable growth also calls for more dynamic computing resource usage. The computing society has been working on serverless technology to adapt computing resources with respect to service traffic demands. The computing and communication infrastructure may also be evolved to enable the full benefit of serverless technology, e.g., an infrastructure that inherently supports computing resource discovery, access control, and load balancing rather than relying on over-the-top solutions.
As a third issue, workload partitions between clients and servers in legacy client-server applications may be fixed at the time the application is programmed. Remote procedure call (RPC) interfaces may be programmed to enable communication between client-side application(s) and server-side application(s). This model may be acceptable for legacy client-server type applications and workloads. However, moving forward, as more diverse workloads and devices generate more diverse requirements on computing, the fixed workload partition model may become undesirable or infeasible. For instance, based on conditions such as computing load, battery life, and network connectivity, a device and a network computing node may dynamically change the workload partition between the network and the device. To allow for such dynamic workload partitioning, new functions in the communication and computing infrastructure may be desirable.
The embodiments discussed herein solve one or more of the above-described issues. In particular, the embodiments include: embodiment 1 (related to in-network DNS); embodiment 2 (related to in-network service mesh); embodiment 3 (related to device-network service orchestration); and embodiment 4 (related to device-network resource orchestration).
EMBODIMENT 1: IN-NETWORK DNS
Embodiment 1 may address the long connection establishment time issue described above by enabling in-network DNS. The CSF (e.g., the CSF of Figure 2) may be equipped with a DNS function. An example procedure for embodiment 1 may operate in accordance with Figures 2 and 3 as follows:
1) At 301, the device (e.g., the device of Figure 2) sends a Dynamic Host Configuration Protocol (DHCP) broadcast, for example to the network of Figure 2.
2) At 302, a DHCP server of the network (e.g., the DHCP server of Figure 4) allocates an IP address and other system configurations such as a DNS server address. If the CSF of the network supports in-network DNS, the DHCP server may inform the device about the CSF address as the DNS server address. The DHCP server of the internet service provider (ISP) may also inform the device about CSF server configurations (whether the ISP supports CSF, the CSF IP address, etc.).
3) At 303, the application running on the device (e.g., the application of Figure 2 located in the device) may request access to a domain name.
4) At 304, a domain name resolver at the device may look for domain name records in its caches.
5) If the domain name is not in its local cache, the domain name resolver may send the domain name to a configured DNS server of the network to resolve the address.
a. If the CSF address is used as the DNS server address, the domain name resolution request may go directly to the configured CSF of the network (e.g., the CSF of Figure 4).
b. If a conventional DNS server is used, the network switches with CSF capabilities may filter out DNS requests and check their locally cached domain name records to resolve domain names.
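The lookup order described above (local resolver cache first, then the in-network CSF, then fallback to conventional DNS) can be sketched as follows; the record stores and addresses are hypothetical.

```python
def resolve(domain, local_cache, csf_records):
    """Resolve a domain: resolver cache first, then the in-network CSF."""
    if domain in local_cache:                   # step 4: local cache hit
        return local_cache[domain], "cache"
    if domain in csf_records:                   # step 5a: CSF resolves in-network
        local_cache[domain] = csf_records[domain]
        return csf_records[domain], "csf"
    return None, "miss"                         # fall back to conventional DNS
```

An in-network hit avoids the round trips to the ISP and CSP authoritative DNS servers entirely, which is the latency saving this embodiment provides.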
Embodiment 1 may be used to at least partially address issue 1. However, if the domain name is used to find a server IP address, more elements may be needed to find the service. Also, the in-network DNS solution may not work if the domain name resolution request is sent over hypertext transfer protocol secure (HTTPS).
EMBODIMENT 2: IN-NETWORK SERVICE MESH
The in-network service mesh may incorporate service orchestration functions such as service discovery, load balancing, service instance assignment, service access control in the network, etc. These functions may be performed in legacy networks via service frontends in the CSP cloud. The in-network service mesh may enable service orchestration access across different service providers, including network operators, CSPs, and application service providers.
Figure 4 depicts an example architecture showing interactions between devices 1-M, in- network service mesh function instances, service registry instances, a network controller or DHCP server, and service instances. The in-network service mesh function and the service registry may both be part of the computing service function (CSF) as depicted.
An example procedure related to Figures 4 and 5 may be as follows:
1) At 501, the device associates with one of the in-network service mesh function instances (e.g., device 1 associating with in-network service mesh function instance 1).
2) At 502, the device sends a service request to the associated in-network service mesh function.
3) At 503, the in-network service mesh function performs an inquiry related to the service with the service registry, which has records relating to a number of instances of different services.
4) At 504, the in-network service mesh selects a service instance for the device. The selection decision can be based on factors such as location of the device, location of the service instance, load of the service instance, estimated service response time, etc.
5) At 505, the in-network service mesh function sends a response to the device’s service request. The response message may include assigned service instance address, allocated computing resource, service execution results, etc.
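The instance selection at 504 could, as one hypothetical example, combine the listed factors into a single cost. The weights, the instance record fields, and the planar distance model below are illustrative assumptions only, not a required algorithm.

```python
# Hypothetical scoring used by the in-network service mesh at step 504 to
# pick a service instance; weights and fields are illustrative assumptions.

def select_instance(device_loc, instances):
    """Return the instance with the lowest combined cost.

    Each instance is a dict with 'loc' (x, y coordinates), 'load' in
    [0, 1], and 'est_response_ms'. Distance to the device, current load,
    and estimated response time are combined with example weights.
    """
    def cost(inst):
        dx = device_loc[0] - inst["loc"][0]
        dy = device_loc[1] - inst["loc"][1]
        distance = (dx * dx + dy * dy) ** 0.5
        return distance + 100.0 * inst["load"] + inst["est_response_ms"]
    return min(instances, key=cost)

instances = [
    {"name": "svc-x-1", "loc": (0, 0), "load": 0.9, "est_response_ms": 40},
    {"name": "svc-x-2", "loc": (3, 4), "load": 0.1, "est_response_ms": 20},
]
best = select_instance((0, 0), instances)
```

In this sketch the lightly loaded, faster instance wins even though the other instance is closer; a real deployment would tune or replace the cost function.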
In some embodiments, Element (1) (e.g., a device associates with one of the in-network service mesh function instances) may include one or both of the following two options:
• Option 1 : A network controller or DHCP server managed association. In this option, when a device first attaches to a network, the network controller or the DHCP server will assign a CSF for the device. The network controller or the DHCP server may then configure the device with the CSF address.
• Option 2: Device autonomous association. In this option, when a device first attaches to a network, the device may broadcast its CSF association request message. CSFs that receive the association message may respond to the device. The response message may include information on CSF address, service capabilities, etc. If more than one CSF responds to the device, the device may select one CSF with which to associate (for example, by sending an association request to the CSF, and then the CSF responds with an admission message).
Services (including in-network services and external services) register to the service registry. These services are indicated in Figure 1 as Service X (with multiple instances), Service Y (with multiple instances), etc. As examples, there may be at least three options to achieve service registration:
• Option 1: A subscription-based approach. In this approach, the service registry may subscribe to service events. If there are new events in the service (e.g., a new service instance is instantiated, a service update, etc.), the service registry may be notified.
• Option 2: An inquiry-based approach. In this approach, the service registry regularly sends an inquiry message to each service. Each service may respond to the inquiry with updated service status.
• Option 3: A notification-based approach. In this approach, each service may send a notification message to the service registry when there is status change.
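As a rough sketch of how a service registry might hold the records consulted at 503 and receive notification-based updates (Option 3), consider the following; the class and method names are hypothetical and not part of the embodiment.

```python
# Minimal sketch of a service registry. The notification-based update path
# (Option 3) is shown; record shapes and names are illustrative assumptions.

class ServiceRegistry:
    def __init__(self):
        self.records = {}  # (service, instance_id) -> last reported status

    def notify(self, service, instance_id, status):
        """Called by a service instance when its status changes (Option 3)."""
        self.records[(service, instance_id)] = status

    def lookup(self, service):
        """Return all known instances of a service (as used at step 503)."""
        return {iid: st for (svc, iid), st in self.records.items() if svc == service}

registry = ServiceRegistry()
registry.notify("service-x", 1, "up")
registry.notify("service-x", 2, "overloaded")
registry.notify("service-y", 1, "up")
```

A subscription-based registry (Option 1) would differ mainly in who initiates the exchange; the stored records could be identical.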
Embodiment 2 may be desirable to address the first two issues described above. Specifically, Embodiment 2 may resolve the latency related to computing service establishment time, as well as computing resource/service discovery/access inflexibility.
EMBODIMENT 3: DEVICE-NETWORK SERVICE ORCHESTRATION
To address the third issue above, specifically the issue wherein a fixed workload partition between a client (e.g., a device) and server may exist, a service orchestration function may be introduced. The service orchestration function (SOF) resides in both the device and the CSF. The SOF may run on top of the device and/or CSF operating system (OS), and may be used to dynamically steer service execution between the device and the network. Figure 6 provides an illustration of the SOF in the device and the network, and the interface between the two ends.
An example procedure related to embodiment 3 may be described with respect to Figures 6 and 7 as follows:
1) At 701, client applications on the device interact with the service orchestration function - client (SOF-C) to request services.
2) At 702, the SOF-C interacts with the device OS and a service orchestration function - network (SOF-N) to get information on service availability (locally and in network).
3) At 703, the SOF-C may then dynamically steer an application’s service request between local execution (e.g., on the device) and network execution. a. If execution is done locally (e.g., on the device), the SOF-C will send the service request to the local OS, e.g., via a system call. b. If execution is done in network, the SOF-C will send the service request to the CSF, e.g., via remote procedure call.
The decision on where to execute the requested service may be made by the SOF-C or the SOF-N. In one embodiment, when the SOF-C is making the decision, the SOF-C may collect information from the local OS and the SOF-N on service availability, network condition, device battery level, estimated service execution time, etc. In one embodiment, when SOF-N is making the decision, the SOF-N may collect information from the SOF-C, network service registry, and/or network computing resource platform on one or more of service availability, network condition, device battery level, network computing load, estimated service execution time, etc.
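A hypothetical decision rule combining these inputs might look like the following; the threshold, input set, and tie-breaking policy are illustrative assumptions rather than a required algorithm, and the same logic could run at either the SOF-C or the SOF-N.

```python
# Illustrative steering decision for step 703: choose local or network
# execution from availability, battery level, and estimated execution times.
# The low-battery threshold and the ordering of the checks are assumptions.

def steer(local_available, network_available, battery_level,
          local_est_ms, network_est_ms, low_battery=0.2):
    """Return 'local' or 'network' for a service request.

    Prefer the only available side; on low battery prefer offloading to
    the network; otherwise pick the faster estimated execution.
    """
    if local_available and not network_available:
        return "local"
    if network_available and not local_available:
        return "network"
    if not (local_available or network_available):
        raise RuntimeError("service unavailable on both device and network")
    if battery_level < low_battery:
        return "network"   # offload to conserve device energy
    return "local" if local_est_ms <= network_est_ms else "network"
```

For example, on low battery the request is steered to the network even when local execution would be faster; otherwise the side with the lower estimated execution time is chosen.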
EMBODIMENT 4: DEVICE-NETWORK RESOURCE ORCHESTRATION
Embodiment 3 may operate above the OS, and may have impact(s) on applications (e.g., applications that will need to interact with the SOF-C). Embodiment 4 may relate to use of a resource orchestration function (ROF) as part of the network stack. The ROF on the client side and network side may be used to orchestrate computing resources in the device and network. Figure 8 illustrates an example of such an ROF, and interfaces between the device and the network according to various embodiments.
An example procedure for this embodiment in accordance with Figures 8 and 9 may be as follows:
1) At 901, the resource orchestration function - client (ROF-C) of the device may interact with the resource orchestration function - network (ROF-N) of the CSF via a device-network resource orchestration interface to get a list of network computing resources and capabilities the device can use. This can be done via one or more of the following approaches: a. Option 1: The ROF-N broadcasts a list of one or more network computing resources and capabilities as system information. b. Option 2: The ROF-C sends an inquiry message to the ROF-N related to network computing resources and capabilities. The ROF-N responds to the ROF-C’s inquiry.
2) At 902, the ROF-C exposes the computing resource list and capabilities (from either or both of Options 1 and 2) to the OS of the device.
3) At 903, when the OS of the device requests a computing resource, the ROF-C sends a resource request to the ROF-N.
4) At 904, the ROF-N decides whether to accept the ROF-C’s request or not, and responds to the ROF-C’s request.
5) At 905, if the resource request is accepted, the ROF-C and ROF-N may establish a data path between the device and the network for access to the computing resource. The OS may then use the computing resource via the data path.
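The five steps above can be sketched end to end as follows. The message shapes, the capacity-based admission rule, and the string data-path handle are all illustrative assumptions, not part of the embodiment.

```python
# Sketch of the ROF-C/ROF-N exchange at steps 901-905. Admission is modeled
# as a simple capacity check; real admission control would be richer.

class ROF_N:
    def __init__(self, resources):
        self.resources = resources      # e.g., {"gpu": 2} units of capacity

    def list_resources(self):
        """Step 901, Option 2: answer the ROF-C's capability inquiry."""
        return dict(self.resources)

    def request(self, resource):
        """Steps 903-904: admit the request if capacity remains."""
        if self.resources.get(resource, 0) > 0:
            self.resources[resource] -= 1
            return {"accepted": True, "data_path": f"path-to-{resource}"}
        return {"accepted": False}

class ROF_C:
    def __init__(self, rof_n):
        self.rof_n = rof_n

    def available(self):
        """Step 902: expose the network resource list to the device OS."""
        return self.rof_n.list_resources()

    def acquire(self, resource):
        """Steps 903-905: request the resource; return the data path if accepted."""
        reply = self.rof_n.request(resource)
        return reply["data_path"] if reply["accepted"] else None

rof_n = ROF_N({"gpu": 1})
rof_c = ROF_C(rof_n)
```

Once acquire() returns a data path, the device OS would use the computing resource over that path, per step 905.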
Figures 10-12 illustrate various systems, devices, and components that may implement aspects of disclosed embodiments.
Figure 10 illustrates an example network architecture 1000 according to various embodiments. The network 1000 may operate in a manner consistent with 3GPP technical specifications for LTE or 5G/NR systems. However, the example embodiments are not limited in this regard and the described embodiments may apply to other networks that benefit from the principles described herein, such as future 3GPP systems, or the like.
The network 1000 includes a UE 1002, which is any mobile or non-mobile computing device designed to communicate with a RAN 1004 via an over-the-air connection. The UE 1002 is communicatively coupled with the RAN 1004 by a Uu interface, which may be applicable to both LTE and NR systems. Examples of the UE 1002 include, but are not limited to, a smartphone, tablet computer, wearable computer, desktop computer, laptop computer, in-vehicle infotainment system, in-car entertainment system, instrument cluster, head-up display (HUD) device, onboard diagnostic device, dashtop mobile equipment, mobile data terminal, electronic engine management system, electronic/engine control unit, electronic/engine control module, embedded system, sensor, microcontroller, control module, engine management system, networked appliance, machine-type communication device, machine-to-machine (M2M), device-to-device (D2D), machine-type communication (MTC) device, Internet of Things (IoT) device, and/or the like. The network 1000 may include a plurality of UEs 1002 coupled directly with one another via a D2D, ProSe, PC5, and/or sidelink (SL) interface. These UEs 1002 may be M2M/D2D/MTC/IoT devices and/or vehicular systems that communicate using physical sidelink channels such as, but not limited to, PSBCH, PSDCH, PSSCH, PSCCH, PSFCH, etc. The UE 1002 may perform blind decoding attempts of SL channels/links according to the various embodiments herein.
In some embodiments, the UE 1002 may additionally communicate with an AP 1006 via an over-the-air (OTA) connection. The AP 1006 manages a WLAN connection, which may serve to offload some/all network traffic from the RAN 1004. The connection between the UE 1002 and the AP 1006 may be consistent with any IEEE 802.11 protocol. Additionally, the UE 1002, RAN 1004, and AP 1006 may utilize cellular- WLAN aggregation/integration (e.g., LWA/LWIP). Cellular- WLAN aggregation may involve the UE 1002 being configured by the RAN 1004 to utilize both cellular radio resources and WLAN resources.
The RAN 1004 includes one or more access network nodes (ANs) 1008. The ANs 1008 terminate air-interface(s) for the UE 1002 by providing access stratum protocols including RRC, PDCP, RLC, MAC, and PHY/L1 protocols. In this manner, the AN 1008 enables data/voice connectivity between CN 1020 and the UE 1002. The ANs 1008 may be a macrocell base station or a low power base station for providing femtocells, picocells or other like cells having smaller coverage areas, smaller user capacity, or higher bandwidth compared to macrocells; or some combination thereof. In these implementations, an AN 1008 may be referred to as a BS, gNB, RAN node, eNB, ng-eNB, NodeB, RSU, TRxP, etc.
One example implementation is a “CU/DU split” architecture where the ANs 1008 are embodied as a gNB-Central Unit (CU) that is communicatively coupled with one or more gNB- Distributed Units (DUs), where each DU may be communicatively coupled with one or more Radio Units (RUs) (also referred to as RRHs, RRUs, or the like) (see e.g., 3GPP TS 38.401 V16.1.0 (2020-03)). In some implementations, the one or more RUs may be individual RSUs. In some implementations, the CU/DU split may include an ng-eNB-CU and one or more ng-eNB- DUs instead of, or in addition to, the gNB-CU and gNB-DUs, respectively. The ANs 1008 employed as the CU may be implemented in a discrete device or as one or more software entities running on server computers as part of, for example, a virtual network including a virtual Base Band Unit (BBU) or BBU pool, cloud RAN (CRAN), Radio Equipment Controller (REC), Radio Cloud Center (RCC), centralized RAN (C-RAN), virtualized RAN (vRAN), and/or the like (although these terms may refer to different implementation concepts). Any other type of architectures, arrangements, and/or configurations can be used.
The plurality of ANs may be coupled with one another via an X2 interface (if the RAN 1004 is an LTE RAN or Evolved Universal Terrestrial Radio Access Network (E-UTRAN) 1010) or an Xn interface (if the RAN 1004 is aNG-RAN 1014). The X2/Xn interfaces, which may be separated into control/user plane interfaces in some embodiments, may allow the ANs to communicate information related to handovers, data/context transfers, mobility, load management, interference coordination, etc.
The ANs of the RAN 1004 may each manage one or more cells, cell groups, component carriers, etc. to provide the UE 1002 with an air interface for network access. The UE 1002 may be simultaneously connected with a plurality of cells provided by the same or different ANs 1008 of the RAN 1004. For example, the UE 1002 and RAN 1004 may use carrier aggregation to allow the UE 1002 to connect with a plurality of component carriers, each corresponding to a Pcell or Scell. In dual connectivity scenarios, a first AN 1008 may be a master node that provides an MCG and a second AN 1008 may be secondary node that provides an SCG. The first/second ANs 1008 may be any combination of eNB, gNB, ng-eNB, etc.
The RAN 1004 may provide the air interface over a licensed spectrum or an unlicensed spectrum. To operate in the unlicensed spectrum, the nodes may use LAA, eLAA, and/or feLAA mechanisms based on CA technology with PCells/Scells. Prior to accessing the unlicensed spectrum, the nodes may perform medium/carrier-sensing operations based on, for example, a listen-before-talk (LBT) protocol.
In V2X scenarios the UE 1002 or AN 1008 may be or act as a roadside unit (RSU), which may refer to any transportation infrastructure entity used for V2X communications. An RSU may be implemented in or by a suitable AN or a stationary (or relatively stationary) UE. An RSU implemented in or by: a UE may be referred to as a “UE-type RSU”; an eNB may be referred to as an “eNB-type RSU”; a gNB may be referred to as a “gNB-type RSU”; and the like. In one example, an RSU is a computing device coupled with radio frequency circuitry located on a roadside that provides connectivity support to passing vehicle UEs. The RSU may also include internal data storage circuitry to store intersection map geometry, traffic statistics, media, as well as applications/software to sense and control ongoing vehicular and pedestrian traffic. The RSU may provide very low latency communications required for high speed events, such as crash avoidance, traffic warnings, and the like. Additionally or alternatively, the RSU may provide other cellular/WLAN communications services. The components of the RSU may be packaged in a weatherproof enclosure suitable for outdoor installation, and may include a network interface controller to provide a wired connection (e.g., Ethernet) to a traffic signal controller or a backhaul network.
In some embodiments, the RAN 1004 may be an E-UTRAN 1010 with one or more eNBs 1012. The E-UTRAN 1010 provides an LTE air interface (Uu) with the following characteristics: SCS of 15 kHz; CP-OFDM waveform for DL and SC-FDMA waveform for UL; turbo codes for data and TBCC for control; etc. The LTE air interface may rely on CSI-RS for CSI acquisition and beam management; PDSCH/PDCCH DMRS for PDSCH/PDCCH demodulation; and CRS for cell search and initial acquisition, channel quality measurements, and channel estimation for coherent demodulation/detection at the UE. The LTE air interface may operate on sub-6 GHz bands.
In some embodiments, the RAN 1004 may be a next generation (NG)-RAN 1014 with one or more gNBs 1016 and/or one or more ng-eNBs 1018. The gNB 1016 connects with 5G-enabled UEs 1002 using a 5G NR interface. The gNB 1016 connects with a 5GC 1040 through an NG interface, which includes an N2 interface or an N3 interface. The ng-eNB 1018 also connects with the 5GC 1040 through an NG interface, but may connect with a UE 1002 via the Uu interface. The gNB 1016 and the ng-eNB 1018 may connect with each other over an Xn interface.
In some embodiments, the NG interface may be split into two parts, an NG user plane (NG-U) interface, which carries traffic data between the nodes of the NG-RAN 1014 and a UPF 1048 (e.g., N3 interface), and an NG control plane (NG-C) interface, which is a signaling interface between the nodes of the NG-RAN 1014 and an AMF 1044 (e.g., N2 interface).
The NG-RAN 1014 may provide a 5G-NR air interface (which may also be referred to as a Uu interface) with the following characteristics: variable SCS; CP-OFDM for DL, CP-OFDM and DFT-s-OFDM for UL; polar, repetition, simplex, and Reed-Muller codes for control and LDPC for data. The 5G-NR air interface may rely on CSI-RS and PDSCH/PDCCH DMRS similar to the LTE air interface. The 5G-NR air interface may not use a CRS, but may use PBCH DMRS for PBCH demodulation; PTRS for phase tracking for PDSCH; and tracking reference signal for time tracking. The 5G-NR air interface may operate on FR1 bands that include sub-6 GHz bands or FR2 bands that include bands from 24.25 GHz to 52.6 GHz. The 5G-NR air interface may include an SSB that is an area of a downlink resource grid that includes PSS/SSS/PBCH.
The 5G-NR air interface may utilize BWPs for various purposes. For example, a BWP can be used for dynamic adaptation of the SCS. For example, the UE 1002 can be configured with multiple BWPs where each BWP configuration has a different SCS. When a BWP change is indicated to the UE 1002, the SCS of the transmission is changed as well. Another use case example of BWP is related to power saving. In particular, multiple BWPs can be configured for the UE 1002 with different amounts of frequency resources (e.g., PRBs) to support data transmission under different traffic loading scenarios. A BWP containing a smaller number of PRBs can be used for data transmission with a small traffic load while allowing power saving at the UE 1002 and in some cases at the gNB 1016. A BWP containing a larger number of PRBs can be used for scenarios with higher traffic load.
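As a loose illustration of the power-saving use case, a scheduler-side helper might pick the smallest configured BWP that still fits the offered traffic. The PRB counts and the selection policy below are example values only, not 3GPP-specified behavior.

```python
# Illustrative BWP choice by traffic load: smaller BWPs save UE power,
# larger BWPs serve higher load. Configurations and policy are assumptions.

def select_bwp(traffic_load_prbs, bwps):
    """Choose the smallest configured BWP that still fits the traffic.

    bwps is a list of dicts with 'id' and 'n_prbs'. If no BWP is large
    enough, fall back to the largest configured BWP.
    """
    for bwp in sorted(bwps, key=lambda b: b["n_prbs"]):
        if bwp["n_prbs"] >= traffic_load_prbs:
            return bwp["id"]
    return max(bwps, key=lambda b: b["n_prbs"])["id"]  # largest as fallback

bwps = [{"id": 0, "n_prbs": 24}, {"id": 1, "n_prbs": 106}, {"id": 2, "n_prbs": 273}]
```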
The RAN 1004 is communicatively coupled to CN 1020 that includes network elements and/or network functions (NFs) to provide various functions to support data and telecommunications services to customers/subscribers (e.g., UE 1002). The components of the CN 1020 may be implemented in one physical node or separate physical nodes. In some embodiments, NFV may be utilized to virtualize any or all of the functions provided by the network elements of the CN 1020 onto physical compute/storage resources in servers, switches, etc. A logical instantiation of the CN 1020 may be referred to as a network slice, and a logical instantiation of a portion of the CN 1020 may be referred to as a network sub-slice.
The CN 1020 may be an LTE CN 1022 (also referred to as an Evolved Packet Core (EPC) 1022). The EPC 1022 may include MME 1024, SGW 1026, SGSN 1028, HSS 1030, PGW 1032, and PCRF 1034 coupled with one another over interfaces (or “reference points”) as shown. The NFs in the EPC 1022 are briefly introduced as follows. The MME 1024 implements mobility management functions to track a current location of the UE 1002 to facilitate paging, bearer activation/deactivation, handovers, gateway selection, authentication, etc.
The SGW 1026 terminates an S1 interface toward the RAN 1010 and routes data packets between the RAN 1010 and the EPC 1022. The SGW 1026 may be a local mobility anchor point for inter-RAN node handovers and also may provide an anchor for inter-3GPP mobility. Other responsibilities may include lawful intercept, charging, and some policy enforcement.
The SGSN 1028 tracks a location of the UE 1002 and performs security functions and access control. The SGSN 1028 also performs inter-EPC node signaling for mobility between different RAT networks; PDN and S-GW selection as specified by MME 1024; MME 1024 selection for handovers; etc. The S3 reference point between the MME 1024 and the SGSN 1028 enables user and bearer information exchange for inter-3GPP access network mobility in idle/active states.
The HSS 1030 includes a database for network users, including subscription-related information to support the network entities’ handling of communication sessions. The HSS 1030 can provide support for routing/roaming, authentication, authorization, naming/addressing resolution, location dependencies, etc. An S6a reference point between the HSS 1030 and the MME 1024 may enable transfer of subscription and authentication data for authenticating/authorizing user access to the EPC 1022.
The PGW 1032 may terminate an SGi interface toward a data network (DN) 1036 that may include an application (app)/content server 1038. The PGW 1032 routes data packets between the EPC 1022 and the data network 1036. The PGW 1032 is communicatively coupled with the SGW 1026 by an S5 reference point to facilitate user plane tunneling and tunnel management. The PGW 1032 may further include a node for policy enforcement and charging data collection (e.g., PCEF). Additionally, the SGi reference point may communicatively couple the PGW 1032 with the same or different data network 1036. The PGW 1032 may be communicatively coupled with a PCRF 1034 via a Gx reference point.
The PCRF 1034 is the policy and charging control element of the EPC 1022. The PCRF 1034 is communicatively coupled to the app/content server 1038 to determine appropriate QoS and charging parameters for service flows. The PCRF 1034 also provisions associated rules into a PCEF (via Gx reference point) with appropriate TFT and QCI.
The CN 1020 may be a 5GC 1040 including an AUSF 1042, AMF 1044, SMF 1046, UPF 1048, NSSF 1050, NEF 1052, NRF 1054, PCF 1056, UDM 1058, and AF 1060 coupled with one another over various interfaces as shown. The NFs in the 5GC 1040 are briefly introduced as follows.
The AUSF 1042 stores data for authentication of UE 1002 and handles authentication-related functionality. The AUSF 1042 may facilitate a common authentication framework for various access types.
The AMF 1044 allows other functions of the 5GC 1040 to communicate with the UE 1002 and the RAN 1004 and to subscribe to notifications about mobility events with respect to the UE 1002. The AMF 1044 is also responsible for registration management (e.g., for registering UE 1002), connection management, reachability management, mobility management, lawful interception of AMF-related events, and access authentication and authorization. The AMF 1044 provides transport for SM messages between the UE 1002 and the SMF 1046, and acts as a transparent proxy for routing SM messages. AMF 1044 also provides transport for SMS messages between UE 1002 and an SMSF. AMF 1044 interacts with the AUSF 1042 and the UE 1002 to perform various security anchor and context management functions. Furthermore, AMF 1044 is a termination point of a RAN-CP interface, which includes the N2 reference point between the RAN 1004 and the AMF 1044. The AMF 1044 is also a termination point of NAS (N1) signaling, and performs NAS ciphering and integrity protection.
AMF 1044 also supports NAS signaling with the UE 1002 over an N3IWF interface. The N3IWF provides access to untrusted entities. The N3IWF may be a termination point for the N2 interface between the (R)AN 1004 and the AMF 1044 for the control plane, and may be a termination point for the N3 reference point between the (R)AN 1014 and the UPF 1048 for the user plane. As such, the N3IWF handles N2 signalling from the SMF 1046 and the AMF 1044 for PDU sessions and QoS, encapsulates/de-encapsulates packets for IPSec and N3 tunnelling, marks N3 user-plane packets in the uplink, and enforces QoS corresponding to N3 packet marking, taking into account QoS requirements associated with such marking received over N2. The N3IWF may also relay UL and DL control-plane NAS signalling between the UE 1002 and AMF 1044 via an N1 reference point between the UE 1002 and the AMF 1044, and relay uplink and downlink user-plane packets between the UE 1002 and UPF 1048. The N3IWF also provides mechanisms for IPsec tunnel establishment with the UE 1002. The AMF 1044 may exhibit an Namf service-based interface, and may be a termination point for an N14 reference point between two AMFs 1044 and an N17 reference point between the AMF 1044 and a 5G-EIR (not shown by Figure 10).
The SMF 1046 is responsible for SM (e.g., session establishment, tunnel management between UPF 1048 and AN 1008); UE IP address allocation and management (including optional authorization); selection and control of UP function; configuring traffic steering at UPF 1048 to route traffic to proper destination; termination of interfaces toward policy control functions; controlling part of policy enforcement, charging, and QoS; lawful intercept (for SM events and interface to LI system); termination of SM parts of NAS messages; downlink data notification; initiating AN specific SM information, sent via AMF 1044 over N2 to AN 1008; and determining SSC mode of a session. SM refers to management of a PDU session, and a PDU session or “session” refers to a PDU connectivity service that provides or enables the exchange of PDUs between the UE 1002 and the DN 1036.
The UPF 1048 acts as an anchor point for intra-RAT and inter-RAT mobility, an external PDU session point of interconnect to data network 1036, and a branching point to support multi-homed PDU sessions. The UPF 1048 also performs packet routing and forwarding, performs packet inspection, enforces the user plane part of policy rules, lawfully intercepts packets (UP collection), performs traffic usage reporting, performs QoS handling for the user plane (e.g., packet filtering, gating, UL/DL rate enforcement), performs uplink traffic verification (e.g., SDF-to-QoS flow mapping), performs transport level packet marking in the uplink and downlink, and performs downlink packet buffering and downlink data notification triggering. UPF 1048 may include an uplink classifier to support routing traffic flows to a data network.
The NSSF 1050 selects a set of network slice instances serving the UE 1002. The NSSF 1050 also determines allowed NSSAI and the mapping to the subscribed S-NSSAIs, if needed. The NSSF 1050 also determines an AMF set to be used to serve the UE 1002, or a list of candidate AMFs 1044 based on a suitable configuration and possibly by querying the NRF 1054. The selection of a set of network slice instances for the UE 1002 may be triggered by the AMF 1044 with which the UE 1002 is registered by interacting with the NSSF 1050; this may lead to a change of AMF 1044. The NSSF 1050 interacts with the AMF 1044 via an N22 reference point; and may communicate with another NSSF in a visited network via an N31 reference point (not shown).
The NEF 1052 securely exposes services and capabilities provided by 3GPP NFs for third party, internal exposure/re-exposure, AFs 1060, edge computing or fog computing systems (e.g., an edge compute node), etc. In such embodiments, the NEF 1052 may authenticate, authorize, or throttle the AFs. NEF 1052 may also translate information exchanged with the AF 1060 and information exchanged with internal network functions. For example, the NEF 1052 may translate between an AF-Service-Identifier and internal 5GC information. NEF 1052 may also receive information from other NFs based on exposed capabilities of other NFs. This information may be stored at the NEF 1052 as structured data, or at a data storage NF using standardized interfaces. The stored information can then be re-exposed by the NEF 1052 to other NFs and AFs, or used for other purposes such as analytics. The NRF 1054 supports service discovery functions: it receives NF discovery requests from NF instances or an SCP (not shown), and provides information of the discovered NF instances to the requesting NF instances or SCP. The NRF 1054 also maintains information of available NF instances and their supported services.
The PCF 1056 provides policy rules to control plane functions to enforce them, and may also support a unified policy framework to govern network behavior. The PCF 1056 may also implement a front end to access subscription information relevant for policy decisions in a UDR of the UDM 1058. In addition to communicating with functions over reference points as shown, the PCF 1056 may exhibit an Npcf service-based interface.
The UDM 1058 handles subscription-related information to support the network entities’ handling of communication sessions, and stores subscription data of UE 1002. For example, subscription data may be communicated via an N8 reference point between the UDM 1058 and the AMF 1044. The UDM 1058 may include two parts, an application front end and a UDR. The UDR may store subscription data and policy data for the UDM 1058 and the PCF 1056, and/or structured data for exposure and application data (including PFDs for application detection, application request information for multiple UEs 1002) for the NEF 1052. The Nudr service-based interface may be exhibited by the UDR to allow the UDM 1058, PCF 1056, and NEF 1052 to access a particular set of the stored data, as well as to read, update (e.g., add, modify), delete, and subscribe to notification of relevant data changes in the UDR. The UDM may include a UDM-FE, which is in charge of processing credentials, location management, subscription management and so on. Several different front ends may serve the same user in different transactions. The UDM-FE accesses subscription information stored in the UDR and performs authentication credential processing, user identification handling, access authorization, registration/mobility management, and subscription management. In addition to communicating with other NFs over reference points as shown, the UDM 1058 may exhibit the Nudm service-based interface.
The AF 1060 provides application influence on traffic routing, provides access to the NEF 1052, and interacts with the policy framework for policy control. The AF 1060 may influence UPF 1048 (re)selection and traffic routing. Based on operator deployment, when the AF 1060 is considered to be a trusted entity, the network operator may permit the AF 1060 to interact directly with relevant NFs. Additionally, the AF 1060 may be used for edge computing implementations. The 5GC 1040 may enable edge computing by selecting operator/3rd party services to be geographically close to a point that the UE 1002 is attached to the network. This may reduce latency and load on the network. In edge computing implementations, the 5GC 1040 may select a UPF 1048 close to the UE 1002 and execute traffic steering from the UPF 1048 to DN 1036 via the N6 interface. This may be based on the UE subscription data, UE location, and information provided by the AF 1060, which allows the AF 1060 to influence UPF (re)selection and traffic routing.
The data network (DN) 1036 may represent various network operator services, Internet access, or third party services that may be provided by one or more servers including, for example, an application (app)/content server 1038. The DN 1036 may be an operator-external public PDN, a private PDN, or an intra-operator packet data network, for example, for provision of IMS services. In this embodiment, the app server 1038 can be coupled to an IMS via an S-CSCF or the I-CSCF. In some implementations, the DN 1036 may represent one or more local area DNs (LADNs), which are DNs 1036 (or DN names (DNNs)) that are accessible by a UE 1002 in one or more specific areas. Outside of these specific areas, the UE 1002 is not able to access the LADN/DN 1036.
Additionally or alternatively, the DN 1036 may be an Edge DN 1036, which is a (local) Data Network that supports the architecture for enabling edge applications. In these embodiments, the app server 1038 may represent the physical hardware systems/devices providing app server functionality and/or the application software resident in the cloud or at an edge compute node that performs server function(s). In some embodiments, the app/content server 1038 provides an edge hosting environment that provides the support required for an Edge Application Server’s execution.
In some embodiments, the 5GS can use one or more edge compute nodes to provide an interface and offload processing of wireless communication traffic. In these embodiments, the edge compute nodes may be included in, or co-located with, one or more RANs 1010, 1014. For example, the edge compute nodes can provide a connection between the RAN 1014 and UPF 1048 in the 5GC 1040. The edge compute nodes can use one or more NFV instances instantiated on virtualization infrastructure within the edge compute nodes to process wireless connections to and from the RAN 1014 and UPF 1048.
The interfaces of the 5GC 1040 include reference points and service-based interfaces. The reference points include: N1 (between the UE 1002 and the AMF 1044), N2 (between RAN 1014 and AMF 1044), N3 (between RAN 1014 and UPF 1048), N4 (between the SMF 1046 and UPF 1048), N5 (between PCF 1056 and AF 1060), N6 (between UPF 1048 and DN 1036), N7 (between SMF 1046 and PCF 1056), N8 (between UDM 1058 and AMF 1044), N9 (between two UPFs 1048), N10 (between the UDM 1058 and the SMF 1046), N11 (between the AMF 1044 and the SMF 1046), N12 (between AUSF 1042 and AMF 1044), N13 (between AUSF 1042 and UDM 1058), N14 (between two AMFs 1044; not shown), N15 (between PCF 1056 and AMF 1044 in case of a non-roaming scenario, or between the PCF 1056 in a visited network and AMF 1044 in case of a roaming scenario), N16 (between two SMFs 1046; not shown), and N22 (between AMF 1044 and NSSF 1050). Other reference point representations not shown in Figure 10 can also be used. The service-based representation of Figure 10 represents NFs within the control plane that enable other authorized NFs to access their services. The service-based interfaces (SBIs) include: Namf (SBI exhibited by AMF 1044), Nsmf (SBI exhibited by SMF 1046), Nnef (SBI exhibited by NEF 1052), Npcf (SBI exhibited by PCF 1056), Nudm (SBI exhibited by the UDM 1058), Naf (SBI exhibited by AF 1060), Nnrf (SBI exhibited by NRF 1054), Nnssf (SBI exhibited by NSSF 1050), Nausf (SBI exhibited by AUSF 1042). Other service-based interfaces (e.g., Nudr, N5g-eir, and Nudsf) not shown in Figure 10 can also be used. In some embodiments, the NEF 1052 can provide an interface to edge compute nodes 1036x, which can be used to process wireless connections with the RAN 1014.
In some implementations, the system 1000 may include an SMSF, which is responsible for SMS subscription checking and verification, and relaying SM messages to/from the UE 1002 to/from other entities, such as an SMS-GMSC/IWMSC/SMS-router. The SMSF may also interact with AMF 1044 and UDM 1058 for a notification procedure that the UE 1002 is available for SMS transfer (e.g., set a UE not reachable flag, and notifying UDM 1058 when UE 1002 is available for SMS).
The 5GS may also include an SCP (or individual instances of the SCP) that supports indirect communication (see e.g., 3GPP TS 23.501 section 7.1.1); delegated discovery (see e.g., 3GPP TS 23.501 section 7.1.1); message forwarding and routing to destination NF/NF service(s), communication security (e.g., authorization of the NF Service Consumer to access the NF Service Producer API) (see e.g., 3GPP TS 33.501), load balancing, monitoring, overload control, etc.; and discovery and selection functionality for UDM(s), AUSF(s), UDR(s), PCF(s) with access to subscription data stored in the UDR based on UE's SUPI, SUCI or GPSI (see e.g., 3GPP TS 23.501 section 6.3). Load balancing, monitoring, and overload control functionality provided by the SCP may be implementation specific. The SCP may be deployed in a distributed manner. More than one SCP can be present in the communication path between various NF Services. The SCP, although not an NF instance, can also be deployed in a distributed, redundant, and scalable manner.
Figure 11 schematically illustrates a wireless network 1100 in accordance with various embodiments. The wireless network 1100 may include a UE 1102 in wireless communication with an AN 1104. The UE 1102 and AN 1104 may be similar to, and substantially interchangeable with, like-named components described with respect to Figure 10. The UE 1102 may be communicatively coupled with the AN 1104 via connection 1106. The connection 1106 is illustrated as an air interface to enable communicative coupling, and can be consistent with cellular communications protocols such as an LTE protocol or a 5G NR protocol operating at mmWave or sub-6 GHz frequencies.
The UE 1102 may include a host platform 1108 coupled with a modem platform 1110. The host platform 1108 may include application processing circuitry 1112, which may be coupled with protocol processing circuitry 1114 of the modem platform 1110. The application processing circuitry 1112 may run various applications for the UE 1102 that source/sink application data. The application processing circuitry 1112 may further implement one or more layer operations to transmit/receive application data to/from a data network. These layer operations may include transport (for example, UDP) and Internet (for example, IP) operations. The protocol processing circuitry 1114 may implement one or more layer operations to facilitate transmission or reception of data over the connection 1106. The layer operations implemented by the protocol processing circuitry 1114 may include, for example, MAC, RLC, PDCP, RRC and NAS operations.
The modem platform 1110 may further include digital baseband circuitry 1116 that may implement one or more layer operations that are “below” layer operations performed by the protocol processing circuitry 1114 in a network protocol stack. These operations may include, for example, PHY operations including one or more of HARQ acknowledgement (ACK) functions, scrambling/descrambling, encoding/decoding, layer mapping/de-mapping, modulation symbol mapping, received symbol/bit metric determination, multi-antenna port precoding/decoding, which may include one or more of space-time, space-frequency or spatial coding, reference signal generation/detection, preamble sequence generation and/or decoding, synchronization sequence generation/detection, control channel signal blind decoding, and other related functions.
The modem platform 1110 may further include transmit circuitry 1118, receive circuitry 1120, RF circuitry 1122, and RF front end (RFFE) 1124, which may include or connect to one or more antenna panels 1126. Briefly, the transmit circuitry 1118 may include a digital-to-analog converter, mixer, intermediate frequency (IF) components, etc.; the receive circuitry 1120 may include an analog-to-digital converter, mixer, IF components, etc.; the RF circuitry 1122 may include a low-noise amplifier, a power amplifier, power tracking components, etc.; RFFE 1124 may include filters (for example, surface/bulk acoustic wave filters), switches, antenna tuners, beamforming components (for example, phase-array antenna components), etc. The selection and arrangement of the components of the transmit circuitry 1118, receive circuitry 1120, RF circuitry 1122, RFFE 1124, and antenna panels 1126 (referred to generically as "transmit/receive components") may be specific to the details of a particular implementation such as, for example, whether communication is TDM or FDM, in mmWave or sub-6 GHz frequencies, etc. In some embodiments, the transmit/receive components may be arranged in multiple parallel transmit/receive chains, may be disposed in the same or different chips/modules, etc.
In some embodiments, the protocol processing circuitry 1114 may include one or more instances of control circuitry (not shown) to provide control functions for the transmit/receive components.
A UE 1102 reception may be established by and via the antenna panels 1126, RFFE 1124, RF circuitry 1122, receive circuitry 1120, digital baseband circuitry 1116, and protocol processing circuitry 1114. In some embodiments, the antenna panels 1126 may receive a transmission from the AN 1104 by receive-beamforming signals received by a plurality of antennas/antenna elements of the one or more antenna panels 1126.
A UE 1102 transmission may be established by and via the protocol processing circuitry 1114, digital baseband circuitry 1116, transmit circuitry 1118, RF circuitry 1122, RFFE 1124, and antenna panels 1126. In some embodiments, the transmit components of the UE 1102 may apply a spatial filter to the data to be transmitted to form a transmit beam emitted by the antenna elements of the antenna panels 1126.
Similar to the UE 1102, the AN 1104 may include a host platform 1128 coupled with a modem platform 1130. The host platform 1128 may include application processing circuitry 1132 coupled with protocol processing circuitry 1134 of the modem platform 1130. The modem platform may further include digital baseband circuitry 1136, transmit circuitry 1138, receive circuitry 1140, RF circuitry 1142, RFFE circuitry 1144, and antenna panels 1146. The components of the AN 1104 may be similar to and substantially interchangeable with like-named components of the UE 1102. In addition to performing data transmission/reception as described above, the components of the AN 1104 may perform various logical functions that include, for example, RNC functions such as radio bearer management, uplink and downlink dynamic radio resource management, and data packet scheduling.
Figure 12 illustrates components of a computing device 1200 according to some example embodiments, able to read instructions from a machine-readable or computer-readable medium (e.g., a non-transitory machine-readable storage medium) and perform any one or more of the methodologies discussed herein. Specifically, Figure 12 shows a diagrammatic representation of hardware resources 1200 including one or more processors (or processor cores) 1210, one or more memory/storage devices 1220, and one or more communication resources 1230, each of which may be communicatively coupled via a bus 1240 or other interface circuitry. For embodiments where node virtualization (e.g., NFV) is utilized, a hypervisor 1202 may be executed to provide an execution environment for one or more network slices/sub-slices to utilize the hardware resources 1200.
The processors 1210 include, for example, processor 1212 and processor 1214. The processors 1210 include circuitry such as, but not limited to, one or more processor cores and one or more of cache memory, low drop-out voltage regulators (LDOs), interrupt controllers, serial interfaces such as SPI, I2C or universal programmable serial interface circuit, real time clock (RTC), timer-counters including interval and watchdog timers, general purpose I/O, memory card controllers such as secure digital/multi-media card (SD/MMC) or similar interfaces, mobile industry processor interface (MIPI) interfaces and Joint Test Access Group (JTAG) test access ports. The processors 1210 may be, for example, a central processing unit (CPU), reduced instruction set computing (RISC) processors, Acorn RISC Machine (ARM) processors, complex instruction set computing (CISC) processors, graphics processing units (GPUs), one or more Digital Signal Processors (DSPs) such as a baseband processor, Application-Specific Integrated Circuits (ASICs), a Field-Programmable Gate Array (FPGA), a radio-frequency integrated circuit (RFIC), one or more microprocessors or controllers, another processor (including those discussed herein), or any suitable combination thereof. In some implementations, the processor circuitry 1210 may include one or more hardware accelerators, which may be microprocessors, programmable processing devices (e.g., FPGA, complex programmable logic devices (CPLDs), etc.), or the like.
The memory/storage devices 1220 may include main memory, disk storage, or any suitable combination thereof. The memory/storage devices 1220 may include, but are not limited to, any type of volatile, non-volatile, or semi-volatile memory such as random access memory (RAM), dynamic RAM (DRAM), static RAM (SRAM), synchronous DRAM (SDRAM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), Flash memory, solid-state storage, phase change RAM (PRAM), resistive memory such as magnetoresistive random access memory (MRAM), etc., and may incorporate three-dimensional (3D) cross-point (XPOINT) memories from Intel® and Micron®. The memory/storage devices 1220 may also comprise persistent storage devices, which may be temporal and/or persistent storage of any type, including, but not limited to, non-volatile memory, optical, magnetic, and/or solid state mass storage, and so forth.
The communication resources 1230 may include interconnection or network interface controllers, components, or other suitable devices to communicate with one or more peripheral devices 1204 or one or more databases 1206 or other network elements via a network 1208. For example, the communication resources 1230 may include wired communication components (e.g., for coupling via USB, Ethernet, Ethernet over GRE Tunnels, Ethernet over Multiprotocol Label Switching (MPLS), Ethernet over USB, Controller Area Network (CAN), Local Interconnect Network (LIN), DeviceNet, ControlNet, Data Highway+, PROFIBUS, or PROFINET, among many others), cellular communication components, NFC components, Bluetooth® (or Bluetooth® Low Energy) components, WiFi® components, and other communication components. Network connectivity may be provided to/from the computing device 1200 via the communication resources 1230 using a physical connection, which may be electrical (e.g., a "copper interconnect") or optical. The physical connection also includes suitable input connectors (e.g., ports, receptacles, sockets, etc.) and output connectors (e.g., plugs, pins, etc.). The communication resources 1230 may include one or more dedicated processors and/or FPGAs to communicate using one or more of the aforementioned network interface protocols.
Instructions 1250 may comprise software, a program, an application, an applet, an app, or other executable code for causing at least any of the processors 1210 to perform any one or more of the methodologies discussed herein. The instructions 1250 may reside, completely or partially, within at least one of the processors 1210 (e.g., within the processor’s cache memory), the memory/storage devices 1220, or any suitable combination thereof. Furthermore, any portion of the instructions 1250 may be transferred to the hardware resources 1200 from any combination of the peripheral devices 1204 or the databases 1206. Accordingly, the memory of processors 1210, the memory/storage devices 1220, the peripheral devices 1204, and the databases 1206 are examples of computer-readable and machine-readable media.
Additional examples of the presently described embodiments include the following, nonlimiting implementations. Each of the following non-limiting examples may stand on its own or may be combined in any permutation or combination with any one or more of the other examples provided below or throughout the present disclosure.
Example 1 includes an in-network DNS function as part of an in-network compute service function, wherein a DHCP server can inform a device of the CSF address as the DNS server address; the DHCP server of the ISP can also inform the device of CSF server configurations (whether the ISP supports a CSF, and the CSF IP address); if the CSF address is used as the DNS server address, domain name resolution requests will go directly to the configured CSF; and/or if a conventional DNS server is used, network switches with CSF capabilities can filter out DNS requests and check their locally cached domain name records to resolve domain names.
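The resolver-selection and switch-side caching behavior of Example 1 can be sketched as follows. This is a minimal illustration only, not part of the specification: the names (`DhcpConfig`, `select_resolver`, `CsfSwitch`) and the cache policy are assumptions made for the sketch.

```python
# Sketch of Example 1: a device picks its DNS resolver based on
# DHCP-provided CSF configuration, and a CSF-capable switch answers
# DNS requests from locally cached records when possible.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DhcpConfig:
    dns_server: str              # conventional DNS server address
    csf_supported: bool          # whether the ISP supports a CSF
    csf_address: Optional[str] = None

def select_resolver(cfg: DhcpConfig) -> str:
    """If the ISP advertises a CSF, use the CSF address as the DNS server
    so domain name resolution requests go directly to the CSF."""
    if cfg.csf_supported and cfg.csf_address:
        return cfg.csf_address
    return cfg.dns_server

class CsfSwitch:
    """A network switch with CSF capabilities: it filters DNS requests
    and resolves from its local record cache before asking upstream."""
    def __init__(self, upstream_dns):
        self.cache = {}              # domain -> address
        self.upstream = upstream_dns # callable: domain -> address

    def resolve(self, domain: str) -> str:
        if domain in self.cache:     # locally cached record
            return self.cache[domain]
        addr = self.upstream(domain) # fall through to conventional DNS
        self.cache[domain] = addr
        return addr
```

A device configured with `DhcpConfig(dns_server="203.0.113.53", csf_supported=True, csf_address="198.51.100.7")` would thus direct its name-resolution traffic to the CSF at `198.51.100.7`.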
Example 2 includes a service registry function that registers services and service instances, including in-network services and services provided by external providers outside the network. Example 3 includes the service registry function of example 2 and/or some other example(s) herein, wherein achieving service registration comprises a subscription-based approach where the service registry subscribes to service events. If there are new events in the service, e.g., a newly instantiated service instance, a service update, etc., the service registry will be notified.
Example 4 includes the service registry function of examples 2-3 and/or some other example(s) herein, wherein achieving service registration comprises an inquiry-based approach, wherein the service registry regularly sends an inquiry message to each service. Each service responds to the inquiry with updated service status.
Example 5 includes the service registry function of examples 2-4 and/or some other example(s) herein, wherein achieving service registration comprises a notification-based approach, wherein each service sends a notification message to the service registry when there is a status change.
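The three registration-update approaches of Examples 3-5 can be sketched together as follows. All class and method names are hypothetical, chosen only to illustrate the subscription, inquiry, and notification patterns; the specification does not define these interfaces.

```python
# Sketch of Examples 3-5: a service registry kept current via
# subscription (events pushed by subscribed-to services), inquiry
# (registry polls each service), and notification (service pushes
# status changes directly).
class ServiceRegistry:
    def __init__(self):
        self.records = {}                  # service name -> status

    def register(self, name, status="available"):
        self.records[name] = status

    # Notification-based (Example 5): the service pushes status changes.
    def notify(self, name, status):
        self.records[name] = status

    # Inquiry-based (Example 4): the registry polls each service.
    def poll(self, services):
        for name, svc in services.items():
            self.records[name] = svc.current_status()

class Service:
    def __init__(self, status="available"):
        self._status = status
    def current_status(self):
        return self._status

class EventfulService:
    """Subscription-based (Example 3): the registry subscribes and is
    called back on service events, e.g. a newly instantiated instance."""
    def __init__(self):
        self._subscribers = []
    def subscribe(self, callback):
        self._subscribers.append(callback)
    def emit(self, name, status):
        for cb in self._subscribers:
            cb(name, status)
```

For example, a registry could subscribe its own `notify` method to an `EventfulService`, so any emitted service event updates the registry's records.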
Example 6 includes an in-network service mesh function as part of the in-network compute service function, wherein the in-network service mesh incorporates service orchestration functions such as service discovery, load balancing, service instance assignment, and service access control in the network; the in-network service mesh function has interfaces with the service registry function, the network controller or DHCP server, and the devices; and there are also interfaces among in-network service mesh function instances (as shown in Figure 1).
Example 7 includes a method of operating an in-network service mesh function as part of the in-network compute service function (including the in-network service mesh function of example 6 and/or some other example(s) herein), the method comprising: device association with one of the in-network service mesh function instances; the device sends a service request to the associated in-network service mesh function; the in-network service mesh function inquires about the service from the service registry; the in-network service mesh function selects a service instance for the device, wherein the selection decision can be based on factors such as the location of the device, the location of the service instance, the load of the service instance, the estimated service response time, etc.; and the in-network service mesh function responds to the device's service request, wherein the response message can include an assigned service instance address, an allocated computing resource, service execution results, etc.
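The selection step of Example 7 can be sketched as a cost function over the listed factors. The weights, the `(x, y)` stand-in for location, and all names here are illustrative assumptions, not values from the specification.

```python
# Sketch of the Example 7 selection decision: rank candidate service
# instances by a combined cost of distance to the device, instance
# load, and estimated response time (weights are illustrative).
from dataclasses import dataclass

@dataclass
class ServiceInstance:
    address: str
    location: tuple          # (x, y) placeholder for topological location
    load: float              # 0.0 (idle) .. 1.0 (saturated)
    est_response_ms: float

def select_instance(device_location, instances):
    """Return the instance with the lowest combined cost."""
    def cost(inst):
        dx = inst.location[0] - device_location[0]
        dy = inst.location[1] - device_location[1]
        distance = (dx * dx + dy * dy) ** 0.5
        return distance + 10.0 * inst.load + 0.1 * inst.est_response_ms
    return min(instances, key=cost)
```

A nearby but heavily loaded instance can thus lose to a slightly farther, lightly loaded one, which matches the intent of weighing load and response time alongside location.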
Example 8 includes the method of example 7 and/or some other example(s) herein, wherein the device association with one of the in-network service mesh function instances comprises network controller/DHCP server managed association. In this option, when a device first attaches to a network, the network controller or the DHCP server will assign a CSF for the device. The network controller or the DHCP server configures the device with the CSF address. Example 9 includes the method of examples 7-8 and/or some other example(s) herein, wherein the device association with one of the in-network service mesh function instances comprises device autonomous association, wherein, when a device first attaches to a network, the device broadcasts its CSF association request message; CSFs that receive the association message respond to the device; the response message can include information on the CSF address, service capabilities, etc.; and if more than one CSF responds to the device, the device can select one CSF to associate with (by sending an association request to the selected CSF, which responds with an admission message).
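The device-autonomous option of Example 9 can be sketched as follows. The response format and the selection policy (prefer a CSF advertising a wanted capability, otherwise take the first responder) are assumptions for illustration; the specification leaves the selection criterion open.

```python
# Sketch of Example 9: after broadcasting an association request, the
# device collects CSF responses (address plus service capabilities)
# and selects one CSF to associate with.
def select_csf(responses, wanted_capability):
    """responses: list of dicts like
    {"address": "...", "capabilities": ["dns", ...]}.
    Returns the address of the chosen CSF, or None if no CSF responded."""
    if not responses:
        return None
    for r in responses:
        if wanted_capability in r.get("capabilities", []):
            return r["address"]       # first CSF with the capability
    return responses[0]["address"]    # fallback: first responder
```

The device would then complete the handshake by sending an association request to the returned address and waiting for the CSF's admission message.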
Example 10 includes an in-network service orchestration function - network (SOF-N) in the in-network computing service function and a service orchestration function - client (SOF-C), wherein the SOF-N and SOF-C are used to dynamically steer service execution between device and network.
Example 11 includes a method of operating an SOF-N including the SOF-N of example 10 and/or some other examples herein, the method comprising: client applications interact with the service orchestration function - client (SOF-C) to request services; the SOF-C interacts with the client OS and the service orchestration function - network (SOF-N) to get information on service availability (locally and in the network); and the SOF-C can then dynamically steer the application's service request between local execution and network execution.
Example 12 includes the method of example 11 and/or some other example(s) herein, further comprising: if execution is done locally, the SOF-C will send the service request to the local OS, e.g., via a system call.
Example 13 includes the method of examples 11-12 and/or some other example(s) herein, further comprising: if execution is done in the network, the SOF-C will send the service request to the CSF, e.g., via a remote procedure call.
Example 14 includes the method of examples 10-13 and/or some other example(s) herein, wherein the decision on where to execute the requested service can be made by the SOF-C or the SOF-N.
Example 15 includes the method of example 14 and/or some other example(s) herein, wherein, when the SOF-C is making the decision, the SOF-C will collect information from the local OS and the SOF-N on service availability, network conditions, device battery level, estimated service execution time, etc.
Example 16 includes the method of examples 14-15 and/or some other example(s) herein, wherein, when the SOF-N is making the decision, the SOF-N will collect information from the SOF-C, the network service registry, and the network computing resource platform on service availability, network conditions, device battery level, network computing load, estimated service execution time, etc.
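The steering decision of Examples 11-16 can be sketched as a small policy function over the factors the examples enumerate. The thresholds (e.g., offloading below 20% battery) and tie-breaking rules are assumptions for illustration only.

```python
# Sketch of Examples 11-16: decide whether a service request is
# steered to local execution or network execution, based on service
# availability, battery level, and estimated execution times.
def steer(local_available, network_available,
          battery_level, est_local_ms, est_network_ms):
    """Return 'local' or 'network'. Prefer whichever side can serve
    the request; on low battery, offload to the network when possible;
    otherwise pick the faster estimated execution time."""
    if local_available and not network_available:
        return "local"
    if network_available and not local_available:
        return "network"
    if not (local_available or network_available):
        raise RuntimeError("service unavailable locally and in network")
    if battery_level < 0.2:          # illustrative low-battery threshold
        return "network"
    return "local" if est_local_ms <= est_network_ms else "network"
```

Per Example 12 vs. Example 13, a `"local"` result would be dispatched via a system call to the OS, and a `"network"` result via a remote procedure call to the CSF.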
Example 17 includes an in-network resource orchestration function - network (ROF-N) in the in-network computing service function and a resource orchestration function - client (ROF-C), wherein the resource orchestration functions on the client side and the network side are used to orchestrate computing resources in the device and the network.
Example 18 includes a method of operating a ROF-N including the ROF-N of example 17 and/or some other examples herein, the method comprising: the resource orchestration function - client (ROF-C) interacts with the resource orchestration function - network (ROF-N) to get a list of network computing resources and capabilities the client can use; the ROF-C exposes the computing resource list and capabilities to the OS; when the OS requests a computing resource, the ROF-C sends a resource request to the ROF-N; the ROF-N decides whether to accept the ROF-C's request and responds to the request; and if the resource request is accepted, the ROF-C and ROF-N establish a data path between the device and the network for access to the computing resource. The OS can then use the computing resource via the data path.
Example 19 includes the method of example 18 and/or some other example(s) herein, wherein the ROF-C interacting with the ROF-N to get the list of network computing resources and capabilities the client can use comprises: the ROF-N broadcasts the list of network computing resources and capabilities as system information.
Example 20 includes the method of examples 18-19 and/or some other example(s) herein, wherein the ROF-C interacting with the ROF-N to get the list of network computing resources and capabilities the client can use comprises: the ROF-C sends an inquiry message to the ROF-N to inquire about network computing resources and capabilities, and the ROF-N responds to the ROF-C's inquiry.
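The Example 18 exchange can be sketched end to end as follows. The `RofN`/`RofC` classes, the unit-counted resource model, and the string stand-in for a data path are all illustrative assumptions; the specification does not define these interfaces.

```python
# Sketch of Examples 18-20: the ROF-C fetches the network's resource
# list (inquiry-style), exposes it to the OS, forwards an OS resource
# request to the ROF-N, and establishes a data path on acceptance.
class RofN:
    def __init__(self, resources):
        self.resources = dict(resources)   # resource name -> free units

    def list_resources(self):
        """Inquiry response (Example 20): advertise usable resources."""
        return list(self.resources)

    def request(self, name, units):
        """Decide whether to accept the ROF-C's resource request."""
        if self.resources.get(name, 0) >= units:
            self.resources[name] -= units
            return True
        return False

class RofC:
    def __init__(self, rof_n):
        self.rof_n = rof_n
        self.data_paths = []

    def exposed_resources(self):
        """What the ROF-C exposes to the client OS."""
        return self.rof_n.list_resources()

    def os_request(self, name, units):
        """Forward an OS request; on acceptance, establish a data path
        through which the OS can use the computing resource."""
        if self.rof_n.request(name, units):
            path = f"path:{name}"          # stand-in for a real data path
            self.data_paths.append(path)
            return path
        return None                        # request rejected by ROF-N
```

For the broadcast option of Example 19, the `list_resources` payload would instead be carried in a system-information message rather than returned on inquiry.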
Example 21 includes a method for operating an in-network service mesh function as part of the in-network compute service function, wherein the in-network service mesh incorporates service orchestration functions including service discovery, load balancing, service instance assignment, and service access control in the network; the in-network service mesh function has interfaces with a service registry function, a network controller or a DHCP server, and the devices; and there are also interfaces among various in-network service mesh function instances, wherein the method comprises: identifying, by an in-network service mesh function (INSMF) instance, an association with a device; receiving a service request by the INSMF instance from the device; querying, by the INSMF instance, a service registry for service information based on the service request; selecting, by the INSMF instance, a service instance for the device based on the service information; and providing, by the INSMF instance, a service response including the selected service instance and/or related information. Example 22 includes the method of example 21 and/or some other example(s) herein, wherein the selecting is based on one or more factors including a location of the device, a location of the INSMF instance, a load of the INSMF instance, an estimated service response time, and/or some other parameters or criteria.
Example 23 includes the method of examples 21-22 and/or some other example(s) herein, wherein the service response includes an assigned service instance address, an allocated computing resource, service execution results, and/or other information related to the selected service instance.
Example 24 includes a method for operating an in-network service orchestration function - network (SOF-N) in an in-network computing service function and a service orchestration function - client (SOF-C), wherein the SOF-N and SOF-C are used to dynamically steer service execution between a client device and a network function, the method comprising: identifying, by the SOF-C, a service request from a client application (app) of the client device; interacting, by the SOF-C, with a client OS of the client device and the SOF-N to obtain information on service availability (e.g., locally at the client and/or in network); and dynamically steering, by the SOF-C, the service request between local execution and network execution.
Example 25 includes the method of example 24 and/or some other example(s) herein, wherein when the execution is done locally, the method further comprises: sending, by the SOF-C, the service request to a local OS via system call.
Example 26 includes the method of example 24 and/or some other example(s) herein, wherein when the execution is done in the network, the method further comprises: sending, by the SOF-C, the service request to a computing service function (CSF) via an API call or a remote procedure call.
Example 27 includes a method for operating an in-network resource orchestration function - network (ROF-N) in an in-network computing service function and a resource orchestration function - client (ROF-C), wherein the ROF-C and ROF-N are used to orchestrate computing resources in the client device and the network, the method comprising: requesting, by the ROF-C from the ROF-N, a list of network computing resources and capabilities the client device can use; exposing, by the ROF-C, the list of network computing resources and capabilities to a client OS implemented by the client device; sending, by the ROF-C, a resource request to the ROF-N in response to an OS request for a computing resource from the list; receiving, by the ROF-C from the ROF-N, a response to the resource request indicating whether the resource request was accepted or not; and establishing, by the ROF-C with the ROF-N, a data path between the client device and the network for access to the computing resource when the resource request is accepted. Example 28 includes the method of example 27 and/or some other example(s) herein, wherein the ROF-N broadcasts the list of network computing resources and capabilities as system information in a suitable system information containing message.
Example 29 includes the method of example 27 and/or some other example(s) herein, further comprising: sending, by the ROF-C, an inquiry message to the ROF-N to inquire about the network computing resources and capabilities.
Example 30 includes the method of example 29 and/or some other example(s) herein, further comprising: receiving, by the ROF-C from the ROF-N, a response including the network computing resources and capabilities.
Example 31 includes the method of examples 27-30 and/or some other example(s) herein, wherein the OS can use the computing resource via the data path.
Example 32 may include a method to be implemented by an in-network service mesh function of a compute service function (CSF), the method comprising: identifying, by the in-network service mesh function, a device associated with the in-network service mesh function; identifying, by the in-network service mesh function, a service request received from the device; identifying, by the in-network service mesh function in a service registry of the CSF based on the service request, a service instance for the device; and transmitting, by the in-network service mesh function, an indication of the service instance.
Example 33 may include the method of example 32 or some other example herein, wherein the identification of the service instance is based on transmitting, by the in-network service mesh function, a query related to the service request to the service registry.
Example 34 may include the method of example 32 or some other example herein, wherein the in-network service mesh function is one of a plurality of in-network service mesh functions of the CSF.
Example 35 may include the method of example 34 or some other example herein, wherein respective ones of the plurality of in-network service mesh functions are associated with different ones of a plurality of devices.
Example 36 may include the method of example 32 or some other example herein, wherein the in-network service mesh function is to identify the service instance for the device based on at least one of: location of the device, location of the service instance, load of the service instance, and estimated service response time.
Example 37 may include the method of example 32 or some other example herein, wherein the indication of the service instance includes at least one of: an assigned service instance address, an allocated computing resource, and service execution results. Example 38 may include the method of example 32 or some other example herein, wherein the device is associated with the in-network service mesh function through one or more of: a network-controller managed association, a dynamic host configuration protocol (DHCP) server managed association, and a device autonomous association.
Example 39 may include one or more non-transitory computer-readable media comprising instructions that, upon execution of the instructions by one or more processors of a device, are to cause a service orchestration function - client (SOF-C) of the device to: identify, from a client application of the device, a service request; identify, based on interaction with one or more of an operating system (OS) of the device and a service orchestration function - network (SOF-N) of a compute service function (CSF) to which the device is communicatively coupled, information related to availability of a service related to the service request; and steer, based on the information related to the availability of the service, the service request for local execution or network execution.
Example 40 may include the one or more non-transitory computer-readable media of example 39 or some other example herein, wherein local execution relates to execution of the service by the device.
Example 41 may include the one or more non-transitory computer-readable media of example 39 or some other example herein, wherein network execution relates to execution of the service by an electronic device to which the device is communicatively coupled.
Example 42 may include the one or more non-transitory computer-readable media of example 39 or some other example herein, wherein the steering is based on a determination made by the SOF-N based on the information related to availability of the service.
Example 43 may include the one or more non-transitory computer-readable media of example 42 or some other example herein, wherein the information is related to a network service registry.
Example 44 may include the one or more non-transitory computer-readable media of example 39 or some other example herein, wherein the steering is based on a determination made by the SOF-C based on the information related to availability of the service.
Example 45 may include the one or more non-transitory computer-readable media of example 39 or some other example herein, wherein the information includes information related to one or more of: service availability, network condition, device battery level, and estimated service execution time.
Example 46 may include an apparatus comprising: one or more processors; one or more non-transitory computer-readable media comprising instructions that, upon execution of the instructions by the one or more processors, are to cause a resource orchestration function - client (ROF-C) to: identify, based on communication with a resource orchestration function - network (ROF-N) of a network, a plurality of resources of the network that are usable by the apparatus; provide an indication of the plurality of resources to an operating system (OS) of the apparatus; identify a request from the OS for access to a resource of the plurality of resources; transmit an indication of the request to the ROF-N; and identify, based on a response to the request received from the ROF-N, whether to establish a data path between the apparatus and the network such that the OS has access to the resource.
Example 47 may include the apparatus of example 46 or some other example herein, wherein the ROF-C is to identify the plurality of resources based on a list broadcasted by the ROF-N.
Example 48 may include the apparatus of example 46 or some other example herein, wherein the ROF-C is to identify the plurality of resources based on a response, by the ROF-N, to an inquiry transmitted to the ROF-N by the ROF-C.
Example 49 may include the apparatus of example 46 or some other example herein, wherein the resource is a computing resource or capability of an element of the network.
Example 50 may include the apparatus of example 46 or some other example herein, wherein the ROF-C is to establish the data path if the response to the request indicates acceptance by the ROF-N.
Example 51 may include the apparatus of example 46 or some other example herein, wherein the ROF-C is to not establish the data path if the response to the request indicates denial by the ROF-N.
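The ROF-C/ROF-N exchange of examples 46-51 can be sketched as follows. The class and method names and the message shapes below are illustrative assumptions; a real ROF-N would be a network function reached over a communication link, not an in-memory object.

```python
# Hypothetical sketch of the ROF-C/ROF-N exchange of examples 46-51.
# The in-memory objects below stand in for network functions.

class RofN:
    """Toy network-side resource orchestration function."""
    def __init__(self, resources):
        self._resources = set(resources)

    def broadcast_resource_list(self):
        # Example 47: the ROF-N advertises a list of usable resources.
        return sorted(self._resources)

    def handle_request(self, resource):
        # Accept the request only if the resource is actually offered.
        return "accept" if resource in self._resources else "deny"


class RofC:
    """Toy client-side resource orchestration function."""
    def __init__(self, rof_n):
        self._rof_n = rof_n
        self.data_path_established = False

    def discover(self):
        # Example 47: identify resources from the ROF-N's broadcast list.
        return self._rof_n.broadcast_resource_list()

    def request_resource(self, resource):
        # Examples 50-51: establish the data path only on acceptance.
        response = self._rof_n.handle_request(resource)
        self.data_path_established = (response == "accept")
        return response
```

A usage pass through the flow: the ROF-C discovers the advertised resources, requests one on behalf of the OS, and sets up the data path only when the ROF-N's response indicates acceptance.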
Example 52 may include an apparatus comprising means to perform one or more elements of a method described in or related to any of examples 1-51, or any other method or process described herein.
Example 53 may include one or more non-transitory computer-readable media comprising instructions to cause an electronic device, upon execution of the instructions by one or more processors of the electronic device, to perform one or more elements of a method described in or related to any of examples 1-51, or any other method or process described herein.
Example 54 may include an apparatus comprising logic, modules, or circuitry to perform one or more elements of a method described in or related to any of examples 1-51, or any other method or process described herein.
Example 55 may include a method, technique, or process as described in or related to any of examples 1-51, or portions or parts thereof.
Example 56 may include an apparatus comprising: one or more processors and one or more computer-readable media comprising instructions that, when executed by the one or more processors, cause the one or more processors to perform a method, technique, or process as described in or related to any of examples 1-51, or portions thereof.
Example 57 may include a signal as described in or related to any of examples 1-51, or portions or parts thereof.
Example 58 may include a datagram, packet, frame, segment, protocol data unit (PDU), or message as described in or related to any of examples 1-51, or portions or parts thereof, or otherwise described in the present disclosure.
Example 59 may include a signal encoded with data as described in or related to any of examples 1-51, or portions or parts thereof, or otherwise described in the present disclosure.
Example 60 may include a signal encoded with a datagram, packet, frame, segment, protocol data unit (PDU), or message as described in or related to any of examples 1-51, or portions or parts thereof, or otherwise described in the present disclosure.
Example 61 may include an electromagnetic signal carrying computer-readable instructions, wherein execution of the computer-readable instructions by one or more processors is to cause the one or more processors to perform a method, technique, or process as described in or related to any of examples 1-51, or portions thereof.
Example 62 may include a computer program comprising instructions, wherein execution of the program by a processing element is to cause the processing element to carry out a method, technique, or process as described in or related to any of examples 1-51, or portions thereof.
Example 63 may include a signal in a wireless network as shown and described herein.
Example 64 may include a method of communicating in a wireless network as shown and described herein.
Example 65 may include a system for providing wireless communication as shown and described herein.
Example 66 may include a device for providing wireless communication as shown and described herein.
Any of the above-described examples may be combined with any other example (or combination of examples), unless explicitly stated otherwise. The foregoing description of one or more implementations provides illustration and description, but is not intended to be exhaustive or to limit the scope of embodiments to the precise form disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of various embodiments.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms “a,” “an,” and “the” are intended to include plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
For the purposes of the present disclosure, the phrase “A and/or B” means (A), (B), or (A and B). For the purposes of the present disclosure, the phrase “A, B, and/or C” means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B, and C). The description may use the phrases “in an embodiment” or “in some embodiments,” which may each refer to one or more of the same or different embodiments. Furthermore, the terms “comprising,” “including,” “having,” and the like, as used with respect to embodiments of the present disclosure, are synonymous.
The terms “coupled,” “communicatively coupled,” along with derivatives thereof, are used herein. The term “coupled” may mean two or more elements are in direct physical or electrical contact with one another, may mean that two or more elements indirectly contact each other but still cooperate or interact with each other, and/or may mean that one or more other elements are coupled or connected between the elements that are said to be coupled with each other. The term “directly coupled” may mean that two or more elements are in direct contact with one another. The term “communicatively coupled” may mean that two or more elements may be in contact with one another by a means of communication, including through a wire or other interconnect connection, through a wireless communication channel or link, and/or the like.
The term “circuitry” as used herein refers to, is part of, or includes hardware components such as an electronic circuit, a logic circuit, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group), an Application Specific Integrated Circuit (ASIC), a field-programmable device (FPD) (e.g., a field-programmable gate array (FPGA), a programmable logic device (PLD), a complex PLD (CPLD), a high-capacity PLD (HCPLD), a structured ASIC, or a programmable SoC), digital signal processors (DSPs), etc., that are configured to provide the described functionality. In some embodiments, the circuitry may execute one or more software or firmware programs to provide at least some of the described functionality. The term “circuitry” may also refer to a combination of one or more hardware elements (or a combination of circuits used in an electrical or electronic system) with the program code used to carry out the functionality of that program code. In these embodiments, the combination of hardware elements and program code may be referred to as a particular type of circuitry.
The term “processor circuitry” as used herein refers to, is part of, or includes circuitry capable of sequentially and automatically carrying out a sequence of arithmetic or logical operations, or recording, storing, and/or transferring digital data. Processing circuitry may include one or more processing cores to execute instructions and one or more memory structures to store program and data information. The term “processor circuitry” may refer to one or more application processors, one or more baseband processors, a physical central processing unit (CPU), a single-core processor, a dual-core processor, a triple-core processor, a quad-core processor, and/or any other device capable of executing or otherwise operating computer-executable instructions, such as program code, software modules, and/or functional processes. Processing circuitry may include one or more hardware accelerators, which may be microprocessors, programmable processing devices, or the like. The one or more hardware accelerators may include, for example, computer vision (CV) and/or deep learning (DL) accelerators. The terms “application circuitry” and/or “baseband circuitry” may be considered synonymous to, and may be referred to as, “processor circuitry.”
The term “memory” and/or “memory circuitry” as used herein refers to one or more hardware devices for storing data, including RAM, MRAM, PRAM, DRAM, and/or SDRAM, core memory, ROM, magnetic disk storage mediums, optical storage mediums, flash memory devices or other machine readable mediums for storing data. The term “computer-readable medium” may include, but is not limited to, memory, portable or fixed storage devices, optical storage devices, and various other mediums capable of storing, containing or carrying instructions or data.
The term “interface circuitry” as used herein refers to, is part of, or includes circuitry that enables the exchange of information between two or more components or devices. The term “interface circuitry” may refer to one or more hardware interfaces, for example, buses, I/O interfaces, peripheral component interfaces, network interface cards, and/or the like.
The term “user equipment” or “UE” as used herein refers to a device with radio communication capabilities and may describe a remote user of network resources in a communications network. The term “user equipment” or “UE” may be considered synonymous to, and may be referred to as, client, mobile, mobile device, mobile terminal, user terminal, mobile unit, mobile station, mobile user, subscriber, user, remote station, access agent, user agent, receiver, radio equipment, reconfigurable radio equipment, reconfigurable mobile device, etc. Furthermore, the term “user equipment” or “UE” may include any type of wireless/wired device or any computing device including a wireless communications interface.
The term “network element” as used herein refers to physical or virtualized equipment and/or infrastructure used to provide wired or wireless communication network services. The term “network element” may be considered synonymous to and/or referred to as a networked computer, networking hardware, network equipment, network node, router, switch, hub, bridge, radio network controller, RAN device, RAN node, gateway, server, virtualized network function (VNF), network functions virtualization infrastructure (NFVI), and/or the like.
The term “computer system” as used herein refers to any type of interconnected electronic devices, computer devices, or components thereof. Additionally, the term “computer system” and/or “system” may refer to various components of a computer that are communicatively coupled with one another. Furthermore, the term “computer system” and/or “system” may refer to multiple computer devices and/or multiple computing systems that are communicatively coupled with one another and configured to share computing and/or networking resources.
The term “appliance,” “computer appliance,” or the like, as used herein refers to a computer device or computer system with program code (e.g., software or firmware) that is specifically designed to provide a specific computing resource. A “virtual appliance” is a virtual machine image to be implemented by a hypervisor-equipped device that virtualizes or emulates a computer appliance or otherwise is dedicated to provide a specific computing resource. The term “element” refers to a unit that is indivisible at a given level of abstraction and has a clearly defined boundary, wherein an element may be any type of entity including, for example, one or more devices, systems, controllers, network elements, modules, etc., or combinations thereof. The term “device” refers to a physical entity embedded inside, or attached to, another physical entity in its vicinity, with capabilities to convey digital information from or to that physical entity. The term “entity” refers to a distinct component of an architecture or device, or information transferred as a payload. The term “controller” refers to an element or entity that has the capability to affect a physical entity, such as by changing its state or causing the physical entity to move.
The term “cloud computing” or “cloud” refers to a paradigm for enabling network access to a scalable and elastic pool of shareable computing resources with self-service provisioning and administration on-demand and without active management by users. Cloud computing provides cloud computing services (or cloud services), which are one or more capabilities offered via cloud computing that are invoked using a defined interface (e.g., an API or the like). The term “computing resource” or simply “resource” refers to any physical or virtual component, or usage of such components, of limited availability within a computer system or network. Examples of computing resources include usage/access to, for a period of time, servers, processor(s), storage equipment, memory devices, memory areas, networks, electrical power, input/output (peripheral) devices, mechanical devices, network connections (e.g., channels/links, ports, network sockets, etc.), operating systems, virtual machines (VMs), software/applications, computer files, and/or the like. A “hardware resource” may refer to compute, storage, and/or network resources provided by physical hardware element(s). A “virtualized resource” may refer to compute, storage, and/or network resources provided by virtualization infrastructure to an application, device, system, etc. The term “network resource” or “communication resource” may refer to resources that are accessible by computer devices/systems via a communications network. The term “system resources” may refer to any kind of shared entities to provide services, and may include computing and/or network resources. System resources may be considered as a set of coherent functions, network data objects or services, accessible through a server where such system resources reside on a single host or multiple hosts and are clearly identifiable.
As used herein, the term “cloud service provider” (or CSP) indicates an organization which typically operates large-scale “cloud” resources comprised of centralized, regional, and edge data centers (e.g., as used in the context of the public cloud). In other examples, a CSP may also be referred to as a Cloud Service Operator (CSO). References to “cloud computing” generally refer to computing resources and services offered by a CSP or a CSO, at remote locations with at least some increased latency, distance, or constraints relative to edge computing.
As used herein, the term “data center” refers to a purpose-designed structure that is intended to house multiple high-performance compute and data storage nodes such that a large amount of compute, data storage and network resources are present at a single location. This often entails specialized rack and enclosure systems, suitable heating, cooling, ventilation, security, fire suppression, and power delivery systems. The term may also refer to a compute and data storage node in some contexts. A data center may vary in scale between a centralized or cloud data center (e.g., largest), regional data center, and edge data center (e.g., smallest).
As used herein, the term “edge computing” refers to the implementation, coordination, and use of computing and resources at locations closer to the “edge” or collection of “edges” of a network. Deploying computing resources at the network’s edge may reduce application and network latency, reduce network backhaul traffic and associated energy consumption, improve service capabilities, improve compliance with security or data privacy requirements (especially as compared to conventional cloud computing), and improve total cost of ownership. As used herein, the term “edge compute node” refers to a real-world, logical, or virtualized implementation of a compute-capable element in the form of a device, gateway, bridge, system or subsystem, component, whether operating in a server, client, endpoint, or peer mode, and whether located at an “edge” of a network or at a connected location further within the network. References to a “node” used herein are generally interchangeable with a “device”, “component”, and “sub-system”; however, references to an “edge computing system” or “edge computing network” generally refer to a distributed architecture, organization, or collection of multiple nodes and devices, and which is organized to accomplish or offer some aspect of services or resources in an edge computing setting.
Additionally or alternatively, the term “Edge Computing” refers to a concept that enables operator and 3rd party services to be hosted close to the UE's access point of attachment, to achieve an efficient service delivery through the reduced end-to-end latency and load on the transport network. As used herein, the term “Edge Computing Service Provider” refers to a mobile network operator or a 3rd party service provider offering Edge Computing service. As used herein, the term “Edge Data Network” refers to a local Data Network (DN) that supports the architecture for enabling edge applications. As used herein, the term “Edge Hosting Environment” refers to an environment providing support required for Edge Application Server's execution. As used herein, the term “Application Server” refers to application software resident in the cloud performing the server function.
The term “Internet of Things” or “IoT” refers to a system of interrelated computing devices, mechanical and digital machines capable of transferring data with little or no human interaction, and may involve technologies such as real-time analytics, machine learning and/or AI, embedded systems, wireless sensor networks, control systems, automation (e.g., smart home, smart building, and/or smart city technologies), and the like. IoT devices are usually low-power devices without heavy compute or storage capabilities. “Edge IoT devices” may be any kind of IoT devices deployed at a network’s edge.
As used herein, the term “cluster” refers to a set or grouping of entities as part of an edge computing system (or systems), in the form of physical entities (e.g., different computing systems, networks or network groups), logical entities (e.g., applications, functions, security constructs, containers), and the like. In some locations, a “cluster” is also referred to as a “group” or a “domain”. The membership of a cluster may be modified or affected based on conditions or functions, including from dynamic or property-based membership, from network or system management scenarios, or from various example techniques discussed below which may add, modify, or remove an entity in a cluster. Clusters may also include or be associated with multiple layers, levels, or properties, including variations in security features and results based on such layers, levels, or properties.
The term “application” may refer to a complete and deployable package or environment to achieve a certain function in an operational environment. The term “AI/ML application” or the like may be an application that contains some AI/ML models and application-level descriptions. The term “machine learning” or “ML” refers to the use of computer systems implementing algorithms and/or statistical models to perform specific task(s) without using explicit instructions, but instead relying on patterns and inferences. ML algorithms build or estimate mathematical model(s) (referred to as “ML models” or the like) based on sample data (referred to as “training data,” “model training information,” or the like) in order to make predictions or decisions without being explicitly programmed to perform such tasks. Generally, an ML algorithm is a computer program that learns from experience with respect to some task and some performance measure, and an ML model may be any object or data structure created after an ML algorithm is trained with one or more training datasets. After training, an ML model may be used to make predictions on new datasets. Although the term “ML algorithm” refers to different concepts than the term “ML model,” these terms as discussed herein may be used interchangeably for the purposes of the present disclosure.
The term “machine learning model,” “ML model,” or the like may also refer to ML methods and concepts used by an ML-assisted solution. An “ML-assisted solution” is a solution that addresses a specific use case using ML algorithms during operation. ML models include supervised learning (e.g., linear regression, k-nearest neighbor (KNN), decision tree algorithms, support vector machines, Bayesian algorithms, ensemble algorithms, etc.), unsupervised learning (e.g., K-means clustering, principal component analysis (PCA), etc.), reinforcement learning (e.g., Q-learning, multi-armed bandit learning, deep RL, etc.), neural networks, and the like. Depending on the implementation, a specific ML model could have many sub-models as components and the ML model may train all sub-models together. Separately trained ML models can also be chained together in an ML pipeline during inference. An “ML pipeline” is a set of functionalities, functions, or functional entities specific for an ML-assisted solution; an ML pipeline may include one or several data sources in a data pipeline, a model training pipeline, a model evaluation pipeline, and an actor. The “actor” is an entity that hosts an ML-assisted solution using the output of the ML model inference. The term “ML training host” refers to an entity, such as a network function, that hosts the training of the model. The term “ML inference host” refers to an entity, such as a network function, that hosts the model during inference mode (which includes both the model execution as well as any online learning if applicable). The ML host informs the actor about the output of the ML algorithm, and the actor takes a decision for an action (an “action” is performed by an actor as a result of the output of an ML-assisted solution).
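As a concrete, minimal instance of the supervised-learning category mentioned above, a nearest-neighbor classifier can be sketched in a few lines. The toy coordinates, labels, and the k=1 choice are illustrative assumptions; the point is that the “ML model” here is simply the stored training data, and inference is a distance lookup.

```python
# Minimal nearest-neighbor (k=1) classifier illustrating supervised learning:
# training "builds" the model by storing labeled samples, and inference
# returns the label of the closest stored sample.

def knn_predict(training_data, point):
    """training_data: list of ((x, y), label) pairs; returns the label
    of the training point nearest to `point`."""
    def sq_dist(a, b):
        # Squared Euclidean distance; the square root is not needed
        # because it does not change which point is nearest.
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    nearest = min(training_data, key=lambda pair: sq_dist(pair[0], point))
    return nearest[1]
```

A usage example: trained on a handful of labeled 2-D points, the classifier assigns a new point the label of whichever training point lies closest.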
The term “model inference information” refers to information used as an input to the ML model for determining inference(s); the data used to train an ML model and the data used to determine inferences may overlap, however, “training data” and “inference data” refer to different concepts.
The terms “instantiate,” “instantiation,” and the like as used herein refer to the creation of an instance. An “instance” also refers to a concrete occurrence of an object, which may occur, for example, during execution of program code. The term “information element” refers to a structural element containing one or more fields. The term “field” refers to individual contents of an information element, or a data element that contains content. As used herein, a “database object”, “data structure”, or the like may refer to any representation of information that is in the form of an object, attribute-value pair (AVP), key-value pair (KVP), tuple, etc., and may include variables, data structures, functions, methods, classes, database records, database fields, database entities, associations between data and/or database entities (also referred to as a “relation”), blocks and links between blocks in block chain implementations, and/or the like.
An “information object,” as used herein, refers to a collection of structured data and/or any representation of information, and may include, for example, electronic documents (or “documents”), database objects, data structures, files, audio data, video data, raw data, archive files, application packages, and/or any other like representation of information. The terms “electronic document” or “document” may refer to a data structure, computer file, or resource used to record data, and includes various file types and/or data formats such as word processing documents, spreadsheets, slide presentations, multimedia items, webpage and/or source code documents, and/or the like. As examples, the information objects may include markup and/or source code documents such as HTML, XML, JSON, Apex®, CSS, JSP, MessagePack™, Apache® Thrift™, ASN.1, Google® Protocol Buffers (protobuf), or some other document(s)/format(s) such as those discussed herein. An information object may have both a logical and a physical structure. Physically, an information object comprises one or more units called entities. An entity is a unit of storage that contains content and is identified by a name. An entity may refer to other entities to cause their inclusion in the information object. An information object begins in a document entity, which is also referred to as a root element (or "root"). Logically, an information object comprises one or more declarations, elements, comments, character references, and processing instructions, all of which are indicated in the information object (e.g., using markup).
The term “data item” as used herein refers to an atomic state of a particular object with at least one specific property at a certain point in time. Such an object is usually identified by an object name or object identifier, and properties of such an object are usually defined as database objects (e.g., fields, records, etc.), object instances, or data elements (e.g., mark-up language elements/tags, etc.). Additionally or alternatively, the term “data item” as used herein may refer to data elements and/or content items, although these terms may refer to different concepts. The term “data element” or “element” as used herein refers to a unit that is indivisible at a given level of abstraction and has a clearly defined boundary. A data element is a logical component of an information object (e.g., electronic document) that may begin with a start tag (e.g., “<element>”) and end with a matching end tag (e.g., “</element>”), or only has an empty element tag (e.g., “<element />”). Any characters between the start tag and end tag, if any, are the element’s content (referred to herein as “content items” or the like).
The content of an entity may include one or more content items, each of which has an associated datatype representation. A content item may include, for example, attribute values, character values, URIs, qualified names (qnames), parameters, and the like. A qname is a fully qualified name of an element, attribute, or identifier in an information object. A qname associates a URI of a namespace with a local name of an element, attribute, or identifier in that namespace. To make this association, the qname assigns a prefix to the local name that corresponds to its namespace. The qname comprises a URI of the namespace, the prefix, and the local name. Namespaces are used to provide uniquely named elements and attributes in information objects. Content items may include text content (e.g., “<element>content item</element>”), attributes (e.g., “<element attribute="attributeValue">”), and other elements referred to as “child elements” (e.g., “<element1><element2>content item</element2></element1>”). An “attribute” may refer to a markup construct including a name-value pair that exists within a start tag or empty element tag. Attributes contain data related to their element and/or control the element’s behavior.
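The qname-to-namespace association described above can be observed with Python's standard xml.etree.ElementTree module, which expands each qualified name into the form “{namespace-URI}local-name”. The “ex” prefix and the example namespace URI below are illustrative assumptions.

```python
# A namespaced document: the "ex" prefix maps the local names "root" and
# "child" to the namespace URI declared by xmlns:ex.
import xml.etree.ElementTree as ET

doc = ('<ex:root xmlns:ex="http://example.com/ns">'
       '<ex:child>content item</ex:child></ex:root>')
root = ET.fromstring(doc)

# ElementTree resolves each qname to "{namespace-URI}local-name".
print(root.tag)                                    # {http://example.com/ns}root
child = root.find("{http://example.com/ns}child")
print(child.text)                                  # content item
```

Note that the prefix itself ("ex") does not survive parsing; as the paragraph above describes, the prefix is only a shorthand for the namespace URI, and the parsed tag carries the URI and local name.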
The term “channel” as used herein refers to any transmission medium, either tangible or intangible, which is used to communicate data or a data stream. The term “channel” may be synonymous with and/or equivalent to “communications channel,” “data communications channel,” “transmission channel,” “data transmission channel,” “access channel,” “data access channel,” “link,” “data link,” “carrier,” “radiofrequency carrier,” and/or any other like term denoting a pathway or medium through which data is communicated. Additionally, the term “link” as used herein refers to a connection between two devices through a RAT for the purpose of transmitting and receiving information. As used herein, the term “radio technology” refers to technology for wireless transmission and/or reception of electromagnetic radiation for information transfer. The term “radio access technology” or “RAT” refers to the technology used for the underlying physical connection to a radio based communication network. As used herein, the term “communication protocol” (either wired or wireless) refers to a set of standardized rules or instructions implemented by a communication device and/or system to communicate with other devices and/or systems, including instructions for packetizing/depacketizing data, modulating/demodulating signals, implementation of protocols stacks, and/or the like.
Examples of wireless communications protocols that may be used in various embodiments include: a Global System for Mobile Communications (GSM) radio communication technology, a General Packet Radio Service (GPRS) radio communication technology, an Enhanced Data Rates for GSM Evolution (EDGE) radio communication technology, and/or a Third Generation Partnership Project (3GPP) radio communication technology including, for example, 3GPP Fifth Generation (5G) or New Radio (NR), Universal Mobile Telecommunications System (UMTS), Freedom of Multimedia Access (FOMA), Long Term Evolution (LTE), LTE-Advanced (LTE Advanced), LTE Extra, LTE-A Pro, cdmaOne (2G), Code Division Multiple Access 2000 (CDMA 2000), Cellular Digital Packet Data (CDPD), Mobitex, Circuit Switched Data (CSD), High-Speed CSD (HSCSD), Wideband Code Division Multiple Access (W-CDMA), High Speed Packet Access (HSPA), HSPA Plus (HSPA+), Time Division-Code Division Multiple Access (TD-CDMA), Time Division-Synchronous Code Division Multiple Access (TD-SCDMA), LTE LAA, MuLTEfire, UMTS Terrestrial Radio Access (UTRA), Evolved UTRA (E-UTRA), Evolution-Data Optimized or Evolution-Data Only (EV-DO), Advanced Mobile Phone System (AMPS), Digital AMPS (D-AMPS), Total Access Communication
System/Extended Total Access Communication System (TACS/ETACS), Push-to-talk (PTT), Mobile Telephone System (MTS), Improved Mobile Telephone System (IMTS), Advanced Mobile Telephone System (AMTS), DataTAC, Integrated Digital Enhanced Network (iDEN), Personal Digital Cellular (PDC), Personal Handyphone System (PHS), Wideband Integrated Digital Enhanced Network (WiDEN), iBurst, Unlicensed Mobile Access (UMA, also referred to as the 3GPP Generic Access Network (GAN) standard), Bluetooth®, Bluetooth Low Energy (BLE), IEEE 802.15.4-based protocols (e.g., IPv6 over Low-power Wireless Personal Area Networks (6LoWPAN), WirelessHART, MiWi, Thread, 802.11a, etc.), WiFi-direct, ANT/ANT+, ZigBee, Z-Wave, 3GPP device-to-device (D2D) or Proximity Services (ProSe), Universal Plug and Play (UPnP), Low-Power Wide-Area Network (LPWAN), Long Range Wide Area Network (LoRa or LoRaWAN™) developed by Semtech and the LoRa Alliance, Sigfox, the Wireless Gigabit Alliance (WiGig) standard, Worldwide Interoperability for Microwave Access (WiMAX), mmWave standards in general (e.g., wireless systems operating at 10-300 GHz and above such as WiGig, IEEE 802.11ad, IEEE 802.11ay, etc.), V2X communication technologies (including 3GPP C-V2X), and Dedicated Short Range Communications (DSRC) communication systems such as Intelligent Transport Systems (ITS) including the European ITS-G5, ITS-G5B, ITS-G5C, etc. In addition to the standards listed above, any number of satellite uplink technologies may be used for purposes of the present disclosure including, for example, radios compliant with standards issued by the International Telecommunication Union (ITU) or the European Telecommunications Standards Institute (ETSI), among others. The examples provided herein are thus understood as being applicable to various other communication technologies, both existing and not yet formulated.
The term “access network” refers to any network, using any combination of radio technologies, RATs, and/or communication protocols, used to connect user devices and service providers. In the context of WLANs, an “access network” is an IEEE 802 local area network (LAN) or metropolitan area network (MAN) between terminals and access routers connecting to provider services. The term “access router” refers to a router that terminates a medium access control (MAC) service from terminals and forwards user traffic to information servers according to Internet Protocol (IP) addresses.
The term “SMTC” refers to an SSB-based measurement timing configuration configured by SSB-MeasurementTimingConfiguration. The term “SSB” refers to a synchronization signal/Physical Broadcast Channel (SS/PBCH) block, which includes a Primary Synchronization Signal (PSS), a Secondary Synchronization Signal (SSS), and a PBCH. The term “Primary Cell” refers to the MCG cell, operating on the primary frequency, in which the UE either performs the initial connection establishment procedure or initiates the connection re-establishment procedure. The term “Primary SCG Cell” refers to the SCG cell in which the UE performs random access when performing the Reconfiguration with Sync procedure for DC operation. The term “Secondary Cell” refers to a cell providing additional radio resources on top of a Special Cell for a UE configured with CA. The term “Secondary Cell Group” refers to the subset of serving cells comprising the PSCell and zero or more secondary cells for a UE configured with DC. The term “Serving Cell” refers to the primary cell for a UE in RRC_CONNECTED not configured with CA/DC; there is only one serving cell, comprising the primary cell. The term “serving cell” or “serving cells” refers to the set of cells comprising the Special Cell(s) and all secondary cells for a UE in RRC_CONNECTED configured with CA. The term “Special Cell” refers to the PCell of the MCG or the PSCell of the SCG for DC operation; otherwise, the term “Special Cell” refers to the PCell.
The term “All Photonics Network” or “APN” refers to a communications infrastructure of the future comprising a high-quality all-optical network that responds to increasingly diverse and complex needs by making use of photonics-based technologies in all areas, from network to terminal. This approach enables a variety of benefits, including power saving, larger-capacity networks, and ultra-low delay on the order of a 100X improvement over traditional networks.
The term “digital twins” refers to virtual, digital replicas of actual or potential physical devices, processes, people, places, and systems that can be used to run simulations. IOWN's use of digital twin computing will be a significant advancement, enabling previously impossible new, large-scale, high-precision real-world reproductions by performing numerous operations to freely combine various digital twins.
The term “interconnectivity,” at least in the context of IOWN, refers to the linking of optical networks so that users of one all-photonics network can communicate with users of another all-photonics network.
The term “framework” refers to a collection of documents that define the specification and underlying implementation details.
The term “large-scale simulation” refers to a simulation that approximately imitates the operation of a process or system, representing its operation over time.
The term “ultra-wideband” or “UWB” refers to a wireless technology (or RAT) for transmitting large amounts of digital data over a wide spectrum of frequency bands with very low power for a short distance.
The term “user interface” or “UI” refers to the graphical layout of an application, which may include the buttons users click on, the text they read, the images, sliders, text entry fields, and all the other items the user interacts with. The term “user experience” or “UX” refers to a determination of how easy or difficult it is to interact with a user interface element.
In addition to the above, any of the disclosed embodiments and example implementations can be embodied in the form of various types of hardware, software, firmware, middleware, or combinations thereof, including in the form of control logic, and using such hardware or software in a modular or integrated manner. Additionally, any of the software components or functions described herein can be implemented as software, program code, script, instructions, etc., operable to be executed by processor circuitry. These components, functions, programs, etc., can be developed using any suitable computer language such as, for example, Python, PyTorch, NumPy, Ruby, Ruby on Rails, Scala, Smalltalk, Java™, C++, C#, “C”, Kotlin, Swift, Rust, Go (or “Golang”), ECMAScript, JavaScript, TypeScript, JScript, ActionScript, Server-Side JavaScript (SSJS), PHP, Perl, Lua, Torch/Lua with Just-In-Time compiler (LuaJIT), Accelerated Mobile Pages Script (AMPscript), VBScript, JavaServer Pages (JSP), Active Server Pages (ASP), Node.js, ASP.NET, JAMscript, Hypertext Markup Language (HTML), Extensible HTML (XHTML), Extensible Markup Language (XML), XML User Interface Language (XUL), Scalable Vector Graphics (SVG), RESTful API Modeling Language (RAML), wiki markup or Wikitext, Wireless Markup Language (WML), JavaScript Object Notation (JSON), Apache® MessagePack™, Cascading Stylesheets (CSS), Extensible Stylesheet Language (XSL), Mustache template language, Handlebars template language, Guide Template Language (GTL), Apache® Thrift, Abstract Syntax Notation One (ASN.1), Google® Protocol Buffers (protobuf), Bitcoin Script, EVM® bytecode, Solidity™, Vyper (Python derived), Bamboo, Lisp Like Language (LLL), Simplicity provided by Blockstream™, Rholang, Michelson, Counterfactual, Plasma, Plutus, Sophia, Salesforce® Apex®, and/or any other programming language or development tools including proprietary programming languages and/or development tools. 
The software code can be stored as computer- or processor-executable instructions or commands on a physical non-transitory computer-readable medium. Examples of suitable media include RAM, ROM, magnetic media such as a hard drive or a floppy disk, optical media such as a compact disk (CD) or DVD (digital versatile disk), flash memory, and the like, or any combination of such storage or transmission devices.
Embodiments described herein are not intended to be exhaustive or to limit the scope of this disclosure to the precise forms disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of various embodiments. Where specific details are set forth in order to describe example embodiments of the disclosure, it should be apparent to one skilled in the art that the disclosure can be practiced without, or with variation of, these specific details. It should be understood, however, that there is no intent to limit the concepts of the present disclosure to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives consistent with the present disclosure and the appended claims.

Claims

1. A method to be implemented by an in-network service mesh function of a compute service function (CSF), the method comprising: identifying, by the in-network service mesh function, a device associated with the in-network service mesh function; identifying, by the in-network service mesh function, a service request received from the device; identifying, by the in-network service mesh function in a service registry of the CSF based on the service request, a service instance for the device; and transmitting, by the in-network service mesh function, an indication of the service instance.
2. The method of claim 1, wherein the identification of the service instance is based on transmitting, by the in-network service mesh function, a query related to the service request to the service registry.
3. The method of claim 1, wherein the in-network service mesh function is one of a plurality of in-network service mesh functions of the CSF.
4. The method of claim 3, wherein respective ones of the plurality of in-network service mesh functions are associated with different ones of a plurality of devices.
5. The method of any of claims 1-4, wherein the in-network service mesh function is to identify the service instance for the device based on at least one of: location of the device, location of the service instance, load of the service instance, and estimated service response time.
6. The method of any of claims 1-4, wherein the indication of the service instance includes at least one of: an assigned service instance address, an allocated computing resource, and service execution results.
7. The method of any of claims 1-4, wherein the device is associated with the in-network service mesh function through one or more of: a network-controller managed association, a dynamic host configuration protocol (DHCP) server managed association, and a device autonomous association.
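For orientation only, the flow of claims 1-7 can be sketched in a few lines of Python. Everything below is a hypothetical illustration: the class names, the registry fields, and the scoring rule combining device location, instance load, and estimated response time (per claim 5) are assumptions, not part of the claims.

```python
# Illustrative sketch of the claimed in-network service mesh flow.
# All names and the scoring heuristic are hypothetical assumptions.

class ServiceRegistry:
    """Maps service names to candidate instances with location/load metadata."""
    def __init__(self):
        self._instances = {}  # service name -> list of instance records

    def register(self, service, address, location, load, est_response_ms):
        self._instances.setdefault(service, []).append({
            "address": address, "location": location,
            "load": load, "est_response_ms": est_response_ms,
        })

    def query(self, service):
        return self._instances.get(service, [])


class InNetworkServiceMesh:
    """Per-device mesh function: resolves service requests via the registry."""
    def __init__(self, registry):
        self.registry = registry
        self.devices = {}  # device id -> device location

    def associate(self, device_id, location):
        # Association may be network-controller managed, DHCP-server
        # managed, or device autonomous (claim 7).
        self.devices[device_id] = location

    def handle_request(self, device_id, service):
        location = self.devices[device_id]             # identify the device
        candidates = self.registry.query(service)      # query the registry (claim 2)
        if not candidates:
            return None
        # Select based on location, load, and estimated response time (claim 5).
        best = min(candidates, key=lambda c: (
            abs(c["location"] - location) + c["load"] + c["est_response_ms"]))
        # The indication includes the assigned instance address (claim 6).
        return {"service_instance_address": best["address"]}
```

Under this toy scoring rule, a device at location 1 requesting a service with two registered instances resolves to the instance with the lower combined distance, load, and response-time score.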
8. One or more non-transitory computer-readable media comprising instructions that, upon execution of the instructions by one or more processors of a device, are to cause a service orchestration function - client (SOF-C) of the device to: identify, from a client application of the device, a service request; identify, based on interaction with one or more of an operating system (OS) of the device and a service orchestration function - network (SOF-N) of a compute service function (CSF) to which the device is communicatively coupled, information related to availability of a service related to the service request; and steer, based on the information related to the availability of the service, the service request for local execution or network execution.
9. The one or more non-transitory computer-readable media of claim 8, wherein local execution relates to execution of the service by the device.
10. The one or more non-transitory computer-readable media of claim 8, wherein network execution relates to execution of the service by an electronic device to which the device is communicatively coupled.
11. The one or more non-transitory computer-readable media of claim 8, wherein the steering is based on a determination made by the SOF-N based on the information related to availability of the service.
12. The one or more non-transitory computer-readable media of claim 11, wherein the information is related to a network service registry.
13. The one or more non-transitory computer-readable media of any of claims 8-12, wherein the steering is based on a determination made by the SOF-C based on the information related to availability of the service.
14. The one or more non-transitory computer-readable media of any of claims 8-12, wherein the information includes information related to one or more of: service availability, network condition, device battery level, and estimated service execution time.
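The steering decision of claims 8-14 can be illustrated with a hypothetical rule: if the network service registry reports no matching instance, execute locally; if the device battery is low, offload to the network; otherwise compare estimated completion times. The field names and thresholds below are illustrative assumptions, not from the claims.

```python
# Hypothetical sketch of the SOF-C steering decision of claims 8-14.
# The decision rule, field names, and thresholds are assumptions.

LOCAL, NETWORK = "local", "network"

def steer(service_request, info):
    """Steer the client application's service request for local or network
    execution, using information identified via the OS and/or the SOF-N
    (claim 14: availability, network condition, battery level, timing)."""
    # No matching instance in the network service registry: run locally.
    if not info.get("network_service_available", False):
        return LOCAL
    # Low battery: prefer offloading execution to the network.
    if info.get("battery_level", 1.0) < 0.15:
        return NETWORK
    # Otherwise compare estimated end-to-end completion times.
    local_ms = info.get("est_local_execution_ms", float("inf"))
    network_ms = (info.get("est_network_execution_ms", float("inf"))
                  + info.get("network_rtt_ms", 0.0))
    return LOCAL if local_ms <= network_ms else NETWORK
```

Note that either the SOF-C or the SOF-N may make this determination (claims 11 and 13); the sketch simply shows one plausible decision function operating on the kinds of inputs claim 14 enumerates.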
15. An apparatus comprising:
one or more processors; one or more non-transitory computer-readable media comprising instructions that, upon execution of the instructions by the one or more processors, are to cause a resource orchestration function - client (ROF-C) to: identify, based on communication with a resource orchestration function - network (ROF-N) of a network, a plurality of resources of the network that are usable by the apparatus; provide an indication of the plurality of resources to an operating system (OS) of the apparatus; identify a request from the OS for access to a resource of the plurality of resources; transmit an indication of the request to the ROF-N; and identify, based on a response to the request received from the ROF-N, whether to establish a data path between the apparatus and the network such that the OS has access to the resource.
16. The apparatus of claim 15, wherein the ROF-C is to identify the plurality of resources based on a list broadcasted by the ROF-N.
17. The apparatus of claim 15, wherein the ROF-C is to identify the plurality of resources based on a response, by the ROF-N, to an inquiry transmitted to the ROF-N by the ROF-C.
18. The apparatus of any of claims 15-17, wherein the resource is a computing resource or capability of an element of the network.
19. The apparatus of any of claims 15-17, wherein the ROF-C is to establish the data path if the response to the request indicates acceptance by the ROF-N.
20. The apparatus of any of claims 15-17, wherein the ROF-C is to not establish the data path if the response to the request indicates denial by the ROF-N.
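Claims 15-20 describe a discovery and request/response exchange between a device-side ROF-C and a network-side ROF-N. The sketch below is a hypothetical, in-memory illustration of that exchange; the class and method names, and the string-based accept/deny protocol, are assumptions.

```python
# Illustrative ROF-C / ROF-N exchange of claims 15-20; names are assumptions.

class ROFNetwork:
    """Network-side resource orchestration function (ROF-N)."""
    def __init__(self, resources):
        self.resources = set(resources)

    def advertise(self):
        # The ROF-N may broadcast its list of usable resources (claim 16);
        # alternatively it could answer an explicit inquiry (claim 17).
        return sorted(self.resources)

    def handle_request(self, resource):
        # Accept only requests for resources the network actually offers.
        return "accept" if resource in self.resources else "deny"


class ROFClient:
    """Device-side resource orchestration function (ROF-C)."""
    def __init__(self, rof_n):
        self.rof_n = rof_n
        self.data_paths = set()  # resources with an established data path

    def discover(self):
        # Identify network resources usable by the apparatus (claim 15).
        return self.rof_n.advertise()

    def request_access(self, resource):
        # Transmit the request and act on the ROF-N's response.
        response = self.rof_n.handle_request(resource)
        if response == "accept":      # establish the data path (claim 19)
            self.data_paths.add(resource)
            return True
        return False                  # do not establish on denial (claim 20)
```

In this sketch the discovered resources would then be indicated to the OS, which triggers `request_access` when it wants a resource; only an accepted request results in a data path entry.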
PCT/US2021/060270, filed 2021-11-22 with priority date 2020-12-08: Mechanisms for enabling in-network computing services, published as WO2022125296A1 (en)

Priority Applications (1)

CN202180075376.7A (priority date 2020-12-08, filing date 2021-11-22): Mechanism for implementing in-network computing services

Applications Claiming Priority (2)

US202063122768P (priority date 2020-12-08, filing date 2020-12-08)
US 63/122,768 (2020-12-08)

Publications (1)

WO2022125296A1 (en), published 2022-06-16

Family

ID: 81973912

Family Applications (1)

PCT/US2021/060270 (priority date 2020-12-08, filed 2021-11-22): WO2022125296A1 (en)

Country Status (2)

CN: CN116868556A (en)
WO: WO2022125296A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20150011815A (en) * 2012-05-23 2015-02-02 알까뗄 루슨트 Connectivity service orchestrator
US20190166009A1 (en) * 2017-11-29 2019-05-30 Amazon Technologies, Inc. Network planning and optimization


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
BARAKABITZE ALCARDO ALEX; AHMAD ARSLAN; MIJUMBI RASHID; HINES ANDREW: "5G network slicing using SDN and NFV: A survey of taxonomy, architectures and future challenges", COMPUTER NETWORKS, vol. 167, 17 November 2019 (2019-11-17), AMSTERDAM, NL , XP086020215, ISSN: 1389-1286, DOI: 10.1016/j.comnet.2019.106984 *
DECHOUNIOTIS DIMITRIOS, ATHANASOPOULOS NIKOLAOS, LEIVADEAS ARIS, MITTON NATHALIE, JUNGERS RAPHAEL, PAPAVASSILIOU SYMEON: "Edge Computing Resource Allocation for Dynamic Networks: The DRUID-NET Vision and Perspective", SENSORS, vol. 20, no. 8, 13 April 2020 (2020-04-13), CH , pages 2191, XP055787002, ISSN: 1424-8220, DOI: 10.3390/s20082191 *
SHIH YUAN-YAO; LIN HSIN-PENG; PANG AI-CHUN; CHUANG CHING-CHIH; CHOU CHUN-TING: "An NFV-Based Service Framework for IoT Applications in Edge Computing Environments", IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, IEEE, USA, vol. 16, no. 4, 1 December 2019 (2019-12-01), USA , pages 1419 - 1434, XP011759538, DOI: 10.1109/TNSM.2019.2948764 *

Also Published As

CN116868556A (en), published 2023-10-10


Legal Events

121 Ep: the EPO has been informed by WIPO that EP was designated in this application (ref document: 21904102; country: EP; kind code: A1)
WWE WIPO information: entry into national phase (ref document: 202180075376.7; country: CN)
NENP Non-entry into the national phase (ref country code: DE)
122 Ep: PCT application non-entry in European phase (ref document: 21904102; country: EP; kind code: A1)