CN111684824B - Enhanced NEF function, MEC, and 5G integration - Google Patents

Enhanced NEF function, MEC, and 5G integration

Info

Publication number
CN111684824B
Authority
CN
China
Prior art keywords
wtru
message
network
csp
location
Prior art date
Legal status
Active
Application number
CN201880088317.1A
Other languages
Chinese (zh)
Other versions
CN111684824A (en)
Inventor
迪巴舍希·帕卡亚斯塔
米歇尔·佩拉斯
罗伯特·G·加兹达
Current Assignee
Sony Corp
Original Assignee
Sony Corp
Priority date
Filing date
Publication date
Application filed by Sony Corp
Publication of CN111684824A
Application granted
Publication of CN111684824B

Classifications

    • H04W 4/02 Services making use of location information
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04W 4/20 Services signalling; Auxiliary data signalling, i.e. transmitting data via a non-traffic channel
    • H04W 76/12 Setup of transport tunnels
    • H04W 76/22 Manipulation of transport tunnels
    • H04L 67/51 Discovery or management thereof, e.g. service location protocol [SLP] or web services
    • H04W 4/50 Service provisioning or reconfiguring
    • H04W 84/042 Public Land Mobile systems, e.g. cellular systems

Abstract

Methods, devices, and systems for a third party edge Cloud Service Provider (CSP) to provide edge computing services to a network service provider. The edge computing service is initialized in the network. Network information services of the network are discovered. The location of cloud resources in the network may change dynamically. User plane traffic is steered to the location of the cloud resources. In some embodiments, initializing the edge computing service includes transmitting an identification of active users and data network names to a Network Function Virtualization (NFV) Management and Orchestration (MANO) system, or transmitting a list of user subscriptions to a network exposure function. In some embodiments, dynamically changing the location of the cloud resources and steering user-plane traffic to the location of the cloud resources includes determining a number of users of the edge application at a location and network requirements of the edge application.

Description

Enhanced NEF function, MEC, and 5G integration
Cross Reference to Related Applications
This application claims the benefit of U.S. patent application serial No. 62/599,335, filed on December 15, 2017, the contents of which are incorporated herein by reference.
Background
Mobile Edge Computing (MEC) is an emerging technology that may enable service and content providers to offer their applications and services at the edge of the network, close to the end user, rather than hosting them in a data center reached through the core network. The 3GPP 5G service-based architecture describes a service function called a Network Exposure Function (NEF) that exposes network services to application functions. These application functions may be owned by the network operator or by a trusted third party service provider.
Disclosure of Invention
Some embodiments provide methods, apparatus and systems for any edge computing service provider (e.g., a third party service provider or a network operator) to provide edge computing services to a network service provider or network operator. An edge computing service is initialized in the network. Network information services of the network are discovered. The location of cloud resources on which a Mobile Edge Application (MEA) can run can be dynamically changed or configured. For example, user plane traffic may be steered to the location of the cloud resources.
In some embodiments, initializing the edge computing service includes transmitting an identification and Data Network Name (DNN) of the active user to a Network Function Virtualization (NFV) Management and Orchestration (MANO) system, or transmitting a list of user subscriptions to a Network Exposure Function (NEF).
In some embodiments, discovering network information includes monitoring a user's location and/or network conditions and/or obtaining network information services from a network operator. In some embodiments, dynamically changing the location of the cloud resources and steering user-plane traffic to the location of the cloud resources includes determining a number of users of the edge application at a location and bandwidth and/or latency requirements of the edge application. In some embodiments, dynamically changing the location of the cloud resources on which the MEA can run and steering user plane traffic to the location of the cloud resources includes transmitting a message to a Network Exposure Function (NEF) to update the user plane. In some embodiments, the message to the NEF to update the user plane includes an application identification, a user identification, and a Data Network Name (DNN).
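For illustration only, the user-plane update message described above (carrying an application identification, a user identification, and a DNN) might be modeled as a simple record, as in the Python sketch below; the field names and values are assumptions made for this example and are not defined by the embodiments or by 3GPP.

```python
from dataclasses import dataclass, asdict

@dataclass
class UserPlaneUpdateRequest:
    """Illustrative payload a CSP could send toward the NEF to request a
    user-plane update; field names are assumptions, not 3GPP-defined."""
    application_id: str   # identifies the edge application (MEA)
    user_id: str          # identifies the subscriber / WTRU
    dnn: str              # Data Network Name of the target local data network

# Example: request that this user's traffic be steered toward the local
# data network hosting the edge application (values are hypothetical).
request = UserPlaneUpdateRequest(
    application_id="edge-app-001",
    user_id="imsi-001010123456789",
    dnn="mec.local",
)
print(asdict(request))
```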
Drawings
The invention may be understood in more detail from the following description, given by way of example in conjunction with the accompanying drawings, in which like reference numerals identify like elements, and in which:
FIG. 1A is a system diagram illustrating an example communication system in which one or more disclosed embodiments may be implemented;
figure 1B is a system diagram illustrating an exemplary wireless transmit/receive unit (WTRU) that may be used within the communication system shown in figure 1A, according to an embodiment;
fig. 1C is a system diagram illustrating an example Radio Access Network (RAN) and an example Core Network (CN) that may be used within the communication system shown in fig. 1A, according to an embodiment;
fig. 1D is a system diagram illustrating another example RAN and another example CN that may be used within the communication system shown in fig. 1A, according to an embodiment;
FIG. 2 is a system diagram showing aspects of a far-edge cloud;
FIG. 3 is a system diagram illustrating an example service-based architecture;
FIG. 4 is a system diagram illustrating an example non-roaming 5G system architecture;
FIG. 5 is a system diagram illustrating an example non-roaming 5G system architecture for multiple PDU sessions;
FIG. 6 is a system diagram illustrating an example of a single PDU session in a non-roaming 5G system architecture;
FIG. 7 is a message sequence chart illustrating an example session setup procedure for roaming and non-roaming with LBO conditions;
FIG. 8 is a system diagram illustrating the ETSI MEC architecture;
FIG. 9 is a system diagram illustrating an example NFV Management and Orchestration (MANO) system;
FIG. 10 is a system diagram illustrating an example logical architecture for MEC and 5G system integration;
FIG. 11 is a message sequence chart illustrating an example process for enabling third party CSPs;
FIG. 12 is a tree diagram illustrating example cloud resources deployed at different points in a network;
FIG. 13 is a message sequence chart showing an example initialization process in which the CSP updates the MNO's database with a list of active subscribers;
FIG. 14 is a message sequence chart illustrating another example process;
FIG. 15 is a message sequence chart illustrating another example process;
FIG. 16 is a message sequence chart illustrating an example process of a discovery method;
FIG. 17 is a message sequence chart illustrating an example subscription process;
FIG. 18 is a message sequence chart illustrating a process for implementing the first option of dynamic reconfiguration;
FIG. 19 is a message sequence chart illustrating a process for implementing a second option for dynamic reconfiguration;
FIG. 20 is a message sequence chart showing a process for implementing the third option for dynamic reconfiguration;
FIG. 21 is a block diagram showing a CSP cloud service as a neutral host and providing edge services for more than one network operator; and
fig. 22 is a message sequence chart showing the CSP's interaction with the NEF from each network operator.
Detailed Description
Fig. 1A is a diagram illustrating an example communication system 100 in which one or more disclosed embodiments may be implemented. The communication system 100 may be a multiple-access system that provides content, such as voice, data, video, messaging, broadcast, etc., to a plurality of wireless users. The communication system 100 may enable multiple wireless users to access such content by sharing system resources, including wireless bandwidth. For example, communication system 100 may use one or more channel access methods such as Code Division Multiple Access (CDMA), time Division Multiple Access (TDMA), frequency Division Multiple Access (FDMA), orthogonal FDMA (OFDMA), single carrier FDMA (SC-FDMA), zero tail unique word DFT spread OFDM (ZT UW DTS-s OFDM), unique word OFDM (UW-OFDM), resource block filtered OFDM, and filter bank multi-carrier (FBMC), among others.
As shown in fig. 1A, the communication system 100 may include wireless transmit/receive units (WTRUs) 102a, 102b, 102c, 102d, RANs 104, CNs 106, public Switched Telephone Networks (PSTN) 108, the internet 110, and other networks 112, although it should be understood that any number of WTRUs, base stations, networks, and/or network elements are contemplated by the disclosed embodiments. Each WTRU102a, 102b, 102c, 102d may be any type of device configured to operate and/or communicate in a wireless environment. For example, any of the WTRUs 102a, 102b, 102c, 102d may be referred to as a "station" and/or a "STA," which may be configured to transmit and/or receive wireless signals, and may include User Equipment (UE), a mobile station, a fixed or mobile subscriber unit, a subscription-based unit, a pager, a cellular telephone, a Personal Digital Assistant (PDA), a smartphone, a laptop, a netbook, a personal computer, a wireless sensor, a hotspot or Mi-Fi device, an internet of things (IoT) device, a watch or other wearable device, a head-mounted display (HMD), a vehicle, a drone, medical devices and applications (e.g., tele-surgery), industrial devices and applications (e.g., robots and/or other wireless devices operating in industrial and/or automated processing chain environments), consumer electronics, and devices operating on commercial and/or industrial wireless networks, among others. Any of the WTRUs 102a, 102b, 102c, 102d may be interchangeably referred to as a UE.
Communication system 100 may also include base station 114a and/or base station 114b. Each base station 114a, 114b may be any type of device configured to facilitate access to one or more communication networks (e.g., CN 106/115, internet 110, and/or other networks 112) by wirelessly interfacing with at least one of the WTRUs 102a, 102b, 102c, 102 d. For example, the base stations 114a, 114B may be Base Transceiver Stations (BTSs), node Bs, eNodeBs, home NodeBs, home eNodeBs, gNBs, NR NodeBs, site controllers, access Points (APs), and wireless routers, among others. Although each base station 114a, 114b is depicted as a single element, it should be appreciated that the base stations 114a, 114b may include any number of interconnected base stations and/or network elements.
Base station 114a may be part of RAN 104/113, and the RAN may also include other base stations and/or network elements (not shown), such as Base Station Controllers (BSCs), radio Network Controllers (RNCs), relay nodes, and so forth. Base station 114a and/or base station 114b may be configured to transmit and/or receive wireless signals on one or more carrier frequencies, known as cells (not shown). These frequencies may be in licensed spectrum, unlicensed spectrum, or a combination of licensed and unlicensed spectrum. A cell may provide wireless service coverage for a particular geographic area that is relatively fixed or may vary over time. The cell may be further divided into cell sectors. For example, the cell associated with base station 114a may be divided into three sectors. Thus, in one embodiment, the base station 114a may include three transceivers, that is, each transceiver corresponds to a sector of a cell. In an embodiment, base station 114a may use multiple-input multiple-output (MIMO) technology and may use multiple transceivers for each sector of a cell. For example, using beamforming, signals may be transmitted and/or received in desired spatial directions.
The base stations 114a, 114b may communicate with one or more of the WTRUs 102a, 102b, 102c, 102d over an air interface 116, which may be any suitable wireless communication link (e.g., radio Frequency (RF), microwave, centimeter-wave, millimeter-wave, infrared (IR), ultraviolet (UV), visible, and so on). Air interface 116 may be established using any suitable Radio Access Technology (RAT).
More specifically, as described above, communication system 100 may be a multiple-access system and may use one or more channel access schemes, such as CDMA, TDMA, FDMA, OFDMA, and SC-FDMA, among others. For example, the base station 114a in the RAN 104 and the WTRUs 102a, 102b, 102c may implement a radio technology such as Universal Mobile Telecommunications System (UMTS) terrestrial radio access (UTRA), which may establish the air interfaces 115/116/117 using Wideband CDMA (WCDMA). WCDMA may include communication protocols such as High Speed Packet Access (HSPA) and/or evolved HSPA (HSPA +). HSPA may include high speed Downlink (DL) packet access (HSDPA) and/or High Speed UL Packet Access (HSUPA).
In an embodiment, the base station 114a and the WTRUs 102a, 102b, 102c may implement a radio technology such as evolved UMTS terrestrial radio access (E-UTRA), which may establish the air interface 116 using Long Term Evolution (LTE) and/or LTE-Advanced (LTE-A) and/or LTE-Advanced Pro (LTE-A Pro).
In an embodiment, the base station 114a and the WTRUs 102a, 102b, 102c may implement a radio technology, such as NR radio access, that may use NR to establish the air interface 116.
In an embodiment, the base station 114a and the WTRUs 102a, 102b, 102c may implement multiple radio access technologies. For example, the base station 114a and the WTRUs 102a, 102b, 102c may collectively implement LTE radio access and NR radio access (e.g., using Dual Connectivity (DC) principles). As such, the air interface used by the WTRUs 102a, 102b, 102c may be characterized by multiple types of radio access technologies and/or transmissions sent to/from multiple types of base stations (e.g., enbs and gnbs).
In other embodiments, the base station 114a and the WTRUs 102a, 102b, 102c may implement radio technologies such as IEEE 802.11 (i.e., Wireless Fidelity (WiFi)), IEEE 802.16 (i.e., Worldwide Interoperability for Microwave Access (WiMAX)), CDMA2000 1X, CDMA2000 EV-DO, Interim Standard 2000 (IS-2000), Interim Standard 95 (IS-95), Interim Standard 856 (IS-856), Global System for Mobile communications (GSM), Enhanced Data rates for GSM Evolution (EDGE), and GSM EDGE (GERAN), among others.
The base station 114B in fig. 1A may be, for example, a wireless router, a home nodeb, a home enodeb, or an access point, and may use any suitable RAT to facilitate wireless connectivity in local areas such as business establishments, homes, vehicles, campuses, industrial facilities, air corridors (e.g., for use by drones), roads, and so forth. In one embodiment, the base station 114b and the WTRUs 102c, 102d may establish a Wireless Local Area Network (WLAN) by implementing a radio technology such as IEEE 802.11. In an embodiment, the base station 114b and the WTRUs 102c, 102d may establish a Wireless Personal Area Network (WPAN) by implementing a radio technology such as IEEE 802.15. In yet another embodiment, the base station 114b and the WTRUs 102c, 102d may establish pico cells or femto cells using a cellular-based RAT (e.g., WCDMA, CDMA2000, GSM, LTE-A Pro, NR, etc.). As shown in fig. 1A, the base station 114b may be directly connected to the internet 110. Thus, the base station 114b need not access the Internet 110 via the CN 106/115.
The RAN 104/113 may communicate with a CN 106/115, which may be any type of network configured to provide voice, data, applications, and/or voice over internet protocol (VoIP) services to one or more WTRUs 102a, 102b, 102c, 102 d. The data may have different quality of service (QoS) requirements, such as different throughput requirements, latency requirements, fault tolerance requirements, reliability requirements, data throughput requirements, and mobility requirements, among others. The CNs 106/115 can provide call control, billing services, mobile location-based services, pre-paid calling, internet connectivity, video distribution, etc., and/or can perform high-level security functions such as user authentication. Although not shown in fig. 1A, it should be appreciated that the RANs 104/113 and/or CNs 106/115 may communicate directly or indirectly with other RANs that employ the same RAT as the RANs 104/113 or a different RAT. For example, in addition to connecting to the RAN 104/113, which may use NR radio technology, the CN 106/115 may communicate with another RAN (not shown) that uses GSM, UMTS, CDMA2000, wiMAX, E-UTRA, or WiFi radio technology.
The CN 106/115 may also act as a gateway for the WTRUs 102a, 102b, 102c, 102d to access the PSTN 108, the Internet 110, and/or other networks 112. The PSTN 108 may include a circuit-switched telephone network that provides Plain Old Telephone Service (POTS). The internet 110 may include a system of globally interconnected computer network devices that utilize common communication protocols, such as transmission control protocol/internet protocol (TCP), user Datagram Protocol (UDP) and/or IP in the TCP/IP internet protocol suite. The network 112 may include wired and/or wireless communication networks owned and/or operated by other service providers. For example, the network 112 may include another CN connected to one or more RANs, which may use the same RAT as the RAN 104/113 or a different RAT.
Some or all of the WTRUs 102a, 102b, 102c, 102d in the communication system 100 may include multi-mode capabilities (e.g., the WTRUs 102a, 102b, 102c, 102d may include multiple transceivers communicating with different wireless networks over different wireless links). For example, the WTRU102 c shown in figure 1A may be configured to communicate with the base station 114a using a cellular-based radio technology and with the base station 114b, which may use an IEEE802 radio technology.
Figure 1B is a system diagram illustrating an example WTRU 102. As shown in fig. 1B, the WTRU102 may include a processor 118, a transceiver 120, a transmit/receive element 122, a speaker/microphone 124, a keypad 126, a display/touch pad 128, non-removable memory 130, removable memory 132, a power source 134, a Global Positioning System (GPS) chipset 136, and/or other peripherals 138, and/or the like. It should be appreciated that the WTRU102 may include any subcombination of the foregoing elements while maintaining compliance with the embodiments.
The processor 118 may be a general purpose processor, a special purpose processor, a conventional processor, a Digital Signal Processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, application Specific Integrated Circuits (ASICs), field Programmable Gate Arrays (FPGAs) circuits, any other type of Integrated Circuit (IC), a state machine, or the like. The processor 118 may perform signal decoding, data processing, power control, input/output processing, and/or any other functions that enable the WTRU102 to operate in a wireless environment. The processor 118 may be coupled to a transceiver 120 and the transceiver 120 may be coupled to a transmit/receive element 122. Although fig. 1B depicts processor 118 and transceiver 120 as separate components, it should be understood that processor 118 and transceiver 120 may be integrated together in one electronic package or chip.
The transmit/receive element 122 may be configured to transmit or receive signals to or from a base station (e.g., base station 114 a) via the air interface 116. For example, in one embodiment, the transmit/receive element 122 may be an antenna configured to transmit and/or receive RF signals. As an example, in an embodiment, the transmit/receive element 122 may be a radiator/detector configured to transmit and/or receive IR, UV or visible light signals. In yet another embodiment, the transmit/receive element 122 may be configured to transmit and/or receive RF and optical signals. It should be appreciated that the transmit/receive element 122 may be configured to transmit and/or receive any combination of wireless signals.
Although transmit/receive element 122 is depicted in fig. 1B as a single element, WTRU102 may include any number of transmit/receive elements 122. More specifically, the WTRU102 may use MIMO technology. Thus, in one embodiment, the WTRU102 may include two or more transmit/receive elements 122 (e.g., multiple antennas) that transmit and receive wireless signals over the air interface 116.
Transceiver 120 may be configured to modulate signals to be transmitted by transmit/receive element 122 and to demodulate signals received by transmit/receive element 122. As described above, the WTRU102 may have multi-mode capabilities. Thus, the transceiver 120 may include multiple transceivers that allow the WTRU102 to communicate via multiple RATs (e.g., NR and IEEE 802.11).
The processor 118 of the WTRU102 may be coupled to and may receive user input data from a speaker/microphone 124, a keypad 126, and/or a display/touch pad 128, such as a Liquid Crystal Display (LCD) display unit or an Organic Light Emitting Diode (OLED) display unit. The processor 118 may also output user data to the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128. Further, the processor 118 may access information from and store data in any suitable memory, such as the non-removable memory 130 and/or the removable memory 132. The non-removable memory 130 may include Random Access Memory (RAM), read Only Memory (ROM), a hard disk, or any other type of memory storage device. The removable memory 132 may include a Subscriber Identity Module (SIM) card, a memory stick, a Secure Digital (SD) memory card, and the like. In other embodiments, the processor 118 may access information from and store data in memory that is not physically located in the WTRU102, such memory may be located, for example, in a server or a home computer (not shown).
The processor 118 may receive power from the power source 134 and may be configured to distribute and/or control power for other components in the WTRU 102. The power source 134 may be any suitable device for powering the WTRU 102. For example, the power source 134 may include one or more dry cell batteries (e.g., nickel-cadmium (Ni-Cd), nickel-zinc (Ni-Zn), nickel metal hydride (NiMH), lithium ion (Li-ion), etc.), solar cells, and fuel cells, among others.
The processor 118 may also be coupled to a GPS chipset 136, which may be configured to provide location information (e.g., longitude and latitude) related to the current location of the WTRU 102. In addition to or in lieu of information from the GPS chipset 136, the WTRU102 may receive location information from base stations (e.g., base stations 114a, 114 b) via the air interface 116 and/or determine its location based on the timing of signals received from two or more nearby base stations. It should be appreciated that the WTRU102 may obtain location information via any suitable location determination method while maintaining compliance with the embodiments.
The processor 118 may be further coupled to other peripherals 138, which may include one or more software and/or hardware modules that provide additional features, functionality, and/or wired or wireless connectivity. For example, the peripherals 138 may include an accelerometer, an electronic compass, a satellite transceiver, a digital camera (for photos and/or video), a Universal Serial Bus (USB) port, a vibration device, a television transceiver, a hands-free headset, a Bluetooth® module, a Frequency Modulation (FM) radio unit, a digital music player, a media player, a video game player module, an Internet browser, a virtual reality and/or augmented reality (VR/AR) device, an activity tracker, and the like. The peripherals 138 may include one or more sensors, which may be one or more of the following: a gyroscope, an accelerometer, a Hall effect sensor, a magnetometer, an orientation sensor, a proximity sensor, a temperature sensor, a time sensor, a geolocation sensor, an altimeter, a light sensor, a touch sensor, a barometer, a gesture sensor, a biometric sensor, and/or a humidity sensor.
The WTRU102 may include a full duplex radio for which reception and transmission of some or all signals (e.g., associated with particular subframes for UL (e.g., for transmission) and downlink (e.g., for reception)) may be concurrent or simultaneous, etc. The full-duplex radio may include an interference management unit 139 that reduces and/or substantially eliminates self-interference via hardware (e.g., choke coils) or signal processing via a processor (e.g., a separate processor (not shown) or via the processor 118). In an embodiment, the WTRU102 may include a half-duplex radio that transmits or receives some or all signals, such as associated with a particular subframe for UL (e.g., for transmission) or downlink (e.g., for reception).
Figure 1C is a system diagram illustrating the RAN 104 and the CN 106 according to an embodiment. As described above, the RAN 104 may use E-UTRA radio technology to communicate with the WTRUs 102a, 102b, 102c over the air interface 116. The RAN 104 may also communicate with a CN 106.
RAN 104 may include enodebs 160a, 160B, 160c, however, it should be appreciated that RAN 104 may include any number of enodebs while maintaining consistent embodiments. Each enodeb 160a, 160B, 160c may include one or more transceivers that communicate with the WTRUs 102a, 102B, 102c over the air interface 116. In one embodiment, the enodebs 160a, 160B, 160c may implement MIMO technology. Thus, for example, the enodeb 160a may use multiple antennas to transmit wireless signals to the WTRU102a and/or to receive wireless signals from the WTRU102 a.
Each enodeb 160a, 160B, 160c may be associated with a particular cell (not shown) and may be configured to handle radio resource management decisions, handover decisions, user scheduling in UL and/or DL, and so on. As shown in FIG. 1C, eNode Bs 160a, 160B, 160C may communicate with each other over an X2 interface.
The CN 106 shown in fig. 1C may include a Mobility Management Entity (MME) 162, a Serving Gateway (SGW) 164, and a Packet Data Network (PDN) gateway (or PGW) 166. While each of the foregoing elements are described as being part of CN 106, it should be understood that any of these elements may be owned and/or operated by an entity other than the CN operator.
The MME 162 may be connected to each of the eNode Bs 160a, 160b, 160c in the RAN 104 via an S1 interface and may act as a control node. For example, the MME 162 may be responsible for authenticating users of the WTRUs 102a, 102b, 102c, performing bearer activation/deactivation processes, and selecting a particular serving gateway during initial attach of the WTRUs 102a, 102b, 102c, among other things. The MME 162 may also provide a control plane function for switching between the RAN 104 and other RANs (not shown) that employ other radio technologies (e.g., GSM or WCDMA).
The SGW 164 may be connected to each enodeb 160a, 160B, 160c in the RAN 104 via an S1 interface. SGW 164 may generally route and forward user data packets to/from WTRUs 102a, 102b, 102c. The SGW 164 may also perform other functions such as anchoring the user plane during inter-enodeb handovers, triggering paging processing when DL data is available for the WTRUs 102a, 102B, 102c, managing and storing the context of the WTRUs 102a, 102B, 102c, and the like.
The SGW 164 may be connected to a PGW 166, which may provide packet-switched network (e.g., internet 110) access for the WTRUs 102a, 102b, 102c to facilitate communications between the WTRUs 102a, 102b, 102c and IP-enabled devices.
The CN 106 may facilitate communications with other networks. For example, the CN 106 may provide the WTRUs 102a, 102b, 102c with access to a circuit-switched network (e.g., the PSTN 108) to facilitate communications between the WTRUs 102a, 102b, 102c and conventional landline communication devices. For example, the CN 106 may include or communicate with an IP gateway (e.g., an IP Multimedia Subsystem (IMS) server), and the IP gateway may serve as an interface between the CN 106 and the PSTN 108. In addition, the CN 106 may provide the WTRUs 102a, 102b, 102c with access to other networks 112, which may include other wired or wireless networks owned and/or operated by other service providers.
Although the WTRU is depicted in fig. 1A-1D as a wireless terminal, it is contemplated that in some exemplary embodiments, such a terminal may use (e.g., temporarily or permanently) a wired communication interface with a communication network.
In a typical embodiment, the other network 112 may be a WLAN.
A WLAN in infrastructure Basic Service Set (BSS) mode may have an Access Point (AP) for the BSS and one or more Stations (STAs) associated with the AP. The AP may access another type of wired/wireless network that is either interfaced to a Distribution System (DS) or that carries traffic into and/or out of the BSS. Traffic originating outside the BSS and destined for the STAs may arrive through the AP and be delivered to the STAs. Traffic originating from the STAs and destined for destinations outside the BSS may be sent to the AP for delivery to the corresponding destinations. Traffic between STAs within the BSS may be transmitted through the AP, for example, in the case where the source STA may transmit traffic to the AP and the AP may deliver the traffic to the destination STA. Traffic between STAs within the BSS may be considered and/or referred to as point-to-point traffic. The point-to-point traffic may be transmitted between (e.g., directly between) the source and destination STAs using Direct Link Setup (DLS). In some exemplary embodiments, DLS may use 802.11e DLS or 802.11z Tunneled DLS (TDLS)). A WLAN using an Independent BSS (IBSS) mode may not have an AP, and STAs within or using the IBSS (e.g., all STAs) may communicate directly with each other. The IBSS communication mode may also be referred to herein as an "ad-hoc" communication mode.
When using the 802.11ac infrastructure mode of operation or similar mode of operation, the AP may transmit a beacon on a fixed channel (e.g., a primary channel). The primary channel may have a fixed width (e.g., a20 MHz bandwidth) or a width that is dynamically set via signaling. The primary channel may be an operating channel of the BSS and may be used by the STA to establish a connection with the AP. In some exemplary embodiments, carrier sense multiple access with collision avoidance (CSMA/CA) may be implemented (e.g., in 802.11 systems). For CSMA/CA, STAs (e.g., each STA) including the AP may sense the primary channel. A particular STA may back off if it senses/detects and/or determines that the primary channel is busy. In a given BSS, one STA (e.g., only one station) may transmit at any given time.
High Throughput (HT) STAs may communicate using 40 MHz wide channels, for example, by combining a 20 MHz wide primary channel with 20 MHz wide adjacent or non-adjacent channels to form a 40 MHz wide channel.
Very High Throughput (VHT) STAs may support channels that are 20MHz, 40MHz, 80MHz, and/or 160MHz wide. 40MHz and/or 80MHz channels may be formed by combining consecutive 20MHz channels. The 160MHz channel may be formed by combining 8 consecutive 20MHz channels or by combining two discontinuous 80MHz channels (such a combination may be referred to as an 80+80 configuration). For the 80+80 configuration, after channel encoding, the data may be passed through a segment parser that may split the data into two streams. Inverse Fast Fourier Transform (IFFT) processing and time domain processing may be performed separately on each stream. The streams may be mapped on two 80MHz channels and data may be transmitted by the transmitting STA. At the receiver of the receiving STA, the above operations for the 80+80 configuration may be reversed and the combined data may be sent to the Medium Access Control (MAC).
802.11af and 802.11ah support sub-1 GHz modes of operation. The channel operating bandwidths and carriers used in 802.11af and 802.11ah are reduced compared to 802.11n and 802.11ac. 802.11af supports 5 MHz, 10 MHz, and 20 MHz bandwidths in the TV White Space (TVWS) spectrum, and 802.11ah supports 1 MHz, 2 MHz, 4 MHz, 8 MHz, and 16 MHz bandwidths using non-TVWS spectrum. In accordance with an exemplary embodiment, 802.11ah may support Meter Type Control/Machine Type Communication (MTC), such as MTC devices in a macro coverage area. MTC devices may have certain capabilities, such as limited capabilities including support for (e.g., support for only) certain and/or limited bandwidths. An MTC device may include a battery with a battery life above a threshold (e.g., to maintain a very long battery life).
For WLAN systems that can support multiple channels and channel bandwidths (e.g., 802.11n, 802.11ac, 802.11af, and 802.11 ah), these systems include a channel that can be designated as the primary channel. The bandwidth of the primary channel may be equal to a maximum common operating bandwidth supported by all STAs in the BSS. The bandwidth of the primary channel may be set and/or limited by a STA that is sourced from all STAs operating in the BSS and supports the minimum bandwidth mode of operation. In an example for 802.11ah, even though the AP and other STAs in the BSS support 2MHz, 4MHz, 8MHz, 16MHz, and/or other channel bandwidth operating modes, the width of the primary channel may be 1MHz for STAs (e.g., MTC-type devices) that support (e.g., only support) 1MHz mode. Carrier sensing and/or Network Allocation Vector (NAV) setting may depend on the state of the primary channel. If the primary channel is busy (e.g., because STAs (which support only 1MHz mode of operation) transmit to the AP), the entire available band may be considered busy even though most of the band remains idle and available for use.
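As a rough illustration of the channel-sensing behavior described above, the following Python sketch shows a STA checking whether the primary channel is busy and backing off for a random number of slots if so; it is a simplification for explanation, not an implementation of the 802.11 CSMA/CA state machine.

```python
import random

def try_transmit(primary_channel_busy, max_backoff_slots=15):
    """Simplified CSMA/CA idea: sense the primary channel, then either
    transmit or defer for a random backoff (number of slots)."""
    if not primary_channel_busy():
        return "transmit"
    # Channel sensed busy: back off before sensing again.
    return f"back off {random.randint(1, max_backoff_slots)} slots"

# A 1 MHz-only STA transmitting to the AP marks the primary channel busy,
# so other STAs defer even if most of the wider band is idle.
print(try_transmit(lambda: True))   # e.g. "back off 7 slots"
print(try_transmit(lambda: False))  # "transmit"
```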
In the United States, the available frequency band for 802.11ah is 902 MHz to 928 MHz. In Korea, the available frequency band is 917.5 MHz to 923.5 MHz. In Japan, the available frequency band is 916.5 MHz to 927.5 MHz. The total bandwidth available for 802.11ah is 6 MHz to 26 MHz, depending on the country code.
Figure 1D is a system diagram illustrating RAN 113 and CN115 according to an embodiment. As described above, the RAN 113 may communicate with the WTRUs 102a, 102b, 102c over the air interface 116 using NR radio technology. RAN 113 may also communicate with CN 115.
RAN 113 may include gnbs 180a, 180b, 180c, but it should be appreciated that RAN 113 may include any number of gnbs while maintaining consistent embodiments. Each of the gnbs 180a, 180b, 180c may include one or more transceivers to communicate with the WTRUs 102a, 102b, 102c over the air interface 116. In one embodiment, the gnbs 180a, 180b, 180c may implement MIMO techniques. For example, the gnbs 180a, 180b, 180c may use beamforming processing to transmit and/or receive signals to and/or from the gnbs 180a, 180b, 180 c. Thus, for example, the gNB 180a may use multiple antennas to transmit wireless signals to the WTRU102a and/or to receive wireless signals from the WTRU102 a. In an embodiment, the gnbs 180a, 180b, 180c may implement carrier aggregation techniques. For example, the gNB 180a may transmit multiple component carriers (not shown) to the WTRU102 a. A subset of the component carriers may be on the unlicensed spectrum and the remaining component carriers may be on the licensed spectrum. In an embodiment, the gnbs 180a, 180b, 180c may implement coordinated multipoint (CoMP) techniques. For example, WTRU102a may receive a cooperative transmission from gNB 180a and gNB 180b (and/or gNB 180 c).
WTRUs 102a, 102b, 102c may communicate with gnbs 180a, 180b, 180c using transmissions associated with scalable parameter configurations. For example, the OFDM symbol spacing and/or the OFDM subcarrier spacing (SCS) may be different for different communications, different cells, and/or different portions of the wireless transmission spectrum. The WTRUs 102a, 102b, 102c may communicate with the gnbs 180a, 180b, 180c using subframes or Transmission Time Intervals (TTIs) having different or scalable lengths (e.g., including different numbers of OFDM symbols and/or lasting different absolute time lengths).
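To make the scalable numerology concrete, the short Python sketch below computes the nominal OFDM symbol duration as the reciprocal of the subcarrier spacing for the 15 kHz family of spacings (cyclic prefix overhead ignored); this is background arithmetic for illustration, not part of the disclosed embodiments.

```python
# NR subcarrier spacings scale as 15 kHz * 2**mu; the useful OFDM symbol
# duration is the reciprocal of the subcarrier spacing (cyclic prefix ignored).
for mu in range(4):
    scs_khz = 15 * (2 ** mu)
    symbol_us = 1e6 / (scs_khz * 1e3)   # 1/SCS in seconds, expressed in microseconds
    print(f"SCS {scs_khz:>3} kHz -> symbol ~{symbol_us:.2f} us")
# SCS  15 kHz -> symbol ~66.67 us
# SCS  30 kHz -> symbol ~33.33 us
# SCS  60 kHz -> symbol ~16.67 us
# SCS 120 kHz -> symbol ~8.33 us
```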
The gnbs 180a, 180b, 180c may be configured to communicate with WTRUs 102a, 102b, 102c in independent configurations and/or non-independent configurations. In a standalone configuration, the WTRUs 102a, 102B, 102c may communicate with the gnbs 180a, 180B, 180c without accessing other RANs, such as the enodebs 160a, 160B, 160c. In a standalone configuration, the WTRUs 102a, 102b, 102c may use one or more of the gnbs 180a, 180b, 180c as mobility anchors. In a standalone configuration, the WTRUs 102a, 102b, 102c may communicate with the gnbs 180a, 180b, 180c using signals in the unlicensed frequency band. In a non-standalone configuration, the WTRUs 102a, 102B, 102c may communicate/connect with the gnbs 180a, 180B, 180c while communicating/connecting with other RANs, such as the enodebs 160a, 160B, 160c. For example, the WTRUs 102a, 102B, 102c may communicate with one or more gnbs 180a, 180B, 180c and one or more enodebs 160a, 160B, 160c in a substantially simultaneous manner by implementing DC principles. In a non-standalone configuration, the enode bs 160a, 160B, 160c may serve as mobility anchors for the WTRUs 102a, 102B, 102c, and the gnbs 180a, 180B, 180c may provide additional coverage and/or throughput in order to serve the WTRUs 102a, 102B, 102c.
Each gNB 180a, 180b, 180c may be associated with a particular cell (not shown) and may be configured to handle radio resource management decisions, handover decisions, user scheduling in UL and/or DL, support network slicing, DC, implement interworking processing between NR and E-UTRA, route user plane data to User Plane Functions (UPFs) 184a, 184b, and route control plane information to access and mobility management functions (AMFs) 182a, 182b, among other things. As shown in fig. 1D, the gnbs 180a, 180b, 180c may communicate with each other over an Xn interface.
The CN115 shown in fig. 1D may include at least one AMF 182a, 182b, at least one UPF 184a, 184b, at least one Session Management Function (SMF) 183a, 183b, and possibly a Data Network (DN) 185a, 185b. While each of the foregoing elements are depicted as being part of the CN115, it should be appreciated that any of these elements may be owned and/or operated by an entity other than the CN operator.
The AMFs 182a, 182b may be connected to one or more gnbs 180a, 180b, 180c in the RAN 113 via an N2 interface and may act as control nodes. For example, the AMFs 182a, 182b may be responsible for authenticating users of the WTRUs 102a, 102b, 102c, supporting network slicing (e.g., handling different PDU sessions with different requirements), selecting specific SMFs 183a, 183b, managing registration areas, terminating NAS signaling, and mobility management, among others. The AMFs 182a, 182b may use network slicing processing to customize the CN support provided for the WTRUs 102a, 102b, 102c based on the type of service used by the WTRUs 102a, 102b, 102c. As an example, different network slices may be established for different use cases, such as services relying on ultra-reliable low latency communication (URLLC) access, services relying on enhanced large-scale mobile broadband (eMBB) access, services for MTC access, and so on. The AMFs 182a/182b may provide control plane functionality for switching between the RAN 113 and other RANs (not shown) that employ other radio technologies (e.g., LTE-a Pro, and/or non-3 GPP access technologies such as WiFi).
The SMFs 183a, 183b may be connected to the AMFs 182a, 182b in the CN115 via an N11 interface. The SMFs 183a, 183b may also be connected to UPFs 184a, 184b in the CN115 via an N4 interface. The SMFs 183a, 183b may select and control the UPFs 184a, 184b, and may configure traffic routing through the UPFs 184a, 184b. The SMFs 183a, 183b may perform other functions such as managing and assigning WTRU IP addresses, managing PDU sessions, controlling policy enforcement and QoS, and providing downlink data notification, among others. The PDU session type may be IP based, non-IP based, ethernet based, and the like.
The UPFs 184a, 184b may be connected to one or more gnbs 180a, 180b, 180c in the RAN 113 via an N3 interface, which may provide the WTRUs 102a, 102b, 102c with packet-switched network (e.g., the internet 110) access to facilitate communications between the WTRUs 102a, 102b, 102c and IP-enabled devices. The UPFs 184, 184b may perform other functions such as routing and forwarding packets, enforcing user plane policies, supporting multi-homed PDU sessions, handling user plane QoS, buffering downlink packets, and providing mobility anchoring processing, among others.
The CN115 may facilitate communications with other networks. For example, the CN115 may include or may communicate with an IP gateway (e.g., an IP Multimedia Subsystem (IMS) server) that acts as an interface between the CN115 and the PSTN 108. In addition, the CN115 may provide the WTRUs 102a, 102b, 102c with access to the other networks 112, which may include other wired or wireless networks owned and/or operated by other service providers. In one embodiment, the WTRUs 102a, 102b, 102c may connect to the local Data Networks (DNs) 185a, 185b through the UPFs 184a, 184b via an N3 interface that interfaces to the UPFs 184a, 184b and an N6 interface between the UPFs 184a, 184b and the DNs 185a, 185b.
In view of fig. 1A-1D and the corresponding description with respect to fig. 1A-1D, one or more or all of the functions described herein with respect to one or more of the following may be performed by one or more emulation devices (not shown): WTRUs 102a-d, base stations 114a-B, enode bs 160a-c, MME 162, SGW 164, PGW 166, gNB 180a-c, AMFs 182a-B, UPFs 184a-B, SMFs 183a-B, DNs 185 a-B, and/or any other device(s) described herein. The emulation device can be one or more devices configured to simulate one or more or all of the functions described herein. For example, the emulation device may be used to test other devices and/or simulate network and/or WTRU functions.
The simulated device may be designed to conduct one or more tests on other devices in a laboratory environment and/or an operator network environment. For example, the one or more simulated devices may perform one or more or all functions while implemented and/or deployed, in whole or in part, as part of a wired and/or wireless communication network in order to test other devices within the communication network. The one or more emulation devices can perform one or more or all functions while temporarily implemented or deployed as part of a wired and/or wireless communication network. The simulation device may be directly coupled to another device to perform testing and/or may use over-the-air wireless communication to perform testing.
The one or more emulation devices can perform one or more functions, including all functions, while not being implemented or deployed as part of a wired and/or wireless communication network. For example, the simulation device may be used in a test scenario of a test laboratory and/or a wired and/or wireless communication network that is not deployed (e.g., tested) in order to conduct testing with respect to one or more components. The one or more simulation devices may be test devices. The simulation device may transmit and/or receive data using direct RF coupling and/or wireless communication via RF circuitry (which may include one or more antennas, as examples).
5G wireless networks are currently being developed with the goal of establishing a unified connectivity framework that extends the capabilities of Human Type Communication (HTC) and allows for the interconnection of Machine Type Communication (MTC) from machines such as vehicles, robots, ioT sensors and actuators, and other industrial devices. The unified framework can realize future industrial drive application by supporting mixed priority HTC and industrial MTC services. Although there is significant uncertainty as to what the final 5G framework will be, some of its features may include low latency, proximity services, context awareness, and Mobile Edge Computation (MEC).
For example, it is expected that breakthroughs in media access and advanced waveform technology combined with novel coding and modulation schemes provide 5G networks with transmission latencies of less than 1ms, which may be referred to as low latency. The 5G system may enable devices to communicate directly with other devices in the vicinity in a device-to-device (D2D) manner over a direct local link. The 5G network may be context aware. For example, a network may be expected to know (e.g., continuously) various locations and characteristics of a given device, and may be expected to possess information about its surroundings. MEC is an emerging technology that may enable service and content providers to offer their applications and services at the edge of the network, rather than utilizing the core network. In other words, in MEC systems, application and service deployment may be enabled through a cloud environment at the edge of the mobile network. This concept may reduce latency by limiting traffic to the geographic location(s) of the subscribers and may avoid congesting the backbone network (e.g., core network).
MEC may play a role in achieving the 5G vision. For example, MEC may help meet critical 5G requirements such as latency, bandwidth, context awareness, and the like. Hooks (e.g., an initial framework) may be introduced in 5G to integrate MEC, such as the UPF, branching UPF, etc. Network functions, such as the Network Exposure Function (NEF), may be defined to expose or provide network services to Application Functions (AFs) and to extend the network with non-3GPP services.
Various devices, systems and methods may be employed to integrate MECs in a 3GPP defined 5G network.
FIG. 2 is a system diagram showing aspects of the far-edge cloud and its location relative to other parts of the network. The far-edge cloud may include a cloud formed by any one or combination of small cells, WiFi APs, HeNBs, set-top boxes, HetNet gateways, in-home media gateways, and the like. The far-edge cloud 210 may form the far edge of the network outside of the managed data center, beyond the cloud 220 defined by the European Telecommunications Standards Institute (ETSI) MEC. The far-edge cloud may provide services independently or in cooperation with the MEC/remote cloud 230. Given the resources available to it, the far-edge cloud may be limited in computing power, storage, and network connectivity. On the other hand, by being relatively closer to the end user, the far-edge cloud may have the advantage of being able to respond with lower latency.
5G and edge computing can generate new business models. Property owners such as shopping malls and tower companies can generate additional revenue by placing small data centers within their property and can provide edge computing services to wireless service providers. Such players may follow an Infrastructure as a Service (IaaS) model and may manage hardware and networking resources. Such players may extend their business model to a Platform as a Service (PaaS) model. The PaaS model may allow application developers to install edge applications. In this way, the property owner can leverage its physical presence to generate additional revenue. On the other hand, Mobile Network Operators (MNOs) may be able to avoid installing and managing data centers, and may utilize edge computing to improve the consumer experience. The MNO may charge the end user a premium for using the edge computing service.
Regardless of whether the IaaS or PaaS model is used, a standardized interface between the mobile network and an edge computing service provider, such as a trusted third party cloud service provider, may be desirable.
FIG. 3 is a system diagram illustrating an example service-based architecture. In this example, all functions expose a service Application Programming Interface (API) that can be used by other functions. The architecture includes a Network Exposure Function (NEF) 310, a Network Repository Function (NRF) 311, a Policy Control Function (PCF) 312, a unified data management function (UDM) 313, an Application Function (AF) 314, an authentication server function (AUSF) 315, a core access and mobility management function (AMF) 316, a Session Management Function (SMF) 317, and a User Plane Function (UPF) 318. Various implementations may include one, some, or all of these functions, or variations thereof.
In the example of fig. 3, NEF 310 provides secure exposure of the services and capabilities provided by 3GPP network functions to consumers such as application functions, edge computing, etc. NRF 311 provides a repository function that provides registration and discovery functions for other network functions. PCF 312 provides policy control functions covering network slicing, roaming, and mobility management. UDM 313 is responsible for storing authentication and access authorization credentials. The AF 314 interacts with the 3GPP core network in order to provide services to facilitate application influence on traffic routing, access network capability exposure, and interaction with the policy framework for policy control. The AUSF 315 provides authentication server functionality. The AMF 316 provides core access and mobility functions. The SMF 317 provides session establishment, modification and release, tunnel maintenance between the UPF and AN nodes, and selection and control of the UPF 318, and configures traffic handling at the UPF 318 to route traffic to the appropriate destination. The UPF 318 provides an anchor point for intra-RAT/inter-RAT mobility, an external PDU session point of interconnect to the data network, and network packet routing and forwarding.
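As a hedged illustration of how an application function might consume a service exposed through the NEF 310, the Python sketch below assembles a traffic-influence style request asking that traffic for an application be routed toward a local data network; the endpoint path and field names are assumptions made for this example and do not reproduce the normative 3GPP API.

```python
import json

# Hypothetical traffic-influence style request an AF could submit via the NEF
# so that traffic for a given application is routed toward a local data network.
# The URL, resource path, and all keys below are illustrative assumptions.
nef_endpoint = "https://nef.operator.example/af-traffic-influence/v1/subscriptions"
payload = {
    "afServiceId": "edge-video-analytics",
    "dnn": "mec.local",
    "snssai": {"sst": 1, "sd": "000001"},
    "trafficRoutes": [{"dnai": "edge-site-1"}],       # steer toward this local DN access point
    "notificationUri": "https://csp.example/notify",  # where the CSP wants events reported
}
print("POST", nef_endpoint)
print(json.dumps(payload, indent=2))
```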
The 5G system architecture may be represented using a reference point. Fig. 4 is a system diagram illustrating an example non-roaming 5G system architecture for simultaneous access to two (e.g., local and central) data networks. In the example provided, WTRU 410 may gain access to both DNs 420 and 430 by establishing a single PDU session. The reference point represents the interaction standardized by 3 GPP. These interactions may use the APIs exposed by the service functions.
An example principle of edge computation in 5G includes the 5G core network selecting a UPF close to the WTRU and performing traffic steering from the UPF to the local data network via an N6 interface. Service or session continuity may be required due to the mobility of user or application functions. Network information and capabilities may be exposed to the edge computing application.
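A minimal sketch of the "select a UPF close to the WTRU" principle is shown below, assuming each candidate UPF is described by an identifier, a site, and a rough distance to the WTRU's current location; the data structure and selection rule are illustrative assumptions, not the 5GC selection procedure.

```python
def select_nearby_upf(candidate_upfs):
    """Pick the candidate UPF with the smallest reported distance to the WTRU.

    candidate_upfs: list of dicts like {"id": ..., "site": ..., "distance_km": ...}
    (structure assumed purely for illustration).
    """
    return min(candidate_upfs, key=lambda upf: upf["distance_km"])

upfs = [
    {"id": "upf-central", "site": "core-dc", "distance_km": 120.0},
    {"id": "upf-edge-1", "site": "edge-site-1", "distance_km": 2.5},
]
chosen = select_nearby_upf(upfs)
# Traffic can then break out over N6 from the chosen UPF to the local data network.
print(chosen["id"])  # upf-edge-1
```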
Fig. 5 is a system diagram illustrating an example non-roaming 5G system architecture for multiple PDU sessions. For example, a dedicated PDU session may be used for Edge Computing (EC), while another PDU session may be used for non-EC traffic. In this example, traffic flow 510 associated with the first PDU session may proceed to central DN 530. Traffic flow 520 associated with the second PDU session may proceed to local DN 540 and may terminate at local DN 540 or may be delivered to an external network.
Fig. 6 is a system diagram illustrating an example of a single PDU session in a non-roaming 5G system architecture for simultaneous access to two (e.g., local and central) data networks. In this example, EC traffic 610 may be steered by a first UPF 620 to a local DN630, and non-EC traffic 640 may be forwarded to a second UPF 650, which steers non-EC flows to an external DN 660. Local DN630 may terminate certain flows or pass through flows.
Each PDU session may support a single PDU session type (e.g., supporting the exchange of a single type of PDU requested by the WTRU when establishing the PDU session). The following example PDU session types may be defined: IPv4, IPv6, ethernet, and unstructured (where the PDU type exchanged between the WTRU and the DN may be transparent to the 5G system). NAS SM signaling exchanged between the WTRU and SMF over N1 may be used to establish (e.g., according to WTRU requests), modify (e.g., according to WTRU and 5GC requests), and/or release (e.g., according to WTRU and 5GC requests) PDU sessions. Upon request from the application server, the 5GC may trigger the WTRU to establish a PDU session to a specific Data Network Name (DNN).
The SMF may be responsible for checking whether the WTRU request conforms to the user subscription associated with the requesting WTRU. To this end, the SMF may retrieve SMF-level subscription data from the UDM. Such data may indicate the allowed PDU session type per DNN and, in the case of home routing, whether the Visited Public Land Mobile Network (VPLMN) is allowed to insert an uplink classifier (UL CL) or a branching point for PDU sessions into the DNN. Information about allowed Services and Session Continuity (SSC) patterns may be provided by SMFs in a Home Public Land Mobile Network (HPLMN) to SMFs in a VPLMN.
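The subscription check described above can be pictured as a lookup of the allowed PDU session types per DNN, as in the Python sketch below; the subscription contents and field names are invented for illustration and do not represent the actual UDM data model.

```python
# Illustrative SMF-side check: is the requested PDU session type allowed for
# this DNN under the subscriber's SM subscription data retrieved from the UDM?
# The subscription contents below are invented for the example.
subscription_data = {
    "imsi-001010123456789": {
        "internet": {"allowed_pdu_types": {"IPv4", "IPv6"}},
        "mec.local": {"allowed_pdu_types": {"IPv4"}},
    }
}

def request_allowed(supi, dnn, pdu_type):
    per_dnn = subscription_data.get(supi, {}).get(dnn)
    return per_dnn is not None and pdu_type in per_dnn["allowed_pdu_types"]

print(request_allowed("imsi-001010123456789", "mec.local", "IPv4"))      # True
print(request_allowed("imsi-001010123456789", "mec.local", "Ethernet"))  # False
```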
The WTRU may provide a PDU session Identifier (ID) in a PDU session setup request sent to the network. The WTRU may also provide PDU session type, slice information (e.g., single network slice selection assistance information (S-NSSAI)), DNN, and/or SSC patterns.
Table 1 shows example attributes for a PDU session:
TABLE 1 (example PDU session attributes; presented as an image in the original publication and not reproduced here)
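Because Table 1 is available only as an image in the published text, the following sketch illustrates how the PDU session attributes discussed above (PDU session ID, PDU session type, SSC mode, DNN, and S-NSSAI) might be grouped; the Python structure, field names, and example values are illustrative assumptions rather than 3GPP-defined encodings.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class PduSessionType(Enum):
    # Example PDU session types listed in the text.
    IPV4 = "IPv4"
    IPV6 = "IPv6"
    ETHERNET = "Ethernet"
    UNSTRUCTURED = "Unstructured"

@dataclass
class PduSessionAttributes:
    """Illustrative container for the per-session attributes discussed above."""
    pdu_session_id: int            # generated by the WTRU when requesting a new session
    pdu_type: PduSessionType       # a single PDU type per session
    ssc_mode: int                  # Session and Service Continuity mode
    dnn: str                       # Data Network Name the session connects to
    s_nssai: Optional[str] = None  # slice information; a default may apply if omitted

# Example: attributes a WTRU might include in a PDU session establishment request.
request = PduSessionAttributes(
    pdu_session_id=1,
    pdu_type=PduSessionType.IPV4,
    ssc_mode=1,
    dnn="mycsp.com",
    s_nssai="1-000001",
)
print(request)
```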
PDU session establishment procedures for the roaming (with local breakout) and non-roaming cases may be used to establish new PDU sessions and/or to switch existing PDU sessions between 3GPP access and non-3GPP access. In the roaming case, the AMF may determine whether to establish a PDU session with Local Breakout (LBO) or home routing. In the LBO case, the procedure is the same as in the non-roaming case, except that the SMF, UPF and PCF are located in the visited network.
Fig. 7 is a message sequence chart illustrating an example session establishment procedure for the non-roaming and roaming-with-LBO cases. The procedure of fig. 7 may assume that the WTRU is already registered with the AMF and thus that the AMF has already obtained user subscription data from the UDM. The messages, requests, and responses described herein may be described in the following format: "message type (parameter X, parameter Y, ..., parameter N)".
In step 701, a NAS message (e.g., including S-NSSAI, DNN, PDU Session ID, request type, and/or N1 SM information) is sent from the WTRU to the AMF. To establish a new PDU session, the WTRU may generate a new PDU session ID. The WTRU may initiate the WTRU-requested PDU session establishment procedure by transmitting a NAS message containing a PDU session setup request within the N1 SM information. The PDU session setup request may include a PDU type, an SSC mode, and/or protocol configuration options. The request type may indicate an initial request if the PDU session setup request attempts to establish a new PDU session, and may indicate an existing PDU session if the request relates to an existing PDU session switched between a 3GPP access and a non-3GPP access. The NAS message sent by the WTRU may be encapsulated by the AN in an N2 message to the AMF, which may include user location information and access technology type information. The N1 SM information may include an SM PDU DN request container containing information for PDU session authorization by an external DN.
In step 702, the AMF determines that the message corresponds to a request for a new PDU session based on the request type indicating an initial request and on the PDU session ID not being used for any existing PDU session(s) of the WTRU. If the NAS message does not contain an S-NSSAI, the AMF may determine a default S-NSSAI for the requested PDU session according to the WTRU's subscription. The AMF may select the SMF according to a suitable procedure. The AMF may store an association of the PDU session ID and the SMF ID. In case the request type indicates an existing PDU session and the AMF does not recognize the PDU session ID, or the subscription context from the UDM does not contain an SMF ID corresponding to the DNN, the situation may be handled as an error.
In step 703, an SM request (e.g., including subscriber permanent ID, DNN, S-NSSAI, PDU session ID, AMF ID, N1 SM information (e.g., PDU session ID, PDU session setup request), user location information, and/or access technology type) is sent from the AMF to the SMF. The AMF ID may uniquely identify the AMF serving the WTRU. The N1 SM information may include a PDU session setup request received from the WTRU.
In step 704a, a subscription data request (subscriber permanent ID, DNN) is sent from the SMF to the UDM. If the request type in step 703 indicates an existing PDU session, the SMF may determine that the request is due to a handover between a 3GPP access and a non-3GPP access. The SMF may identify the existing PDU session based on the PDU session ID. If the SMF has not yet retrieved the SM-related subscription data of the WTRU for this DNN, the SMF may request this subscription data.
In step 704b, a subscription data response is sent from the UDM to the SMF. The subscription data may include one or more authorized PDU types, one or more authorized SSC modes, and/or a default QoS profile. The SMF may determine whether the WTRU request complies with the user subscription and local policy. If the request does not comply, the SMF may reject the WTRU request via NAS SM signaling (e.g., including a relevant SM rejection cause) relayed by the AMF, in which case the SMF indicates to the AMF that the PDU session ID is to be considered released, and the remainder of the procedure is skipped.
In step 705, a procedure related to PDU session authentication/authorization is performed. Signaling may occur between the SMF and DN via UPF. If the SMF needs to authorize/authenticate the establishment of a PDU session, the SMF may select a UPF and trigger PDU session establishment authorization/authentication. If the PDU session establishment authentication/authorization fails, the SMF may terminate the PDU session establishment procedure and indicate a rejection to the WTRU.
In step 706a, if a dynamic PCC is deployed, the SMF may perform PCF selection. In step 706b, the SMF may initiate a PDU-CAN session setup to the PCF to obtain default PCC rules for the PDU session. If the request type in step 703 indicates an existing PDU session, the PCF may instead initiate a PDU-CAN session modification.
The purpose of these procedures may be to receive PCC rules before selecting a UPF. If no PCC rule is needed as input for UPF selection, the relevant process may be skipped.
In step 707, if the request type in step 703 indicates an initial request, the SMF may select an SSC mode for the PDU session. The SMF may also select a UPF if the procedures associated with receiving PCC rules were not performed before UPF selection. In the case of PDU type IPv4 or IPv6, the SMF may allocate an IP address/prefix for the PDU session. For the unstructured PDU type, the SMF may allocate an IPv6 prefix and an N6 point-to-point tunnel (based on UDP/IPv6) for the PDU session.
In step 708, if a dynamic PCC is deployed and a PDU-CAN session setup has not been performed, the SMF may initiate a PDU-CAN session setup to the PCF to obtain default PCC rules for the PDU session. Otherwise, if the request type indicates an initial request and dynamic PCC is deployed and the PDU type is IPv4 or IPv6, the SMF may initiate PDU-CAN session modification and provide the assigned WTRU IP address/prefix to the PCF.
In step 709, if the request type indicates an initial request and no procedure related to receiving PCC rules was performed before selecting a UPF, the SMF may initiate an N4 session establishment procedure with the selected UPF; otherwise (for example, if the request type does not indicate an initial request, or a procedure for receiving PCC rules was performed), the SMF may initiate an N4 session modification procedure with the selected UPF. In step 709a, the SMF sends an N4 session establishment/modification request to the UPF and provides the packet detection, enforcement and reporting rules to be installed on the UPF for this PDU session. If CN tunnel information is allocated by the SMF, the CN tunnel information may be provided to the UPF in this step. In step 709b, the UPF acknowledges by sending an N4 session establishment/modification response. If CN tunnel information is allocated by the UPF, the CN tunnel information may be provided to the SMF in this step.
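As a rough illustration of step 709, the sketch below models an SMF handing packet detection and forwarding rules to a UPF and receiving CN tunnel information in return; the message and rule structures are simplified assumptions and do not reflect the actual N4/PFCP encoding.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PacketDetectionRule:
    flow_description: str      # e.g., an IP filter identifying the traffic
    forward_to: str            # where matching traffic should be sent (e.g., a DNN)

@dataclass
class N4SessionRequest:
    """Hypothetical N4 session establishment/modification request (step 709a)."""
    pdu_session_id: int
    rules: List[PacketDetectionRule] = field(default_factory=list)
    cn_tunnel_info: Optional[str] = None   # provided here only if allocated by the SMF

class Upf:
    def __init__(self) -> None:
        self.sessions = {}

    def handle_n4_request(self, req: N4SessionRequest) -> dict:
        # Install the rules for this PDU session (step 709a).
        self.sessions[req.pdu_session_id] = req.rules
        # If the UPF allocates CN tunnel info, it returns it in the response (step 709b).
        cn_tunnel = req.cn_tunnel_info or f"upf-n3-tunnel-{req.pdu_session_id}"
        return {"result": "ok", "cn_tunnel_info": cn_tunnel}

upf = Upf()
resp = upf.handle_n4_request(N4SessionRequest(
    pdu_session_id=1,
    rules=[PacketDetectionRule("dst 10.0.0.0/24", "local-dn")],
))
print(resp)
```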
In step 710, an SM response (e.g., including a cause, N2 SM information (PDU session ID, one or more QoS profiles, and/or CN tunnel information), N1 SM information (e.g., including a PDU session setup accept (e.g., an authorized QoS rule, SSC mode, S-NSSAI, and/or assigned IPv4 address))) is sent from the SMF to the AMF.
The N2 SM information may carry information that the AMF may forward to an Access Network (AN) (e.g., a RAN). For example, the AMF may forward CN tunnel information corresponding to the core network address of the N3 tunnel associated with the PDU session. The QoS profile(s) may provide the (R) AN with a mapping between QoS parameters and QoS flow identifiers (multiple QoS profiles may be provided to the (R) AN) and/or a PDU session ID that may be signaled by the AN to the WTRU to indicate an association between AN resources and a PDU session for the WTRU.
The N1 SM information may include the PDU session setup accept that the AMF may provide to the WTRU. Multiple authorized QoS rules may be included in the PDU session setup accept within the N1 SM information and in the N2 SM information. The SM response may also contain the PDU session ID and information that allows the AMF to know which WTRU is the target and to determine which access toward the WTRU to use. The access information may be used in the case where the WTRU is connected through both 3GPP and non-3GPP accesses. In step 711, an N2 PDU session request (N2 SM information, NAS message (PDU session ID, PDU session setup accept)) is sent from the AMF to the (R) AN. The AMF may send to the (R) AN, within the N2 PDU session request, a NAS message containing, for example, the PDU session ID and the PDU session setup accept targeted to the WTRU, together with the N2 SM information received from the SMF.
In step 712, a PDU session setup accept message is sent from the (R) AN to the WTRU. The (R) AN may perform an AN-specific signaling exchange with the WTRU that is related to the information received from the SMF. For example, in the case of a 3GPP RAN, an RRC connection reconfiguration may occur in which the WTRU establishes the necessary RAN resources related to the authorized QoS rules for the PDU session request received in step 710.
The (R) AN may also assign (R) AN N3 tunneling information for the PDU session. The (R) AN may forward the NAS message (PDU session ID, N1 SM info (PDU session setup accept)) provided in step 710 to the WTRU. The (R) AN may provide only the NAS message to the WTRU if necessary RAN resources are established and the allocation of the (R) AN tunneling information is successful.
In step 713, the (R) AN sends an N2 PDU session response (PDU session ID, cause, N2 SM information (PDU session ID, (R) AN N3 tunnel information, list of accepted/rejected QoS profile(s))) to the AMF. The (R) AN tunnel information may correspond to the access network address of the N3 tunnel corresponding to the PDU session.
In step 714, the AMF sends an SM request (N2 SM information) to the SMF. The AMF may forward the N2 SM information received, for example, from the (R) AN to the SMF. Note that in some implementations, an additional step may be included in which the WTRU indicates to the core network that it has successfully established the PDU session; alternatively, the successful establishment over the (R) AN indicated in step 712 may be considered sufficient. For example, the WTRU may send a NAS PDU session setup complete message to indicate that it has successfully established the PDU session.
At step 715a, if an N4 session for the PDU session has not already been established, the SMF initiates an N4 session establishment procedure with the UPF; otherwise, the SMF initiates an N4 session modification procedure with the UPF. The SMF provides AN and CN tunnel information. The CN tunnel information only needs to be provided if the SMF selected the CN tunnel information in step 708. If the PDU session establishment request is due to mobility between 3GPP and non-3GPP accesses, the downlink data path may be switched to the target access in this step. At step 715b, the UPF provides an N4 session establishment/modification response to the SMF.
In step 716, the SMF sends an SM response (cause) to the AMF. After this step, the AMF may forward relevant events to the SMF, e.g. at handover, where the (R) AN tunnel information changes or the AMF is relocated. A relevant event may include, for example, a change in the user location or access type associated with N1 signaling received by the AMF from the (R) AN. In some implementations, the SMF may explicitly subscribe to these events, or the subscription may be implicit.
In step 717, in the case of PDU type IPv6, the SMF generates an IPv6 router advertisement and sends it to the WTRU via the N4 interface and the UPF.
In step 718, if the PDU session establishment request is due to a handover between a 3GPP access and a non-3GPP access (e.g., the request type indicates an existing PDU session), the SMF performs a procedure to release the user plane on the source access (3GPP or non-3GPP access).
In step 719, if the SMF identity was not included by the UDM in the DNN subscription context in step 704b, the SMF invokes a "UDM register WTRU serving NF" service, including the SMF address and the DNN. The UDM may store, for example, the SMF identity, address and associated DNN. If the PDU session establishment is unsuccessful during the procedure, the SMF may notify the AMF.
An Application Function (AF) may send a request to influence SMF routing decisions for traffic of a PDU session. This may affect UPF selection and allow user traffic to be routed to a local DN. Such a request may contain, for example, information identifying the traffic to be routed, information about where to route the traffic, the potential location of the AF to which the traffic routing should apply, and a time indication of when to apply the traffic routing. The information identifying the traffic may include a DNN and/or an application identifier or traffic filtering information. In some implementations, a mapping may be provided between the information provided by the AF and the information used in the core network. The information about where to route traffic may include an external identifier, a Mobile Station International Subscriber Directory Number (MSISDN), or another identifier for an individual WTRU, a group of WTRUs, or all WTRUs. The potential location of the AF may be used, for example, for UPF selection.
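The sketch below illustrates, under stated assumptions, how the contents of such an AF traffic-influence request might be organized; the field names and example values are hypothetical and only mirror the items listed in the paragraph above.

```python
from dataclasses import dataclass
from typing import List, Optional, Union

@dataclass
class TrafficInfluenceRequest:
    """Illustrative AF request to influence traffic routing (field names are assumptions)."""
    traffic_id: str                        # e.g., a DNN, application identifier, or traffic filter
    target_wtrus: Union[str, List[str]]    # external ID, MSISDN, a group, or "all"
    route_to: str                          # where the traffic should be routed (e.g., a local DN)
    af_location: Optional[str] = None      # potential AF location, usable for UPF selection
    validity_window: Optional[str] = None  # time indication for when the routing applies

# Example request: steer an application's traffic for one WTRU toward a local DN.
req = TrafficInfluenceRequest(
    traffic_id="app-123",
    target_wtrus="msisdn:15551234567",
    route_to="local-dn-west",
    af_location="edge-site-3",
    validity_window="2024-01-01T00:00/2024-01-01T06:00",
)
print(req)
```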
The AF making such a request may be assumed to belong to the Public Land Mobile Network (PLMN) serving the WTRU. The AF may issue the request on behalf of other applications not owned by the PLMN serving the WTRU. Based on local policy, the SMF may take this information into account to select or reselect the UPF(s) for the PDU session; activate a mechanism for traffic multi-homing or enforcement of an uplink classifier (UL CL); and/or inform the application function about the (re-)selection of the UP path. Mechanisms for traffic multi-homing or UL CL enforcement may include providing traffic forwarding (e.g., breakout) rules to the UPF.
In some implementations, the application function request may be routed to the SMF, for example, from the NEF or PCF.
The application function may request to be notified of location information for the WTRU(s).
The first driver of the edge computing trend is that network operators may wish to provide additional value-added services and bring better performance and quality of experience to end users by taking advantage of the unique characteristics of their access networks, such as proximity to the end user and knowledge of the user's identity. The second major driver of edge computing is the need to supplement power-constrained IoT devices with computing power at the edge of the network in order to achieve complex operations or operations involving large amounts of data and devices, which would otherwise simply not be possible due to the latency and capacity limitations introduced by backhaul links.
The third driver for edge computing comes from the development of cloud computing itself, which results in an increasing integration of software development and deployment activities, as illustrated by the "DevOps" development model, in order to cope with increased system complexity. This trend is enabled by technologies like Network Function Virtualization (NFV), which can also be described as "merging the network infrastructure with the IT world" and, at its core, aims to reduce capital and operational expenditures for application providers. MEC can be seen as a way to extend this new flexibility beyond the data center to the rest of the internet and even to end-user devices, which ultimately helps enable new classes of applications that are not well served by remote clouds.
Fig. 8 is a system diagram illustrating the ETSI MEC architecture. In the example shown in fig. 8, a Mobile Edge Host (MEH) 810 is an entity that contains a Mobile Edge Platform (MEP) 820 and a virtualization infrastructure 830. The virtualization infrastructure may include a data plane that enforces the traffic rules received by the MEP 820 and routes traffic between, for example, applications, services, DNS servers/proxies, 3GPP networks, local networks, and external networks. The MEP 820 is a collection of essential functions required to run a mobile edge application on a particular virtualization infrastructure. The MEP may receive traffic rules, e.g., from a Mobile Edge Platform Manager (MEPM) 840, an application, or a service, and instruct the virtualization infrastructure 830 accordingly.
A mobile edge application (ME application) may be instantiated on the virtualization infrastructure 830 of the MEH 810 based on a configuration or request validated by the Mobile Edge Platform Manager (MEPM) 840. The MEPM 840 may manage the application lifecycle and notify the mobile edge orchestrator (MEO) 850 of relevant application-related events, provide element management functionality to the mobile edge platform, and manage application rules and requirements.
FIG. 9 is a system diagram illustrating an example of NFV management and orchestration (MANO). In NFV MANO, the functions typically required for NFV orchestration include service orchestration and instantiation, service chaining, scaling of services, and/or service monitoring. For service orchestration and instantiation, the orchestration software must communicate with the underlying NFV platform to instantiate a service, i.e., it can create a virtual instance of a service on the platform. Service chaining may enable services to be cloned and scaled up for a single consumer or multiple consumers. As more services are added, service scaling may handle finding and managing enough resources to deliver the services. Service monitoring can track the performance of the platform and resources to ensure that they are sufficient to provide good service.
Referring to FIG. 9, resource orchestration may be implemented to ensure that sufficient computing, storage, and network resources are available to provide network services. To meet this goal, the Network Function Virtualization Orchestrator (NFVO) 910 may work with a Virtualized Infrastructure Manager (VIM) 920 or directly with the NFV infrastructure (NFVI) resources 930, as required. It can coordinate, authorize, release, and engage NFVI resources 930 independently of any particular VIM. It may also provide management of Virtual Network Function (VNF) instances 940, 941, and 942 that share NFVI resources.
To address the new challenges facing network operators, it may be desirable to deploy NFV-based solutions across different points of presence (POPs), or within one POP but across multiple resources. Without NFV this may not be possible. Using NFV MANO, service providers can build this capability using the NFVO, which can engage the VIM 920 directly through its northbound APIs rather than engaging the NFVI resources 930 directly. This may eliminate the physical boundaries that might normally impede such deployments. To provide service orchestration, the NFV orchestrator may create end-to-end services between different VNFs 940, 941, and/or 942, which may be managed by different VNFMs 950 with which the NFVO 910 coordinates.
Hardware virtualization or platform virtualization may refer to creating a virtual machine that acts like a real computer with an operating system. Software executing on such virtual machines may be separated from the underlying hardware resources.
Software virtualization may include operating system level virtualization (hosting multiple virtualized environments within a single OS instance); application virtualization and workspace virtualization (hosting individual applications in an environment separated from the underlying OS); and service virtualization (emulating the behavior of dependent system components (e.g., third-party, evolving, or not yet implemented) needed to execute an Application Under Test (AUT) for development or testing purposes).
Memory virtualization may include aggregating Random Access Memory (RAM) resources from a networked system into a single memory pool. Virtual memory may give the application the impression that it has continuous working memory, isolating it from the underlying physical memory implementation. Storage virtualization may include a process of abstracting (e.g., fully) logical storage from physical storage. Network virtualization may include creating a virtualized network addressing space within or across network subnets. A Virtual Private Network (VPN) is a network protocol that replaces the actual wires or other physical media in a network with an abstraction layer, allowing the creation of a network over the internet.
Various hooks may be used to integrate MEC in a 5G network. If the MNO is the MEC service provider, the MEC (control application function) may be implemented internally and may interact directly with the SMF or other 5G functions, or may interact via the NEF. For external (as well as internal) MEC service providers, the hooks may include the ability to control the handling of traffic flows per user and per application, set policies for QoE and/or session continuity, obtain network information from the radio and core networks, and/or set network parameters, which may be difficult for third-party providers. It may be desirable to enable these capabilities for third-party cloud providers through standardized, well-known APIs.
Various processes and APIs may allow third party cloud providers to provide edge computing services to network service providers in the context of a 5G network. Such APIs and procedures may include APIs and procedures for initial configuration and setup; APIs and procedures for network information exchange; and/or APIs and processes for dynamically changing cloud resource locations and steering user plane traffic to new locations.
For third party edge Computing Service Providers (CSPs), the following assumptions regarding a 3GPP 5G network are made for the various examples herein.
The CSP may own, deploy, and manage computing resources. Venue operators and/or property owners can deploy cloud resources in their facilities. These deployments may be considered small data centers that may be used by network service providers. The third party cloud service provider may operate in an infrastructure as a service (IaaS) or platform as a service (PaaS) mode. When the CSP provides IaaS-type services to the MNO, the MNO may request computing resources that are close to the desired location. The CSP may reserve these resources in one location and provide an interface to the MNO for managing the lifecycle of the edge application. The MNO may also be responsible for resource monitoring and may request more resources or release resources based on load. In the PaaS model, the CSP also manages the application lifecycle. The application developer may provide edge applications to the CSP (rather than to the MNO), to be managed by the CSP. The MNO may direct traffic to the edge application based on the CSP's request or configuration. Various examples described herein relate to the second scenario, in which the CSP provides application services.
The 3GPP local DN may represent an edge computing facility owned and deployed by a third party service provider. An "orchestration function" may be owned and provided by the CSP, for example for resource provisioning or application onboarding. The CSP orchestration function may receive a request from an "edge application" or "edge platform" when a user attempts to connect. The request may include location information about the user attempting to connect to the application. The CSP orchestration function may then allocate resources and onboard applications based on the location information. After the application is onboarded, user plane traffic may be steered to the edge application.
The CSP orchestration function may communicate with the 3GPP management system and any other MANO to coordinate activities, such as negotiating policy decisions. In the case/scenario where the CSP is only an IaaS provider, the 3GPP management system may request resources through the CSP orchestrator. The CSP and the 3GPP network may exchange user information and/or user IDs to identify the user plane corresponding to a user. It may be assumed that the exchanged user ID or user information is not a 3GPP-defined ID. Instead, the ID may be an IP address, a token provided by an external trusted authority, or another identifier.
The CSP may reserve resources and onboard applications based on knowledge of the location of the WTRU and the DNN. The 3GPP MNO may provide the CSP with network topology information. The topology information may include information such as node IDs, location IDs, and/or cell IDs. The CSP may then use the topology information to deploy the cloud resources. In the case where the CSP wants to reference a cloud deployment, it can use the node ID or location ID from the topology information. Based on the user location, such as a cell ID or node ID, the CSP may determine a desired (e.g., optimized or ideal) cloud resource location that can handle user-plane traffic. Assume that there is no one-to-one mapping between cellular nodes and cloud resource locations.
As used herein, the MEC is assumed to be an AF for a 3GPP 5G network. NEF functions may be used to configure, set policy information, and perform traffic steering towards MEC platforms deployed within the network and in the local DN. From the 3GPP perspective, an AF may be trusted or untrusted. A trusted AF may comprise an MEC platform owned by the MNO. The trusted AF may communicate directly with 3GPP network functions (e.g., the SMF). Untrusted AFs may be restricted to transmitting or receiving 3GPP services via the NEF. A third party (e.g., non-MNO) MEC provider may be classified as an untrusted AF. To enable third party MEC providers, it may be specified that the MEC platform requires a dedicated NEF. The NEF may be implemented within a core network owned by an MNO and may be provided by dedicated hardware such as a server or switch and a storage device. The NEF may also be implemented as a virtualization function. In the case where the NEF is implemented at the MNO premises, the NEF may be co-located with the gNB. In other examples, the NEF may be located within Customer Premises Equipment (CPE), such as routers, network switches, gateways, set-top boxes, DVRs, or terminals and associated equipment located at the customer's physical location rather than at the provider's premises or in between.
For example, MEC NEF functionality, API sets, and methods of enabling third party MEC platforms to provide MEC services via a 3GPP 5G network may include "MEC NEF (MNF)" and "MEP 5G adapter (M5A)" functionality. The MNF may be an extension of the 3GPP NEF, and the M5A functionality may provide additional services in the Mobile Edge Platform (MEP). For example, the M5A function may use an authentication API, such as "get authentication token", to authenticate with the MNF, and the M5A function may use an API towards the MNF, such as "set traffic rules", to establish traffic paths. The MNF may send traffic-rule-related information to the SMF and policy updates to the PCF. An example traffic rule may be "steer flow ID = N, at UPF = i, to local DN = y". The M5A functionality may interact with the MNF to obtain and set, for example, radio network and core network information. The M5A function may transmit "obtain XXX network information" to the MNF, and may transmit "set XXX network information" to the MNF. An example request for network information may be "get available BW, total traffic capacity, bidirectional BW, load at location = x". The M5A function may set policy-related information by interacting with the MNF using an API: for example, the M5A function may send "set mobility and session continuity policy" to the MNF, and the MNF may send the received policy information to the PCF.
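A minimal sketch of the M5A-to-MNF interaction described above is given below; the method names loosely follow the API names quoted in the text, but their signatures, parameters, and return values are assumptions for illustration only.

```python
class Mnf:
    """Hypothetical MEC NEF (MNF) facade; method names mirror the APIs named above."""

    def get_authentication_token(self, m5a_id: str, credential: str) -> str:
        # In a real deployment this would validate the credential with the MNO.
        return f"token-for-{m5a_id}"

    def set_traffic_rule(self, token: str, flow_id: int, upf: str, local_dn: str) -> bool:
        # e.g., "steer flow ID = N, at UPF = i, to local DN = y"; forwarded toward the SMF.
        print(f"steer flow {flow_id} at {upf} to {local_dn}")
        return True

    def get_network_information(self, token: str, location: str) -> dict:
        # e.g., available bandwidth, total traffic capacity, load at location = x.
        return {"location": location, "available_bw_mbps": 500, "load": 0.4}

    def set_policy(self, token: str, policy: dict) -> bool:
        # Mobility and session-continuity policy, relayed by the MNF to the PCF.
        return True

# Illustrative M5A usage.
mnf = Mnf()
token = mnf.get_authentication_token("m5a-1", "credential")
mnf.set_traffic_rule(token, flow_id=7, upf="upf-i", local_dn="dn-y")
info = mnf.get_network_information(token, location="cell-42")
mnf.set_policy(token, {"session_continuity": "SSC mode 3"})
print(info)
```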
Fig. 10 is a system diagram illustrating an example logical architecture for MEC and 5G system integration. In the example of fig. 10, it is assumed that the CSP is an untrusted third party application function. Note that this is consistent with the 5G naming convention, where any functionality outside of the MNO network is considered untrusted. The term CSP is used herein in a more general sense to include control functions residing in an external edge computing platform. For example, it may be similar to some industry standard implementations, such as the Mobile Edge Platform Manager (MEPM) in ETSI MEC. In the example of fig. 10, MEC resource 1010 is deployed as part of local DN 1020. It is assumed that resource deployment and management is controlled by the CSP. Edge application deployment and application lifecycle management may also be managed by the CSP or third party application service provider.
Since the CSP is an untrusted third party AF in this example, it can only interact with NEF 1030. Example functions that may be enabled by MEPM and NEF interactions may include: initial provisioning of the system to establish a default PDU session; obtaining location and radio network information and using the information to dynamically manage edge applications and/or edge computing resources; controlling CN and AN configuration; and/or obtain additional network information such as user context or CN operation information, etc.
Fig. 11 is a message sequence chart illustrating an example process for enabling a third party CSP to provide MEC services. At step 1110, the third party CSP may initialize the management system by providing, for example, an identification of the active subscribers and an identification of the data network. At step 1120, the CSP may monitor user location, application usage, and network information. At step 1130, the CSP may decide to steer user traffic to the local DN based on the monitored information and initiate a process to dynamically set or modify network parameters accordingly. The steps shown in fig. 11 are discussed further herein.
Various examples herein assume that the CSP deploys cloud resources at different points of presence (POPs) of the network. At a given POP, the CSP may have a DNN name known to the MNO. For example, the DNN name may be of the form mycsp.com. In some cases, a third party MEC service provider may manage resources and services. For example, in a pre-provisioned or pre-configured scenario, the CSP may allocate resources and onboard applications to its edge computing resources. Thus, the CSP may already know that user traffic in certain cells or locations should be steered to the edge computing facility. The network may determine whether each user may use the service. In a real-time operation or runtime scenario, the CSP may reserve resources and onboard an application at the edge computing facility at runtime. In this case, the CSP may need to update the SMF and PCF settings in real time.
FIG. 12 is a tree diagram illustrating example cloud resources deployed at different points in a network. As shown in fig. 12, cloud resources 1210, 1211, or 1212 may be deployed, for example, at a core network 1220, at an aggregation point 1230 above an eNB or AP, or at the very edge 1240 of the network, such as at an eNB, AP, small cell, enterprise server, or other CPE. It may be assumed that edge applications may run at different levels of the network deployment, such as the eNB level, a first/second aggregation point, the core network, etc., based on usage, application requirements, number of users, network conditions, etc. An edge application may also start at a certain level and then later (e.g., dynamically) move to a different level. The CSP may obtain the network topology from the MNO and may maintain a map of the deployed resources and/or computing resources (compute, storage) near the network nodes, tracking information such as node ID and location ID. The CSP may update a database of the MNO (e.g., the UDM) with information about which users have subscribed to the DNNs, and may provide the DNN names to the users.
Fig. 13 is a message sequence chart showing an example initialization process in which the CSP updates the MNO's database with a list of active subscribers. In a first example, the CSP may provide to the MNO the names of users who subscribe to use a service provided by a particular DNN (e.g., mycsp.com). As shown in fig. 13, the CSP provides the subscription list to the 3GPP management system in step 1310. The list may include information such as DNN names and user IDs. The CSP may operate more than one DNN; thus, the DNN name may be provided as part of a list. The DNN name may correspond to and/or be similar to a domain name. The CSP may use different DNN names and/or domain names to share cloud resources among different network service providers. The CSP may assign different priorities and/or privileges to the DNN names. At step 1320, the 3GPP management system can update the database (e.g., the UDM) with the information obtained from the CSP. In a second example, the CSP may first provide a list of subscribers to the NEF, as shown at step 1330. In step 1340, the NEF may update the UDM with the provided information. In some embodiments, the NEF may first discover the correct UDM, authenticate, and then update the database. After the database has been updated, the users may be provided with the DNN name to which they have subscribed. The WTRU may send the DNN name as part of the PDU session establishment. If the WTRU does not send the DNN name, the 3GPP network may obtain subscription information from the UDM and establish a PDU session with a local DNN.
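The following sketch illustrates the second example of Fig. 13 (steps 1330-1340), in which the NEF receives a subscription list from the CSP and updates the UDM; the data structures and the dictionary standing in for the UDM database are assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class SubscriptionList:
    """Illustrative subscription list a CSP might push toward the UDM (fields assumed)."""
    dnn: str
    user_ids: List[str]

class Nef:
    def __init__(self, udm: Dict[str, List[str]]) -> None:
        self.udm = udm  # stand-in for the UDM database: DNN -> subscribed users

    def update_subscriptions(self, sub: SubscriptionList) -> None:
        # Option 2 in Fig. 13: the NEF receives the list (step 1330) and updates the UDM (step 1340).
        self.udm.setdefault(sub.dnn, []).extend(sub.user_ids)

udm_db: Dict[str, List[str]] = {}
nef = Nef(udm_db)
nef.update_subscriptions(SubscriptionList(dnn="mycsp.com", user_ids=["user-1", "user-2"]))
print(udm_db)  # {'mycsp.com': ['user-1', 'user-2']}
```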
Fig. 14 is a message sequence chart illustrating another example process. In some scenarios, the user knows the DNN name when it is configured, such as when the WTRU installs an application running at the edge. As shown in fig. 14, the WTRU may obtain the DNN name from the CSP management platform in step 1410. In another embodiment, the 3GPP network may provide a valid DNN name as part of the initial registration procedure, as shown in step 1420. The CSP may create a "deployment map" of the cloud resources. The map may contain records of (1…N) deployment details, such as a list of computing capacities, storage capacities, and/or cell IDs (1…N). The map may also track resources at the registration area level, including computing capacity, storage capacity, and/or registration area information.
Fig. 15 is a message sequence chart illustrating another example process in which the CSP provides setup assistance for establishing a PDU session. The CSP may determine default settings for initial PDU session establishment for the WTRU based on a deployment map of cloud resources. The default settings may indicate, for example, a DNN name, a user location, and/or a default DNN location where the WTRU's user plane traffic should be handled when setting up the PDU session. The CSP may provide a default option for the SMF (e.g., a user from a particular registration area may use the DNN at a particular location ID to establish an initial PDU session). At this point in time, it may not be known which SMF will be used. This option may be applicable to all SMFs. The default option may be to use cloud resources at the registration area level.
The CSP may provide general guidance. One example of such guidance may be an indication that all users at location = "cell ID, registration area" requesting subscribed DNN = mycsp.com for IPv4 PDU sessions may use the DNN at location = "mno_abc". This information helps the AMF to select an SMF and helps the SMF to select a UPF. The SMF may also use this information to configure a UPF with a classifier. The selection of the AMF and SMF is a 3GPP-specific procedure. This information may be stored in a database to be retrieved by the AMF if the WTRU requests to establish a PDU session.
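As a sketch of how such guidance could be stored and consulted, the example below matches a requesting WTRU's location, DNN, and PDU type against CSP-provided default records; the record layout and lookup logic are assumptions, not a 3GPP-defined procedure.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class DefaultRoutingGuidance:
    """Illustrative default-setting record provided by the CSP (structure assumed)."""
    user_location: str      # e.g., a cell ID or registration area
    requested_dnn: str      # e.g., "mycsp.com"
    pdu_type: str           # e.g., "IPv4"
    dnn_location: str       # e.g., "mno_abc" -- where user-plane traffic should be handled

def lookup_default_dnn_location(rules: List[DefaultRoutingGuidance],
                                location: str, dnn: str, pdu_type: str) -> Optional[str]:
    """What an AMF/SMF might consult when a WTRU at `location` requests `dnn`."""
    for rule in rules:
        if (rule.user_location == location and rule.requested_dnn == dnn
                and rule.pdu_type == pdu_type):
            return rule.dnn_location
    return None

rules = [DefaultRoutingGuidance("cell-17", "mycsp.com", "IPv4", "mno_abc")]
print(lookup_default_dnn_location(rules, "cell-17", "mycsp.com", "IPv4"))  # mno_abc
```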
Fig. 15 depicts two implementations of this provisioning process. For example, in option 1, the CSP may provide the information to the 3GPP management system, as shown in step 1510. Then, at step 1520, the 3GPP management system can update the UDM database. As shown in option 2, the CSP may first provide the information to the NEF at step 1530, and the NEF may update the UDM database at step 1540. Option 2 may provide the CSP with more flexibility and control and may allow settings to change dynamically as network conditions change. After initial configuration, and after providing default setting information to the 3GPP network, the CSP may subscribe to location updates and network information.
The CSP may begin to monitor network information including, for example, the user's location, the applications used by the user, and network conditions. It may be assumed that the network operator provides this information through a "network information service". Network information services refer to all network related information such as radio network information, core network information, user location and context information, etc. For example, the radio network information may include S1 bearer information, and/or Radio Access Bearer (RAB) establishment information. The core network information may include delay, jitter, backhaul bandwidth, etc. The user location information may include, for example, a cell ID and/or registration area corresponding to a particular user. The CSP may use such data to reconfigure cloud resources and possibly move the application to a new DNN. This may be done for several reasons, including load balancing, maintaining latency, and/or bandwidth requirements. The new DNN may be closer to or further away from the user. The CSP should be able to discover such services and authenticate to the 3GPP system before it can subscribe.
Fig. 16 is a message sequence chart illustrating an example procedure for such a discovery method. Discovery and authentication of network information services may be performed by the CSP and the 3GPP network management system. At step 1610, the CSP may send a discover_network_info_service(security_certificate) message, which may include a security certificate, to the NEF. At step 1620, the NEF may authenticate the request and query a Network Repository Function (NRF). In step 1630, the NEF may send a query_available_network_service(CSP_ID) message with the CSP_ID to the NRF. At step 1640, the NRF may respond with an available_service(CSP_ID, service ID) message, which may include the requesting CSP_ID and a list of available services. At step 1650, the NEF may notify the CSP by sending an available_service(service ID) message to the CSP. The response may include a list of service identifiers that the CSP may use to subscribe to the network information services.
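The sketch below walks through the discovery exchange of Fig. 16 (steps 1610-1650) with stand-in NEF and NRF objects; the message payloads and the trivial authentication check are assumptions for illustration.

```python
class Nrf:
    """Stand-in Network Repository Function holding available service IDs."""
    def __init__(self) -> None:
        self.services = {"location-service": "svc-1", "rni-service": "svc-2"}

    def query_available_network_service(self, csp_id: str) -> dict:
        # Steps 1630/1640: return the services available to the requesting CSP.
        return {"csp_id": csp_id, "service_ids": list(self.services.values())}

class Nef:
    def __init__(self, nrf: Nrf) -> None:
        self.nrf = nrf

    def discover_network_info_service(self, csp_id: str, security_certificate: str) -> dict:
        # Steps 1610/1620: authenticate the CSP (trivially here), then query the NRF.
        if not security_certificate:
            raise PermissionError("authentication failed")
        available = self.nrf.query_available_network_service(csp_id)
        # Step 1650: return the list of service identifiers to the CSP.
        return {"service_ids": available["service_ids"]}

nef = Nef(Nrf())
print(nef.discover_network_info_service("csp-1", "cert-abc"))
```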
The CSP may subscribe to a desired network information service. The CSP may subscribe to each service individually, or send a single subscription request, which may include multiple subscriptions.
Fig. 17 is a message sequence chart showing an example of such a subscription process. As shown in step 1710, the CSP may send a subscribe_for_network_info_service(CSP_ID, security_certificate, list[service ID, subscription information], callback reference) message to the NEF, which may contain the CSP ID, a security certificate, the list of services it wants to subscribe to, and a callback reference. The callback reference may be used to notify the subscriber and provide the subscribed information. The network information services may provide various network information such as WTRU location information, radio network information, and/or core network information. These network information services may be owned and operated by the 3GPP network operator. The 3GPP network may provide location information and radio network information in a similar manner to the location service or RNIS in the ETSI MEC platform.
At step 1720, the NEF may authenticate the subscription request. Then, at step 1730, the NEF may query the Network Repository Function with the service ID to obtain an entry point for each service. The entry point for a service may be a simple URI that may be accessed by other applications and services. In step 1730, the NEF may send a get_service_ingress_point(service ID) message to the NRF, which may respond by sending a response with the ingress_point, as shown in step 1740. Thereafter, the NEF may send a subscribe_network_info_service(CSP_ID, subscription information) message to each service, as shown in step 1750. The network information service may respond to the NEF with a subscribe_ack(CSP_ID) acknowledgment message at step 1760. If the requested information is available, the network information service may send the requested information to the NEF by sending a net_info message at step 1770. The net_info message may include fields such as the CSP_ID in addition to the network information, and the NEF may collect the received information and forward it to the correct CSP, e.g., at step 1780. This information may be sent using the callback reference or in a message such as net_info_response(net_info).
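A simplified model of the subscription flow of Fig. 17 is shown below, including the callback reference used to deliver net_info to the subscribing CSP; the class structure and callback mechanism are illustrative assumptions rather than the actual service-based interface.

```python
from typing import Callable, Dict, List

class NetworkInfoService:
    """Stand-in network information service (e.g., a location or RNI service)."""
    def __init__(self, service_id: str) -> None:
        self.service_id = service_id
        self.subscribers: List[str] = []

    def subscribe(self, csp_id: str) -> dict:
        self.subscribers.append(csp_id)          # step 1750
        return {"csp_id": csp_id, "ack": True}   # step 1760 subscribe_ack

class Nef:
    def __init__(self, services: Dict[str, NetworkInfoService]) -> None:
        self.services = services
        self.callbacks: Dict[str, Callable[[dict], None]] = {}

    def subscribe_for_network_info_service(self, csp_id: str, service_ids: List[str],
                                           callback: Callable[[dict], None]) -> None:
        self.callbacks[csp_id] = callback                  # callback reference from step 1710
        for sid in service_ids:
            self.services[sid].subscribe(csp_id)           # steps 1750/1760

    def on_net_info(self, csp_id: str, net_info: dict) -> None:
        # Steps 1770/1780: collect the information and forward it to the correct CSP.
        self.callbacks[csp_id](net_info)

svc = NetworkInfoService("svc-1")
nef = Nef({"svc-1": svc})
nef.subscribe_for_network_info_service("csp-1", ["svc-1"], lambda info: print("CSP got:", info))
nef.on_net_info("csp-1", {"wtru_location": "cell-17"})
```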
Depending on service details and availability, the CSP may be interested in a variety of information for managing cloud resources. The CSP may subscribe to receive this information and make changes/modifications to it as needed. Examples of such information may include: location information for individual WTRUs; the number of WTRUs in a given location; the number of WTRUs using an application ID in a given area; and/or a user traffic profile in a given area.
As described above, CSPs may be viewed as consumers of network information. The CSP may collect network information and use that information to decide how to establish the correct UPF functionality. By establishing the correct UPF functionality, user traffic can be routed to the edge application running on the correct local DN.
In addition to the described functionality, the CSP may also have the ability to process network information and run advanced analytics using information from other data sources to fine-tune and optimize network settings. The CSP may assist or supplement RRM functions within the network. The CSP may monitor and predict security threats and take appropriate action, such as blocking a user, disconnecting a connection, etc. Thus, the ability for the CSP to dynamically set, modify, and update network parameters on the fly may be desirable.
For example, the CSP and NEF may support the following APIs to set radio network information: set_all_RNI(CSP_ID, radio_info) and/or set_per_user_RNI(CSP_ID, user_ID, radio_info). The CSP and NEF may support the following APIs to set core network information: set_all_CNI(CSP_ID, cn_info) and set_per_subscriber_CNI(CSP_ID, subscriber_ID, cn_info).
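The stubs below merely restate these four configuration APIs in executable form; their parameter types and behavior are assumptions, since the text only names the APIs and their arguments.

```python
class NefConfigApi:
    """Illustrative stubs for the configuration APIs listed above (signatures assumed)."""

    def set_all_RNI(self, csp_id: str, radio_info: dict) -> bool:
        # Apply radio network settings for all users managed under this CSP.
        print(f"[{csp_id}] RNI for all users: {radio_info}")
        return True

    def set_per_user_RNI(self, csp_id: str, user_id: str, radio_info: dict) -> bool:
        # Apply radio network settings for a single user.
        print(f"[{csp_id}] RNI for {user_id}: {radio_info}")
        return True

    def set_all_CNI(self, csp_id: str, cn_info: dict) -> bool:
        # Apply core network settings (e.g., backhaul bandwidth targets) globally.
        print(f"[{csp_id}] CNI for all subscribers: {cn_info}")
        return True

    def set_per_subscriber_CNI(self, csp_id: str, subscriber_id: str, cn_info: dict) -> bool:
        # Apply core network settings for a single subscriber.
        print(f"[{csp_id}] CNI for {subscriber_id}: {cn_info}")
        return True

api = NefConfigApi()
api.set_per_user_RNI("csp-1", "user-7", {"target_bitrate_mbps": 50})
```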
The CSP may monitor user information such as the number of users using an edge application at a given location, and what the application requirements are in terms of latency, bandwidth, etc. at that location. For example, in a particular location, N users may be using an edge application that requires a latency of X seconds. The CSP may determine that the users are being served by an edge application running at a DNN that is configured as the default and located at a higher-level POP. The CSP may decide to move the application serving the N users to a DNN closer to the edge (e.g., at an eNB, AP, etc.). The CSP may inform the 3GPP network that this is the preferred user plane setup for these users. This may indicate that, for these users, it may be desirable to steer the traffic to a DNN closer to the edge. In this case, the CSP may identify to the 3GPP network the user plane associated with the application ID/flow, the user ID, and/or the new DNN location. The CSP may also indicate to the edge application that a possible relocation may occur.
Fig. 18 is a message sequence chart illustrating a process for implementing the first option for dynamic reconfiguration. In this example, the CSP initiates the process by sending an update_user_plane(application ID, user ID, DNN_location) message to the NEF, as shown in step 1810. At steps 1820 and 1830, the NEF may determine and query the appropriate AMF to find the list of SMFs that are handling the user sessions, and may do so by sending a get_SMF_list(user list) query message. At step 1840, the AMF may return a list of SMFs serving the users in a response(SMF list) message. In step 1850, the NEF may send an update_user_plane(application ID, DNN_location) message to the SMF. The NEF may forward the application ID and DNN_location information received from the CSP to the correct SMF. In steps 1860 and 1870, the SMF may trigger a PDU session update based on the received message and send a response to the NEF.
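The sketch below traces option 1 (Fig. 18): the NEF queries the AMF for the serving SMFs and forwards the update_user_plane information to each of them; the object model and return values are assumptions for illustration.

```python
from typing import Dict, List

class Smf:
    def update_user_plane(self, application_id: str, dnn_location: str) -> str:
        # Step 1860: trigger a PDU session update toward the new DNN location.
        return f"PDU session for {application_id} re-anchored at {dnn_location}"

class Amf:
    def __init__(self, smf_by_user: Dict[str, List[Smf]]) -> None:
        self.smf_by_user = smf_by_user

    def get_smf_list(self, user_ids: List[str]) -> List[Smf]:
        # Steps 1830/1840: return the SMFs currently serving these users.
        smfs: List[Smf] = []
        for uid in user_ids:
            smfs.extend(self.smf_by_user.get(uid, []))
        return smfs

class Nef:
    def __init__(self, amf: Amf) -> None:
        self.amf = amf

    def update_user_plane(self, application_id: str, user_ids: List[str],
                          dnn_location: str) -> List[str]:
        # Steps 1810-1870 (option 1): query the AMF, then forward the update to each SMF.
        return [smf.update_user_plane(application_id, dnn_location)
                for smf in self.amf.get_smf_list(user_ids)]

smf = Smf()
nef = Nef(Amf({"user-1": [smf]}))
print(nef.update_user_plane("app-123", ["user-1"], "edge-dnn-enb-7"))
```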
Fig. 19 is a message sequence chart showing a procedure for implementing the second option for dynamic reconfiguration. The CSP may initiate the process by sending an update_user_plane(application ID, user ID, DNN_location) message to the NEF, as shown in step 1910. In this alternative, after determining the appropriate AMF at step 1920, the NEF may send all of the information to the AMF, for example by sending an update_user_plane(user ID, application ID, DNN_location) message, as shown at step 1930. At step 1940, the AMF may acknowledge receipt by returning a response(OK) message. The AMF may determine the list of SMFs serving the WTRU list in step 1950 and send an update_user_plane(application ID, DNN_location) message to all of the SMFs in step 1960. In step 1970, the AMF may trigger a PDU session update with the multiple SMFs.
Fig. 20 is a message sequence chart illustrating a process for implementing the third option for dynamic reconfiguration. The CSP may initiate the process by sending an update_user_plane(application ID, user ID, DNN_location) message to the NEF, as shown in step 2010. In this alternative, the NEF may query the PCF by sending a get_SMF_list(subscriber list) message to obtain information about the relevant SMFs, as shown in step 2020. It can be assumed that the PCF has all of the relevant information about the users, the SMFs managing the user plane, and so on. Once the NEF obtains the list of relevant SMFs in step 2030, an update_user_plane(application ID, DNN_location) message may be sent to all of the SMFs, as shown in step 2040.
There may be more than one (e.g., N) SMF handling sessions for the N users. Here, it is assumed that the NEF or the AMF transmits N messages to the N SMFs. Based on the application ID, an SMF may identify the PDU session that needs to be modified. At step 2050, after the PDU session has been identified, the SMF triggers a PDU session modification. This may include inserting a new UPF with classifier functionality that is able to steer user plane traffic to the new DNN. At step 2060, the SMF may send a response to the NEF.
CSP cloud services can act as a neutral host and provide edge services for more than one network operator. Fig. 21 is a block diagram showing a simple scenario. Here, it can be assumed that the CSP interacts with the NEF of each network operator. This is a simple case where the CSP maintains information for each Network Operator (NO) and interacts with an independent NEF.
Fig. 22 is a message sequence chart showing CSP interaction with NEFs from multiple NOs. As shown in figs. 18-20, the CSP may initiate the dynamic reconfiguration process with the NEF of each NO separately. The CSP may send an update_user_plane(application ID, user ID, DNN_location) message to each NEF, as shown in steps 2210, 2220, and 2230. In a scenario where a single network may host many virtual network operators, the CSP may also include an MVNO identification in the API it is requesting. For example, the previous API for modifying the PDU session may be extended with an MVNO ID: update_user_plane(MVNO_ID, application ID, user ID, DNN_location).
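A small sketch of the neutral-host case is given below, fanning one update_user_plane request out to each operator's NEF and carrying the MVNO identification; the endpoint URLs and request layout are hypothetical.

```python
from typing import Dict, List

def update_user_plane_multi_no(nef_endpoints: Dict[str, str], mvno_id: str,
                               application_id: str, user_ids: List[str],
                               dnn_location: str) -> List[dict]:
    """Build one update_user_plane request per network operator's NEF (Fig. 22)."""
    requests = []
    for operator, endpoint in nef_endpoints.items():
        requests.append({
            "operator": operator,          # network operator hosting this NEF
            "endpoint": endpoint,          # NEF endpoint of this network operator
            "mvno_id": mvno_id,            # identifies the hosted virtual operator
            "application_id": application_id,
            "user_ids": user_ids,
            "dnn_location": dnn_location,
        })
    return requests

reqs = update_user_plane_multi_no(
    {"NO-A": "https://nef.no-a.example/api", "NO-B": "https://nef.no-b.example/api"},
    mvno_id="mvno-42", application_id="app-123",
    user_ids=["user-1"], dnn_location="edge-dnn-enb-7",
)
print(len(reqs))  # 2
```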
Although features and elements are described above in particular combinations, one of ordinary skill in the art will appreciate that each feature or element can be used alone or in any combination with other features and elements. In addition, the methods described herein may be implemented in a computer program, software, or firmware incorporated in a computer-readable medium for execution by a computer or processor. Examples of computer readable media include electronic signals (transmitted over a wired or wireless connection) and computer readable storage media. Examples of the computer readable storage medium include, but are not limited to, read Only Memory (ROM), random Access Memory (RAM), registers, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks and Digital Versatile Disks (DVDs). A processor in association with software may be used to implement a radio frequency transceiver for use in a WTRU, UE, terminal, base station, RNC, or any host computer.

Claims (15)

1. A method for use in a network exposure function, NEF, the method comprising:
receiving, from a Cloud Service Provider (CSP), a message to subscribe to information associated with at least one wireless transmit/receive unit (WTRU), wherein the information includes location information of the at least one WTRU, a number of the at least one WTRU in a location, or a number of the at least one WTRU using an application in a location;
retrieving the information associated with the at least one WTRU;
transmitting the information associated with the at least one WTRU to the CSP;
receiving an update user plane message from the CSP, the update user plane message determined by the CSP based on the information associated with the at least one WTRU and including an application identifier, an identifier associated with the WTRU, and a Data Network Name (DNN) location to enable the NEF to handle a User Plane (UP) associated with the at least one WTRU; and
manipulating the UP associated with the at least one WTRU based on the update user plane message including the DNN location received from the CSP.
2. The method of claim 1, wherein manipulating the UP further comprises:
querying a core network access and mobility function, AMF, for at least one session management function, SMF, serving the at least one WTRU;
receiving an indication of the at least one SMF from the AMF; and
sending a message to the at least one SMF, wherein the message includes the application identifier, an identifier associated with the WTRU, and the DNN location;
wherein the message is an indication to update the UP associated with the at least one WTRU.
3. The method of claim 1, wherein manipulating the UP further comprises:
sending a message to an AMF, wherein the message includes the application identifier, an identifier associated with the WTRU, and the DNN location;
wherein the message is an indication to trigger one or more SMFs to update the UP associated with the at least one WTRU.
4. The method of claim 1, wherein manipulating the UP further comprises:
querying a policy control function, PCF, for at least one session management function, SMF, serving the at least one WTRU;
receiving an indication from the PCF regarding the at least one SMF; and
sending a message to the at least one SMF, wherein the message comprises the application identifier, an identifier associated with the WTRU, and the DNN location information;
wherein the message is an indication for updating the user plane.
5. The method of claim 1, wherein the NEF communicates with the CSP via an application programming interface, API.
6. An apparatus implementing a network exposure function, NEF, the NEF comprising:
a receiver configured to receive a message from a Cloud Service Provider (CSP) subscribing to information associated with at least one wireless transmit/receive unit (WTRU), wherein the information comprises location information of the at least one WTRU, a number of the at least one WTRU in a location, or a number of the at least one WTRU using an application in a location;
a processor configured to retrieve the information associated with the at least one WTRU; and
a transmitter configured to transmit the information associated with the at least one WTRU to the CSP;
wherein the receiver is further configured to receive an update user plane message from the CSP, the update user plane message determined by the CSP based on the information associated with the at least one WTRU and including an application identifier, an identifier associated with the WTRU, and a Data Network Name (DNN) location to enable the NEF to handle a User Plane (UP) associated with the at least one WTRU; and
wherein the processor and transmitter are further configured to manipulate the UP associated with the at least one WTRU based on the update user plane message including the DNN location received from the CSP.
7. The device of claim 6, the NEF further configured to:
querying a core network access and mobility function, AMF, for at least one session management function, SMF, serving the at least one WTRU;
receiving an indication of the at least one SMF from the AMF; and
sending a message to the at least one SMF, wherein the message includes the application identifier, an identifier associated with the at least one WTRU, and the DNN location;
wherein the message is an indication to update the UP associated with the at least one WTRU.
8. The device of claim 6, the NEF further configured to:
sending a message to an AMF, wherein the message includes the application identifier, an identifier associated with the at least one WTRU, and the DNN location;
wherein the message is an indication to trigger one or more SMFs to update the UP associated with the at least one WTRU.
9. The device of claim 6, the NEF further configured to:
querying a policy control function, PCF, for at least one session management function, SMF, serving the at least one WTRU;
receiving an indication from the PCF regarding the at least one SMF; and
sending a message to the at least one SMF, wherein the message includes the application identifier, an identifier associated with the WTRU, and the DNN location information;
wherein the message is an indication for updating the user plane.
10. The device according to claim 6, the NEF further configured to communicate with the CSP via an Application Programming Interface (API).
11. The device according to claim 6, wherein the NEF is located in a customer premises equipment, CPE.
12. The device according to claim 6, wherein the NEF is located in a mobile network operator, MNO, core network, CN.
13. The apparatus of claim 12, wherein the NEF is co-located with a gNB.
14. A method for use by a cloud service provider, CSP, the method comprising:
sending a message to a Network Exposure Function (NEF) for subscribing to information associated with at least one wireless transmit/receive unit (WTRU), wherein the information includes location information of the at least one WTRU, a number of the at least one WTRU in a location, or a number of the at least one WTRU using an application in a location;
retrieving the information associated with the at least one WTRU from the NEF;
the CSP determining to update a User Plane (UP) associated with the at least one WTRU based on the received information;
the CSP sending an update user plane message to the NEF, the update user plane message including an application identifier, an identifier associated with the WTRU, and a Data Network Name (DNN) location to enable the NEF to handle the UP associated with the at least one WTRU.
15. The method of claim 14, wherein the CSP communicates with the NEF via an application programming interface, API.
CN201880088317.1A 2017-12-15 2018-12-17 Enhanced NEF function, MEC, and 5G integration Active CN111684824B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201762599335P 2017-12-15 2017-12-15
US62/599,335 2017-12-15
PCT/US2018/065968 WO2019118964A1 (en) 2017-12-15 2018-12-17 Enhanced nef function, mec and 5g integration

Publications (2)

Publication Number Publication Date
CN111684824A CN111684824A (en) 2020-09-18
CN111684824B true CN111684824B (en) 2023-04-11

Family

ID=65031759

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880088317.1A Active CN111684824B (en) 2017-12-15 2018-12-17 Enhanced NEF function, MEC, and 5G integration

Country Status (5)

Country Link
US (1) US11533594B2 (en)
EP (1) EP3725103A1 (en)
KR (1) KR20200109303A (en)
CN (1) CN111684824B (en)
WO (1) WO2019118964A1 (en)

Families Citing this family (44)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019140221A1 (en) * 2018-01-12 2019-07-18 Idac Holdings, Inc. Methods and procedures for providing an ieee 802.11 based radio network information service for etsi mec
CN112823564A (en) * 2018-10-04 2021-05-18 瑞典爱立信有限公司 Method for providing dynamic NEF tunnel allocation and related network node/function
US11330648B2 (en) * 2019-02-15 2022-05-10 Ofinno, Llc Charging aggregation control for network slices
CN110290140B (en) * 2019-06-28 2021-09-24 腾讯科技(深圳)有限公司 Multimedia data processing method and device, storage medium and electronic equipment
US10932108B1 (en) 2019-08-28 2021-02-23 Sprint Communications Company L.P. Wireless communication network exposure function (NEF) that indicates network status
CN112584437B (en) * 2019-09-30 2023-03-28 中国移动通信有限公司研究院 Data distribution method and device
US20230354149A1 (en) * 2019-10-04 2023-11-02 Telefonaktiebolaget Lm Ericsson (Publ) Method for identification of traffic suitable for edge breakout and for traffic steering in a mobile network
CN112770336B (en) * 2019-10-21 2023-04-25 中移(成都)信息通信科技有限公司 Equipment testing method and system
US20220345442A1 (en) * 2019-11-05 2022-10-27 Samsung Electronics Co., Ltd. Device and method for providing information of application server in mobile communication system
WO2021092441A1 (en) * 2019-11-07 2021-05-14 Idac Holdings, Inc. Address change notification associated with edge computing networks
US11336721B2 (en) * 2019-11-29 2022-05-17 Amazon Technologies, Inc. Dynamic resource movement in heterogeneous computing environments including cloud edge locations
CN114930294A (en) * 2019-12-30 2022-08-19 皇家Kpn公司 System, apparatus and method for edge node computation
CN115136628A (en) * 2019-12-31 2022-09-30 康维达无线有限责任公司 Edge-aware distributed network
US11395195B2 (en) 2020-01-22 2022-07-19 Cisco Technology, Inc. Systems and methods for managing MEC application hosting
US11902338B2 (en) * 2020-02-13 2024-02-13 Lg Electronics Inc. Communication related to multi-access PDU sessions
US11902104B2 (en) * 2020-03-04 2024-02-13 Intel Corporation Data-centric service-based network architecture
CN113747436B (en) * 2020-05-14 2022-09-23 华为技术有限公司 Communication system, server, communication method and device
CN111935738B (en) * 2020-07-17 2022-07-26 网络通信与安全紫金山实验室 Method and system for multi-operator core network docking MEC
EP4176601A4 (en) * 2020-08-06 2023-08-02 Apple Inc. User equipment authentication and authorization procedure for edge data network
CN116235515A (en) * 2020-09-16 2023-06-06 苹果公司 Security protection for user consent for edge computing
US11509715B2 (en) * 2020-10-08 2022-11-22 Dell Products L.P. Proactive replication of software containers using geographic location affinity to predicted clusters in a distributed computing environment
CN112202917A (en) * 2020-10-14 2021-01-08 中国联合网络通信集团有限公司 Method and equipment for terminating multi-access edge computing service
US11924662B2 (en) * 2020-11-13 2024-03-05 At&T Intellectual Property I, L.P. Remote user plane deployment and configuration
CN112533178B (en) * 2020-11-24 2022-04-08 中移(杭州)信息技术有限公司 Method, platform, server and storage medium for realizing network capability opening
CN114629912B (en) * 2020-11-26 2023-07-21 中移物联网有限公司 Communication transmission method and device based on MEC
US11463915B2 (en) 2020-11-30 2022-10-04 Verizon Patent And Licensing Inc. Systems and methods for exposing custom per flow descriptor attributes
CN112437435A (en) * 2020-12-07 2021-03-02 腾讯科技(深圳)有限公司 Data information acquisition method and device, related equipment and medium
KR102400158B1 (en) * 2020-12-08 2022-05-19 인하대학교 산학협력단 Dynamic Resource Allocation Method and Apparatus for Service Chaining in Cloud-Edge-Radio 5G Network
KR102458785B1 (en) * 2020-12-21 2022-10-25 포인트아이 주식회사 FPGA-based MEC data plane system for building private 5G network
US20220312053A1 (en) * 2021-03-29 2022-09-29 At&T Mobility Ii Llc Streaming awareness gateway
CN113543152A (en) * 2021-07-09 2021-10-22 大唐网络有限公司 5G communication system, data communication method, and non-volatile storage medium
US11689982B2 (en) * 2021-08-24 2023-06-27 Verizon Patent And Licensing Inc. Weighted MEC selection for application-based MEC traffic steering
US11711679B2 (en) * 2021-09-21 2023-07-25 International Business Machines Corporation Context aware cloud service availability in a smart city by renting temporary data centers
WO2023055368A1 (en) * 2021-09-30 2023-04-06 Nokia Technologies Oy Application specific protocol data unit sessions
CN116193567A (en) * 2021-11-29 2023-05-30 华为技术有限公司 Communication method and device
CN114339727B (en) * 2021-12-29 2023-08-15 中国联合网络通信集团有限公司 Edge platform, configuration method, device, terminal and storage medium
US11924715B2 (en) * 2022-05-06 2024-03-05 Nokia Solutions And Networks Oy Edge application server assignment for ad-hoc groups of user equipment
WO2023246127A1 (en) * 2022-06-22 2023-12-28 Huawei Technologies Co., Ltd. System and methods for mission execution in network
CN114980359B (en) * 2022-07-28 2022-12-27 阿里巴巴(中国)有限公司 Data forwarding method, device, equipment, system and storage medium
US11659400B1 (en) 2022-08-02 2023-05-23 Digital Global Systems, Inc. System, method, and apparatus for providing optimized network resources
US11570627B1 (en) 2022-08-02 2023-01-31 Digital Global Systems, Inc. System, method, and apparatus for providing optimized network resources
US11751064B1 (en) 2022-08-02 2023-09-05 Digital Global Systems, Inc. System, method, and apparatus for providing optimized network resources
US11843953B1 (en) 2022-08-02 2023-12-12 Digital Global Systems, Inc. System, method, and apparatus for providing optimized network resources
WO2024072104A1 (en) * 2022-09-29 2024-04-04 Samsung Electronics Co., Ltd. Method and apparatus for policy control for restricted pdu session in wireless communication system

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102170626B (en) * 2006-04-07 2013-04-17 华为技术有限公司 MME (mobility management entity)/UPE (user plane entity) reselection method and system of UE (user equipment)
CN101557575B (en) * 2009-03-18 2012-03-21 华为技术有限公司 Method for indicating position of user equipment and access point equipment
CN102149084B (en) * 2010-02-09 2015-05-20 中兴通讯股份有限公司 Method and system for identifying M2M (machine-to-machine) terminal
CN102223729B (en) * 2010-04-16 2016-06-29 中兴通讯股份有限公司 Control machine type communication device and access the method and system of network
US9137171B2 (en) * 2011-12-19 2015-09-15 Cisco Technology, Inc. System and method for resource management for operator services and internet
US20160132875A1 (en) * 2014-02-05 2016-05-12 Google Inc. Enhancement of mobile device initiated transactions
CN104883736B (en) * 2015-05-27 2018-08-03 国家计算机网络与信息安全管理中心 The localization method and device of terminal
KR102071311B1 (en) * 2015-08-17 2020-01-30 후아웨이 테크놀러지 컴퍼니 리미티드 User plane gateway update method and device
US10069791B2 (en) 2015-11-02 2018-09-04 Cisco Technology, Inc. System and method for providing a change in user equipment packet data network internet protocol address in a split control and user plane evolved packet core architecture
US11444850B2 (en) * 2016-05-02 2022-09-13 Huawei Technologies Co., Ltd. Method and apparatus for communication network quality of service capability exposure
CN113573288A (en) 2016-05-06 2021-10-29 康维达无线有限责任公司 Traffic steering for service layer
US10432724B2 (en) * 2016-11-18 2019-10-01 International Business Machines Corporation Serializing access to data objects in a logical entity group in a network storage
CN106851856B (en) * 2016-12-23 2019-04-09 电信科学技术研究院有限公司 A kind of wireless communication method for building up and the network equipment based on mobile relay
WO2018129665A1 (en) 2017-01-10 2018-07-19 华为技术有限公司 Communication method, network exposure function network element, and control plane network element
CN109155909B (en) 2017-01-16 2021-08-10 Lg 电子株式会社 Method for updating UE configuration in wireless communication system and apparatus thereof
CN107743307B (en) 2017-10-30 2021-01-05 中国联合网络通信集团有限公司 Method and equipment for processing MEC (Mec) based on position
US11025456B2 (en) * 2018-01-12 2021-06-01 Apple Inc. Time domain resource allocation for mobile communication

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
A way forward for accommodating NFV in 3GPP 5G systems; Myung-Ki Shin et al.; 2017 International Conference on Information and Communication Technology Convergence (ICTC); 2017-12-14; full text *
Discussion on the evolution of EPC toward the 5G core network architecture (EPC向5G核心网架构演进探讨); Yang Xu et al.; Proceedings of the 5G-Oriented LTE Network Innovation Seminar (2017) (面向5G的LTE网络创新研讨会(2017)论文集); 2017-08-17; full text *
Research on key technologies for highly reliable cloud service provisioning (高可靠云服务供应关键技术研究); Zhou Ao; China Doctoral Dissertations Full-text Database, Information Science and Technology (中国博士学位论文全文数据库信息科技辑); 2016-03-15; full text *

Also Published As

Publication number Publication date
US20210176613A1 (en) 2021-06-10
EP3725103A1 (en) 2020-10-21
CN111684824A (en) 2020-09-18
KR20200109303A (en) 2020-09-22
WO2019118964A1 (en) 2019-06-20
US11533594B2 (en) 2022-12-20

Similar Documents

Publication Publication Date Title
CN111684824B (en) Enhanced NEF function, MEC, and 5G integration
CN111034273B (en) Terminal requesting network slicing capability from non-3 GPP access networks
CN109076347B (en) Network slicing operation
CN114430897B (en) Method, device and system for edge parsing function
CN114424597A (en) Authentication and authorization of drone access networks
CN112385248A (en) Procedure to enable configuration of PC5 communication parameters for advanced vehicle-to-everything (V2X) services
CN114557117A (en) Transparent relocation of MEC application instances between 5G devices and MEC hosts
JP7347507B2 (en) Enabling private network communication
CN111742535A (en) Method and procedure for providing IEEE 802.11-based wireless network information service for ETSI MEC
CN112425138A (en) Pinning service function chains to context-specific service instances
EP4128724A1 (en) Methods, apparatus, and systems for discovery of edge network management servers
CN115462123A (en) Interworking of extended 5G local area networks with home networks and change to access networks of 5G LAN connected devices
EP4140158A1 (en) Multi rat d2d, extending d2d to include 3gpp and other non-3gpp rat / devices
JP2022517260A (en) How to specify the type of MAC address by the dynamic allocation mechanism
WO2022177885A1 (en) Multiple application identifications using layer-3 relay
KR20230150971A (en) Methods, devices and systems for integrating constrained multi-access edge computing hosts in a multi-access edge computing system
US20240129968A1 (en) Methods, architectures, apparatuses and systems for supporting multiple application ids using layer-3 relay
EP4324293A1 (en) Discovery and interoperation of constrained devices with mec platform deployed in mnos edge computing infrastructure
WO2024026082A1 (en) Method and apparatus for enabling n3gpp communication between remote wtru and relay wtru
EP4342158A1 (en) Multi-access edge computing
CN116941232A (en) Method, apparatus and system for integrating constrained multi-access edge computing hosts in a multi-access edge computing system
WO2023183538A1 (en) Shared-application vertical-session-based-edge-application-instance discovery and selection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
Effective date of registration: 2020-11-13
Address after: Tokyo, Japan
Applicant after: Sony Corp.
Address before: Wilmington, Delaware, USA
Applicant before: IDAC HOLDINGS, Inc.
GR01 Patent grant