CN114339821A - Method and apparatus for machine learning model sharing between distributed NWDAFs - Google Patents


Info

Publication number
CN114339821A
Authority
CN
China
Prior art keywords
nwdaf
model
consumer
service
provider
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111153446.4A
Other languages
Chinese (zh)
Inventor
Qingyu Liao (廖青毓)
Meghashree Dattatri Kedalagudde (梅加什里·达塔特里·凯达拉古德)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Intel Corp
Original Assignee
Intel Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Intel Corp
Publication of CN114339821A

Landscapes

  • Mobile Radio Communication Systems (AREA)

Abstract

A method and apparatus for Machine Learning (ML) model sharing between distributed NWDAFs in a 5G network are provided. The method comprises the following steps: enabling a provider NWDAF to share a trained model with a consumer NWDAF through a model provisioning service, wherein an output of the model provisioning service comprises: one or more analytics IDs and a timestamp indicating the version of the trained model.

Description

Method and apparatus for machine learning model sharing between distributed NWDAFs
Technical Field
Embodiments of the present disclosure generally relate to the field of wireless communications, and in particular, to methods and apparatus for Machine Learning (ML) model sharing between distributed network data analytics functions (NWDAFs) in a 5G network.
Background
In 3GPP Release 16, a centralized NWDAF was introduced in the context of network automation to support data collection services from other Network Functions (NFs) and Application Functions (AFs) and to expose analytics information to other NFs, operations, administration and maintenance (OAM), and AFs; the consuming NFs include the access and mobility management function (AMF), session management function (SMF), and Policy Control Function (PCF). In 3GPP Release 17, the SA2 WG continues to enhance NWDAF for network automation, supporting distributed NWDAFs.
Disclosure of Invention
In accordance with an embodiment of the present disclosure, there is provided a method for Machine Learning (ML) model sharing between distributed NWDAFs in a 5G network, comprising: enabling the provider NWDAF to share the trained model with the consumer NWDAF through the model provisioning service, wherein the output of the model provisioning service comprises: one or more analytics IDs and a timestamp indicating the version of the trained model.
In accordance with another embodiment of the present disclosure, there is provided an apparatus for Machine Learning (ML) model sharing between distributed NWDAFs in a 5G network, comprising: processing circuitry configured to: enable a provider NWDAF to share a trained model with a consumer NWDAF through a model provisioning service, wherein an output of the model provisioning service comprises: one or more analytics IDs and a timestamp indicating the version of the trained model.
Drawings
Embodiments of the disclosure will be described by way of example, and not limitation, with reference to the figures of the accompanying drawings in which like references indicate similar elements and in which:
Fig. 1 is a network diagram illustrating an example network environment, according to some example embodiments of the present disclosure.
Fig. 2 illustrates a process for trained model registration, discovery, and consumption, according to some example embodiments of the present disclosure.
Fig. 3 illustrates a process for trained model registration, discovery, consumption, and update according to an example embodiment of the present disclosure.
Fig. 4 illustrates a process for trained model registration, discovery, consumption, and update according to another example embodiment of the present disclosure.
Fig. 5 is a flowchart illustrating a method 500 for Machine Learning (ML) model sharing between distributed NWDAFs in a 5G network, according to some example embodiments of the present disclosure.
Fig. 6 schematically illustrates a wireless network 600 according to some example embodiments of the present disclosure.
Fig. 7 is a block diagram illustrating components according to some example embodiments of the present disclosure.
Detailed Description
Various aspects of the illustrative embodiments will be described using terms commonly employed by those skilled in the art to convey the substance of the disclosure to others skilled in the art. It will be apparent, however, to one skilled in the art that many alternative embodiments may be practiced using portions of the described aspects. For purposes of explanation, specific numbers, materials, and configurations are set forth in order to provide a thorough understanding of the illustrative embodiments. However, it will be apparent to one skilled in the art that alternative embodiments may be practiced without the specific details. In other instances, well-known features may be omitted or simplified in order not to obscure the illustrative embodiments.
Further, various operations will be described as multiple discrete operations, in turn, in a manner that is most helpful in understanding the illustrative embodiments; however, the order of description should not be construed as to imply that these operations are necessarily order dependent. In particular, these operations need not be performed in the order of presentation.
The phrases "in an embodiment," "in one embodiment," and "in some embodiments" are used repeatedly herein. These phrases generally do not refer to the same embodiment; however, they may. The terms "comprising," "having," and "including" are synonymous, unless the context dictates otherwise. The phrases "A or B" and "A/B" mean "(A), (B), or (A and B)."
Fig. 1 is a network diagram illustrating an example network environment, according to some example embodiments of the present disclosure. The network 100 may operate in a manner consistent with the 3GPP technical specifications for LTE or 5G/NR systems. However, the example embodiments are not limited in this respect, and the described embodiments may be applied to other networks, such as future 3GPP systems and the like, that benefit from the principles described herein.
Network 100 may include a UE 102, which may include any mobile or non-mobile computing device designed to communicate with RAN 104 via an over-the-air connection. The UE 102 may be, but is not limited to, a smartphone, a tablet, a wearable computer device, a desktop computer, a laptop computer, an in-vehicle infotainment device, an in-vehicle entertainment device, an instrument cluster, a heads-up display device, an in-vehicle diagnostic device, a dashboard mobile device, a mobile data terminal, an electronic engine management system, an electronic/engine control unit, an electronic/engine control module, an embedded system, a sensor, a microcontroller, a control module, an engine management system, a networked appliance, a machine-type communication device, an M2M or D2D device, an internet of things device, and/or the like.
In some embodiments, the network 100 may include multiple UEs directly coupled to each other through sidelink interfaces. The UE may be an M2M/D2D device that communicates using physical sidelink channels (e.g., without limitation, the physical sidelink broadcast channel (PSBCH), physical sidelink discovery channel (PSDCH), physical sidelink shared channel (PSSCH), physical sidelink control channel (PSCCH), physical sidelink feedback channel (PSFCH), etc.).
In some embodiments, the UE 102 may also communicate with the AP 106 over an over-the-air connection. The AP 106 may manage WLAN connections that may be used to offload some/all network traffic from the RAN 104. The connection between the UE 102 and the AP 106 may be in accordance with any IEEE 802.11 protocol, wherein the AP 106 may be a wireless fidelity (Wi-Fi) router. In some embodiments, the UE 102, RAN 104, and AP 106 may utilize cellular-WLAN aggregation (e.g., LTE-WLAN aggregation (LWA)/LTE-WLAN radio level integration with IPsec tunnel (LWIP)). Cellular-WLAN aggregation may involve the UE 102 being configured by the RAN 104 to utilize both cellular radio resources and WLAN resources.
The RAN 104 may include one or more access nodes, such as AN 108. The AN 108 may terminate the air interface protocols of the UE 102 by providing access stratum protocols including RRC, Packet Data Convergence Protocol (PDCP), Radio Link Control (RLC), Medium Access Control (MAC), and L1 protocols. In this manner, the AN 108 may enable data/voice connectivity between the CN 120 and the UE 102. In some embodiments, the AN 108 may be implemented in a separate device or as one or more software entities running on a server computer, as part of a virtual network, for example, which may be referred to as a CRAN or virtual baseband unit pool. The AN 108 may be referred to as a base station (BS), a gNB, a RAN node, an evolved NodeB (eNB), a next generation eNB (ng-eNB), a NodeB, a roadside unit (RSU), a TRxP, a TRP, and so on. The AN 108 may be a macrocell base station or a low power base station for providing microcells, picocells, or other similar cells having smaller coverage areas, smaller user capacities, or higher bandwidths than macrocells.
In embodiments where the RAN 104 comprises multiple ANs, they may be coupled to each other over an X2 interface (in the case where the RAN 104 is an LTE RAN) or an Xn interface (in the case where the RAN 104 is a 5G RAN). The X2/Xn interface, which may be separated into a control plane interface and a user plane interface in some embodiments, may allow the ANs to communicate information related to handover, data/context transfer, mobility, load management, interference coordination, etc.
The ANs of the RAN 104 may each manage one or more cells, groups of cells, component carriers, and the like to provide the UE 102 with an air interface for network access. The UE 102 may be simultaneously connected with multiple cells provided by the same or different ANs of the RAN 104. For example, the UE 102 and the RAN 104 may use carrier aggregation to allow the UE 102 to connect with multiple component carriers, each corresponding to a primary cell (PCell) or a secondary cell (SCell). In a dual connectivity scenario, the first AN may be a primary node providing a master cell group (MCG) and the second AN may be a secondary node providing a secondary cell group (SCG). The first/second AN can be any combination of eNB, gNB, ng-eNB, etc.
The RAN 104 may provide an air interface over licensed spectrum or unlicensed spectrum. To operate in unlicensed spectrum, a node may use a licensed assisted access (LAA), enhanced LAA (eLAA), and/or further enhanced LAA (feLAA) mechanism based on carrier aggregation (CA) technology with a PCell/SCell. Prior to accessing the unlicensed spectrum, the node may perform a medium/carrier sensing operation based on, for example, a listen-before-talk (LBT) protocol.
In a vehicle-to-everything (V2X) scenario, the UE 102 or AN 108 may be or act as a roadside unit (RSU), which may refer to any transport infrastructure entity for V2X communications. The RSU may be implemented in or by an appropriate AN or a stationary (or relatively stationary) UE. An RSU implemented in or by a UE may be referred to as a "UE-type RSU"; an RSU implemented in or by an eNB may be referred to as an "eNB-type RSU"; an RSU implemented in or by a next generation NodeB (gNB) may be referred to as a "gNB-type RSU"; and so on. In one example, the RSU is a computing device coupled with radio frequency circuitry located at the curb side that provides connectivity support to passing vehicle UEs. The RSU may also include internal data storage circuitry for storing intersection map geometry, traffic statistics, media, and applications/software for sensing and controlling ongoing vehicle and pedestrian traffic. The RSU may provide very low latency communications required for high speed events, e.g., collision avoidance, traffic warnings, etc. Additionally or alternatively, the RSU may provide other cellular/WLAN communication services. The components of the RSU may be enclosed in a weatherproof enclosure suitable for outdoor installation and may include a network interface controller to provide a wired connection (e.g., Ethernet) to a traffic signal controller or backhaul network.
In some embodiments, the RAN 104 may be an LTE RAN 110 including an evolved NodeB (eNB), e.g., eNB 112. The LTE RAN 110 may provide an LTE air interface with the following characteristics: SCS of 15 kHz; a CP-OFDM waveform for DL and an SC-FDMA waveform for UL; turbo codes for data and TBCC for control; etc. The LTE air interface may rely on CSI-RS for CSI acquisition and beam management; rely on PDSCH/PDCCH demodulation reference signals (DMRS) for PDSCH/PDCCH demodulation; and rely on CRS for cell search and initial acquisition, channel quality measurements, and channel estimation for coherent demodulation/detection at the UE. The LTE air interface may operate over the sub-6 GHz bands.
In some embodiments, the RAN 104 may be a next generation (NG)-RAN 114 having a gNB (e.g., gNB 116) or an ng-eNB (e.g., ng-eNB 118). The gNB 116 may connect with 5G-enabled UEs using a 5G NR interface. The gNB 116 may be connected to the 5G core through an NG interface, which may include an N2 interface or an N3 interface. The ng-eNB 118 may also be connected with the 5G core over the NG interface, but may be connected with the UE over the LTE air interface. The gNB 116 and the ng-eNB 118 may be connected to each other through an Xn interface.
In some embodiments, the NG interface may be divided into two parts: an NG user plane (NG-U) interface, which carries traffic data between the nodes of the NG-RAN 114 and the UPF 148, and an NG control plane (NG-C) interface, which is a signaling interface (e.g., an N2 interface) between the nodes of the NG-RAN 114 and the access and mobility management function (AMF) 144.
The NG-RAN 114 may provide a 5G-NR air interface with the following characteristics: variable SCS; CP-OFDM for DL and CP-OFDM or DFT-s-OFDM for UL; polar, repetition, simplex, and Reed-Muller codes for control, and LDPC for data. The 5G-NR air interface may rely on CSI-RS and PDSCH/PDCCH DMRS similar to the LTE air interface. The 5G-NR air interface may not use CRS, but may use PBCH DMRS for PBCH demodulation; PTRS for phase tracking of the PDSCH; and tracking reference signals for time tracking. The 5G-NR air interface may operate over the FR1 frequency band, which includes the sub-6 GHz bands, or the FR2 frequency band, which includes the 24.25 GHz to 52.6 GHz bands. The 5G-NR air interface may include SSBs, which are regions of the downlink resource grid comprising PSS/SSS/PBCH.
In some embodiments, the 5G-NR air interface may use BWP for various purposes. For example, BWP may be used for dynamic adaptation of SCS. For example, the UE 102 may be configured with multiple BWPs, where each BWP configuration has a different SCS. When the BWP is indicated to the UE 102 to change, the SCS of the transmission also changes. Another use case for BWP is related to power saving. In particular, the UE 102 may be configured with multiple BWPs with different numbers of frequency resources (e.g., PRBs) to support data transmission in different traffic load scenarios. BWPs containing a smaller number of PRBs may be used for data transmission with smaller traffic load while allowing power savings at UE 102 and, in some cases, at gNB 116. BWPs containing a large number of PRBs may be used in scenarios with higher traffic loads.
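The load-dependent BWP choice described above can be pictured with a toy selection rule. The following sketch picks the narrowest configured BWP that still covers the current traffic load; the function name, PRB numbers, and the heuristic itself are illustrative assumptions, not specified 3GPP behavior:

```python
def select_bwp(configured_bwps, traffic_load_prbs):
    """Pick the narrowest configured BWP that still fits the traffic load.

    configured_bwps: list of (bwp_id, num_prbs) tuples configured for the UE.
    traffic_load_prbs: number of PRBs the current traffic requires.
    Narrower BWPs save UE (and sometimes gNB) power; this rule is an
    illustrative heuristic only.
    """
    suitable = [bwp for bwp in configured_bwps if bwp[1] >= traffic_load_prbs]
    if not suitable:
        # Load exceeds every configured BWP: fall back to the widest one.
        return max(configured_bwps, key=lambda bwp: bwp[1])
    # Power saving: choose the narrowest BWP that still covers the load.
    return min(suitable, key=lambda bwp: bwp[1])

# A UE configured with a narrow power-saving BWP and a wide high-throughput BWP.
bwps = [("bwp-narrow", 24), ("bwp-wide", 106)]
print(select_bwp(bwps, 10))   # light load -> ('bwp-narrow', 24)
print(select_bwp(bwps, 50))   # heavier load -> ('bwp-wide', 106)
```

This mirrors the power-saving use case above: small traffic loads map to the BWP with fewer PRBs, and larger loads to the wider BWP.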
The RAN 104 is communicatively coupled to a CN 120, which includes network elements, to provide various functions to support data and telecommunications services to customers/subscribers (e.g., users of the UE 102). The components of the CN 120 may be implemented in one physical node or in different physical nodes. In some embodiments, NFV may be used to virtualize any or all functions provided by network elements of CN 120 onto physical computing/storage resources in servers, switches, and the like. Logical instances of the CN 120 may be referred to as network slices, and logical instantiations of a portion of the CN 120 may be referred to as network subslices.
In some embodiments, the CN 120 may be an LTE CN 122, which may also be referred to as an Evolved Packet Core (EPC). LTE CN 122 may include a Mobility Management Entity (MME)124, a Serving Gateway (SGW)126, a Serving GPRS Support Node (SGSN)128, a Home Subscriber Server (HSS)130, a Proxy Gateway (PGW)132, and a policy control and charging rules function (PCRF)134, which are coupled to one another by an interface (or "reference point") as shown. The functions of the elements of LTE CN 122 may be briefly introduced as follows.
The MME 124 may implement mobility management functions to track the current location of the UE 102 to facilitate paging, bearer activation/deactivation, handover, gateway selection, authentication, etc.
The SGW 126 may terminate the S1 interface towards the RAN and route data packets between the RAN and the LTE CN 122. SGW 126 may be a local mobility anchor for inter-RAN node handovers and may also provide an anchor for inter-3 GPP mobility. Other responsibilities may include lawful interception, billing, and some policy enforcement.
SGSN 128 may track the location of UE 102 and perform security functions and access control. In addition, SGSN 128 may perform EPC inter-node signaling for mobility between different RAT networks; PDN and S-GW selection specified by MME 124; MME selection for handover, etc. The S3 reference point between MME 124 and SGSN 128 may enable user and bearer information exchange for inter-3 GPP access network mobility in idle/active state.
The HSS 130 may include a database for network subscribers that includes subscription-related information to support the network entities' handling of communication sessions. The HSS 130 may provide support for routing/roaming, authentication, authorization, naming/addressing resolution, location dependencies, etc. The S6a reference point between the HSS 130 and the MME 124 may enable the transmission of subscription and authentication data for authenticating/authorizing user access to the LTE CN 122.
PGW 132 may terminate the SGi interface towards Data Network (DN)136, which may include application/content server 138. PGW 132 may route data packets between LTE CN 122 and data network 136. PGW 132 may be coupled with SGW 126 through an S5 reference point to facilitate user plane tunneling and tunnel management. PGW 132 may also include nodes (e.g., PCEFs) for policy enforcement and charging data collection. Additionally, the SGi reference point between PGW 132 and data network 136 may be, for example, an operator external public, private PDN, or an operator internal packet data network for providing IMS services. PGW 132 may be coupled with PCRF 134 via a Gx reference point.
The PCRF 134 is the policy and charging control element of the LTE CN 122. The PCRF 134 may be communicatively coupled to the application/content server 138 to determine appropriate QoS and charging parameters for service flows. The PCRF 134 may provide the associated rules to a PCEF (via the Gx reference point) with the appropriate TFT and QCI.
In some embodiments, the CN 120 may be a 5G core network (5GC) 140. The 5GC 140 may include an authentication server function (AUSF) 142, an access and mobility management function (AMF) 144, a session management function (SMF) 146, a user plane function (UPF) 148, a network slice selection function (NSSF) 150, a network exposure function (NEF) 152, an NF repository function (NRF) 154, a policy control function (PCF) 156, a unified data management (UDM) 158, and an application function (AF) 160, which are coupled to one another by interfaces (or "reference points"), as shown. The functions of the elements of the 5GC 140 may be briefly described as follows.
The AUSF 142 may store data for authentication of the UE 102 and handle authentication related functions. The AUSF 142 may facilitate a common authentication framework for various access types. The AUSF 142 may exhibit a Nausf service based interface in addition to communicating with other elements of the 5GC 140 through reference points as shown.
The AMF 144 may allow other functions of the 5GC 140 to communicate with the UE 102 and the RAN 104 and to subscribe to notifications regarding mobility events of the UE 102. The AMF 144 may be responsible for registration management (e.g., registering the UE 102), connection management, reachability management, mobility management, lawful interception of AMF-related events, and access authentication and authorization. The AMF 144 may provide for the transport of Session Management (SM) messages between the UE 102 and the SMF 146 and act as a transparent proxy for routing SM messages. The AMF 144 may also provide for the transmission of SMS messages between the UE 102 and the SMSF. The AMF 144 may interact with the AUSF 142 and the UE 102 to perform various security anchoring and context management functions. Further, the AMF 144 may be a termination point for the RAN CP interface, which may include or be the N2 reference point between the RAN 104 and the AMF 144; the AMF 144 may act as a termination point for NAS (N1) signaling and perform NAS ciphering and integrity protection. The AMF 144 may also support NAS signaling with the UE 102 over the N3IWF interface.
SMF 146 may be responsible for SM (e.g., session establishment, tunnel management between UPF 148 and AN 108); UE IP address assignment and management (including optional permissions); selection and control of the UP function; configuring flow control at the UPF 148 to route the flow to the appropriate destination; termination of the interface to the policy control function; controlling a portion of policy enforcement, charging, and QoS; lawful interception (for SM events and interface to the LI system); terminate the SM portion of the NAS message; a downlink data notification; initiating AN-specific SM message (sent to AN 108 over N2 through AMF 144); and determining an SSC pattern for the session. SM may refer to the management of PDU sessions, and a PDU session or "session" may refer to a PDU connectivity service that provides or enables the exchange of PDUs between the UE 102 and the data network 136.
The UPF 148 may serve as an anchor point for intra-RAT and inter-RAT mobility, an external PDU session point of interconnect with the data network 136, and a branching point to support multi-homed PDU sessions. The UPF 148 may also perform packet routing and forwarding, perform packet inspection, enforce the user plane part of policy rules, lawfully intercept packets (UP collection), perform traffic usage reporting, perform QoS handling for the user plane (e.g., packet filtering, gating, UL/DL rate enforcement), perform uplink traffic verification (e.g., SDF-to-QoS-flow mapping), perform transport-level packet marking in the uplink and downlink, and perform downlink packet buffering and downlink data notification triggering. The UPF 148 may include an uplink classifier to support routing of traffic flows to the data network.
The NSSF 150 may select a set of network slice instances that serve the UE 102. The NSSF 150 may also determine, if desired, the allowed network slice selection assistance information (NSSAI) and the mapping to the subscribed single NSSAIs (S-NSSAIs). The NSSF 150 may also determine the set of AMFs to be used to serve the UE 102, or a list of candidate AMFs, based on a suitable configuration and possibly by querying the NRF 154. Selection of the set of network slice instances for the UE 102 may be triggered by the AMF 144, with which the UE 102 registers, by interacting with the NSSF 150, which may result in a change of AMF. The NSSF 150 may interact with the AMF 144 via the N22 reference point, and may communicate with another NSSF in a visited network via the N31 reference point (not shown). Further, the NSSF 150 may expose an Nnssf service-based interface.
The NEF 152 may securely expose services and capabilities provided by 3GPP network functions for third parties, internal exposure/re-exposure, AFs (e.g., AF 160), edge computing or fog computing systems, and the like. In these embodiments, the NEF 152 may authenticate, authorize, or throttle the AFs. The NEF 152 may also translate between information exchanged with the AF 160 and information exchanged with internal network functions. For example, the NEF 152 may convert between an AF service identifier and internal 5GC information. The NEF 152 may also receive information from other NFs based on their exposed capabilities. This information may be stored as structured data at the NEF 152, or at a data storage NF using standardized interfaces. The NEF 152 may then re-expose the stored information to other NFs and AFs, or use it for other purposes such as analytics. In addition, the NEF 152 may expose an Nnef service-based interface.
The NRF 154 may support service discovery functions, receive NF discovery requests from NF instances, and provide information about the discovered NF instances to the requesting NF instances. The NRF 154 also maintains information about available NF instances and their supported services. As used herein, the terms "instantiate," "instantiation," and the like may refer to the creation of an instance, and an "instance" may refer to a concrete occurrence of an object, which may occur, for example, during execution of program code. Further, the NRF 154 may expose an Nnrf service-based interface.
PCF 156 may provide policy rules to control plane functions to enforce them and may also support a unified policy framework to manage network behavior. PCF 156 may also implement a front end to access subscription information related to policy decisions in the UDR of UDM 158. In addition to communicating with functions through reference points as shown, PCF 156 also exhibits an Npcf service-based interface.
The UDM 158 may handle subscription-related information to support the network entities' handling of communication sessions and may store the subscription data of the UE 102. For example, subscription data may be communicated via the N8 reference point between the UDM 158 and the AMF 144. The UDM 158 may include two parts: an application front end and a user data repository (UDR). The UDR may store subscription data and policy data for the UDM 158 and the PCF 156, and/or structured data for exposure and application data for the NEF 152 (including PFDs for application detection and application request information for multiple UEs 102). The UDR may expose a Nudr service-based interface to allow the UDM 158, PCF 156, and NEF 152 to access particular sets of stored data, as well as to read, update (e.g., add, modify), delete, and subscribe to notifications of changes to relevant data in the UDR. The UDM may include a UDM-FE that is responsible for handling credentials, location management, subscription management, and the like. Several different front ends may serve the same user in different transactions. The UDM-FE accesses the subscription information stored in the UDR and performs authentication credential processing, user identification handling, access authorization, registration/mobility management, and subscription management. In addition to communicating with other NFs through reference points as shown, the UDM 158 may also expose a Nudm service-based interface.
The AF 160 may provide application impact on traffic routing, provide access to NEF, and interact with the policy framework for policy control.
In some embodiments, the 5GC 140 may enable edge computing by selecting an operator/third party service that is geographically close to the point at which the UE 102 attaches to the network. This may reduce latency and load on the network. To provide an edge computing implementation, the 5GC 140 may select a UPF 148 near the UE 102 and perform traffic steering from the UPF 148 to the data network 136 over the N6 interface. This may be based on UE subscription data, UE location, and information provided by the AF 160. In this way, the AF 160 may affect UPF (re) selection and traffic routing. Based on operator deployment, the network operator may allow the AF 160 to interact directly with the relevant NFs when the AF 160 is considered a trusted entity. In addition, the AF 160 may expose a Naf service-based interface.
The data network 136 may represent various network operator services, internet access, or third party services that may be provided by one or more servers, including, for example, an application/content server 138.
The present disclosure proposes enhancing the functionality of Machine Learning (ML) model sharing between distributed NWDAFs for inference and training using the following solutions: solution 1, for inference-only model sharing; solution 2, for inference model sharing with joint learning model updating; solution 3, for inference model sharing with online learning data updating.
In these solutions, a provider NWDAF instance provides a trained model to a consumer NWDAF instance through a model provisioning service. The provider NWDAF instance first registers, in the NRF, its capability to expose a trained model. The consumer NWDAF instance discovers the address of the provider NWDAF instance by querying the NRF.
After service discovery, the consumer NWDAF instance may either subscribe to the model provisioning service to continuously obtain updated models/model parameters, or invoke the model provisioning service to obtain a model/model parameters once (a one-time request).
The model itself is provided by the provider NWDAF instance to the consumer NWDAF instance in a file/transparent container.
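The register/discover/consume flow described above can be sketched end to end in a few lines. This is a toy in-process model of the interaction; the class and method names are illustrative assumptions, not 3GPP-defined APIs:

```python
class NRF:
    """Toy NF repository: maps a service name to registered provider instances."""
    def __init__(self):
        self._registry = {}

    def register(self, service_name, provider):
        # A provider NWDAF registers its model provisioning capability.
        self._registry.setdefault(service_name, []).append(provider)

    def discover(self, service_name):
        # A consumer NWDAF queries the NRF for matching provider instances.
        return self._registry.get(service_name, [])


class ProviderNWDAF:
    def __init__(self, address, model_container):
        self.address = address
        self._model = model_container  # trained model as an opaque file/container

    def request_model(self):
        # One-time request: return the trained model to the consumer.
        return self._model


nrf = NRF()
provider = ProviderNWDAF("nwdaf-provider.example", model_container=b"<model-file>")
nrf.register("ModelProvision", provider)

# Consumer side: discover the provider via the NRF, then fetch the model once.
candidates = nrf.discover("ModelProvision")
model = candidates[0].request_model()
print(model)  # b'<model-file>'
```

The model crosses the interface as an opaque byte container, matching the file/transparent-container delivery described above.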
Solution 1: for inference-only model sharing
Solution 1 provides the following solutions: one NWDAF instance shares a training model with other NWDAF instances for ML inference only.
Table 1 shows the NWDAF services for model provisioning in solution 1. As shown in Table 1, the Nnwdaf_ModelProvision service is provided for subscription-based model provisioning, and the Nnwdaf_ModelInfo service is provided for request-based model provisioning.
Table 1: NWDAF service for model provisioning
(Table 1 is reproduced as images in the original publication. As described in the text, it lists the subscription-based model provisioning service and the request-based model provisioning service offered by the NWDAF.)
In some example embodiments, the input to the Nnwdaf_ModelProvision service or the Nnwdaf_ModelInfo service may include the analytics ID(s), optionally a timestamp indicating the version of the current model (if available), and optionally: an area of interest, a UE type, an application ID, single network slice selection assistance information (S-NSSAI), and/or a time.
In some example embodiments, the output of the Nnwdaf_ModelProvision service or the Nnwdaf_ModelInfo service may include a model description carrying a model configuration with the ML algorithm (e.g., a convolutional neural network (CNN), a recurrent neural network (RNN), reinforcement learning, etc.) and the model parameters.
In some example embodiments, the output of the Nnwdaf_ModelProvision service or the Nnwdaf_ModelInfo service may also include a timestamp indicating the version of the provisioned model. Optionally, the output of the Nnwdaf_ModelProvision service or the Nnwdaf_ModelInfo service may further include a model ID. It is noted that the model ID is not globally unique information, but information local to the model provider NWDAF.
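The input and output parameters listed above can be summarized as plain data structures. The sketch below uses Python dataclasses; the field names and encodings are assumptions for illustration only and are not specified by the disclosure:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ModelProvisionInput:
    """Input to the model provisioning service (illustrative field names)."""
    analytics_ids: List[str]                       # one or more analytics IDs
    current_model_timestamp: Optional[str] = None  # version the consumer already holds
    area_of_interest: Optional[str] = None
    ue_type: Optional[str] = None
    application_id: Optional[str] = None
    s_nssai: Optional[str] = None
    time_window: Optional[str] = None

@dataclass
class ModelProvisionOutput:
    """Output of the model provisioning service (illustrative field names)."""
    ml_algorithm: str               # e.g. "CNN", "RNN", "reinforcement learning"
    model_parameters: dict          # trained parameters of the model
    timestamp: str                  # version of the provisioned model
    model_id: Optional[str] = None  # local to the provider NWDAF, not globally unique

# A consumer asks for one analytics ID without holding any prior model version.
request = ModelProvisionInput(analytics_ids=["ANALYTICS_ID_1"])
response = ModelProvisionOutput(
    ml_algorithm="CNN",
    model_parameters={"layers": 3},
    timestamp="2021-09-29T00:00:00Z",
)
print(response.model_id)  # None: the optional model ID was not provided
```

Note how the timestamp doubles as the version identifier in both directions, which is the central point of solution 1.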
Fig. 2 illustrates a process for training model registration, discovery, and consumption, according to some example embodiments of the present disclosure. As shown in FIG. 2, a model registration process, a model discovery process, and a model consumption process are illustrated.
In the model registration process, at step 1, the provider NWDAF may register its trained-model provisioning capability (i.e., the "model provisioning service" with the list of supported analysis IDs) as part of its NF profile in the NRF by invoking the Nnrf_NFManagement_NFRegister_Request service operation. The NRF then stores the NWDAF profile and sends a registration response to the provider NWDAF by invoking the Nnrf_NFManagement_NFRegister_Response service operation, as shown in steps 2 and 3.
In the model discovery process, at step 4, the consumer NWDAF may send a discovery request for the "model provisioning service" to the NRF with a list of service parameters (e.g., analysis ID, etc.) by invoking the Nnrf_NFDiscovery_Request service operation. At step 5, the NRF may respond with the NWDAF instance(s) providing the requested "model provisioning service" by invoking the Nnrf_NFDiscovery_Request_Response service operation.
In the model consumption process, at step 6, the consumer NWDAF may subscribe to the "model provisioning service" of a discovered provider NWDAF instance by invoking the Nwdaf_ModelProvision_Subscribe service operation, or request the "model provisioning service" of the discovered provider NWDAF by invoking the Nwdaf_ModelInfo_Request service operation. Next, at step 7, the discovered provider NWDAF responds with the requested trained model/model configuration, with the ML algorithm, parameters, and timestamp, by invoking the Nwdaf_ModelInfo_Response/Nwdaf_ModelProvision_Notify service operation.
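Steps 1-7 can be sketched end to end with a toy in-memory registry standing in for the NRF (function and variable names are hypothetical; the real Nnrf operations exchange full NF profiles):

```python
# Toy NRF: maps each analysis ID to the provider instances that registered
# a "model provisioning service" for it.

nrf = {}   # analysis_id -> list of provider instance IDs

def nf_register(provider_id, analysis_ids):
    # Steps 1-3: provider registers its model provisioning capability.
    for aid in analysis_ids:
        nrf.setdefault(aid, []).append(provider_id)
    return "registered"

def nf_discover(analysis_id):
    # Steps 4-5: consumer discovers providers for the requested analysis ID.
    return nrf.get(analysis_id, [])

print(nf_register("nwdaf-1", ["ue_mobility", "nf_load"]))
print(nf_discover("nf_load"))
```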
In Solution 1, the functionality of ML model sharing between distributed NWDAFs is enhanced by adding timestamp information (rather than explicit version information) as input/output information of the model provisioning service, and an ML model configuration comprising the ML algorithm and model parameters as output information of the model provisioning service.
Solution 2: inference model sharing with joint learning model update
In Solution 2, an NWDAF instance shares a trained model with other NWDAF instances for ML inference, with federated learning model updates.
Table 2 shows the NWDAF services for model provisioning in Solution 2. As shown in Table 2, the Nwdaf_ModelProvision service is provided for subscribing to model provisioning, the Nwdaf_ModelInfo service is provided for requesting model provisioning, and the Nwdaf_LocalModelUpdate service is provided for notification of model updates.
Table 2: NWDAF service for model provisioning
(Table 2 is reproduced as an image in the original publication; it lists the service operations of the Nwdaf_ModelProvision, Nwdaf_ModelInfo, and Nwdaf_LocalModelUpdate services.)
In some example embodiments, the input to the Nwdaf_ModelProvision service or the Nwdaf_ModelInfo service may include one or more analysis IDs, optionally a timestamp indicating the version of the current model (if available), and optionally: area of interest, UE type, application ID, NSSAI, and/or time.
In some example embodiments, the input to the Nwdaf_ModelProvision service or the Nwdaf_ModelInfo service may also optionally include an indication of local model update capability, e.g., for federated learning. If the consumer NWDAF has already indicated inference capability and training capability in its local NWDAF profile at the NRF, and the provider NWDAF is aware of this local NWDAF capability, the indication of the consumer NWDAF's local model update capability may be skipped.
In some example embodiments, the output of the Nwdaf_ModelProvision service or the Nwdaf_ModelInfo service may include a model description comprising a model configuration with an ML algorithm (e.g., Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), reinforcement learning, etc.) and model parameters.
In some example embodiments, the output of the Nwdaf_ModelProvision service or the Nwdaf_ModelInfo service may also include a timestamp indicating the version of the provisioned model. Optionally, the output may further include a model ID. Note that the model ID is not globally unique information, but information local to the model provider NWDAF.
In some example embodiments, the output of the Nwdaf_ModelProvision service or the Nwdaf_ModelInfo service may optionally further include an indication of a subscription to local model updates with an update condition, e.g., an update condition indicating a periodic update time, start and end update times, a number of training iterations, etc.
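A consumer-side check of such an update condition might look like the following sketch (field names like `period` and `min_iterations` are assumed for illustration):

```python
# Decide whether the consumer should send a local model update notification,
# based on an update condition carrying a start/end window, a notification
# period, and a minimum number of completed training iterations.

def should_notify(condition, now, iterations_done, last_notify):
    if not (condition["start"] <= now <= condition["end"]):
        return False                      # outside the update window
    if iterations_done < condition["min_iterations"]:
        return False                      # not enough local training yet
    return now - last_notify >= condition["period"]

cond = {"start": 100, "end": 200, "period": 10, "min_iterations": 5}
print(should_notify(cond, now=150, iterations_done=6, last_notify=130))  # True
print(should_notify(cond, now=150, iterations_done=3, last_notify=130))  # False
```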
In some example embodiments, the output of the Nwdaf_ModelProvision service or the Nwdaf_ModelInfo service may optionally also include a description of the parameters requested for the model update.
In some example embodiments, the input to the Nwdaf_LocalModelUpdate_Notify service operation includes an analysis ID, local model parameters, and a timestamp, and optionally a model ID.
In some example embodiments, the output of the Nwdaf_LocalModelUpdate_Notify service operation includes a success or failure indication.
When the provider NWDAF provides the trained model to the consumer NWDAF by invoking the Nwdaf_ModelProvision_Notify service operation, if the consumer NWDAF is able to train the model, the provider NWDAF may provide an indication that it subscribes to the consumer NWDAF's Nwdaf_LocalModelUpdate service to obtain the results of locally updated model parameters. Unless the provider NWDAF cancels its subscription to local model updates, the consumer NWDAF will continue to notify the provider NWDAF of its updated model.
FIG. 3 illustrates a process for training model registration, discovery, consumption, and update according to an example embodiment of the present disclosure. As shown in FIG. 3, a model registration process, a model discovery process, a model consumption process, and a model update process are illustrated.
In the model registration process, at step 1, the provider NWDAF may register its trained-model provisioning capability (i.e., the "model provisioning service" with the list of supported analysis IDs) as part of its NF profile in the NRF by invoking the Nnrf_NFManagement_NFRegister_Request service operation. The NRF then stores the NWDAF profile and sends a registration response to the provider NWDAF by invoking the Nnrf_NFManagement_NFRegister_Response service operation, as shown in steps 2 and 3.
In the model discovery process, at step 4, the consumer NWDAF may send a discovery request for the "model provisioning service" to the NRF with a list of service parameters (e.g., analysis ID, etc.) by invoking the Nnrf_NFDiscovery_Request service operation. At step 5, the NRF may respond with the NWDAF instance(s) providing the requested "model provisioning service" by invoking the Nnrf_NFDiscovery_Request_Response service operation.
In the model consumption process, at step 6, the consumer NWDAF may subscribe to the "model provisioning service" of the discovered provider NWDAF by invoking the Nwdaf_ModelProvision_Subscribe service operation, or request the "model provisioning service" of the discovered provider NWDAF by invoking the Nwdaf_ModelInfo_Request service operation. Next, at step 7, the discovered provider NWDAF instance responds with the requested trained model/model configuration, with the algorithm and model parameters, by invoking the Nwdaf_ModelInfo_Response/Nwdaf_ModelProvision_Notify service operation. At this point, if the consumer NWDAF has both inference capability and training capability with local model updates, the provider NWDAF may decide to subscribe to the consumer NWDAF's local model update notifications. If the consumer NWDAF has inference capability only, steps 8-11 below may be skipped.
In the model update process, at step 8, if the consumer NWDAF is able to train the model, the consumer NWDAF locally updates the model and model parameters. At step 9, if the provider NWDAF subscribed at step 7, the consumer NWDAF sends the results of the local model update of step 8 to the provider NWDAF by invoking the Nwdaf_LocalModelUpdate_Notify service operation. At step 10, the provider NWDAF aggregates the local update information from one or more consumer NFs (if needed) and updates the model. Next, at step 11, the provider NWDAF sends the updated model to the consumer NF(s) by invoking the Nwdaf_ModelProvision_Notify service operation.
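The aggregation at step 10 could, for instance, follow a federated-averaging style of weighted parameter averaging; the sketch below is one possible realization, not the method mandated by the disclosure:

```python
# Federated-averaging-style aggregation: average model parameters reported
# by consumer NWDAFs, weighted by the number of local training samples each
# consumer used. (Illustrative only; the real aggregation is provider-specific.)

def aggregate(local_updates):
    total = sum(n for _, n in local_updates)
    dim = len(local_updates[0][0])
    return [sum(params[i] * n for params, n in local_updates) / total
            for i in range(dim)]

updates = [([1.0, 2.0], 10),   # (local parameters, sample count) per consumer
           ([3.0, 4.0], 30)]
print(aggregate(updates))      # -> [2.5, 3.5]
```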
In Solution 2, the functionality of ML model sharing between distributed NWDAFs is enhanced by enabling the provider NWDAF to subscribe to local model updates for federated learning when the consumer NWDAF has indicated that it has local model update capability.
Solution 3: inference model sharing with online learning data update
In Solution 3, an NWDAF instance shares a trained model with other NWDAF instances for ML inference, with online learning data updates.
Table 3 shows the NWDAF services for model provisioning in Solution 3. As shown in Table 3, the Nwdaf_ModelProvision service is provided for subscribing to model provisioning, the Nwdaf_ModelInfo service is provided for requesting model provisioning, and the Nwdaf_LocalTrainingDataUpdate service is provided for notification of training data updates.
Table 3: NWDAF service for model provisioning
(Table 3 is reproduced as an image in the original publication; it lists the service operations of the Nwdaf_ModelProvision, Nwdaf_ModelInfo, and Nwdaf_LocalTrainingDataUpdate services.)
In some example embodiments, the input to the Nwdaf_ModelProvision service or the Nwdaf_ModelInfo service may include one or more analysis IDs, optionally a timestamp indicating the version of the current model (if available), and optionally: area of interest, UE type, application ID, NSSAI, and/or time.
In some example embodiments, the input to the Nwdaf_ModelProvision service or the Nwdaf_ModelInfo service may also optionally include an indication of online learning data update capability. In some example embodiments, if the consumer NWDAF's local NWDAF profile at the NRF already indicates that it has online learning data update capability, and the provider NWDAF is aware of this local NWDAF capability, the indication of online learning data update capability may be skipped.
In some example embodiments, the output of the Nwdaf_ModelProvision service or the Nwdaf_ModelInfo service may include a model description comprising a model configuration with an ML algorithm (e.g., Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), reinforcement learning, etc.) and model parameters.
In some example embodiments, the output of the Nwdaf_ModelProvision service or the Nwdaf_ModelInfo service may also include a timestamp indicating the version of the provisioned model. Optionally, the output may further include a model ID. Note that the model ID is not globally unique information, but information local to the model provider NWDAF.
In some example embodiments, the output of the Nwdaf_ModelProvision service or the Nwdaf_ModelInfo service may optionally further include an indication of a subscription to online learning data updates with an update condition, e.g., an update condition indicating a periodic update time, start and end update times, a number of training data sets per update, etc.
In some example embodiments, if online learning data updates are not indicated, the default is offline learning.
In some example embodiments, the input to the Nwdaf_LocalTrainingDataUpdate_Notify service operation includes an analysis ID, a timestamp of the current model, a training data set, and context information captured during inference, e.g., inference timestamp, UE type, UE location, application ID, NSSAI, time, etc., and optionally a model ID.
In some example embodiments, the output of the Nwdaf_LocalTrainingDataUpdate_Notify service operation includes a success or failure indication.
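A notification carrying the inputs and outputs described above might be structured as in the following sketch (all field names are illustrative assumptions, not the normative message format):

```python
# Sketch of a training-data-update notification: the consumer reports
# inference inputs/outputs as training samples together with context
# captured at inference time; the provider answers success or failure.

notify = {
    "analysis_id": "ue_mobility",
    "model_timestamp": 1700000000,
    "training_data": [
        {"inputs": {"ue_location": "cell-17", "time": "09:00"},
         "output": {"predicted_cell": "cell-18"}},
    ],
    "context": {"inference_timestamp": 1700000100, "ue_type": "iot",
                "application_id": "app-7", "nssai": "01-ABCDEF"},
    "model_id": "m-42",        # optional, local to the provider NWDAF
}

def handle_notify(msg):
    # Provider-side validation returning the success/failure indication.
    required = {"analysis_id", "model_timestamp", "training_data"}
    return "SUCCESS" if required <= msg.keys() else "FAILURE"

print(handle_notify(notify))
```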
When the provider NWDAF provides the trained model to the consumer NWDAF by invoking the Nwdaf_ModelProvision_Notify service operation, the provider NWDAF may provide an indication that it subscribes to the consumer NWDAF's Nwdaf_LocalTrainingDataUpdate service to obtain the training data sets (inference input and output parameters) collected locally at the consumer NWDAF. Unless the provider NWDAF cancels its subscription to local training data updates, the consumer NWDAF will continue to notify the provider NWDAF of its training data sets.
FIG. 4 illustrates a process for training model registration, discovery, consumption, and update according to another example embodiment of the present disclosure. As shown in FIG. 4, a model registration process, a model discovery process, a model consumption process, and an online learning data update process are illustrated.
In the model registration process, at step 1, the provider NWDAF may register its trained-model provisioning capability (i.e., the "model provisioning service" with the list of supported analysis IDs) as part of its NF profile in the NRF by invoking the Nnrf_NFManagement_NFRegister_Request service operation. The NRF then stores the NWDAF profile and sends a registration response to the provider NWDAF by invoking the Nnrf_NFManagement_NFRegister_Response service operation, as shown in steps 2 and 3.
In the model discovery process, at step 4, the consumer NWDAF may send a discovery request for the "model provisioning service" to the NRF with a list of service parameters (e.g., analysis ID, etc.) by invoking the Nnrf_NFDiscovery_Request service operation. At step 5, the NRF may respond with the NWDAF instance(s) providing the requested "model provisioning service" by invoking the Nnrf_NFDiscovery_Request_Response service operation.
In the model consumption process, at step 6, the consumer NWDAF may subscribe to the "model provisioning service" of the discovered provider NWDAF instance by invoking the Nwdaf_ModelProvision_Subscribe service operation, or request the "model provisioning service" of the discovered provider NWDAF by invoking the Nwdaf_ModelInfo_Request service operation. Next, at step 7, the discovered provider NWDAF instance responds with the requested trained model/model configuration, with the algorithm and model parameters, by invoking the Nwdaf_ModelInfo_Response/Nwdaf_ModelProvision_Notify service operation. At this point, if the consumer NWDAF has both inference capability and local training data update capability, the provider NWDAF may decide to subscribe to the consumer NWDAF's local training data update notifications. If the consumer NWDAF has inference capability only, steps 8-11 below may be skipped.
In the online learning data update process, at step 8, while performing inference operations, the consumer NWDAF may locally collect training data information as well as relevant context information during inference, such as inference timestamp, UE type, UE location, application ID, NSSAI, time, etc. If the provider NWDAF subscribed at step 7, then at step 9 the consumer NWDAF sends the locally collected training data information and relevant context parameters of step 8 to the provider NWDAF by invoking the Nwdaf_LocalTrainingDataUpdate_Notify service operation. At step 10, the provider NWDAF uses the locally collected information from one or more consumer NFs as training data (if needed) and updates the model. Next, at step 11, the provider NWDAF sends the updated model to the consumer NF(s) by invoking the Nwdaf_ModelProvision_Notify service operation.
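As one possible illustration of steps 9-11, the provider could pool the reported samples and perform an incremental update of a toy linear model; the actual retraining procedure is implementation-specific:

```python
# The provider pools (x, y) training samples reported by consumer NWDAFs and
# runs one pass of stochastic gradient descent on a toy linear model y ~ w.x.
# (Illustrative only; any online learning procedure could be used here.)

def sgd_step(weights, samples, lr=0.1):
    for x, y in samples:
        pred = sum(w * xi for w, xi in zip(weights, x))
        err = pred - y
        weights = [w - lr * err * xi for w, xi in zip(weights, x)]
    return weights

collected = [([1.0, 0.0], 2.0), ([0.0, 1.0], 3.0)]   # from consumer NWDAFs
w0 = [0.0, 0.0]
w1 = sgd_step(w0, collected)
print(w1)
```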
In solution 3, if the consumer NWDAF indicates that it has local training data update capability, the functionality of ML model sharing for distributed NWDAFs is enhanced by having the provider NWDAF subscribe to local training data updates for online learning.
Fig. 5 is a flowchart illustrating a method 500 for Machine Learning (ML) model sharing between distributed NWDAFs in a 5G network, according to some example embodiments of the present disclosure. At step 510, a provider NWDAF is enabled to share a trained model with a consumer NWDAF through a model provisioning service, wherein an output of the model provisioning service includes one or more analysis IDs and a timestamp indicating the version of the trained model. At step 520, the consumer NWDAF is enabled to discover the provider NWDAF for sharing the trained model, the provider NWDAF providing the model provisioning service registered in a Network Repository Function (NRF), wherein an input of the model provisioning service includes one or more analysis IDs and a timestamp indicating the version of the trained model.
In some example embodiments, the model provisioning service is supported by at least one of the Nwdaf_ModelProvision service, the Nwdaf_ModelInfo service, the Nwdaf_LocalModelUpdate service, and the Nwdaf_LocalTrainingDataUpdate service.
In some example embodiments, the subscribe operation of the Nwdaf_ModelProvision service and the request operation of the Nwdaf_ModelInfo service include the following inputs: an analysis ID, a timestamp (if available) indicating the current model version, and optional information: area of interest, UE type, application ID, NSSAI, and time.
In some example embodiments, the response operation of the Nwdaf_ModelProvision service and the response operation of the Nwdaf_ModelInfo service include at least one of the following information: a model description comprising a model configuration with an ML algorithm (e.g., Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), reinforcement learning, etc.) and model parameters, a timestamp, and a model ID.
In some example embodiments, the subscribe operation of the Nwdaf_ModelProvision service and the request operation of the Nwdaf_ModelInfo service further include an indication of local model update capability for federated learning, wherein the indication of local model update capability may be skipped if the local NWDAF profile at the NRF already indicates inference capability and training capability.
In some example embodiments, the response operation of the Nwdaf_ModelProvision service and the response operation of the Nwdaf_ModelInfo service further include an indication of a subscription to local model updates with an update condition, e.g., indicating a periodic update time, start and end update times, a number of training iterations, etc.
In some example embodiments, the subscribe operation of the Nwdaf_ModelProvision service and the request operation of the Nwdaf_ModelInfo service further include an indication of online learning data update capability, wherein this indication may be skipped if the local NWDAF profile at the NRF already indicates online learning data update capability.
In some example embodiments, the response operation of the Nwdaf_ModelProvision service and the response operation of the Nwdaf_ModelInfo service further include an indication of a subscription to online learning data updates with an update condition, e.g., indicating a periodic update time, start and end update times, a number of training data sets per update, etc.
Fig. 6 schematically illustrates a wireless network 600 in accordance with various embodiments. The wireless network 600 may include a UE 602 in wireless communication with AN 604. The UE 602 and the AN 604 may be similar to and substantially interchangeable with the co-located components described elsewhere herein.
The UE 602 may be communicatively coupled with AN 604 via a connection 606. Connection 606 is shown as an air interface to enable communicative coupling and may be consistent with a cellular communication protocol operating at millimeter wave (mmWave) or sub-6 GHz frequencies, such as the LTE protocol or the 5G NR protocol.
UE 602 may include a host platform 608 coupled with a modem platform 610. Host platform 608 may include application processing circuitry 612, which may be coupled with protocol processing circuitry 614 of modem platform 610. The application processing circuitry 612 may run various applications that source/sink application data for the UE 602. The application processing circuitry 612 may also implement one or more layers of operations to send/receive application data to/from a data network. These layer operations may include transport (e.g., UDP) and internet (e.g., IP) operations.
Protocol processing circuit 614 may implement one or more layers of operations to facilitate the transmission or reception of data over connection 606. Layer operations implemented by the protocol processing circuit 614 may include, for example, MAC, RLC, PDCP, RRC, and NAS operations.
The modem platform 610 may further include digital baseband circuitry 616, the digital baseband circuitry 616 may implement one or more layer operations "below" the layer operations performed by the protocol processing circuitry 614 in the network protocol stack. These operations may include, for example, PHY operations including one or more of HARQ-ACK functions, scrambling/descrambling, encoding/decoding, layer mapping/demapping, modulation symbol mapping, received symbol/bit metric determination, multi-antenna port precoding/decoding, wherein these functions may include one or more of: space-time, space-frequency, or spatial coding, reference signal generation/detection, preamble sequence generation and/or decoding, synchronization sequence generation/detection, control channel signal blind decoding, and other related functions.
Modem platform 610 may further include transmit circuitry 618, receive circuitry 620, RF circuitry 622, and RF front end (RFFE) circuitry 624, which may include or be connected to one or more antenna panels 626. Briefly, the transmit circuit 618 may include a digital-to-analog converter, a mixer, Intermediate Frequency (IF) components, and the like; the receive circuitry 620 may include analog-to-digital converters, mixers, IF components, and the like; RF circuitry 622 may include low noise amplifiers, power tracking components, and the like; the RFFE circuitry 624 may include filters (e.g., surface/bulk acoustic wave filters), switches, antenna tuners, beam forming components (e.g., phased array antenna components), and so forth. The selection and arrangement of the components of transmit circuitry 618, receive circuitry 620, RF circuitry 622, RFFE circuitry 624, and antenna panel 626 (collectively, "transmit/receive components") may be specific to the details of a particular implementation, e.g., whether the communication is TDM or FDM, at mmWave or sub-6 GHz frequencies, etc. In some embodiments, the transmit/receive components may be arranged in multiple parallel transmit/receive chains, and may be arranged in the same or different chips/modules, etc.
In some embodiments, the protocol processing circuit 614 may include one or more instances of control circuitry (not shown) to provide control functionality for the transmit/receive components.
UE reception may be established by and via antenna panel 626, RFFE circuitry 624, RF circuitry 622, receive circuitry 620, digital baseband circuitry 616, and protocol processing circuitry 614. In some embodiments, the antenna panel 626 may receive transmissions from AN 604 by receiving beamformed signals received by multiple antennas/antenna elements of one or more antenna panels 626.
UE transmissions may be established via and through protocol processing circuitry 614, digital baseband circuitry 616, transmit circuitry 618, RF circuitry 622, RFFE circuitry 624, and antenna panel 626. In some embodiments, the transmit components of the UE 602 may apply a spatial filter to the data to be transmitted to form the transmit beam transmitted by the antenna elements of the antenna panel 626.
Similar to the UE 602, the AN 604 may include a host platform 628 coupled with a modem platform 630. The host platform 628 may include application processing circuitry 632 coupled with protocol processing circuitry 634 of the modem platform 630. The modem platform may also include digital baseband circuitry 636, transmit circuitry 638, receive circuitry 640, RF circuitry 642, RFFE circuitry 644, and antenna panel 646. The components of the AN 604 may be similar to, and substantially interchangeable with, the similarly named components of the UE 602. In addition to performing data transmission/reception as described above, the components of the AN 604 may perform various logical functions including, for example, RNC functions such as radio bearer management, uplink and downlink dynamic radio resource management, and data packet scheduling.
Fig. 7 is a block diagram illustrating components capable of reading instructions from a machine-readable or computer-readable medium (e.g., a non-transitory machine-readable storage medium) and performing any one or more of the methodologies discussed herein, according to some example embodiments. In particular, fig. 7 shows a diagrammatic representation of hardware resources 700, which includes one or more processors (or processor cores) 710, one or more memory/storage devices 720, and one or more communication resources 730, each of which may be communicatively coupled by a bus 740. The hardware resources 700 may be part of a UE, AN, or LMF. For embodiments utilizing node virtualization (e.g., NFV), hypervisor 702 may be executed to provide an execution environment for one or more network slices/subslices to utilize hardware resources 700.
Processor 710 (e.g., a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) processor, a Complex Instruction Set Computing (CISC) processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP) such as a baseband processor, an Application Specific Integrated Circuit (ASIC), a Radio Frequency Integrated Circuit (RFIC), another processor, or any suitable combination thereof) may include, for example, processor 712 and processor 714.
Memory/storage 720 may include a main memory, a disk storage, or any suitable combination thereof. The memory/storage 720 may include, but is not limited to, any type of volatile or non-volatile memory, such as Dynamic Random Access Memory (DRAM), Static Random Access Memory (SRAM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory, solid state storage, and the like.
Communication resources 730 may include interconnection or network interface components or other suitable devices to communicate with one or more peripherals 704 or one or more databases 706 via network 708. For example, communication resources 730 may include wired communication components (e.g., for coupling via a Universal Serial Bus (USB)), cellular communication components, NFC components, bluetooth components (e.g., bluetooth low energy), Wi-Fi components, and other communication components.
Instructions 750 may include software, programs, applications, applets, apps, or other executable code for causing at least any processor 710 to perform any one or more of the methods discussed herein. The instructions 750 may reside, completely or partially, within at least one of the processor 710 (e.g., within a processor's cache memory), the memory/storage 720, or any suitable combination thereof. Further, any portion of instructions 750 may be communicated to hardware resource 700 from any combination of peripheral device 704 or database 706. Thus, the processor 710, memory/storage 720, peripherals 704, and the memory of database 706 are examples of computer-readable and machine-readable media.
The following paragraphs describe examples of various embodiments.
Example 1 includes a method for Machine Learning (ML) model sharing between distributed NWDAFs in a 5G network, comprising: enabling a provider NWDAF to share a trained model with a consumer NWDAF through a model provisioning service; wherein an output of the model provisioning service comprises: one or more analysis IDs and a timestamp indicating a version of the trained model.
Example 2 includes the method of example 1, further comprising: enabling the consumer NWDAF to discover the provider NWDAF for sharing the trained model, the provider NWDAF providing the model provisioning service registered in a Network Repository Function (NRF), wherein an input of the model provisioning service comprises: one or more analysis IDs and a timestamp indicating a version of the trained model.
Example 3 includes the method of example 1 or 2, wherein the output of the model provisioning service further includes model configuration information, the model configuration information including an ML algorithm and model parameters.
Example 4 includes the method of any one of examples 1-3, wherein the ML algorithm includes at least one of a Convolutional Neural Network (CNN) algorithm, a Recurrent Neural Network (RNN) algorithm, and a reinforcement learning algorithm.
Example 5 includes the method of any of examples 1-4, wherein the model provisioning service is supported by a Nwdaf_ModelProvision service or a Nwdaf_ModelInfo service.
Example 6 includes the method of any one of examples 1-5, further comprising: subscribing, by the consumer NWDAF, to the model provisioning service through an operation of the Nwdaf_ModelProvision service to continuously obtain updated models and/or model parameters of the trained model.
Example 7 includes the method of any one of examples 1-6, further comprising: requesting, by the consumer NWDAF through an operation of the Nwdaf_ModelInfo service, the model provisioning service to obtain the trained model and/or model parameters of the trained model, wherein the request is a one-time request.
Example 8 includes the method of any one of examples 1-7, further comprising: registering, by the provider NWDAF, its capability of provisioning the trained model in the NRF; and discovering, by the consumer NWDAF, an address of the provider NWDAF by querying the NRF.
Example 9 includes the method of any one of examples 1-8, further comprising: when the provider NWDAF provides the trained model to the consumer NWDAF, subscribing, by the provider NWDAF, to local model updates of the trained model at the consumer NWDAF for federated learning if the consumer NWDAF has indicated its local model update capability to the provider NWDAF; and providing, by the consumer NWDAF, a local model update of the trained model to the provider NWDAF by invoking a Nwdaf_LocalModelUpdate_Notify service operation.
Example 10 includes the method of any one of examples 1-9, further comprising: enabling the consumer NWDAF to provide an indication of the local model update capability of the consumer NWDAF to the provider NWDAF through the Nwdaf_ModelProvision service or the Nwdaf_ModelInfo service.
Example 11 includes the method of any one of examples 1-10, further comprising: enabling the consumer NWDAF to skip providing the indication of its local model update capability if the consumer NWDAF's local NWDAF profile at the NRF has indicated that the local NWDAF capabilities include inference capability and training capability.
Example 12 includes the method of any one of examples 1-11, further comprising: enabling the provider NWDAF to subscribe to local model updates of the consumer NWDAF with update conditions.
Example 13 includes the method of any one of examples 1-12, wherein the update condition includes at least one of: a periodic update time, start and end update times, and a number of training iterations.
Example 14 includes the method of any one of examples 1-13, wherein the Nwdaf_LocalModelUpdate_Notify service operation includes the following inputs: an analysis ID, a local model configuration with the ML algorithm and parameters, and a timestamp.
Example 15 includes the method of any one of examples 1-14, further comprising: when the provider NWDAF provides the trained model to the consumer NWDAF, enabling the provider NWDAF to subscribe to local training data updates of the trained model at the consumer NWDAF for online learning if the consumer NWDAF has indicated its online learning data update capability to the provider NWDAF; and providing, by the consumer NWDAF, local training data updates of the trained model to the provider NWDAF by invoking an Nwdaf_LocalTrainingDataUpdate_Notify service operation.
Example 16 includes the method of any one of examples 1-15, further comprising: enabling the consumer NWDAF to provide an indication of an online learning data update capability of the consumer NWDAF to the provider NWDAF through the Nwdaf_ModelProvision service or the Nwdaf_ModelInfo service.
Example 17 includes the method of any one of examples 1-16, further comprising: enabling the consumer NWDAF to skip providing the indication of its online learning data update capability if the consumer NWDAF's local NWDAF profile has indicated the online learning data update capability at the NRF.
Example 18 includes the method of any one of examples 1-17, further comprising: enabling the provider NWDAF to subscribe to online learning data updates of the consumer NWDAF with update conditions.
Example 19 includes the method of any one of examples 1-18, wherein the update condition includes at least one of: a periodic update time, a start and end update time, and a number of training data sets per update.
Example 20 includes the method of any one of examples 1-19, wherein offline learning is used by default if the online learning data update capability is not indicated.
Example 21 includes the method of any one of examples 1-20, wherein the Nwdaf_LocalTrainingDataUpdate_Notify service operation includes the following inputs: an analysis ID, a training data set together with context information during inference, and a timestamp.
Example 22 includes the method of any one of examples 1-21, wherein the contextual information includes at least one of: inference timestamp, UE type, UE location, application ID, NSSAI, time.
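The online-learning data update flow of examples 15-22 — a consumer NWDAF buffering inference-time training data with its context information and notifying the provider once the subscribed update condition fires — can be sketched as below. All names and the payload shape are illustrative assumptions, not 3GPP-defined APIs.

```python
from dataclasses import dataclass, field


@dataclass
class TrainingSample:
    # One training data set entry plus context information captured during
    # inference (examples 21-22): e.g. inference timestamp, UE type,
    # UE location, application ID, NSSAI.
    features: dict
    context: dict


@dataclass
class ConsumerNwdaf:
    # Consumer NWDAF that buffers inference-time training data and notifies
    # the provider once the subscribed update condition is met (example 19:
    # here, a number of training data sets per update).
    analytics_id: str
    sets_per_update: int
    buffer: list = field(default_factory=list)

    def record(self, sample):
        # Returns the notification payload when the update condition fires,
        # otherwise None; the payload would be carried in an
        # Nwdaf_LocalTrainingDataUpdate_Notify service operation.
        self.buffer.append(sample)
        if len(self.buffer) >= self.sets_per_update:
            payload, self.buffer = self.buffer, []
            return {"analytics_id": self.analytics_id, "data": payload}
        return None
```

With `sets_per_update=2`, the first recorded sample is buffered and the second triggers a notification carrying both samples, after which the buffer is empty again.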
Example 23 includes an apparatus for Machine Learning (ML) model sharing between distributed NWDAFs in a 5G network, comprising: a processing circuit configured to: enable a provider NWDAF to share a trained model with a consumer NWDAF through a model provisioning service, wherein an output of the model provisioning service comprises: one or more analysis IDs and timestamps indicating versions of the trained models.
Example 24 includes the apparatus of example 23, the processing circuitry further configured to: enable the consumer NWDAF to discover the provider NWDAF for sharing the trained model, the provider NWDAF providing the model provisioning service registered in a Network Repository Function (NRF), wherein inputs of the model provisioning service include: one or more analysis IDs and timestamps indicating versions of the trained models.
Example 25 includes the apparatus of example 23 or 24, wherein the output of the model provisioning service further includes model configuration information, the model configuration information including an ML algorithm and model parameters.
Example 26 includes the apparatus of any one of examples 23-25, wherein the ML algorithm includes at least one of a Convolutional Neural Network (CNN) algorithm, a Recurrent Neural Network (RNN) algorithm, and a reinforcement learning algorithm.
Example 27 includes the apparatus of any one of examples 23-26, wherein the model provisioning service is supported by an Nwdaf_ModelProvision service or an Nwdaf_ModelInfo service.
Example 28 includes the apparatus of any one of examples 23-27, the processing circuitry further configured to: subscribe, by the consumer NWDAF, to the model provisioning service through an operation of the Nwdaf_ModelProvision service to continually obtain updated models and/or model parameters of the trained model.
Example 29 includes the apparatus of any one of examples 23-28, the processing circuitry further configured to: request, by the consumer NWDAF through an operation of the Nwdaf_ModelInfo service, the model provisioning service to obtain the trained model and/or model parameters of the trained model, wherein the request is a one-time request.
Example 30 includes the apparatus of any one of examples 23-29, the processing circuitry further configured to: register, by the provider NWDAF, its capability to expose the trained model in the NRF; and discover, by the consumer NWDAF, an address of the provider NWDAF by querying the NRF.
Example 31 includes the apparatus of any one of examples 23-30, the processing circuitry further configured to: when the provider NWDAF provides the trained model to the consumer NWDAF, enable the provider NWDAF to subscribe to local model updates of the trained model at the consumer NWDAF for federated learning if the consumer NWDAF has indicated its local model update capability to the provider NWDAF; and provide, by the consumer NWDAF, a local model update of the trained model to the provider NWDAF by invoking an Nwdaf_LocalModelUpdate_Notify service operation.
Example 32 includes the apparatus of any one of examples 23-31, the processing circuitry further configured to: enable the consumer NWDAF to provide an indication of a local model update capability of the consumer NWDAF to the provider NWDAF through the Nwdaf_ModelProvision service or the Nwdaf_ModelInfo service.
Example 33 includes the apparatus of any one of examples 23-32, the processing circuitry further configured to: enable the consumer NWDAF to skip providing the indication of its local model update capability if the consumer NWDAF's local NWDAF profile at the NRF has indicated that its local NWDAF capabilities include inference capability and training capability.
Example 34 includes the apparatus of any one of examples 23-33, the processing circuitry further configured to: enable the provider NWDAF to subscribe to local model updates of the consumer NWDAF with update conditions.
Example 35 includes the apparatus of any one of examples 23-34, wherein the update condition comprises at least one of: a periodic update time, start and end update times, and a number of training iterations.
Example 36 includes the apparatus of any one of examples 23-35, wherein the Nwdaf_LocalModelUpdate_Notify service operation includes the following inputs: an analysis ID, a local model configuration comprising the ML algorithm and model parameters, and a timestamp.
Example 37 includes the apparatus of any one of examples 23-36, the processing circuitry further configured to: when the provider NWDAF provides the trained model to the consumer NWDAF, enable the provider NWDAF to subscribe to local training data updates of the trained model at the consumer NWDAF for online learning if the consumer NWDAF has indicated its online learning data update capability to the provider NWDAF; and provide, by the consumer NWDAF, local training data updates of the trained model to the provider NWDAF by invoking an Nwdaf_LocalTrainingDataUpdate_Notify service operation.
Example 38 includes the apparatus of any one of examples 23-37, the processing circuitry further configured to: enable the consumer NWDAF to provide an indication of an online learning data update capability of the consumer NWDAF to the provider NWDAF through the Nwdaf_ModelProvision service or the Nwdaf_ModelInfo service.
Example 39 includes the apparatus of any one of examples 23-38, the processing circuitry further configured to: enable the consumer NWDAF to skip providing the indication of its online learning data update capability if the consumer NWDAF's local NWDAF profile has indicated the online learning data update capability at the NRF.
Example 40 includes the apparatus of any one of examples 23-39, the processing circuitry further configured to: enable the provider NWDAF to subscribe to online learning data updates of the consumer NWDAF with update conditions.
Example 41 includes the apparatus of any one of examples 23-40, wherein the update condition comprises at least one of: a periodic update time, a start and end update time, and a number of training data sets per update.
Example 42 includes the apparatus of any one of examples 23-41, wherein offline learning is used by default if the online learning data update capability is not indicated.
Example 43 includes the apparatus of any one of examples 23-42, wherein the Nwdaf_LocalTrainingDataUpdate_Notify service operation includes the following inputs: an analysis ID, a training data set together with context information during inference, and a timestamp.
Example 44 includes the apparatus of any one of examples 23-43, wherein the contextual information includes at least one of: inference timestamp, UE type, UE location, application ID, NSSAI, time.
Example 45 includes a computer-readable medium having instructions stored thereon, which when executed by one or more processors cause the one or more processors to perform a method for Machine Learning (ML) model sharing between distributed NWDAFs in a 5G network, the method comprising: enabling a provider NWDAF to share a trained model with a consumer NWDAF through a model provisioning service; wherein an output of the model provisioning service comprises: one or more analysis IDs and timestamps indicating versions of the trained models.
Example 46 includes the computer-readable medium of example 45, wherein the method further comprises: enabling the consumer NWDAF to discover the provider NWDAF for sharing the trained model, the provider NWDAF providing the model provisioning service registered in a Network Repository Function (NRF), wherein inputs of the model provisioning service include: one or more analysis IDs and timestamps indicating versions of the trained models.
Example 47 includes the computer-readable medium of example 45 or 46, wherein the output of the model provisioning service further includes model configuration information, the model configuration information including the ML algorithm and model parameters.
Example 48 includes the computer-readable medium of any one of examples 45-47, wherein the ML algorithm includes at least one of a Convolutional Neural Network (CNN) algorithm, a Recurrent Neural Network (RNN) algorithm, and a reinforcement learning algorithm.
Example 49 includes the computer-readable medium of any one of examples 45-48, wherein the model provisioning service is supported by an Nwdaf_ModelProvision service or an Nwdaf_ModelInfo service.
Example 50 includes the computer-readable medium of any one of examples 45-49, wherein the method further comprises: subscribing, by the consumer NWDAF, to the model provisioning service through an operation of the Nwdaf_ModelProvision service to continually obtain updated models and/or model parameters of the trained model.
Example 51 includes the computer-readable medium of any one of examples 45-50, wherein the method further comprises: requesting, by the consumer NWDAF through an operation of the Nwdaf_ModelInfo service, the model provisioning service to obtain the trained model and/or model parameters of the trained model, wherein the request is a one-time request.
Example 52 includes the computer-readable medium of any one of examples 45-51, wherein the method further comprises: registering, by the provider NWDAF, its capability to expose the trained model in the NRF; and discovering, by the consumer NWDAF, an address of the provider NWDAF by querying the NRF.
Example 53 includes the computer-readable medium of any one of examples 45-52, wherein the method further comprises: when the provider NWDAF provides the trained model to the consumer NWDAF, enabling the provider NWDAF to subscribe to local model updates of the trained model at the consumer NWDAF for federated learning if the consumer NWDAF has indicated its local model update capability to the provider NWDAF; and providing, by the consumer NWDAF, a local model update of the trained model to the provider NWDAF by invoking an Nwdaf_LocalModelUpdate_Notify service operation.
Example 54 includes the computer-readable medium of any one of examples 45-53, wherein the method further comprises: enabling the consumer NWDAF to provide an indication of a local model update capability of the consumer NWDAF to the provider NWDAF through the Nwdaf_ModelProvision service or the Nwdaf_ModelInfo service.
Example 55 includes the computer-readable medium of any one of examples 45-54, wherein the method further comprises: enabling the consumer NWDAF to skip providing the indication of its local model update capability if the consumer NWDAF's local NWDAF profile at the NRF has indicated that its local NWDAF capabilities include inference capability and training capability.
Example 56 includes the computer-readable medium of any one of examples 45-55, wherein the method further comprises: enabling the provider NWDAF to subscribe to local model updates of the consumer NWDAF with update conditions.
Example 57 includes the computer-readable medium of any one of examples 45-56, wherein the update condition includes at least one of: a periodic update time, start and end update times, and a number of training iterations.
Example 58 includes the computer-readable medium of any one of examples 45-57, wherein the Nwdaf_LocalModelUpdate_Notify service operation comprises the following inputs: an analysis ID, a local model configuration comprising the ML algorithm and model parameters, and a timestamp.
Example 59 includes the computer-readable medium of any one of examples 45-58, wherein the method further comprises: when the provider NWDAF provides the trained model to the consumer NWDAF, enabling the provider NWDAF to subscribe to local training data updates of the trained model at the consumer NWDAF for online learning if the consumer NWDAF has indicated its online learning data update capability to the provider NWDAF; and providing, by the consumer NWDAF, local training data updates of the trained model to the provider NWDAF by invoking an Nwdaf_LocalTrainingDataUpdate_Notify service operation.
Example 60 includes the computer-readable medium of any one of examples 45-59, wherein the method further comprises: enabling the consumer NWDAF to provide an indication of an online learning data update capability of the consumer NWDAF to the provider NWDAF through the Nwdaf_ModelProvision service or the Nwdaf_ModelInfo service.
Example 61 includes the computer-readable medium of any one of examples 45-60, wherein the method further comprises: enabling the consumer NWDAF to skip providing the indication of its online learning data update capability if the consumer NWDAF's local NWDAF profile has indicated the online learning data update capability at the NRF.
Example 62 includes the computer-readable medium of any one of examples 45-61, wherein the method further comprises: enabling the provider NWDAF to subscribe to online learning data updates of the consumer NWDAF with update conditions.
Example 63 includes the computer-readable medium of any one of examples 45-62, wherein the update condition includes at least one of: a periodic update time, a start and end update time, and a number of training data sets per update.
Example 64 includes the computer-readable medium of any one of examples 45-63, wherein offline learning is used by default if the online learning data update capability is not indicated.
Example 65 includes the computer-readable medium of any one of examples 45-64, wherein the Nwdaf_LocalTrainingDataUpdate_Notify service operation includes the following inputs: an analysis ID, a training data set together with context information during inference, and a timestamp.
Example 66 includes the computer-readable medium of any one of examples 45-65, wherein the contextual information includes at least one of: inference timestamp, UE type, UE location, application ID, NSSAI, time.
Example 67 includes an apparatus comprising means for performing the acts of the method of any of examples 1-22.
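The discovery and provisioning flow running through examples 1-8 — provider registration in the NRF, consumer discovery, subscription to the model provisioning service, and notifications carrying one or more analysis IDs plus a timestamp acting as the trained-model version — can be illustrated end to end with the following sketch. The class and method names (`Nrf.register`, `publish`, and so on) are assumptions for exposition and do not correspond to actual 3GPP service operations.

```python
class Nrf:
    # Toy Network Repository Function: NWDAF profile registration and discovery.
    def __init__(self):
        self.profiles = {}

    def register(self, address, capabilities):
        self.profiles[address] = set(capabilities)

    def discover(self, capability):
        # Returns the addresses of NWDAFs exposing the requested capability.
        return [a for a, caps in self.profiles.items() if capability in caps]


class ProviderNwdaf:
    # Registers its model-provisioning capability in the NRF and notifies
    # subscribers with the outputs named in example 1: one or more analysis
    # IDs and a timestamp identifying the trained-model version.
    def __init__(self, address, nrf):
        self.address = address
        self.subscribers = []
        nrf.register(address, {"model_provision"})

    def subscribe(self, notify_callback):
        # A consumer NWDAF subscribes for continual model updates (example 6).
        self.subscribers.append(notify_callback)

    def publish(self, analytics_ids, model_config, timestamp):
        for notify in self.subscribers:
            notify({"analytics_ids": analytics_ids,
                    "model_config": model_config,   # ML algorithm + parameters
                    "timestamp": timestamp})        # trained-model version
```

A consumer would first call `nrf.discover("model_provision")` to obtain the provider's address (example 8), then subscribe; each subsequent `publish` delivers the model configuration of example 3 together with the version timestamp.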
The foregoing detailed description includes references to the accompanying drawings, which form a part hereof. The drawings show, by way of illustration, specific embodiments that can be practiced. These embodiments are also referred to herein as "examples". Such examples may include elements in addition to those shown or described. However, the inventors also contemplate examples in which only those elements shown or described are provided. Moreover, the inventors also contemplate examples using any combination or permutation of those elements shown or described (or one or more aspects thereof) with respect to a particular example (or one or more aspects thereof) or with respect to other examples (or one or more aspects thereof).
All publications, patents, and patent documents mentioned in this document are incorporated by reference herein in their entirety, as though individually incorporated by reference. In the event of inconsistent usages between this document and the documents so incorporated by reference, the usage in the incorporated reference(s) should be considered supplementary to that of this document; for irreconcilable inconsistencies, the usage in this document controls.
In this document, the terms "a" or "an" are used, as is common in patent documents, to include one or more than one, independent of any other instances or usages of "at least one" or "one or more." In this document, unless otherwise indicated, the term "or" is used to refer to a nonexclusive or, such that "A or B" includes "A but not B," "B but not A," and "A and B." In the appended claims, the terms "including" and "in which" are used as the plain-English equivalents of the respective terms "comprising" and "wherein." Furthermore, in the following claims, the terms "comprising" and "including" are open-ended; that is, a system, device, article, or process that includes elements in addition to those listed after such a term in a claim is still considered to fall within the scope of that claim. Moreover, in the following claims, the terms "first," "second," "third," and the like are used merely as labels and are not intended to impose numerical requirements on their objects.
The above description is intended to be illustrative and not restrictive. For example, the above-described examples (or one or more aspects thereof) may be used in combination with each other. Other embodiments may be used, for example, by one of ordinary skill in the art upon reading the above description. The Abstract is provided to enable the reader to quickly ascertain the nature of the technical disclosure, and is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. Furthermore, in the foregoing detailed description, various features may be grouped together to simplify the present disclosure. This should not be interpreted as intending that an unclaimed disclosed feature is essential to any claim. Rather, inventive subject matter may lie in less than all features of a particular disclosed embodiment. Thus, the following claims are hereby incorporated into the detailed description, with each claim standing on its own as a separate embodiment. The scope of the embodiments should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.

Claims (25)

1. A method for Machine Learning (ML) model sharing between distributed NWDAFs in a 5G network, comprising:
enabling a provider NWDAF to share a trained model with a consumer NWDAF through a model provisioning service;
wherein the output of the model provisioning service comprises: one or more analysis IDs and timestamps indicating versions of the trained models.
2. The method of claim 1, further comprising:
enabling the consumer NWDAF to discover the provider NWDAF for sharing the trained model, the provider NWDAF providing the model provisioning service registered in a Network Repository Function (NRF), wherein inputs of the model provisioning service include: one or more analysis IDs and timestamps indicating versions of the trained models.
3. The method of claim 1, wherein the output of the model provisioning service further comprises model configuration information, the model configuration information comprising an ML algorithm and model parameters.
4. The method of claim 3, wherein the ML algorithm comprises at least one of a Convolutional Neural Network (CNN) algorithm, a Recurrent Neural Network (RNN) algorithm, and a reinforcement learning algorithm.
5. The method of claim 1, wherein the model provisioning service is supported by an Nwdaf_ModelProvision service or an Nwdaf_ModelInfo service.
6. The method of claim 5, further comprising:
subscribing, by the consumer NWDAF, to the model provisioning service through an operation of the Nwdaf_ModelProvision service to continually obtain updated models and/or model parameters of the trained model.
7. The method of claim 5, further comprising:
requesting, by the consumer NWDAF through an operation of the Nwdaf_ModelInfo service, the model provisioning service to obtain the trained model and/or model parameters of the trained model, wherein the request is a one-time request.
8. The method of claim 2, further comprising:
registering, by the provider NWDAF, its capability to expose the trained model in the NRF; and
discovering, by the consumer NWDAF, an address of the provider NWDAF by querying the NRF.
9. The method of claim 5, further comprising:
when the provider NWDAF provides the trained model to the consumer NWDAF, subscribing, by the provider NWDAF, to local model updates of the trained model at the consumer NWDAF for federated learning if the consumer NWDAF has indicated its local model update capability to the provider NWDAF; and
providing, by the consumer NWDAF, a local model update of the trained model to the provider NWDAF by invoking an Nwdaf_LocalModelUpdate_Notify service operation.
10. The method of claim 9, further comprising:
enabling the consumer NWDAF to provide an indication of a local model update capability of the consumer NWDAF to the provider NWDAF through the Nwdaf_ModelProvision service or the Nwdaf_ModelInfo service.
11. The method of claim 10, further comprising:
enabling the consumer NWDAF to skip providing the indication of its local model update capability if the consumer NWDAF's local NWDAF profile at the NRF has indicated that its local NWDAF capabilities include inference capability and training capability.
12. The method of claim 9, further comprising:
enabling the provider NWDAF to subscribe to local model updates of the consumer NWDAF with update conditions.
13. The method of claim 12, wherein the update condition comprises at least one of: a periodic update time, start and end update times, and a number of training iterations.
14. The method of claim 9, wherein the Nwdaf_LocalModelUpdate_Notify service operation includes the following inputs: an analysis ID, a local model configuration comprising the ML algorithm and model parameters, and a timestamp.
15. The method of claim 5, further comprising:
when the provider NWDAF provides the trained model to the consumer NWDAF, enabling the provider NWDAF to subscribe to local training data updates of the trained model at the consumer NWDAF for online learning if the consumer NWDAF has indicated its online learning data update capability to the provider NWDAF; and
providing, by the consumer NWDAF, local training data updates of the trained model to the provider NWDAF by invoking an Nwdaf_LocalTrainingDataUpdate_Notify service operation.
16. The method of claim 15, further comprising:
enabling the consumer NWDAF to provide an indication of an online learning data update capability of the consumer NWDAF to the provider NWDAF through the Nwdaf_ModelProvision service or the Nwdaf_ModelInfo service.
17. The method of claim 16, further comprising:
enabling the consumer NWDAF to skip providing the indication of its online learning data update capability if the consumer NWDAF's local NWDAF profile has indicated the online learning data update capability at the NRF.
18. The method of claim 15, further comprising:
enabling the provider NWDAF to subscribe to online learning data updates of the consumer NWDAF with update conditions.
19. The method of claim 18, wherein the update condition comprises at least one of: a periodic update time, a start and end update time, and a number of training data sets per update.
20. The method of claim 15, wherein offline learning is used by default if the online learning data update capability is not indicated.
21. The method of claim 15, wherein the Nwdaf_LocalTrainingDataUpdate_Notify service operation comprises the following inputs: an analysis ID, a training data set together with context information during inference, and a timestamp.
22. The method of claim 21, wherein the contextual information comprises at least one of: inference timestamp, UE type, UE location, application ID, NSSAI, time.
23. An apparatus for Machine Learning (ML) model sharing between distributed NWDAFs in a 5G network, comprising:
a processing circuit configured to:
enable a provider NWDAF to share a trained model with a consumer NWDAF through a model provisioning service,
wherein the output of the model provisioning service comprises: one or more analysis IDs and timestamps indicating versions of the trained models.
24. The apparatus of claim 23, the processing circuitry further configured to:
enabling the consumer NWDAF to discover the provider NWDAF for sharing the trained model, the provider NWDAF providing the model provisioning service registered in a Network Repository Function (NRF), wherein inputs of the model provisioning service include: one or more analysis IDs and timestamps indicating versions of the trained models.
25. The apparatus of claim 23, wherein the output of the model provisioning service further comprises model configuration information, the model configuration information comprising an ML algorithm and model parameters.
CN202111153446.4A 2020-09-30 2021-09-29 Method and apparatus for machine learning model sharing between distributed NWDAFs Pending CN114339821A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202063085891P 2020-09-30 2020-09-30
US63/085,891 2020-09-30

Publications (1)

Publication Number Publication Date
CN114339821A

Family

ID=81045578

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111153446.4A Pending CN114339821A (en) 2020-09-30 2021-09-29 Method and apparatus for machine learning model sharing between distributed NWDAFs

Country Status (1)

Country Link
CN (1) CN114339821A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023213413A1 (en) * 2022-05-06 2023-11-09 Huawei Technologies Co., Ltd. Tracing and rollback continuity under analytics id transfer and ue mobility
CN116346206A (en) * 2023-03-27 2023-06-27 广州爱浦路网络技术有限公司 AI/ML model distributed transmission method, device and system based on low orbit satellite and 5GS
CN116566846A (en) * 2023-07-05 2023-08-08 中国电信股份有限公司 Model management method and system, shared node and network node
CN116566846B (en) * 2023-07-05 2023-09-22 中国电信股份有限公司 Model management method and system, shared node and network node

Similar Documents

Publication Publication Date Title
CN114443556A (en) Device and method for man-machine interaction of AI/ML training host
CN114339821A (en) Method and apparatus for machine learning model sharing between distributed NWDAFs
US11871460B2 (en) Domain name system (DNS)-based discovery of regulatory requirements for non-3GPP inter-working function (N3IWF) selection
CN113766502A (en) Apparatus for use in a UE, SMF entity, and provisioning server
CN113825234A (en) Apparatus for use in user equipment
CN113543337A (en) Handling MsgB scheduled uplink transmission collisions with dynamic SFI
CN115694700A (en) Apparatus for use in a wireless communication system
CN114641044A (en) Apparatus for use in source base station, target base station and user equipment
WO2022154961A1 (en) Support for edge enabler server and edge configuration server lifecycle management
US20240162955A1 (en) Beamforming for multiple-input multiple-output (mimo) modes in open radio access network (o-ran) systems
EP4239479A1 (en) Orchestration of computing services and resources for next generation systems
CN116390118A (en) Apparatus for use in ECSP and PLMN management systems
CN115708386A (en) Apparatus for use in a wireless communication system
CN116981056A (en) Apparatus for artificial intelligence or machine learning assisted beam management
CN115884234A (en) Apparatus for use in a wireless communication system
CN115776710A (en) Apparatus and method for next generation radio access network
CN117251224A (en) ML entity loading device for management service producer
CN116756556A (en) MnS and method for supporting ML training
CN115720338A (en) Apparatus for use in a wireless communication network
CN113573418A (en) Arrangement in MN or SN in EPS or 5GS
CN116264747A (en) Device for managing data analysis and management service consumer and producer
CN115278637A (en) Apparatus for use in a core network
WO2022032205A1 (en) Conditional handover failure reporting in minimization of drive tests (mdt)
CN115250465A (en) Apparatus for use in a core network
CN113676931A (en) AF entity in TSN and network-side TSN converter

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination