WO2023148010A1 - Network-centric lifecycle management of AI/ML models deployed in a user equipment (UE) - Google Patents

Network-centric lifecycle management of AI/ML models deployed in a user equipment (UE)

Info

Publication number
WO2023148010A1
Authority
WO
WIPO (PCT)
Prior art keywords
model
message
lcm
network node
network
Prior art date
Application number
PCT/EP2023/051268
Other languages
English (en)
Inventor
Pablo SOLDATI
Luca LUNARDI
Original Assignee
Telefonaktiebolaget Lm Ericsson (Publ)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Telefonaktiebolaget Lm Ericsson (Publ) filed Critical Telefonaktiebolaget Lm Ericsson (Publ)
Publication of WO2023148010A1


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/16 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W24/00 Supervisory, monitoring or testing arrangements
    • H04W24/02 Arrangements for optimising operational condition
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W24/00 Supervisory, monitoring or testing arrangements
    • H04W24/10 Scheduling measurement reports; Arrangements for measurement reports

Definitions

  • the present disclosure relates generally to wireless communication networks, and more specifically to techniques for managing artificial intelligence and/or machine learning (AI/ML) models used by user equipment (UE) when operating in such networks.
  • AI/ML artificial intelligence and/or machine learning
  • NR New Radio
  • 3GPP Third-Generation Partnership Project
  • eMBB enhanced mobile broadband
  • MTC machine type communications
  • URLLC ultra-reliable low latency communications
  • D2D side-link device-to-device
  • FIG. 1 illustrates an exemplary high-level view of the 5G network architecture, consisting of a Next Generation RAN (NG-RAN) 199 and a 5G Core (5GC) 198.
  • NG-RAN 199 can include a set of gNodeB's (gNBs) connected to the 5GC via one or more NG interfaces, such as gNBs 100, 150 connected via interfaces 102, 152, respectively.
  • the gNBs can be connected to each other via one or more Xn interfaces, such as Xn interface 140 between gNBs 100 and 150.
  • each of the gNBs can support frequency division duplexing (FDD), time division duplexing (TDD), or a combination thereof.
  • FDD frequency division duplexing
  • TDD time division duplexing
  • NG-RAN 199 is layered into a Radio Network Layer (RNL) and a Transport Network Layer (TNL).
  • RNL Radio Network Layer
  • TNL Transport Network Layer
  • the NG-RAN architecture, i.e., the NG-RAN logical nodes and interfaces between them, is defined as part of the RNL.
  • For each NG-RAN interface (NG, Xn, F1), the related TNL protocol and functionality are specified.
  • the TNL provides services for user plane transport and signaling transport.
  • the NG RAN logical nodes shown in Figure 1 include a central (or centralized) unit (CU or gNB-CU) and one or more distributed (or decentralized) units (DU or gNB-DU).
  • gNB 100 includes gNB-CU 110 and gNB- DUs 120 and 130.
  • CUs e.g., gNB-CU 110
  • CUs are logical nodes that host higher-layer protocols and perform various gNB functions such as controlling the operation of DUs.
  • Each DU is a logical node that hosts lower-layer protocols and can include, depending on the functional split, various subsets of the gNB functions.
  • each of the CUs and DUs can include various circuitry needed to perform their respective functions, including processing circuitry, transceiver circuitry (e.g., for communication), and power supply circuitry.
  • a gNB-CU connects to gNB-DUs over respective F1 logical interfaces, such as interfaces 122 and 132 shown in Figure 1.
  • the gNB-CU and connected gNB-DUs are only visible to other gNBs and the 5GC as a gNB. In other words, the F1 interface is not visible beyond gNB-CU.
  • Centralized control plane protocols can be hosted in a different CU than centralized user plane protocols (e.g., PDCP-U).
  • a gNB-CU can be divided logically into a CU-CP function (including RRC and PDCP for signaling radio bearers) and CU-UP function (including PDCP for UP).
  • a single CU-CP can be associated with multiple CU-UPs in a gNB.
  • the CU-CP and CU-UP communicate with each other using the E1-AP protocol over the E1 interface, as specified in 3GPP TS 38.463 (v15.4.0).
  • the F1 interface between CU and DU (see Figure 1) is functionally split into F1-C between DU and CU-CP and F1-U between DU and CU-UP.
  • Three deployment scenarios for the split gNB architecture shown in Figure 1 are CU-CP and CU-UP centralized, CU-CP distributed/CU-UP centralized, and CU-CP centralized/CU-UP distributed.
  • FIG. 2 shows another high-level view of an exemplary 5G network architecture, including a NG-RAN 299 and 5GC 298.
  • NG-RAN 299 can include gNBs (e.g., 210a, b) and ng-eNBs (e.g., 220a, b) that are interconnected with each other via respective Xn interfaces.
  • the gNBs and ng-eNBs are also connected via the NG interfaces to 5GC 298, more specifically to access and mobility management functions (AMFs, e.g., 230a, b) via respective NG-C interfaces and to user plane functions (UPFs, e.g, 240a, b) via respective NG-U interfaces.
  • AMFs access and mobility management functions
  • UPFs user plane functions
  • the AMFs can communicate with one or more policy control functions (PCFs, e.g., 250a, b) and network exposure functions (NEFs, e.g., 260a, b).
  • PCFs policy control functions
  • NEFs network exposure functions
  • Each of the gNBs can support the NR radio interface including frequency division duplexing (FDD), time division duplexing (TDD), or a combination thereof.
  • Each of the ng-eNBs can support the fourth generation (4G) Long-Term Evolution (LTE) radio interface. Unlike conventional LTE eNBs, however, ng-eNBs connect to the 5GC via the NG interface.
  • 4G Long-Term Evolution
  • Each of the gNBs and ng-eNBs can serve a geographic coverage area including one or more cells, such as cells 211a-b and 221a-b shown in Figure 2.
  • a UE 205 can communicate with the gNB or ng-eNB serving that cell via the NR or LTE radio interface, respectively.
  • Figure 2 shows gNBs and ng-eNBs separately, it is also possible that a single NG-RAN node provides both types of functionality.
  • LTE Rel-12 introduced dual connectivity (DC) whereby a UE in RRC_CONNECTED state can be connected to two network nodes simultaneously, thereby improving connection robustness and/or capacity.
  • these two network nodes are referred to as "Master eNB” (MeNB) and “Secondary eNB” (SeNB), or more generally as master node (MN) and secondary node (SN).
  • MeNB Master eNB
  • SeNB Secondary eNB
  • a UE is configured with a Master Cell Group (MCG) associated with the MN and a Secondary Cell Group (SCG) associated with the SN.
  • MCG Master Cell Group
  • SCG Secondary Cell Group
  • 3GPP TR 38.804 (v14.0.0) describes various exemplary DC scenarios or configurations in which the MN and SN can apply NR, LTE, or both.
  • EN-DC refers to the scenario where the MN (eNB) employs LTE and the SN (gNB) employs NR, and both are connected to an LTE Evolved Packet Core (EPC).
  • EPC Evolved Packet Core
  • MR multi-RAT
  • Machine learning is a type of artificial intelligence (Al) that focuses on the use of data and algorithms to imitate the way that humans learn, gradually improving its accuracy.
  • ML algorithms build models based on sample (or "training”) data, with the models being used subsequently to make predictions or decisions.
  • ML algorithms can be used in a wide variety of applications (e.g., medicine, email filtering, speech recognition, etc.) in which it is difficult or unfeasible to develop conventional algorithms to perform the needed tasks.
  • AI/ML can be used to enhance the performance of a RAN, such as NG-RAN.
  • 3GPP document RP-201620 defines a study item on "Enhancement for Data Collection for NR and EN-DC", which aims to study the functional framework for RAN intelligence enabled by further data collection through use cases, examples, etc., and to identify the potential standardization impacts on current NG-RAN nodes and interfaces.
  • some objectives include studying high-level principles for RAN intelligence enabled by AI, the AI functionality, input/output of the component for AI-enabled optimization, and identifying benefits of AI-enabled NG-RAN such as energy saving, load balancing, mobility management, coverage optimization, etc.
  • it may be desirable for UEs to use AI/ML functionality when operating in a RAN (e.g., NG-RAN).
  • a RAN e.g., NG-RAN
  • the interaction between UE and the RAN is limited to UE measurements that can be used as input for a training function in the RAN.
  • LCM life cycle management
  • possible locations for AI/ML model inference are currently unspecified.
  • Embodiments of the present disclosure provide specific improvements to LCM of AI/ML models deployed at UEs operating in a RAN, such as by providing, enabling, and/or facilitating solutions to exemplary problems summarized above and described in more detail below.
  • Embodiments include methods (e.g., procedures) performed by a first network node.
  • the exemplary method can include the first network node transmitting to a UE one or more of the following messages:
  • the exemplary method can include the first network node receiving from the UE one or more of the following messages: • a third message that includes an acknowledgement or a failure indication with respect to the LCM information requested by the second message;
  • the first message indicates support for one or more of the following LCM actions by the network:
  • the second message requests the UE to report one or more of the following LCM information:
  • the fourth message is received but the third message is not received, with the fourth message being an implicit acknowledgement with respect to the LCM information requested by the second message.
  • the second message also requests the UE to perform testing of the identified AI/ML model, and the fifth message is not transmitted.
  • the fifth message (i.e., when transmitted) includes one or more of the following:
  • the fifth message does not include the reference data set and one of the following applies:
  • the fifth message includes an explicit indication for the UE to use a locally-available data set for the requested testing
  • the report in the seventh message includes one or more of the following:
  • the seventh message is received but the sixth message is not received, with the seventh message being an implicit acknowledgement with respect to the testing requested by the fifth message.
  • these exemplary methods can also include determining that performance of the identified AI/ML model is not satisfactory based on the report in the seventh message, and performing one of the following based on the determination:
  • the eighth message includes one or more of the following:
  • these exemplary methods can also include receiving from a second network node a ninth message that includes a request for LCM actions in relation to at least one AI/ML model deployed at one or more UEs, and transmitting to the second network node a tenth message that includes a report about LCM actions performed by the first network node on the at least one AI/ML model.
  • the LCM actions requested by the ninth message can include one or more of the following:
  • the report in the tenth message includes LCM actions and/or AI/ML models indicated to the UE by the eighth message.
  • any one of the following can apply:
  • the first and second network nodes are different network nodes in a RAN
  • the first and second network nodes are different units or functions of one network node in a RAN
  • one of the first and second network nodes is a RAN node and the other of the first and second network nodes is one of the following: a core network (CN) node or function, a service management and orchestration (SMO) function, or part of an operations/administration/maintenance (OAM) system.
  • CN core network
  • SMO service management and orchestration
  • OAM operations/administration/maintenance
  • Other embodiments include methods (e.g., procedures) performed by a UE.
  • these exemplary methods can include the UE receiving from a first network node one or more of the first, second, fifth, and eighth messages summarized above in relation to the first network node embodiments.
  • these exemplary methods can include the UE transmitting one or more of the third, fourth, sixth, seventh, and eleventh messages summarized above in relation to the first network node embodiments.
  • the first, second, third, fourth, fifth, sixth, seventh, eighth, and eleventh messages can have any of the same content and/or characteristics as summarized above for first network node embodiments.
  • these exemplary methods can also include performing testing of the identified AI/ML model in accordance with the request in the fifth message.
  • the seventh message can include the results of the testing performed.
  • the eighth message is received in response to the seventh message and includes one or more of the following:
  • these exemplary methods can also include applying the new, updated, retrained, or modified AI/ML model identified in the eighth message for one or more of the following UE operations in the RAN:
  • Other embodiments include methods (e.g., procedures) performed by a second network node. These exemplary methods can include transmitting to a first network node a ninth message that includes a request for LCM actions in relation to at least one AI/ML model deployed at one or more UEs. These exemplary methods can also include receiving from the first network node a tenth message that includes a report about LCM actions performed by the first network node on the at least one AI/ML model.
  • the eighth and ninth messages can have any of the same content and/or characteristics as summarized above for first network node embodiments. In various embodiments, any one of the following can apply:
  • the first and second network nodes are different network nodes in a RAN
  • the first and second network nodes are different units or functions of one network node in a RAN
  • one of the first and second network nodes is a RAN node and the other of the first and second network nodes is one of the following: a CN node or function, an SMO function, or part of an OAM system.
  • network nodes e.g., base station, eNB, gNB, ng-eNB, etc. or unit/function thereof, CN node, OAM, SMO, etc.
  • UEs e.g., wireless devices
  • Other embodiments include non-transitory, computer-readable media storing program instructions that, when executed by processing circuitry, configure such network nodes or UEs to perform operations corresponding to any of the exemplary methods described herein.
  • Embodiments described herein can provide mobile network operators (MNOs) a standardized and/or uniform way for network nodes to handle LCM actions for AI/ML models deployed at UEs.
  • Embodiments can enable an MNO to monitor performance of AI/ML models deployed at UEs, thereby ensuring that UEs operate with AI/ML model(s) that provide good performance and/or are suitable for prevailing radio environments (e.g., signal levels, interference, multipath, etc.), traffic conditions, mobility scenarios, load conditions, etc. in cells served by network nodes.
  • prevailing radio environments e.g., signal levels, interference, multipath, etc.
  • embodiments facilitate improved UE and network performance in terms of throughput, spectral efficiency, and/or energy usage.
  • Figures 1-2 illustrate two high-level views of an exemplary 5G/NR network architecture.
  • Figure 3 is a block diagram of an exemplary framework for RAN intelligence based on AI/ML models.
  • Figure 4 is a block diagram of another exemplary framework for RAN intelligence based on AI/ML models, including model management functionality.
  • Figure 5 is a signal flow diagram of an exemplary AI/ML model LCM procedure between a UE, a first network node, and a second network node, according to various embodiments of the present disclosure.
  • Figures 6-9 show ASN.1 data structures for various fields or information elements (IEs) that can be used in messages used for LCM of AI/ML models deployed at a UE, according to various embodiments of the present disclosure.
  • IEs information elements
  • Figure 10 shows a flow diagram of an exemplary method (e.g., procedure) for a first network node (e.g., base station, eNB, gNB, ng-eNB, etc.), according to various embodiments of the present disclosure.
  • a first network node e.g., base station, eNB, gNB, ng-eNB, etc.
  • Figure 11 shows a flow diagram of an exemplary method (e.g., procedure) for a UE (e.g., wireless device), according to various embodiments of the present disclosure.
  • a UE e.g., wireless device
  • Figure 12 shows a flow diagram of an exemplary method (e.g., procedure) for a second network node (e.g., base station, eNB, gNB, ng-eNB, etc.), according to various embodiments of the present disclosure.
  • a second network node e.g., base station, eNB, gNB, ng-eNB, etc.
  • Figure 13 shows a communication system according to various embodiments of the present disclosure.
  • Figure 14 shows a UE according to various embodiments of the present disclosure.
  • Figure 15 shows a network node according to various embodiments of the present disclosure.
  • Figure 16 shows a host computing system according to various embodiments of the present disclosure.
  • Figure 17 is a block diagram of a virtualization environment in which functions implemented by some embodiments of the present disclosure may be virtualized.
  • Figure 18 illustrates communication between a host computing system, a network node, and a UE via multiple connections, at least one of which is wireless, according to various embodiments of the present disclosure.
  • Radio Access Node As used herein, a “radio access node” (or equivalently “radio network node,” “radio access network node,” or “RAN node”) can be any node in a radio access network (RAN) that operates to wirelessly transmit and/or receive signals.
  • RAN radio access network
  • examples of a radio access node include, but are not limited to, a base station (e.g., gNB in a 3GPP 5G/NR network or an enhanced or evolved Node B (eNB) in a 3GPP LTE network), base station distributed components (e.g., CU and DU), a high-power or macro base station, a low-power base station (e.g., micro, pico, femto, or home base station, or the like), an integrated access backhaul (IAB) node, a transmission point (TP), a transmission reception point (TRP), a remote radio unit (RRU or RRH), and a relay node.
  • a base station e.g., gNB in a 3GPP 5G/NR network or an enhanced or evolved Node B (eNB) in a 3GPP LTE network
  • base station distributed components e.g., CU and DU
  • a high-power or macro base station, a low-power base station e.g., micro, pico, femto, or home base station, or the like
  • a "core network node” is any type of node in a core network.
  • Some examples of a core network node include, e.g., a Mobility Management Entity (MME), a serving gateway (SGW), a PDN Gateway (P-GW), a Policy and Charging Rules Function (PCRF), an access and mobility management function (AMF), a session management function (SMF), a user plane function (UPF), a Charging Function (CHF), a Policy Control Function (PCF), an Authentication Server Function (AUSF), a location management function (LMF), or the like.
  • MME Mobility Management Entity
  • SGW serving gateway
  • P-GW PDN Gateway
  • PCRF Policy and Charging Rules Function
  • AMF access and mobility management function
  • SMF session management function
  • UPF user plane function
  • Charging Function CHF
  • PCF Policy Control Function
  • AUSF Authentication Server Function
  • LMF location management function
  • Wireless Device As used herein, a “wireless device” (or “WD” for short) is any type of device that is capable, configured, arranged and/or operable to communicate wirelessly with network nodes and/or other wireless devices. Communicating wirelessly can involve transmitting and/or receiving wireless signals using electromagnetic waves, radio waves, infrared waves, and/or other types of signals suitable for conveying information through air. Unless otherwise noted, the term “wireless device” is used interchangeably herein with the term “user equipment” (or “UE” for short), with both of these terms having a different meaning than the term “network node”.
  • Radio Node As used herein, a "radio node” can be either a “radio access node” (or equivalent term) or a "wireless device.”
  • Network Node: As used herein, a "network node" is any node that is either part of the radio access network (e.g., a radio access node or equivalent term) or of the core network (e.g., a core network node discussed above) of a cellular communications network.
  • a network node is equipment capable, configured, arranged, and/or operable to communicate directly or indirectly with a wireless device and/or with other network nodes or equipment in the cellular communications network, to enable and/or provide wireless access to the wireless device, and/or to perform other functions (e.g., administration) in the cellular communications network.
  • Base station may comprise a physical or a logical node transmitting or controlling the transmission of radio signals, e.g., eNB, gNB, ng-eNB, en-gNB, centralized unit (CU)/distributed unit (DU), transmitting radio network node, transmission point (TP), transmission reception point (TRP), remote radio head (RRH), remote radio unit (RRU), Distributed Antenna System (DAS), relay, etc.
  • eNB, gNB, ng-eNB, en-gNB, centralized unit (CU)/distributed unit (DU), transmitting radio network node, transmission point (TP), transmission reception point (TRP), remote radio head (RRH), remote radio unit (RRU), Distributed Antenna System (DAS), relay, etc.
  • node can be any type of node that can operate in or with a wireless network (including RAN and/or core network), including a radio access node (or equivalent term), core network node, or wireless device.
  • a wireless network including RAN and/or core network
  • radio access node or equivalent term
  • core network node or wireless device.
  • node may be limited to a particular type (e.g., radio access node) based on its specific characteristics in any given context of use.
  • NR uses CP-OFDM (Cyclic Prefix Orthogonal Frequency Division Multiplexing) in the DL and both CP-OFDM and DFT-spread OFDM (DFT-S-OFDM) in the UL.
  • CP-OFDM Cyclic Prefix Orthogonal Frequency Division Multiplexing
  • DFT-S-OFDM DFT-spread OFDM
  • NR DL and UL physical resources are organized into equal-sized 1-ms subframes. A subframe is further divided into multiple slots of equal duration, with each slot including multiple OFDM-based symbols.
  • time-frequency resources can be configured much more flexibly for an NR cell than for an LTE cell.
  • SCS sub-carrier spacing
  • in contrast to LTE's fixed 15-kHz OFDM sub-carrier spacing (SCS), NR SCS can range from 15 to 240 kHz, with even greater SCS considered for future NR releases.
  • NR networks In addition to providing coverage via cells as in LTE, NR networks also provide coverage via "beams.”
  • a downlink (DL, i.e., network to UE) "beam” is a coverage area of a network-transmitted reference signal (RS) that may be measured or monitored by a UE.
  • RS can include any of the following: synchronization signal/PBCH block (SSB), channel state information RS (CSI-RS), tertiary reference signals (or any other sync signal), positioning RS (PRS), demodulation RS (DMRS), phase-tracking reference signals (PTRS), etc.
  • SSB is available to all UEs regardless of the state of their connection with the network, while other RS (e.g., CSI-RS, DM-RS, PTRS) are associated with specific UEs that have a network connection.
  • the radio resource control (RRC) protocol controls communications between UE and gNB at the radio interface as well as mobility of a UE between cells in the NG-RAN.
  • RRC also broadcasts system information (SI) and performs establishment, configuration, maintenance, and release of data radio bearers (DRBs) and signaling radio bearers (SRBs) used by UEs. Additionally, RRC controls addition, modification, and release of carrier aggregation (CA) and dual-connectivity (DC) configurations for UEs.
  • RRC also performs various security functions such as key management.
  • After a UE is powered ON, it will be in the RRC_IDLE state until an RRC connection is established with the network, at which time the UE will transition to RRC_CONNECTED state (e.g., where data transfer can occur). The UE returns to RRC_IDLE after the connection with the network is released.
  • in the RRC_IDLE state, the UE's radio is active on a discontinuous reception (DRX) schedule configured by upper layers.
  • DRX active periods, also referred to as "DRX On durations"
  • an RRC_IDLE UE receives SI broadcast in the cell where the UE is camping, performs measurements of neighbor cells to support cell reselection, and monitors a paging channel on PDCCH for pages from 5GC via gNB.
  • NR RRC includes an RRC_INACTIVE state in which a UE is known (e.g., via UE context) by the serving gNB.
  • RRC_INACTIVE has some properties similar to a "suspended" condition used in LTE.
  • LTE Rel-12 introduced dual connectivity (DC) whereby a UE in RRC_CONNECTED state can be connected to two network nodes simultaneously, thereby improving connection robustness and/or capacity.
  • these two network nodes are referred to as “Master eNB” (MeNB) and “Secondary eNB” (SeNB). More generally, these two network nodes are referred to as master node (MN) and secondary node (SN).
  • MN master node
  • SN secondary node
  • a UE is configured with a Master Cell Group (MCG) associated with the MN and a Secondary Cell Group (SCG) associated with the SN.
  • MCG Master Cell Group
  • SCG Secondary Cell Group
  • Each of these groups of serving cells includes one MAC entity, a set of logical channels with associated RLC entities, a primary cell (PCell or PSCell), and optionally one or more secondary cells (SCells).
  • the term “Special Cell” refers to the PCell of the MCG or the PSCell of the SCG depending on whether the UE's MAC entity is associated with the MCG or the SCG, respectively.
  • SpCell refers to the PCell.
  • An SpCell is always activated and supports physical uplink control channel (PUCCH) transmission and contention-based random access by UEs.
  • PUCCH physical uplink control channel
  • the MN provides SI and terminates the control plane (CP) connection towards the UE and, as such, is the UE's controlling node, including for handovers to and from SNs.
  • the MN terminates the connection between the RAN (e.g., eNB) and the MME for an LTE UE.
  • the reconfiguration, addition, and removal of SCells can be performed by RRC.
  • RRC Radio Resource Control
  • Both MN and SN can terminate the user plane (UP) to the UE.
  • the LTE DC UP includes three different types of bearers. MCG bearers are terminated in the MN, and the SN is not involved in the transport of UP data for MCG bearers. Likewise, SCG bearers are terminated in the SN, and the MN is not involved in the transport of UP data for SCG bearers. Finally, split bearers (and their corresponding S1-U connections to S-GW) are also terminated in the MN. However, PDCP data is transferred between MN and SN via X2-U. Both SN and MN are involved in transmitting data for split bearers.
  • 3GPP TR 38.804 (v14.0.0) describes various exemplary DC scenarios or configurations in which the MN and SN can apply NR, LTE, or both.
  • the following terminology is used to describe these exemplary DC scenarios or configurations:
  • LTE DC (i.e., both MN and SN employ LTE, as discussed above);
  • EN-DC LTE-NR DC where MN (eNB) employs LTE and SN (gNB) employs NR, and both are connected to EPC.
  • NGEN-DC LTE-NR dual connectivity where a UE is connected to one ng-eNB that acts as a MN and one gNB that acts as a SN. The ng-eNB is connected to the 5GC and the gNB is connected to the ng-eNB via the Xn interface.
  • NE-DC LTE-NR dual connectivity where a UE is connected to one gNB that acts as a MN and one ng-eNB that acts as a SN.
  • the gNB is connected to 5GC and the ng-eNB is connected to the gNB via the Xn interface.
  • NR-DC both MN and SN employ NR.
  • MR-DC multi-RAT DC: a generalization of the Intra-E-UTRA Dual Connectivity (DC) described in 3GPP TS 36.300 (v16.3.0), where a multiple Rx/Tx UE may be configured to utilize resources provided by two different nodes connected via non-ideal backhaul, one providing E-UTRA access and the other one providing NR access.
  • One node acts as the MN and the other as the SN.
  • the MN and SN are connected via a network interface and at least the MN is connected to the core network.
  • EN-DC, NE-DC, and NGEN-DC are different example cases of MR-DC.
  • 3GPP document RP-201620 defines a study item (SI) on "Enhancement for Data Collection for NR and EN-DC", which aims to study the functional framework for RAN intelligence enabled by further data collection through use cases, examples, etc., and to identify the potential standardization impacts on current NG-RAN nodes and interfaces.
  • SI Study item
  • some objectives include:
  • 3GPP has released 3GPP Technical Report (TR) 37.817 that describes high-level principles that should be applied for AI-enabled RAN intelligence.
  • This document also includes Figure 3, which is a block diagram of an exemplary framework for RAN intelligence based on AI/ML models.
  • 3GPP TR 37.817 (v1.1.0) describes the following high-level principles in the context of Figure 3:
  • Model Training and Model Inference functions should be able to request, if needed, specific information to be used to train or execute the AI/ML algorithm and to avoid reception of unnecessary information. The nature of such information depends on the use case and on the AI/ML algorithm.
  • Model Inference function should signal the outputs of the model only to nodes that have explicitly requested them (e.g., via subscription), or nodes that take actions based on the output from Model Inference.
  • the Data Collection block in Figure 3 is a function that provides input data to Model Training and Model Inference functions (described below).
  • AI/ML algorithm-specific data preparation e.g., data pre-processing and cleaning, formatting, and transformation
  • Examples of input data include measurements from UEs or different network entities, feedback from the Actor block (described below), and output from an AI/ML model.
  • the Model Training block in Figure 3 is a function that performs the ML model training, validation, and testing. The testing may generate model performance metrics.
  • the Model Training function is also responsible for data preparation (e.g., data pre-processing and cleaning, formatting, and transformation) based on Training Data provided by the Data Collection function, if required.
  • the Model Training block includes a Model Deployment/Update procedure that is used to initially deploy a trained, validated, and tested AI/ML model to the Model Inference function, as well as to provide model updates to the Model Inference function. Details of the Model Deployment/Update procedure and the use case-specific AI/ML models transferred via this procedure are out of scope of the Rel-17 SI. The feasibility of single-vendor or multi-vendor environments has not been studied in the Rel-17 SI.
  • the Model Inference block in Figure 3 is a function that provides AI/ML model outputs such as predictions or decisions.
  • the Model Inference function may provide model performance feedback to the Model Training function, when applicable.
  • the Model Inference function is also responsible for data preparation (e.g., data pre-processing and cleaning, formatting, and transformation) based on the Inference Data provided by the Data Collection function, if required. Details of the inference outputs and the model performance feedback are out of scope of the Rel-17 SI.
  • the Actor block in Figure 3 is a function that receives the output from the Model inference function and triggers or performs corresponding actions.
  • the Actor may trigger actions directed to itself or to other entities.
  • the Actor can also provide Feedback Information that may be needed to derive the Training Data, the Inference Data, or the performance feedback.
  • 3GPP TR 37.817 (v1.1.0) also identifies three use cases for RAN Intelligence: network energy savings, load balancing, and mobility optimization. 3GPP TR 37.817 (v1.1.0) describes that the following solutions can be used to support AI/ML-based network energy saving:
  • AI/ML Model Training is located in the OAM and AI/ML Model Inference is located in the gNB.
  • AI/ML Model Training and AI/ML Model Inference are both located in the gNB.
  • • gNB can continue Model Training based on AI/ML model trained in the OAM.
  • AI/ML Model Training is located in the OAM and AI/ML Model Inference is located in the gNB-CU.
  • 3GPP TR 37.817 (v1.1.0) also describes that the following solutions can be used to support AI/ML-based Mobility Optimization:
  • AI/ML Model Training function is deployed in OAM, while the Model Inference function resides within the RAN node (e.g., gNB).
  • RAN node e.g., gNB
  • Both the AI/ML Model Training function and the AI/ML Model Inference function reside within the RAN node (e.g., gNB).
  • • gNB can continue Model Training based on AI/ML model trained in the OAM.
  • AI/ML Model Training is located in CU-CP or OAM
  • AI/ML Model Inference function is located in CU-CP.
  • 3GPP TR 37.817 (v1.1.0) also describes that the following solutions can be used to support AI/ML-based Load Balancing:
  • AI/ML Model Training is located in the OAM and AI/ML Model Inference is located in the gNB.
  • AI/ML Model Training and AI/ML Model Inference are both located in the gNB.
  • • gNB can continue Model Training based on AI/ML model trained in the OAM.
  • AI/ML Model Training is located in the OAM and AI/ML Model Inference is located in the gNB-CU. • AI/ML Model Training and Model Inference are both located in the gNB-CU.
  • a Model Management function can be introduced into the exemplary framework shown in Figure 3.
  • Figure 4 is a block diagram of another exemplary framework for RAN intelligence based on AI/ML models, including this Model Management functionality.
  • model deployment/update should be decided and performed by Model Management instead of by Model Training.
  • Model Management may also host a model repository in which models are stored.
  • Model Management should also control the Model Training function, e.g., by requesting model training and receiving the trained model(s).
  • Model Management should also support model performance monitoring, which is used to assist and control Model Inference.
  • the model performance feedback from Model Inference should be first sent to Model Management. If the performance is not ideal, Model Management may decide to fall back to a traditional algorithm or change/update the model being used.
  • Model Management may be hosted by OAM, gNB-CU, or other network entity(ies) depending on the use case. There is a need to clearly define the Model Management function in order to facilitate design and analysis of signaling needed to support the framework shown in Figure 4.
  • 3GPP document RP-213599 describes a Study on AI/ML for NR Air Interface. This study includes certain AI/ML use cases or scenarios where collaboration between gNB and UEs is needed, including:
  • CSI feedback enhancement e.g., overhead reduction, accuracy improvements, prediction, etc.
  • Beam management e.g., beam prediction in time and/or spatial domains for overhead/latency reduction, beam selection accuracy improvement, etc.
  • NLOS non-line-of-sight
  • it may be desirable for UEs to use AI/ML functionality when operating in a RAN (e.g., NG-RAN).
  • a RAN e.g., NG-RAN
  • the interaction between UE and the RAN is limited to UE measurements that can be used as input for a training function in the RAN.
  • the three use cases or scenarios mentioned above are examples where tighter interaction between UE and RAN is needed, particularly for LCM of AI/ML models.
  • possible locations for AI/ML model inference are currently unspecified.
  • embodiments of the present disclosure provide flexible and efficient techniques for a first network node to perform LCM of an AI/ML model deployed at a UE.
  • LCM performed by the first network node can include re-training, testing, verifying, validating, modifying, enabling, disabling, replacing, revoking, or updating the AI/ML model deployed at the UE.
  • the first network node can inform UEs of the LCM actions for AI/ML model(s) that are supported and/or provided by the first network node.
  • the first network node can request information about AI/ML model(s) deployed at UE(s) and obtain the requested information from the UE(s).
  • the first network node can provide UE(s) with test data for testing, verifying, validating, etc. an AI/ML model deployed at the UE(s) and obtain a report from the UE(s) about the outcome of the testing, verifying, validating, etc. based on the provided test data.
  • the first network node can provide UE(s) with conditions for performing the testing, verifying, validating, etc. based on the provided test data.
  • the first network node can send to UE(s) updated, retrained, or modified versions of AI/ML models deployed at the UE(s).
  • the first network node can inform a second network node of the LCM actions performed for an AI/ML model deployed at the UE.
  • Embodiments also include complementary methods performed by the UE and by the second network node.
  • Embodiments can provide various benefits, advantages, and/or solutions to problems described herein.
  • embodiments can provide mobile network operators (MNOs) a standardized and/or uniform way for network nodes to handle LCM actions for AI/ML models deployed at UEs.
  • Embodiments can also enable an MNO to monitor performance of AI/ML models deployed at UEs, thereby ensuring that UEs operate with AI/ML model(s) that provide good performance and/or are suitable for prevailing radio environments (e.g., signal levels, interference, multipath, etc.), traffic conditions, mobility scenarios, load conditions, etc. in cells served by network nodes.
  • prevailing radio environments e.g., signal levels, interference, multipath, etc.
  • embodiments facilitate improved UE and network performance in terms of throughput, spectral efficiency, and/or energy usage.
  • the term "AI/ML model deployed at a UE" is used to denote that at least an inference function of the AI/ML model is deployed at the UE, but does not preclude other functions or aspects (e.g., Actor) of the AI/ML model from being deployed at the UE.
  • embodiments disclosed herein are applicable to any type of AI/ML used in a RAN.
  • Non-limiting examples include supervised learning, deep learning, reinforcement learning, contextual multi-armed bandit algorithms, autoregression algorithms, etc., or combinations thereof.
  • Such algorithms may exploit functional approximation models, also referred to as AI/ML models, including feedforward neural networks, deep neural networks, recurrent neural networks, convolutional neural networks, etc.
  • reinforcement learning algorithms include deep reinforcement learning algorithms such as deep Q-network (DQN), proximal policy optimization (PPO), double Q-learning, policy gradient algorithms, off-policy learning algorithms, actor-critic algorithms, and advantage actor-critic algorithms (e.g., A2C, A3C, actor-critic with experience replay, etc.).
  • DQN deep Q-network
  • PPO proximal policy optimization
  • the AI/ML model(s) subject to LCM actions can be used by the UE and/or the network for various purposes, use cases, and/or operations, including any of the following:
  • CSI channel state information
  • CQI channel quality indicator
  • PMI precoding matrix indicator
  • SNR signal to noise ratio
  • SINR signal to interference plus noise ratio
  • RSRP reference signal received power
  • link adaptation parameters such as modulation order, modulation and coding scheme (MCS), etc.
  • MTSI multimedia telephony service for IMS
  • DASH dynamic adaptive streaming over HTTP
  • VR virtual reality
  • AR augmented reality
  • XR extended reality
  • URLLC ultra-reliable low-latency communications
  • TSN time-sensitive networking
  • TSC time-sensitive communication
  • the first and second network nodes shown in Figure 5 can have any of the following arrangements and/or characteristics:
  • RAN nodes e.g., two gNBs, two eNBs, two en-gNBs, two ng-eNBs, etc.
  • different units/functions of one RAN node (e.g., gNB-CU-CP and gNB-DU, gNB-CU-CP and gNB-CU-UP, etc.);
  • the first network node can be a first RAN node (e.g., gNB, eNB, en-gNB, ng-eNB, etc.) and the second network node can be a unit/function of a second RAN node (e.g., gNB-CU-CP);
  • a first RAN node e.g., gNB, eNB, en-gNB, ng-eNB, etc.
  • the second network node can be a unit/function of a second RAN node (e.g., gNB-CU-CP);
  • RAT Radio Access Technology
  • NR New Radio
  • WiFi Wireless Fidelity
  • RAN e.g., E-UTRAN or NG-RAN
  • different RANs e.g., one in NG-RAN and one in E- UTRAN
  • connected via a direct signaling connection e.g., XnAP
  • an indirect signaling connection e.g., via S1AP or NGAP to/from the CN
  • one of the first and second network nodes can be a RAN node (or unit/function thereof) while the other can be part of an operations/administration/maintenance (OAM) system or a service management and orchestration (SMO) function; and
  • OAM operations/administration/maintenance
  • SMO service management and orchestration
  • one of the first and second network nodes can be a RAN node (or unit/function thereof) while the other can be a CN node or function.
  • Figure 5 is a signal flow diagram of exemplary AI/ML model LCM procedures between a UE (520), a first network node (510), and a second network node (530), according to various embodiments of the present disclosure.
  • the embodiments summarized above will now be described in more detail in the context of Figure 5.
  • the numerical labels (e.g., “first”, “second”, etc.) given to the various messages shown in Figure 5 are used for distinguishing the messages from each other and are not intended to imply any specific order or sequence, unless expressly stated to the contrary.
  • the first network node can send (e.g., transmit) to one or more UEs a first message that indicates support for LCM actions by the network (e.g., in first network node, second network node, etc.) in relation to at least one AI/ML model deployed at the UE(s).
  • the first message transmitted by the first network node to the UE can be one or more of the following:
  • broadcast e.g., SI message or block
  • NAS Non-Access Stratum
  • AS Access Stratum
  • the first network node can indicate support for one or more of the following LCM actions:
  • the first network node can indicate support for any of the LCM actions as general support (e.g., overall support of signaling procedures used for LCM of AI/ML models deployed at UE), support for specific LCM features or options, support for specific types of AI/ML models, etc.
  • the first network node can indicate support for LCM of AI/ML models that relate to any of the various purposes, use cases, and/or operations mentioned above.
  • the first network node can indicate support for one or more specific types of AI/ML models, such as any of the exemplary types of AI/ML models mentioned above.
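  • As a non-limiting illustration of the above, the network's LCM support indication could be organized as in the following ASN.1 sketch; the IE name, field names, types, list bounds, and enumeration values are assumptions introduced here for illustration (the action values mirror the LCM actions listed in this disclosure) and do not correspond to any defined 3GPP IE:

        -- Hypothetical ASN.1 sketch only; all names, types, and bounds are illustrative.
        AIML-LCM-NetworkSupport-r18 ::= SEQUENCE {
            lcmActionsSupported-r18    SEQUENCE (SIZE (1..8)) OF AIML-LCM-Action-r18,
            modelTypesSupported-r18    SEQUENCE (SIZE (1..8)) OF OCTET STRING    OPTIONAL,
            useCasesSupported-r18      SEQUENCE (SIZE (1..8)) OF OCTET STRING    OPTIONAL,
            ...
        }

        -- LCM actions corresponding to those listed in this disclosure (re-training, testing,
        -- verifying, validating, modifying, enabling, disabling, replacing, revoking, updating).
        AIML-LCM-Action-r18 ::= ENUMERATED {
            retraining, testing, verification, validation, modification,
            enabling, disabling, replacement, revocation, update, ...
        }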
  • the first network node can send one or more second messages that request one or more UEs to report information for LCM of at least one AI/ML model currently deployed at the UE(s).
  • the first network node can request reporting of any of the following information:
  • AI/ML model identifier or identity
  • source e.g., supplier or vendor
  • network type e.g., private
  • indication of whether the LCM of the identified AI/ML model is enabled and/or supported at the UE
  • UE capabilities associated with the identified AI/ML model e.g., radio capabilities, service capabilities, etc.
  • the second message can be one or more existing RRC messages that have been extended to carry the LCM information request from the first network node to the UE.
  • Example RRC messages include UEInformationRequest and DLInformationTransfer, with details of these example messages discussed below.
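  • As a non-limiting illustration of the LCM information listed above, the per-model report requested by the second message could be structured as in the following ASN.1 sketch; the IE name, field names, and types are assumptions introduced here for illustration and are not part of any 3GPP specification:

        -- Hypothetical per-model report structure; all names and types are illustrative.
        AIML-ModelInfo-r18 ::= SEQUENCE {
            aimlModelId-r18             OCTET STRING,                             -- AI/ML model identifier or identity
            modelSource-r18             OCTET STRING                    OPTIONAL, -- supplier or vendor of the model
            networkType-r18             ENUMERATED {public, private, ...}  OPTIONAL,
            lcmEnabled-r18              BOOLEAN,                                  -- LCM of this model enabled/supported at the UE
            associatedCapabilities-r18  OCTET STRING                    OPTIONAL, -- container for associated radio/service capabilities
            ...
        }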
  • the first network node can receive from a UE a third message that includes an acknowledgement or a failure indication with respect to some or all of the LCM information requested in the second message.
  • the third message may include some or all of the LCM information requested in the second message.
  • the first network node can receive from a UE a fourth message that includes some or all of the LCM information requested in the second message.
  • the third message or the fourth message can be one or more existing RRC messages that have been extended to carry the LCM information requested by the second message.
  • Example RRC messages include UEInformationResponse and ULInformationTransfer, with details of these example messages discussed below.
  • the first network node can send to the UE one or more fifth messages that include any of the following testing information for AI/ML model(s) currently deployed at the UE:
  • conditions or events associated with performing the test e.g., for initiating, ending, pausing, and/or resuming the test, conditions to be fulfilled during the test (e.g., specific UE RRC state), etc.;
  • the reference data set can include a set of reference input-output pairs, where each reference output value (also referred to as "ground truth") represents an output that is expected to occur when the corresponding reference input value is applied to the model for verification.
  • the reference data set can include reference state-action pairs. Each reference action represents an expected output of the model or a decision of an AI/ML algorithm using the AI/ML model, when configuring the AI/ML model with the reference state.
  • when the fifth message(s) does not include a reference data set, the UE can use a (same or different) reference data set available locally at the UE for the requested testing.
  • the fifth message may include an explicit indication for the UE to use a locally-available data set for the requested testing.
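  • As a non-limiting illustration of the testing information described above, the fifth message could carry a structure along the lines of the following ASN.1 sketch; the IE names, field types, and bounds are assumptions introduced here for illustration and are not part of any 3GPP specification:

        -- Hypothetical test-configuration structure; all names, types, and bounds are illustrative.
        AIML-ModelTestConfig-r18 ::= SEQUENCE {
            aimlModelId-r18        OCTET STRING,                            -- model to be tested
            testConditions-r18     OCTET STRING                   OPTIONAL, -- conditions/events for initiating, ending, pausing, or resuming the test
            referenceDataSet-r18   SEQUENCE (SIZE (1..1024)) OF AIML-ReferenceSample-r18  OPTIONAL,
            useLocalDataSet-r18    ENUMERATED {true}              OPTIONAL, -- explicit indication to use a locally-available data set
            ...
        }

        -- One reference input-output (or state-action) pair; the output is the "ground truth".
        AIML-ReferenceSample-r18 ::= SEQUENCE {
            referenceInput-r18     OCTET STRING,
            referenceOutput-r18    OCTET STRING
        }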
  • the requested testing of an identified AI/ML model by the UE can include one or more of the following operations:
  • the first network node can receive from the UE a sixth message that includes an acknowledgement or a failure indication with respect to some or all the testing requested in the fifth message(s).
  • the first network node can receive from the UE one or more seventh messages, with each seventh message including a report of a testing performed by the UE on AI/ML model(s) in accordance with the request in the fifth message(s).
  • each report can include an indication of reference data set(s) used for the testing, e.g., reference data set(s) provided by the first network node and/or reference data set(s) stored locally at the UE.
  • each report can include performance metric(s) requested by the first network node, such as accuracy, precision, recall, etc.
  • performance metrics that can be reported by the UE (by request or unsolicited) can include UE resources needed to execute the AI/ML model, UE energy consumption due to execution of the AI/ML model, performance relative to a threshold level, etc.
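  • As a non-limiting illustration, the report carried by the seventh message could be organized as in the following ASN.1 sketch covering the data set indication and performance metrics mentioned above; the names, types, and value ranges are assumptions introduced here for illustration only:

        -- Hypothetical test-report structure; all names, types, and ranges are illustrative.
        AIML-ModelTestResult-r18 ::= SEQUENCE {
            aimlModelId-r18           OCTET STRING,
            dataSetUsed-r18           ENUMERATED {networkProvided, locallyStored, ...},
            accuracy-r18              INTEGER (0..100)     OPTIONAL, -- e.g., percentage of correct outputs
            precision-r18             INTEGER (0..100)     OPTIONAL,
            recall-r18                INTEGER (0..100)     OPTIONAL,
            ueResourceUsage-r18       INTEGER (0..100)     OPTIONAL, -- UE resources needed to execute the model
            ueEnergyConsumption-r18   INTEGER (0..100)     OPTIONAL, -- energy consumption due to model execution
            aboveThreshold-r18        BOOLEAN              OPTIONAL, -- performance relative to a configured threshold
            ...
        }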
  • the first network node uses the received testing report(s) to determine whether a new AI/ML model is required for the UE. For example, if the performance metric(s) provided by the UE indicate that the performance of the currently deployed AI/ML model is not satisfactory, the first network node may re-train, update, or modify the currently deployed AI/ML model. In case the first network node has no access to the AI/ML model currently deployed at the UE, the first network node may train a new AI/ML model for deployment at the UE.
  • the first RAN node sends to the UE one or more eighth messages, each including one or more of the following AI/ML model deployment information:
  • the eighth message can be responsive to the testing report(s) received in the seventh message(s).
  • the eighth message can be one or more existing RRC messages that have been extended to carry the information listed above from the first network node to the UE.
  • Example RRC messages include UEInformationRequest and DLInformationTransfer, with details of these example messages discussed below.
  • the first network node can receive from a second network node a ninth message that includes a request for LCM action in relation to AI/ML model(s) deployed at one or more UEs (e.g., served by the first network node).
  • the LCM actions can include any of the examples discussed above in relation to other messages.
  • OAM second network node
  • the OAM may then trigger LCM actions on the AI/ML model by the first network node (e.g., gNB), including any of the LCM actions discussed above in relation to various messages.
  • the first network node e.g., gNB
  • the first network node can send to the second network node a tenth message that includes a report about LCM actions performed on AI/ML model (s) deployed at one or more UEs, such as by the first network node.
  • the tenth message can be responsive to and/or based on an eighth message received from a UE.
  • Each tenth message can include indications of the LCM actions performed (including success/failure), identifier(s) of AI/ML model(s) on which the LCM actions were performed, and identifier(s) of the UEs in which the AI/ML model(s) subject to the LCM actions are deployed.
  • the LCM actions can include any of the examples discussed above in relation to other messages.
  • the ninth message and/or the tenth message can be signaled over an interface between the first and the second network node, such as X2, Xn, F1, E1, NG, and S1 interfaces that can be part of a 3GPP system.
  • the particular interface used will depend on the particular types of the first and second network nodes, such as the examples listed above.
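  • As a non-limiting illustration, the report carried by the tenth message between the first and second network nodes could be structured as in the following ASN.1 sketch; the names, types, and bounds are assumptions introduced here for illustration and do not correspond to any defined X2AP, XnAP, F1AP, E1AP, NGAP, or S1AP IE:

        -- Hypothetical node-to-node LCM report; all names, types, and bounds are illustrative.
        AIML-LCM-ActionReport ::= SEQUENCE {
            performedActions     SEQUENCE (SIZE (1..8)) OF SEQUENCE {
                lcmAction            ENUMERATED {retraining, testing, verification, validation,
                                                 modification, enabling, disabling, replacement,
                                                 revocation, update, ...},
                outcome              ENUMERATED {success, failure, ...}
            },
            aimlModelIdList      SEQUENCE (SIZE (1..8)) OF OCTET STRING,  -- models subject to the LCM actions
            ueIdList             SEQUENCE (SIZE (1..8)) OF OCTET STRING,  -- UEs at which those models are deployed
            ...
        }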
  • a UE can perform operations that are complementary to the first network node operations discussed above in relation to the first through eighth messages. For example, the UE receives the first message that the first network node sends. In some embodiments, the UE can also send to the first network node an eleventh message that indicates the UE's support for LCM actions by a network node in relation to AI/ML model(s) deployed at the UE.
  • the UE can indicate support for one or more of the following LCM actions by a network node (e.g., the first network node or the second network node):
  • a network node e.g., the first network node or the second network node
  • the UE can indicate support for any of the LCM actions as general support (e.g., overall support of signaling procedures used for LCM of AI/ML models deployed at UE), support for specific LCM features or options, support for specific types of AI/ML models, etc.
  • the UE can indicate support for LCM of AI/ML models that relate to any of the various purposes, use cases, and/or operations mentioned above .
  • the UE can indicate support for LCM of one or more specific types of AI/ML models, such as any of the exemplary types of AI/ML models mentioned above.
  • UE support for network-assisted LCM of AI/ML model deployed at the UE can be specified as an optional feature, e.g., in 3GPP TS 38.306 (V16.7.0).
  • Although Figure 5 shows separate first through eleventh messages, this is only for purposes of illustration. In some embodiments, multiple ones of these messages can be combined into single messages and/or certain ones of these messages may be absent and/or not requested.
  • the request for LCM information (described as being in the second message) can be sent together with test data (described as being in the fifth message) in a single message.
  • the acknowledgement of the information request (described as being in the third message) can be signaled implicitly, such as by the UE sending the fourth message that includes the report of requested LCM information (i.e., the fourth message acts as an implicit acknowledgement).
  • an AI/ML model update can be pushed from the first network node to the UE (or from the second network node to the UE via the first network node) without any prior testing.
  • FIGS. 6-9 show ASN.1 data structures for various fields or information elements (IEs) that can be used in messages used for LCM of AI/ML models deployed at a UE, according to various embodiments of the present disclosure. These are described individually below.
  • IEs information elements
  • Figure 6 shows an ASN.1 data structure for an exemplary UEInformationRequest-r18-IEs field (or IE) that can be added to a UEInformationRequest message to carry various LCM-related information sent by the first network node to the UE.
  • This IE includes an aimlModelInfoReq-r18 field whose presence indicates the UE should report AI/ML model information (e.g., in the fourth message).
  • This IE also includes an aimlModelTestResultInfoReq-r18 field whose presence indicates the UE should report AI/ML model test information (e.g., in the seventh message), as illustrated by the sketch below.
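  • The following sketch is consistent with this description of Figure 6; the two request fields are named in the text, while the surrounding structure, the field types, and the extension field are assumptions added here for illustration:

        -- Sketch of the Figure 6 IE; field types and the extension structure are assumptions.
        UEInformationRequest-r18-IEs ::= SEQUENCE {
            aimlModelInfoReq-r18             ENUMERATED {true}   OPTIONAL, -- presence requests AI/ML model information
            aimlModelTestResultInfoReq-r18   ENUMERATED {true}   OPTIONAL, -- presence requests AI/ML model test results
            nonCriticalExtension             SEQUENCE {}         OPTIONAL
        }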
  • Figure 7 shows an ASN.1 data structure for an exemplary UEInformationResponse-r18-IEs field (or IE) that can be added to a UEInformationResponse message to carry various LCM-related information sent by the UE to the first network node.
  • This IE includes an aimlModelInfoResult-r18 field that can carry the AI/ML model information reported by the UE (e.g., in the fourth message).
  • This IE also includes an aimlModelTestResultInfo-r18 field that can carry the AI/ML model test results reported by the UE (e.g., in the seventh message), as illustrated by the sketch below.
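  • Correspondingly, the following sketch is consistent with this description of Figure 7; the field names come from the text, while the field types (here reusing the hypothetical AIML-ModelInfo-r18 and AIML-ModelTestResult-r18 structures sketched earlier) and the extension field are assumptions added for illustration:

        -- Sketch of the Figure 7 IE; field types and the extension structure are assumptions.
        UEInformationResponse-r18-IEs ::= SEQUENCE {
            aimlModelInfoResult-r18          AIML-ModelInfo-r18         OPTIONAL, -- reported AI/ML model information
            aimlModelTestResultInfo-r18      AIML-ModelTestResult-r18   OPTIONAL, -- reported AI/ML model test results
            nonCriticalExtension             SEQUENCE {}                OPTIONAL
        }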
  • Figure 8 shows an ASN.1 data structure for an exemplary ULInformationTransfer-r18-IEs field (or IE) that can be added to a ULInformationTransfer message to carry various LCM-related information sent by the UE to the first network node.
  • This IE includes an aimlModelInfoTransfer field that can carry the AI/ML model information reported by the UE (e.g., in the fourth message).
  • This IE also includes an aimlModelTestResultInfoTransfer field that can carry the AI/ML model test results reported by the UE (e.g., in the seventh message).
  • Figure 9 shows an ASN.1 data structure for an exemplary DLInformationTransfer-v18-IEs field (or IE) that can be added to a DLInformationTransfer message to carry various LCM-related information sent by the first network node to the UE.
  • This IE includes an aimlModelTransfer field that can carry AI/ML model information sent by the first network node (e.g., in the eighth message), as illustrated by the sketch below.
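  • The following sketch is consistent with this description of Figure 9; only the aimlModelTransfer field name comes from the text, and its type (an opaque container for the model or model-deployment information) and the extension field are assumptions added for illustration:

        -- Sketch of the Figure 9 IE; the field type and extension structure are assumptions.
        DLInformationTransfer-v18-IEs ::= SEQUENCE {
            aimlModelTransfer                OCTET STRING               OPTIONAL, -- container for AI/ML model (deployment) information
            nonCriticalExtension             SEQUENCE {}                OPTIONAL
        }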
  • Figures 10-12 show exemplary methods (e.g., procedures) for a first network node, a UE, and a second network node, respectively.
  • various features of the operations described below correspond to various embodiments described above.
  • the exemplary methods shown in Figures 10-12 can be used cooperatively to provide various benefits, advantages, and/or solutions to problems described herein.
  • Although Figures 10-12 show specific blocks in particular orders, the operations of the exemplary methods can be performed in different orders than shown and can be combined and/or divided into blocks having different functionality than shown. Optional blocks or operations are indicated by dashed lines.
  • Figure 10 shows an exemplary method (e.g., procedure) for LCM of AI/ML models in UEs operating in a RAN, according to various embodiments of the present disclosure.
  • the exemplary method can be performed by a first network node (e.g., base station, eNB, gNB, ng-eNB, etc., or unit/function thereof, CN node, OAM, SMO, etc.) such as described elsewhere herein.
  • the exemplary method can include the first network node transmitting to a UE one or more of the following messages, which are identified below by corresponding block numbers:
  • the exemplary method can include the first network node receiving from the UE one or more of the following messages, which are identified below by corresponding block numbers:
  • the first message indicates support for one or more of the following LCM actions by the network:
  • the second message requests the UE to report one or more of the following LCM information:
  • the fourth message is received but the third message is not received, with the fourth message being an implicit acknowledgement with respect to the LCM information requested by the second message.
  • the second message also requests the UE to perform testing of the identified AI/ML model, and the fifth message is not transmitted.
  • the fifth message (i.e., when transmitted) includes one or more of the following:
  • the fifth message does not include the reference data set and one of the following applies:
  • the fifth message includes an explicit indication for the UE to use a locally-available data set for the requested testing
  • the report in the seventh message includes one or more of the following:
  • the seventh message is received but the sixth message is not received, with the seventh message being an implicit acknowledgement with respect to the testing requested by the fifth message.
  • the exemplary method can also include the operations of blocks 1065-1070, where the first network node can determine that performance of the identified AI/ML model is not satisfactory based on the report in the seventh message, and perform one of the following based on the determination:
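  • Without restating the candidate follow-up actions enumerated in the disclosure, the following minimal Python sketch illustrates how the first network node might map an unsatisfactory test report onto one such action; the metric name, threshold, and action set are assumptions made only for illustration.

```python
# Illustrative sketch of the performance-supervision step of blocks 1065-1070.
# The metric name, threshold, and action set are assumptions, not taken from the disclosure.
from enum import Enum, auto


class LcmAction(Enum):
    NO_ACTION = auto()
    RETRAIN_MODEL = auto()
    DEPLOY_UPDATED_MODEL = auto()
    DISABLE_MODEL = auto()


def supervise_model(reported_accuracy: float, min_accuracy: float = 0.9) -> LcmAction:
    """Map the UE's test report ("seventh message") onto a follow-up LCM action."""
    if reported_accuracy >= min_accuracy:
        return LcmAction.NO_ACTION
    # Performance is not satisfactory: trigger one of the possible follow-ups,
    # e.g., indicate an updated model to the UE in the "eighth message".
    return LcmAction.DEPLOY_UPDATED_MODEL
```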
  • the eighth message includes one or more of the following:
  • the exemplary method can also include the operations of blocks 1080-1090, where the first network node can receive from a second network node a ninth message that includes a request for LCM actions in relation to at least one AI/ML model deployed at one or more UEs, and transmit to the second network node a tenth message that includes a report about LCM actions performed by the first network node on the at least one AI/ML model.
  • the LCM actions requested by the ninth message can include one or more of the following:
  • the report in the tenth message includes LCM actions and/or AI/ML models indicated to the UE by the eighth message.
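  • As a sketch of this node-to-node coordination, the following Python fragment models the ninth message (LCM action request from the second network node) and the tenth message (report from the first network node); all names and fields are hypothetical and not taken from the disclosure.

```python
# Sketch of the node-to-node coordination described above: the second network node
# requests LCM actions ("ninth message") and the first network node reports the
# actions it performed ("tenth message"). All names are hypothetical.
from dataclasses import dataclass, field


@dataclass
class LcmActionRequest:        # "ninth message"
    model_ids: list
    requested_actions: list    # e.g., ["retrain", "deploy_update", "disable"]


@dataclass
class LcmActionReport:         # "tenth message"
    model_ids: list
    performed_actions: list = field(default_factory=list)


def handle_lcm_action_request(req: LcmActionRequest) -> LcmActionReport:
    """First-network-node handler: act on the request and report what was done."""
    performed = []
    for action in req.requested_actions:
        # A real node would trigger signaling towards the affected UEs here
        # (e.g., the "eighth message"); this sketch simply records the action.
        performed.append(action)
    return LcmActionReport(model_ids=req.model_ids, performed_actions=performed)
```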
  • any one of the following can apply:
  • the first and second network nodes are different network nodes in a RAN
  • the first and second network nodes are different units or functions of one network node in a RAN
  • one of the first and second network nodes is a RAN node and the other of the first and second network nodes is one of the following: a core network (CN) node or function, a service management and orchestration (SMO) function, or part of an operations/administration/ maintenance (OAM) system.
  • Figure 11 shows an exemplary method (e.g., procedure) for LCM of AI/ML models, according to various embodiments of the present disclosure.
  • the exemplary method can be performed by a UE (e.g., wireless device) operating in a RAN, such as UEs described elsewhere herein.
  • the exemplary method can include the UE receiving from a first network node one or more of the following messages, which are identified below by corresponding block numbers:
  • the exemplary method can include the UE transmitting to the first network node one or more of the following messages, which are identified below by corresponding block numbers:
  • first, second, third, fourth, fifth, sixth, seventh, eighth, and eleventh messages can have any of the same content and/or characteristics as described above for first network node embodiments.
  • the exemplary method can also include the operations of block 1165, where the UE can perform testing of the identified AI/ML model in accordance with the request in the fifth message.
  • the seventh message can include the results of the testing performed in block 1165.
  • the eighth message is received in response to the seventh message and includes one or more of the following:
  • the exemplary method can also include the operations of block 1190, where the UE can apply the new, updated, retrained, or modified AI/ML model identified in the eighth message for one or more of the following UE operations in the RAN:
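  • The specific UE operations are listed in the disclosure; as a rough UE-side sketch of the testing and model-replacement steps described above (blocks 1165 and 1190), the following Python fragment uses hypothetical names and a placeholder accuracy metric.

```python
# Sketch of the UE-side behavior around blocks 1165 and 1190: test the identified
# model on the provided (or locally available) data set, and apply a new or updated
# model when one is indicated in the "eighth message". All names are hypothetical.
from dataclasses import dataclass
from typing import Callable, Sequence, Tuple


@dataclass
class TestReport:              # results carried in the "seventh message"
    model_id: str
    accuracy: float


def test_model(model_id: str,
               predict: Callable[[float], float],
               data: Sequence[Tuple[float, float]]) -> TestReport:
    """Run the identified model on (input, expected-output) pairs and summarize accuracy."""
    correct = sum(1 for x, expected in data if abs(predict(x) - expected) < 1e-3)
    return TestReport(model_id=model_id, accuracy=correct / max(len(data), 1))


class UeModelStore:
    """Holds the active model and swaps it when the network indicates a replacement."""

    def __init__(self, model_id: str, predict: Callable[[float], float]) -> None:
        self.active_model_id = model_id
        self.predict = predict

    def apply_update(self, new_model_id: str,
                     new_predict: Callable[[float], float]) -> None:
        # Block 1190: apply the new/updated/retrained model identified in the eighth message.
        self.active_model_id = new_model_id
        self.predict = new_predict
```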
  • Figure 12 shows an exemplary method (e.g., procedure) for LCM of AI/ML models in UEs operating in a RAN, according to various embodiments of the present disclosure.
  • the exemplary method can be performed by a second network node (e.g., base station, eNB, gNB, ng-eNB, etc., or unit/function thereof, CN node, OAM, SMO, etc.) such as described elsewhere herein.
  • the exemplary method can include the operations of block 1210, where the second network node can transmit to a first network node a ninth message that includes a request for LCM actions in relation to at least one AI/ML model deployed at one or more UEs.
  • the exemplary method can also include the operations of block 1220, where the second network node can receive from the first network node a tenth message that includes a report about LCM actions performed by the first network node on the at least one AI/ML model.
  • the numbers (e.g., eighth) associated with the respective messages are the same as previously used to describe corresponding messages (e.g., in Figure 5).
  • the eighth and ninth messages can have any of the same content and/or characteristics as described above for first network node embodiments. In various embodiments, any one of the following can apply:
  • the first and second network nodes are different network nodes in a RAN
  • the first and second network nodes are different units or functions of one network node in a RAN
  • one of the first and second network nodes is a RAN node and the other of the first and second network nodes is one of the following: a CN node or function, an SMO function, or part of an OAM system.
  • FIG. 13 shows an example of a communication system 1300 in accordance with some embodiments.
  • the communication system 1300 includes a telecommunication network 1302 that includes an access network 1304, such as a radio access network (RAN), and a core network 1306, which includes one or more core network nodes 1308.
  • telecommunication network 1302 can also include one or more Network Management (NM) nodes 1320, which can be part of an operation support system (OSS), a business support system (BSS), and/or an OAM system.
  • the NM nodes can monitor and/or control operations of other nodes in access network 1304 and core network 1306.
  • NM node 1320 can be configured to communicate with other nodes in access network 1304 and core network 1306 for these purposes.
  • Access network 1304 includes one or more access network nodes, such as network nodes 1310a-b (one or more of which may be generally referred to as network nodes 1310), or any other similar 3GPP access node or non-3GPP access point.
  • Network nodes 1310 facilitate direct or indirect connection of UEs, such as by connecting UEs 1312a-d (one or more of which may be generally referred to as UEs 1312) to the core network 1306 over one or more wireless connections.
  • access network 1304 can include a service management and orchestration (SMO) system or node 1318, which can monitor and/or control operations of the access network nodes 1310. This arrangement can be used, for example, when access network 1304 utilizes an Open RAN (O-RAN) architecture.
  • SMO system 1318 can be configured to communicate with core network 1306 and/or host 1316, as shown in Figure 13.
  • Example wireless communications over a wireless connection include transmitting and/or receiving wireless signals using electromagnetic waves, radio waves, infrared waves, and/or other types of signals suitable for conveying information without the use of wires, cables, or other material conductors.
  • communication system 1300 may include any number of wired or wireless networks, network nodes, UEs, and/or any other components or systems that may facilitate or participate in the communication of data and/or signals whether via wired or wireless connections.
  • Communication system 1300 may include and/or interface with any type of communication, telecommunication, data, cellular, radio network, and/or other similar type of system.
  • UEs 1312 may be any of a wide variety of communication devices, including wireless devices arranged, configured, and/or operable to communicate wirelessly with network nodes 1310 and other communication devices.
  • network nodes 1310 are arranged, capable, configured, and/or operable to communicate directly or indirectly with UEs 1312 and/or with other network nodes or equipment in telecommunication network 1302 to enable and/or provide network access, such as wireless network access, and/or to perform other functions, such as administration in telecommunication network 1302.
  • core network 1306 connects network nodes 1310 to one or more hosts, such as host 1316. These connections may be direct or indirect via one or more intermediary networks or devices. In other examples, network nodes may be directly coupled to hosts.
  • Core network 1306 includes one or more core network nodes (e.g., core network node 1308) that are structured with hardware and software components. Features of these components may be substantially similar to those described with respect to the UEs, network nodes, and/or hosts, such that the descriptions thereof are generally applicable to the corresponding components of the core network node 1308.
  • Example core network nodes include functions of one or more of a Mobile Switching Center (MSC), Mobility Management Entity (MME), Home Subscriber Server (HSS), Access and Mobility Management Function (AMF), Session Management Function (SMF), Authentication Server Function (AUSF), Subscription Identifier De-concealing function (SIDF), Unified Data Management (UDM), Security Edge Protection Proxy (SEPP), Network Exposure Function (NEF), and/or a User Plane Function (UPF).
  • Host 1316 may be under the ownership or control of a service provider other than an operator or provider of access network 1304 and/or telecommunication network 1302, and may be operated by the service provider or on behalf of the service provider.
  • Host 1316 may host a variety of applications to provide one or more services. Examples of such applications include live and pre-recorded audio/video content, data collection services such as retrieving and compiling data on various ambient conditions detected by a plurality of UEs, analytics functionality, social media, functions for controlling or otherwise interacting with remote devices, functions for an alarm and surveillance center, or any other such function performed by a server.
  • communication system 1300 of Figure 13 enables connectivity between the UEs, network nodes, and hosts.
  • the communication system may be configured to operate according to predefined rules or procedures, such as specific standards that include, but are not limited to: Global System for Mobile Communications (GSM); Universal Mobile Telecommunications System (UMTS); Long Term Evolution (LTE), and/or other suitable 2G, 3G, 4G, 5G standards, or any applicable future generation standard (e.g., 6G); wireless local area network (WLAN) standards, such as the Institute of Electrical and Electronics Engineers (IEEE) 802.11 standards (WiFi); and/or any other appropriate wireless communication standard, such as the Worldwide Interoperability for Microwave Access (WiMax), Bluetooth, Z-Wave, Near Field Communication (NFC), ZigBee, LiFi, and/or any low-power wide-area network (LPWAN) standards such as LoRa and Sigfox.
  • telecommunication network 1302 is a cellular network that implements 3GPP standardized features. Accordingly, telecommunication network 1302 may support network slicing to provide different logical networks to different devices that are connected to telecommunication network 1302. For example, telecommunication network 1302 may provide Ultra Reliable Low Latency Communication (URLLC) services to some UEs, while providing Enhanced Mobile Broadband (eMBB) services to other UEs, and/or Massive Machine Type Communication (mMTC)/Massive IoT services to yet further UEs.
  • UEs 1312 are configured to transmit and/or receive information without direct human interaction.
  • a UE may be designed to transmit information to access network 1304 on a predetermined schedule, when triggered by an internal or external event, or in response to requests from access network 1304.
  • a UE may be configured for operating in single- or multi-RAT or multi-standard mode.
  • a UE may operate with any one or combination of Wi-Fi, NR (New Radio) and LTE, i.e., being configured for multi-radio dual connectivity (MR-DC), such as E-UTRAN (Evolved-UMTS Terrestrial Radio Access Network) New Radio - Dual Connectivity (EN-DC).
  • hub 1314 communicates with access network 1304 to facilitate indirect communication between one or more UEs (e.g., UE 1312c and/or 1312d) and network nodes (e.g., network node 1310b).
  • hub 1314 may be a controller, router, content source and analytics, or any of the other communication devices described herein regarding UEs.
  • hub 1314 may be a broadband router enabling access to core network 1306 for the UEs.
  • hub 1314 may be a controller that sends commands or instructions to one or more actuators in the UEs. Commands or instructions may be received from the UEs, network nodes 1310, or by executable code, script, process, or other instructions in hub 1314.
  • hub 1314 may be a data collector that acts as temporary storage for UE data and, in some embodiments, may perform analysis or other processing of the data.
  • hub 1314 may be a content source. For example, for a UE that is a VR headset, display, loudspeaker or other media delivery device, hub 1314 may retrieve VR assets, video, audio, or other media or data related to sensory information via a network node, which hub 1314 then provides to the UE either directly, after performing local processing, and/or after adding additional local content.
  • hub 1314 acts as a proxy server or orchestrator for the UEs, in particular if one or more of the UEs are low-energy IoT devices.
  • Hub 1314 may have a constant/persistent or intermittent connection to the network node 1310b. Hub 1314 may also allow for a different communication scheme and/or schedule between hub 1314 and UEs (e.g., UE 1312c and/or 1312d), and between hub 1314 and core network 1306. In other examples, hub 1314 is connected to core network 1306 and/or one or more UEs via a wired connection. Moreover, hub 1314 may be configured to connect to an M2M service provider over access network 1304 and/or to another UE over a direct connection. In some scenarios, UEs may establish a wireless connection with network nodes 1310 while still connected via hub 1314 via a wired or wireless connection.
  • hub 1314 may be a dedicated hub - that is, a hub whose primary function is to route communications to/from the UEs from/to the network node 1310b.
  • hub 1314 may be a non-dedicated hub - that is, a device which is capable of operating to route communications between the UEs and network node 1310b, but which is additionally capable of operating as a communication start and/or end point for certain data channels.
  • FIG 14 shows a UE 1400 in accordance with some embodiments.
  • Examples of a UE include, but are not limited to, a smart phone, mobile phone, cell phone, voice over IP (VoIP) phone, wireless local loop phone, desktop computer, personal digital assistant (PDA), wireless camera, gaming console or device, music storage device, playback appliance, wearable terminal device, wireless endpoint, mobile station, tablet, laptop, laptop-embedded equipment (LEE), laptop-mounted equipment (LME), smart device, wireless customer-premise equipment (CPE), vehicle-mounted or vehicle embedded/integrated wireless device, etc.
  • Other examples include any UE identified by 3GPP, including a narrow band internet of things (NB-IoT) UE, a machine type communication (MTC) UE, and/or an enhanced MTC (eMTC) UE.
  • a UE may support device-to-device (D2D) communication, for example by implementing a 3GPP standard for sidelink communication, Dedicated Short-Range Communication (DSRC), vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), or vehicle-to-everything (V2X).
  • a UE may not necessarily have a user in the sense of a human user who owns and/or operates the relevant device.
  • a UE may represent a device that is intended for sale to, or operation by, a human user but which may not, or which may not initially, be associated with a specific human user (e.g., a smart sprinkler controller).
  • a UE may represent a device that is not intended for sale to, or operation by, an end user.
  • UE 1400 includes processing circuitry 1402 that is operatively coupled via a bus 1404 to an input/output interface 1406, a power source 1408, a memory 1410, a communication interface 1412, and/or any other component, or any combination thereof.
  • Certain UEs may utilize all or a subset of the components shown in Figure 14. The level of integration between the components may vary from one UE to another UE. Further, certain UEs may contain multiple instances of a component, such as multiple processors, memories, transceivers, transmitters, receivers, etc.
  • Processing circuitry 1402 is configured to process instructions and data and may be configured to implement any sequential state machine operative to execute instructions stored as machine-readable computer programs in memory 1410.
  • Processing circuitry 1402 may be implemented as one or more hardware-implemented state machines (e.g., in discrete logic, field-programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), etc.); programmable logic together with appropriate firmware; one or more stored computer programs, general-purpose processors, such as a microprocessor or digital signal processor (DSP), together with appropriate software; or any combination of the above.
  • processing circuitry 1402 may include multiple central processing units (CPUs).
  • the input/output interface 1406 may be configured to provide an interface or interfaces to an input device, output device, or one or more input and/or output devices.
  • Examples of an output device include a speaker, a sound card, a video card, a display, a monitor, a printer, an actuator, an emitter, a smartcard, another output device, or any combination thereof.
  • An input device may allow a user to capture information into UE 1400.
  • Examples of an input device include a touch-sensitive or presence-sensitive display, a camera (e.g., a digital camera, a digital video camera, a web camera, etc.), a microphone, a sensor, a mouse, a trackball, a directional pad, a trackpad, a scroll wheel, a smartcard, and the like.
  • the presence-sensitive display may include a capacitive or resistive touch sensor to sense input from a user.
  • a sensor may be, for instance, an accelerometer, a gyroscope, a tilt sensor, a force sensor, a magnetometer, an optical sensor, a proximity sensor, a biometric sensor, etc., or any combination thereof.
  • An output device may use the same type of interface port as an input device. For example, a Universal Serial Bus (USB) port may be used to provide an input device and an output device.
  • power source 1408 is structured as a battery or battery pack. Other types of power sources, such as an external power source (e.g., an electricity outlet), photovoltaic device, or power cell, may be used. Power source 1408 may further include power circuitry for delivering power from power source 1408 itself, and/or an external power source, to the various parts of UE 1400 via input circuitry or an interface such as an electrical power cable. Delivering power may be, for example, for charging of power source 1408. Power circuitry may perform any formatting, converting, or other modification to the power from power source 1408 to make the power suitable for the respective components of UE 1400 to which power is supplied.
  • Memory 1410 may be or be configured to include memory such as random access memory (RAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), magnetic disks, optical disks, hard disks, removable cartridges, flash drives, and so forth.
  • memory 1410 includes one or more application programs 1414, such as an operating system, web browser application, a widget, gadget engine, or other application, and corresponding data 1416.
  • Memory 1410 may store, for use by UE 1400, any of a variety of various operating systems or combinations of operating systems.
  • Memory 1410 may be configured to include a number of physical drive units, such as redundant array of independent disks (RAID), flash memory, USB flash drive, external hard disk drive, thumb drive, pen drive, key drive, high-density digital versatile disc (HD-DVD) optical disc drive, internal hard disk drive, Blu-Ray optical disc drive, holographic digital data storage (HDDS) optical disc drive, external mini-dual in-line memory module (DIMM), synchronous dynamic random access memory (SDRAM), external micro-DIMM SDRAM, smartcard memory such as tamper resistant module in the form of a universal integrated circuit card (UICC) including one or more subscriber identity modules (SIMs), such as a USIM and/or ISIM, other memory, or any combination thereof.
  • the UICC may for example be an embedded UICC (eUICC), integrated UICC (IUICC), or a removable UICC commonly known as a 'SIM card'.
  • Memory 1410 may allow UE 1400 to access instructions, application programs and the like, stored on transitory or non-transitory memory media, to off-load data, or to upload data.
  • An article of manufacture, such as one utilizing a communication system may be tangibly embodied as or in memory 1410, which may be or comprise a device-readable storage medium.
  • Processing circuitry 1402 may be configured to communicate with an access network or other network using communication interface 1412.
  • Communication interface 1412 may comprise one or more communication subsystems and may include or be communicatively coupled to an antenna 1422.
  • Communication interface 1412 may include one or more transceivers used to communicate, such as by communicating with one or more remote transceivers of another device capable of wireless communication (e.g., another UE or a network node in an access network).
  • Each transceiver may include a transmitter 1418 and/or a receiver 1420 appropriate to provide network communications (e.g., optical, electrical, frequency allocations, and so forth).
  • transmitter 1418 and receiver 1420 may be coupled to one or more antennas (e.g., antenna 1422) and may share circuit components, software or firmware, or alternatively be implemented separately.
  • communication functions of communication interface 1412 may include cellular communication, Wi-Fi communication, LPWAN communication, data communication, voice communication, multimedia communication, short-range communications such as Bluetooth, near-field communication, location-based communication such as the use of the global positioning system (GPS) to determine a location, another like communication function, or any combination thereof.
  • Communications may be implemented according to one or more communication protocols and/or standards, such as IEEE 802.11, Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), GSM, LTE, New Radio (NR), UMTS, WiMax, Ethernet, transmission control protocol/internet protocol (TCP/IP), synchronous optical networking (SONET), Asynchronous Transfer Mode (ATM), QUIC, Hypertext Transfer Protocol (HTTP), and so forth.
  • a UE may provide an output of data captured by its sensors, through its communication interface 1412, via a wireless connection to a network node.
  • Data captured by sensors of a UE can be communicated through a wireless connection to a network node via another UE.
  • the output may be periodic (e.g., once every 15 minutes if it reports the sensed temperature), random (e.g., to even out the load from reporting from several sensors), in response to a triggering event (e.g., an alert is sent when moisture is detected), in response to a request (e.g., a user initiated request), or a continuous stream (e.g., a live video feed of a patient).
  • a UE comprises an actuator, a motor, or a switch, related to a communication interface configured to receive wireless input from a network node via a wireless connection.
  • the states of the actuator, the motor, or the switch may change.
  • the UE may comprise a motor that adjusts the control surfaces or rotors of a drone in flight according to the received input, or a robotic arm performing a medical procedure according to the received input.
  • a UE, when in the form of an Internet of Things (IoT) device, may be a device for use in one or more application domains, these domains comprising, but not limited to, city wearable technology, extended industrial application and healthcare.
  • Examples of an IoT device are a device which is or which is embedded in: a connected refrigerator or freezer, a TV, a connected lighting device, an electricity meter, a robot vacuum cleaner, a voice controlled smart speaker, a home security camera, a motion detector, a thermostat, a smoke detector, a door/window sensor, a flood/moisture sensor, an electrical door lock, a connected doorbell, an air conditioning system like a heat pump, an autonomous vehicle, a surveillance system, a weather monitoring device, a vehicle parking monitoring device, an electric vehicle charging station, a smart watch, a fitness tracker, a head-mounted display for Augmented Reality (AR) or Virtual Reality (VR), a wearable for tactile augmentation or sensory enhancement, a water sprinkler, an animal- or item-tracking device, etc.
  • a UE may represent a machine or other device that performs monitoring and/or measurements, and transmits the results of such monitoring and/or measurements to another UE and/or a network node.
  • the UE may in this case be an M2M device, which may in a 3GPP context be referred to as an MTC device.
  • the UE may implement the 3GPP NB-IoT standard.
  • a UE may represent a vehicle, such as a car, a bus, a truck, a ship and an airplane, or other equipment that is capable of monitoring and/or reporting on its operational status or other functions associated with its operation.
  • a first UE might be or be integrated in a drone and provide the drone's speed information (obtained through a speed sensor) to a second UE that is a remote controller operating the drone.
  • the first UE may adjust the throttle on the drone (e.g., by controlling an actuator) to increase or decrease the drone's speed.
  • the first and/or the second UE can also include more than one of the functionalities described above.
  • a UE might comprise the sensor and the actuator, and handle communication of data for both the speed sensor and the actuators.
  • Figure 15 shows a network node 1500 in accordance with some embodiments.
  • network nodes include, but are not limited to, access points (e.g., radio access points) and base stations (e.g., radio base stations, Node Bs, eNBs, gNBs, etc.).
  • Base stations may be categorized based on the amount of coverage they provide (or, stated differently, their transmit power level) and so, depending on the provided amount of coverage, may be referred to as femto base stations, pico base stations, micro base stations, or macro base stations.
  • a base station may be a relay node or a relay donor node controlling a relay.
  • a network node may also include one or more (or all) parts of a distributed radio base station such as centralized digital units and/or remote radio units (RRUs), sometimes referred to as Remote Radio Heads (RRHs). Such remote radio units may or may not be integrated with an antenna as an antenna integrated radio.
  • Parts of a distributed radio base station may also be referred to as nodes in a distributed antenna system (DAS).
  • network nodes include multiple transmission point (multi-TRP) 5G access nodes, multistandard radio (MSR) equipment such as MSR BSs, network controllers such as radio network controllers (RNCs) or base station controllers (BSCs), base transceiver stations (BTSs), transmission points, transmission nodes, multi- cell/multicast coordination entities (MCEs), Operation and Maintenance (O&M) nodes, Operations Support System (OSS) nodes, Self-Organizing Network (SON) nodes, positioning nodes (e.g., Evolved Serving Mobile Location Centers (E-SMLCs)), and/or Minimization of Drive Tests (MDTs).
  • Network node 1500 includes processing circuitry 1502, memory 1504, communication interface 1506, and power source 1508.
  • Network node 1500 may be composed of multiple physically separate components (e.g., a NodeB component and a RNC component, or a BTS component and a BSC component, etc.), which may each have their own respective components.
  • when network node 1500 comprises multiple separate components (e.g., BTS and BSC components), one or more of the separate components may be shared among several network nodes.
  • a single RNC may control multiple NodeBs.
  • each unique NodeB and RNC pair may in some instances be considered a single separate network node.
  • network node 1500 may be configured to support multiple radio access technologies (RATs).
  • Network node 1500 may also include multiple sets of the various illustrated components for different wireless technologies integrated into network node 1500, for example GSM, WCDMA, LTE, NR, WiFi, Zigbee, Z-wave, LoRaWAN, Radio Frequency Identification (RFID) or Bluetooth wireless technologies. These wireless technologies may be integrated into the same or different chip or set of chips and other components within network node 1500.
  • Processing circuitry 1502 may comprise a combination of one or more of a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application-specific integrated circuit, field programmable gate array, or any other suitable computing device, resource, or combination of hardware, software and/or encoded logic operable, either alone or in conjunction with other network node 1500 components such as memory 1504, to provide network node 1500 functionality.
  • processing circuitry 1502 includes a system on a chip (SOC). In some embodiments, processing circuitry 1502 includes one or more of radio frequency (RF) transceiver circuitry 1512 and baseband processing circuitry 1514. In some embodiments, RF transceiver circuitry 1512 and baseband processing circuitry 1514 may be on separate chips (or sets of chips), boards, or units, such as radio units and digital units. In alternative embodiments, part or all of RF transceiver circuitry 1512 and baseband processing circuitry 1514 may be on the same chip or set of chips, boards, or units.
  • Memory 1504 may comprise any form of volatile or non-volatile computer-readable memory including, without limitation, persistent storage, solid-state memory, remotely mounted memory, magnetic media, optical media, random access memory (RAM), read-only memory (ROM), mass storage media (for example, a hard disk), removable storage media (for example, a flash drive, a Compact Disk (CD) or a Digital Video Disk (DVD)), and/or any other volatile or non-volatile, non-transitory device-readable and/or computer-executable memory devices that store information, data, and/or instructions that may be used by processing circuitry 1502.
  • Memory 1504 may store any suitable instructions, data, or information, including a computer program, software, an application including one or more of logic, rules, code, tables, and/or other instructions (collectively denoted computer program product 1504a) capable of being executed by processing circuitry 1502 and utilized by network node 1500. Memory 1504 may be used to store any calculations made by processing circuitry 1502 and/or any data received via communication interface 1506. In some embodiments, processing circuitry 1502 and memory 1504 are integrated.
  • Communication interface 1506 is used in wired or wireless communication of signaling and/or data between a network node, access network, and/or UE. As illustrated, communication interface 1506 comprises port(s)/terminal(s) 1516 to send and receive data, for example to and from a network over a wired connection. Communication interface 1506 also includes radio front-end circuitry 1518 that may be coupled to, or in certain embodiments a part of, antenna 1510. Radio front-end circuitry 1518 comprises filters 1520 and amplifiers 1522. Radio front-end circuitry 1518 may be connected to an antenna 1510 and processing circuitry 1502. The radio front-end circuitry may be configured to condition signals communicated between antenna 1510 and processing circuitry 1502.
  • Radio front-end circuitry 1518 may receive digital data that is to be sent out to other network nodes or UEs via a wireless connection. Radio frontend circuitry 1518 may convert the digital data into a radio signal having the appropriate channel and bandwidth parameters using a combination of filters 1520 and/or amplifiers 1522. The radio signal may then be transmitted via antenna 1510. Similarly, when receiving data, antenna 1510 may collect radio signals which are then converted into digital data by radio front-end circuitry 1518. The digital data may be passed to processing circuitry 1502. In other embodiments, the communication interface may comprise different components and/or different combinations of components.
  • network node 1500 does not include separate radio front-end circuitry 1518, instead, processing circuitry 1502 includes radio front-end circuitry and is connected to antenna 1510. Similarly, in some embodiments, all or some of RF transceiver circuitry 1512 is part of communication interface 1506. In still other embodiments, communication interface 1506 includes one or more ports or terminals 1516, radio front-end circuitry 1518, and RF transceiver circuitry 1512, as part of a radio unit (not shown), and communication interface 1506 communicates with baseband processing circuitry 1514, which is part of a digital unit (not shown).
  • Antenna 1510 may include one or more antennas, or antenna arrays, configured to send and/or receive wireless signals. Antenna 1510 may be coupled to radio front-end circuitry 1518 and may be any type of antenna capable of transmitting and receiving data and/or signals wirelessly. In certain embodiments, antenna 1510 is separate from network node 1500 and connectable to network node 1500 through an interface or port.
  • Antenna 1510, communication interface 1506, and/or processing circuitry 1502 may be configured to perform any receiving operations and/or certain obtaining operations described herein as being performed by the network node. Any information, data and/or signals may be received from a UE, another network node and/or any other network equipment. Similarly, antenna 1510, communication interface 1506, and/or processing circuitry 1502 may be configured to perform any transmitting operations described herein as being performed by the network node. Any information, data and/or signals may be transmitted to a UE, another network node and/or any other network equipment.
  • Power source 1508 provides power to the various components of network node 1500 in a form suitable for the respective components (e.g., at a voltage and current level needed for each respective component). Power source 1508 may further comprise, or be coupled to, power management circuitry to supply the components of network node 1500 with power for performing the functionality described herein.
  • network node 1500 may be connectable to an external power source (e.g., the power grid, an electricity outlet) via an input circuitry or interface such as an electrical cable, whereby the external power source supplies power to power circuitry of power source 1508.
  • power source 1508 may comprise a source of power in the form of a battery or battery pack which is connected to, or integrated in, power circuitry. The battery may provide backup power should the external power source fail.
  • Embodiments of network node 1500 may include additional components beyond those shown in Figure 15 for providing certain aspects of the network node's functionality, including any of the functionality described herein and/or any functionality necessary to support the subject matter described herein.
  • network node 1500 may include user interface equipment to allow input of information into network node 1500 and to allow output of information from network node 1500. This may allow a user to perform diagnostic, maintenance, repair, and other administrative functions for network node 1500.
  • FIG 16 is a block diagram of a host 1600, which may be an embodiment of host 1316 of Figure 13, in accordance with various aspects described herein.
  • host 1600 may be or comprise various combinations of hardware and/or software, including a standalone server, a blade server, a cloud-implemented server, a distributed server, a virtual machine, a container, or processing resources in a server farm.
  • Host 1600 may provide one or more services to one or more UEs.
  • Host 1600 includes processing circuitry 1602 that is operatively coupled via a bus 1604 to an input/output interface 1606, a network interface 1608, a power source 1610, and a memory 1612.
  • Other components may be included in other embodiments. Features of these components may be substantially similar to those described with respect to the devices of previous figures, such as Figures 14 and 15, such that the descriptions thereof are generally applicable to the corresponding components of host 1600.
  • Memory 1612 may include one or more computer programs including one or more host application programs 1614 and data 1616, which may include user data, e.g., data generated by a UE for host 1600 or data generated by host 1600 for a UE.
  • host 1600 may utilize only a subset or all of the components shown.
  • Host application programs 1614 may be implemented in a container-based architecture and may provide support for video codecs (e.g., Versatile Video Coding (VVC), High Efficiency Video Coding (HEVC), Advanced Video Coding (AVC), MPEG, VP9) and audio codecs (e.g., FLAC, Advanced Audio Coding (AAC), MPEG, G.711), including transcoding for multiple different classes, types, or implementations of UEs (e.g., handsets, desktop computers, wearable display systems, heads-up display systems).
  • Host application programs 1614 may also provide for user authentication and licensing checks and may periodically report health, routes, and content availability to a central node, such as a device in or on the edge of a core network.
  • host 1600 may select and/or indicate a different host for over-the-top services for a UE.
  • Host application programs 1614 may support various protocols, such as the HTTP Live Streaming (HLS) protocol, Real-Time Messaging Protocol (RTMP), Real-Time Streaming Protocol (RTSP), Dynamic Adaptive Streaming over HTTP (MPEG-DASH), etc.
  • FIG 17 is a block diagram illustrating a virtualization environment 1700 in which functions implemented by some embodiments may be virtualized.
  • virtualizing means creating virtual versions of apparatuses or devices which may include virtualizing hardware platforms, storage devices and networking resources.
  • virtualization can be applied to any device described herein, or components thereof, and relates to an implementation in which at least a portion of the functionality is implemented as one or more virtual components.
  • Some or all of the functions described herein may be implemented as virtual components executed by one or more virtual machines (VMs) implemented in one or more virtual environments 1700 hosted by one or more of hardware nodes, such as a hardware computing device that operates as a network node, UE, core network node, or host.
  • when the virtual node does not require radio connectivity (e.g., a core network node or host), the node may be entirely virtualized.
  • Applications 1702 (which may alternatively be called software instances, virtual appliances, network functions, virtual nodes, virtual network functions, etc.) are run in the virtualization environment 1700 to implement some of the features, functions, and/or benefits of some of the embodiments disclosed herein.
  • Hardware 1704 includes processing circuitry, memory that stores software and/or instructions (collectively denoted computer program product 1704a) executable by hardware processing circuitry, and/or other hardware devices as described herein, such as a network interface, input/output interface, and so forth.
  • Software may be executed by the processing circuitry to instantiate one or more virtualization layers 1706 (also referred to as hypervisors or virtual machine monitors (VMMs)), provide VMs 1708a-b (one or more of which may be generally referred to as VMs 1708), and/or perform any of the functions, features and/or benefits described in relation with some embodiments described herein.
  • the virtualization layer 1706 may present a virtual operating platform that appears like networking hardware to the VMs 1708.
  • VMs 1708 comprise virtual processing, virtual memory, virtual networking or interface and virtual storage, and may be run by a corresponding virtualization layer 1706.
  • Applications 1702 may be implemented on one or more of VMs 1708, and the implementations may be made in different ways.
  • Virtualization of the hardware is in some contexts referred to as network function virtualization (NFV).
  • NFV may be used to consolidate many network equipment types onto industry standard high volume server hardware, physical switches, and physical storage, which can be located in data centers, and customer premise equipment.
  • a VM 1708 may be a software implementation of a physical machine that runs programs as if they were executing on a physical, non-virtualized machine.
  • Each of the VMs 1708, and that part of hardware 1704 that executes that VM, be it hardware dedicated to that VM and/or hardware shared by that VM with others of the VMs, forms a separate virtual network element.
  • a virtual network function is responsible for handling specific network functions that run in one or more VMs 1708 on top of the hardware 1704 and corresponds to the application 1702.
  • Hardware 1704 may be implemented in a standalone network node with generic or specific components. Hardware 1704 may implement some functions via virtualization. Alternatively, hardware 1704 may be part of a larger cluster of hardware (e.g., such as in a data center or CPE) where many hardware nodes work together and are managed via management and orchestration 1710, which, among others, oversees lifecycle management of applications 1702.
  • hardware 1704 is coupled to one or more radio units that each include one or more transmitters and one or more receivers that may be coupled to one or more antennas. Radio units may communicate directly with other hardware nodes via one or more appropriate network interfaces and may be used in combination with the virtual components to provide a virtual node with radio capabilities, such as a radio access node or a base station.
  • some signaling can be provided with the use of a control system 1712 which may alternatively be used for communication between hardware nodes and radio units.
  • Figure 18 shows a communication diagram of a host 1802 communicating via a network node 1804 with a UE 1806 over a partially wireless connection in accordance with some embodiments.
  • Like host 1600, embodiments of host 1802 include hardware, such as a communication interface, processing circuitry, and memory. Host 1802 also includes software, which is stored in or accessible by host 1802 and executable by the processing circuitry.
  • the software includes a host application that may be operable to provide a service to a remote user, such as UE 1806 connecting via an over-the-top (OTT) connection 1850 extending between UE 1806 and host 1802.
  • a host application may provide user data which is transmitted using OTT connection 1850.
  • Network node 1804 includes hardware enabling it to communicate with host 1802 and UE 1806.
  • Connection 1860 may be direct or pass through a core network (like core network 1306 of Figure 13) and/or one or more other intermediate networks, such as one or more public, private, or hosted networks.
  • an intermediate network may be a backbone network or the Internet.
  • UE 1806 includes hardware and software, which is stored in or accessible by UE 1806 and executable by the UE's processing circuitry.
  • the software includes a client application, such as a web browser or operator-specific "app” that may be operable to provide a service to a human or non-human user via UE 1806 with the support of host 1802.
  • an executing host application may communicate with the executing client application via OTT connection 1850 terminating at UE 1806 and host 1802.
  • the UE's client application may receive request data from the host's host application and provide user data in response to the request data.
  • OTT connection 1850 may transfer both the request data and the user data.
  • the UE's client application may interact with the user to generate the user data that it provides to the host application through OTT connection 1850.
  • OTT connection 1850 may extend via a connection 1860 between host 1802 and network node 1804 and via a wireless connection 1870 between network node 1804 and UE 1806 to provide the connection between host 1802 and UE 1806.
  • Connection 1860 and wireless connection 1870, over which OTT connection 1850 may be provided, have been drawn abstractly to illustrate the communication between host 1802 and UE 1806 via network node 1804, without explicit reference to any intermediary devices and the precise routing of messages via these devices.
  • host 1802 provides user data, which may be performed by executing a host application.
  • the user data is associated with a particular human user interacting with UE 1806.
  • the user data is associated with a UE 1806 that shares data with host 1802 without explicit human interaction.
  • host 1802 initiates a transmission carrying the user data towards UE 1806.
  • Host 1802 may initiate the transmission responsive to a request transmitted by UE 1806. The request may be caused by human interaction with UE 1806 or by operation of the client application executing on UE 1806.
  • the transmission may pass via network node 1804, in accordance with the teachings of the embodiments described throughout this disclosure.
  • network node 1804 transmits to UE 1806 the user data that was carried in the transmission that host 1802 initiated, in accordance with the teachings of the embodiments described throughout this disclosure.
  • UE 1806 receives the user data carried in the transmission, which may be performed by a client application executed on UE 1806 associated with the host application executed by host 1802.
  • UE 1806 executes a client application which provides user data to host 1802.
  • the user data may be provided in reaction or response to the data received from host 1802.
  • UE 1806 may provide user data, which may be performed by executing the client application.
  • the client application may further consider user input received from the user via an input/output interface of UE 1806. Regardless of the specific manner in which the user data was provided, UE 1806 initiates, in step 1818, transmission of the user data towards host 1802 via network node 1804.
  • network node 1804 receives user data from UE 1806 and initiates transmission of the received user data towards host 1802.
  • host 1802 receives the user data carried in the transmission initiated by UE 1806.
  • embodiments described herein can provide mobile network operators (MNOs) a standardized and/or uniform way for network nodes to handle LCM actions for AI/ML models deployed at UEs.
  • Embodiments can enable an MNO to monitor performance of AI/ML models deployed at UEs, thereby ensuring that UEs operate with AI/ML model(s) that provide good performance and/or are suitable for prevailing radio environments (e.g., signal levels, interference, multipath, etc.), traffic conditions, mobility scenarios, load conditions, etc. in cells served by network nodes.
  • embodiments facilitate improved UE and network performance in terms of throughput, spectral efficiency, and/or energy usage.
  • OTT services will experience improved network performance, which increases the value of such OTT services to end users and service providers.
  • factory status information may be collected and analyzed by host 1802.
  • host 1802 may process audio and video data which may have been retrieved from a UE for use in creating maps.
  • host 1802 may collect and analyze real-time data to assist in controlling vehicle congestion (e.g., controlling traffic lights).
  • host 1802 may store surveillance video uploaded by a UE.
  • host 1802 may store or control access to media content such as video, audio, VR or AR which it can broadcast, multicast or unicast to UEs.
  • host 1802 may be used for energy pricing, remote control of non-time critical electrical load to balance power generation needs, location services, presentation services (such as compiling diagrams etc. from data collected from remote devices), or any other function of collecting, retrieving, storing, analyzing and/or transmitting data.
  • a measurement procedure may be provided for the purpose of monitoring data rate, latency and other factors on which the one or more embodiments improve.
  • the measurement procedure and/or the network functionality for reconfiguring the OTT connection may be implemented in software and hardware of host 1802 and/or UE 1806.
  • sensors (not shown) may be deployed in or in association with other devices through which OTT connection 1850 passes; the sensors may participate in the measurement procedure by supplying values of the monitored quantities exemplified above, or supplying values of other physical quantities from which software may compute or estimate the monitored quantities.
  • Reconfiguring OTT connection 1850 may include changing the message format, retransmission settings, preferred routing, etc.; the reconfiguring need not directly alter the operation of network node 1804. Such procedures and functionalities may be known and practiced in the art.
  • measurements may involve proprietary UE signaling that facilitates measurement of throughput, propagation times, latency, and the like by host 1802. The measurements may be implemented by having software transmit messages, in particular empty or 'dummy' messages, over OTT connection 1850 while monitoring propagation times, errors, etc.
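As one way to picture the measurement procedure described in the preceding bullets, the sketch below times empty or 'dummy' messages and derives rough latency and throughput estimates. It is a minimal illustration under stated assumptions: the function names, probe sizes, and the loopback stand-in for OTT connection 1850 are hypothetical, not part of the disclosed embodiments.

```python
# Hypothetical sketch: the loopback stub stands in for sending a 'dummy' message
# over the OTT connection and receiving its echo from the peer.
import time
import statistics

def loopback_round_trip(payload: bytes) -> bytes:
    """Stand-in transport; a real deployment would use the host/UE OTT connection."""
    return payload

def measure_ott_connection(round_trip=loopback_round_trip,
                           probe_sizes=(0, 1_000, 100_000),
                           probes_per_size=5):
    """Estimate propagation time and throughput by timing dummy messages."""
    results = {}
    for size in probe_sizes:
        payload = bytes(size)
        rtts = []
        for _ in range(probes_per_size):
            t0 = time.perf_counter()
            echoed = round_trip(payload)
            rtt = time.perf_counter() - t0
            if echoed != payload:            # count as a transmission error
                continue
            rtts.append(rtt)
        if rtts:
            median_rtt = statistics.median(rtts)
            results[size] = {
                "median_rtt_s": median_rtt,
                # Rough estimate: the payload traverses the connection both ways.
                "throughput_bps": (2 * 8 * size / median_rtt) if size and median_rtt else None,
            }
    return results

if __name__ == "__main__":
    print(measure_ott_connection())
```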
  • the term unit can have conventional meaning in the field of electronics, electrical devices and/or electronic devices and can include, for example, electrical and/or electronic circuitry, devices, modules, processors, memories, logic, solid-state and/or discrete devices, computer programs or instructions for carrying out respective tasks, procedures, computations, outputs, and/or displaying functions, and so on, such as those described herein.
  • any appropriate steps, methods, features, functions, or benefits disclosed herein may be performed through one or more functional units or modules of one or more virtual apparatuses.
  • Each virtual apparatus may comprise a number of these functional units.
  • These functional units may be implemented via processing circuitry, which may include one or more microprocessors or microcontrollers, as well as other digital hardware, which may include digital signal processors (DSPs), special-purpose digital logic, and the like.
  • the processing circuitry may be configured to execute program code stored in memory, which may include one or several types of memory such as Read Only Memory (ROM), Random Access Memory (RAM), cache memory, flash memory devices, optical storage devices, etc.
  • Program code stored in memory includes program instructions for executing one or more telecommunications and/or data communications protocols as well as instructions for carrying out one or more of the techniques described herein.
  • the processing circuitry may be used to cause the respective functional unit to perform corresponding functions according to one or more embodiments of the present disclosure.
  • device and/or apparatus can be represented by a semiconductor chip, a chipset, or a (hardware) module comprising such chip or chipset; this, however, does not exclude the possibility that a functionality of a device or apparatus, instead of being hardware implemented, be implemented as a software module such as a computer program or a computer program product comprising executable software code portions for execution or being run on a processor.
  • functionality of a device or apparatus can be implemented by any combination of hardware and software.
  • a device or apparatus can also be regarded as an assembly of multiple devices and/or apparatuses, whether functionally in cooperation with or independently of each other.
  • devices and apparatuses can be implemented in a distributed fashion throughout a system, so long as the functionality of the device or apparatus is preserved. Such and similar principles are considered as known to a skilled person.
  • Embodiments of the techniques and apparatus described herein also include, but are not limited to, the following enumerated examples:
  • the first message indicates support for one or more of the following LCM actions by the network: training or retraining of a currently deployed AI/ML model; deployment of a new or updated AI/ML model; revocation of a currently deployed AI/ML model; stopping, starting, or restarting use of a currently deployed AI/ML model; enabling or disabling use of a currently deployed AI/ML model; and performance supervision of a currently deployed AI/ML model.
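One way to picture the first message is as a set of flags enumerating the LCM actions listed above, together with the model identifiers they apply to. The sketch below is illustrative only; the enum members, class names, and fields are assumptions rather than a specified message format or encoding.

```python
# Illustrative sketch: the flag members mirror the LCM actions enumerated above;
# the message class and its fields are hypothetical, not a standardized encoding.
from dataclasses import dataclass, field
from enum import Flag, auto
from typing import List

class LcmAction(Flag):
    TRAIN_OR_RETRAIN = auto()
    DEPLOY_NEW_OR_UPDATED = auto()
    REVOKE = auto()
    STOP_START_RESTART = auto()
    ENABLE_DISABLE = auto()
    PERFORMANCE_SUPERVISION = auto()

@dataclass
class FirstMessage:
    """Indicates which network-side LCM actions are supported (first message)."""
    supported_actions: LcmAction
    model_ids: List[str] = field(default_factory=list)

# Example: the network indicates support for model deployment and performance supervision.
msg = FirstMessage(
    supported_actions=LcmAction.DEPLOY_NEW_OR_UPDATED | LcmAction.PERFORMANCE_SUPERVISION,
    model_ids=["csi-compression-v1"],
)
assert LcmAction.PERFORMANCE_SUPERVISION in msg.supported_actions
```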
  • AI/ML model identifier; source of identified AI/ML model; version of identified AI/ML model; services, service groups, service types, and/or network types associated with identified AI/ML model; indication of whether the network is allowed to train or retrain the identified AI/ML model; indication of whether LCM of the identified AI/ML model is enabled and/or supported at the UE; indication of whether testing of the identified AI/ML model is enabled and/or supported at the UE; indication of whether reporting results of UE testing of the identified AI/ML model is enabled, supported, ongoing, or completed at the UE; indication of whether modification or update of the identified AI/ML model is enabled, supported, ongoing, or completed at the UE; indication of whether supervision of the identified AI/ML model is enabled, supported, ongoing, or completed at the UE; one or more conditions or events to be fulfilled before deploying an updated version of the identified AI/ML model at the UE; and
  • AI/ML model identifier; a request to test, verify, validate, or evaluate the identified AI/ML model; a reference data set that the UE can use or is instructed to use to test, verify, validate, or evaluate the AI/ML model as requested; conditions or events associated with performing the requested testing; a request for a report about the requested testing after completion; conditions or events associated with the requested reporting; a request for an indication of performance of the identified AI/ML model; and performance metrics to be reported by the UE, as a result of the requested testing.
  • the fifth message does not include the reference data set; and one of the following applies: the fifth message includes an explicit indication for the UE to use a locally-available data set for the requested testing; or the absence of the reference data set is an implicit indication for the UE to use a locally-available data set for the requested testing.
  • the report in the seventh message includes one or more of the following: one or more of the performance metrics identified in the fifth message, obtained by the UE based on the testing; one or more further performance metrics not identified in the fifth message, obtained by the UE based on the testing; and an indication of one or more reference data sets used for the testing.
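The fifth-to-seventh message flow described in the preceding bullets can be pictured as the UE selecting a data set (the provided reference set, or a locally-available one in its absence), evaluating the identified model, and reporting the requested metrics. The sketch below illustrates that flow under stated assumptions: the data types, the mean-absolute-error metric, and the function names are hypothetical placeholders, not the disclosed procedure or message encoding.

```python
# Hypothetical UE-side sketch: a fifth-message-like request in, a seventh-message-like
# report out; the model is an arbitrary callable and the metric is a simple MAE.
from dataclasses import dataclass
from typing import Callable, List, Optional, Sequence, Tuple

Dataset = Sequence[Tuple[float, float]]   # (input, expected output) pairs

@dataclass
class TestRequest:                        # subset of the fifth message
    model_id: str
    reference_data: Optional[Dataset]     # None -> fall back to locally-available data
    metrics: List[str]

@dataclass
class TestReport:                         # subset of the seventh message
    model_id: str
    metric_values: dict
    dataset_used: str

def handle_test_request(req: TestRequest,
                        model: Callable[[float], float],
                        local_data: Dataset) -> TestReport:
    # Absence of a reference data set is treated as an implicit instruction
    # to use a locally-available data set, per the bullet above.
    data = req.reference_data if req.reference_data is not None else local_data
    errors = [abs(model(x) - y) for x, y in data]
    values = {}
    if "mae" in req.metrics:
        values["mae"] = sum(errors) / len(errors)
    return TestReport(model_id=req.model_id,
                      metric_values=values,
                      dataset_used="reference" if req.reference_data is not None else "local")

# Example usage with a toy model and a locally-available data set.
report = handle_test_request(
    TestRequest("csi-model-1", reference_data=None, metrics=["mae"]),
    model=lambda x: 2 * x,
    local_data=[(1.0, 2.0), (2.0, 4.1)])
```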
  • A10 The method of any of embodiments A6-A9, further comprising: determining that performance of the identified AI/ML model is not satisfactory based on the report in the seventh message; and performing one of the following based on the determination: updating, modifying, or retraining the identified AI/ML model; or training a new AI/ML model to be deployed at the UE as a replacement for the identified AI/ML model.
  • the eighth message includes one or more of the following: a new, updated, retrained, or modified AI/ML model to be deployed at the UE; a request or command to revoke, enable, disable, stop, start, or restart an AI/ML model currently deployed at the UE; and an identifier of each of these AI/ML models currently deployed or to be deployed at the UE.
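The preceding two bullets can be read together as a supervision loop: the network node inspects the reported metrics and, depending on the outcome, either keeps the model running or prepares an eighth message carrying a command or an updated model. The sketch below illustrates one such decision rule; the acceptance threshold, command strings, and placeholder model artifact are assumptions made for illustration only.

```python
# Illustrative network-side sketch of the supervision step and the resulting
# eighth message; threshold and command names are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class EighthMessage:
    model_id: str
    command: Optional[str] = None          # e.g. "revoke", "disable", "restart"
    updated_model: Optional[bytes] = None  # new/updated/retrained model artifact

def supervise(report_metrics: dict, model_id: str,
              mae_threshold: float = 0.1) -> EighthMessage:
    """Decide the LCM action after receiving the seventh-message report."""
    if report_metrics.get("mae", float("inf")) <= mae_threshold:
        # Performance satisfactory: keep the currently deployed model enabled.
        return EighthMessage(model_id=model_id, command="enable")
    # Unsatisfactory: retrain (or train a replacement) and deploy the update.
    retrained = b"...serialized retrained model..."   # placeholder artifact
    return EighthMessage(model_id=model_id, updated_model=retrained)

msg = supervise({"mae": 0.4}, model_id="csi-model-1")
assert msg.updated_model is not None
```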
  • A12 The method of any of embodiments A1-A11, further comprising: receiving from a second network node a ninth message that includes a request for LCM actions in relation to at least one AI/ML model deployed at one or more UEs; and transmitting to the second network node a tenth message that includes a report about LCM actions performed by the first network node on the at least one AI/ML model.
  • the LCM actions requested by the ninth message include one or more of the following: training or retraining of a currently deployed AI/ML model; deployment of a new or updated AI/ML model; revocation of a currently deployed AI/ML model; stopping, starting, or restarting use of a currently deployed AI/ML model; enabling or disabling use of a currently deployed AI/ML model; and performance supervision of a currently deployed AI/ML model.
  • the LCM information included in the eighth message is based on the LCM actions requested by the ninth message; and the report in the tenth message includes LCM actions and/or AI/ML models indicated to the UE by the eighth message.
  • A15 The method of any of embodiments A12-A14, wherein one of the following applies: the first and second network nodes are different network nodes in a RAN; the first and second network nodes are different units or functions of one network node in a RAN; the first and second network nodes are in different RANs; or one of the first and second network nodes is a RAN node and the other of the first and second network nodes is one of the following: a core network (CN) node or function, a service management and orchestration (SMO) function, or part of an operations/administration/maintenance (OAM) system.
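The ninth/tenth message exchange in embodiments A12-A15 can be pictured as a second node (for example an SMO or OAM function) requesting LCM actions and the first node reporting back what it performed. The sketch below is a schematic illustration; the message fields, action strings, and bookkeeping are assumptions, not a specified inter-node interface.

```python
# Hypothetical sketch of the inter-node exchange: a ninth-message-like request in,
# a tenth-message-like report out; only the bookkeeping is shown.
from dataclasses import dataclass, field
from typing import List

@dataclass
class NinthMessage:                  # second node -> first node
    requested_actions: List[str]     # e.g. ["performance_supervision", "retrain"]
    model_ids: List[str]

@dataclass
class TenthMessage:                  # first node -> second node
    performed_actions: List[str] = field(default_factory=list)
    model_ids: List[str] = field(default_factory=list)

def handle_ninth_message(req: NinthMessage) -> TenthMessage:
    """First network node performs (or schedules) the requested actions and reports back."""
    report = TenthMessage(model_ids=list(req.model_ids))
    for action in req.requested_actions:
        for model_id in req.model_ids:
            # Here the first node would build and send the corresponding eighth
            # message(s) towards the affected UEs; only the record is kept here.
            report.performed_actions.append(f"{action}:{model_id}")
    return report

report = handle_ninth_message(
    NinthMessage(requested_actions=["performance_supervision"], model_ids=["csi-model-1"]))
assert report.performed_actions == ["performance_supervision:csi-model-1"]
```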
  • a method performed by a user equipment (UE), operating in a radio access network (RAN), for life cycle management (LCM) of artificial intelligence/machine learning (AI/ML) models comprising: receiving from a first network node one or more of the following: a first message that indicates support for LCM actions by the network in relation to at least one AI/ML model currently deployed at the UE; a second message that requests the UE to report LCM information for at least one AI/ML model currently deployed at the UE; a fifth message that requests the UE to perform testing for at least one AI/ML model currently deployed at the UE; and an eighth message that includes LCM information for at least one AI/ML model currently or to be deployed at the UE; and/or transmitting to the first network node one or more of the following: a third message that includes an acknowledgement or a failure indication with respect to the LCM information requested by the second message; a fourth message that includes at least a portion of the LCM information requested by the second message; a sixth message that includes an acknowledgement
  • the first message indicates support for one or more of the following LCM actions by the network: training or retraining of a currently deployed AI/ML model; deployment of a new or updated AI/ML model; revocation of a currently deployed AI/ML model; stopping, starting, or restarting use of a currently deployed AI/ML model; enabling or disabling use of a currently deployed AI/ML model; and performance supervision of a currently deployed AI/ML model.
  • AI/ML model identifier; source of identified AI/ML model; version of identified AI/ML model; services, service groups, service types, and/or network types associated with identified AI/ML model; indication of whether the network is allowed to train or retrain the identified AI/ML model; indication of whether LCM of the identified AI/ML model is enabled and/or supported at the UE; indication of whether testing of the identified AI/ML model is enabled and/or supported at the UE; indication of whether reporting results of UE testing of the identified AI/ML model is enabled, supported, ongoing, or completed at the UE; indication of whether modification or update of the identified AI/ML model is enabled, supported, ongoing, or completed at the UE; indication of whether supervision of the identified AI/ML model is enabled, supported, ongoing, or completed at the UE; one or more conditions or events to be fulfilled before deploying an updated version of the identified AI/ML model at the UE; and
  • AI/ML model identifier; a request to test, verify, validate, or evaluate the identified AI/ML model; a reference data set that the UE can use or is instructed to use to test, verify, validate, or evaluate the AI/ML model as requested; conditions or events associated with performing the requested testing; a request for a report about the requested testing after completion; conditions or events associated with the requested reporting; a request for an indication of performance of the identified AI/ML model; and performance metrics to be reported by the UE, as a result of the requested testing.
  • the fifth message does not include the reference data set; and one of the following applies: the fifth message includes an explicit indication for the UE to use a locally-available data set for the requested testing; or the absence of the reference data set is an implicit indication for the UE to use a locally-available data set for the requested testing.
  • AI/ML model identified in the eighth message for one or more of the following UE operations in the RAN: channel state information (CSI) estimation and/or compression; beam management; positioning; link adaptation; estimating UE and/or network energy saving for a UE configuration; signal quality estimation; and
  • the method of embodiment C1, wherein the LCM actions requested by the ninth message include one or more of the following: training or retraining of a currently deployed AI/ML model; deployment of a new or updated AI/ML model; revocation of a currently deployed AI/ML model; stopping, starting, or restarting use of a currently deployed AI/ML model; enabling or disabling use of a currently deployed AI/ML model; and performance supervision of a currently deployed AI/ML model.
  • the first and second network nodes are different network nodes in a RAN; the first and second network nodes are different units or functions of one network node in a RAN; the first and second network nodes are in different RANs; or one of the first and second network nodes is a RAN node and the other of the first and second network nodes is one of the following: a core network (CN) node or function, a service management and orchestration (SMO) function, or part of an operations/administration/maintenance (OAM) system.
  • a first network node configured for life cycle management (LCM) of artificial intelligence/machine learning (AI/ML) models in user equipment (UEs) operating in a radio access network (RAN), the first network node comprising: communication interface circuitry configured to communicate with UEs and with at least a second network node; and processing circuitry operatively coupled to the communication interface circuitry, whereby the processing circuitry and the communication interface circuitry are configured to perform operations corresponding to any of the methods of embodiments A1-A15.
  • a first network node configured for life cycle management (LCM) of artificial intelligence/machine learning (AI/ML) models in user equipment (UEs) operating in a radio access network (RAN), the first network node being configured to perform operations corresponding to any of the methods of embodiments A1-A15.
  • a non-transitory, computer-readable medium storing computer-executable instructions that, when executed by processing circuitry associated with a first network node configured for life cycle management (LCM) of artificial intelligence/machine learning (AI/ML) models in user equipment (UEs) operating in a radio access network (RAN), configure the first network node to perform operations corresponding to any of the methods of embodiments A1-A15.
  • a computer program product comprising computer-executable instructions that, when executed by processing circuitry associated with a first network node configured for life cycle management (LCM) of artificial intelligence/machine learning (AI/ML) models in user equipment (UEs) operating in a radio access network (RAN), configure the first network node to perform operations corresponding to any of the methods of embodiments A1-A15.
  • a user equipment configured to operate in a radio access network (RAN) and for life cycle management (LCM) of artificial intelligence/machine learning (AI/ML) models, the UE comprising: communication interface circuitry configured to communicate with at least a first network node; and processing circuitry operatively coupled to the communication interface circuitry, whereby the processing circuitry and the communication interface circuitry are configured to perform operations corresponding to any of the methods of embodiments B1-B12.
  • a user equipment configured to operate in a radio access network (RAN) and for life cycle management (LCM) of artificial intelligence/machine learning (AI/ML) models, the UE being further configured to perform operations corresponding to any of the methods of embodiments B1-B12.
  • a non-transitory, computer-readable medium storing computer-executable instructions that, when executed by processing circuitry associated with a user equipment (UE) configured for life cycle management (LCM) of artificial intelligence/machine learning (AI/ML) models, configure the UE to perform operations corresponding to any of the methods of embodiments B1-B12.
  • a computer program product comprising computer-executable instructions that, when executed by processing circuitry associated with a user equipment (UE) configured for life cycle management (LCM) of artificial intelligence/machine learning (AI/ML) models, configure the UE to perform operations corresponding to any of the methods of embodiments B1-B12.
  • a second network node configured for life cycle management (LCM) of artificial intelligence/machine learning (AI/ML) models in user equipment (UEs) operating in a radio access network (RAN), the second network node comprising: communication interface circuitry configured to communicate with UEs and with at least a first network node; and processing circuitry operatively coupled to the communication interface circuitry, whereby the processing circuitry and the communication interface circuitry are configured to perform operations corresponding to any of the methods of embodiments C1-C3.
  • a second network node configured for life cycle management (LCM) of artificial intelligence/machine learning (AI/ML) models in user equipment (UEs) operating in a radio access network (RAN), the second network node being configured to perform operations corresponding to any of the methods of embodiments C1-C3.
  • a non-transitory, computer-readable medium storing computer-executable instructions that, when executed by processing circuitry associated with a second network node configured for life cycle management (LCM) of artificial intelligence/machine learning (AI/ML) models in user equipment (UEs) operating in a radio access network (RAN), configure the second network node to perform operations corresponding to any of the methods of embodiments C1-C3.
  • a computer program product comprising computer-executable instructions that, when executed by processing circuitry associated with a second network node configured for life cycle management (LCM) of artificial intelligence/machine learning (AI/ML) models in user equipment (UEs) operating in a radio access network (RAN), configure the second network node to perform operations corresponding to any of the methods of embodiments C1-C3.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

Embodiments include methods performed by a first network node for life cycle management (LCM) of artificial intelligence/machine learning (AI/ML) models deployed in user equipment (UEs) operating in a radio access network (RAN). These methods include transmitting to a UE one or more messages (e.g., first, second, fifth, and eighth messages) that include LCM-related requests and/or information relating to AI/ML models currently deployed or to be deployed at the UE. These methods also include receiving from the UE one or more of various messages (e.g., third, fourth, sixth, seventh, and eleventh messages) that can respond to the transmitted message(s) and/or contain further information about at least one AI/ML model deployed at the UE. Other embodiments include complementary methods for a UE and for a second network node, as well as network nodes and UEs configured to perform such methods.
PCT/EP2023/051268 2022-02-07 2023-01-19 Gestion réseau-centrique du cycle de vie des modèles ai/ml déployés dans un équipement utilisateur (ue) WO2023148010A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US202263307275P 2022-02-07 2022-02-07
US63/307,275 2022-02-07
US202263309153P 2022-02-11 2022-02-11
US63/309,153 2022-02-11

Publications (1)

Publication Number Publication Date
WO2023148010A1 true WO2023148010A1 (fr) 2023-08-10

Family

ID=85122925

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2023/051268 WO2023148010A1 (fr) 2022-02-07 2023-01-19 Gestion réseau-centrique du cycle de vie des modèles ai/ml déployés dans un équipement utilisateur (ue)

Country Status (1)

Country Link
WO (1) WO2023148010A1 (fr)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022013095A1 (fr) * 2020-07-13 2022-01-20 Telefonaktiebolaget Lm Ericsson (Publ) Gestion d'un dispositif sans fil permettant d'assurer une connexion à un réseau de communication
WO2022013090A1 (fr) * 2020-07-13 2022-01-20 Telefonaktiebolaget Lm Ericsson (Publ) Gestion d'un dispositif sans fil utilisable pour une connexion à un réseau de communication
WO2022015221A1 (fr) * 2020-07-14 2022-01-20 Telefonaktiebolaget Lm Ericsson (Publ) Gestion d'un dispositif sans fil utilisable pour se connecter à un réseau de communication

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
3GPP DOCUMENT RP-201620
3GPP TECHNICAL REPORT (TR) 37.817
3GPP TR 37.817
3GPP TR 38.804
3GPP TS 38.306
3GPP TS 38.463

Similar Documents

Publication Publication Date Title
WO2023014257A1 (fr) Maintenance de configuration de qualité d'expérience
WO2023191682A1 (fr) Gestion de modèles d'intelligence artificielle/d'apprentissage machine entre des nœuds radio sans fil
WO2023113674A1 (fr) Fonctionnement d'équipement utilisateur (ue) avec configuration d'économie d'énergie de station de base
WO2023022642A1 (fr) Signalisation de surchauffe prédite d'ue
WO2022240334A1 (fr) Reconfigurations conditionnelles de cellules dans des groupes de cellules secondaires
WO2023148010A1 (fr) Gestion réseau-centrique du cycle de vie des modèles ai/ml déployés dans un équipement utilisateur (ue)
WO2024125362A1 (fr) Procédé et appareil de commande de liaison de communication entre dispositifs de communication
WO2023148009A1 (fr) Gestion de cycle de vie centrée sur l'utilisateur de modèles ai/ml déployés dans un équipement utilisateur (ue)
US20240195593A1 (en) Methods, Devices and Computer Program Products for Exploiting Predictions for Capacity and Coverage Optimization
US20240196274A1 (en) Methods for Mobility Setting Adjustment based on Predictions
WO2023211356A1 (fr) Surveillance de fonctionnalité d'apprentissage automatique d'équipement utilisateur
WO2023095037A1 (fr) Rapport mdt journalisé faisant intervenir une mobilité inter-rat
WO2023217557A1 (fr) Traducteur d'intelligence artificielle/apprentissage automatique (ia/aa) pour réseau central 5g (5gc)
WO2023132774A1 (fr) Gestion de déclencheurs pour rapport qoe visible par ran
WO2023073677A2 (fr) Mesures dans un réseau de communication
WO2023068977A1 (fr) Rapport de mesurage de mdt pour équipements utilisateurs se trouvant dans n'importe quel état de sélection de cellule
WO2024033808A1 (fr) Mesures de csi pour mobilité intercellulaire
WO2023027619A1 (fr) Gestion de mesures de couche d'application configurées pendant un transfert intercellulaire
EP4381777A1 (fr) Réduction de la disponibilité d'une configuration de test d'entraînement basée sur la signalisation
WO2023224527A1 (fr) Distribution de mesures de qoe visibles par ran
WO2023062509A1 (fr) Activation de cellule secondaire basée sur un signal de référence temporaire par l'intermédiaire d'une commande de ressources radio
WO2024035309A1 (fr) Procédés, appareil et support lisible par ordinateur associés à un changement conditionnel de cellule
WO2024038116A1 (fr) Signalisation d'informations de réalité étendue
WO2024035293A1 (fr) Sélection d'équipement utilisateur (ue) de cellules candidates à mesurer pour une mobilité inter-cellules l1/l2
WO2023277752A1 (fr) Rapport de reconfigurations erronées

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23702237

Country of ref document: EP

Kind code of ref document: A1