CN116266815A - Apparatus for supporting artificial intelligence or machine learning in wireless communication system - Google Patents


Info

Publication number
CN116266815A
Authority
CN
China
Prior art keywords
entity
updated
new
function
enabled
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211382241.8A
Other languages
Chinese (zh)
Inventor
姚羿志
乔伊·周
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Intel Corp
Original Assignee
Intel Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Intel Corp
Publication of CN116266815A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/16 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 43/00 Arrangements for monitoring or testing data switching networks
    • H04L 43/50 Testing arrangements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 88/00 Devices specially adapted for wireless communication networks, e.g. terminals, base stations or access point devices
    • H04W 88/18 Service support devices; Network management devices

Abstract

The present application relates to an apparatus for supporting Artificial Intelligence (AI) or Machine Learning (ML) in a wireless communication system, wherein the apparatus for supporting AI or ML comprises processor circuitry configured to: verify the new or updated AI or ML entity based on entity verification data; test the new or updated AI or ML entity based on the entity test data when the verification result of the new or updated AI or ML entity meets the entity verification result expectation; and deploy the new or updated AI or ML entity to the target AI or ML enabled function when the test results of the new or updated AI or ML entity satisfy the entity test result expectations and the new or updated AI or ML entity is selected to be deployed to the target AI or ML enabled function.

Description

Apparatus for supporting artificial intelligence or machine learning in wireless communication system
Cross Reference to Related Applications
The present application is based on and claims priority from U.S. Application Serial No. 63/290,270, filed on December 16, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of the present disclosure relate generally to the field of wireless communications, and more particularly, to an apparatus for supporting Artificial Intelligence (AI) or Machine Learning (ML) in a wireless communication system.
Background
Mobile communications have evolved from early voice systems to today's highly complex integrated communication platforms. A 5G or New Radio (NR) wireless communication system will provide various users and applications with access to information and sharing of data anytime and anywhere.
Drawings
Embodiments of the present disclosure will be illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings in which like reference numerals refer to similar elements.
Fig. 1 illustrates a workflow diagram for management of AI or ML entities for an inference function deployed in a 5G wireless communication system, according to some embodiments of the present disclosure.
Fig. 2 illustrates a workflow diagram for management of inference functions integrated with AI or ML entities in a 5G wireless communication system, according to some embodiments of the present disclosure.
Fig. 3 illustrates a schematic diagram of a network in accordance with various embodiments of the present disclosure.
Fig. 4 illustrates a schematic diagram of a wireless network in accordance with various embodiments of the present disclosure.
Fig. 5 illustrates a block diagram of components capable of reading instructions from a machine-readable or computer-readable medium (e.g., a non-transitory machine-readable storage medium) and performing any one or more of the methods discussed herein, in accordance with various embodiments of the disclosure.
Detailed Description
Various aspects of the illustrative embodiments will be described using terms commonly employed by those skilled in the art to convey the substance of the disclosure to others skilled in the art. However, it will be apparent to those skilled in the art that many alternative embodiments may be implemented using portions of the described aspects. For purposes of explanation, specific numbers, materials, and configurations are set forth in order to provide a thorough understanding of the illustrative embodiments. It will be apparent, however, to one skilled in the art that alternative embodiments may be practiced without these specific details. In other instances, well-known features may be omitted or simplified in order not to obscure the illustrative embodiments.
Furthermore, various operations will be described as multiple discrete operations in turn, in a manner that is most helpful in understanding the illustrative embodiments; however, the order of description should not be construed as to imply that these operations are necessarily order dependent. In particular, these operations need not be performed in the order of presentation.
The phrases "in an embodiment," "in one embodiment," and "in some embodiments" are repeated herein. These phrases generally do not refer to the same embodiment; however, they may also refer to the same embodiments. The terms "comprising," "having," and "including" are synonymous, unless the context dictates otherwise. The phrases "A or B" and "A/B" mean "(A), (B), or (A and B)".
AI or ML technology is widely used in 5G wireless communication systems, and the following AI or ML management capabilities need to be studied to enable and support AI or ML in 5G wireless communication systems:
- AI or ML model and AI or ML functional verification (validation);
- AI or ML model and AI or ML enabled function (pre-deployment) testing;
- deployment of new or updated AI or ML models and AI or ML enabled functions;
- configuration of AI or ML enabled functions; and
- performance evaluation of AI or ML enabled functions.
AI or ML workflows involving AI or ML management capabilities may be generic to different domains of a 5G wireless communication system and to different AI or ML inference functions. However, there is currently no scheme for supporting AI or ML in a 5G wireless communication system.
In view of the above, the present disclosure proposes an AI or ML workflow for supporting AI or ML in a 5G wireless communication system, wherein AI or ML enabled functions use AI or ML entities (e.g., ML models or entities containing ML models and their related metadata) to perform inference and are thus also referred to as inference functions.
Fig. 1 illustrates a workflow diagram for management of AI or ML entities for an inference function deployed in a 5G wireless communication system, according to some embodiments of the present disclosure. As shown in fig. 1, a workflow 100 for AI or ML entity management for an inference function deployed in a 5G wireless communication system includes stages or operations briefly described as follows:
S102, AI or ML entity training: Machine learning is performed using entity training data to generate new or updated AI or ML entities that can be used for inference. That is, new or updated AI or ML entities may be generated by training the original AI or ML entities based on the entity training data. AI or ML entity training refers to training the ML model associated with an AI or ML entity.
S104, AI or ML entity verification: entity verification data is used to cross-verify new or updated AI or ML entities to evaluate the performance of the new or updated AI or ML entities. The new or updated AI or ML entity may be tested if its verification result meets the entity verification result expectations, otherwise further (re) training of the new or updated AI or ML entity may be required. That is, a new or updated AI or ML entity may be verified based on the entity verification data, and then the new or updated AI or ML entity may be further (re) trained when the verification result of the new or updated AI or ML entity does not meet the entity verification result expectations, wherein verifying the new or updated AI or ML entity comprises evaluating the performance of the new or updated AI or ML entity based on the entity verification data.
S106, AI or ML entity testing: The new or updated AI or ML entities are tested using the entity test data to evaluate the performance of the new or updated AI or ML entities. If the test results of the new or updated AI or ML entity meet the entity test result expectations, the new or updated AI or ML entity may be considered a candidate for verification and/or deployment, otherwise further (re) training of the new or updated AI or ML entity may be required. That is, when the verification result of the new or updated AI or ML entity meets the entity verification result expectation, the new or updated AI or ML entity may be tested based on the entity test data, and then when the test result of the new or updated AI or ML entity does not meet the entity test result expectation, the new or updated AI or ML entity may be further trained, wherein testing the new or updated AI or ML entity includes evaluating performance of the new or updated AI or ML entity based on the entity test data.
S108, AI or ML entity verification: verifying whether the new or updated AI or ML entity is capable of operating in the target inference function or in a reference inference function, wherein the new or updated AI or ML entity is to be deployed into the target inference function, the reference inference function being identical to or capable of emulating the target inference function. In some cases, this operation may be skipped, for example, where the input and output data types and formats remain unchanged from the AI or ML entity last deployed. That is, it may be verified whether the new or updated AI or ML entity is capable of operating in the target inference function or in the reference inference function, and the new or updated AI or ML entity may be deployed to the target inference function when it is verified that the new or updated AI or ML entity is capable of operating in the target inference function or in the reference inference function.
S110, AI or ML entity deployment: When a new or updated AI or ML entity is selected for deployment to the target inference function, the new or updated AI or ML entity is deployed to the (already deployed) target inference function based on the results of the AI or ML entity testing and/or verifying operations. If an old AI or ML entity already exists in the target inference function for the same inference purpose as the new or updated AI or ML entity, the old AI or ML entity is replaced when the new or updated AI or ML entity is deployed. When a new or updated AI or ML entity is successfully deployed to the target inference function, the new or updated AI or ML entity is ready for inference. That is, when a new or updated AI or ML entity is selected to be deployed to the target inference function, it is deployed to the target inference function once the test result of the new or updated AI or ML entity satisfies the entity test result expectation and, if such verification is required, the new or updated AI or ML entity has been verified to be capable of operating in the target inference function or in the reference inference function.
S112, inference function monitoring: After the target inference function is activated, the target inference function is monitored (including performance assessment and/or fault monitoring). Based on the monitoring results, the target inference function may be (re) configured, upgraded, or terminated, and further (re) training of new or updated AI or ML entities may be required.
S114, inference function configuration: The target inference function is configured to control inference and to activate and deactivate the target inference function. Activation or deactivation of the target inference function automatically activates or deactivates the inference capability of the new or updated AI or ML entity deployed in the target inference function.
S116, inference function termination: The target inference function is terminated. If the target inference function is terminated, the new or updated AI or ML entities deployed in the target inference function are terminated together with it.
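By way of a non-normative illustration only, the entity-level lifecycle of operations S102-S110 above can be summarized as a simple control loop. The following Python sketch assumes hypothetical interfaces (an entity object exposing train() and evaluate() methods, and a target inference function exposing can_run() and deploy() methods); these names, the score-threshold interpretation of the result expectations, and the iteration budget are assumptions introduced only for this example and are not defined by the present disclosure.

    # Illustrative sketch (not part of the claimed subject matter) of the AI/ML entity
    # management workflow S102-S110. All class, method, and threshold names below are
    # hypothetical assumptions made for illustration.
    from dataclasses import dataclass

    @dataclass
    class EntityExpectations:
        min_validation_score: float  # entity verification result expectation
        min_test_score: float        # entity test result expectation

    def manage_entity(entity, training_data, validation_data, test_data,
                      target_function, expectations, skip_verification=False,
                      max_iterations=5):
        """Train, verify, test, optionally verify against the target function, and deploy."""
        for _ in range(max_iterations):
            entity = entity.train(training_data)                  # S102: entity training
            if entity.evaluate(validation_data) < expectations.min_validation_score:
                continue                                          # S104 not met: (re)train
            if entity.evaluate(test_data) < expectations.min_test_score:
                continue                                          # S106 not met: (re)train
            # S108: may be skipped if input/output data types and formats are unchanged
            if not skip_verification and not target_function.can_run(entity):
                continue
            target_function.deploy(entity)                        # S110: entity deployment
            return entity                                         # entity is ready for inference
        raise RuntimeError("entity did not meet the result expectations")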
Fig. 2 illustrates a workflow diagram for management of inference functions integrated with AI or ML entities in a 5G wireless communication system, according to some embodiments of the present disclosure. As shown in fig. 2, a workflow 200 for management of inference functions integrated with AI or ML entities in a 5G wireless communication system includes stages or operations briefly described as follows:
S202, initializing an inference function: An initial version of the inference function integrated with the AI or ML entity is generated. That is, an inference function integrated with an AI or ML entity is initialized.
S204, verification of an inference function: the inference functions integrated with the AI or ML entities are cross-checked using the function verification data to evaluate the performance of the inference functions integrated with the AI or ML entities. If the verification result of the inference function integrated with the AI or ML entity meets the functional verification result expectation, the inference function integrated with the AI or ML entity may be tested, otherwise further (re) training of the AI or ML entity integrated in the inference function may be required. That is, the AI or ML entity-integrated inference function may be verified based on the function verification data, and when the verification result of the AI or ML entity-integrated inference function does not satisfy the function verification result expectation, the AI or ML entity integrated in the inference function may be further trained, wherein verifying the AI or ML entity-integrated inference function includes evaluating the performance of the AI or ML entity-integrated inference function based on the function verification data.
S206, inference function test: The inference function integrated with the AI or ML entity is tested using the functional test data to evaluate the performance of the inference function integrated with the AI or ML entity. If the test results of the AI- or ML-entity-integrated inference functions meet the functional test result expectations, the AI- or ML-entity-integrated inference functions may be considered candidates for deployment, otherwise further upgrades of the AI- or ML-entity-integrated inference functions may be required. That is, when the verification result of the AI- or ML-entity-integrated inference function satisfies the function verification result expectation, the AI- or ML-entity-integrated inference function may be tested based on the function test data, and when the test result of the AI- or ML-entity-integrated inference function does not satisfy the function test result expectation, the inference function may be upgraded with a new or updated AI or ML entity, wherein testing the AI- or ML-entity-integrated inference function includes evaluating the performance of the AI- or ML-entity-integrated inference function based on the function test data.
S208, verifying an inference function: Verifying whether the AI- or ML-entity-integrated inference function can work in the target entity or in a reference entity, wherein the AI- or ML-entity-integrated inference function is to be deployed into the target entity, the reference entity being identical to or capable of emulating the target entity. In some cases, this operation may be skipped, for example, where the input and output data types and formats remain unchanged from the last deployed inference function. That is, it may be verified whether the AI- or ML-entity-integrated inference function can operate in the target entity or in the reference entity, and when it is verified that the AI- or ML-entity-integrated inference function can operate in the target entity or in the reference entity, the AI- or ML-entity-integrated inference function may be deployed to the target entity.
S210, upgrading an inference function: an upgraded version of the inference function integrated with the AI or ML entity is generated. That is, when the test result of the inference function integrated with the AI or ML entity does not satisfy the function test result expectation, the inference function integrated with the AI or ML entity may be upgraded with a new or updated AI or ML entity.
S212, inference function deployment: When the AI- or ML-entity-integrated inference function is selected to be deployed to the target entity, the AI- or ML-entity-integrated inference function is deployed to the target entity based on the results of the inference function testing and/or verification operations. The inference function integrated with the AI or ML entity may be a new or updated version of an already deployed inference function. In the context of inference function deployment, an upgraded inference function refers to an inference function that integrates new or updated AI or ML entities, and the old inference function is replaced when the upgraded inference function is deployed.
S214, inference function monitoring: After the AI- or ML-entity-integrated inference function is activated, the AI- or ML-entity-integrated inference function is monitored (including performance assessment and fault monitoring). Based on the monitoring results, the inference function integrated with the AI or ML entity can be (re) configured, upgraded, or terminated.
S216, inference function configuration: The AI- or ML-entity-integrated inference function is configured to control inference and to activate and deactivate the AI- or ML-entity-integrated inference function. Activation or deactivation of an inference function integrated with an AI or ML entity automatically activates or deactivates the inference capability of the AI or ML entity integrated in the inference function.
S218, inference function termination: The inference function integrated with the AI or ML entity is terminated. If the inference function integrated with the AI or ML entity is terminated, the AI or ML entity integrated in the inference function is terminated together with it.
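As a further non-normative illustration, the coupling between an AI or ML enabled inference function and its integrated AI or ML entity during configuration, activation, deactivation, and termination (operations S216 and S218 above, and S114 and S116 in workflow 100) could be modeled as follows. The class and attribute names are hypothetical and are introduced only for this sketch.

    # Illustrative sketch: activation/deactivation/termination of an inference function
    # propagates automatically to the AI/ML entity integrated in it (S216/S218).
    class InferenceFunction:
        def __init__(self, entity):
            self.entity = entity            # integrated AI/ML entity
            self.active = False
            self.settings = {}

        def configure(self, **settings):    # S216: configuration controlling inference
            self.settings.update(settings)

        def activate(self):
            self.active = True
            self.entity.inference_enabled = True    # entity inference capability follows

        def deactivate(self):
            self.active = False
            self.entity.inference_enabled = False

        def terminate(self):                # S218: terminating the function also
            self.deactivate()               # terminates the integrated entity
            self.entity = None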
It should be appreciated that an apparatus for supporting AI or ML may be used to implement workflow 100 for AI or ML entity management of an inference function deployed in a 5G wireless communication system or workflow 200 for management of an inference function integrated with AI or ML entities in a 5G wireless communication system, wherein the apparatus includes a processor circuit configured to perform operations S102-S116 or operations S202-S218.
Fig. 3-4 illustrate various systems, devices, and components that may implement aspects of the disclosed embodiments.
Fig. 3 illustrates a schematic diagram of a network 300, according to various embodiments of the present disclosure. The network 300 may operate in accordance with the 3GPP technical specifications of a Long Term Evolution (LTE) or 5G/NR system. However, the example embodiments are not limited in this respect and the described embodiments may be applied to other networks that benefit from the principles described herein, such as future 3GPP systems, and the like.
The network 300 may include a UE 302, which may include any mobile or non-mobile computing device designed to communicate with a Radio Access Network (RAN) 304 via an over-the-air connection. The UE 302 may be, but is not limited to, a smart phone, tablet computer, wearable computer device, desktop computer, laptop computer, in-vehicle infotainment device, in-vehicle entertainment device, dashboard, heads-up display device, on-board diagnostic device, dashboard mobile device, mobile data terminal, electronic engine management system, electronic/engine control unit, electronic/engine control module, embedded system, sensor, microcontroller, control module, engine management system, network device, machine-to-machine (M2M) or device-to-device (D2D) device, internet of things (IoT) device, etc.
In some embodiments, the network 300 may include multiple UEs directly coupled to each other through a sidelink interface. The UE may be an M2M/D2D device that communicates using a physical sidelink channel (e.g., without limitation, a Physical Sidelink Broadcast Channel (PSBCH), a Physical Sidelink Discovery Channel (PSDCH), a Physical Sidelink Shared Channel (PSSCH), a Physical Sidelink Control Channel (PSCCH), a Physical Sidelink Feedback Channel (PSFCH), etc.).
In some embodiments, the UE 302 may also communicate with an Access Point (AP) 306 over an over-the-air connection. The AP 306 may manage Wireless Local Area Network (WLAN) connections, which may be used to offload some/all of the network traffic from the RAN 304. The connection between the UE 302 and the AP 306 may conform to any IEEE 802.11 protocol, where the AP 306 may be a wireless fidelity (Wi-Fi®) router. In some embodiments, UE 302, RAN 304, and AP 306 may utilize cellular WLAN aggregation (e.g., LTE-WLAN aggregation (LWA)/lightweight IP (LWIP)). Cellular WLAN aggregation may involve configuring UE 302 by RAN 304 to utilize both cellular radio resources and WLAN resources.
RAN 304 may include one or more access nodes, e.g., Access Node (AN) 308. AN 308 may terminate the air interface protocol of UE 302 by providing access layer protocols including Radio Resource Control (RRC) protocol, Packet Data Convergence Protocol (PDCP), Radio Link Control (RLC) protocol, Medium Access Control (MAC) protocol, and L1 protocol. In this way, the AN 308 may enable a data/voice connection between the Core Network (CN) 320 and the UE 302. In some embodiments, AN 308 may be implemented in a discrete device or as one or more software entities running on a server computer (as part of a virtual network, which may be referred to as a cloud RAN (CRAN) or virtual baseband unit pool, for example). The AN 308 may be referred to as a Base Station (BS), a next generation base station (gNB), a RAN node, an evolved node B (eNB), a next generation eNB (ng-eNB), a node B (NodeB), a roadside unit (RSU), a transmission reception point (TRxP), a transmission point (TRP), and the like. The AN 308 may be a macrocell base station or a low power base station for providing a microcell, picocell, or other similar cell having a smaller coverage area, smaller user capacity, or higher bandwidth than the macrocell.
In embodiments where the RAN 304 includes multiple ANs, the ANs may be coupled to each other through an X2 interface (if the RAN 304 is an LTE RAN) or an Xn interface (if the RAN 304 is a 5G RAN). In some embodiments, the X2/Xn interface, which may be separated into control/user plane interfaces, may allow the AN to communicate information related to handoff, data/context transfer, mobility, load management, interference coordination, etc.
The AN of RAN 304 may separately manage one or more cells, groups of cells, component carriers, etc. to provide an air interface for network access to UE 302. The UE 302 may be connected simultaneously with multiple cells provided by the same or different ANs of the RAN 304. For example, the UE 302 and the RAN 304 may use carrier aggregation to allow the UE 302 to connect with multiple component carriers, each component carrier corresponding to a primary cell (PCell) or a secondary cell (SCell). In a dual connectivity scenario, a first AN may be a master node providing a Master Cell Group (MCG) and a second AN may be a secondary node providing a Secondary Cell Group (SCG). The first/second AN may be any combination of eNBs, gNBs, ng-eNBs, etc.
RAN 304 may provide the air interface on licensed spectrum or unlicensed spectrum. To operate in unlicensed spectrum, a node may use License Assisted Access (LAA), enhanced LAA (eLAA), and/or further enhanced LAA (feLAA) mechanisms based on Carrier Aggregation (CA) techniques of PCell/Scell. Prior to accessing the unlicensed spectrum, the node may perform a medium/carrier sensing operation based on, for example, a Listen Before Talk (LBT) protocol.
In a vehicle-to-everything (V2X) scenario, the UE 302 or AN 308 may be or act as a roadside unit (RSU), which may refer to any transport infrastructure entity for V2X communications. The RSU may be implemented in or by a suitable AN or stationary (or relatively stationary) UE. An RSU implemented in or by a UE may be referred to as a "UE-type RSU"; an RSU implemented in or by an eNB may be referred to as an "eNB-type RSU"; RSUs implemented in or by next generation nodebs (gnbs) may be referred to as "gNB-type RSUs" or the like. In one example, the RSU is a computing device coupled with a radio frequency circuit located at the roadside that provides connectivity support to passing vehicle UEs. The RSU may also include internal data storage circuitry for storing intersection map geometry, traffic statistics, media, and applications/software for sensing and controlling ongoing vehicle and pedestrian traffic. The RSU may provide very low latency communications required for high speed events (e.g., collision avoidance, traffic alerts, etc.). Additionally or alternatively, the RSU may provide other cellular/WLAN communication services. The components of the RSU may be enclosed in a weather-proof enclosure suitable for outdoor installation, and may include a network interface controller to provide a wired connection (e.g., ethernet) to a traffic signal controller or backhaul network.
In some embodiments, RAN 304 may be an LTE RAN 310 including an evolved node B (eNB), e.g., eNB 312. LTE RAN 310 may provide an LTE air interface with the following characteristics: subcarrier spacing (SCS) of 15 kHz; a single carrier frequency division multiple access (SC-FDMA) waveform for the Uplink (UL) and a cyclic prefix orthogonal frequency division multiplexing (CP-OFDM) waveform for the Downlink (DL); turbo codes for data, Tail Biting Convolutional Codes (TBCCs) for control, and the like. The LTE air interface may rely on channel state information reference signals (CSI-RS) for CSI acquisition and beam management; on Physical Downlink Shared Channel (PDSCH)/Physical Downlink Control Channel (PDCCH) demodulation reference signals (DMRS) for PDSCH/PDCCH demodulation; and on Cell Reference Signals (CRS) for cell search and initial acquisition, channel quality measurements, and channel estimation for coherent demodulation/detection at the UE. The LTE air interface may operate on sub-6 GHz bands.
In some embodiments, RAN 304 may be a Next Generation (NG)-RAN 314 with a gNB (e.g., gNB 316) or an ng-eNB (e.g., ng-eNB 318). The gNB 316 may connect with 5G-enabled UEs using a 5G NR interface. The gNB 316 may connect with the 5G core through an NG interface, which may include an N2 interface or an N3 interface. The ng-eNB 318 may also connect with the 5G core over the NG interface, but may connect with the UE over the LTE air interface. The gNB 316 and the ng-eNB 318 may be connected to each other via an Xn interface.
In some embodiments, the NG interface may be divided into two parts: an NG user plane (NG-U) interface that carries traffic data between the nodes of the NG-RAN 314 and the UPF 348 (e.g., an N3 interface), and an NG control plane (NG-C) interface that is a signaling interface between the nodes of the NG-RAN 314 and the access and mobility management function (AMF) 344 (e.g., an N2 interface).
NG-RAN 314 may provide a 5G-NR air interface with the following characteristics: a variable SCS; cyclic prefix-orthogonal frequency division multiplexing (CP-OFDM) for DL, and CP-OFDM and DFT-s-OFDM for UL; polar, repetition, simplex, and Reed-Muller codes for control; and low density parity check (LDPC) codes for data. Like the LTE air interface, the 5G-NR air interface may rely on channel state information reference signals (CSI-RS) and PDSCH/PDCCH demodulation reference signals (DMRS). The 5G-NR air interface may not use Cell Reference Signals (CRSs), but may use Physical Broadcast Channel (PBCH) demodulation reference signals (DMRS) for PBCH demodulation; Phase Tracking Reference Signals (PTRS) for phase tracking of the PDSCH; and tracking reference signals for time tracking. The 5G-NR air interface may operate on FR1 bands, including sub-6 GHz bands, or FR2 bands, including bands from 24.25 GHz to 52.6 GHz. The 5G-NR air interface may include a synchronization signal and PBCH block (SSB), which is a region of a downlink resource grid including a Primary Synchronization Signal (PSS)/Secondary Synchronization Signal (SSS)/PBCH.
In some embodiments, the 5G-NR air interface may use bandwidth parts (BWPs) for various purposes. For example, a BWP may be used for dynamic adaptation of the SCS. For example, UE 302 may be configured with multiple BWPs, where each BWP configuration has a different SCS. When a BWP change is indicated to the UE 302, the SCS of the transmission is also changed. Another use case of BWPs relates to power saving. In particular, the UE 302 may be configured with multiple BWPs having different numbers of frequency resources (e.g., PRBs) to support data transmission in different traffic load scenarios. A BWP containing a smaller number of PRBs may be used for data transmission with a smaller traffic load while allowing power saving at UE 302 and, in some cases, the gNB 316. A BWP comprising a larger number of PRBs may be used for scenarios with higher traffic load.
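Purely as an illustrative aside (this reflects background behavior of the 5G air interface rather than the claimed AI or ML workflow), the traffic-load-driven choice between a narrow, power-saving BWP and a wide, high-throughput BWP described above could be sketched as a simple lookup. The SCS values, PRB counts, and load threshold below are hypothetical examples and are not taken from any specification.

    # Illustrative sketch of traffic-load-driven BWP selection; all numbers are
    # hypothetical examples introduced only for this sketch.
    from typing import NamedTuple

    class BwpConfig(NamedTuple):
        bwp_id: int
        scs_khz: int    # subcarrier spacing of this BWP
        num_prbs: int   # number of physical resource blocks in this BWP

    CONFIGURED_BWPS = (
        BwpConfig(bwp_id=0, scs_khz=15, num_prbs=24),    # narrow BWP, power saving
        BwpConfig(bwp_id=1, scs_khz=30, num_prbs=133),   # wide BWP, high traffic load
    )

    def select_bwp(traffic_load_mbps: float, threshold_mbps: float = 50.0) -> BwpConfig:
        """Pick the wide BWP under heavy load, otherwise the narrow power-saving BWP."""
        narrow, wide = CONFIGURED_BWPS
        return wide if traffic_load_mbps > threshold_mbps else narrow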
RAN 304 is communicatively coupled to CN 320, which includes network elements, to provide various functions to support data and telecommunications services to clients/subscribers (e.g., users of UE 302). The components of CN 320 may be implemented in one physical node or in a different physical node. In some embodiments, network Function Virtualization (NFV) may be used to virtualize any or all of the functions provided by the network elements of CN 320 onto physical computing/storage resources in servers, switches, and the like. The logical instance of CN 320 may be referred to as a network slice, and the logical instance of a portion of CN 320 may be referred to as a network sub-slice.
In some embodiments, CN 320 may be LTE CN 322, which may also be referred to as an Evolved Packet Core (EPC). LTE CN 322 may include a Mobility Management Entity (MME) 324, a Serving Gateway (SGW) 326, a serving General Packet Radio Service (GPRS) support node (SGSN) 328, a Home Subscriber Server (HSS) 330, a PDN Gateway (PGW) 332, and a policy control and charging rules function (PCRF) 334, which are coupled to each other through an interface (or "reference point") as shown. The function of the elements of LTE CN 322 may be briefly described as follows.
The MME 324 may implement mobility management functions to track the current location of the UE 302 to facilitate paging, bearer activation/deactivation, handover, gateway selection, authentication, and the like.
SGW 326 may terminate the S1 interface towards the RAN and route data packets between the RAN and LTE CN 322. SGW 326 may be a local mobility anchor for inter-RAN node handover and may also provide an anchor for inter-3 GPP mobility. Other responsibilities may include lawful interception, billing, and some policy enforcement.
SGSN 328 can track the location of UE 302 and perform security functions and access control. In addition, SGSN 328 may perform EPC inter-node signaling for mobility between different Radio Access Technology (RAT) networks; PDN and S-GW selection as specified by MME 324; MME selection for handover; etc. The S3 reference point between MME 324 and SGSN 328 may enable user and bearer information exchange for inter-3GPP access network mobility in the idle/active state.
HSS 330 may include a database for network users that includes subscription-related information that supports network entity handling of communication sessions. HSS 330 may provide support for routing/roaming, authentication, authorization, naming/addressing resolution, location dependencies, and the like. The S6a reference point between the HSS 330 and the MME 324 may enable the transmission of subscription and authentication data for authenticating/authorizing user access to the LTE CN 322.
PGW 332 may terminate an SGi interface towards a Data Network (DN) 336 that may include an application/content server 338. PGW 332 may route data packets between LTE CN 322 and data network 336. PGW 332 may be coupled to SGW 326 via an S5 reference point to facilitate user plane tunneling and tunnel management. PGW 332 may also include nodes (e.g., PCEFs) for policy enforcement and charging data collection. In addition, the SGi reference point between PGW 332 and data network 336 may be, for example, an operator external public, private PDN, or an operator internal packet data network for providing IP Multimedia Subsystem (IMS) services. PGW 332 may be coupled with PCRF 334 via a Gx reference point.
PCRF 334 is a policy and charging control element of LTE CN 322. PCRF 334 may be communicatively coupled to application/content server 338 to determine appropriate quality of service (QoS) and charging parameters for the service flows. PCRF 334 may provide the relevant rules to the PCEF (via Gx reference point) with the appropriate Traffic Flow Templates (TFTs) and QoS Class Identifiers (QCIs).
In some embodiments, CN 320 may be a 5G core network (5GC) 340. The 5GC 340 may include an Authentication Server Function (AUSF) 342, an Access and Mobility Management Function (AMF) 344, a Session Management Function (SMF) 346, a User Plane Function (UPF) 348, a Network Slice Selection Function (NSSF) 350, a Network Exposure Function (NEF) 352, an NF Repository Function (NRF) 354, a Policy Control Function (PCF) 356, a Unified Data Management (UDM) 358, and an Application Function (AF) 360, which are coupled to each other through interfaces (or "reference points") as shown. The functions of the elements of the 5GC 340 may be briefly described as follows.
The AUSF 342 may store data for authentication of the UE 302 and process authentication related functions. The AUSF 342 may facilitate a common authentication framework for various access types. In addition to communicating with other elements of the 5GC 340 through reference points as shown, the AUSF 342 may also present an interface based on the Nausf service.
The AMF 344 may allow other functions of the 5GC 340 to communicate with the UE 302 and RAN 304 and subscribe to notifications about mobility events of the UE 302. The AMF 344 may be responsible for registration management (e.g., registering the UE 302), connection management, reachability management, mobility management, lawful interception of AMF-related events, and access authentication and authorization. The AMF 344 may provide for the transmission of Session Management (SM) messages between the UE 302 and the SMF 346 and act as a transparent proxy for routing SM messages. The AMF 344 may also provide for transmission of SMS messages between the UE 302 and the SMSF. The AMF 344 may interact with the AUSF 342 and the UE 302 to perform various security anchoring and context management functions. Furthermore, the AMF 344 may be an end point of the RAN CP interface, which may include or be an N2 reference point between the RAN 304 and the AMF 344; the AMF 344 may act as an endpoint for NAS (N1) signaling and perform NAS ciphering and integrity protection. The AMF 344 may also support NAS signaling communication with the UE 302 over an N3IWF interface.
The SMF 346 may be responsible for SM (e.g., tunnel management, session establishment between UPF 348 and AN 308); UE IP address allocation and management (including optional authorization); selection and control of the UP function; configuring flow control at the UPF 348 to route traffic to the appropriate destination; termination of the interface to the policy control function; control of part of policy enforcement, charging, and QoS; lawful interception (for SM events and interfaces to LI systems); terminating the SM portion of NAS messages; downlink data notification; initiating AN-specific SM information (sent over N2 to AN 308 through AMF 344); and determining the SSC mode of the session. SM may refer to the management of PDU sessions, and PDU sessions or "sessions" may refer to PDU connectivity services that provide or enable PDU exchanges between UE 302 and data network 336.
UPF 348 may serve as an anchor point for intra-RAT and inter-RAT mobility, an external PDU session point interconnected with data network 336, and a branching point to support multi-homed PDU sessions. UPF 348 may also perform packet routing and forwarding, perform packet inspection, enforce the user plane part of policy rules, lawfully intercept packets (UP collection), perform traffic usage reporting, perform QoS processing for the user plane (e.g., packet filtering, gating, UL/DL rate enforcement), perform uplink traffic verification (e.g., SDF to QoS flow mapping), perform transport level packet marking in the uplink and downlink, and perform downlink packet buffering and downlink data notification triggering. UPF 348 may include an uplink classifier to support routing traffic flows to the data network.
NSSF 350 may select a set of network slice instances to serve UE 302. The NSSF 350 may also determine the allowed Network Slice Selection Assistance Information (NSSAI) and the mapping to subscribed individual NSSAIs (S-NSSAIs), if desired. NSSF 350 may also determine the set of AMFs to use for serving UE 302, or a list of candidate AMFs, based on a suitable configuration and possibly by querying NRF 354. The selection of a set of network slice instances for UE 302 may be triggered by AMF 344 (with which UE 302 registers by interacting with NSSF 350), which may result in a change in AMF. NSSF 350 may interact with AMF 344 via an N22 reference point; and may communicate with another NSSF in the visited network via an N31 reference point (not shown). In addition, NSSF 350 may expose an interface based on the Nnssf service.
The NEF 352 may securely disclose services and capabilities provided by 3GPP network functions for third parties, internal exposure/re-exposure, AF (e.g., AF 360), edge computing or fog computing systems, and the like. In these embodiments, NEF 352 may authenticate, authorize, or restrict AF. The NEF 352 may also convert information exchanged with the AF 360 and information exchanged with internal network functions. For example, the NEF 352 may translate between AF service identifiers and internal 5GC information. The NEF 352 may also receive information from other NFs based on their open capabilities. This information may be stored as structured data at NEF 352 or at data store NF using a standardized interface. The NEF 352 may then re-expose the stored information to other NFs and AFs, or for other purposes such as analysis. In addition, NEF 352 may expose an interface based on Nnef services.
NRF 354 may support a service discovery function, receive NF discovery requests from NF instances, and provide information of the discovered NF instances to the NF instances. NRF 354 also maintains information of available NF instances and the services supported by them. As used herein, the terms "instantiate," "instance," and the like may refer to the creation of an instance, and an "instance" may refer to a specific occurrence of an object, which may occur, for example, during execution of program code. Further, NRF 354 may expose an interface based on the Nnrf service.
PCF 356 may provide policy rules to control plane functions to enforce those policy rules and may also support a unified policy framework to manage network behavior. PCF 356 may also implement a front end to access subscription information related to policy decisions in the UDR of UDM 358. In addition to communicating with functions through reference points as shown, PCF 356 also presents an interface based on the Npcf service.
The UDM 358 may process subscription related information to support network entities in handling communication sessions and may store subscription data for the UE 302. For example, subscription data may be communicated via an N8 reference point between the UDM 358 and the AMF 344. UDM 358 may include two parts: application front-end and User Data Record (UDR). The UDR may store policy data and subscription data for UDM 358 and PCF 356, and/or structured data and application data for NEF 352 for exposure (including PFD for application detection, application request information for multiple UEs 302). The UDR may expose an interface based on the Nudr service to allow UDM 358, PCF 356, and NEF 352 to access specific sets of stored data, as well as to read, update (e.g., add, modify), delete, and subscribe to notifications of related data changes in the UDR. The UDM may include a UDM-FE (UDM front end) that is responsible for handling credentials, location management, subscription management, etc. Several different front ends may serve the same user in different transactions. The UDM-FE accesses subscription information stored in the UDR and performs authentication credential processing, user identification processing, access authorization, registration/mobility management, and subscription management. In addition to communicating with other NFs through reference points as shown, the UDM 358 may also present an interface based on Nudm services.
AF 360 may provide application impact on traffic routing, provide access to the NEF, and interact with the policy framework for policy control.
In some embodiments, the 5GC 340 may enable edge computing by selecting an operator/third party service that is geographically close to the point where the UE 302 connects to the network. This may reduce latency and load on the network. To provide edge computing implementations, the 5GC 340 may select a UPF 348 close to the UE 302 and perform traffic steering from the UPF 348 to the data network 336 over the N6 interface. This may be based on UE subscription data, UE location, and information provided by AF 360. Thus, AF 360 may affect UPF (re) selection and traffic routing. Based on the operator deployment, the network operator may allow AF 360 to interact directly with the associated NF when AF 360 is considered a trusted entity. In addition, AF 360 may expose an interface based on Naf services.
The data network 336 may represent various network operator services, internet access, or third party services that may be provided by one or more servers, including, for example, an application/content server 338.
Fig. 4 illustrates a wireless network 400 in accordance with various embodiments. The wireless network 400 may include a UE 402 in wireless communication with AN 404. The UE 402 and the AN 404 may be similar to and substantially interchangeable with the like-named components described elsewhere herein.
UE 402 may be communicatively coupled with AN 404 via connection 406. Connection 406 is shown as an air interface to enable communicative coupling and may operate at millimeter wave or below 6GHz frequencies in accordance with a cellular communication protocol, such as the LTE protocol or the 5G NR protocol.
The UE 402 may include a host platform 408 coupled with a modem platform 410. Host platform 408 may include application processing circuitry 412, which may be coupled with protocol processing circuitry 414 of modem platform 410. The application processing circuitry 412 may run various applications that provide/receive application data for the UE 402. The application processing circuitry 412 may also implement one or more layer operations to send and receive application data to and from the data network. These layer operations may include transport (e.g., UDP) and internet (e.g., IP) operations.
Protocol processing circuitry 414 may implement one or more layers of operations to facilitate the transmission or reception of data over connection 406. Layer operations implemented by the protocol processing circuit 414 may include, for example, medium Access Control (MAC), radio Link Control (RLC), packet Data Convergence Protocol (PDCP), radio Resource Control (RRC), and non-access stratum (NAS) operations.
Modem platform 410 may further include digital baseband circuitry 416, which digital baseband circuitry 416 may implement one or more layer operations "below" the layer operations performed by protocol processing circuitry 414 in the network protocol stack. These operations may include, for example, PHY operations including one or more of HARQ-ACK functions, scrambling/descrambling, encoding/decoding, layer mapping/demapping, modulation symbol mapping, received symbol/bit metric determination, multi-antenna port precoding/decoding, where these functions may include one or more of space-time, space-frequency, or spatial coding, reference signal generation/detection, preamble sequence generation and/or decoding, synchronization sequence generation/detection, control channel signal blind decoding, and other related functions.
Modem platform 410 may further include transmit circuitry 418, receive circuitry 420, RF circuitry 422, and RF front end (RFFE) circuitry 424, which may include or be connected to one or more antenna panels 426. Briefly, the transmit circuitry 418 may include digital-to-analog converters, mixers, intermediate Frequency (IF) components, and the like; the receive circuit 420 may include analog-to-digital converters, mixers, IF components, etc.; RF circuitry 422 may include low noise amplifiers, power tracking components, and the like; RFFE circuit 424 may include filters (e.g., surface/bulk acoustic wave filters), switches, antenna tuners, beam forming components (e.g., phased array antenna components), and so forth. The selection and arrangement of the components of transmit circuitry 418, receive circuitry 420, RF circuitry 422, RFFE circuitry 424, and antenna panel 426 (collectively, "transmit/receive components") may be specific to the specifics of the particular implementation, e.g., whether the communication is Time Division Multiplexed (TDM) or Frequency Division Multiplexed (FDM), at mmWave or below 6GHz frequencies, etc. In some embodiments, the transmit/receive components may be arranged in a plurality of parallel transmit/receive chains, and may be arranged in the same or different chips/modules, etc.
In some embodiments, protocol processing circuit 414 may include one or more instances of control circuitry (not shown) to provide control functions for the transmit/receive components.
UE reception may be established through and via antenna panel 426, RFFE circuitry 424, RF circuitry 422, receive circuitry 420, digital baseband circuitry 416, and protocol processing circuitry 414. In some embodiments, the antenna panel 426 may receive transmissions from the AN 404 by receiving beamformed signals received by multiple antennas/antenna elements of one or more antenna panels 426.
UE transmissions may be established via and through protocol processing circuitry 414, digital baseband circuitry 416, transmit circuitry 418, RF circuitry 422, RFFE circuitry 424, and antenna panel 426. In some embodiments, the transmit component of the UE 402 may apply spatial filtering to the data to be transmitted to form a transmit beam that is transmitted by the antenna elements of the antenna panel 426.
Similar to the UE 402, the AN 404 may include a host platform 428 coupled with a modem platform 430. Host platform 428 may include application processing circuitry 432 coupled with protocol processing circuitry 434 of modem platform 430. The modem platform may also include digital baseband circuitry 436, transmit circuitry 438, receive circuitry 440, RF circuitry 442, RFFE circuitry 444, and antenna panel 446. The components of AN 404 may be similar to, and substantially interchangeable with, the same-name components of UE 402. In addition to performing data transmission/reception as described above, the components of the AN 404 may perform various logic functions including, for example, radio Network Controller (RNC) functions such as radio bearer management, uplink and downlink dynamic radio resource management, and data packet scheduling.
Fig. 5 is a block diagram illustrating components capable of reading instructions from a machine-readable or computer-readable medium (e.g., a non-transitory machine-readable storage medium) and performing any one or more of the methods discussed herein, according to some example embodiments. In particular, fig. 5 shows a schematic diagram of a hardware resource 500, the hardware resource 500 comprising one or more processors (or processor cores) 510, one or more memory/storage devices 520, and one or more communication resources 530, wherein each of these processors, memory/storage devices, and communication resources may be communicatively coupled via a bus 540 or other interface circuitry. For embodiments that utilize node virtualization (e.g., network Function Virtualization (NFV)), the hypervisor 502 may be executed to provide an execution environment for one or more network slices/sub-slices to utilize the hardware resources 500.
Processor 510 may include, for example, a processor 512 and a processor 514. Processor 510 may be, for example, a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) processor, a Complex Instruction Set Computing (CISC) processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP) such as a baseband processor, an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a Radio Frequency Integrated Circuit (RFIC), another processor (including those discussed herein), or any suitable combination thereof.
Memory/storage 520 may include main memory, a disk storage device, or any suitable combination thereof. Memory/storage 520 may include, but is not limited to, any type of volatile, nonvolatile, or semi-volatile memory, such as Dynamic Random Access Memory (DRAM), static Random Access Memory (SRAM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory, solid state memory, and the like.
Communication resources 530 may include an interconnection or network interface controller, component, or other suitable device to communicate with one or more peripheral devices 504 or one or more databases 506 or other network elements via network 508. For example, the communication resources 530 may include wired communication components (e.g., for coupling via USB, Ethernet, etc.), cellular communication components, Near Field Communication (NFC) components, Bluetooth® (or Bluetooth® Low Energy) components, Wi-Fi® components, and other communication components.
The instructions 550 may include software, programs, applications, applets, applications, or other executable code for causing at least any one of the processors 510 to perform any one or more of the methods discussed herein. The instructions 550 may reside, completely or partially, within at least one of the processor 510 (e.g., in a cache of the processor), the memory/storage 520, or any suitable combination thereof. Further, any portion of instructions 550 may be transferred from any combination of peripherals 504 or databases 506 to hardware resource 500. Thus, the memory of the processor 510, the memory/storage device 520, the peripheral devices 504, and the database 506 are examples of computer-readable and machine-readable media.
The following paragraphs describe examples of various embodiments.
Example 1 includes an apparatus for supporting Artificial Intelligence (AI) or Machine Learning (ML) in a wireless communication system, wherein the apparatus comprises processor circuitry configured to: verify the new or updated AI or ML entity based on entity verification data; test the new or updated AI or ML entity based on entity test data when the verification result of the new or updated AI or ML entity meets an entity verification result expectation; and deploy the new or updated AI or ML entity to the target AI or ML enabled function when the test results of the new or updated AI or ML entity satisfy entity test result expectations and the new or updated AI or ML entity is selected to be deployed to the target AI or ML enabled function.
Example 2 includes the apparatus of example 1, wherein the processor circuit is further configured to: the new or updated AI or ML entity is generated by training the original AI or ML entity based on entity training data.
Example 3 includes the apparatus of example 1, wherein the processor circuit is further configured to: the new or updated AI or ML entity is further trained when the verification result of the new or updated AI or ML entity does not meet the entity verification result expectation.
Example 4 includes the apparatus of example 1, wherein the processor circuit is further configured to: the new or updated AI or ML entity is further trained when the test results of the new or updated AI or ML entity do not meet the entity test result expectations.
Example 5 includes the apparatus of example 1, wherein the processor circuit is further configured to: verifying whether the new or updated AI or ML entity is capable of operating in the target AI or ML enabled function or in a reference AI or ML enabled function, wherein the reference AI or ML enabled function is the same as or capable of emulating the target AI or ML enabled function.
Example 6 includes the apparatus of example 5, wherein the new or updated AI or ML entity is deployed to the target AI or ML enabled function when it is verified that the new or updated AI or ML entity is capable of operating in the target AI or ML enabled function or in the reference AI or ML enabled function.
Example 7 includes the apparatus of example 1, wherein verifying the new or updated AI or ML entity comprises: the performance of the new or updated AI or ML entity is evaluated based on the entity verification data.
Example 8 includes the apparatus of example 1, wherein testing the new or updated AI or ML entity comprises: the performance of the new or updated AI or ML entity is evaluated based on the entity test data.
Example 9 includes the apparatus of example 1, wherein the old AI or ML entity is replaced with the new or updated AI or ML entity when the old AI or ML entity is present in the target AI or ML enabled function for the same reasoning purpose as the new or updated AI or ML entity.
Example 10 includes an apparatus for supporting Artificial Intelligence (AI) or Machine Learning (ML) in a wireless communication system, wherein the apparatus comprises processor circuitry configured to: initialize an AI or ML enabled function integrated with an AI or ML entity; verify the AI or ML enabled function based on function verification data; test the AI or ML enabled function based on functional test data when the verification result of the AI or ML enabled function meets a function verification result expectation; and deploy the AI or ML enabled function to a target entity when the test result of the AI or ML enabled function meets a functional test result expectation and the AI or ML enabled function is selected to be deployed to the target entity.
Example 11 includes the apparatus of example 10, wherein the processor circuit is further configured to: the AI or ML entity is further trained when the verification result of the AI or ML enabled function does not meet the function verification result expectation.
Example 12 includes the apparatus of example 10, wherein the processor circuit is further configured to: when the test results of the AI or ML enabled function do not meet the functional test result expectation, the AI or ML enabled function is upgraded with a new or updated AI or ML entity.
Example 13 includes the apparatus of example 10, wherein the processor circuit is further configured to: verifying whether the AI or ML enabled function is capable of working in the target entity or in a reference entity that is the same as or capable of emulating the target entity.
Example 14 includes the apparatus of example 13, wherein the AI or ML enabled function is deployed to the target entity when it is verified that the AI or ML enabled function is capable of working in the target entity or in the reference entity.
Example 15 includes the apparatus of example 10, wherein verifying the AI or ML enabled function comprises: the performance of the AI or ML enabled functions is evaluated based on the function verification data.
Example 16 includes the apparatus of example 10, wherein testing the AI or ML enabled function comprises: the performance of the AI or ML enabled functions is evaluated based on the functional test data.
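Examples 10 to 16 apply the same verify/test gate to the AI or ML enabled function as a whole, with further training on a failed verification (example 11) and an entity upgrade on a failed test (example 12). The sketch below makes those steps explicit; the callables, the 0.9 thresholds, and the dict-based toy function are assumptions.

# Illustrative sketch of the function-level lifecycle in examples 10 to 16.
from typing import Any, Callable

def prepare_enabled_function(
    function: Any,
    verify_score: Callable[[Any], float],     # example 15: evaluate on function verification data
    test_score: Callable[[Any], float],       # example 16: evaluate on function test data
    retrain_entity: Callable[[Any], Any],     # example 11: further train the embedded entity
    upgrade_entity: Callable[[Any], Any],     # example 12: swap in a new or updated entity
    verification_expectation: float = 0.9,
    test_expectation: float = 0.9,
    max_iterations: int = 5,
) -> Any:
    """Verify and test an AI or ML enabled function before it is deployed (example 10)."""
    for _ in range(max_iterations):
        if verify_score(function) < verification_expectation:
            function = retrain_entity(function)          # example 11
            continue
        if test_score(function) < test_expectation:
            function = upgrade_entity(function)          # example 12
            continue
        return function                                  # ready to be deployed
    raise RuntimeError("enabled function never met the verification/test expectations")

if __name__ == "__main__":
    toy_function = {"entity_version": 1, "quality": 0.5}
    ready = prepare_enabled_function(
        toy_function,
        verify_score=lambda f: f["quality"],
        test_score=lambda f: f["quality"],
        retrain_entity=lambda f: {**f, "quality": f["quality"] + 0.3},
        upgrade_entity=lambda f: {**f, "entity_version": f["entity_version"] + 1,
                                  "quality": f["quality"] + 0.3},
    )
    print("deployable function:", ready)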
Example 17 includes the apparatus of example 10, wherein the processor circuit is further configured to: after the AI or ML enabled function is activated, the AI or ML enabled function is monitored.
Example 18 includes the apparatus of example 17, wherein monitoring the AI or ML enabled function includes performance evaluation or fault monitoring of the AI or ML enabled function.
Example 19 includes the apparatus of example 10, wherein the processor circuit is further configured to: configuring or terminating the AI or ML enabled function.
Example 20 includes the apparatus of example 19, wherein configuring the AI or ML enabled function comprises initially configuring, reconfiguring, activating, or deactivating the AI or ML enabled function.
Example 21 includes the apparatus of example 20, wherein the AI or ML entity is activated, deactivated, or terminated with the AI or ML enabled function when the AI or ML enabled function is activated, deactivated, or terminated.
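Examples 17 to 21 tie the embedded entity's lifecycle to that of the enabled function and add monitoring after activation. The small state machine below is a sketch only; the state names, method names, and the fault threshold are assumptions.

# Hypothetical state handling for examples 17 to 21.
class AIMLEnabledFunction:
    def __init__(self, entity_id: str):
        self.entity_id = entity_id
        self.state = "initial"
        self.entity_active = False
        self.faults = []

    def configure(self, **settings):
        """Example 20: initial configuration or reconfiguration."""
        self.settings = settings
        if self.state == "initial":
            self.state = "configured"

    def activate(self):
        """Examples 20 and 21: activating the function also activates its entity."""
        self.state = "active"
        self.entity_active = True

    def deactivate(self):
        """Examples 20 and 21: deactivating the function also deactivates its entity."""
        self.state = "configured"
        self.entity_active = False

    def terminate(self):
        """Examples 19 and 21: terminating the function terminates its entity with it."""
        self.state = "terminated"
        self.entity_active = False

    def monitor(self, performance: float, fault_threshold: float = 0.5):
        """Examples 17 and 18: performance evaluation or fault monitoring after activation."""
        if self.state != "active":
            return None
        if performance < fault_threshold:
            self.faults.append(performance)
        return {"performance": performance, "faults": len(self.faults)}

if __name__ == "__main__":
    fn = AIMLEnabledFunction(entity_id="entity-001")
    fn.configure(cells=["cell-1", "cell-2"])
    fn.activate()
    print(fn.monitor(performance=0.42))     # below the threshold, counted as a fault
    fn.terminate()
    print(fn.state, fn.entity_active)       # "terminated", entity terminated with the function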
Example 22 includes a method for supporting Artificial Intelligence (AI) or Machine Learning (ML) in a wireless communication system, the method comprising: verifying the new or updated AI or ML entity based on entity verification data; testing the new or updated AI or ML entity based on entity test data when the verification result of the new or updated AI or ML entity meets an entity verification result expectation; and deploying the new or updated AI or ML entity to the target AI or ML enabled function when the test results of the new or updated AI or ML entity satisfy entity test results expectations and the new or updated AI or ML entity is selected to be deployed to the target AI or ML enabled function.
Example 23 includes the method of example 22, further comprising: the new or updated AI or ML entity is generated by training the original AI or ML entity based on entity training data.
Example 24 includes the method of example 22, further comprising: the new or updated AI or ML entity is further trained when the verification result of the new or updated AI or ML entity does not meet the entity verification result expectation.
Example 25 includes the method of example 22, further comprising: the new or updated AI or ML entity is further trained when the test results of the new or updated AI or ML entity do not meet the entity test result expectations.
Example 26 includes the method of example 22, further comprising: verifying whether the new or updated AI or ML entity is capable of operating in the target AI or ML enabled function or in a reference AI or ML enabled function, wherein the reference AI or ML enabled function is the same as or capable of emulating the target AI or ML enabled function.
Example 27 includes the method of example 26, wherein the new or updated AI or ML entity is deployed to the target AI or ML enabled function when it is verified that the new or updated AI or ML entity is capable of operating in the target AI or ML enabled function or in the reference AI or ML enabled function.
Example 28 includes the method of example 22, wherein verifying the new or updated AI or ML entity comprises: the performance of the new or updated AI or ML entity is evaluated based on the entity verification data.
Example 29 includes the method of example 22, wherein testing the new or updated AI or ML entity comprises: the performance of the new or updated AI or ML entity is evaluated based on the entity test data.
Example 30 includes the method of example 22, wherein when an old AI or ML entity exists in the target AI or ML enabled function for the same inference purpose as the new or updated AI or ML entity, the old AI or ML entity is replaced with the new or updated AI or ML entity.
Example 31 includes a method for supporting Artificial Intelligence (AI) or Machine Learning (ML) in a wireless communication system, the method comprising: initializing an AI or ML enabled function integrated with an AI or ML entity; verifying the AI or ML enabled function based on function verification data; testing the AI or ML enabled function based on functional test data when the verification result of the AI or ML enabled function meets a function verification result expectation; and deploying the AI or ML enabled function to a target entity when the test result of the AI or ML enabled function meets a functional test result expectation and the AI or ML enabled function is selected to be deployed to the target entity.
Example 32 includes the method of example 31, further comprising: the AI or ML entity is further trained when the verification result of the AI or ML enabled function does not meet the function verification result expectation.
Example 33 includes the method of example 31, further comprising: when the test results of the AI or ML enabled function do not meet the functional test result expectation, the AI or ML enabled function is upgraded with a new or updated AI or ML entity.
Example 34 includes the method of example 31, further comprising: verifying whether the AI or ML enabled function is capable of working in the target entity or in a reference entity that is the same as or capable of emulating the target entity.
Example 35 includes the method of example 34, wherein the AI or ML-enabled function is deployed to the target entity when it is verified that the AI or ML-enabled function is capable of working in the target entity or in the reference entity.
Example 36 includes the method of example 31, wherein verifying the AI or ML enabled function comprises: the performance of the AI or ML enabled functions is evaluated based on the function verification data.
Example 37 includes the method of example 31, wherein testing the AI or ML enabled function comprises: the performance of the AI or ML enabled functions is evaluated based on the functional test data.
Example 38 includes the method of example 31, further comprising: after the AI or ML enabled function is activated, the AI or ML enabled function is monitored.
Example 39 includes the method of example 38, wherein monitoring the AI or ML enabled function includes performance evaluation or fault monitoring of the AI or ML enabled function.
Example 40 includes the method of example 31, further comprising: configuring or terminating the AI or ML enabled function.
Example 41 includes the method of example 40, wherein configuring the AI or ML enabled function comprises initially configuring, reconfiguring, activating, or deactivating the AI or ML enabled function.
Example 42 includes the method of example 41, wherein the AI or ML entity is activated, deactivated, or terminated with the AI or ML enabled function when the AI or ML enabled function is activated, deactivated, or terminated.
Example 43 includes a computer-readable storage medium having stored thereon computer-executable instructions, wherein the computer-executable instructions, when executed by processor circuitry for use in an apparatus for supporting Artificial Intelligence (AI) or Machine Learning (ML) in a wireless communication system, cause the processor circuitry to perform the method of any of examples 22-42.
Example 44 includes an apparatus to support Artificial Intelligence (AI) or Machine Learning (ML) in a wireless communication system, comprising means to perform the method of any of examples 22-42.
Although certain embodiments have been illustrated and described herein for purposes of description, a wide variety of alternate and/or equivalent embodiments or implementations calculated to achieve the same purposes may be substituted for the embodiments shown and described without departing from the scope of the present disclosure. This application is intended to cover any adaptations or variations of the embodiments discussed herein. Accordingly, it is manifestly intended that the embodiments described herein be limited only by the following claims and equivalents thereof.

Claims (25)

1. An apparatus for supporting Artificial Intelligence (AI) or Machine Learning (ML) in a wireless communication system, wherein the apparatus comprises processor circuitry configured to:
verifying the new or updated AI or ML entity based on entity verification data;
testing the new or updated AI or ML entity based on entity test data when the verification result of the new or updated AI or ML entity meets an entity verification result expectation; and
when the test results of the new or updated AI or ML entity meet entity test results expectations and the new or updated AI or ML entity is selected to be deployed to a target AI or ML enabled function, the new or updated AI or ML entity is deployed to the target AI or ML enabled function.
2. The apparatus of claim 1, wherein the processor circuit is further configured to:
the new or updated AI or ML entity is generated by training the original AI or ML entity based on entity training data.
3. The apparatus of claim 1, wherein the processor circuit is further configured to:
the new or updated AI or ML entity is further trained when the verification result of the new or updated AI or ML entity does not meet the entity verification result expectation.
4. The apparatus of claim 1, wherein the processor circuit is further configured to:
the new or updated AI or ML entity is further trained when the test results of the new or updated AI or ML entity do not meet the entity test result expectations.
5. The apparatus of claim 1, wherein the processor circuit is further configured to:
verifying whether the new or updated AI or ML entity is capable of operating in the target AI or ML enabled function or in a reference AI or ML enabled function, wherein the reference AI or ML enabled function is the same as or capable of emulating the target AI or ML enabled function.
6. The apparatus of claim 5, wherein the new or updated AI or ML entity is deployed to the target AI or ML enabled function when it is verified that the new or updated AI or ML entity is capable of operating in the target AI or ML enabled function or in the reference AI or ML enabled function.
7. The apparatus of claim 1, wherein verifying the new or updated AI or ML entity comprises: the performance of the new or updated AI or ML entity is evaluated based on the entity verification data.
8. The apparatus of claim 1, wherein testing the new or updated AI or ML entity comprises: the performance of the new or updated AI or ML entity is evaluated based on the entity test data.
9. The apparatus of claim 1, wherein the old AI or ML entity is replaced with the new or updated AI or ML entity when the old AI or ML entity is present in the target AI or ML enabled function for the same inference purpose as the new or updated AI or ML entity.
10. An apparatus for supporting Artificial Intelligence (AI) or Machine Learning (ML) in a wireless communication system, wherein the apparatus comprises processor circuitry configured to:
initializing an AI or ML enabled function integrated with an AI or ML entity;
verifying the AI or ML enabled function based on function verification data;
testing the AI or ML enabled function based on functional test data when the verification result of the AI or ML enabled function meets a function verification result expectation; and
the AI or ML enabled function is deployed to a target entity when the test result of the AI or ML enabled function meets a functional test result expectation and the AI or ML enabled function is selected to be deployed to the target entity.
11. The apparatus of claim 10, wherein the processor circuit is further configured to:
the AI or ML entity is further trained when the verification result of the AI or ML enabled function does not meet the function verification result expectation.
12. The apparatus of claim 10, wherein the processor circuit is further configured to:
when the test results of the AI or ML enabled function do not meet the functional test result expectation, the AI or ML enabled function is upgraded with a new or updated AI or ML entity.
13. The apparatus of claim 10, wherein the processor circuit is further configured to:
verifying whether the AI or ML enabled function is capable of working in the target entity or in a reference entity that is the same as or capable of emulating the target entity.
14. The apparatus of claim 13, wherein the AI or ML enabled function is deployed to the target entity upon verifying that the AI or ML enabled function is capable of working in the target entity or in the reference entity.
15. The apparatus of claim 10, wherein verifying the AI or ML enabled function comprises: the performance of the AI or ML enabled functions is evaluated based on the function verification data.
16. The apparatus of claim 10, wherein testing the AI or ML enabled function comprises: the performance of the AI or ML enabled functions is evaluated based on the functional test data.
17. The apparatus of claim 10, wherein the processor circuit is further configured to:
after the AI or ML enabled function is activated, the AI or ML enabled function is monitored.
18. The apparatus of claim 17, wherein monitoring the AI or ML enabled function comprises performance evaluation or fault monitoring of the AI or ML enabled function.
19. The apparatus of claim 10, wherein the processor circuit is further configured to:
configuring or terminating the AI or ML enabled function.
20. The apparatus of claim 19, wherein configuring the AI or ML enabled function comprises initially configuring, reconfiguring, activating, or deactivating the AI or ML enabled function.
21. The apparatus of claim 20, wherein the AI or ML entity is activated, deactivated, or terminated with the AI or ML enabled function when the AI or ML enabled function is activated, deactivated, or terminated.
22. A computer-readable storage medium having stored thereon computer-executable instructions, wherein the computer-executable instructions, when executed by processor circuitry for use in an apparatus for supporting Artificial Intelligence (AI) or Machine Learning (ML) in a wireless communication system, cause the processor circuitry to:
verifying the new or updated AI or ML entity based on entity verification data;
testing the new or updated AI or ML entity based on entity test data when the verification result of the new or updated AI or ML entity meets an entity verification result expectation; and
when the test results of the new or updated AI or ML entity meet entity test results expectations and the new or updated AI or ML entity is selected to be deployed to a target AI or ML enabled function, the new or updated AI or ML entity is deployed to the target AI or ML enabled function.
23. The computer-readable storage medium of claim 22, wherein the computer-executable instructions, when executed by the processor circuit, further cause the processor circuit to:
the new or updated AI or ML entity is generated by training the original AI or ML entity based on entity training data.
24. A computer-readable storage medium having stored thereon computer-executable instructions, wherein the computer-executable instructions, when executed by processor circuitry for use in an apparatus for supporting Artificial Intelligence (AI) or Machine Learning (ML) in a wireless communication system, cause the processor circuitry to:
initializing an AI or ML enabled function integrated with an AI or ML entity;
verifying the AI or ML enabled function based on function verification data;
testing the AI or ML enabled function based on functional test data when the verification result of the AI or ML enabled function meets a function verification result expectation; and
the AI or ML enabled function is deployed to a target entity when the test result of the AI or ML enabled function meets a functional test result expectation and the AI or ML enabled function is selected to be deployed to the target entity.
25. The computer readable storage medium of claim 24, wherein the computer executable instructions, when executed by the processor circuit, further cause the processor circuit to:
the AI or ML entity is further trained when the verification result of the AI or ML enabled function does not meet the function verification result expectation.
CN202211382241.8A 2021-12-16 2022-11-07 Apparatus for supporting artificial intelligence or machine learning in wireless communication system Pending CN116266815A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202163290270P 2021-12-16 2021-12-16
US63/290,270 2021-12-16

Publications (1)

Publication Number Publication Date
CN116266815A (en) 2023-06-20

Family

ID=86744181

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211382241.8A Pending CN116266815A (en) 2021-12-16 2022-11-07 Apparatus for supporting artificial intelligence or machine learning in wireless communication system

Country Status (1)

Country Link
CN (1) CN116266815A (en)

Similar Documents

Publication Publication Date Title
CN115694700A (en) Apparatus for use in a wireless communication system
CN114765485A (en) Apparatus for use in user equipment
CN116648900A (en) Support for lifecycle management for edge-enabled servers and edge configuration servers
CN114765826A (en) Arrangement in an access node
CN114641044A (en) Apparatus for use in source base station, target base station and user equipment
CN113766502A (en) Apparatus for use in a UE, SMF entity, and provisioning server
CN116266815A (en) Apparatus for supporting artificial intelligence or machine learning in wireless communication system
CN118042463A (en) Apparatus and method for data verification
CN117234889A (en) ML entity testing device for management service consumer and producer
CN117528565A (en) Apparatus and computer readable storage medium for mitigating EAS discovery failures
CN116744333A (en) Device for supporting 6G OAM system
CN117014852A (en) Device for policy provisioning of UE
CN117251224A (en) ML entity loading device for management service producer
CN116981056A (en) Apparatus for artificial intelligence or machine learning assisted beam management
CN116264747A (en) Device for managing data analysis and management service consumer and producer
CN115884234A (en) Apparatus for use in a wireless communication system
CN116756556A (en) MnS and method for supporting ML training
CN115834314A (en) Arrangement in a base station
CN116390118A (en) Apparatus for use in ECSP and PLMN management systems
CN117156496A (en) Apparatus for use in user plane service function entity
CN115278637A (en) Apparatus for use in a core network
CN114584270A (en) Apparatus for use in user equipment
CN115250465A (en) Apparatus for use in a core network
CN117595974A (en) User equipment and device used therein
CN117479178A (en) Network control repeater and device used therein

Legal Events

Date Code Title Description
PB01 Publication