CN115250502A - Apparatus and method for RAN intelligent network

Apparatus and method for RAN intelligent network

Info

Publication number
CN115250502A
Authority
CN
China
Prior art keywords
gnb
interface
inference result
inference
result information
Prior art date
Legal status
Pending
Application number
CN202210334930.5A
Other languages
Chinese (zh)
Inventor
李梓伊
亚历山大·西罗金
叶书苹
Current Assignee
Intel Corp
Original Assignee
Intel Corp
Priority date
Filing date
Publication date
Application filed by Intel Corp
Publication of CN115250502A

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 24/00: Supervisory, monitoring or testing arrangements
    • H04W 24/10: Scheduling measurement reports; Arrangements for measurement reports
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 12/00: Security arrangements; Authentication; Protecting privacy or anonymity
    • H04W 12/06: Authentication
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 28/00: Network traffic management; Network resource management
    • H04W 28/02: Traffic management, e.g. flow control or congestion control
    • H04W 28/08: Load balancing or load distribution
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 74/00: Wireless channel access, e.g. scheduled or random access
    • H04W 74/08: Non-scheduled or contention based access, e.g. random access, ALOHA, CSMA [Carrier Sense Multiple Access]

Abstract

The present disclosure relates to apparatus and methods for RAN intelligent networks. A method and apparatus for a gNodeB (gNB) are provided. The method comprises the following steps: sending a request for Machine Learning (ML) inference result information of a second gNB to the second gNB through an Xn interface between the gNB and the second gNB; receiving an ML inference result report from the second gNB through the Xn interface, the ML inference result report including the ML inference result information; and making one or more decisions based on the received ML inference result report and one or more ML inference results of the gNB.

Description

Apparatus and method for RAN intelligent network
Priority declaration
This application claims priority to PCT International Application No. PCT/CN2021/084924, entitled "MACHINE LEARNING CAPABILITY AND INFERENCE RESULT SIGNALING FOR RAN INTELLIGENT NETWORK", filed on April 1, 2021. The entire contents of that application are incorporated herein by reference.
Technical Field
Embodiments of the present disclosure relate generally to the field of communications, and in particular, to an apparatus and method for a Radio Access Network (RAN) intelligent network.
Background
Machine Learning (ML) applications for RAN intelligence have been extensively studied in academia and in standardization organizations such as O-RAN and 3GPP RAN3, SA2 and SA5. In 3GPP Release 17, 3GPP RAN3 has agreed to study the standardization impact of enabling RAN intelligent networks as part of the New Radio (NR) and E-UTRA-NR Dual Connectivity (EN-DC) data collection enhancement study. As part of the RAN intelligent network activity, the functional framework, its interfaces and the data collection impact will be studied for the identified use cases. Load balancing, power saving and mobility enhancement have been considered as the main use cases.
Disclosure of Invention
In one aspect of the present disclosure, a method for a gNodeB (gNB) is provided, including: sending a request for Machine Learning (ML) inference result information of a second gNB to the second gNB through an Xn interface between the gNB and the second gNB; receiving an ML inference result report from the second gNB through the Xn interface, the ML inference result report including the ML inference result information; and making one or more decisions based on the received ML inference result report and the one or more ML inference results of the gNB.
In another aspect of the present disclosure, there is provided an apparatus for a gNodeB (gNB), including: an interface circuit; and a processor circuit coupled with the interface circuit, wherein the processor circuit is configured to: send a request for Machine Learning (ML) inference result information of a second gNB to the second gNB through an Xn interface between the gNB and the second gNB; receive an ML inference result report from the second gNB through the Xn interface, the ML inference result report including the ML inference result information; and make one or more decisions based on the received ML inference result report and one or more ML inference results of the gNB.
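The following Python sketch is provided purely as an illustration of the summarized flow: the gNB requests the peer gNB's ML inference result information over Xn, receives the report, and combines it with its own inference to make a decision. The class names, field names and thresholds (XnStub, InferenceResultReport, predicted_load, the 0.8 confidence cut-off) are assumptions made for illustration and do not correspond to actual XnAP definitions.

```python
from dataclasses import dataclass

@dataclass
class InferenceResultReport:
    """Stand-in for the ML inference result report received from the second gNB."""
    predicted_load: float   # e.g., predicted resource utilization of the peer, 0.0-1.0
    valid_time_ms: int      # corresponds to the proposed "predicted effective time" IE
    confidence: float       # corresponds to the proposed "predicted confidence level" IE

class XnStub:
    """Toy transport standing in for Xn signaling between the two gNBs."""
    def send_inference_result_request(self, peer_id: str) -> None:
        print(f"Xn: inference result request sent to {peer_id}")

    def receive_inference_result_report(self, peer_id: str) -> InferenceResultReport:
        # A canned report; a real gNB would decode the received XnAP message here.
        return InferenceResultReport(predicted_load=0.35, valid_time_ms=5000, confidence=0.9)

def decide_offload(xn: XnStub, peer_id: str, local_predicted_load: float) -> bool:
    """The three steps of the method: request, receive, then decide using both gNBs' inferences."""
    xn.send_inference_result_request(peer_id)
    report = xn.receive_inference_result_report(peer_id)
    # Offload only if the peer's prediction is trustworthy and clearly lower than ours.
    return report.confidence >= 0.8 and report.predicted_load + 0.2 < local_predicted_load

if __name__ == "__main__":
    print(decide_offload(XnStub(), "gNB-2", local_predicted_load=0.8))  # True
```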
Drawings
Embodiments of the disclosure will be described by way of example, and not limitation, with reference to the figures of the accompanying drawings, in which like references indicate similar elements, and in which:
fig. 1 illustrates an example architecture of a system according to some embodiments of the present disclosure.
Fig. 2 illustrates exemplary successful operation of the inference result report initiation process, in accordance with various embodiments of the present disclosure.
Fig. 3 illustrates exemplary unsuccessful operation of an inference result reporting initiation process, in accordance with various embodiments of the present disclosure.
Fig. 4 illustrates exemplary successful operation of the inference result reporting process, according to various embodiments of the present disclosure.
Fig. 5 illustrates an exemplary predictive handover request/response procedure in accordance with various embodiments of the present disclosure.
Fig. 6 illustrates exemplary successful operation of a machine learning capability report initiation process according to various embodiments of the present disclosure.
Fig. 7 illustrates exemplary unsuccessful operation of a machine learning capability report initiation process, according to various embodiments of the present disclosure.
Fig. 8 illustrates an exemplary first ML model according to various embodiments of the present disclosure.
Fig. 9 illustrates an exemplary second ML model according to various embodiments of the present disclosure.
Fig. 10 illustrates an example message flow for ML-based load balancing, in accordance with various embodiments of the present disclosure.
Fig. 11 illustrates another exemplary message flow for ML-based load balancing, in accordance with various embodiments of the present disclosure.
Fig. 12 shows a flow diagram of a method 1200 for a RAN intelligent network, in accordance with some embodiments of the present disclosure.
Fig. 13 illustrates a network according to various embodiments of the disclosure.
Fig. 14 schematically illustrates a wireless network in accordance with various embodiments of the present disclosure.
Fig. 15 illustrates example components of a device according to some embodiments of the present disclosure.
Fig. 16 illustrates an example of an infrastructure device in accordance with various embodiments.
Fig. 17 is a block diagram illustrating components capable of reading instructions from a machine-readable or computer-readable medium and performing any one or more of the methodologies discussed herein, according to some example embodiments.
Detailed Description
Various aspects of the illustrative embodiments will be described using terms commonly employed by those skilled in the art to convey the substance of the disclosure to others skilled in the art. However, it will be readily appreciated by those skilled in the art that many alternative embodiments may be practiced using portions of the described aspects. For purposes of explanation, specific numbers, materials, and configurations are set forth in order to provide a thorough understanding of the illustrative embodiments. However, it will be apparent to one skilled in the art that alternative embodiments may be practiced without the specific details. In other instances, well-known features may be omitted or simplified in order not to obscure the illustrative embodiments.
Further, various operations will be described as multiple discrete operations, in a manner that is most helpful in understanding the illustrative embodiments; however, the order of description should not be construed as to imply that these operations are necessarily order dependent. In particular, these operations need not be performed in the order of presentation.
The phrases "in an embodiment," "in one embodiment," and "in some embodiments" are used repeatedly herein. The phrase generally does not refer to the same embodiment; however, it may refer to the same embodiment. The terms "comprising," "having," and "including" are synonymous, unless the context dictates otherwise. The phrases "A or B" and "A/B" mean "(A), (B) or (A and B)".
Fig. 1 illustrates an example architecture of a system 100 according to some embodiments of the present disclosure. The following description is provided for an example system 100 operating in conjunction with a Long Term Evolution (LTE) system standard provided by the 3GPP Technical Specification (TS) and a 5G or New Radio (NR) system standard. However, the example embodiments are not limited in this respect and the described embodiments may be applied to other networks that benefit from the principles described herein, such as future 3GPP systems (e.g., sixth generation (6G)) systems, institute of Electrical and Electronics Engineers (IEEE) 802.16 protocols (e.g., wireless Metropolitan Area Network (MAN), worldwide Interoperability for Microwave Access (WiMAX), etc.), and so forth.
As shown in FIG. 1, the system 100 can include a UE 101a and a UE 101b (collectively referred to as "UE(s) 101"). As used herein, the term "user equipment" or "UE" may refer to devices having radio communication capabilities and may describe remote users of network resources in a communication network. The terms "user equipment" or "UE" may be considered synonyms and may be referred to as a client, a mobile phone, a mobile device, a mobile terminal, a user terminal, a mobile unit, a mobile station, a mobile user, a subscriber, a user, a remote station, an access agent, a user agent, a receiver, a radio, a reconfigurable mobile, and the like. Furthermore, the terms "user equipment" or "UE" may include any type of wireless/wired device or any computing device that includes a wireless communication interface. In this example, the UE 101 is shown as a smartphone (e.g., a handheld touchscreen mobile computing device connectable to one or more cellular networks), but may also include any mobile or non-mobile computing device, such as a consumer electronics device, a cellular phone, a smartphone, a feature phone, a tablet, a wearable computer device, a Personal Digital Assistant (PDA), a pager, a wireless handset, a desktop computer, a laptop computer, an in-vehicle infotainment system (IVI), an in-vehicle entertainment (ICE) device, a dashboard (Instrument Cluster, IC), a heads-up display (HUD) device, an in-vehicle diagnostics (OBD) device, a dashboard mobile Device (DME), a Mobile Data Terminal (MDT), an Electronic Engine Management System (EEMS), an electronic/Engine Control Unit (ECU), an electronic/Engine Control Module (ECM), an embedded system, a microcontroller, a control module, an Engine Management System (EMS), a networked or "smart" device, a Machine Type Communication (MTC) device, a machine-to-machine (M2M), an internet of things (IoT) device, and/or the like.
In some embodiments, any of the UEs 101 may include an IoT UE, which may include a network access layer designed for low power IoT applications that utilize short-term UE connections. IoT UEs may utilize technologies such as M2M or MTC to exchange data with MTC servers or devices via PLMNs, proximity-based services (ProSe) or device-to-device (D2D) communications, sensor networks, or IoT networks. Data exchange for M2M or MTC may be machine initiated data exchange. An IoT network describes interconnected IoT UEs that may include uniquely identifiable embedded computing devices (within the internet infrastructure) with short-term connections. The IoT UE may execute background applications (e.g., keep-alive messages, status updates, etc.) to facilitate connection of the IoT network.
UE 101 may be configured to connect with (e.g., communicatively couple with) RAN 110. In an embodiment, RAN110 may be a Next Generation (NG) RAN or a 5G RAN, an evolved Universal Mobile Telecommunications System (UMTS) terrestrial radio access network (E-UTRAN), or a legacy RAN, such as a UTRAN (UMTS terrestrial radio access network) or a GERAN (GSM (Global System for Mobile communications, originally Groupe Spécial Mobile) EDGE (GSM Evolution) radio access network). As used herein, the term "NG RAN" or the like may refer to RAN110 operating in an NR or 5G system 100, and the term "E-UTRAN" or the like may refer to RAN110 operating in an LTE or 4G system 100. The UE 101 utilizes connections (or channels) 103 and 104, respectively, each connection including a physical communication interface or layer (discussed in further detail below). As used herein, the term "channel" may refer to any tangible or intangible transmission medium that communicates data or a stream of data. The term "channel" may be synonymous and/or equivalent to "communication channel," "data communication channel," "transmission channel," "data transmission channel," "access channel," "data access channel," "link," "data link," "carrier," "radio frequency carrier," and/or any other similar term denoting a path or medium through which data is communicated. In addition, the term "link" may refer to a connection between two devices for the purpose of transmitting and receiving information over a Radio Access Technology (RAT).
In this example, connections 103 and 104 are shown as air interfaces to enable communicative coupling, and may be consistent with a cellular communication protocol, such as a global system for mobile communications (GSM) protocol, a Code Division Multiple Access (CDMA) network protocol, a push-to-talk (PTT) protocol, a cellular PTT (POC) protocol, a Universal Mobile Telecommunications System (UMTS) protocol, a 3GPP Long Term Evolution (LTE) protocol, a fifth generation (5G) protocol, a New Radio (NR) protocol, and/or any other communication protocol discussed herein. In an embodiment, the UE 101 may exchange communication data directly via the ProSe interface 105. The ProSe interface 105 may alternatively be referred to as a Sidelink (SL) interface 105 and may include one or more logical channels including, but not limited to, a Physical Sidelink Control Channel (PSCCH), a physical sidelink shared channel (PSCCH), a Physical Sidelink Discovery Channel (PSDCH), and a Physical Sidelink Broadcast Channel (PSBCH).
UE 101b is shown configured to access an Access Point (AP) 106 (also referred to as "WLAN node 106", "WLAN terminal 106", or "WT106", etc.) via a connection 107. The connection 107 may comprise a local wireless connection, such as a connection consistent with any IEEE 802.11 protocol, where the AP 106 would comprise a wireless fidelity (WiFi) router. In this example, the AP 106 is shown connected to the internet without being connected to the core network of the wireless system (described in further detail below). In various embodiments, UE 101b, RAN110, and AP 106 may be configured to utilize LTE-WLAN aggregation (LWA) operations and/or WLAN LTE/WLAN radio level integration (LWIP) operations with IPsec tunneling. LWA operation may involve UE 101b in RRC _ CONNECTED being configured by RAN node 111 to utilize radio resources of LTE and WLAN. The LWIP operation may involve the UE 101b using WLAN radio resources (e.g., connection 107) via an internet protocol security (IPsec) protocol tunnel to authenticate and encrypt packets (e.g., internet Protocol (IP) packets) sent over the connection 107. An IPsec tunnel may include encapsulating the entire original IP packet and adding a new packet header to protect the original header of the IP packet.
RAN110 may include one or more RAN nodes 111a and 111b (collectively referred to as "RAN node(s) 111") that enable connections 103 and 104. As used herein, the terms "Access Node (AN)", "access point", "RAN node", and the like may describe a device that provides radio baseband functionality for data and/or voice connections between a network and one or more users. These access nodes may be referred to as Base Stations (BSs), next generation node BS (gnbs), RAN nodes, evolved nodebs (enbs), nodebs, road Side Units (RSUs), transmission reception points (TRxP or TRP), etc., and may include ground stations (e.g., ground access points) or satellite stations that provide coverage within a geographic area (e.g., a cell). As used herein, the term "NG RAN node" or the like may refer to a RAN node 111 (e.g., a gNB) operating in the NR or 5G system 100, and the term "E-UTRAN node" or the like may refer to a RAN node 111 (e.g., an eNB) operating in the LTE or 4G system 100. According to various embodiments, the RAN node 111 may be implemented as one or more dedicated physical devices such as a macro cell base station and/or a Low Power (LP) base station for a femto cell, pico cell or other similar cell providing smaller coverage area, less user capacity or higher bandwidth than a macro cell.
In some embodiments, all or part of the RAN node 111 may be implemented as one or more software entities running on a server computer as part of a virtual network, which may be referred to as a Cloud Radio Access Network (CRAN) and/or a virtual baseband unit pool (vbbp). In these embodiments, the CRAN or vbbp may implement RAN functional partitioning, such as: PDCP partitioning, where RRC and PDCP layers are operated by the CRAN/vbbp, while other layer 2 (L2) protocol entities are operated by individual RAN nodes 111; MAC/PHY division, where RRC, PDCP, RLC and MAC layers are operated by the CRAN/vbbp, and PHY layers are operated by individual RAN nodes 111; or "lower PHY" division, where the RRC, PDCP, RLC, MAC layers and upper parts of the PHY layers are operated by the CRAN/vbup and lower parts of the PHY layers are operated by the individual RAN node 111. The virtualization framework allows the processor cores of RAN node 111 to be freed up to execute other virtualized applications. In some implementations, the individual RAN nodes 111 may represent individual gNB-DUs that are connected to the gNB-CUs via individual F1 interfaces (not shown in fig. 1). In these implementations, the gbb-DUs may include one or more remote radio heads or radio front-end modules (RFEM), and the gbb-CUs may be operated by a server (not shown) located in the RAN110 or by a server pool in a similar manner to the CRAN/vbbp. Additionally or alternatively, one or more RAN nodes 111 may be next generation enbs (NG-enbs), which are RAN nodes that provide E-UTRA user plane and control plane protocol terminations towards the UE 101 and which are connected to the 5GC via an NG interface.
In a V2X scenario, one or more RAN nodes 111 may be or act as RSUs. The term "roadside unit" or "RSU" may refer to any transportation infrastructure entity for V2X communication. The RSU may be implemented in or by a suitable RAN node or a fixed (or relatively stationary) UE, where the RSU in or by the UE may be referred to as a "UE-type RSU", the RSU in or by the eNB may be referred to as an "eNB-type RSU", the RSU in or by the gNB may be referred to as a "gNB-type RSU", and so on. In one example, an RSU is a computing device coupled with radio frequency circuitry located at the curb side that provides connectivity support for a passing vehicle UE 101 (vUE 101). The RSU may also include internal data storage circuitry for storing intersection map geometry, traffic statistics, media, and applications/software for sensing and controlling ongoing vehicle and pedestrian traffic. The RSU may operate on the 5.9GHz Direct Short Range Communication (DSRC) band to provide very low latency communications required for high speed events, such as collision avoidance, traffic warnings, etc. Additionally or alternatively, the RSU may operate on the cellular V2X frequency band to provide the low latency communications described above as well as other cellular communication services. Additionally or alternatively, the RSU may operate as a WiFi hotspot (2.4 GHz band) and/or provide a connection to one or more cellular networks to provide uplink and downlink communications. The computing device(s) and some or all of the radio frequency circuitry of the RSU may be enclosed in a weatherproof enclosure suitable for outdoor installation, and may include a network interface controller to provide wired (e.g., ethernet) connectivity to a traffic signal controller and/or a backhaul network.
Any RAN node 111 may terminate the air interface protocol and may be the first point of contact for the UE 101. In some embodiments, any RAN node 111 may fulfill various logical functions of RAN110, including but not limited to Radio Network Controller (RNC) functions such as radio bearer management, uplink and downlink dynamic radio resource management and data packet scheduling, and mobility management.
In an embodiment, the UEs 101 may be configured to communicate with each other or any of the RAN nodes 111 over a multicarrier communication channel in accordance with various communication techniques, such as, but not limited to, orthogonal Frequency Division Multiple Access (OFDMA) communication techniques (e.g., for downlink communications) or single carrier frequency division multiple access (SC-FDMA) communication techniques (e.g., for uplink and ProSe or sidelink communications), using Orthogonal Frequency Division Multiplexing (OFDM) communication signals, although the scope of the embodiments is not limited in this respect. The OFDM signal may include a plurality of orthogonal subcarriers.
In some embodiments, the downlink resource grid may be used for downlink transmissions from any RAN node 111 to UE 101, while uplink transmissions may use similar techniques. The grid may be a time-frequency grid, referred to as a resource grid or time-frequency resource grid, which is the physical resource in the downlink per slot. Such a time-frequency plane representation is common practice for OFDM systems, which makes radio resource allocation intuitive. Each column and each row of the resource grid corresponds to one OFDM symbol and one OFDM subcarrier, respectively. The duration of the resource grid in the time domain corresponds to one time slot in a radio frame. The smallest time-frequency unit in the resource grid is represented as a resource element. Each resource grid includes a plurality of resource blocks, which describe the mapping of certain physical channels to resource elements. Each resource block comprises a set of resource elements; in the frequency domain, this may represent the minimum amount of resources that can currently be allocated. There are several different physical downlink channels transmitted using such resource blocks.
According to various embodiments, the UE 101 and the RAN node 111 communicate (e.g., transmit and receive) data over a licensed medium (also referred to as "licensed spectrum" and/or "licensed band") and an unlicensed shared medium (also referred to as "unlicensed spectrum" and/or "unlicensed band"). The licensed spectrum may include channels operating in a frequency range of about 400MHz to about 3.8GHz, while the unlicensed spectrum may include the 5GHz band.
To operate in unlicensed spectrum, the UE 101 and RAN node 111 may operate using Licensed Assisted Access (LAA), enhanced LAA (eLAA), and/or further enhanced LAA (feLAA) mechanisms. In these implementations, UE 101 and RAN node 111 may perform one or more known medium sensing operations and/or carrier sensing operations to determine whether one or more channels in the unlicensed spectrum are unavailable or otherwise occupied prior to transmission in the unlicensed spectrum. The medium/carrier sensing operation may be performed according to a Listen Before Talk (LBT) protocol.
LBT is a mechanism in which a device (e.g., UE 101, RAN node 111,112, etc.) senses a medium (e.g., channel or carrier frequency) and transmits when the medium is sensed to be idle (or when a particular channel in the medium is sensed to be unoccupied). The medium sensing operation may include Clear Channel Assessment (CCA) that utilizes at least Energy Detection (ED) to determine whether other signals are present on the channel to determine whether the channel is occupied or clear. The LBT mechanism allows the cellular/LAA network to coexist with incumbent systems in unlicensed spectrum and with other LAA networks. ED may include sensing Radio Frequency (RF) energy over an expected transmission band for a period of time and comparing the sensed RF energy to a predetermined or configured threshold.
Generally, an incumbent system in the 5GHz band is a WLAN based on IEEE 802.11 technology. WLANs employ a contention-based channel access mechanism known as carrier sense multiple access with collision avoidance (CSMA/CA). Here, when a WLAN node (e.g., a Mobile Station (MS) such as UE 101, AP 106) intends to transmit, the WLAN node may first perform a CCA prior to the transmission. In addition, a back-off mechanism is used to avoid collisions in the case where more than one WLAN node senses the channel as idle and transmits at the same time. The backoff mechanism may be a counter drawn randomly within a Contention Window Size (CWS) that is exponentially increased when collisions occur and reset to a minimum value when a transmission is successful. The LBT mechanism designed for LAA is somewhat similar to CSMA/CA of WLAN. In some implementations, an LBT procedure for a DL or UL transmission burst including a PDSCH or PUSCH transmission, respectively, may have an LAA contention window of variable length between X and Y extended CCA (ECCA) slots, where X and Y are the minimum and maximum values of CWS for LAA. In one example, the minimum CWS for LAA transmission may be 9 microseconds (μ s); however, the size of the CWS and the Maximum Channel Occupancy Time (MCOT) (e.g., transmission bursts) may be based on government regulatory requirements.
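The exponential back-off behaviour described above can be summarized in a few lines. The sketch below is a simplified illustration only; the window values are placeholders rather than the LAA or IEEE 802.11 contention window tables.

```python
import random

CWS_MIN, CWS_MAX = 15, 1023  # placeholder window sizes, not the LAA priority-class tables

def draw_backoff(cws: int) -> int:
    """Draw a random back-off counter within the current contention window size (CWS)."""
    return random.randint(0, cws)

def next_cws(cws: int, collision: bool) -> int:
    """Exponentially grow the window on collision, reset it to the minimum on success."""
    return min(2 * cws + 1, CWS_MAX) if collision else CWS_MIN

# Example: two collisions followed by a successful transmission.
cws = CWS_MIN
for collided in (True, True, False):
    backoff = draw_backoff(cws)  # slots to wait before attempting to transmit
    print(f"CWS={cws}, drew back-off {backoff}, collision={collided}")
    cws = next_cws(cws, collided)
print(cws)  # back to CWS_MIN after the successful transmission
```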
The LAA mechanism is established based on the Carrier Aggregation (CA) technique of the LTE-Advanced (LTE-Advanced) system. In CA, each aggregated carrier is referred to as a Component Carrier (CC). The CCs may have bandwidths of 1.4, 3, 5, 10, 15, or 20MHz, and may be aggregated for up to five CCs, and thus, the maximum aggregated bandwidth is 100MHz. In a Frequency Division Duplex (FDD) system, the number of aggregated carriers may be different for DL and UL, where the number of UL CCs is equal to or lower than the number of DL component carriers. In some cases, individual CCs may have different bandwidths than other CCs. In a Time Division Duplex (TDD) system, the number of CCs and the bandwidth of each CC are typically the same for DL and UL.
The CA also includes individual serving cells to provide individual CCs. The coverage of the serving cell may be different, e.g., because CCs on different frequency bands will experience different path losses. A primary serving cell or primary cell (PCell) may provide a primary CC (PCC) for both UL and DL and may handle Radio Resource Control (RRC) and non-access stratum (NAS) related activities. The other serving cells are referred to as secondary cells (scells), and each SCell may provide a separate secondary CC (SCC) for both UL and DL. SCCs may be added and removed as needed, while changing the PCC may require the UE 101 to undergo handover. In LAA, eLAA, and feLAA, some or all scells may operate in unlicensed spectrum (referred to as "LAA scells"), and the LAA scells are assisted by pcells operating in licensed spectrum. When a UE is configured with more than one LAA SCell, the UE may receive a UL grant on the configured LAA SCell, the UL grant indicating different Physical Uplink Shared Channel (PUSCH) starting positions within the same subframe.
The Physical Downlink Shared Channel (PDSCH) may carry user data and higher layer signaling to the UE 101. A Physical Downlink Control Channel (PDCCH) may carry information on a transport format and resource allocation related to a PDSCH channel, and the like. It may also inform the UE 101 of transport format, resource allocation and H-ARQ (hybrid automatic repeat request) information related to the uplink shared channel. In general, downlink scheduling (allocation of control and shared channel resource blocks to UEs 101b within a cell) may be performed at any RAN node 111 based on channel quality information fed back from any UE 101. The downlink resource allocation information may be sent on a PDCCH for (e.g., allocated to) each UE 101.
The PDCCH may use Control Channel Elements (CCEs) to convey control information. The PDCCH complex-valued symbols may first be organized into quadruplets before mapping to resource elements, and then permuted using a subblock interleaver for rate matching. Each PDCCH may be transmitted using one or more of these CCEs, where each CCE may correspond to nine sets of four physical resource elements called Resource Element Groups (REGs). Four Quadrature Phase Shift Keying (QPSK) symbols may be mapped to each REG. The PDCCH may be transmitted using one or more CCEs, depending on the size of Downlink Control Information (DCI) and channel conditions. There may be four or more different PDCCH formats (e.g., aggregation levels, L =1, 2, 4, or 8) defined in LTE with different numbers of CCEs.
Some embodiments may use the concept of resource allocation for control channel information, which is an extension of the above-described concept. For example, some embodiments may use an Enhanced Physical Downlink Control Channel (EPDCCH) that uses PDSCH resources for control information transmission. The EPDCCH may be transmitted using one or more Enhanced Control Channel Elements (ECCEs). Similar to the above, each ECCE may correspond to nine sets of four physical resource elements referred to as Enhanced Resource Element Groups (EREGs). In some cases, ECCE may have other numbers of EREGs.
The RAN nodes 111 may be configured to communicate with each other via an interface 112. In embodiments where system 100 is an LTE system, interface 112 may be an X2 interface 112. An X2 interface may be defined between two or more RAN nodes 111 (e.g., two or more enbs, etc.) connected to the EPC 120 and/or two enbs connected to the EPC 120. In some implementations, the X2 interface may include an X2 user plane interface (X2-U) and an X2 control plane interface (X2-C). The X2-U may provide a flow control mechanism for user data packets transmitted over the X2 interface and may be used to communicate information about user data transfer between enbs. For example, the X2-U may provide specific sequence number information for user data transmitted from the master eNB (MeNB) to the secondary eNB (SeNB); information on successful in-order transmission of PDCP PDUs for user data from the SeNB to the UE 101; information of PDCP PDUs not delivered to the UE 101; information about the current minimum required buffer size at the SeNB for transmitting user data to the UE; and so on. X2-C may provide intra-LTE access mobility functions including context transfer from source eNB to target eNB, user plane transfer control, etc.; a load management function; and an inter-cell interference coordination function.
In embodiments where system 100 is a 5G or NR system, interface 112 may be an Xn interface 112. An Xn interface is defined between two or more RAN nodes 111 (e.g., two or more gnbs, etc.) connected to the 5GC 120, between a RAN node 111 (e.g., a gNB) connected to the 5GC 120 and an eNB, and/or between two enbs connected to the 5GC 120. In some implementations, the Xn interface can include an Xn user plane (Xn-U) interface and an Xn control plane (Xn-C) interface. The Xn-U can provide unsecured transport of user plane PDUs and support/provide data forwarding and flow control functionality. Xn-C may provide: management and error handling functions; managing the function of the Xn-C interface; mobility support for a UE 101 in CONNECTED mode (e.g., CM-CONNECTED) includes functionality to manage CONNECTED mode UE mobility between one or more RAN nodes 111. Mobility support may include context transfer from the old (source) serving RAN node 111 to the new (target) serving RAN node 111; and control of user plane tunnels between the old (source) serving RAN node 111 and the new (target) serving RAN node 111. The protocol stack of the Xn-U may include a transport network layer established above an Internet Protocol (IP) transport layer and a GTP-U layer above UDP(s) and/or IP layers for carrying user plane PDUs. The Xn-C protocol stack may include an application layer signaling protocol, referred to as the Xn application protocol (Xn-AP), and a transport network layer built over SCTP. SCTP may be located above the IP layer and may provide guaranteed delivery of application layer messages. In the transport IP layer, point-to-point transport is used to deliver signaling PDUs. In other implementations, the Xn-U protocol stack and/or the Xn-C protocol stack may be the same as or similar to the user plane and/or control plane protocol stack(s) shown and described herein.
RAN110 is shown communicatively coupled to a core network, in this embodiment, core Network (CN) 120.CN 120 may include a plurality of network elements 122 configured to provide various data and telecommunications services to clients/subscribers (e.g., users of UE 101) connected to CN120 through RAN 110. The term "network element" may describe a physical or virtualized device used to provide wired or wireless communication network services. The term "network element" may be considered synonymous to and/or referred to as: a networking computer, network hardware, network device, router, switch, hub, bridge, radio network controller, radio access network device, gateway, server, virtualized Network Function (VNF), network Function Virtualization Infrastructure (NFVI), and/or the like. The components of CN120 may be implemented in one physical node or separate physical nodes, including components that read and execute instructions from a machine-readable or computer-readable medium (e.g., a non-transitory machine-readable storage medium). In some embodiments, network Function Virtualization (NFV) may be used to virtualize any or all of the above network node functions via executable instructions stored in one or more computer-readable storage media (described in further detail below). Logical instantiations of the CN120 may be referred to as network slices, and logical instantiations of a portion of the CN120 may be referred to as network subslices. The NFV architecture and infrastructure may be used to virtualize one or more network functions or be executed by dedicated hardware onto physical resources including a combination of industry standard server hardware, storage hardware, or switches. In other words, the NFV system may be used to perform a virtual or reconfigurable implementation of one or more EPC components/functions.
In general, the application server 130 may be an element that provides applications that use IP bearer resources with a core network (e.g., UMTS Packet Service (PS) domain, LTE PS data services, etc.). The application server 130 may also be configured to support one or more communication services (e.g., voice over internet protocol (VoIP) sessions, PTT sessions, group communication sessions, social networking services, etc.) for the UE 101 via the EPC 120.
In an embodiment, CN120 may be a 5GC (referred to as "5GC 120," etc.), and RAN110 may connect with CN120 via NG interface 113. In an embodiment, the NG interface 113 may be divided into two parts: an NG user plane (NG-U) interface 114 that carries traffic data between RAN node 111 and User Plane Functions (UPFs); and an NG control plane (NG-C) interface 115, which is a signaling interface between the RAN node 111 and the AMF.
In embodiments, CN120 may be a 5G CN (referred to as "5GC 120," etc.), while in other embodiments, CN120 may be an Evolved Packet Core (EPC). In the case where CN120 is an EPC (referred to as "EPC 120," etc.), RAN110 may connect with CN120 via S1 interface 113. In an embodiment, the S1 interface 113 may be divided into two parts: an S1 user plane (S1-U) interface 114, which carries traffic data between RAN node 111 and the serving gateway (S-GW); and an S1-Mobility Management Entity (MME) interface 115, which is a signaling interface between RAN node 111 and the MME.
Self-Organizing Network (SON) and Minimization of Drive Tests (MDT) features have supported data measurement and reporting since LTE. NR continues to support these functions for collecting and reporting data from UEs to improve network performance. These measurements and reports may be used as a baseline for RAN intelligence studies. However, to enable overall network optimization based on ML algorithms, more data (collected from the UEs and the network) may be needed. With richer information from neighboring cells or higher layers (e.g., the Management Data Analytics Service (MDAS)), a trained ML model can achieve better performance by analyzing the collected data. Furthermore, given that each gNB has ML training and inference capability, the output of the ML model, i.e., its inference results, can also be used as input to other algorithms in the same gNB or in neighboring cells. Therefore, it is important to enrich the current data collection to support prediction metrics and to exchange these data between different network nodes over an interface.
In SON, measurements are sent between different NG-RAN nodes to report and exchange information for network performance optimization. These measurement data are reported over the Xn interface. Measurement reports may also be sent over the F1 interface, where the gNB Distributed Unit (gNB-DU) may report measurement data to the gNB Central Unit (gNB-CU). However, these measurement reports are based on the current network status or on information collected over a period of time.
In RAN intelligent networks, AI/ML models are trained based on data collected from past and current network performance. The output of the AI/ML model may be prediction results (e.g., traffic, channel state, radio resource state, etc.) or an action space (e.g., enable/disable functions, configuration, handover decisions, etc.). In this case, the network can exploit these results (i.e., the inference results) of the AI/ML model and optimize future network performance with the help of the predictions or next actions of connected network nodes. Inference results can be specified for different use cases.
In order to support the transport of inference data in current NG-RAN networks, messages need to be exchanged over the Xn/F1/NG interfaces.
In the present disclosure, collection of prediction data and the corresponding messages/signaling are provided for a RAN intelligent network to support data exchange via the F1 interface, the Xn interface, the NG interface, etc.
In the present disclosure, a novel message, "inference result signaling", carrying the inference results output by an ML model, and novel Information Elements (IEs) transmitted between different network nodes and layers through the Xn interface and/or the F1 interface are provided. In addition to the inference results for resources, Transport Network Layer (TNL) capacity, etc., a "predicted effective time" IE and a "predicted confidence level" IE are provided to indicate the effective time and the confidence level of the inference results, respectively. In order to distinguish the machine learning capabilities of different gNBs and their components, it is also proposed to report machine learning capabilities to the upper layer or to neighboring nodes.
The proposed new messages and IEs carrying ML inference results and/or machine learning capabilities may help network nodes within the RAN intelligent network to learn about the future actions or states of neighboring nodes. With this information, a network node can perform more accurate performance optimization, resource allocation and the like based on the inference/prediction information of its own machine learning model and that of its neighbor nodes. Furthermore, the proposed exchange of inference results and/or machine learning capabilities also helps to reduce the signaling overhead between different network nodes.
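As a purely illustrative data model of this proposal, the sketch below pairs an inference result with the proposed "predicted effective time" and "predicted confidence level" IEs. The Python names and the 0.7 threshold are assumptions for illustration, not ASN.1 or tabular definitions.

```python
from dataclasses import dataclass

@dataclass
class PredictedValue:
    """One ML inference result together with the two companion IEs proposed above."""
    value: float              # the predicted quantity, e.g., TNL capacity or PRB usage
    effective_time_ms: int    # "predicted effective time": how long the prediction holds
    confidence: float         # "predicted confidence level", 0.0-1.0

    def usable(self, age_ms: int, min_confidence: float = 0.7) -> bool:
        """A receiving node might only act on fresh, sufficiently confident predictions."""
        return age_ms < self.effective_time_ms and self.confidence >= min_confidence

# Example: a predicted PRB usage of 62% valid for 10 s with 90% confidence.
print(PredictedValue(62.0, 10_000, 0.9).usable(age_ms=3_000))  # True
```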
Inference result signaling via Xn interface
In some embodiments, in the Xn application protocol, two solutions may be considered to exchange inference results: 1) New Xn procedures and messages, and 2) new IEs in existing messages and procedures.
New Xn procedures and messages
In some embodiments, new procedures may be used for inference result reporting and updating. Example embodiments of the procedures and message flows are shown below.
In some embodiments, an NG-RAN node may use an inference result report initiation procedure to request another NG-RAN node to report inference results. In some embodiments, each of the two NG-RAN nodes may be a gNB, and the two NG-RAN nodes communicate over an Xn interface. In some embodiments, the inference result report initiation procedure may use non-UE related signaling.
Fig. 2 illustrates exemplary successful operation of the inference result report initiation process, in accordance with various embodiments of the present disclosure. As shown in fig. 2, the NG-RAN node 1 may initiate the procedure by sending an inference result request message to the NG-RAN node 2 to start generating inference result reports, to stop inference result reporting, or to add cells for which inference results are to be reported. After receiving the inference result request message, the NG-RAN node 2:
- if the Registration Request IE is set to "start", shall initiate the requested inference result report generation according to the parameters given in the request; or
- if the Registration Request IE is set to "stop", shall stop the inference result report generation for all cells and terminate the reporting; or
- if the Registration Request IE is set to "add", shall add the cells indicated in the Cell To Report List IE to the previously initiated inference result report generation for the given measurement ID; cells in the Cell To Report List IE for which inference result generation has already been initiated shall be ignored (a minimal handling sketch in illustrative Python follows this list).
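The sketch below, written in illustrative Python rather than XnAP, shows one way NG-RAN node 2 could act on the Registration Request IE. The dictionary-based state, enum names and cell identifiers are assumptions made for illustration.

```python
from enum import Enum
from typing import Dict, Iterable, Set

class RegistrationRequest(Enum):
    START = "start"
    STOP = "stop"
    ADD = "add"

def handle_inference_result_request(active: Dict[int, Set[str]],
                                    measurement_id: int,
                                    request: RegistrationRequest,
                                    cells_to_report: Iterable[str] = ()) -> None:
    """Apply a received inference result request to the node's reporting state."""
    if request is RegistrationRequest.START:
        active[measurement_id] = set(cells_to_report)          # begin generating reports
    elif request is RegistrationRequest.STOP:
        active.pop(measurement_id, None)                       # terminate all reporting
    elif request is RegistrationRequest.ADD:
        # Cells already being reported are simply ignored (set semantics).
        active.setdefault(measurement_id, set()).update(cells_to_report)

# Example: start reporting for two cells, then add one new cell and one duplicate cell.
state: Dict[int, Set[str]] = {}
handle_inference_result_request(state, 1, RegistrationRequest.START, ["cell-A", "cell-B"])
handle_inference_result_request(state, 1, RegistrationRequest.ADD, ["cell-B", "cell-C"])
print(sorted(state[1]))  # ['cell-A', 'cell-B', 'cell-C']
```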
As described above, the inference result request message is sent by the NG-RAN node 1 to the NG-RAN node 2 over the Xn interface to initiate the requested reporting according to the parameters given in the message. In one embodiment, the inference result request message may include the Information Elements (IEs) shown in Table 1.
Direction: NG-RAN node 1 → NG-RAN node 2.
TABLE 1
Returning to fig. 2, the NG-RAN node 2, upon receiving the inference result request message, sends an inference result response message to the NG-RAN node 1 to indicate that the requested inference result reporting configuration change was successful.
The inference result response message is sent by the NG-RAN node 2 to the NG-RAN node 1 over the Xn interface to indicate that the requested inference result report has been successfully initiated for all measurement objects included in the measurement.
In one embodiment, the inference result response message may include an IE as shown in table 2.
Direction: NG-RAN node 2 → NG-RAN node 1.
TABLE 2
Fig. 3 illustrates exemplary unsuccessful operation of an inference result reporting initiation process, in accordance with various embodiments of the present disclosure. As shown in fig. 3, the NG-RAN node 2, upon receiving the inference result request from the NG-RAN node 1, may send an inference result failure message to the NG-RAN node 1 if any requested inference result report cannot be initiated. In some embodiments, the inference result failure message may carry an appropriate cause value.
In one embodiment, the inference result failure message may include an IE as shown in table 3.
TABLE 3
In some embodiments, the inference result reporting procedure may be initiated by an NG-RAN node to report the results of the inference result reporting admitted by the NG-RAN node after a successful inference result report initiation procedure.
Fig. 4 illustrates exemplary successful operation of the inference result reporting process, according to various embodiments of the present disclosure. As shown in fig. 4, the NG-RAN node 2 may send the results of the accepted inference result reporting to the NG-RAN node 1 in an inference result update message. An accepted report is one that was successfully initiated during a previous inference result report initiation procedure.
The inference result update message is sent by the NG-RAN node 2 to the NG-RAN node 1 over the Xn interface to report the requested inference results. In one embodiment, the inference result update message may include the IEs shown in Table 4.
Direction: NG-RAN node 2 → NG-RAN node 1.
TABLE 4
In some embodiments, as shown in table 4, the inference result update message may include at least one of a mobility change prediction IE, a dual connectivity and carrier aggregation prediction IE, a radio resource status prediction IE, a TNL capacity index prediction IE, and a cell capacity prediction IE.
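The optional structure of this message can be pictured with the illustrative container below; the field names informally mirror the IE names and the validation rule simply encodes "at least one of". This is an assumption-laden sketch, not the actual tabular or ASN.1 definition.

```python
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class InferenceResultUpdate:
    """Illustrative container for the prediction IEs of the inference result update."""
    mobility_change_prediction: Optional[dict] = None
    dc_and_ca_prediction: Optional[dict] = None
    radio_resource_status_prediction: Optional[dict] = None
    tnl_capacity_index_prediction: Optional[int] = None
    cell_capacity_prediction: Optional[int] = None

    def __post_init__(self) -> None:
        # The message is only meaningful if at least one prediction IE is present.
        if all(getattr(self, f.name) is None for f in fields(self)):
            raise ValueError("inference result update must carry at least one prediction IE")

# Example: an update carrying only a TNL capacity index prediction.
update = InferenceResultUpdate(tnl_capacity_index_prediction=7)
print(update.tnl_capacity_index_prediction)  # 7
```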
The mobility change prediction IE contains the inferred/predicted value of the Handover Trigger, to be compared with the current value of the Handover Trigger, and the predicted number of UEs to be handed over. Table 5 shows the details of the mobility change prediction IE.
TABLE 5
The dual connectivity and carrier aggregation prediction IE contains the predicted enabled/disabled status of the dual connectivity and carrier aggregation functionality. Table 6 shows the details of the dual connectivity and carrier aggregation prediction IE.
TABLE 6
The radio resource status prediction IE indicates the predicted usage of PRBs per cell and per SSB area for all traffic in the downlink and uplink, and the predicted usage of PDCCH CCEs for downlink and uplink scheduling. Table 7 shows the details of the radio resource status prediction IE.
TABLE 7
The TNL capacity index prediction IE indicates the predicted available capacity of the transport network experienced by the NG RAN cell. Table 8 shows detailed information of the TNL capacity index prediction IE.
TABLE 8
The cell capacity prediction value IE indicates a predicted value that classifies the cell capacity with respect to other cells. The cell capacity prediction value IE only covers predicted resources configured for traffic purposes. Table 9 shows the details of the cell capacity prediction value IE.
TABLE 9
New IE in existing message/procedure
The new IEs proposed above, such as the mobility change prediction IE, the dual connectivity and carrier aggregation prediction IE, the radio resource status prediction IE, the TNL capacity index prediction IE, or the cell capacity prediction IE, may also be added to existing procedures and/or messages depending on how they are used, which is not limited herein. An example for load balancing is given below. Existing messages need to be enhanced to support inference result exchange between NG-RAN nodes; for example, the existing messages may be:
- RESOURCE STATUS UPDATE
- MOBILITY CHANGE REQUEST
The NG-RAN node 2 sends a resource status update message to the NG-RAN node 1 to report the result of the requested measurement. In one embodiment, the resource status update message may be enhanced by adding one or more new IEs to support inference result exchange between NG-RAN nodes. Table 10 shows details of the enhanced resource status update message. The newly added IE is underlined.
Direction: NG-RAN node 2 → NG-RAN node 1.
TABLE 10
The mobility change request message is sent by the NG-RAN node 1 to the NG-RAN node 2 to initiate adaptation of the mobility parameters. In one embodiment, mobility change request messages may be enhanced by adding one or more new IEs to support the exchange of inference results between NG-RAN nodes. Table 11 shows detailed information of the enhanced mobility change request. The newly added IE is underlined.
Direction: NG-RAN node 1 → NG-RAN node 2.
TABLE 11
In some embodiments, the messages exchanged between NG-RAN nodes remain the same, while the inference results are embedded in existing IEs.
In an example, the inference result may be embedded in the existing TNL capacity indicator IE in an existing resource status update message sent by the NG-RAN node 2 to the NG-RAN node 1 via the Xn interface for reporting the result of the requested measurement. The TNL capacity indicator IE indicates the offered and available capacity of the transport network experienced by the NG-RAN cell. Table 12 shows the details of the enhanced TNL capacity indicator IE. The enhanced content is underlined.
TABLE 12
In one embodiment, the inference result may be embedded in the existing Radio Resource Status IE in an existing resource status update message sent by the NG-RAN node 2 to the NG-RAN node 1 over the Xn interface to report the result of the requested measurement. The radio resource status IE indicates the usage of PRBs per cell and per SSB area for all traffic in the downlink and uplink and the usage of PDCCH CCEs for downlink and uplink scheduling. Table 13 shows the details of the enhanced radio resource status IE. The enhanced content is underlined.
TABLE 13
Note that for other use cases such as energy saving and mobility enhancement, the predicted trend and the exact predicted value may be added as one or more new IEs to the corresponding messages defined in section 9.2 of 3GPP TS 38.423, which is not limited herein.
In some embodiments, for energy saving use cases, the predicted energy efficiency, predicted energy status, validity time, predicted resource status, etc. may be exchanged over the Xn interface and/or the F1 interface in a new inference result request/response message or in an existing message (e.g., RESOURCE STATUS UPDATE). Table 14 shows the details of the enhanced resource status update message for energy saving. The newly added IEs are underlined.
TABLE 14
In some embodiments, for mobility enhancement use cases, the predicted handover timing/priority/resource reservation/CHO related configurations, the predicted UE trajectory, etc. may be exchanged via the Xn interface through a new inference result request/response message, a new PREDICTED HANDOVER REQUEST message, or an existing message such as RESOURCE STATUS UPDATE or HANDOVER REQUEST.
Fig. 5 illustrates an exemplary predicted handover request/response procedure in accordance with various embodiments of the present disclosure. As shown in fig. 5, NG-RAN node 1 may send a PREDICTED HANDOVER REQUEST message to NG-RAN node 2. NG-RAN node 2, upon receiving the PREDICTED HANDOVER REQUEST message, sends a PREDICTED HANDOVER RESPONSE message to NG-RAN node 1 to indicate that the requested change for the predicted handover has been successfully configured and that the predicted handover may be performed.
The PREDICTED HANDOVER REQUEST message may include, in addition to the existing IEs of the HANDOVER REQUEST message, the predicted handover execution time, the predicted effective time for the handover, the predicted resource allocation, the predicted selected UE contexts, etc.
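A hypothetical shape for these additional IEs is sketched below; every field name is an assumption made for illustration, carried on top of (not replacing) the legacy HANDOVER REQUEST contents.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class PredictedHandoverRequest:
    """Illustrative extras a PREDICTED HANDOVER REQUEST might carry."""
    target_cell_id: str
    predicted_execution_time_ms: int    # when the handover is expected to be executed
    predicted_effective_time_ms: int    # how long the prediction remains valid
    predicted_resource_allocation: int  # e.g., PRBs expected to be reserved at the target
    predicted_ue_context_ids: Tuple[int, ...]  # UEs the source gNB expects to hand over

req = PredictedHandoverRequest("cell-42", 2_000, 10_000, 24, (1001, 1002))
print(len(req.predicted_ue_context_ids))  # 2
```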
Inference result signaling via F1 interface
In some embodiments, when a Distributed Unit (DU) of an NG-RAN node (e.g., a gNB) acts as an ML inference node, the DU may need to report prediction information to upper layers so that the upper layers can use the prediction results and the corresponding rewards to adjust the performance of the ML model.
Taking the load balancing use case as an example, the enhanced message for load balancing may be the resource status update message. A resource status update message is sent by the gNB-DU to the gNB-CU over the F1 interface between them to report the result of the requested measurement. In one example, the resource status update message may be enhanced to support the exchange of inference results between DUs and CUs, as shown in Table 15. Table 15 shows the details of the enhanced resource status update message. The enhanced content is underlined.
Direction: gNB-DU → gNB-CU.
TABLE 15
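A minimal sketch of the gNB-DU side of this reporting is shown below, assuming a generic send callable in place of real F1AP encoding; the class and field names are illustrative only.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class PredictedRadioResourceStatus:
    """Illustrative prediction payload a gNB-DU could attach to its F1 resource report."""
    cell_id: str
    dl_prb_usage_prediction: int   # predicted DL PRB usage in percent, 0-100
    ul_prb_usage_prediction: int   # predicted UL PRB usage in percent, 0-100
    effective_time_ms: int         # how long the prediction is expected to hold
    confidence: float              # ML model confidence, 0.0-1.0

def report_predictions(send: Callable[[str, object], None], cell_id: str,
                       dl_pred: int, ul_pred: int) -> None:
    """gNB-DU: embed predictions in the (enhanced) resource status update toward the CU."""
    payload = PredictedRadioResourceStatus(cell_id, dl_pred, ul_pred,
                                           effective_time_ms=5_000, confidence=0.85)
    send("RESOURCE STATUS UPDATE", payload)

# Example with a stand-in transport that just prints what would be sent over F1.
report_predictions(lambda msg, body: print(msg, body), "cell-7", dl_pred=70, ul_pred=40)
```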
In one embodiment, the inference result may be embedded in the radio resource status IE in a resource status update message sent by the gNB-DU to the gNB-CU over the F1 interface. The radio resource status IE indicates the PRB usage per cell and per SSB area for all traffic in the downlink and uplink. As shown in Table 16, the radio resource status IE may be enhanced to support the exchange of inference results between DUs and CUs. Table 16 shows the details of the enhanced radio resource status IE. The enhanced content is underlined.
TABLE 16
The Capacity Value IE indicates the amount of resources per cell and per SSB area that are available relative to the total gNB-DU resources. The capacity value should be measured and reported so as to reserve the minimum gNB-DU resource usage of the existing services, depending on the implementation. The capacity value IE may be weighted according to the ratio of the cell capacity class values, if available. In one embodiment, the capacity value IE may be enhanced to support the exchange of inference results between DUs and CUs, as shown in Table 17. Table 17 shows the details of the enhanced capacity value IE. The enhanced content is underlined.
TABLE 17
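The weighting idea can be illustrated as follows. The 0-100 scaling and the use of an average neighbour class value are assumptions for illustration, since the actual weighting is implementation-specific.

```python
from typing import Sequence

def weighted_capacity_value(available_fraction: float,
                            own_capacity_class: int,
                            neighbour_capacity_classes: Sequence[int]) -> int:
    """Scale the per-cell available-resource share to 0..100 and optionally weight it
    by the ratio of this cell's capacity class value to its neighbours' average."""
    base = round(100 * available_fraction)
    if not neighbour_capacity_classes:
        return base
    avg_neighbour = sum(neighbour_capacity_classes) / len(neighbour_capacity_classes)
    ratio = own_capacity_class / avg_neighbour
    return max(0, min(100, round(base * ratio)))

# Example: 60% of resources free, this cell's class value 80, neighbours at 40 and 60.
print(weighted_capacity_value(0.6, 80, [40, 60]))  # 96
```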
Inference result signaling via NG interface
In some embodiments, inference results are reported to the Core Network (CN), or the CN receives the inference results of other NG-RAN nodes, over the NG interface. NG interface signaling may also carry the handover cause if the AI/ML model generated the inference results for the particular use case.
A new cause group, "AI/ML prediction", is added to the NGAP protocol. Multiple handover causes may be included as IE types under this cause group, such as load balancing, mobility, network power saving, etc. This new cause may also be added as a new element, "AI/ML prediction", under the "other causes". The "AI/ML prediction" can also be considered as a "Source of UE Activity Behavior Information" in the "Expected UE Activity Behavior" (e.g., section 9.3.1.94 in TS 38.413). The cause groups in TS 38.413 are shown, for example, in Table 18, with the enhancement underlined.
TABLE 18
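The following sketch illustrates, under stated assumptions, how the new "AI/ML prediction" cause group could be represented. The enumeration values mirror the handover causes listed above; the encoding shown is purely illustrative and is not the actual NGAP ASN.1.

```python
from enum import Enum

class AiMlPredictionCause(Enum):
    """Hypothetical 'AI/ML prediction' cause group for NGAP, mirroring the
    handover causes named above. The real cause groups in TS 38.413 are
    defined in ASN.1; this enum is only illustrative."""
    LOAD_BALANCING = "load-balancing"
    MOBILITY = "mobility"
    NETWORK_POWER_SAVING = "network-power-saving"

def build_handover_cause(cause: AiMlPredictionCause) -> dict:
    # A handover-related NGAP message could carry the new cause group
    # alongside the existing cause groups.
    return {"causeGroup": "ai-ml-prediction", "causeValue": cause.value}

if __name__ == "__main__":
    print(build_handover_cause(AiMlPredictionCause.LOAD_BALANCING))
```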
The NG interface may also carry predicted location information generated by an NG-RAN node and transmitted from the NG-RAN node to the CN. The predicted location information may be included in a plurality of messages, including a path switch request, a handover notification, a location report, an RRC inactive transition report, a UE context modification response, and the like. The predicted location information in TS 38.413 is shown, for example, in Table 19, and the enhanced content is underlined as follows.
TABLE 19
Machine learning capability signaling
In a RAN intelligent network, not all network nodes may have ML training and inference capabilities, because of limits in hardware or network capabilities. In order to distinguish the machine learning capabilities of different gNBs and their components, it is proposed that network nodes report their machine learning capabilities to their upper layers or neighboring nodes.
In some embodiments, the inference result reporting process described above may be used as a machine learning capability indication. In one embodiment, the NG-RAN node 2 may send an inference result failure message if the NG-RAN node 2 does not support machine learning.
In NR, the gNB-CU and the gNB-DU may be deployed at different locations with different hardware. In this case, the machine learning capabilities of the CUs and DUs may differ according to their hardware capabilities, network stack/algorithm capabilities, etc. To address this issue, the present disclosure provides an F1 interface management procedure that checks machine learning capabilities.
In some embodiments, the gNB-CU uses a machine learning capability initiation procedure to request the gNB-DU to report its machine learning capabilities. In some embodiments, the machine learning capability initiation procedure uses non-UE related signaling.
Fig. 6 illustrates exemplary successful operation of a machine learning capability initiation procedure according to various embodiments of the present disclosure. As shown in Fig. 6, the gNB-CU may initiate the machine learning capability initiation procedure by sending a machine learning capability request message to the gNB-DU to request its machine learning capabilities or to add cells for measurement reporting. After receiving the machine learning capability request message, the gNB-DU:
-shall report "support" or "not support" according to its own capabilities; or
-if the Registration Request IE is set to "add", shall add the cells indicated in the Cell To Report List IE to the measurement previously initiated for the given measurement ID. This information will be ignored if measurements have already been initiated for the cells indicated in the Cell To Report List IE.
Fig. 7 illustrates example unsuccessful operations of a machine learning capability initiation process, according to various embodiments of the present disclosure. As shown in fig. 7, if any requested measurements cannot be initiated, the gNB-DU should send a machine learning capability failure message to the gNB-CU. In some embodiments, the machine learning capability failure message may carry an appropriate cause value.
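A minimal sketch of the machine learning capability initiation procedure of Figs. 6 and 7 is given below, with the F1 transport abstracted away. The message and field names (e.g., registration_request, cells_to_report, the "cell-not-available" cause value) are hypothetical and only mirror the behavior described above.

```python
from dataclasses import dataclass, field
from typing import List, Set, Union

@dataclass
class MLCapabilityRequest:
    transaction_id: int
    registration_request: str = "start"              # assumed values: "start" | "add" | "stop"
    cells_to_report: List[str] = field(default_factory=list)

@dataclass
class MLCapabilityResponse:
    transaction_id: int
    ml_capability: str                               # "support" or "not support"

@dataclass
class MLCapabilityFailure:
    transaction_id: int
    cause: str                                       # appropriate cause value

class GnbDu:
    """Toy gNB-DU answering ML capability requests from the gNB-CU (Figs. 6 and 7)."""

    def __init__(self, ml_supported: bool, known_cells: Set[str]):
        self.ml_supported = ml_supported
        self.known_cells = known_cells
        self.reported_cells: Set[str] = set()        # cells already added for this measurement ID

    def handle_request(self, req: MLCapabilityRequest) -> Union[MLCapabilityResponse, MLCapabilityFailure]:
        # If any requested measurement cannot be initiated, reply with a failure (Fig. 7).
        unknown = [c for c in req.cells_to_report if c not in self.known_cells]
        if unknown:
            return MLCapabilityFailure(req.transaction_id, cause="cell-not-available")
        if req.registration_request == "add":
            # Cells for which measurements were already initiated are simply ignored.
            self.reported_cells.update(req.cells_to_report)
        # Report "support" or "not support" according to the DU's own capabilities (Fig. 6).
        return MLCapabilityResponse(req.transaction_id,
                                    ml_capability="support" if self.ml_supported else "not support")

if __name__ == "__main__":
    du = GnbDu(ml_supported=True, known_cells={"cell-1", "cell-2"})
    print(du.handle_request(MLCapabilityRequest(transaction_id=1, registration_request="start")))
    print(du.handle_request(MLCapabilityRequest(transaction_id=2, registration_request="add",
                                                cells_to_report=["cell-3"])))
```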
Examples of ML-enabled load balancing
In some embodiments, consider an ML-enabled load balancing use case in a SON. Based on different AI/ML algorithms, there are two possible ML models that can be used to provide assistance information for the load balancing use case.
1) The first ML model provides assistance information, such as prediction results for traffic/resource status.
2) The second ML model provides a predicted action space for handover decisions/feature enablement, etc.
In the first scheme, the first ML model provides prediction results for the traffic state, the resource utilization state, and even the handover trigger threshold, among others. The prediction results indicate upcoming trends in network changes and resource management. Compared with conventional load balancing handover decisions, the gNB can, with the help of the prediction results, make handover decisions according to the prediction information. This may help improve network performance by avoiding the delays of load balancing decisions that are based only on current radio resource status.
Fig. 8 illustrates an example of a first ML model according to various embodiments of the present disclosure. As shown in fig. 8, example inputs to the first ML model may include at least one of: traffic volume, cell capacity, cell radio resource status report, handover trigger threshold. Note that the ML model input data may come from the local RAN node only, or from the local RAN node and neighboring cells. Example outputs of the first ML model may include at least one of: traffic prediction, cell radio resource utilization prediction, handover trigger threshold prediction.
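The snippet below sketches only the input/output interface of the first ML model shown in Fig. 8; a trivial average over recent samples stands in for the actual AI/ML algorithm, and the field names and units are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CellObservation:
    # Assumed inputs mirroring Fig. 8: traffic volume, cell capacity,
    # cell radio resource status, and handover trigger threshold.
    traffic_mbps: float
    cell_capacity_mbps: float
    prb_usage_pct: float
    ho_trigger_threshold_dbm: float

@dataclass
class CellPrediction:
    # Assumed outputs mirroring Fig. 8.
    predicted_traffic_mbps: float
    predicted_prb_usage_pct: float
    predicted_ho_trigger_threshold_dbm: float

def predict_next(history: List[CellObservation]) -> CellPrediction:
    """Placeholder for the first ML model: a real model (e.g., a time-series
    regressor) would be substituted here; an average over recent samples is
    used only so the sketch is runnable."""
    n = len(history)
    return CellPrediction(
        predicted_traffic_mbps=sum(o.traffic_mbps for o in history) / n,
        predicted_prb_usage_pct=sum(o.prb_usage_pct for o in history) / n,
        predicted_ho_trigger_threshold_dbm=sum(o.ho_trigger_threshold_dbm for o in history) / n,
    )

if __name__ == "__main__":
    history = [CellObservation(120.0, 400.0, 55.0, -100.0),
               CellObservation(150.0, 400.0, 63.0, -100.0)]
    print(predict_next(history))
```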
The second scheme, based on the second ML model, allows the ML model to provide guidance to the wireless network more flexibly. The output of the second ML model may be an action space for RRCReconfiguration (RRC reconfiguration) of the UE, for example, whether to trigger handover of the UE, whether to enable/disable the DC/CA functions of the UE, whether to reduce/increase the radio resources of a certain UE, and the like. With the proposed configuration, the network can decide, by implementation, whether to follow the policies provided by the ML model.
Fig. 9 illustrates an example of a second ML model according to various embodiments of the present disclosure. The input of the second ML model may include at least one of: traffic volume, cell capacity, cell radio resource status report, and handover trigger threshold. The output of the second ML model may include at least one of: radio resource policy, handover decision, DC enablement, and CA enablement.
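Similarly, the sketch below illustrates the second ML model's action space under assumed field names; a simple threshold rule stands in for the real policy model, and, as noted above, the network may still decide by implementation whether to follow the recommended actions.

```python
from dataclasses import dataclass

@dataclass
class UeActionSpace:
    # Assumed action space for RRCReconfiguration-level decisions (Fig. 9).
    trigger_handover: bool
    enable_dual_connectivity: bool
    enable_carrier_aggregation: bool
    prb_delta: int          # positive to increase, negative to reduce a UE's radio resources

def recommend_actions(serving_prb_usage_pct: float,
                      neighbor_prb_usage_pct: float,
                      ue_buffer_kb: float) -> UeActionSpace:
    """Placeholder for the second ML model: a trained policy would map the
    Fig. 9 inputs to this action space; fixed thresholds are used here only
    so the sketch executes."""
    overloaded = serving_prb_usage_pct > 80 and neighbor_prb_usage_pct < 50
    heavy_ue = ue_buffer_kb > 500
    return UeActionSpace(
        trigger_handover=overloaded,
        enable_dual_connectivity=heavy_ue,
        enable_carrier_aggregation=heavy_ue,
        prb_delta=-5 if overloaded else 0,
    )

if __name__ == "__main__":
    # The network may apply or ignore the recommended actions by implementation.
    print(recommend_actions(serving_prb_usage_pct=85.0,
                            neighbor_prb_usage_pct=40.0,
                            ue_buffer_kb=800.0))
```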
Fig. 10 illustrates an example of a message flow for ML-based load balancing, where ML training is located at operation, administration and maintenance (OAM) and ML inference is located at the gNB CU, in accordance with various embodiments of the present disclosure.
As shown in FIG. 10, two options are presented for AI/ML-based load balancing. In the first option, the source gNB CU and the neighbor gNB CU each perform ML model inference to obtain prediction results, such as the predicted traffic and resource status, and exchange the prediction results, so that the source gNB CU makes load balancing decisions and/or handover decisions according to its own predicted traffic and resource status and those of the neighbor gNB CU. In the second option, the source gNB CU may perform ML model inference to obtain a predicted action and provide the predicted action to the neighbor gNB CU. In response to receiving the predicted action, the neighbor gNB CU may send an action response to the source gNB CU. Based on the predicted action and the received action response, the source gNB CU may make load balancing decisions and/or handover decisions.
Fig. 11 illustrates another example of a message flow for ML-based load balancing, where both ML training and ML inference are located at the gNB CU, in accordance with various embodiments of the present disclosure.
As shown in Fig. 11, the source gNB CU receives the baseline policy for AI/ML-based load balancing from OAM, performs ML model training and inference, and exchanges prediction results, such as predicted traffic and resource status, with neighboring gNB CUs, so that the source gNB CU can make load balancing decisions and/or handover decisions based on its own predicted traffic and resource status and those of the neighboring gNB CUs. The resulting load balancing decisions and/or handover decisions also follow the baseline policy configured by OAM.
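The following sketch illustrates, under stated assumptions, how a source gNB CU might combine its own prediction with a prediction exchanged from a neighbor gNB CU to make a load balancing decision, as in the flows of Figs. 10 and 11. The confidence and margin thresholds are placeholders, not values from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class PredictedStatus:
    cell_id: str
    predicted_prb_usage_pct: float
    prediction_confidence: float

def load_balancing_decision(own: PredictedStatus,
                            neighbor: PredictedStatus,
                            min_confidence: float = 0.7,
                            margin_pct: float = 20.0) -> bool:
    """Return True if the source gNB CU should hand UEs over to the neighbor.

    Illustrative rule only: trust the exchanged predictions when both are
    confident enough, and offload when the predicted load gap exceeds a margin.
    Any baseline policy configured by OAM (Fig. 11) would further constrain this choice.
    """
    if (own.prediction_confidence < min_confidence
            or neighbor.prediction_confidence < min_confidence):
        return False   # fall back to conventional, measurement-based balancing
    return own.predicted_prb_usage_pct - neighbor.predicted_prb_usage_pct > margin_pct

if __name__ == "__main__":
    src = PredictedStatus("cell-A", predicted_prb_usage_pct=85.0, prediction_confidence=0.9)
    nbr = PredictedStatus("cell-B", predicted_prb_usage_pct=40.0, prediction_confidence=0.8)
    print(load_balancing_decision(src, nbr))   # True -> trigger a load-balancing handover
```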
Fig. 12 illustrates a flow diagram of a method 1200 for a RAN intelligent network, in accordance with some embodiments of the present disclosure. The method 1200 may be performed by an NG-RAN node (e.g., a gNB).
Method 1200 may include steps 1210, 1220, and 1230. However, in some embodiments, method 1200 may include more or fewer or different steps, and the disclosure is not limited thereto.
In step 1210, the first gNB sends a request for ML inference result information of the second gNB to the second gNB via an Xn interface between the first gNB and the second gNB.
In step 1220, the first gNB receives an ML inference result report including ML inference result information from the second gNB via the Xn interface.
In step 1230, the first gNB makes one or more decisions based on the received ML inference result report and one or more ML inference results of the first gNB.
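Purely as an illustration, the sketch below walks through steps 1210-1230 with the Xn interface abstracted as a direct call between two objects; the report fields and the decision rule are hypothetical.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class MLInferenceResultReport:
    results: dict            # e.g., {"predicted_prb_usage_pct": 40.0}
    validity_time_ms: int
    confidence: float

class SecondGnb:
    """Stand-in for the second gNB on the far side of the Xn interface."""
    def handle_inference_request(self) -> MLInferenceResultReport:
        return MLInferenceResultReport(
            results={"predicted_prb_usage_pct": 40.0},
            validity_time_ms=1000,
            confidence=0.85,
        )

class FirstGnb:
    def __init__(self, peer: SecondGnb):
        self.peer = peer     # the Xn interface is abstracted as a direct call

    def run_method_1200(self, own_predicted_prb_usage_pct: float) -> List[str]:
        # Step 1210: request the peer's ML inference result information.
        # Step 1220: receive the ML inference result report from the peer.
        report = self.peer.handle_inference_request()
        # Step 1230: decide based on the report and the gNB's own inference results.
        decisions = []
        if (report.confidence >= 0.7 and
                own_predicted_prb_usage_pct - report.results["predicted_prb_usage_pct"] > 20):
            decisions.append("load-balancing-handover")
        return decisions

if __name__ == "__main__":
    print(FirstGnb(SecondGnb()).run_method_1200(own_predicted_prb_usage_pct=80.0))
```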
In some embodiments, the ML inference result report includes: an indication of a time of validity of the ML inference result information and/or an indication of a confidence level of the ML inference result information.
In some embodiments, the first gNB may send a request for ML capability information of the second gNB to the second gNB over the Xn interface, and receive an indication of the ML capability information of the second gNB over the Xn interface.
In some embodiments, the first gNB may report ML inference result information of a Distributed Unit (DU) of the gNB from the DU to a Central Unit (CU) of the gNB through an F1 interface between the CU and the DU.
In some embodiments, the first gNB may report an indication of a time of validity of the ML inference result information and/or an indication of a confidence level of the ML inference result information from the DUs of the gNB to the CUs of the gNB over the F1 interface.
In some embodiments, the first gNB may report ML capability information of the DUs of the gNB from the DUs to the CUs of the gNB over the F1 interface.
In some embodiments, the first gNB may transmit the ML inference result information of the gNB to the core network through the NG interface.
In some embodiments, the first gNB may receive a baseline policy from the OAM and make one or more decisions from the received ML inference result report and one or more ML inference results of the gNB, following the baseline policy from the OAM.
In some embodiments, the ML inference result information is transmitted through a new inference result request/response procedure.
In some embodiments, the ML inference result information is transmitted over the existing Xn interface, F1 interface, and/or NG interface using the new prediction IE.
In some embodiments, the ML inference result information includes at least one of: radio resource state prediction, TNL capacity prediction, cell capacity prediction, mobility change prediction, cause set, UE trajectory/location prediction, predicted handover request/response, predicted energy efficiency/state, and dual connectivity and carrier aggregation prediction.
In some embodiments, the ML inference result information includes one or more predicted results and/or predicted action spaces.
In some embodiments, the one or more predictions indicate at least one of a traffic status, a channel status, a radio resource status, and a handover trigger threshold.
In some embodiments, the predicted action space indicates at least one of a handover decision and feature enablement.
In some embodiments, the first gNB performs load balancing, power saving, mobility optimization, and/or handover based on the received ML inference result report and one or more machine learning inference results of the gNB.
The proposed new messages and IEs carrying ML inference results and/or ML capabilities may help network nodes within the RAN intelligent network learn the future actions or states of neighboring nodes. With this information, a network node can perform more accurate performance optimization, resource allocation, and the like, according to the inference/prediction information of its own machine learning model and that of its neighbor nodes. Furthermore, the proposed inference result and/or ML capability exchange also helps reduce the signaling overhead between different network nodes.
Fig. 13 shows a diagram of a network 1300 according to various embodiments of the present disclosure. The network 1300 may operate in a manner consistent with the 3GPP technical specifications for LTE or 5G/NR systems. However, the example embodiments are not limited in this respect, and the described embodiments may be applied to other networks, such as future 3GPP systems and the like, that benefit from the principles described herein.
Network 1300 may include a UE1302, which may include any mobile or non-mobile computing device designed to communicate with RAN1304 via an over-the-air connection. The UE1302 may be, but is not limited to, a smartphone, a tablet, a wearable computer device, a desktop computer, a laptop computer, an in-vehicle infotainment device, an in-vehicle entertainment device, an instrument cluster, a heads-up display device, an in-vehicle diagnostic device, a dashboard mobile device, a mobile data terminal, an electronic engine management system, an electronic/engine control unit, an electronic/engine control module, an embedded system, a sensor, a microcontroller, a control module, an engine management system, a networked appliance, a machine-type communication device, an M2M or D2D device, an internet-of-things device, and/or the like.
In some embodiments, the network 1300 may include multiple UEs directly coupled to each other through sidelink interfaces. The UEs may be M2M/D2D devices that communicate using physical sidelink channels (e.g., without limitation, a physical sidelink broadcast channel (PSBCH), a physical sidelink discovery channel (PSDCH), a physical sidelink shared channel (PSSCH), a physical sidelink control channel (PSCCH), a physical sidelink feedback channel (PSFCH), etc.).
In some embodiments, the UE1302 may also communicate with the AP 1306 over an over-the-air connection. The AP 1306 may manage WLAN connections that may be used to offload some/all network traffic from the RAN 1304. The connection between the UE1302 and the AP 1306 may be in accordance with any IEEE 802.11 protocol, wherein the AP 1306 may be a wireless fidelity (Wi-Fi) router. In some embodiments, the UE1302, RAN1304, and AP 1306 may utilize cellular-WLAN aggregation (e.g., LTE-WLAN aggregation (LWA)/lightweight IP (LWIP)). Cellular-WLAN aggregation may involve a UE1302 configured by a RAN1304 utilizing both cellular radio resources and WLAN resources.
The RAN1304 can include one or more access nodes, such as AN 1308. The AN 1308 can terminate air interface protocols of the UE1302 by providing access stratum protocols including RRC, Packet Data Convergence Protocol (PDCP), Radio Link Control (RLC), Medium Access Control (MAC), and L1 protocols. In this manner, AN 1308 may enable data/voice connectivity between CN1320 and UE 1302. In some embodiments, the AN 1308 may be implemented in a separate device or as one or more software entities running on a server computer, as part of a virtual network, for example, which may be referred to as a CRAN or virtual baseband unit pool. AN 1308 can be referred to as a Base Station (BS), a gNB, a RAN node, an evolved node B (eNB), a next generation eNB (ng-eNB), a node B (NodeB), a roadside unit (RSU), a TRxP, a TRP, and so on. The AN 1308 may be a macrocell base station or a low power base station that provides a microcell, picocell, or other similar cell with a smaller coverage area, smaller user capacity, or higher bandwidth than a macrocell.
In embodiments where the RAN1304 includes multiple ANs, they may be coupled to each other over AN X2 interface (in the case where the RAN1304 is AN LTE RAN) or AN Xn interface (in the case where the RAN1304 is a 5G RAN). The X2/Xn interface, which in some embodiments may be separated into a control plane interface/user plane interface, may allow the AN to communicate information related to handover, data/context transfer, mobility, load management, interference coordination, etc.
The ANs of RAN1304 can each manage one or more cells, groups of cells, component carriers, etc., to provide UE1302 with AN air interface for network access. The UE1302 may be simultaneously connected with multiple cells provided by the same or different ANs of the RAN 1304. For example, the UE1302 and RAN1304 may use carrier aggregation to allow the UE1302 to connect with multiple component carriers, each component carrier corresponding to a primary cell (Pcell) or a secondary cell (Scell). In a dual connectivity scenario, a first AN may be a master node providing a Master Cell Group (MCG) and a second AN may be a secondary node providing a Secondary Cell Group (SCG). The first/second AN can be any combination of eNB, gNB, ng-eNB, etc.
RAN1304 may provide an air interface over licensed or unlicensed spectrum. To operate in unlicensed spectrum, a node may use a Licensed Assisted Access (LAA), enhanced LAA (eLAA), and/or further enhanced LAA (feLAA) mechanism based on Carrier Aggregation (CA) technology with PCell/Scell. Prior to accessing the unlicensed spectrum, the node may perform a media/carrier sensing operation based on, for example, a Listen Before Talk (LBT) protocol.
In a vehicle-to-everything (V2X) scenario, the UE1302 or AN 1308 may be or act as a Road Side Unit (RSU), which may refer to any transport infrastructure entity for V2X communication. The RSU may be implemented in or by AN appropriate AN or stationary (or relatively stationary) UE. An RSU implemented in or by a UE may be referred to as a "UE-type RSU"; an RSU implemented in or by an eNB may be referred to as an "eNB-type RSU"; an RSU implemented in or by a next generation NodeB (gNB) may be referred to as a "gNB-type RSU"; and so on. In one example, the RSU is a computing device coupled with radio frequency circuitry located at the curb side that provides connectivity support to passing vehicle UEs. The RSU may also include internal data storage circuitry for storing intersection map geometry, traffic statistics, media, and applications/software for sensing and controlling ongoing vehicle and pedestrian traffic. The RSU may provide very low latency communications required for high speed events, e.g., collision avoidance, traffic warnings, etc. Additionally or alternatively, the RSU may provide other cellular/WLAN communication services. The components of the RSU may be enclosed in a weatherproof enclosure suitable for outdoor installation and may include a network interface controller to provide a wired connection (e.g., ethernet) to a traffic signal controller or backhaul network.
In some embodiments, RAN1304 may be an LTE RAN 1310 that includes an evolved node B (eNB), e.g., eNB 1312. The LTE RAN 1310 may provide an LTE air interface with the following characteristics: SCS at 15 kHz; a CP-OFDM waveform for DL and an SC-FDMA waveform for UL; turbo codes for data and TBCC for control, etc. The LTE air interface may rely on CSI-RS for CSI acquisition and beam management; on a PDSCH/PDCCH demodulation reference signal (DMRS) for PDSCH/PDCCH demodulation; and on CRS for cell search and initial acquisition, channel quality measurements, and channel estimation for coherent demodulation/detection at the UE. The LTE air interface may operate over the sub-6 GHz band.
In some embodiments, the RAN1304 may be a Next Generation (NG)-RAN1314 having a gNB (e.g., gNB 1316) or an ng-eNB (e.g., ng-eNB 1318). The gNB 1316 may connect with 5G-enabled UEs using a 5G NR interface. The gNB 1316 may be connected to the 5G core through an NG interface, which may include an N2 interface or an N3 interface. The ng-eNB 1318 may also be connected with the 5G core over the NG interface, but may be connected with the UE over the LTE air interface. The gNB 1316 and ng-eNB 1318 may be connected to each other through an Xn interface.
In some embodiments, the NG interface may be divided into two parts, an NG user plane (NG-U) interface, which carries traffic data between nodes of the NG-RAN1314 and UPF1348, and an NG control plane (NG-C) interface, which is a signaling interface (e.g., N2 interface) between the NG-RAN1314 and nodes of the access and mobility management function (AMF) 1344.
The NG-RAN1314 may provide a 5G-NR air interface with the following characteristics: variable SCS; CP-OFDM for DL, and CP-OFDM and DFT-s-OFDM for UL; polar, repetition, simplex, and Reed-Muller codes for control, and LDPC for data. The 5G-NR air interface may rely on CSI-RS and PDSCH/PDCCH DMRS, similar to the LTE air interface. The 5G-NR air interface may not use CRS, but may use PBCH DMRS for PBCH demodulation, PTRS for phase tracking of the PDSCH, and tracking reference signals for time tracking. The 5G-NR air interface may operate over the FR1 band, which includes the sub-6 GHz band, or the FR2 band, which includes the 24.25GHz to 52.6GHz band. The 5G-NR air interface may include SSBs, which are regions of a downlink resource grid including PSS/SSS/PBCH.
In some embodiments, the 5G-NR air interface may use BWP for various purposes. For example, BWP may be used for dynamic adaptation of SCS. For example, the UE1302 may be configured with multiple BWPs, where each BWP configuration has a different SCS. When the BWP is indicated to the UE1302 to change, the SCS of the transmission also changes. Another use case for BWP is related to power saving. In particular, the UE1302 may be configured with multiple BWPs with different numbers of frequency resources (e.g., PRBs) to support data transmission in different traffic load scenarios. BWPs containing a smaller number of PRBs may be used for data transmission with smaller traffic load while allowing power savings at the UE1302 and, in some cases, the gNB 1316. BWPs containing a large number of PRBs may be used in scenarios with higher traffic loads.
RAN1304 is communicatively coupled to CN1320, which comprises a network element, to provide various functions to support data and telecommunications services to customers/subscribers (e.g., users of UE 1302). The components of CN1320 may be implemented in one physical node or may be implemented in different physical nodes. In some embodiments, NFV may be used to virtualize any or all of the functions provided by the network elements of CN1320 onto physical computing/storage resources in servers, switches, and the like. The logical instances of CN1320 may be referred to as network slices, and the logical instantiations of a portion of CN1320 may be referred to as network subslices.
In some embodiments, CN1320 may be LTE CN 1322, which may also be referred to as an Evolved Packet Core (EPC). LTE CN 1322 may include a Mobility Management Entity (MME) 1324, a Serving Gateway (SGW) 1326, a Serving GPRS Support Node (SGSN) 1328, a Home Subscriber Server (HSS) 1330, a Proxy Gateway (PGW) 1332, and a policy control and charging rules function (PCRF) 1334, which are coupled to each other by an interface (or "reference point") as shown. The functions of the elements of LTE CN 1322 may be briefly introduced as follows.
The MME1324 may implement mobility management functions to track the current location of the UE1302 to facilitate paging, bearer activation/deactivation, handover, gateway selection, authentication, and so forth.
The SGW 1326 may terminate the S1 interface towards the RAN and route data packets between the RAN and the LTE CN 1322. SGW 1326 may be a local mobility anchor for inter-RAN node handovers and may also provide an anchor for inter-3 GPP mobility. Other responsibilities may include lawful interception, charging, and some policy enforcement.
The SGSN 1328 may track the location of the UE1302 and perform security functions and access control. In addition, the SGSN 1328 may perform EPC inter-node signaling for mobility between different RAT networks; PDN and S-GW selection specified by MME 1324; MME selection for handover, etc. An S3 reference point between the MME1324 and the SGSN 1328 may enable user and bearer information exchange for inter-3 GPP access network mobility in idle/active state.
HSS 1330 may include a database for network users that includes subscription-related information to support the network entities handling communication sessions. HSS 1330 may provide support for routing/roaming, authentication, authorization, naming/addressing resolution, location dependencies, etc. The S6a reference point between HSS 1330 and MME1324 may enable the transmission of subscription and authentication data to authenticate/authorize user access to the LTE CN 1322.
PGW 1332 may terminate the SGi interface towards a Data Network (DN) 1336, which may include an application/content server 1338. The PGW 1332 may route data packets between the LTE CN 1322 and a data network 1336. The PGW 1332 may be coupled with the SGW 1326 through an S5 reference point to facilitate user plane tunneling and tunnel management. PGW 1332 may also include nodes (e.g., PCEFs) for policy enforcement and charging data collection. Additionally, the SGi reference point between PGW 1332 and data network 1336 may be, for example, an operator external public, private PDN, or operator internal packet data network for providing IMS services. The PGW 1332 may be coupled with the PCRF 1334 via a Gx reference point.
PCRF 1334 is the policy and charging control element of LTE CN 1322. The PCRF 1334 may be communicatively coupled to the application/content server 1338 to determine appropriate QoS and charging parameters for a service flow. The PCRF 1334 may provide the associated rules to the PCEF (via the Gx reference point) with the appropriate TFTs and QCIs.
In some embodiments, CN1320 may be a 5G core network (5GC) 1340. The 5GC1340 may include an authentication server function (AUSF) 1342, an access and mobility management function (AMF) 1344, a Session Management Function (SMF) 1346, a User Plane Function (UPF) 1348, a Network Slice Selection Function (NSSF) 1350, a network exposure function (NEF) 1352, an NF repository function (NRF) 1354, a Policy Control Function (PCF) 1356, a Unified Data Management (UDM) 1358, and an Application Function (AF) 1360, which are coupled to one another by interfaces (or "reference points") as shown. The function of the elements of the 5GC1340 can be briefly described as follows.
The AUSF 1342 may store data for authentication of the UE1302 and handle authentication-related functions. The AUSF 1342 may facilitate a common authentication framework for various access types. The AUSF 1342 may also exhibit a Nausf service based interface, in addition to communicating with other elements of the 5GC1340 through reference points as shown.
The AMF 1344 may allow other functions of the 5GC1340 to communicate with the UE1302 and the RAN1304 and subscribe to notifications about mobility events for the UE 1302. The AMF 1344 may be responsible for registration management (e.g., registering the UE 1302), connection management, reachability management, mobility management, lawful interception of AMF related events, and access authentication and permissions. AMF 1344 may provide for the transmission of Session Management (SM) messages between UE1302 and SMF 1346 and act as a transparent proxy for routing SM messages. The AMF 1344 may also provide for the transmission of SMS messages between the UE1302 and the SMSF. AMF 1344 may interact with AUSF 1342 and UE1302 to perform various security anchoring and context management functions. Further, AMF 1344 may be a termination point for the RAN CP interface, which may include or be an N2 reference point between RAN1304 and AMF 1344; the AMF 1344 may act as a termination point for NAS (N1) signaling and perform NAS ciphering and integrity protection. The AMF 1344 may also support NAS signaling with the UE1302 over the N3 IWF interface.
SMF 1346 may be responsible for SM (e.g., session establishment, tunnel management between UPF1348 and AN 1308); UE IP address assignment and management (including optional permissions); selection and control of the UP function; configuring flow control at UPF1348 to route the flow to the appropriate destination; termination of the interface to the policy control function; controlling a portion of policy enforcement, charging, and QoS; lawful interception (for SM events and interface to the LI system); terminate the SM part of the NAS message; a downlink data notification; initiating AN specific SM message (sent to AN 1308 over N2 through AMF 1344); and determining an SSC pattern for the session. SM may refer to the management of PDU sessions, and a PDU session or "session" may refer to a PDU connectivity service that provides or enables the exchange of PDUs between the UE1302 and the data network 1336.
The UPF1348 may serve as an anchor point for intra-RAT and inter-RAT mobility, an external PDU session point to interconnect with the data network 1336, and a branch point to support multi-homed PDU sessions. The UPF1348 may also perform packet routing and forwarding, perform packet inspection, perform the user plane part of policy rules, lawful intercepted packets (UP collection), perform traffic usage reporting, perform QoS processing for the user plane (e.g., packet filtering, gating, UL/DL rate enforcement), perform uplink traffic verification (e.g., SDF to QoS flow mapping), transport level packet marking in the uplink and downlink, and perform downlink packet buffering and downlink data notification triggering. UPF1348 may include an uplink classifier to support routing of traffic flows to a data network.
The NSSF 1350 may select a set of network slice instances that serve the UE 1302. NSSF 1350 may also determine allowed Network Slice Selection Assistance Information (NSSAI) and mapping to a single NSSAI (S-NSSAI) of the subscription, if desired. The NSSF 1350 may also determine the set of AMFs to use to serve the UE1302, or determine a list of candidate AMFs, based on a suitable configuration and possibly by querying the NRF 1354. Selection of a set of network slice instances for the UE1302 may be triggered by the AMF 1344 (with which the UE1302 registers by interacting with the NSSF 1350), which may result in a change in the AMF. The NSSF 1350 may interact with the AMF 1344 via the N22 reference point; and may communicate with another NSSF in the visited network via an N31 reference point (not shown). Further, NSSF 1350 may expose interfaces based on NSSF services.
NEF 1352 may securely expose services and capabilities provided by 3GPP network functions for third parties, internal exposure/re-exposure, AFs (e.g., AF 1360), edge computing or fog computing systems, and the like. In these embodiments, the NEF 1352 may authenticate, authorize, or throttle the AFs. The NEF 1352 may also translate information exchanged with the AF1360 and information exchanged with internal network functions. For example, the NEF 1352 may convert between an AF service identifier and internal 5GC information. The NEF 1352 may also receive information from other NFs based on their exposed capabilities. This information may be stored as structured data at the NEF 1352 or at the data storage NF using a standardized interface. The NEF 1352 may then re-expose the stored information to other NFs and AFs, or use it for other purposes such as analytics. In addition, the NEF 1352 may expose a Nnef service-based interface.
NRF 1354 may support a service discovery function, receive NF discovery requests from NF instances, and provide information on discovered NF instances to NF instances. NRF 1354 also maintains information on available NF instances and the services they support. As used herein, the terms "instantiate," "instance," and the like may refer to the creation of an instance, and "instance" may refer to a specific occurrence of an object, which may occur, for example, during execution of program code. Further, NRF 1354 may expose a Nnrf service-based interface.
The PCF 1356 may provide policy rules to control plane functions for enforcement, and may also support a unified policy framework to govern network behavior. The PCF1356 may also implement a front end to access subscription information relevant to policy decisions in the UDR of UDM 1358. In addition to communicating with functions through reference points as shown, PCF1356 also exhibits a Npcf service-based interface.
UDM1358 may process subscription-related information to support network entities handling communication sessions and may store subscription data for UE 1302. For example, subscription data may be communicated via an N8 reference point between UDM1358 and AMF 1344. UDM1358 may include two parts: an application front end and a UDR. The UDR may store policy data and subscription data for UDM1358 and PCF1356, and/or structured data and application data for exposure (including PFD for application detection, and application request information for multiple UEs 1302) for NEF 1352. The UDR may expose a Nudr service-based interface to allow UDM1358, PCF1356, and NEF 1352 to access a particular collection of stored data, as well as to read, update (e.g., add, modify), delete, and subscribe to notifications of relevant data changes in the UDR. The UDM may include a UDM-FE that is responsible for handling credentials, location management, subscription management, and the like. Several different front ends may serve the same user in different transactions. The UDM-FE accesses subscription information stored in the UDR and performs authentication credential processing, user identification processing, access authorization, registration/mobility management, and subscription management. In addition to communicating with other NFs through reference points as shown, UDM1358 may also expose a Nudm service-based interface.
The AF1360 may provide application influence on traffic routing, provide access to NEF, and interact with the policy framework for policy control.
In some embodiments, the 5GC1340 may enable edge computing by selecting an operator/third party service that is geographically close to the point at which the UE1302 attaches to the network. This may reduce latency and load on the network. To provide an edge computing implementation, the 5GC1340 can select a UPF1348 near the UE1302 and perform traffic steering from the UPF1348 to the data network 1336 through an N6 interface. This may be based on UE subscription data, UE location, and information provided by AF 1360. In this way, the AF1360 may affect UPF (re-) selection and traffic routing. Based on operator deployment, the network operator may allow the AF1360 to interact directly with the relevant NFs when the AF1360 is considered a trusted entity. Additionally, the AF1360 may expose a Naf service-based interface.
Data network 1336 may represent various network operator services, internet access, or third party services, which may be provided by one or more servers, including, for example, an application/content server 1338.
Fig. 14 schematically illustrates a wireless network 1400 in accordance with various embodiments. The wireless network 1400 can include a UE1402 in wireless communication with AN 1404. The UE1402 and the AN 1404 may be similar to, and substantially interchangeable with, the similarly named components described elsewhere herein.
The UE1402 can be communicatively coupled with AN 1404 via a connection 1406. Connection 1406 is shown as an air interface to enable communication coupling and may be consistent with cellular communication protocols operating at millimeter wave (mmWave) or sub-6 GHz frequencies, such as the LTE protocol or the 5G NR protocol.
UE1402 may include a host platform 1408 coupled with a modem platform 1410. Host platform 1408 may include application processing circuitry 1412, which may be coupled with protocol processing circuitry 1414 of modem platform 1410. The application processing circuitry 1412 may run various applications of the source/receiver application data for the UE 1402. The application processing circuitry 1412 may also implement one or more layers of operations to send/receive application data to/from the data network. These layer operations may include transport (e.g., UDP) and internet (e.g., IP) operations.
The protocol processing circuitry 1414 may implement one or more layers of operations to facilitate the transmission or reception of data over the connection 1406. Layer operations implemented by the protocol processing circuit 1414 may include, for example, MAC, RLC, PDCP, RRC, and NAS operations.
The modem platform 1410 may further include digital baseband circuitry 1416, which digital baseband circuitry 1416 may implement one or more layer operations that are "lower" than the layer operations performed by the protocol processing circuitry 1414 in the network protocol stack. These operations may include, for example, PHY operations including one or more of HARQ-ACK functions, scrambling/descrambling, encoding/decoding, layer mapping/demapping, modulation symbol mapping, received symbol/bit metric determination, multi-antenna port precoding/decoding, wherein these functions may include one or more of: space-time, space-frequency, or spatial coding, reference signal generation/detection, preamble sequence generation and/or decoding, synchronization sequence generation/detection, control channel signal blind decoding, and other related functions.
Modem platform 1410 may further include transmit circuitry 1418, receive circuitry 1420, RF circuitry 1422, and RF front end (RFFE) circuitry 1424, which may include or be connected to one or more antenna panels 1426. Briefly, the transmit circuit 1418 may include digital-to-analog converters, mixers, intermediate Frequency (IF) components, and the like; the receive circuitry 1420 may include analog-to-digital converters, mixers, IF components, and the like; the RF circuit 1422 may include a low noise amplifier, power tracking components, and the like; RFFE circuitry 1424 can include filters (e.g., surface/bulk acoustic wave filters), switches, antenna tuners, beam forming components (e.g., phased array antenna components), and so forth. The selection and arrangement of the components of transmit circuitry 1418, receive circuitry 1420, RF circuitry 1422, RFFE circuitry 1424, and antenna panel 1426 (collectively, "transmit/receive components") may be specific to the details of a particular implementation, e.g., whether the communication is TDM or FDM, at mmWave or sub-6 GHz frequencies, etc. In some embodiments, the transmit/receive components may be arranged in multiple parallel transmit/receive chains, and may be arranged in the same or different chips/modules, etc.
In some embodiments, the protocol processing circuitry 1414 may include one or more instances of control circuitry (not shown) to provide control functionality for the transmit/receive components.
UE reception may be established by and via antenna panel 1426, RFFE circuitry 1424, RF circuitry 1422, receive circuitry 1420, digital baseband circuitry 1416, and protocol processing circuitry 1414. In some embodiments, antenna panel 1426 may receive transmissions from AN 1404 by receiving beamformed signals received by multiple antennas/antenna elements of one or more antenna panels 1426.
UE transmissions may be established via and through the protocol processing circuitry 1414, the digital baseband circuitry 1416, the transmit circuitry 1418, the RF circuitry 1422, the RFFE circuitry 1424, and the antenna panel 1426. In some embodiments, transmit components of the UE1402 may apply spatial filters to data to be transmitted to form transmit beams transmitted by antenna elements of antenna panel 1426. Similar to UE1402, AN 1404 may include a host platform 1428 coupled to a modem platform 1430. Host platform 1428 may include application processing circuitry 1432 coupled to protocol processing circuitry 1434 of modem platform 1430. The modem platform may also include digital baseband circuitry 1436, transmit circuitry 1438, receive circuitry 1440, RF circuitry 1442, RFFE circuitry 1444, and antenna panel 1446. The components of the AN 1404 can be similar to, and substantially interchangeable with, the similarly named components of the UE 1402. In addition to performing data transmission/reception as described above, the components of the AN 1404 may perform various logical functions including, for example, RNC functions such as radio bearer management, uplink and downlink dynamic radio resource management, and data packet scheduling.
FIG. 15 illustrates example components of a device 1500 according to some embodiments. In some embodiments, device 1500 may include application circuitry 1502, baseband circuitry 1504, Radio Frequency (RF) circuitry 1506, Front End Module (FEM) circuitry 1508, one or more antennas 1510, and Power Management Circuitry (PMC) 1512 coupled together at least as shown. The illustrated components of the device 1500 may be included in a UE or AN. In some embodiments, the device 1500 may include fewer elements (e.g., the AN may not use the application circuitry 1502, but rather include a processor/controller to process IP data received from the EPC). In some embodiments, device 1500 may include additional elements, such as memory/storage devices, displays, cameras, sensors, or input/output (I/O) interfaces. In other embodiments, the components described below may be included in more than one device (e.g., for a Cloud-RAN (C-RAN) implementation, the circuitry may be included separately in more than one device).
The application circuitry 1502 may include one or more application processors. For example, the application circuitry 1502 may include circuitry such as, but not limited to: one or more single-core or multi-core processors. The processor(s) may include any combination of general-purpose processors and special-purpose processors (e.g., graphics processors, application processors, etc.). The processor may be coupled with or may include memory/storage and may be configured to execute instructions stored in the memory/storage to enable various applications and/or operating systems to run on the device 1500. In some embodiments, the processor of application circuitry 1502 may process IP packets received from the EPC.
Baseband circuitry 1504 may include circuitry such as, but not limited to: one or more single-core or multi-core processors. The baseband circuitry 1504 may include one or more baseband processors or control logic to process baseband signals received from the receive signal path of the RF circuitry 1506 and to generate baseband signals for the transmit signal path of the RF circuitry 1506. Baseband processing circuitry 1504 may interface with application circuitry 1502 to generate and process baseband signals and control the operation of RF circuitry 1506. For example, in some embodiments, the baseband circuitry 1504 may include a third generation (3G) baseband processor 1504A, a fourth generation (4G) baseband processor 1504B, a fifth generation (5G) baseband processor 1504C, or other baseband processor(s) 1504D for other existing generations, generations in development or to be developed in the future (e.g., sixth generation (6G), etc.). The baseband circuitry 1504 (e.g., one or more of the baseband processors 1504A-D) may handle various radio control functions that support communication with one or more radio networks via the RF circuitry 1506. In other embodiments, some or all of the functionality of the baseband processors 1504A-D may be included in modules stored by the memory 1504G and executed via a Central Processing Unit (CPU) 1504E. The radio control functions may include, but are not limited to: signal modulation/demodulation, encoding/decoding, radio frequency shifting, etc. In some embodiments, the modulation/demodulation circuitry of the baseband circuitry 1504 may include Fast Fourier Transform (FFT), precoding, and/or constellation mapping/demapping functionality. In some embodiments, the encoding/decoding circuitry of the baseband circuitry 1504 may include convolution, tail-biting convolution, turbo, viterbi (Viterbi), and/or Low Density Parity Check (LDPC) encoder/decoder functionality. Embodiments of modulation/demodulation and encoder/decoder functions are not limited to these examples, and may include other suitable functions in other embodiments.
In some embodiments, the baseband circuitry 1504 may include one or more audio Digital Signal Processors (DSPs) 1504F. The audio DSP(s) 1504F may include elements for compression/decompression and echo cancellation, and may include other suitable processing elements in other embodiments. In some embodiments, components of the baseband circuitry may be combined as appropriate in a single chip, a single chipset, or disposed on the same circuit board. In some embodiments, some or all of the constituent components of baseband circuitry 1504 and application circuitry 1502 may be implemented together, for example, on a system on a chip (SOC).
In some embodiments, the baseband circuitry 1504 may provide communications compatible with one or more radio technologies. For example, in some embodiments, baseband circuitry 1504 may support communication with an Evolved Universal Terrestrial Radio Access Network (EUTRAN) or other Wireless Metropolitan Area Network (WMAN), wireless Local Area Network (WLAN), wireless Personal Area Network (WPAN). Embodiments in which the baseband circuitry 1504 is configured to support radio communications of more than one wireless protocol may be referred to as multi-mode baseband circuitry.
The RF circuitry 1506 may support communication with wireless networks using modulated electromagnetic radiation through a non-solid medium. In various embodiments, the RF circuitry 1506 may include switches, filters, amplifiers, and the like to facilitate communication with the wireless network. The RF circuitry 1506 may include a receive signal path that may include circuitry to down-convert RF signals received from the FEM circuitry 1508 and provide baseband signals to the baseband circuitry 1504. The RF circuitry 1506 may also include a transmit signal path that may include circuitry to up-convert baseband signals provided by the baseband circuitry 1504 and provide an RF output signal to the FEM circuitry 1508 for transmission.
In some embodiments, the receive signal path of the RF circuitry 1506 may include a mixer circuit 1506a, an amplifier circuit 1506b, and a filter circuit 1506c. In some embodiments, the transmit signal path of the RF circuitry 1506 may include a filter circuit 1506c and a mixer circuit 1506a. The RF circuitry 1506 may also include a synthesizer circuit 1506d for synthesizing the frequencies used by the mixer circuits 1506a of the receive and transmit signal paths. In some embodiments, the mixer circuitry 1506a of the receive signal path may be configured to downconvert RF signals received from the FEM circuitry 1508 based on the synthesized frequency provided by the synthesizer circuitry 1506 d. The amplifier circuit 1506b may be configured to amplify the downconverted signal, and the filter circuit 1506c may be a Low Pass Filter (LPF) or a Band Pass Filter (BPF) configured to remove unwanted signals from the downconverted signal to generate an output baseband signal. The output baseband signal may be provided to baseband circuitry 1504 for further processing. In some embodiments, the output baseband signal may be a zero frequency baseband signal, but this is not required. In some embodiments, mixer circuit 1506a of the receive signal path may comprise a passive mixer, although the scope of the embodiments is not limited in this respect.
In some embodiments, mixer circuitry 1506a of the transmit signal path may be configured to upconvert the input baseband signal based on a synthesis frequency provided by synthesizer circuitry 1506d to generate an RF output signal for FEM circuitry 1508. The baseband signal may be provided by the baseband circuitry 1504 and may be filtered by the filter circuitry 1506c.
In some embodiments, the mixer circuitry 1506a of the receive signal path and the mixer circuitry 1506a of the transmit signal path may comprise two or more mixers and may be arranged for quadrature down-conversion and/or up-conversion, respectively. In some embodiments, the mixer circuit 1506a of the receive signal path and the mixer circuit 1506a of the transmit signal path may include two or more mixers and may be arranged for image rejection (e.g., hartley image rejection). In some embodiments, the mixer circuit 1506a of the receive signal path and the mixer circuit 1506a of the transmit signal path may be arranged for direct down-conversion and/or direct up-conversion, respectively. In some embodiments, mixer circuit 1506a of the receive signal path and mixer circuit 1506a of the transmit signal path may be configured for superheterodyne operation.
In some embodiments, the output baseband signal and the input baseband signal may be analog baseband signals, although the scope of the embodiments is not limited in this respect. In some alternative embodiments, the output baseband signal and the input baseband signal may be digital baseband signals. In these alternative embodiments, the RF circuitry 1506 may include analog-to-digital converter (ADC) and digital-to-analog converter (DAC) circuitry, and the baseband circuitry 1504 may include a digital baseband interface to communicate with the RF circuitry 1506.
In some dual-mode embodiments, separate radio IC circuitry may be provided to process signals for each spectrum, although the scope of the embodiments is not limited in this respect.
In some embodiments, synthesizer circuit 1506d may be a fractional-N or fractional-N/N +1 type synthesizer, although the scope of the embodiments is not limited in this respect as other types of frequency synthesizers may be suitable. For example, synthesizer circuit 1506d may be a delta-sigma synthesizer, a frequency multiplier, or a synthesizer including a phase locked loop with a frequency divider.
The synthesizer circuit 1506d may be configured to synthesize an output frequency for use by the mixer circuit 1506a of the RF circuit 1506 based on the frequency input and the divider control input. In some embodiments, synthesizer circuit 1506d may be a fractional-N/N +1 type synthesizer.
In some embodiments, the frequency input may be provided by a Voltage Controlled Oscillator (VCO), but this is not required. The divider control input may be provided by the baseband circuitry 1504 or the application processor 1502 depending on the desired output frequency. In some embodiments, the divider control input (e.g., N) may be determined from a look-up table based on the channel indicated by the application processor 1502.
Synthesizer circuit 1506d of RF circuit 1506 may include a frequency divider, a Delay Locked Loop (DLL), a multiplexer, and a phase accumulator. In some embodiments, the divider may be a dual-mode divider (DMD) and the phase accumulator may be a Digital Phase Accumulator (DPA). In some embodiments, the DMD may be configured to divide the input signal by N or N +1 (e.g., based on the carry out) to provide a fractional division ratio. In some example embodiments, a DLL may include a set of cascaded, tunable delay elements, a phase detector, a charge pump, and a D-type flip-flop. In these embodiments, the delay elements may be configured to decompose the VCO period into at most Nd equal phase groups, where Nd is the number of delay elements in the delay line. In this manner, the DLL provides negative feedback to help ensure that the total delay through the delay line is one VCO cycle.
In some embodiments, synthesizer circuit 1506d may be configured to generate a carrier frequency as the output frequency, while in other embodiments the output frequency may be a multiple of the carrier frequency (e.g., twice the carrier frequency, four times the carrier frequency) and used with a quadrature generator and divider circuit to generate a plurality of signals at the carrier frequency having a plurality of mutually different phases. In some embodiments, the output frequency may be the LO frequency (fLO). In some embodiments, the RF circuit 1506 may include an IQ/polarity converter.
FEM circuitry 1508 may include a receive signal path that may include circuitry configured to manipulate RF signals received from one or more antennas 1510, amplify the received signals, and provide amplified versions of the received signals to RF circuitry 1506 for further processing. The FEM circuitry 1508 may also include a transmit signal path, which may include circuitry configured to amplify signals provided by the RF circuitry 1506 for transmission by one or more of the one or more antennas 1510. In various embodiments, amplification through either the transmit signal path or the receive signal path may be done only in the RF circuitry 1506, only in the FEM 1508, or both the RF circuitry 1506 and the FEM 1508.
In some embodiments, FEM circuitry 1508 may include TX/RX switches to switch between transmit mode and receive mode operation. The FEM circuitry may include a receive signal path and a transmit signal path. The receive signal path of the FEM circuitry may include a Low Noise Amplifier (LNA) to amplify the received RF signal and provide the amplified received RF signal as an output (e.g., to the RF circuitry 1506). The transmit signal path of FEM circuitry 1508 may include a Power Amplifier (PA) to amplify an input RF signal (e.g., provided by RF circuitry 1506) and one or more filters to generate an RF signal for subsequent transmission (e.g., by one or more of the one or more antennas 1510).
In some embodiments, the PMC 1512 may manage power provided to the baseband circuitry 1504. Specifically, the PMC 1512 may control power selection, voltage scaling, battery charging, or DC-DC conversion. The PMC 1512 may generally be included when the device 1500 is capable of being battery powered, for example, when the device is included in a UE. The PMC 1512 may improve power conversion efficiency while providing desired implementation size and heat dissipation characteristics.
Although Figure 15 shows the PMC 1512 coupled only to the baseband circuitry 1504, in other embodiments the PMC 1512 may additionally or alternatively be coupled with, and perform similar power management operations on, other components such as, but not limited to, the application circuitry 1502, the RF circuitry 1506, or the FEM 1508.
In some embodiments, the PMC 1512 may control, or otherwise be part of, various power saving mechanisms of the device 1500. For example, if the device 1500 is in an RRC_Connected state, where it is still connected to the RAN node because it expects to receive traffic shortly, it may enter a state called discontinuous reception mode (DRX) after a period of inactivity. During this state, the device 1500 may be powered down for brief intervals of time, thereby saving power.
If there is no data traffic activity for an extended period of time, the device 1500 can transition to an RRC_Idle state in which the device 1500 is disconnected from the network and no operations such as channel quality feedback, handover, etc. are performed. The device 1500 enters a very low power state and performs paging, where the device 1500 again periodically wakes up to listen to the network and then powers down again. Device 1500 may not receive data in this state, and in order to receive data, it may transition back to the RRC_Connected state.
The additional power-save mode may allow the device to be unavailable to the network for a period longer than the paging interval (ranging from a few seconds to a few hours). During this time, the device has no access to the network at all and may be completely powered down. Any data transmitted during this period will incur a significant delay and the delay is assumed to be acceptable.
The processor of the application circuitry 1502 and the processor of the baseband circuitry 1504 may be used to execute elements of one or more instances of a protocol stack. For example, the processor of the baseband circuitry 1504, alone or in combination, can be configured to perform layer 3, layer 2, or layer 1 functions, while the processor of the application circuitry 1502 can utilize data (e.g., packet data) received from these layers and further perform layer 4 functions (e.g., Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) layers). As mentioned herein, layer 3 may include an RRC layer. As referred to herein, layer 2 may include a Medium Access Control (MAC) layer, a Radio Link Control (RLC) layer, and a Packet Data Convergence Protocol (PDCP) layer. As mentioned herein, layer 1 may comprise the Physical (PHY) layer of the UE/RAN node.
In some embodiments, an AN or RAN node herein may comprise a server as described above.
Fig. 16 shows an example of an infrastructure device 1600 according to various embodiments. Infrastructure device 1600 (or "system 1600") may be implemented as a base station, a radio head, a RAN node, etc., such as RAN nodes 111 and 112 shown and described previously. In other examples, system 1600 may be implemented in or by a UE, application server(s) 160, and/or any other element/device discussed herein. The system 1600 may include one or more of the following: application circuitry 1605, baseband circuitry 1610, one or more radio front-end modules 1615, memory 1620, Power Management Integrated Circuits (PMICs) 1625, power tee circuitry 1630, network controller 1635, network interface connector 1640, satellite positioning circuitry 1645, and user interface 1650. In some embodiments, device 1600 may include additional elements, such as memory/storage, a display, a camera, sensors, or input/output (I/O) interface elements. In other embodiments, the components described below may be included in more than one device (e.g., for a cloud RAN (C-RAN) implementation, the circuitry may be included separately in more than one device).
As used herein, the term "circuitry" may refer to, be part of, or include hardware components such as the following that are configured to provide the described functionality: electronic circuits, logic circuits, processors (shared, dedicated, or group) and/or memories (shared, dedicated, or group), Application Specific Integrated Circuits (ASICs), Field-Programmable Devices (FPDs) (e.g., Field-Programmable Gate Arrays (FPGAs), Programmable Logic Devices (PLDs), Complex PLDs (CPLDs), High-Capacity PLDs (HCPLDs), structured ASICs, or Systems on Chip (SoCs)), Digital Signal Processors (DSPs), and so forth. In some embodiments, the circuitry may execute one or more software or firmware programs to provide at least some of the described functionality. Furthermore, the term "circuitry" may also refer to a combination of one or more hardware elements (or circuitry used in an electrical or electronic system) and program code for performing the functions of that program code. In these embodiments, the combination of hardware elements and program code may be referred to as a particular type of circuitry.
The terms "application circuitry" and/or "baseband circuitry" may be considered synonymous with "processor circuitry" and may be referred to as "processor circuitry". As used herein, the term "processor circuit" may refer to, be part of, or include circuitry that: the circuit is capable of sequentially and automatically performing a sequence of arithmetic or logical operations; and recording, storing and/or transmitting digital data. The term "processor circuit" may refer to one or more application processors, one or more baseband processors, physical Central Processing Units (CPUs), single-core processors, dual-core processors, tri-core processors, quad-core processors, and/or any other device capable of executing or otherwise manipulating computer-executable instructions, such as program code, software modules, and/or functional processes.
The application circuitry 1605 may include one or more Central Processing Unit (CPU) cores and one or more of the following: a cache memory, a Low Dropout (LDO) regulator, an interrupt controller, a serial interface such as SPI, I2C, or a universal programmable serial interface module, a Real Time Clock (RTC), a timer-counter including interval and watchdog timers, a universal input/output (I/O or IO), a memory card controller such as Secure Digital (SD)/MultiMediaCard (MMC), a Universal Serial Bus (USB) interface, a Mobile Industry Processor Interface (MIPI) interface, and a Joint Test Access Group (JTAG) test access port. By way of example, the application circuitry 1605 may include one or more Intel processors; one or more Advanced Micro Devices (AMD) processors or Accelerated Processing Units (APUs); and so on. In some embodiments, system 1600 may not utilize application circuitry 1605, but may instead include a dedicated processor/controller to process IP data received from the EPC or 5GC, for example.
Additionally or alternatively, the application circuitry 1605 may include circuitry such as (but not limited to) the following: one or more Field Programmable Devices (FPDs), such as Field Programmable Gate Arrays (FPGAs) and the like; Programmable Logic Devices (PLDs), such as Complex PLDs (CPLDs), High-Capacity PLDs (HCPLDs), and the like; ASICs, such as structured ASICs and the like; programmable SoCs (PSoCs); and so on. In such embodiments, the circuitry of the application circuitry 1605 may comprise logic blocks or logic architecture, including other interconnected resources, that may be programmed to perform various functions, such as the processes, methods, functions, etc. of the various embodiments discussed herein. In such embodiments, the circuitry of the application circuitry 1605 may include storage units (e.g., erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory, static memory (e.g., static Random Access Memory (SRAM)), antifuses, etc.) for storing logic blocks, logic architecture, data, etc. in look-up tables (LUTs), and so forth.
Baseband circuitry 1610 may be implemented, for example, as a solder-in substrate including one or more integrated circuits, a single packaged integrated circuit soldered to a main circuit board, or a multi-chip module containing two or more integrated circuits. Although not shown, baseband circuitry 1610 may include one or more digital baseband systems that may be coupled to a CPU subsystem, an audio subsystem, and an interface subsystem via an interconnect subsystem. The digital baseband subsystem may also be coupled to the digital baseband interface and the mixed signal baseband subsystem via additional interconnect subsystems. Each interconnection subsystem may include a bus system, a point-to-point connection, a Network On Chip (NOC) fabric, and/or some other suitable bus or interconnection technology, such as those discussed herein. The audio subsystem may include digital signal processing circuitry, buffer memory, program memory, voice processing accelerator circuitry, data converter circuitry such as analog-to-digital and digital-to-analog converter circuitry, analog circuitry including one or more amplifiers and filters, and/or other similar components. In an aspect of the disclosure, baseband circuitry 1610 can include protocol processing circuitry with one or more instances of control circuitry (not shown) to provide control functions for digital baseband circuitry and/or radio frequency circuitry (e.g., radio front-end module 1615).
User interface circuitry 1650 may include one or more user interfaces designed to enable user interaction with system 1600 or peripheral component interfaces designed to enable interaction with peripheral components of system 1600. The user interface may include, but is not limited to, one or more physical or virtual buttons (e.g., a reset button), one or more indicators (e.g., a Light Emitting Diode (LED)), a physical keyboard or keypad, a mouse, a touchpad, a touch screen, a speaker or other audio emitting device, a microphone, a printer, a scanner, a headset, a display screen or display device, and so forth. The peripheral component interfaces may include, but are not limited to, a non-volatile memory port, a Universal Serial Bus (USB) port, an audio jack, a power supply interface, and the like.
The radio front-end module (RFEM) 1615 may include a millimeter wave RFEM and one or more sub-millimeter wave Radio Frequency Integrated Circuits (RFICs). In some implementations, the one or more sub-millimeter wave RFICs may be physically separate from the millimeter wave RFEM. The RFIC may include a connection to one or more antennas or antenna arrays, and the RFEM may be connected to multiple antennas. In alternative implementations, both millimeter-wave and sub-millimeter-wave radio functions may be implemented in the same physical radio front-end module 1615. RFEM 1615 may include both millimeter wave and sub-millimeter wave antennas.
The memory circuitry 1620 may include one or more of the following: volatile memory including Dynamic Random Access Memory (DRAM) and/or Synchronous Dynamic Random Access Memory (SDRAM); and nonvolatile memory (NVM), including high speed electrically erasable memory (often referred to as flash memory), phase change random access memory (PRAM), magnetoresistive Random Access Memory (MRAM), and the like, and may include three-dimensional (3D) cross point (XPOINT) memory. Memory circuit 1620 may be implemented as one or more of a solder-in package integrated circuit, a socket memory module, and a plug-in memory card.
The PMIC 1625 may include a voltage regulator, a surge protector, a power alarm detection circuit, and one or more backup power sources such as a battery or capacitor. The power alarm detection circuit may detect one or more of power down (under voltage) and surge (over voltage) conditions. Power tee circuitry 1630 can provide power drawn from the network cable to provide both power supply and data connectivity to infrastructure device 1600 with a single cable.
The network controller circuit 1635 may utilize a standard network interface protocol such as ethernet, GRE tunnel based ethernet, multiprotocol Label Switching (MPLS) based ethernet, or some other suitable protocol to provide connectivity to the network. Network connectivity can be provided to/from the infrastructure device 1600 via a network interface connector 1640 using a physical connection, which can be electrical (commonly referred to as a "copper interconnect"), optical, or wireless. Network controller circuitry 1635 may include one or more special purpose processors and/or FPGAs to communicate using one or more of the above-described protocols. In some implementations, the network controller circuitry 1635 may include multiple controllers to provide connectivity to other networks using the same or different protocols.
Positioning circuit 1645 may include circuitry to receive and decode signals transmitted by one or more constellations of navigation satellites of a Global Navigation Satellite System (GNSS). Examples of navigation satellite constellations (or GNSSs) include the Global Positioning System (GPS) of the United States, the Global Navigation System (GLONASS) of Russia, the Galileo system of the European Union, the BeiDou Navigation Satellite System of China, regional navigation systems or GNSS augmentation systems (e.g., India's Navigation with Indian Constellation (NAVIC), Japan's Quasi-Zenith Satellite System (QZSS), France's Doppler Orbitography and Radio-positioning Integrated by Satellite (DORIS)), and so forth. The positioning circuitry 1645 may include various hardware elements (e.g., including hardware devices such as switches, filters, amplifiers, antenna elements, and so forth to facilitate over-the-air (OTA) communication) to communicate with components of a positioning network (e.g., navigation satellite constellation nodes).
Nodes or satellites of the navigation satellite constellation(s) ("GNSS nodes") may provide positioning services by continuously transmitting or broadcasting GNSS signals along the line of sight that may be used by GNSS receivers (e.g., positioning circuitry 1645 and/or positioning circuitry implemented by UEs 101, 102, etc.) to determine their GNSS locations. The GNSS signals may include a pseudorandom code (e.g., a sequence of ones and zeros) known to the GNSS receiver and a message including a time of transmission (ToT) of the code epoch (e.g., a defined point in the pseudorandom code sequence) and a GNSS node position at the ToT. A GNSS receiver may monitor/measure GNSS signals transmitted/broadcast by multiple GNSS nodes (e.g., four or more satellites) and solve various equations to determine a corresponding GNSS location (e.g., spatial coordinates). The GNSS receiver also implements a clock that is generally less stable and accurate than the atomic clock of the GNSS node, and the GNSS receiver may use the measured GNSS signals to determine a deviation of the GNSS receiver from real time (e.g., a deviation of the GNSS receiver clock from the GNSS node time). In some embodiments, the Positioning circuit 1645 may include a Micro-Technology for Positioning, navigation, and Timing (Micro-PNT) IC that uses a master Timing clock to perform position tracking/estimation without GNSS assistance.
The GNSS receiver may measure the time of arrival (ToA) of GNSS signals from multiple GNSS nodes according to its own clock. The GNSS receiver may determine a time of flight (ToF) value for each received GNSS signal based on ToA and ToT, and may then determine a three-dimensional (3D) position and clock bias based on ToF. The 3D location may then be converted to latitude, longitude, and altitude. The positioning circuit 1645 may provide data to the application circuit 1605, which may include one or more of location data or time data. The application circuitry 1605 may use the time data to operate synchronously with other radio base stations (e.g., of the RAN node 111,112, etc.).
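As a rough illustration of the ToT/ToA arithmetic described above, the sketch below estimates a receiver's 3D position and clock bias from pseudoranges using a Gauss-Newton iteration. It is written in Python with NumPy, and the inputs (satellite positions and timestamps) are placeholders rather than data from this disclosure.

```python
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def estimate_position(sat_pos, tot, toa, iters=10):
    """Estimate receiver position and clock bias from >= 4 GNSS signals.

    sat_pos: (N, 3) GNSS node positions at their times of transmission (m)
    tot:     (N,)  times of transmission from the broadcast message (s)
    toa:     (N,)  times of arrival measured with the receiver's own clock (s)
    """
    sat_pos = np.asarray(sat_pos, dtype=float)
    pseudorange = C * (np.asarray(toa) - np.asarray(tot))  # ToF scaled by c
    x = np.zeros(4)  # unknowns: [x, y, z, c * clock_bias]
    for _ in range(iters):
        d = np.linalg.norm(sat_pos - x[:3], axis=1)
        residual = pseudorange - (d + x[3])
        # Jacobian: unit vectors from the satellites toward the receiver,
        # plus a column of ones for the clock-bias term.
        J = np.hstack([-(sat_pos - x[:3]) / d[:, None], np.ones((len(d), 1))])
        x += np.linalg.lstsq(J, residual, rcond=None)[0]
    return x[:3], x[3] / C  # position (m), receiver clock bias (s)
```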
The components shown in fig. 16 may communicate with each other using interface circuitry. As used herein, the term "interface circuit" may refer to, be part of, or include a circuit that supports the exchange of information between two or more components or devices. The term "interface circuit" may refer to one or more hardware interfaces, such as a bus, an input/output (I/O) interface, a peripheral component interface, a network interface card, and so forth. Any suitable bus technology may be used in various implementations, which may include any number of technologies, including Industry Standard Architecture (ISA), extended ISA (EISA), peripheral Component Interconnect (PCI), PCI express, or any number of other technologies. The bus may be a dedicated bus, such as used in SoC-based systems. Other bus systems may be included, such as an I2C interface, SPI interface, point-to-point interface, and power bus, among others.
Fig. 17 is a block diagram illustrating components capable of reading instructions from a machine-readable or computer-readable medium (e.g., a non-transitory machine-readable storage medium) and performing any one or more of the methodologies discussed herein, according to some example embodiments. In particular, fig. 17 shows a diagrammatic representation of hardware resources 1700, which includes one or more processors (or processor cores) 1710, one or more memory/storage devices 1720, and one or more communication resources 1730, each of which may be communicatively coupled via a bus 1740. Hardware resources 1700 may be part of a UE, AN, or LMF. For embodiments utilizing node virtualization (e.g., NFV), hypervisor 1702 may be executed to provide an execution environment for one or more network slices/subslices to utilize hardware resources 1700.
Processor 1710 (e.g., a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) processor, a Complex Instruction Set Computing (CISC) processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP) such as a baseband processor, an Application Specific Integrated Circuit (ASIC), a Radio Frequency Integrated Circuit (RFIC), another processor, or any suitable combination thereof) may include, for example, processor 1712 and processor 1717.
Memory/storage 1720 may include main memory, disk storage, or any suitable combination thereof. Memory/storage 1720 may include, but is not limited to, any type of volatile or non-volatile memory, such as Dynamic Random Access Memory (DRAM), static Random Access Memory (SRAM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory, solid state storage, and the like.
The communication resources 1730 can include interconnection or network interface components or other suitable devices to communicate with one or more peripheral devices 1704 or one or more databases 1706 via a network 1708. For example, communication resources 1730 can include wired communication components (e.g., for coupling via a Universal Serial Bus (USB)), cellular communication components, NFC components, bluetooth components (e.g., bluetooth low energy), wi-Fi components, and other communication components.
The instructions 1750 may include software, programs, applications, applets, apps, or other executable code for causing at least any of the processors 1710 to perform any one or more of the methods discussed herein. The instructions 1750 may reside, completely or partially, within at least one of the processors 1710 (e.g., within a processor's cache memory), the memory/storage 1720, or any suitable combination thereof. Further, any portion of the instructions 1750 may be transferred to the hardware resources 1700 from any combination of the peripherals 1704 or the databases 1706. Thus, the memory of the processors 1710, the memory/storage 1720, the peripherals 1704, and the databases 1706 are examples of computer-readable and machine-readable media.
The following paragraphs describe examples of various embodiments.
Example 1 includes a method for a gNodeB (gNB), comprising: sending a request for Machine Learning (ML) inference result information of a second gNB to the second gNB through an Xn interface between the gNB and the second gNB; receiving an ML inference result report from the second gNB through the Xn interface, the ML inference result report including the ML inference result information; and making one or more decisions based on the received ML inference result report and the one or more ML inference results of the gNB.
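As an informal illustration of the exchange in Example 1, the following Python sketch models one gNB requesting a neighbor gNB's ML inference result report over the Xn interface and combining it with its own inference result before acting. The message fields (e.g., validity_time_s, confidence) and the load-based decision rule are illustrative assumptions, not ASN.1 definitions or behavior mandated by any specification.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class MLInferenceResultReport:
    """Hypothetical container for ML inference result information."""
    predictions: Dict[str, float]   # e.g. {"cell_load": 0.8}
    validity_time_s: float          # how long the results remain valid
    confidence: float               # confidence level in [0, 1]

class GNB:
    def __init__(self, name: str, local_report: MLInferenceResultReport):
        self.name = name
        self.local_report = local_report

    # --- Xn procedures (modelled here as direct calls for simplicity) ---
    def request_inference_result(self, peer: "GNB") -> MLInferenceResultReport:
        """First gNB: request ML inference result information from the second gNB."""
        return peer.respond_inference_result()

    def respond_inference_result(self) -> MLInferenceResultReport:
        """Second gNB: return its ML inference result report."""
        return self.local_report

    def decide(self, peer_report: MLInferenceResultReport) -> str:
        """Make a decision from the peer's report and the local inference result."""
        if peer_report.confidence < 0.5:
            return "ignore peer report"
        own = self.local_report.predictions.get("cell_load", 0.0)
        peer = peer_report.predictions.get("cell_load", 0.0)
        # Offload traffic toward whichever cell is predicted to be less loaded.
        return "offload to peer" if peer < own else "keep traffic local"

# Usage
gnb_a = GNB("gNB-A", MLInferenceResultReport({"cell_load": 0.9}, 5.0, 0.9))
gnb_b = GNB("gNB-B", MLInferenceResultReport({"cell_load": 0.3}, 5.0, 0.8))
report = gnb_a.request_inference_result(gnb_b)
print(gnb_a.decide(report))  # -> "offload to peer"
```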
Example 2 includes the method of example 1, wherein the ML inference result report includes: an indication of a time of validity of the ML inference result information and/or an indication of a confidence level of the ML inference result information.
Example 3 includes the method of example 1 or 2, further comprising: sending, to the second gNB through the Xn interface, a request for ML capability information of the second gNB; and receiving, via the Xn interface, an indication of ML capability information for the second gNB.
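The capability exchange in Example 3 can be sketched in the same informal style; the capability fields below are assumptions chosen for illustration only.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class MLCapabilityInfo:
    """Hypothetical ML capability indication exchanged over the Xn interface."""
    supported_predictions: List[str]   # e.g. ["cell_load", "ue_trajectory"]
    max_prediction_horizon_s: float    # how far ahead the model can predict

def peer_supports(peer_capability: MLCapabilityInfo, wanted: str) -> bool:
    """First gNB: check whether the second gNB can produce a given prediction."""
    return wanted in peer_capability.supported_predictions

# Usage: only request predictions the neighbor has indicated it can provide.
cap = MLCapabilityInfo(["cell_load", "ue_trajectory"], 10.0)
print(peer_supports(cap, "cell_load"))  # -> True
```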
Example 4 includes the method of any one of examples 1-3, further comprising: reporting ML inference result information of a Distribution Unit (DU) of the gNB from the DU to a Centralized Unit (CU) of the gNB through an F1 interface between the DU and the CU.
Example 5 includes the method of any one of examples 1-4, further comprising: reporting, from DUs of the gNB, an indication of a validity time of the ML inference result information and/or an indication of a confidence level of the ML inference result information to CUs of the gNB via the F1 interface.
Example 6 includes the method of any one of examples 1-5, further comprising: reporting ML capability information of the DU of the gNB from the DU to the CU of the gNB through the F1 interface.
Example 7 includes the method of any one of examples 1-6, further comprising: transmitting the ML inference result information of the gNB to a core network through an NG interface.
Example 8 includes the method of any one of examples 1-7, further comprising: receiving a baseline policy from the OAM; and making the one or more decisions based on the received ML inference result report and one or more ML inference results of the gNB, in compliance with a baseline policy from the OAM.
Example 9 includes the method of any one of examples 1-8, wherein the ML inference result information is transmitted through a new inference result request/response process.
Example 10 includes the method of any one of examples 1-9, wherein the ML inference result information is transmitted over an existing Xn interface, F1 interface, and/or NG interface using a new prediction IE.
Example 11 includes the method of any one of examples 1-10, wherein the ML inference result information includes at least one of: radio resource state prediction, TNL capacity prediction, cell capacity prediction, mobility change prediction, cause set, UE trajectory/location prediction, predicted handover request/response, predicted energy efficiency/state, and dual connectivity and carrier aggregation prediction.
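Because examples 9-11 describe carrying such predictions either through a new procedure or as a new prediction IE on existing interfaces, a plain data structure can help visualize what such an IE might hold. The field names below are illustrative placeholders and do not correspond to any defined 3GPP information element.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PredictedMetricsIE:
    """Illustrative 'prediction' IE carrying ML inference result information."""
    radio_resource_status: Optional[float] = None   # predicted resource utilization
    tnl_capacity: Optional[float] = None             # predicted TNL capacity
    cell_capacity: Optional[float] = None            # predicted cell capacity
    ue_trajectory: List[str] = field(default_factory=list)  # predicted cell IDs
    energy_state: Optional[str] = None               # e.g. "low", "normal"
    validity_time_s: float = 0.0                     # validity time of the results
    confidence: float = 0.0                          # confidence level in [0, 1]
```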
Example 12 includes the method of any one of examples 1-11, wherein the ML inference result information includes one or more predictors and/or predicted action spaces.
Example 13 includes the method of any one of examples 1-12, wherein the one or more predictions indicate at least one of a traffic status, a channel status, a radio resource status, and a handover trigger threshold.
Example 14 includes the method of any one of examples 1-13, wherein the predicted action space indicates at least one of a handover decision and feature enablement.
Example 15 includes the method of any one of examples 1-14, wherein making the one or more decisions as a function of the received ML inference result report and one or more ML inference results for the gNB comprises: performing load balancing, energy conservation, mobility optimization, and/or handover according to the received ML inference result report and the one or more machine learning inference results of the gNB.
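Tying examples 8 and 15 together, the sketch below shows a decision step that uses both the local and the neighbor's predicted load while staying within a baseline policy received from OAM. The policy structure and thresholds are assumptions made for the sake of the example.

```python
from dataclasses import dataclass

@dataclass
class BaselinePolicy:
    """Hypothetical OAM baseline policy bounding ML-driven decisions."""
    max_offload_fraction: float   # upper bound on traffic moved per decision
    allow_energy_saving: bool     # whether cells may enter energy saving

def load_balancing_decision(own_load_pred: float,
                            peer_load_pred: float,
                            policy: BaselinePolicy) -> dict:
    """Pick an action from predicted loads, clamped by the OAM baseline policy."""
    action = {"offload_fraction": 0.0, "enter_energy_saving": False}
    if peer_load_pred < own_load_pred:
        # Move at most the policy-permitted share of traffic to the peer cell.
        wanted = (own_load_pred - peer_load_pred) / 2
        action["offload_fraction"] = min(wanted, policy.max_offload_fraction)
    if policy.allow_energy_saving and own_load_pred < 0.05 and peer_load_pred < 0.5:
        action["enter_energy_saving"] = True
    return action

# Usage
policy = BaselinePolicy(max_offload_fraction=0.2, allow_energy_saving=True)
print(load_balancing_decision(0.9, 0.3, policy))
```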
Example 16 includes an apparatus for a gNodeB (gNB), comprising: an interface circuit; and a processor circuit coupled with the interface circuit, wherein the processor circuit is configured to: sending, to a second gNB, a request for Machine Learning (ML) inference result information of the second gNB through an Xn interface between the gNB and the second gNB; receiving an ML inference result report from the second gNB through the Xn interface, the ML inference result report including the ML inference result information; and making one or more decisions based on the received ML inference result report and the one or more ML inference results of the gNB.
Example 17 includes the apparatus of example 16, wherein the ML inference result report includes: an indication of a time of validity of the ML inference result information and/or an indication of a confidence level of the ML inference result information.
Example 18 includes the apparatus of examples 16 or 17, wherein the processor circuit is further to: sending, to the second gNB through the Xn interface, a request for ML capability information of the second gNB; and receiving, via the Xn interface, an indication of ML capability information for the second gNB.
Example 19 includes the apparatus of any one of examples 16-18, wherein the processor circuit is further to: Reporting ML inference result information of a Distribution Unit (DU) of the gNB from the DU to a Centralized Unit (CU) of the gNB through an F1 interface between the DU and the CU.
Example 20 includes the apparatus of any one of examples 16-19, wherein the processor circuit is further to: reporting, from DUs of the gNB, an indication of a validity time of the ML inference result information and/or an indication of a confidence level of the ML inference result information to CUs of the gNB via the F1 interface.
Example 21 includes the apparatus of any one of examples 16-20, wherein the processor circuit is further to: reporting ML capability information of a DU of the gNB from the DU to a CU of the gNB through the F1 interface.
Example 22 includes the apparatus of any one of examples 16-21, wherein the processor circuit is further to: transmitting the ML inference result information of the gNB to a core network through an NG interface.
Example 23 includes the apparatus of any one of examples 16-22, wherein the processor circuit is further to: receiving a baseline policy from the OAM; and making the one or more decisions based on the received ML inference result report and the one or more ML inference results of the gNB, following a baseline policy from the OAM.
Example 24 includes the apparatus of any one of examples 16-23, wherein the ML inference result information is transmitted via a new inference result request/response process.
Example 25 includes the apparatus of any one of examples 16-24, wherein the ML inference result information is transmitted over an existing Xn interface, F1 interface, and/or NG interface using a new prediction IE.
Example 26 includes the apparatus of any one of examples 16-25, wherein the ML inference result information includes at least one of: radio resource state prediction, TNL capacity prediction, cell capacity prediction, mobility change prediction, cause set, UE trajectory/location prediction, predicted handover request/response, predicted energy efficiency/state, and dual connectivity and carrier aggregation prediction.
Example 27 includes the apparatus of any one of examples 16-26, wherein the ML inference result information includes one or more predictors and/or predicted action spaces.
Example 28 includes the apparatus of any one of examples 16-27, wherein the one or more predictions indicate at least one of a traffic status, a channel status, a radio resource status, and a handover trigger threshold.
Example 29 includes the apparatus of any one of examples 16-28, wherein the predicted action space is indicative of at least one of a handover decision and feature enablement.
Example 30 includes the apparatus of any one of examples 16-29, wherein the processor circuit is further to: performing load balancing, energy conservation, mobility optimization, and/or handover according to the received ML inference result report and the one or more machine learning inference results of the gNB.
Example 31 includes an apparatus for a gNodeB (gNB), comprising: means for sending a request to a second gNB for Machine Learning (ML) inference result information of the second gNB over an Xn interface between the gNB and the second gNB; means for receiving an ML inference result report from the second gNB over the Xn interface, the ML inference result report including the ML inference result information; and means for making one or more decisions based on the received ML inference result report and one or more ML inference results of the gNB.
Example 32 includes the apparatus of example 31, wherein the ML inference result report includes: an indication of a time of validity of the ML inference result information and/or an indication of a confidence level of the ML inference result information.
Example 33 includes the apparatus of example 31 or 32, further comprising: means for sending a request for ML capability information of the second gNB to the second gNB through the Xn interface; and means for receiving, over the Xn interface, an indication of ML capability information for the second gNB.
Example 34 includes the apparatus of any one of examples 31-33, further comprising: means for reporting ML inference result information of a Distribution Unit (DU) of the gNB from the DU to a Centralized Unit (CU) of the gNB through an F1 interface between the DU and the CU.
Example 35 includes the apparatus of any one of examples 31-34, further comprising: means for reporting, from DUs of the gNB, an indication of a validity time of the ML inference result information and/or an indication of a confidence level of the ML inference result information to CUs of the gNB via the F1 interface.
Example 36 includes the apparatus of any one of examples 31-35, further comprising: means for reporting ML capability information of a DU of the gNB from the DU to a CU of the gNB over the F1 interface.
Example 37 includes the apparatus of any one of examples 31-36, further comprising: means for transmitting the ML inference result information of the gNB to a core network through an NG interface.
Example 38 includes the apparatus of any one of examples 31-37, further comprising: means for receiving a baseline policy from the OAM; and means for making the one or more decisions based on the received ML inference result report and the one or more ML inference results of the gNB, following a baseline policy from the OAM.
Example 39 includes the apparatus of any one of examples 31-38, wherein the ML inference result information is transmitted through a new inference result request/response process.
Example 40 includes the apparatus of any one of examples 31-39, wherein the ML inference result information is transmitted over an existing Xn interface, F1 interface, and/or NG interface using a new prediction IE.
Example 41 includes the apparatus of any one of examples 31-40, wherein the ML inference result information includes at least one of: radio resource state prediction, TNL capacity prediction, cell capacity prediction, mobility change prediction, cause set, UE trajectory/location prediction, predicted handover request/response, predicted energy efficiency/state, and dual connectivity and carrier aggregation prediction.
Example 42 includes the apparatus of any one of examples 31-41, wherein the ML inference result information includes one or more predictors and/or predicted action spaces.
Example 43 includes the apparatus of any one of examples 31-42, wherein the one or more predictions indicate at least one of a traffic status, a channel status, a radio resource status, and a handover trigger threshold.
Example 44 includes the apparatus of any one of examples 31-43, wherein the predicted action space is indicative of at least one of a handover decision and feature enablement.
Example 45 includes the apparatus of any one of examples 31-44, further comprising: means for performing load balancing, energy conservation, mobility optimization, and/or handover based on the received ML inference result report and the one or more machine learning inference results of the gNB.
Example 46 includes a computer-readable storage medium having instructions stored thereon, which when executed by one or more processors, cause the one or more processors to perform a method comprising: sending, to a second gNB, a request for Machine Learning (ML) inference result information of the second gNB through an Xn interface between the gNB and the second gNB; receiving an ML inference result report from the second gNB through the Xn interface, the ML inference result report including the ML inference result information; and making one or more decisions based on the received ML inference result report and the one or more ML inference results of the gNB.
Example 47 includes the computer-readable storage medium of example 46, wherein the ML inference result report includes: an indication of a time of validity of the ML inference result information and/or an indication of a confidence level of the ML inference result information.
Example 48 includes the computer-readable storage medium of example 46 or 47, the method further comprising: sending, to the second gNB through the Xn interface, a request for ML capability information of the second gNB; and receiving, over the Xn interface, an indication of ML capability information for the second gNB.
Example 49 includes the computer-readable storage medium of any one of examples 46-48, the method further comprising: reporting ML inference result information of a Distribution Unit (DU) of the gNB from the DU to a Centralized Unit (CU) of the gNB through an F1 interface between the DU and the CU.
Example 50 includes the computer-readable storage medium of any one of examples 46-49, the method further comprising: reporting, from DUs of the gNB, an indication of a validity time of the ML inference result information and/or an indication of a confidence level of the ML inference result information to CUs of the gNB via the F1 interface.
Example 51 includes the computer-readable storage medium of any one of examples 46-50, the method further comprising: reporting ML capability information of the DU of the gNB from the DU to the CU of the gNB through the F1 interface.
Example 52 includes the computer-readable storage medium of any one of examples 46-51, the method further comprising: transmitting the ML inference result information of the gNB to a core network through an NG interface.
Example 53 includes the computer-readable storage medium of any one of examples 46-52, the method further comprising: receiving a baseline policy from the OAM; and making the one or more decisions based on the received ML inference result report and one or more ML inference results of the gNB, in compliance with a baseline policy from the OAM.
Example 54 includes the computer-readable storage medium of any one of examples 46-53, wherein the ML inference result information is transmitted through a new inference result request/response process.
Example 55 includes the computer-readable storage medium of any one of examples 46-54, wherein the ML inference result information is transmitted over an existing Xn interface, F1 interface, and/or NG interface using a new predicted IE.
Example 56 includes the computer-readable storage medium of any one of examples 46-55, wherein the ML inference result information includes at least one of: radio resource status prediction, TNL capacity prediction, cell capacity prediction, mobility change prediction, cause set, UE trajectory/location prediction, predicted handover request/response, predicted energy efficiency/status, and dual connectivity and carrier aggregation prediction.
Example 57 includes the computer-readable storage medium of any one of examples 46-56, wherein the ML inference result information includes one or more predictors and/or a predicted action space.
Example 58 includes the computer-readable storage medium of any one of examples 46-57, wherein the one or more predictions indicate at least one of a traffic status, a channel status, a radio resource status, and a handover trigger threshold.
Example 59 includes the computer-readable storage medium of any one of examples 46-58, wherein the predicted action space is indicative of at least one of a handover decision and feature enablement.
Example 60 includes the computer-readable storage medium of any one of examples 46-59, wherein the method further comprises: performing load balancing, energy conservation, mobility optimization, and/or handover according to the received ML inference result report and the one or more machine learning inference results of the gNB.
The foregoing detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show, by way of illustration, specific embodiments that can be practiced. These embodiments are also referred to herein as "examples." Such examples may include elements in addition to those shown or described. However, the inventors also contemplate examples in which only those elements shown or described are provided. Moreover, the inventors also contemplate examples using any combination or permutation of those elements shown or described (or one or more aspects thereof), with respect to a particular example (or one or more aspects thereof) or with respect to other examples (or one or more aspects thereof).
All publications, patents, and patent documents mentioned in this document are incorporated by reference herein in their entirety as if individually incorporated by reference. If this document is inconsistent with the usage of those documents incorporated by reference, the usage in the cited references should be considered supplementary to that of this document; for irreconcilable inconsistencies, the usage in this document controls.
In this document, the terms "a" or "an" are used, as is common in patent documents, to include one or more, independent of any other instances or usages of "at least one" or "one or more." In this document, unless otherwise specified, the term "or" is used to refer to a non-exclusive or, such that "A or B" includes "A but not B," "B but not A," and "A and B." In the appended claims, the terms "including" and "in which" are used as the plain-English equivalents of the respective terms "comprising" and "wherein." Furthermore, in the following claims, the terms "comprising" and "including" are open-ended, that is, a system, device, article, or process that includes elements in addition to those listed after such a term in a claim is still considered to fall within the scope of that claim. Furthermore, in the following claims, the terms "first," "second," "third," and the like are used merely as labels, and do not impose numerical requirements on their objects.
The above description is intended to be illustrative and not restrictive. For example, the above-described examples (or one or more aspects thereof) may be used in combination with each other. Other embodiments may be used, for example, by one of ordinary skill in the art upon reading the foregoing description. The Abstract is provided to enable the reader to quickly ascertain the nature of the technical disclosure, and is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. Furthermore, in the foregoing detailed description, various features may be grouped together to simplify the disclosure. This should not be interpreted as intending that an unclaimed disclosed feature is essential to any claim. Rather, inventive subject matter may lie in less than all features of a particular disclosed embodiment. Thus, the following claims are hereby incorporated into the detailed description, with each claim standing on its own as a separate embodiment. The scope of the embodiments should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.

Claims (25)

1. A method for a gNodeB (gNB), comprising:
sending a request for Machine Learning (ML) inference result information of a second gNB to the second gNB through an Xn interface between the gNB and the second gNB;
receiving an ML inference result report from the second gNB through the Xn interface, the ML inference result report including the ML inference result information; and
making one or more decisions based on the received ML inference result report and one or more ML inference results of the gNB.
2. The method of claim 1, wherein the ML inference result report comprises: an indication of a validity time of the ML inference result information and/or an indication of a confidence level of the ML inference result information.
3. The method of claim 1, further comprising:
sending, to the second gNB through the Xn interface, a request for ML capability information of the second gNB; and
receiving, over the Xn interface, an indication of ML capability information of the second gNB.
4. The method of claim 1, further comprising:
reporting ML inference result information of a Distribution Unit (DU) of the gNB from the DU to a Centralized Unit (CU) of the gNB through an F1 interface between the DU and the CU.
5. The method of claim 4, further comprising:
reporting, from DUs of the gNB, an indication of a validity time of the ML inference result information and/or an indication of a confidence level of the ML inference result information to CUs of the gNB via the F1 interface.
6. The method of claim 4, further comprising:
reporting ML capability information of a DU of the gNB from the DU to a CU of the gNB through the F1 interface.
7. The method of claim 1, further comprising:
transmitting the ML inference result information of the gNB to a core network through an NG interface.
8. The method of claim 1, further comprising:
receiving a baseline policy from the OAM; and
making the one or more decisions based on the received ML inference result report and one or more ML inference results of the gNB, in compliance with a baseline policy from the OAM.
9. The method of claim 1, wherein the ML inference result information is transmitted through a new inference result request/response process.
10. The method of claim 1, wherein the ML inference result information is transmitted over an existing Xn interface, F1 interface, and/or NG interface using a new predicted IE.
11. The method of any of claims 1-10, wherein the ML inference result information comprises at least one of: radio resource state prediction, TNL capacity prediction, cell capacity prediction, mobility change prediction, cause set, UE trajectory/location prediction, predicted handover request/response, predicted energy efficiency/state, and dual connectivity and carrier aggregation prediction.
12. The method of any of claims 1-10, wherein the ML inference result information comprises one or more predictors and/or predicted action spaces.
13. The method of claim 12, wherein the one or more predictions indicate at least one of a traffic status, a channel status, a radio resource status, and a handover trigger threshold.
14. The method of claim 12, wherein the predicted action space indicates at least one of a handover decision and feature enablement.
15. The method of claim 1, wherein making the one or more decisions based on the received ML inference result report and the one or more ML inference results of the gNB comprises:
performing load balancing, energy conservation, mobility optimization, and/or handover according to the received ML inference result report and the one or more machine learning inference results of the gNB.
16. An apparatus for a gNodeB (gNB), comprising:
an interface circuit; and
a processor circuit coupled with the interface circuit,
wherein the processor circuit is to:
sending a request for Machine Learning (ML) inference result information of a second gNB to the second gNB through an Xn interface between the gNB and the second gNB;
receiving an ML inference result report from a second gNB through the Xn interface, the ML inference result report including the ML inference result information; and
making one or more decisions based on the received ML inference result report and one or more ML inference results of the gNB.
17. The apparatus of claim 16, wherein the ML inference result report comprises: an indication of a time of validity of the ML inference result information and/or an indication of a confidence level of the ML inference result information.
18. The apparatus of claim 16, wherein the processor circuit is further to:
sending, to the second gNB through the Xn interface, a request for ML capability information of the second gNB; and
receiving, over the Xn interface, an indication of ML capability information of the second gNB.
19. The apparatus of claim 16, wherein the processor circuit is further to:
reporting ML inference result information of a Distribution Unit (DU) of the gNB from the DU to a Centralized Unit (CU) of the gNB through an F1 interface between the DU and the CU.
20. The apparatus of claim 19, wherein the processor circuit is further to:
reporting, from DUs of the gNB, an indication of a validity time of the ML inference result information and/or an indication of a confidence level of the ML inference result information to CUs of the gNB via the F1 interface.
21. The apparatus of claim 19, wherein the processor circuit is further to:
reporting ML capability information of a DU of the gNB from the DU to a CU of the gNB through the F1 interface.
22. The apparatus of claim 16, wherein the processor circuit is further to:
transmitting the ML inference result information of the gNB to a core network through an NG interface.
23. The apparatus of claim 16, wherein the processor circuit is further to:
receiving a baseline policy from the OAM; and
making the one or more decisions based on the received ML inference result report and one or more ML inference results of the gNB, in compliance with a baseline policy from the OAM.
24. The apparatus of claim 16, wherein the ML inference result information is transmitted through a new inference result request/response process.
25. The apparatus of claim 16, wherein the ML inference result information is transmitted over an existing Xn interface, F1 interface, and/or NG interface using a new predicted IE.
CN202210334930.5A 2021-04-01 2022-03-31 Apparatus and method for RAN intelligent network Pending CN115250502A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CNPCT/CN2021/084924 2021-04-01
CN2021084924 2021-04-01

Publications (1)

Publication Number Publication Date
CN115250502A true CN115250502A (en) 2022-10-28

Family

ID=83697766

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210334930.5A Pending CN115250502A (en) 2021-04-01 2022-03-31 Apparatus and method for RAN intelligent network

Country Status (1)

Country Link
CN (1) CN115250502A (en)

Similar Documents

Publication Publication Date Title
US11432135B2 (en) Vehicle-to-everything (V2X) control function based on user equipment (UE) capability
US11166183B2 (en) Measurement gap and synchronization signal block—based measurement timing configuration scheduling
US11659564B2 (en) Radio control channel resource set design
CN110999192B (en) Circuits, apparatuses, and media for Resource Element (RE) mapping in a New Radio (NR)
CN113261327A (en) Random Access Channel (RACH) optimization and automatic neighbor relation creation for 5G networks
CN113785506A (en) Beam switching based on DCI indication for multiple TRP URLLC
US20210203449A1 (en) Mechanism on response of pre-allocated resource based pusch transmission
CN114245994A (en) RRM measurement restriction for CLI measurements
CN113572495A (en) System and method for multiplexing UL control and UL data transmission
CN112866898A (en) Apparatus and method for 5G NR positioning in NRPPa
CN112804717A (en) Apparatus and method for notifying QoS information to application server
CN112953998A (en) Apparatus and method for UE unaware EAS IP address replacement
CN114449465A (en) Apparatus and method for charging of 5GS capability to support edge computing
CN113676911A (en) Apparatus and method for interference suppression for NR-LTE dynamic spectrum sharing
CN113543338A (en) System and method for multiplexing or de-multiplexing overlapping UL transmissions
CN112654036A (en) Apparatus and method for blind decoding and/or channel estimation capability indication
CN115250502A (en) Apparatus and method for RAN intelligent network
CN116963167A (en) Apparatus and method for collecting RSRQ and SINR for each SSB
CN115701690A (en) Apparatus and method for provisioning management services with asynchronous operation
CN115707137A (en) Apparatus and method for resource reselection with multiple sensing occasions
CN115085778A (en) Apparatus and method for AI-based MIMO operation
WO2024102301A1 (en) Ue behavior and conditions with reduced prs measurement samples
CN117156502A (en) Apparatus and method for enhancing CHO including SCG configuration
CN115696551A (en) Apparatus and method for initial synchronization and beam acquisition
CN115551121A (en) Apparatus and method for backward compatibility for sidelink DRX support

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination