WO2024093057A1 - Devices, methods and computer readable storage medium for communication - Google Patents


Info

Publication number
WO2024093057A1
Authority
WO
WIPO (PCT)
Prior art keywords
model
csi
terminal device
processing units
csi report
Prior art date
Application number
PCT/CN2023/078188
Other languages
English (en)
Inventor
Bingchao LIU
Jianfeng Wang
Haiming Wang
Tingnan BAO
Original Assignee
Lenovo (Beijing) Limited
Priority date
Filing date
Publication date
Application filed by Lenovo (Beijing) Limited filed Critical Lenovo (Beijing) Limited
Priority to PCT/CN2023/078188 priority Critical patent/WO2024093057A1/fr
Publication of WO2024093057A1 publication Critical patent/WO2024093057A1/fr

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W24/00Supervisory, monitoring or testing arrangements
    • H04W24/02Arrangements for optimising operational condition

Definitions

  • Example embodiments of the present disclosure generally relate to the field of telecommunication, and in particular, to a terminal device, a network device, methods, and a computer readable storage medium for communication.
  • the 3rd Generation Partnership Project (3GPP) is studying the potential benefit of adopting artificial intelligence (AI)/machine learning (ML) models for the air interface in some use cases, such as employing an AI/ML model in beam prediction.
  • each AI/ML model corresponds to a set of hardware resources, at least including memories and multiply-accumulate units (MACs). Therefore, each AI/ML model can only be used for one AI/ML prediction operation at a time instance, and some management needs to be introduced for AI/ML model deployment.
  • example embodiments of the present disclosure provide a solution for communication with AI/ML model.
  • a terminal device comprising a processor and a transceiver coupled to the processor.
  • the processor is configured to: transmit, to a network device via the transceiver, a number of processing units for an artificial intelligence (AI) /machine learning (ML) model or an indication of a plurality of AI/ML models for an AI/ML functionality.
  • the processor is also configured to: receive, from the network device via the transceiver, a configuration for an operation of the terminal device associated with the AI/ML model or the AI/ML functionality, wherein the configuration is determined based on the number of the processing units or the plurality of AI/ML models.
  • a network device comprising a processor and a transceiver coupled to the processor.
  • the processor is configured to: receive, from a terminal device via the transceiver, a number of processing units for an artificial intelligence (AI) /machine learning (ML) model or an indication of a plurality of AI/ML models for an AI/ML functionality.
  • the processor is also configured to: determine, based on the number of processing units or the plurality of AI/ML models, a configuration for an operation of the terminal device associated with the AI/ML model or the AI/ML functionality.
  • the processor is also configured to: transmit the configuration to the terminal device via the transceiver.
  • a method performed by a terminal device comprises: transmitting, to a network device, a number of processing units for an artificial intelligence (AI) /machine learning (ML) model or an indication of a plurality of AI/ML models for an AI/ML functionality.
  • the method also comprises: receiving, from the network device, a configuration for an operation of the terminal device associated with the AI/ML model or the AI/ML functionality, wherein the configuration is determined based on the number of the processing units or the plurality of AI/ML models.
  • a method performed by a network device comprises: receiving, from a terminal device, a number of processing units for an artificial intelligence (AI) /machine learning (ML) model or an indication of a plurality of AI/ML models for an AI/ML functionality.
  • the method also comprises: determining, based on the number of processing units or the plurality of AI/ML models, a configuration for an operation of the terminal device associated with the AI/ML model or the AI/ML functionality.
  • the method also comprises: transmitting the configuration to the terminal device.
  • a non-transitory computer readable medium having program instructions stored thereon.
  • When the program instructions are executed by an apparatus, they cause the apparatus at least to: transmit, to a network device, a number of processing units for an artificial intelligence (AI)/machine learning (ML) model or an indication of a plurality of AI/ML models for an AI/ML functionality; and receive, from the network device, a configuration for an operation of the terminal device associated with the AI/ML model or the AI/ML functionality, wherein the configuration is determined based on the number of the processing units or the plurality of AI/ML models.
  • a non-transitory computer readable medium having program instructions stored thereon.
  • When the program instructions are executed by an apparatus, they cause the apparatus at least to: receive, from a terminal device, a number of processing units for an artificial intelligence (AI)/machine learning (ML) model or an indication of a plurality of AI/ML models for an AI/ML functionality; determine, based on the number of processing units or the plurality of AI/ML models, a configuration for an operation of the terminal device associated with the AI/ML model or the AI/ML functionality; and transmit the configuration to the terminal device.
  • FIG. 1 illustrates an example communication system in which some embodiments of the present disclosure may be implemented
  • FIG. 2 illustrates an example AI/ML model deployment in which some example embodiments of the present disclosure may be implemented
  • FIG. 3 illustrates a process flow for communication with AI/ML model in accordance with some example embodiments of the present disclosure
  • FIG. 4 illustrates AI/ML model based AI/ML management in accordance with some example embodiments of the present disclosure
  • FIG. 5 illustrates AI/ML functionality based AI/ML management in accordance with some example embodiments of the present disclosure
  • FIG. 6 illustrates an example AI/ML model or processing unit occupation time in accordance with some example embodiments of the present disclosure
  • FIG. 7A illustrates another example AI/ML model or processing unit occupation time in accordance with some example embodiments of the present disclosure
  • FIG. 7B illustrates a further example AI/ML model or processing unit occupation time in accordance with some example embodiments of the present disclosure
  • FIG. 8 illustrates an example report with both first type processing unit and second type processing unit in accordance with some example embodiments of the present disclosure
  • FIG. 9 illustrates an example report with first type processing unit in accordance with some example embodiments of the present disclosure
  • FIG. 10 illustrates an example of a method implemented at a terminal device in accordance with some example embodiments of the present disclosure
  • FIG. 11 illustrates an example of a method implemented at a network device in accordance with some example embodiments of the present disclosure.
  • FIG. 12 illustrates a simplified block diagram of a device that is suitable for implementing embodiments of the present disclosure.
  • references in the present disclosure to “one embodiment” , “an embodiment” , “an example embodiment” , and the like indicate that the embodiment described may include a particular feature, structure, or characteristic, but it is not necessary that every embodiment includes the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
  • The terms “first” and “second” etc. may be used herein to describe various elements, but these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and similarly, a second element could be termed a first element, without departing from the scope of example embodiments.
  • the term “and/or” includes any and all combinations of one or more of the listed terms.
  • the term “communication network” refers to a network following any suitable communication standards, such as the fifth generation new radio (5G NR) , Long Term Evolution (LTE) , LTE-Advanced (LTE-A) , Wideband Code Division Multiple Access (WCDMA) , High-Speed Packet Access (HSPA) , Narrow Band Internet of Things (NB-IoT) and so on.
  • the communications between a terminal device and a network device in the communication network may be performed according to any suitable generation communication protocols, including, but not limited to, the fourth generation (4G) , 4.5G, the future fifth generation (5G) communication protocols, and/or any other protocols either currently known or to be developed in the future.
  • Embodiments of the present disclosure may be applied in various communication systems. Given the rapid development in communications, there will of course also be future type communication technologies and systems with which the present disclosure may be embodied. It should not be seen as limiting the scope of the present disclosure to only the aforementioned system.
  • NF refers to a function in 5G core network, including at least one of Network Slice Selection Function (NSSF) , Network Exposure Function (NEF) , Network Repository Function (NRF) , Policy Control Function (PCF) , Unified Data Management (UDM) , Unified Data Repository (UDR) , Application Function (AF) , Network Data Analytics Function (NWDAF) , trusted non-3GPP gateway function (TNGF) , Authentication Server Function (AUSF) , Access and Mobility Management Function (AMF) , Session Management Function (SMF) , and User Plane Function (UPF) .
  • terminal device refers to any end device that may be capable of wireless communication.
  • a terminal device may also be referred to as a communication device, user equipment (UE) , a Subscriber Station (SS) , a Portable Subscriber Station, a Mobile Station (MS) , or an Access Terminal (AT) .
  • the terminal device may include, but is not limited to, a mobile phone, a cellular phone, a smart phone, voice over IP (VoIP) phones, wireless local loop phones, a tablet, a wearable terminal device, a personal digital assistant (PDA) , portable computers, desktop computer, image capture terminal devices such as digital cameras, gaming terminal devices, music storage and playback appliances, vehicle-mounted wireless terminal devices, wireless endpoints, mobile stations, laptop-embedded equipment (LEE) , laptop-mounted equipment (LME) , USB dongles, smart devices, wireless customer-premises equipment (CPE) , an Internet of Things (IoT) device, a watch or other wearable, a head-mounted display (HMD) , a vehicle, a drone, a medical device and applications (for example, remote surgery) , an industrial device and applications (for example, a robot and/or other wireless devices operating in an industrial and/or an automated processing chain contexts) , a consumer electronics device, a device operating on commercial and/or industrial wireless networks, and the like.
  • 3GPP is studying the potential benefit of adopting AI/ML functions for the air interface for some identified use cases.
  • In one scenario, the AI/ML function is employed at the UE side by adopting multiple AI/ML models, where the AI/ML models can be used for channel state information (CSI) or beam prediction.
  • FIG. 1 illustrates an example communication system in which some embodiments of the present disclosure may be implemented.
  • beam prediction between a network device 110 and a terminal device 120 can be achieved with AI/ML model.
  • the AI/ML model may be used to predict the best-K beams or beam pairs from a beam set based on the measurement on another beam set, where the number of beams or beam pairs in the prediction beam set is larger than the number of beams or beam pairs in the measurement beam set.
  • some AI/ML model may be used to predict the CSI or the best beams or beam pairs for future time instances based on the historical measurement.
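As an illustrative sketch only (not part of the disclosure; the function name and inputs are hypothetical, and the prediction model itself is abstracted away), the best-K selection over a predicted beam set can be expressed as:

```python
def select_best_k(predicted_rsrp, k):
    """Return the indices of the K beams (or beam pairs) with the highest
    predicted RSRP over the prediction beam set, which is larger than the
    measurement beam set the predictions were derived from."""
    ranked = sorted(range(len(predicted_rsrp)),
                    key=lambda i: predicted_rsrp[i], reverse=True)
    return ranked[:k]
```

For example, with predicted RSRPs of [-80, -70, -90, -60] dBm, the best two beams are indices 3 and 1.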
  • each AI/ML model corresponds to a set of hardware resources including memories and multiply-accumulate units (MACs). Therefore, each AI/ML model can only be used for one AI/ML prediction operation at a time instance.
  • The network and the UE need to align on the occupation of all the available AI/ML models for more efficient AI/ML operation. Management of the AI/ML models deployed at the UE side needs to be achieved based on the concept of AI/ML model occupation. Those skilled in the art can understand that AI/ML model management is also needed in other cases, such as CSI compression, positioning, etc.
  • A processing unit for CSI is introduced for more efficient CSI triggering.
  • The CSI processing criteria are illustrated in detail in the following.
  • the terminal device 120 indicates to the network device 110 the number of supported simultaneous CSI calculations N_CPU with the RRC parameter simultaneousCSI-ReportsPerCC in a component carrier, and simultaneousCSI-ReportsAllCC across all component carriers. If the terminal device 120 supports N_CPU simultaneous CSI calculations, it is said to have N_CPU processing units for processing CSI reports. If L processing units are occupied for calculation of CSI reports in a given orthogonal frequency division multiplexing (OFDM) symbol, the terminal device 120 has N_CPU - L unoccupied processing units.
  • If N CSI reports start occupying their respective processing units on the same OFDM symbol on which N_CPU - L processing units are unoccupied, the terminal device 120 is not required to update the N - M requested CSI reports with the lowest priority, where 0 ≤ M ≤ N is the largest value such that the M highest-priority reports satisfy Σ_{n=1}^{M} O_CPU^(n) ≤ N_CPU - L.
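The counting rule above can be sketched as follows; this is a minimal illustration assuming the N requested reports are listed in decreasing priority order with per-report O_CPU costs, and the names are placeholders rather than specification terms:

```python
def reports_to_update(o_cpu_costs, n_cpu, l_occupied):
    """Given N CSI reports sorted by decreasing priority, each with an
    O_CPU processing-unit cost, return M: the largest number of
    highest-priority reports that fit in the N_CPU - L unoccupied units.
    The remaining N - M lowest-priority reports need not be updated."""
    free = n_cpu - l_occupied
    used = 0
    m = 0
    for cost in o_cpu_costs:
        if used + cost > free:
            break  # this report (and all lower-priority ones) would overflow
        used += cost
        m += 1
    return m
```

For instance, with N_CPU = 4, L = 1 and costs [1, 2, 1], only the two highest-priority reports fit in the three free units.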
  • O_CPU = 1 for a CSI report with CSI-ReportConfig with higher layer parameter reportQuantity set to 'cri-RSRP' , 'ssb-Index-RSRP' , 'cri-SINR' , 'ssb-Index-SINR' , 'cri-RSRP-Index' , 'ssb-Index-RSRP-Index' , 'cri-SINR-Index' , 'ssb-Index-SINR-Index' or 'none' .
  • CSI-RS-ResourceSet with higher layer parameter trs-Info is not configured.
  • “cri” corresponds to the CSI-RS resource indicator, which is related to the beam indicator.
  • a periodic or semi-persistent CSI report excluding an initial semi-persistent (SP) CSI report on PUSCH after the PDCCH triggering the report, occupies one or more processing units from the first symbol of the earliest one of each CSI-RS /Channel State Information –Interference Measurement (CSI-IM) /Synchronization Signal and PBCH block (SSB) resource for channel or interference measurement, respective latest CSI-RS/CSI-IM/SSB occasion no later than the corresponding CSI reference resource, until the last symbol of the configured physical uplink shared channel (PUSCH) or physical uplink control channel (PUCCH) carrying the report.
  • An aperiodic CSI report occupies one or more processing units from the first symbol after the physical downlink control channel (PDCCH) triggering the CSI report until the last symbol of the scheduled PUSCH carrying the report.
  • An initial semi-persistent CSI report on PUSCH after the PDCCH trigger occupies one or more processing units from the first symbol after the PDCCH until the last symbol of the scheduled PUSCH carrying the report.
  • If the PDCCH reception includes two PDCCH candidates from two respective search space sets, for the purpose of determining the processing unit occupation duration, the PDCCH candidate that ends later in time is used.
  • the one or more processing units are occupied for a number of OFDM symbols as follows.
  • a semi-persistent CSI report, excluding an initial semi-persistent CSI report on PUSCH after the PDCCH triggering the report, occupies one or more processing units from the first symbol of the earliest one of each transmission occasion of the periodic or semi-persistent CSI-RS/CSI-IM/SSB resource for channel measurement for L1-RSRP computation, until Z′_3 symbols after the last symbol of the latest one of the CSI-RS/CSI-IM/SSB resources for channel measurement for L1-RSRP computation in each transmission occasion.
  • An aperiodic CSI report occupies one or more processing units from the first symbol after the PDCCH triggering the CSI report until the later of Z_3 symbols after the first symbol after the PDCCH triggering the CSI report and Z′_3 symbols after the last symbol of the latest one of each CSI-RS/CSI-IM/SSB resource for channel measurement for L1-RSRP computation.
  • Z_3 and Z′_3 are defined in Table 5.4-2 of 3GPP specification 38.214.
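Assuming time is indexed in OFDM symbols, the aperiodic occupation window described above might be computed as in this sketch (the function name and symbol indices are illustrative, not specification values):

```python
def aperiodic_occupation_window(pdcch_last_symbol, last_csirs_symbol, z3, z3_prime):
    """Occupation window, in OFDM symbol indices, of the processing units
    for an aperiodic CSI report used for L1-RSRP computation.

    The window starts at the first symbol after the triggering PDCCH and
    ends at the later of Z_3 symbols after that start and Z'_3 symbols
    after the last symbol of the latest CSI-RS/CSI-IM/SSB resource for
    channel measurement."""
    start = pdcch_last_symbol + 1
    end = max(start + z3, last_csirs_symbol + z3_prime)
    return start, end
```

For example, a PDCCH ending at symbol 10 with the latest CSI-RS ending at symbol 20, Z_3 = 4 and Z′_3 = 3 gives a window of symbols 11 through 23.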
  • the terminal device 120 is not expected to have more active CSI-RS ports or active CSI-RS resources in active bandwidth parts (BWPs) than reported as a capability.
  • A non-zero power (NZP) CSI-RS resource is active for a duration of time defined as follows.
  • the occupation time starts from the end of the PDCCH containing the request and ends at the end of the scheduled PUSCH containing the report associated with this aperiodic CSI-RS.
  • If two PDCCH candidates are linked by searchSpaceLinking, for the purpose of determining the NZP CSI-RS resource active duration, the PDCCH candidate that ends later in time among the two linked PDCCH candidates is used.
  • the occupation time starts from the end of when the activation command is applied, and ends at the end of when the deactivation command is applied.
  • the occupation time starts when the periodic CSI-RS is configured by higher layer signalling, and ends when the periodic CSI-RS configuration is released. If a CSI-RS resource is referred N times by one or more CSI report settings, the CSI-RS resource and the CSI-RS ports within the CSI-RS resource are counted N times.
  • a CSI-RS resource set for channel measurement configured with two resource groups and N resource pairs, if a CSI-RS resource is referred X times by one of the M CSI-RS resources, and/or one or two resource pairs, the CSI-RS resource and the CSI-RS ports within the CSI-RS resource are counted X times.
  • In some embodiments, the study can be achieved with a life cycle management (LCM) procedure, on the basis that an AI/ML model has a model ID with associated information and/or model functionality, at least for some AI/ML operations.
  • For model selection, activation, deactivation, switching, and fallback, at least for UE-sided models and two-sided models, the study can be achieved with the following mechanisms. If the model selection is decided by the network device 110, it can be network-initiated, or it can be UE-initiated and requested to the network. If the model selection is decided by the terminal device 120, it can be event-triggered as configured by the network device 110, with the terminal device 120’s decision reported to the network device 110; it can be UE-autonomous with the terminal device 120’s decision reported to the network device 110; or it can be UE-autonomous with the terminal device 120’s decision not reported to the network device 110.
  • study of potential specification impact can enable the development of a set of specific models, such as scenario or configuration-specific and site-specific models, as compared to unified models.
  • User data privacy needs to be preserved.
  • the provision of assistance information may need to consider feasibility of disclosing proprietary information to the other side.
  • study of the specification impact may support multiple AI/ML models for the same functionality, at least including the following aspects: procedure and assistance signaling for the AI/ML model switching and/or selection.
  • In some embodiments, the study can be achieved with the following mechanisms for LCM procedures.
  • indication of activation/deactivation/switching/fallback is based on individual AI/ML functionality.
  • the terminal device 120 may have one AI/ML model for the functionality, or the terminal device 120 may have multiple AI/ML models for the functionality. It is needed to determine whether or how to indicate the AI/ML functionality.
  • indication of model selection/activation/deactivation/switching/fallback is based on individual model IDs.
  • a process or method of identifying an AI/ML model can be understood between the network device 110 and the terminal device 120.
  • a process or method of identifying an AI/ML functionality can be understood between the network device 110 and the terminal device 120.
  • both single-side and dual-side AI/ML models can be studied.
  • single-side AI/ML models can be used for beam management and positioning scenarios and dual-side AI/ML models can be used for CSI compressing.
  • the AI/ML model can be deployed at the terminal device 120 or be deployed at the network device 110.
  • For a dual-side AI/ML model, a pair of AI/ML models is deployed at the UE side and the network side.
  • FIG. 2 illustrates an example AI/ML model deployment in which some example embodiments of the present disclosure may be implemented.
  • the AI/ML models on the terminal side or terminal device 120, such as AI/ML encoders 210, 230, 250, are used for CSI compression while the AI/ML models on the network side or network device 110, such as AI/ML decoders 220, 240, 260, are used for CSI de-compression.
  • the AI/ML inference is performed by the network device 110 and the network device 110 can manage the AI/ML models without or with little specification impact.
  • AI/ML model management is needed to ensure the terminal device 120 and the network device 110 have the common understanding of the occupation of all the AI/ML models.
  • one AI/ML model may correspond to a dedicated set of hardware resources and different AI/ML models correspond to separate hardware resources.
  • High performance terminal devices may deploy multiple AI/ML models for the same or for different purposes for highly efficient AI/ML operation.
  • when the hardware resource for an AI/ML model is used in a time instance, it cannot be used for another operation.
  • different AI/ML models can be used for different use cases for different scenarios.
  • an AI/ML model used for spatial beam prediction may not be used for temporal beam prediction, and an AI/ML model may not be applicable for both low-speed and high-speed scenarios.
  • Two AI/ML model management methods are proposed, named the AI/ML model based method and the AI/ML functionality based method.
  • FIG. 3 illustrates a process flow for communication with AI/ML model in accordance with some example embodiments of the present disclosure.
  • the terminal device 120 transmits (302) a number of processing units for an AI/ML model or an indication of a plurality of AI/ML models for an AI/ML functionality 305 to the network device 110.
  • the network device 110 can get basic information, such as the processing unit capability for AI/ML model, or structure of the AI/ML models for the AI/ML functionality from the terminal device 120.
  • the network device 110 transmits (308) configuration for operation 310 to the terminal device 120.
  • the operation can be providing the CSI report, performing the beam prediction, performing the CSI compression, performing the CSI prediction, or positioning. This way, the network device 110 can trigger the AI/ML operation in the terminal device 120, such as CSI report or beam prediction, and the operation can be flexible according to different use cases.
  • FIG. 4 illustrates AI/ML model based AI/ML management in accordance with some example embodiments of the present disclosure.
  • different AI/ML models may have different input requirements for the same or different purposes.
  • the necessary description of the AI/ML model inputs/outputs, usage or the applicable scenario (s) is reported in the capability report or the registration information, at least when the AI/ML models are trained by the terminal device 120.
  • the terminal device 120 may have multiple AI/ML processing units to operate the deployed AI/ML model as illustrated in Figure 4.
  • the terminal device 120 can further report the number of processing units for AI/ML inference for each identified AI/ML model, each of the processing units can be independently used for AI/ML inference, and all the processing units can be simultaneously used for AI/ML inference.
  • AI/ML model 410 needs AI/ML processing units 415, 420, 425, and the terminal device 120 can report one or more of the following to the network device 110 for model 410: model input format, model output format, number of processing units for AI/ML model 410, usage, or application scenario.
  • AI/ML model 430 needs AI/ML processing units 435, 440, 445, and the terminal device 120 can report one or more of the following to the network device 110 for model 430: model input format, model output format, number of processing units for AI/ML model 430, usage, or application scenario. This way, the terminal device can report detailed capability and structure information of the AI/ML model or AI/ML functionality, making the processing unit resource management more efficient.
  • When the terminal device 120 is configured with a CSI measurement and/or CSI report associated with an AI/ML model, the terminal device 120 can use any of the processing units for AI/ML inference to generate the corresponding CSI. This way, the processing units can be used more efficiently. For more efficient management of the AI/ML models, the terminal device 120 and the network device 110 can have the same understanding of the occupation of all processing units for AI/ML inference.
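A minimal sketch of such shared processing-unit bookkeeping, assuming both sides run the same accounting so their views of occupation stay aligned (the class and method names are hypothetical, not from the disclosure):

```python
class ProcessingUnitPool:
    """Toy model of the AI/ML processing units reported for a model:
    any free unit can serve any inference, and all units can run
    inferences simultaneously."""

    def __init__(self, n_units):
        self.n_units = n_units
        self.occupied = 0

    def try_occupy(self, needed=1):
        """Occupy `needed` units for an AI/ML inference if available.
        If no unit is free, the corresponding report cannot be computed."""
        if self.occupied + needed > self.n_units:
            return False
        self.occupied += needed
        return True

    def release(self, count=1):
        """Free units when the occupation window ends."""
        self.occupied = max(0, self.occupied - count)
```

A two-unit pool, for example, accepts two concurrent inferences, rejects a third, and accepts again after a release.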
  • FIG. 5 illustrates AI/ML functionality based AI/ML management in accordance with some example embodiments of the present disclosure.
  • different AI/ML models may correspond to different or the same AI/ML functionalities as illustrated in Figure 5.
  • the terminal device 120 may report one or more information items for each AI/ML functionality: usage, applicable scenario, the input/output format, and possibly the AI/ML models belonging to this functionality.
  • For example, in 500, there are AI/ML functionalities 510 and 530.
  • AI/ML models 515, 520, 525 belong to the AI/ML functionality 510, and AI/ML models 535, 540, 545 belong to the AI/ML functionality 530.
  • the terminal device 120 can report model input format, model output format, usage such as CSI compressing, applicable scenario, and AI/ML models 515, 520, 525 for AI/ML functionality 510.
  • the terminal device 120 can also report model input format, model output format, usage such as CSI compressing, applicable scenario, and AI/ML models 535, 540, 545 for AI/ML functionality 530. This way, the network device 110 can get detailed capability and structure information for the AI/ML functionality from the terminal device 120, and make the processing unit management more efficient. In some embodiments, all the AI/ML models belonging to the same functionality identification have the same usage, applicable scenario, and the same input/output format. This way, compatibility can be achieved among the AI/ML models in the AI/ML functionality.
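The per-functionality report items listed above could be grouped as in this sketch (the field names are illustrative, not signalling field names):

```python
from dataclasses import dataclass, field

@dataclass
class FunctionalityReport:
    """Items the terminal device may report per AI/ML functionality.
    All models under one functionality share the same usage, scenario
    and input/output format, so any of them can serve the inference."""
    functionality_id: int
    usage: str                 # e.g. "CSI compression"
    applicable_scenario: str
    input_format: str
    output_format: str
    model_ids: list = field(default_factory=list)

# e.g. functionality 510 with its models 515, 520, 525
report = FunctionalityReport(510, "CSI compression", "low speed",
                             "raw channel", "compressed CSI",
                             [515, 520, 525])
```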
  • a CSI report configuration may be associated with an AI/ML functionality ID to tell the terminal device 120 to generate the CSI with AI/ML operation, and any of the AI/ML models belonging to the same AI/ML functionality can be used for the CSI inference.
  • the network device 110 can directly associate an AI/ML model ID with a CSI report for the same purpose. This way, the network device 110 can control all the terminal devices in the cell and thus can make a cell-level optimization in the network device 110. In this case, however, an AI/ML model ID can only be associated with one CSI report configuration.
  • the terminal device 120 and the network device 110 can have the same understanding of the occupation of all the AI/ML models deployed by the terminal device 120.
  • the terminal device 120 and the network device 110 may have the same understanding on the occupation of the AI/ML models and the processing units.
  • the UE behavior when there is no available AI/ML model or processing unit in the terminal device 120 can be specified.
  • the number of AI/ML models or processing units occupied for a CSI report calculation can be analyzed as follows.
  • the number of AI/ML models or processing units occupied for a CSI report can be different for different CSI types, either for beam management or for PMI/CQI reporting.
  • the occupied number of processing units can be as follows.
  • a CSI report with CSI-ReportConfig with higher layer parameter reportQuantity set to 'cri-RSRP' , 'ssb-Index-RSRP' , 'cri-SINR' , 'ssb-Index-SINR' , 'cri-RSRP-Index' , 'ssb-Index-RSRP-Index' , 'cri-SINR-Index' , 'ssb-Index-SINR-Index' corresponds to a CSI report for beam report.
  • the beam management calculation is relatively simple, so the processing unit or AI/ML model occupation can be reduced.
  • O_APU = K_s AI/ML models or processing units are occupied for a CSI report for PMI/CQI reporting, where K_s is the number of CSI-RS resources in the CSI-RS resource set for channel measurement.
  • a CSI report with CSI-ReportConfig with higher layer parameter reportQuantity set to 'cri-RI-PMI-CQI' , 'cri-RI-i1' , 'cri-RI-i1-CQI' , 'cri-RI-CQI' , or 'cri-RI-LI-PMI-CQI' corresponds to a CSI report for PMI/CQI reporting.
  • PMI/CQI report calculation is relatively complex, and accurate accounting of it can improve the efficiency of processing unit or AI/ML model management.
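The occupation counts discussed above can be sketched as follows, assuming, by analogy with the O_CPU = 1 rule quoted earlier, one unit for a beam report and K_s units for a PMI/CQI report (the function is illustrative, not a specification rule; the quantity strings follow the reportQuantity values quoted in the text):

```python
def occupied_units(report_quantity, k_s):
    """Number of AI/ML models or processing units occupied by one CSI
    report: 1 for a beam report, K_s (one per CSI-RS resource in the
    channel-measurement resource set) for a PMI/CQI report."""
    beam_quantities = {
        'cri-RSRP', 'ssb-Index-RSRP', 'cri-SINR', 'ssb-Index-SINR',
        'cri-RSRP-Index', 'ssb-Index-RSRP-Index',
        'cri-SINR-Index', 'ssb-Index-SINR-Index',
    }
    pmi_cqi_quantities = {
        'cri-RI-PMI-CQI', 'cri-RI-i1', 'cri-RI-i1-CQI',
        'cri-RI-CQI', 'cri-RI-LI-PMI-CQI',
    }
    if report_quantity in beam_quantities:
        return 1
    if report_quantity in pmi_cqi_quantities:
        return k_s
    raise ValueError(f"unhandled reportQuantity: {report_quantity}")
```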
  • the AI/ML model or processing unit occupation criterion can be defined for different CSI report types.
  • FIG. 6 illustrates an example AI/ML model or processing unit occupation time in accordance with some example embodiments of the present disclosure.
  • one or more processing units or AI/ML models are occupied from the first symbol of the earliest one of: each CSI-RS/CSI-IM/SSB resource for channel measurement or interference measurement, respectively, taking the latest CSI-RS/CSI-IM/SSB occasion no later than the corresponding CSI reference resource (which corresponds to a DL slot for receiving the reference signal for the CSI report) , until the last symbol of the configured PUSCH/PUCCH carrying the CSI report 630.
  • in FIG. 6, the CSI-RS/CSI-IM/SSB resource can be the CSI-RS 610 corresponding to the n-th CSI report, the downlink slot 620 serves as the CSI reference resource, and the PUCCH 630 carrying the beam report can be the n-th CSI report.
  • the duration in which AI/ML model or processing unit is occupied for CSI calculation can be 640.
  • the embodiment in FIG. 6 provides an illustration of one processing unit occupation, where the ‘PUSCH/PUCCH carrying CSI report’ is used.
  • the processing unit or AI/ML model can be allocated when the CSI-RS/CSI-IM/SSB resource arrives, avoiding allocation before the CSI-RS/CSI-IM/SSB resource, which improves the usage efficiency of the processing unit and the AI/ML model.
  • one or more processing units are occupied from the first symbol of the earliest one of: each CSI-RS/CSI-IM/SSB resource for channel measurement or interference measurement, respectively, taking the latest CSI-RS/CSI-IM/SSB occasion no later than the corresponding CSI reference resource (which corresponds to a DL slot for receiving the reference signal for the CSI report) , until the last symbol of the configured PUCCH carrying the CSI report, the same as in FIG. 6.
  • the report can be any CSI report, such as PMI or RI, or CSI prediction, or positioning, etc. This can make the report more flexible.
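The occupation window of FIG. 6 can be sketched over absolute OFDM-symbol indices. This is a hypothetical illustration (symbol indices, names, and the dict-based resource representation are assumptions): for each measurement resource the latest occasion no later than the CSI reference resource is taken, occupation starts at the earliest of those, and ends at the last symbol of the PUCCH/PUSCH carrying the report.

```python
# Hypothetical sketch of the FIG. 6 occupation window for a periodic or
# semi-persistent CSI report; indices are illustrative assumptions.

def occupation_window(resource_occasions, reference_symbol, report_last_symbol):
    """resource_occasions maps each CSI-RS/CSI-IM/SSB resource to the
    first-symbol indices of its occasions; returns (start, end)."""
    starts = []
    for occasions in resource_occasions.values():
        eligible = [s for s in occasions if s <= reference_symbol]
        if eligible:
            starts.append(max(eligible))  # latest occasion per resource
    if not starts:
        raise ValueError("no occasion before the CSI reference resource")
    return min(starts), report_last_symbol  # earliest across resources
```

For instance, with CSI-RS occasions at symbols 10, 50, 90 and CSI-IM occasions at 12, 52, a reference resource at symbol 60, and a report ending at symbol 120, the occupied window starts at symbol 50.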
  • FIG. 7A illustrates another example AI/ML model or processing unit occupation time in accordance with some example embodiments of the present disclosure.
  • for an aperiodic CSI report associated with an AI/ML operation, one or more processing units are occupied from the first symbol after the PDCCH 710 triggering the CSI report, until the last symbol of the scheduled PUSCH 730 carrying the report.
  • the CSI-RS/CSI-IM/SSB resource, such as the CSI-RS 720, is inside the duration 740 in which the AI/ML model or processing unit is occupied. This way, the processing unit or the AI/ML model can be allocated as soon as possible, to meet the high priority of the aperiodic CSI report.
  • FIG. 7B illustrates a further example AI/ML model or processing unit occupation time in accordance with some example embodiments of the present disclosure.
  • the processing unit or the AI/ML model can be allocated as soon as possible, to meet the high priority and randomness of the initial SP CSI report.
  • the report can be any CSI report, such as PMI or RI, or CSI prediction, or positioning, etc. This can make the report more flexible.
  • the processing units comprise a first type of processing units for AI/ML inference of the AI/ML model, and a second type of processing units of the terminal device occupied for providing input to the AI/ML model.
  • the generation of input to the AI/ML model and AI/ML inference can be performed more efficiently.
  • if the terminal device 120 reports that the number of the first type processing units occupied for an AI/ML model i is N_APU,i, and L first type processing units are occupied for CSI inference in a given symbol such as an OFDM symbol, the terminal device 120 has N_APU,i - L unoccupied first type processing units for AI/ML model i.
  • when a CSI report is associated with an AI/ML model and there are available first type processing units for this CSI report, but the terminal device 120 cannot provide the required AI/ML model input format, the terminal device 120 does not update the corresponding CSI and no first type processing unit is occupied for the triggered CSI report.
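The bookkeeping above can be sketched as follows; this is a hypothetical illustration (the function names are assumptions) of counting unoccupied first type units and of the rule that no unit is occupied when the model input cannot be provided.

```python
# Hypothetical sketch: first type processing unit bookkeeping per AI/ML
# model. n_apu_i is the reported N_APU,i for model i; l_occupied is the
# number of first type units occupied for inference in the current symbol.

def unoccupied_first_type_units(n_apu_i: int, l_occupied: int) -> int:
    return n_apu_i - l_occupied

def may_run_inference(n_apu_i: int, l_occupied: int, input_available: bool) -> bool:
    """No CSI update, and no first type unit occupied, when the terminal
    device cannot provide the required AI/ML model input format."""
    return unoccupied_first_type_units(n_apu_i, l_occupied) > 0 and input_available
```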
  • FIG. 8 illustrates an example report with both first type processing unit and second type processing unit in accordance with some example embodiments of the present disclosure.
  • the CSI report is triggered for spatial beam prediction.
  • the terminal device 120 is indicated to predict the best beam from the prediction beam set based on the measurement results of the measurement beam set.
  • the DCI 810 triggers an aperiodic CSI report.
  • the terminal device 120 receives CSI-RS resource 820 for the triggered CSI report.
  • the terminal device 120 first obtains the L1-RSRPs of all the received CSI-RS resources in 830, by using a second type processing unit to obtain the AI/ML model input.
  • Each CSI-RS resource represents a beam.
  • with the inputs provided by the second type processing unit, the terminal device 120 further obtains the required CSI, i.e., the best K beams in the prediction beam set, with the first type processing unit via AI/ML inference in 840; the best K beam IDs in the prediction beam set and the corresponding L1-RSRPs are calculated.
  • the beam report is carried in PUSCH in 850, and transmitted to the network device 110.
  • the terminal device 120 can report the required CSI with AI/ML inference.
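The two-stage pipeline of FIG. 8 can be sketched as follows. This is a hypothetical illustration: `predictor` is a stand-in callable, not the disclosure's AI/ML model, and the data shapes are assumptions. A second type unit builds the model input (one L1-RSRP per measured beam), then a first type unit runs inference to pick the best K beams of the prediction set.

```python
# Hypothetical sketch of the FIG. 8 pipeline (input preparation in 830,
# AI/ML inference in 840).

def second_type_stage(measured_rsrp_dbm):
    """Model-input preparation: one L1-RSRP per received CSI-RS resource
    (each CSI-RS resource represents a beam)."""
    return list(measured_rsrp_dbm)

def first_type_stage(predictor, model_input, k):
    """AI/ML inference: best K beam IDs of the prediction set together
    with their predicted L1-RSRPs."""
    predicted = predictor(model_input)  # beam_id -> predicted L1-RSRP (dBm)
    return sorted(predicted.items(), key=lambda kv: kv[1], reverse=True)[:k]

# Example with a dummy predictor over a 3-beam prediction set:
dummy = lambda x: {0: -80.0, 1: -70.0, 2: -90.0}
best = first_type_stage(dummy, second_type_stage([-75.0, -82.0]), k=2)
# best == [(1, -70.0), (0, -80.0)]
```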
  • the CSI report can occupy one or more second type processing units. If there is no available second type processing unit for this CSI report, the terminal device 120 cannot obtain the required AI/ML input for AI/ML inference even if there is an available first type processing unit. The terminal device 120 does not update the corresponding CSI, and thus the CSI report calculation may not occupy any first type processing unit either. This way, the resources of the first type processing units can be saved when no second type processing unit is available, improving the occupation efficiency of the first type processing units and avoiding waste.
  • the terminal device 120 can report a set of CSI based on the output of the second type processing units, or a CSI without AI/ML inference.
  • the corresponding CSI report is updated with a non-AI/ML CSI, and the CSI report occupies one or more second type processing units.
  • the terminal device 120 does not update the corresponding CSI.
  • the corresponding CSI report is not updated, and the CSI report calculation does not occupy the second type processing unit. This way, flexibility can be achieved when the first type processing unit is not available.
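The fallback behavior described above can be sketched as a small decision function; this is a hypothetical illustration (the return labels are assumptions). Without a second type unit the model input cannot be built, so no unit of either type is occupied and the CSI is not updated; without a first type unit the report can fall back to a non-AI/ML CSI using second type units only.

```python
# Hypothetical sketch: UE behavior when first/second type units are
# (un)available for a triggered CSI report.

def csi_update_decision(first_type_available: bool, second_type_available: bool) -> str:
    if not second_type_available:
        return "no_update"        # saves first type units as well
    if not first_type_available:
        return "non_ai_ml_csi"    # occupies second type units only
    return "ai_ml_csi"            # full pipeline: both unit types occupied
```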
  • FIG. 9 illustrates an example report with first type processing unit in accordance with some example embodiments of the present disclosure.
  • an example of a CSI report with only AI/ML inference is illustrated in FIG. 9.
  • the same CSI report is triggered by the DCI 910, and associated with an AI/ML model.
  • the terminal device 120 calculates the CSI report 960 for PUSCH to carry, only with one or more first type processing units, without any second type processing unit.
  • the preparation of the AI/ML model input, i.e., obtaining the L1-RSRP of each CSI-RS in 940, is part of the AI/ML model and occupies first type processing units, named AI/ML pre-processing units.
  • the implementation of 940 may differ from 830 in FIG. 8, to match the structural difference between the first type processing unit and the second type processing unit.
  • the first type processing unit obtains the CSI for report via AI/ML inference.
  • this can be referred to as a main-processing unit.
  • the AI/ML model can directly receive the transmitted CSI-RS or CSI/IM or SSB resource for AI/ML inference.
  • the CSI report does not occupy any second type processing unit and only occupies one or more first type processing units depending on the CSI content. As a result, if there is no available first type processing unit for the triggered CSI report, the terminal device 120 does not update the corresponding CSI. This way, the CSI report calculation can be performed without the resource of the second type processing unit, improving flexibility.
  • FIG. 10 illustrates an example of a method implemented at a terminal device 120 in accordance with some example embodiments of the present disclosure.
  • the processor in the terminal device 120 transmits to a network device 110 via the transceiver, a number of processing units for an AI/ML model or an indication of a plurality of AI/ML models for an AI/ML functionality.
  • the processor in the terminal device 120 receives from the network device via the transceiver, a configuration for an operation of the terminal device associated with the AI/ML model or the AI/ML functionality, wherein the configuration is determined based on the number of the processing units or the plurality of AI/ML models.
  • the processing units are defined for the AI/ML model.
  • the plurality of AI/ML models for the AI/ML functionality have a same AI/ML input format, a same AI/ML output format, same usage, or a same applicable scenario.
  • the processing units for the AI/ML model are simultaneously usable for AI/ML inference of the AI/ML model.
  • the operation comprises one of the following: providing a channel state information (CSI) report, performing a beam prediction, performing a CSI compression, performing a CSI prediction, or positioning.
  • one processing unit for the AI/ML model or one AI/ML model for the AI/ML functionality is occupied, in the case that the CSI report is configured for a beam report.
  • an occupied number of the processing units for the AI/ML model or an occupied number of AI/ML models for the AI/ML functionality is the number of channel state information reference signal (CSI-RS) resources configured in a CSI resource set for a channel measurement of the CSI report, in the case that the CSI report is configured to report at least one of a precoding matrix indicator (PMI) or a channel quality indicator (CQI) .
  • At least one of the processing units is occupied from a first symbol of an earliest resource for a channel measurement or an interference measurement to a last symbol of a physical uplink control channel (PUCCH) or physical uplink shared channel (PUSCH) carrying the CSI report, in the case that the CSI report is one of the following: a periodic CSI report, a semi-persistent (SP) CSI report carried by the PUCCH, or a SP CSI report other than an initial SP CSI report on a physical uplink shared channel (PUSCH) .
  • At least one of the processing units is occupied from a first symbol after a PDCCH triggering the CSI report to a last symbol of a PUSCH carrying the CSI report, in the case that the CSI report is one of the following: an aperiodic CSI report, or an initial SP CSI report triggered by downlink control information (DCI) .
  • in the case that a number N of requested CSI reports is greater than a number M of unoccupied processing units, the processor in the terminal device 120 can skip updating the N-M requested CSI reports with the lowest priority. Alternatively, the processor in the terminal device 120 can report the N-M requested CSI reports with the lowest priority without AI/ML inference of the AI/ML model when there are available CSI processing units for all the N-M requested CSI reports.
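The N-versus-M selection above can be sketched as follows; this is a hypothetical illustration (the tuple representation and lower-value-means-higher-priority convention are assumptions). The M highest-priority reports are updated; the N-M lowest-priority ones are skipped or, alternatively, reported without AI/ML inference when CSI processing units are available for them.

```python
# Hypothetical sketch: splitting N requested CSI reports across M
# unoccupied processing units by priority (lower value = higher priority).

def split_by_priority(requested, m_unoccupied):
    """requested: list of (report_id, priority). Returns (updated, skipped),
    where `skipped` holds the N-M lowest-priority reports."""
    ranked = sorted(requested, key=lambda rp: rp[1])
    return ranked[:m_unoccupied], ranked[m_unoccupied:]
```

For example, with three requested reports and two unoccupied units, the single lowest-priority report is skipped.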
  • the processing units comprise a first type of processing units for AI/ML inference of the AI/ML model, and a second type of processing units of the terminal device occupied for providing input to the AI/ML model.
  • the processor in the terminal device 120 performs the operation without the AI/ML inference or skips performing the operation, in the case that a processing unit with the second type is available and a processing unit with the first type is unavailable.
  • a processing unit among the processing units comprises: a main-processing unit for AI/ML inference of the AI/ML model; and a pre-processing unit for providing input to the AI/ML model.
  • the processor in the terminal device 120 skips performing the operation, in the case that a processing unit is unavailable for the operation. In some embodiments, the processor in the terminal device 120 transmits to the network device via the transceiver, a model input format, a model output format, usage, or an applicable scenario of the AI/ML model or the AI/ML functionality.
  • FIG. 11 illustrates an example of a method implemented at a network device 110 in accordance with some example embodiments of the present disclosure.
  • the processor in the network device 110 receives from a terminal device via the transceiver, a number of processing units for an artificial intelligence (AI) /machine learning (ML) model or an indication of a plurality of AI/ML models for an AI/ML functionality.
  • the processor in the network device 110 determines, based on the number of processing units or the plurality of AI/ML models, a configuration for an operation of the terminal device associated with the AI/ML model or the AI/ML functionality.
  • the processor in the network device 110 transmits the configuration to the terminal device via the transceiver.
  • the processing units are defined for the AI/ML model.
  • the plurality of AI/ML models have a same AI/ML input format, a same AI/ML output format, same usage, or a same applicable scenario.
  • the processing units for the AI/ML model are simultaneously usable for AI/ML inference of the AI/ML model.
  • the operation comprises one of the following: providing a channel state information (CSI) report, performing a beam prediction, performing a CSI compression, performing a CSI prediction, or positioning.
  • one processing unit for the AI/ML model or one AI/ML model for the AI/ML functionality is occupied, in the case that the CSI report is configured for a beam report.
  • an occupied number of the processing units for the AI/ML model or an occupied number of AI/ML models for the AI/ML functionality is the number of channel state information reference signal (CSI-RS) resources configured in a CSI resource set for a channel measurement of the CSI report, in the case that the CSI report is configured to report at least one of a precoding matrix indicator (PMI) or a channel quality indicator (CQI) .
  • At least one of the processing units is occupied from a first symbol of an earliest resource for a channel measurement or an interference measurement to a last symbol of a physical uplink control channel (PUCCH) or physical uplink shared channel (PUSCH) carrying the CSI report, in the case that the CSI report is one of the following: a periodic CSI report, a semi-persistent (SP) CSI report carried by the PUCCH, or a SP CSI report other than an initial SP CSI report on a physical uplink shared channel (PUSCH) .
  • At least one of the processing units is occupied from a first symbol after a PDCCH triggering the CSI report to a last symbol of a PUSCH carrying the CSI report, in the case that the CSI report is one of the following: an aperiodic CSI report, or an initial SP CSI report triggered by downlink control information (DCI) .
  • the processor in the network device 110 receives from the terminal device via the transceiver, a model input format, a model output format, usage, or an applicable scenario of the AI/ML model or the AI/ML functionality.
  • FIG. 12 illustrates a simplified block diagram of a device 1200 that is suitable for implementing embodiments of the present disclosure.
  • the device 1200 can be considered as a further example implementation of the terminal device 120, and the network device 110 as shown in FIG. 1 and FIG. 2. Accordingly, the device 1200 can be implemented at or as at least a part of the terminal device 120, or the network device 110.
  • the device 1200 includes a processor 1210, a memory 1220 coupled to the processor 1210, a suitable transmitter (TX) and receiver (RX) 1240 coupled to the processor 1210, and a communication interface coupled to the TX/RX 1240.
  • the memory 1220 stores at least a part of a program 1230.
  • the TX/RX 1240 is for bidirectional communications.
  • the TX/RX 1240 has at least one antenna to facilitate communication, though in practice an Access Node mentioned in this disclosure may have several antennas.
  • the communication interface may represent any interface that is necessary for communication with other network elements, such as X2 interface for bidirectional communications between eNBs, S1 interface for communication between a Mobility Management Entity (MME) /Serving Gateway (S-GW) and the eNB, Un interface for communication between the eNB and a relay node (RN) , or Uu interface for communication between the eNB and a terminal device.
  • the program 1230 is assumed to include program instructions that, when executed by the associated processor 1210, enable the device 1200 to operate in accordance with the embodiments of the present disclosure, as discussed herein with reference to FIGS. 1-11.
  • the embodiments herein may be implemented by computer software executable by the processor 1210 of the device 1200, or by hardware, or by a combination of software and hardware.
  • the processor 1210 may be configured to implement various embodiments of the present disclosure.
  • a combination of the processor 1210 and memory 1220 may form processing means 1250 adapted to implement various embodiments of the present disclosure.
  • the memory 1220 may be of any type suitable to the local technical network and may be implemented using any suitable data storage technology, such as a non-transitory computer readable storage medium, semiconductor-based memory devices, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory, as non-limiting examples. While only one memory 1220 is shown in the device 1200, there may be several physically distinct memory modules in the device 1200.
  • the processor 1210 may be of any type suitable to the local technical network, and may include one or more of general purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs) and processors based on multicore processor architecture, as non-limiting examples.
  • the device 1200 may have multiple processors, such as an application specific integrated circuit chip that is slaved in time to a clock which synchronizes the main processor.
  • embodiments of the present disclosure may provide the following solutions.
  • Clause 1 A terminal device comprising: a processor; and a transceiver coupled to the processor, wherein the processor is configured to: transmit, to a network device via the transceiver, a number of processing units for an artificial intelligence (AI) /machine learning (ML) model or an indication of a plurality of AI/ML models for an AI/ML functionality; and receive, from the network device via the transceiver, a configuration for an operation of the terminal device associated with the AI/ML model or the AI/ML functionality, wherein the configuration is determined based on the number of the processing units or the plurality of AI/ML models.
  • Clause 2 The terminal device of Clause 1, the processing units are defined for the AI/ML model; the plurality of AI/ML models for the AI/ML functionality have a same AI/ML input format, a same AI/ML output format, same usage, or a same applicable scenario; or the processing units for the AI/ML model are simultaneously usable for AI/ML inference of the AI/ML model.
  • Clause 3 The terminal device of Clause 1, the operation comprises one of the following: providing a channel state information (CSI) report, performing a beam prediction, performing a CSI compression, performing a CSI prediction, or positioning.
  • Clause 4 The terminal device of Clause 3, one processing unit for the AI/ML model or one AI/ML model for the AI/ML functionality is occupied, in the case that the CSI report is configured for a beam report.
  • Clause 5 The terminal device of Clause 3, an occupied number of the processing units for the AI/ML model or an occupied number of AI/ML models for the AI/ML functionality is the number of channel state information reference signal (CSI-RS) resources configured in a CSI resource set for a channel measurement of the CSI report, in the case that the CSI report is configured to report at least one of a precoding matrix indicator (PMI) or a channel quality indicator (CQI) .
  • Clause 6 The terminal device of Clause 3, at least one of the processing units is occupied from a first symbol of an earliest resource for a channel measurement or an interference measurement to a last symbol of a physical uplink control channel (PUCCH) or physical uplink shared channel (PUSCH) carrying the CSI report, in the case that the CSI report is one of the following: a periodic CSI report, a semi-persistent (SP) CSI report carried by the PUCCH, or a SP CSI report other than an initial SP CSI report on a physical uplink shared channel (PUSCH) .
  • Clause 7 The terminal device of Clause 3, at least one of the processing units is occupied from a first symbol after a PDCCH triggering the CSI report to a last symbol of a PUSCH carrying the CSI report, in the case that the CSI report is one of the following: an aperiodic CSI report, or an initial SP CSI report triggered by downlink control information (DCI) .
  • Clause 8 The terminal device of Clause 3, wherein the processor is further configured to: in the case that a number N of requested CSI reports is greater than a number M of unoccupied processing units, skip updating N-M requested CSI reports with a lowest priority; or report the N-M requested CSI reports with the lowest priority without AI/ML inference of the AI/ML model when there are available CSI processing units for all the N-M requested CSI reports.
  • Clause 9 The terminal device of Clause 3, wherein the processor is further configured to: skip updating the CSI report, in the case that the processing units are available for the CSI report but the terminal device is unable to provide input to the AI/ML model.
  • Clause 10 The terminal device of Clause 3, the processing units comprise: a first type of processing units for AI/ML inference of the AI/ML model; and a second type of processing units of the terminal device occupied for providing input to the AI/ML model.
  • Clause 11 The terminal device of Clause 10, wherein the processor is further configured to: perform the operation without the AI/ML inference or skip performing the operation, in the case that a processing unit with the second type is available and a processing unit with the first type is unavailable.
  • Clause 12 The terminal device of Clause 10, wherein the processor is further configured to: skip performing the operation, in the case that a processing unit with the second type is unavailable.
  • Clause 13 The terminal device of Clause 3, a processing unit among the processing units comprises: a main-processing unit for AI/ML inference of the AI/ML model; and a pre-processing unit for providing input to the AI/ML model.
  • Clause 14 The terminal device of Clause 13, wherein the processor is further configured to: skip performing the operation, in the case that a processing unit is unavailable for the operation.
  • Clause 15 The terminal device of Clause 1, wherein the processor is further configured to: transmit, to the network device via the transceiver, a model input format, a model output format, usage, or an applicable scenario of the AI/ML model or the AI/ML functionality.
  • Clause 16 A network device comprising: a processor; and a transceiver coupled to the processor, wherein the processor is configured to: receive, from a terminal device via the transceiver, a number of processing units for an artificial intelligence (AI) /machine learning (ML) model or an indication of a plurality of AI/ML models for an AI/ML functionality; determine, based on the number of processing units or the plurality of AI/ML models, a configuration for an operation of the terminal device associated with the AI/ML model or the AI/ML functionality; and transmit the configuration to the terminal device via the transceiver.
  • Clause 17 The network device of Clause 16, the processing units are defined for the AI/ML model; the plurality of AI/ML models have a same AI/ML input format, a same AI/ML output format, same usage, or a same applicable scenario; or the processing units for the AI/ML model are simultaneously usable for AI/ML inference of the AI/ML model.
  • Clause 18 The network device of Clause 16, the operation comprises one of the following: providing a channel state information (CSI) report, performing a beam prediction, performing a CSI compression, performing a CSI prediction, or positioning.
  • Clause 19 The network device of Clause 18, one processing unit for the AI/ML model or one AI/ML model for the AI/ML functionality is occupied, in the case that the CSI report is configured for a beam report.
  • Clause 20 The network device of Clause 18, an occupied number of the processing units for the AI/ML model or an occupied number of AI/ML models for the AI/ML functionality is the number of channel state information reference signal (CSI-RS) resources configured in a CSI resource set for a channel measurement of the CSI report, in the case that the CSI report is configured to report at least one of a precoding matrix indicator (PMI) or a channel quality indicator (CQI) .
  • Clause 21 The network device of Clause 18, at least one of the processing units is occupied from a first symbol of an earliest resource for a channel measurement or an interference measurement to a last symbol of a physical uplink control channel (PUCCH) or physical uplink shared channel (PUSCH) carrying the CSI report, in the case that the CSI report is one of the following: a periodic CSI report, a semi-persistent (SP) CSI report carried by the PUCCH, or a SP CSI report other than an initial SP CSI report on a physical uplink shared channel (PUSCH) .
  • Clause 22 The network device of Clause 18, at least one of the processing units is occupied from a first symbol after a PDCCH triggering the CSI report to a last symbol of a PUSCH carrying the CSI report, in the case that the CSI report is one of the following: an aperiodic CSI report, or an initial SP CSI report triggered by downlink control information (DCI) .
  • Clause 23 The network device of Clause 16, wherein the processor is further configured to: receive, from the terminal device via the transceiver, a model input format, a model output format, usage, or an applicable scenario of the AI/ML model or the AI/ML functionality.
  • Clause 24 A method performed by a terminal device, comprising: transmitting, to a network device, a number of processing units for an artificial intelligence (AI) /machine learning (ML) model or an indication of a plurality of AI/ML models for an AI/ML functionality; and receiving, from the network device, a configuration for an operation of the terminal device associated with the AI/ML model or the AI/ML functionality, wherein the configuration is determined based on the number of the processing units or the plurality of AI/ML models.
  • Clause 25 A method performed by a network device, comprising: receiving, from a terminal device, a number of processing units for an artificial intelligence (AI) /machine learning (ML) model or an indication of a plurality of AI/ML models for an AI/ML functionality; determining, based on the number of processing units or the plurality of AI/ML models, a configuration for an operation of the terminal device associated with the AI/ML model or the AI/ML functionality; and transmitting the configuration to the terminal device.
  • Clause 26 A non-transitory computer readable medium having program instructions stored thereon that, when executed by an apparatus, cause the apparatus at least to: transmit, to a network device, a number of processing units for an artificial intelligence (AI) /machine learning (ML) model or an indication of a plurality of AI/ML models for an AI/ML functionality; and receive, from the network device, a configuration for an operation of the terminal device associated with the AI/ML model or the AI/ML functionality, wherein the configuration is determined based on the number of the processing units or the plurality of AI/ML models.
  • Clause 27 A non-transitory computer readable medium having program instructions stored thereon that, when executed by an apparatus, cause the apparatus at least to: receive, from a terminal device, a number of processing units for an artificial intelligence (AI) /machine learning (ML) model or an indication of a plurality of AI/ML models for an AI/ML functionality; determine, based on the number of processing units or the plurality of AI/ML models, a configuration for an operation of the terminal device associated with the AI/ML model or the AI/ML functionality; and transmit the configuration to the terminal device.
  • a signaling mechanism can be introduced which allows the terminal device to indicate to the network device the usage state of TOs, such that the network device can adjust subsequent resource allocation to avoid waste of unused TOs, thereby improving the performance of the communication.
  • the particular designs of the format of indication information in the present disclosure contribute to saving signaling overhead, for example, by adopting a limited number of bits to indicate the usage of TOs associated with one or more CG configurations.
  • various embodiments of the present disclosure may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. Some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device. While various aspects of embodiments of the present disclosure are illustrated and described as block diagrams, flowcharts, or using some other pictorial representation, it will be appreciated that the blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
  • the present disclosure also provides at least one computer program product tangibly stored on a non-transitory computer readable storage medium.
  • the computer program product includes computer-executable instructions, such as those included in program modules, being executed in a device on a target real or virtual processor, to carry out the process or method as described above.
  • program modules include routines, programs, libraries, objects, classes, components, data structures, or the like that perform particular tasks or implement particular abstract data types.
  • the functionality of the program modules may be combined or split between program modules as desired in various embodiments.
  • Machine-executable instructions for program modules may be executed within a local or distributed device. In a distributed device, program modules may be located in both local and remote storage media.
  • Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowcharts and/or block diagrams to be implemented.
  • the program code may execute entirely on a machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
  • the above program code may be embodied on a machine readable medium, which may be any tangible medium that may contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • the machine readable medium may be a machine readable signal medium or a machine readable storage medium.
  • a machine readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
  • More specific examples of the machine readable storage medium would include an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
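The bitmap-style indication described in the bullets above can be illustrated with a small sketch. This is a hypothetical encoding for illustration only, not the claimed indication format: each configured grant (CG) configuration contributes one flag per transmission occasion (TO), and the flags are packed into a single compact bit field that the terminal device could report to the network device.

```python
def pack_to_usage(usage_per_cg):
    """Pack per-CG TO usage flags (True = TO used) into one bitmap.

    usage_per_cg: list of lists of booleans, one inner list per CG
    configuration, one flag per transmission occasion (TO).
    Returns (bitmap, total_bits), with the first reported flag in the
    most significant position of the bitmap.
    """
    bits = [flag for cg in usage_per_cg for flag in cg]
    bitmap = 0
    for flag in bits:
        bitmap = (bitmap << 1) | int(flag)
    return bitmap, len(bits)


def unpack_to_usage(bitmap, sizes):
    """Inverse of pack_to_usage, given the per-CG TO counts."""
    total = sum(sizes)
    flags = [bool((bitmap >> (total - 1 - i)) & 1) for i in range(total)]
    out, idx = [], 0
    for n in sizes:
        out.append(flags[idx:idx + n])
        idx += n
    return out
```

With this packing, the usage state of five TOs across two CG configurations costs five bits rather than a per-TO message, which is the kind of signaling-overhead saving the bullets describe.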

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

Example embodiments relate to a terminal device, a network device, methods, and a computer readable storage medium for communication. In an example method, a terminal device comprises: a processor; and a transceiver coupled to the processor, the processor being configured to: transmit, to a network device via the transceiver, a number of processing units for an artificial intelligence (AI)/machine learning (ML) model or an indication of a plurality of AI/ML models for an AI/ML functionality; and receive, from the network device via the transceiver, a configuration for an operation of the terminal device associated with the AI/ML model or the AI/ML functionality, the configuration being determined based on the number of processing units or the plurality of AI/ML models. In this way, the network device can flexibly trigger the AI/ML operation in the terminal device according to different use cases.
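The exchange summarized in the abstract can be sketched as follows. The class and function names here are hypothetical illustrations of the described flow (the terminal reports a number of processing units and its supported AI/ML models; the network derives a configuration from that report), not an implementation of the claimed method.

```python
from dataclasses import dataclass


@dataclass
class CapabilityReport:
    """What the terminal device reports to the network device."""
    # Number of processing units the terminal can devote to AI/ML operation.
    num_processing_units: int
    # AI/ML models the terminal supports for a given functionality.
    supported_models: list


def derive_configuration(report: CapabilityReport, requested_models: list) -> dict:
    """Network-side sketch: keep only models the terminal supports and
    cap concurrent model operation by the reported processing units."""
    active = [m for m in requested_models if m in report.supported_models]
    active = active[: report.num_processing_units]
    return {"active_models": active,
            "max_concurrent": report.num_processing_units}
```

The point of the sketch is the direction of the dependency: the configuration the network returns is a function of what the terminal reported, which is what lets the network trigger AI/ML operation flexibly per use case.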
PCT/CN2023/078188 2023-02-24 2023-02-24 Devices, methods and computer readable storage medium for communication WO2024093057A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2023/078188 WO2024093057A1 (fr) Devices, methods and computer readable storage medium for communication

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2023/078188 WO2024093057A1 (fr) Devices, methods and computer readable storage medium for communication

Publications (1)

Publication Number Publication Date
WO2024093057A1 true WO2024093057A1 (fr) 2024-05-10

Family

ID=90929543

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/078188 WO2024093057A1 (fr) Devices, methods and computer readable storage medium for communication

Country Status (1)

Country Link
WO (1) WO2024093057A1 (fr)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220334881A1 (en) * 2020-01-14 2022-10-20 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Artificial intelligence operation processing method and apparatus, system, terminal, and network device
US20220342713A1 (en) * 2020-01-14 2022-10-27 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Information reporting method, apparatus and device, and storage medium
US20220360973A1 (en) * 2021-05-05 2022-11-10 Qualcomm Incorporated Ue capability for ai/ml
CN115349279A (zh) AI model determination method and apparatus, communication device, and storage medium
US20220400373A1 (en) * 2021-06-15 2022-12-15 Qualcomm Incorporated Machine learning model configuration in wireless networks
WO2023015428A1 (fr) ML model category grouping configuration

Similar Documents

Publication Publication Date Title
US11523298B2 (en) Methods and apparatuses for channel state information transmission
JP2021530123A (ja) 方法、ネットワーク装置、及び、端末
WO2022021426A1 (fr) Procédé, dispositif et support de stockage informatique de communication
JP2020533863A (ja) 信号送信方法、関連する装置及びシステム
WO2022238801A1 (fr) Appareil et procédé de gestion de faisceau
WO2024093057A1 (fr) Dispositifs, procédés et support de stockage lisible par ordinateur pour communication
WO2022217606A1 (fr) Procédés de communication, dispositif terminal, dispositif réseau et supports lisibles par ordinateur
CN113708902B (zh) 用于休眠的带宽部分的信道信息报告
EP4106251A1 (fr) Procédé et appareil de détermination du temps d'occupation de cpu pour un scénario de transmissions répétées de multiples pdcch, support de stockage et terminal
WO2022141647A1 (fr) Procédé, dispositif et support de stockage informatique de communication
WO2022116094A1 (fr) Procédé, dispositif et support lisible par ordinateur pour communication
CN115836489A (zh) 装置、方法和计算机程序
CN116636169A (zh) 参考信号资源的传输方法、设备及存储介质
WO2023206046A1 (fr) Mécanisme de réception de faisceau unique à l'intérieur d'une fenêtre de traitement de signaux prs
WO2023212870A1 (fr) Mécanisme d'économie d'énergie d'ue
WO2022226885A1 (fr) Procédés, dispositifs et supports de stockage informatique pour la communication
WO2024093136A1 (fr) Dispositifs et procédés d'indication d'état d'utilisation d'occasions de transmission pour autorisation configurée
WO2023019439A1 (fr) Procédés, dispositifs, et support lisible par ordinateur de communication
WO2024093139A1 (fr) Dispositifs, procédés et supports pour communications
WO2022193252A1 (fr) Procédés de communication, dispositif terminal, dispositif de réseau et support lisible par ordinateur
WO2023245581A1 (fr) Procédés, dispositifs et support de communication
US20240098543A1 (en) Devices, methods and apparatuses for beam reporting
US20240163918A1 (en) Methods for communication, terminal device, and computer readable media
WO2022052130A1 (fr) Procédé, dispositif et support lisible par ordinateur destinés à la communication
WO2023272723A1 (fr) Procédé, dispositif et support de stockage informatique de communication