WO2023133897A1 - User equipment, base station, and wireless communication methods based on machine learning/artificial intelligence - Google Patents

User equipment, base station, and wireless communication methods based on machine learning/artificial intelligence

Info

Publication number
WO2023133897A1
Authority
WO
WIPO (PCT)
Prior art keywords
machine learning
learning model
base station
new
wireless communication
Prior art date
Application number
PCT/CN2022/072400
Other languages
French (fr)
Inventor
Junrong GU
Jia SHENG
Original Assignee
Shenzhen Tcl New Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Shenzhen Tcl New Technology Co., Ltd. filed Critical Shenzhen Tcl New Technology Co., Ltd.
Priority to PCT/CN2022/072400 priority Critical patent/WO2023133897A1/en
Publication of WO2023133897A1 publication Critical patent/WO2023133897A1/en

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
                • G06N 20/00 Machine learning
                • G06N 3/00 Computing arrangements based on biological models
                    • G06N 3/02 Neural networks
                        • G06N 3/04 Architecture, e.g. interconnection topology
                            • G06N 3/045 Combinations of networks
                                • G06N 3/0455 Auto-encoder networks; Encoder-decoder networks
                        • G06N 3/08 Learning methods
                            • G06N 3/09 Supervised learning

Definitions

  • the present disclosure relates to the field of wireless communication systems, and more particularly, to a user equipment (UE) , a base station, and wireless communication methods based on machine learning/artificial intelligence, for example, one or more mechanisms of updating/changing/switching one or more machine learning models for new radio (NR) air interface.
  • Precoding and scheduling of new radio are both based on feedback information.
  • a large amount of feedback of a channel state information (CSI) for multi-user multiple-input multiple-output (MU-MIMO) is desired to improve the performance.
  • the codebook type II, introduced in Release 15, has higher precision compared with the codebook type I, but it brings a larger amount of data to feed back. In short, the CSI feedback in MU-MIMO is quite an overhead.
  • An object of the present disclosure is to propose a user equipment (UE) , a base station, and wireless communication methods based on machine learning/artificial intelligence, which can solve the issues in the prior art, provide an update of one or more machine learning models/auto-encoder models, provide one or more machine learning models/auto-encoder models switching during UE operations, reduce system overhead, provide a good communication performance, and/or provide high reliability.
  • a wireless communication method based on machine learning/artificial intelligence performed by a UE includes maintaining one or more machine learning models by one or more tables, lists, or groups based on machine learning and performing a machine learning model updating or a machine learning model switching, wherein the machine learning model updating or the machine learning model switching comprises a replacement of a first machine learning model by a second machine learning model, and the machine learning model updating is based on one or more machine learning model performance conditions or an indication from a first base station explicitly or implicitly, or from a UE request.
  • a wireless communication method based on machine learning/artificial intelligence performed by a first base station includes maintaining or managing one or more machine learning models by one or more tables, lists, or groups based on machine learning and controlling a user equipment (UE) to perform a machine learning model updating or a machine learning model switching, wherein the machine learning model updating or the machine learning model switching comprises a replacement of a first machine learning model by a second machine learning model, and the machine learning model updating is based on one or more machine learning model performance conditions or an indication from a first base station explicitly or implicitly, or from a UE request.
  • a user equipment comprises a memory, a transceiver, and a processor coupled to the memory and the transceiver.
  • the processor is configured to maintain one or more machine learning models by one or more tables, lists, or groups based on machine learning, and the processor is configured to perform a machine learning model updating or a machine learning model switching, wherein the machine learning model updating or the machine learning model switching comprises a replacement of a first machine learning model by a second machine learning model, and the machine learning model updating is based on one or more machine learning model performance conditions or an indication from a first base station explicitly or implicitly, or from a UE request.
  • a first base station comprises a memory, a transceiver, and a processor coupled to the memory and the transceiver.
  • the processor is configured to maintain or manage one or more machine learning models by one or more tables, lists, or groups based on machine learning, and the processor is configured to control a user equipment (UE) to perform a machine learning model updating or a machine learning model switching, wherein the machine learning model updating or the machine learning model switching comprises a replacement of a first machine learning model by a second machine learning model, and the machine learning model updating is based on one or more machine learning model performance conditions or an indication from a first base station explicitly or implicitly, or from a UE request.
  • a non-transitory machine-readable storage medium has stored thereon instructions that, when executed by a computer, cause the computer to perform the above method.
  • a chip includes a processor, configured to call and run a computer program stored in a memory, to cause a device in which the chip is installed to execute the above method.
  • a computer readable storage medium in which a computer program is stored, causes a computer to execute the above method.
  • a computer program product includes a computer program, and the computer program causes a computer to execute the above method.
  • a computer program causes a computer to execute the above method.
  • FIG. 1 is a schematic diagram illustrating an example of a channel compression based on an auto-encoder according to an embodiment of the present disclosure.
  • FIG. 2 is a block diagram of one or more user equipments (UEs) and a base station (e.g., gNB) of communication in a communication network system according to an embodiment of the present disclosure.
  • FIG. 3 is a flowchart illustrating a wireless communication method based on machine learning/artificial intelligence performed by a UE according to an embodiment of the present disclosure.
  • FIG. 4 is a flowchart illustrating a wireless communication method based on machine learning/artificial intelligence performed by a first base station according to an embodiment of the present disclosure.
  • FIG. 5 is a schematic diagram illustrating an example of a life cycle of a machine learning model of AI for NR air-interface according to an embodiment of the present disclosure.
  • FIG. 6 is a schematic diagram illustrating an example of auto-encoders in a table according to an embodiment of the present disclosure.
  • FIG. 7 is a schematic diagram illustrating an example of grouping of auto-encoders according to an embodiment of the present disclosure.
  • FIG. 8 is a schematic diagram illustrating an example of an indication of new auto-encoder and a switching time according to an embodiment of the present disclosure.
  • FIG. 9 is a schematic diagram illustrating an example of an auto-encoder switching from the current one to a general one, and further switching to a new one according to an embodiment of the present disclosure.
  • FIG. 10 is a block diagram of a system for wireless communication according to an embodiment of the present disclosure.
  • FIG. 1 illustrates that, in some embodiments, machine learning is introduced.
  • an autoencoder structure is applied.
  • the autoencoder structure comprises two parts: one at the UE for data compression, the other at the gNB for data decompression.
  • the data here is a general concept: it can be the estimated CSI-RS values, the channel estimates at the UE side, etc. The two parts play the roles of encoder and decoder, respectively.
  • Current works on the channel compression are about the compression of the channel itself and the compression of the CSI-RS values.
  • the channel of time and frequency domain is treated as an image with imaginary and real part. It is sent into an auto encoder at the UE side.
  • the output of the autoencoder is a lower dimension output of the compressed channel.
  • the compressed channel is sent back to the gNB.
  • the gNB will decode the compressed channel.
  • the estimated raw CSI values (the channel estimation outcome at the CSI REs) are sent into an auto encoder at the UE side.
  • the output of the autoencoder is a lower dimension output, a vector.
  • the compressed CSI values are sent back to the gNB.
  • the gNB will decode the compressed CSI values.
  • the auto-encoder at the UE side and the auto-encoder at the gNB side need to be properly managed. They have life cycles and specific application scenarios.
  • the second approach (compressing the raw CSI values) is more sensible, since it compresses less data and has the same baseline as the codebook-based method. This is in sharp contrast with the current general-purpose codebooks, i.e., the codebook type I and the codebook type II.
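The compress-at-UE / decompress-at-gNB split described above can be sketched with a toy linear stand-in for the auto-encoder pair. All dimensions, matrix choices, and variable names here are illustrative assumptions, not from the disclosure; a trained neural auto-encoder would replace both matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative only): 64 raw CSI values compressed to 8.
N_CSI, N_CODE = 64, 8

# A linear stand-in for the auto-encoder pair: an encoder matrix at the
# UE and a decoder (pseudo-inverse) at the gNB.
W_enc = rng.standard_normal((N_CODE, N_CSI)) / np.sqrt(N_CSI)
W_dec = np.linalg.pinv(W_enc)               # gNB-side decoder

csi = rng.standard_normal(N_CSI)            # estimated raw CSI values at the UE
code = W_enc @ csi                          # UE: compress to 8 feedback values
csi_hat = W_dec @ code                      # gNB: decompress the feedback
mse = float(np.mean((csi - csi_hat) ** 2))  # a candidate performance metric
```

The feedback overhead drops from 64 values to 8, at the cost of a nonzero reconstruction MSE, which is exactly the metric the later bullets use to judge whether a model should be dropped or replaced.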
  • FIG. 2 illustrates that, in some embodiments, one or more user equipments (UEs) 10 and a base station (e.g., gNB) 20 for communication in a communication network system 40 according to an embodiment of the present disclosure are provided.
  • the communication network system 40 includes the one or more UEs 10 and the base station 20 (such as a first base station or a second base station) .
  • the one or more UEs 10 may include a memory 12, a transceiver 13, and a processor 11 coupled to the memory 12 and the transceiver 13.
  • the base station 20 may include a memory 22, a transceiver 23, and a processor 21 coupled to the memory 22 and the transceiver 23.
  • the processor 11 or 21 may be configured to implement proposed functions, procedures and/or methods described in this description.
  • Layers of radio interface protocol may be implemented in the processor 11 or 21.
  • the memory 12 or 22 is operatively coupled with the processor 11 or 21 and stores a variety of information to operate the processor 11 or 21.
  • the transceiver 13 or 23 is operatively coupled with the processor 11 or 21, and the transceiver 13 or 23 transmits and/or receives a radio signal.
  • the processor 11 or 21 may include application-specific integrated circuit (ASIC) , other chipset, logic circuit and/or data processing device.
  • the memory 12 or 22 may include read-only memory (ROM) , random access memory (RAM) , flash memory, memory card, storage medium and/or other storage device.
  • the transceiver 13 or 23 may include baseband circuitry to process radio frequency signals.
  • modules e.g., procedures, functions, and so on
  • the modules can be stored in the memory 12 or 22 and executed by the processor 11 or 21.
  • the memory 12 or 22 can be implemented within the processor 11 or 21 or external to the processor 11 or 21 in which case those can be communicatively coupled to the processor 11 or 21 via various means as is known in the art.
  • the machine learning model updating is a common operation applicable to the use cases based on machine learning, such as beam management and positioning. It is not limited to the embodiment described with the auto-encoder here.
  • the machine learning model can be deployed at one node only, either the gNB or the UE.
  • the model is updated according to certain criteria, such as performance metrics: BER, spectrum efficiency, or MSE.
  • the processor 11 is configured to maintain one or more machine learning models by one or more tables, lists, or groups based on machine learning, and the processor 11 is configured to perform a machine learning model updating or a machine learning model switching, wherein the machine learning model updating or the machine learning model switching comprises a replacement of a first machine learning model by a second machine learning model, and the machine learning model updating is based on one or more machine learning model performance conditions or an indication from the first base station explicitly or implicitly, or from a UE request.
  • This can solve issues in the prior art, provide an update of one or more machine learning models/auto-encoder models, provide one or more machine learning models/auto-encoder models switching during UE operations, reduce system overhead, provide a good communication performance, and/or provide high reliability.
  • the processor 21 is configured to maintain or manage one or more machine learning models by one or more tables, lists, or groups based on machine learning, and the processor 21 is configured to control the user equipment (UE) 10 to perform a machine learning model updating or a machine learning model switching, wherein the machine learning model updating or the machine learning model switching comprises a replacement of a first machine learning model by a second machine learning model, and the machine learning model updating is based on one or more machine learning model performance conditions or an indication from the first base station explicitly or implicitly, or from a UE request.
  • FIG. 3 illustrates a wireless communication method 300 based on machine learning/artificial intelligence performed by a UE according to an embodiment of the present disclosure.
  • the method 300 includes: a block 302, maintaining one or more machine learning models by one or more tables, lists, or groups based on machine learning, and a block 304, performing a machine learning model updating or a machine learning model switching, wherein the machine learning model updating or the machine learning model switching comprises a replacement of a first machine learning model by a second machine learning model, and the machine learning model updating is based on one or more machine learning model performance conditions or an indication from a first base station explicitly or implicitly, or from a UE request.
  • FIG. 4 illustrates a wireless communication method 400 based on machine learning/artificial intelligence performed by a first base station according to an embodiment of the present disclosure.
  • the method 400 includes: a block 402, maintaining or managing one or more machine learning models by one or more tables, lists, or groups based on machine learning, and a block 404, controlling a user equipment (UE) to perform a machine learning model updating or a machine learning model switching, wherein the machine learning model updating or the machine learning model switching comprises a replacement of a first machine learning model by a second machine learning model, and the machine learning model updating is based on one or more machine learning model performance conditions or an indication from a first base station explicitly or implicitly, or from a UE request.
  • the machine learning models are maintained in groups.
  • the machine learning models are grouped based on one or any combination of the following factors: a channel model, channel parameters, a signal to noise ratio (SNR) range, a model complexity, a UE capability, a modulation order, a rank, a bandwidth part (BWP) size, a delay spread, a Doppler frequency shift, antenna ports, and/or an antenna geometry.
  • a grouping of the machine learning models is indicated to the UE by a downlink control information (DCI) , a medium access control (MAC) control element (CE) , a radio resource control (RRC) signaling, a bitmap, or a multiple-level bitmap.
  • the machine learning models in the table or the list are updated based on usage frequencies of the machine learning models, performances of the machine learning models, and/or a UE request.
  • the one or more machine learning model performance conditions comprise: if a machine learning model performance metric exceeds a first value once, the UE performs the machine learning model updating; if the metric exceeds the first value and a number of occurrences exceeds a maximum number of times configured by the first base station, the UE performs the machine learning model updating; or if the UE enters a new scenario and a new machine learning model is applied/functioning/to be deployed in the new scenario, the UE performs the machine learning model updating.
  • the indication from the first base station comprises a MAC-CE, an RRC signaling, or a DCI field used to indicate an identifier (ID) /a name of the second machine learning model, and/or one or more levels or bitmaps to retrieve one or more second machine learning models in a group.
  • the replacement of the first machine learning model by the second machine learning model comprises a replacement of a current machine learning model by a new machine learning model, a replacement of the current machine learning model by a backup machine learning model, falling back to a conventional processing method comprising a conventional codebook type I or a conventional codebook type II, or switching to a relatively general model obtained from machine learning.
  • the relatively general model (or general model) refers to a model with more generality than some scenario-specific machine learning models.
  • the wireless communication method based on machine learning by the UE further comprises providing feedback of an input of the machine learning model to the first base station, wherein the feedback comprises at least one of the following: the raw value of a channel state information reference signal (CSI-RS) is sent back to the first base station periodically, as configured by an RRC signaling; the raw value of the CSI-RS is sent back to the first base station semi-persistently, as configured by a MAC-CE or the RRC signaling and triggered by a DCI field; or the raw value of the CSI-RS is sent back to the first base station aperiodically, as triggered by the DCI field.
  • the machine learning model updating comprises the replacement of the first machine learning model by the second machine learning model in a same group
  • the machine learning model switching comprises the replacement of the first machine learning model by the second machine learning model in different groups.
  • the machine learning model switching comprises at least one of the following: new RRC entries at a CSI report configuration are defined for the machine learning models to indicate a selection of one machine learning model for the CSI report; if the new RRC entries are configured for both the new machine learning model and the current machine learning model as two report settings, the reporting is switched to the one configured for the new machine learning model by the first base station using a MAC-CE or a DCI field; if the CSI report configuration for the new machine learning model is not pre-configured, a new signaling, e.g., a new DCI field and/or a MAC-CE, is defined to activate the new machine learning model as the new codebook and report the CSI based on the new codebook; if the CSI report configuration for the new machine learning model is not pre-configured and the new signaling is not defined, the UE falls back to a conventional codebook comprising the codebook type I, the codebook type II, or a relatively general machine learning model, with a flag in an uplink control information (UCI)
  • the downloading process is initiated by the first base station.
  • the machine learning model of the UE is downloaded via a data channel from the first base station, or the machine learning model is downloaded from a third node.
  • when the UE is handed over from the first base station to a second base station, if the UE considers that the environment around the UE has changed, the UE requests the second base station to change the machine learning model, and the request is carried in the UCI.
  • the request is not limited to handover; it is also suitable for more situations.
  • the request applies to all the embodiments mentioned in the present application.
  • the UE may request another machine learning model with lower complexity, or request to fallback to conventional processing schemes (e.g., using codebook type I) , when its power becomes low, or it is a UE with reduced capability.
  • when the gNB receives the request, it will initiate machine learning model switching procedures. The machine learning model switching can follow the examples in this application.
  • when the UE is handed over from the first base station to a second base station, the first base station or the second base station may consider that the environment around the UE has changed, for example, the first base station is indoor and the second base station is outdoor. Besides, the first base station and the second base station are able to judge that the data collected by the two base stations do not contribute to a same training data set, and the second base station informs the UE to pre-configure the new machine learning model by a DCI, a MAC-CE, or an RRC signaling.
  • the second base station informs the first base station to inform the UE to pre-arrange the new machine learning model by a DCI, a MAC-CE, or an RRC signaling.
  • the machine learning model switching comprises a time window beginning from an indication of switching to a new codebook/machine learning model and ending at time when the new codebook/machine learning model begins to function, and the time window is a switching time.
  • a size of the time window is related to the UE capability.
  • the switching time is configured by the first base station to the UE, or a portion of the switching time is configured by the first base station to the UE. In some embodiments, the switching time is reported by the UE as a UE capability, or a portion of the switching time is reported by the UE as the UE capability.
  • the switching time comprises a DCI/MAC-CE processing time plus a data processing time of an input of the current machine learning model, or the switching time is a value reported by the UE or configured by the first base station.
  • an old/legacy machine learning model continues to take effect until the switching time ends, or the old/legacy machine learning model falls back to a conventional codebook type I or a conventional codebook type II, and the conventional codebook type I or the conventional codebook type II continues to take effect until the switching time ends.
  • a timeline of the switching time follows the current 3GPP definitions of the PDSCH processing time (N1) and the PUSCH preparation time (N2) .
  • the switching time is the time lag of switching from the current machine learning model to a general machine learning model, or the time lag of switching from the general machine learning model to another machine learning model.
  • the machine learning model in the previous scenario takes effect until the machine learning model in the new scenario functions.
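The switching-time behavior described above (the old model, or the fallback codebook, keeps taking effect until the window ends) can be sketched as follows. The slot granularity, function names, and the split of the switching time into DCI processing plus data processing are illustrative assumptions:

```python
def active_model(slot, indication_slot, switching_time, old_model, new_model):
    """Return the model in effect at `slot`.

    The old/legacy model keeps taking effect during the switching-time
    window; the new model begins to function once the window ends.
    """
    if slot < indication_slot + switching_time:
        return old_model
    return new_model

# Example: switching time = DCI/MAC-CE processing time (2 slots) plus the
# data processing time of the current model's input (3 slots).
switching_time = 2 + 3
assert active_model(14, 10, switching_time, "AE-old", "AE-new") == "AE-old"
assert active_model(15, 10, switching_time, "AE-old", "AE-new") == "AE-new"
```

The same function covers the fallback case by passing the conventional codebook as `old_model`.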
  • Embodiment 1 Life cycle of an auto-encoder model (new codebook)
  • FIG. 5 is a schematic diagram illustrating an example of a life cycle of a machine learning model of ML/AI for NR air-interface according to an embodiment of the present disclosure.
  • FIG. 6 is a schematic diagram illustrating an example of auto-encoders in a table according to an embodiment of the present disclosure.
  • Some blocks of FIG. 5 can be adapted from a functional framework for RAN Intelligence of TR 37.817.
  • the model drop block of FIG. 5 is newly added.
  • the model drop can be based on a long term assessment of machine learning model performances.
  • the performance assessment can be based on metrics such as a bit error rate (BER) , a mean-square error (MSE) , or a newly defined measure. However, the metrics are not limited to these.
  • the machine learning model is trained at the gNB, since the gNB has more computing power and is static, which benefits data collection and labeling.
  • if a machine learning model is to be deployed at the UE, it will possibly experience the following procedures: train, test, deploy (download to the UE) , retrain and test, redeploy, retrain/drop, and so on.
  • the working procedures in FIG. 5 are not limited to the auto-encoder model here; they are applicable to the machine learning/artificial intelligence use cases for NR air interfaces, such as positioning and beam management.
  • Machine learning models can be organized in a table: As an example, FIG. 6 illustrates that, in some embodiments, the UE keeps one or more tables, lists, or groups for the machine learning models. In an example, the UE keeps a table of auto-encoders, where the ID is a number and the rank is the preference of the auto-encoder.
  • FIG. 7 is a schematic diagram illustrating an example of grouping of auto-encoders according to an embodiment of the present disclosure.
  • FIG. 7 illustrates that, in some embodiments, machine learning models are in groups.
  • the UE keeps one or more tables, lists, or groups for the machine learning models, where the machine learning models/auto-encoders are maintained in groups.
  • Auto-encoder 1 and auto-encoder 2 are in group 1.
  • Auto-encoder 1 has rank 1, which means auto-encoder 1 is preferred over auto-encoder 2.
  • auto-encoder 3 and auto-encoder 4 are in group 2.
  • Auto-encoder 3 has rank 1, which means auto-encoder 3 is preferred over auto-encoder 4.
  • auto-encoders with ID 1 and ID 2 belong to group 1, and auto-encoders with ID 3 and ID 4 belong to group 2.
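The table of FIG. 6 and the grouping of FIG. 7 can be represented together as a small data structure. The field names and the lookup helper are illustrative assumptions, not from the disclosure:

```python
from dataclasses import dataclass

@dataclass
class AutoEncoderEntry:
    model_id: int   # the ID is a number
    rank: int       # the rank is the preference (rank 1 = most preferred)
    group: int      # the group the auto-encoder belongs to (as in FIG. 7)

# The UE keeps a (size-limited) table of pre-downloaded auto-encoders:
# IDs 1 and 2 in group 1, IDs 3 and 4 in group 2.
table = [
    AutoEncoderEntry(model_id=1, rank=1, group=1),
    AutoEncoderEntry(model_id=2, rank=2, group=1),
    AutoEncoderEntry(model_id=3, rank=1, group=2),
    AutoEncoderEntry(model_id=4, rank=2, group=2),
]

def preferred(table, group):
    """Return the preferred (rank-1) auto-encoder of a group."""
    return min((e for e in table if e.group == group), key=lambda e: e.rank)
```

With this layout, `preferred(table, 2)` returns the entry with ID 3, matching the FIG. 7 example.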
  • Grouping methods are illustrated in the following examples.
  • the grouping methods can be linked to channel model, e.g., the auto-encoders of CDL-A channel are in one group.
  • the auto-encoders of TDL-A channel are in one group, and so on.
  • the grouping methods can be linked with channel parameters, such as delay spread, Doppler frequency shift, etc. For example, the auto-encoders of delay spread 10~50 ms are in one group, auto-encoders of delay spread 50~100 ms are in one group, and so on.
  • the grouping methods can be linked with an SNR (signal to noise ratio) range, e.g., the auto-encoders of the CDL-A channel with SNR range 10 dB~20 dB are in one group.
  • the auto-encoders of the CDL-A channel with SNR range 20 dB~30 dB are in one group, and so on.
  • the grouping range value is not necessarily 20 dB.
  • the grouping methods can be based on model complexity, with regard to UE capability. For example, one group contains models with low complexity for reduced-capability UEs, and another group contains models with middle/high complexity for high-performance UEs (e.g., on a high-speed train) .
  • the definition of model complexity is based on the number of parameters and/or the model structure, etc.
  • the machine learning models are grouped based on at least one or any combination of the factors within the set {Modulation order, rank, BWP size, delay spread, Doppler frequency shift, channel type (e.g., CDL-A) , SNR range, model complexity, UE capability, antenna ports, antenna geometry} .
  • the grouping of machine learning models is explicitly indicated to UE by DCI/MAC-CE/RRC signaling.
  • the grouping of machine learning models is implicitly indicated to the UE. For example, the machine learning models trained with UE moving speed less than 120 km/h belong to group 1, and the machine learning models trained with UE moving speed greater than 120 km/h belong to group 2. The grouping is implicitly bound with the UE moving speed.
  • the grouping can be implicitly bound with antenna ports number, and/or rank, and/or antenna panels/modulation levels.
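As a sketch of the grouping methods above, a grouping key can combine several of the listed factors. The 10 dB SNR bucket width and the 120 km/h speed split follow the examples in the text, while the key layout itself is an illustrative assumption:

```python
def group_key(channel_model, snr_db, speed_kmh):
    """Map training/operating conditions to an (implicit) group key."""
    snr_bucket = int(snr_db // 10) * 10        # e.g. 10 dB~20 dB -> bucket 10
    speed_group = 1 if speed_kmh < 120 else 2  # implicit speed-based grouping
    return (channel_model, snr_bucket, speed_group)

# Models trained under the same conditions land in the same group.
assert group_key("CDL-A", 15.0, 60) == ("CDL-A", 10, 1)
assert group_key("CDL-A", 25.0, 200) == ("CDL-A", 20, 2)
```

Binding the key to training conditions is what makes the grouping implicit: no explicit DCI/MAC-CE/RRC indication is needed as long as both sides derive the same key.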
  • the grouping and classification of auto-encoders at UE side are not necessarily linked to a table/list structure.
  • the grouping of the machine learning models is indicated with one bitmap, where the length of the bitmap is equal to the total number of machine learning models. If a bit in the bitmap is “1” , the corresponding machine learning model applies.
  • two levels of bitmap are designed. One bitmap indicates the group the model belongs to, i.e., the size of this bitmap is the total number of groups.
  • the other bitmap indicates the corresponding machine learning model within the group.
  • the size of this bitmap equals the size of the group, where a “1” / “0” in this bitmap indicates whether the corresponding machine learning model applies.
  • the group ID of the machine learning models is indicated with binary values, and the ID of the machine learning model is indicated with binary values too.
  • “0010” indicates group “2”
  • “0110” indicates machine learning model “6” , when the size of the binary value is 4.
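The three indication formats above (one-level bitmap, two-level bitmap, binary values) can be decoded as sketched below; note that with 4 bits the binary value for model 6 is “0110”. The function names and bit ordering (leftmost bit = first model, most-significant bit first for binary values) are illustrative assumptions:

```python
def models_from_bitmap(bitmap):
    """One-level bitmap: a '1' at position i means model i applies."""
    return [i for i, b in enumerate(bitmap, start=1) if b == "1"]

def model_from_two_level(group_bitmap, model_bitmap, groups):
    """Two-level bitmap: the first bitmap selects the group (its size is
    the number of groups); the second selects the model inside that group
    (its size is the group size).  `groups` maps group index -> model IDs."""
    g = group_bitmap.index("1") + 1
    m = model_bitmap.index("1") + 1
    return groups[g][m - 1]

# Binary-value indication: "0010" -> group 2, "0110" -> model 6.
assert int("0010", 2) == 2
assert int("0110", 2) == 6
assert models_from_bitmap("0101") == [2, 4]
assert model_from_two_level("01", "10", {1: [1, 2], 2: [3, 4]}) == 3
```

The two-level form trades one extra bitmap for shorter per-group bitmaps, which pays off when there are many models spread over few groups.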
  • the machine learning model is trained at the gNB and downloaded to the UE. There are some criteria according to which the model will be dropped or replaced by another model. It is highly likely that the UE may keep a table of machine learning models/auto-encoders. The auto-encoders are pre-downloaded to the UE. The models in the table can be updated based on the frequency with which each machine learning model/auto-encoder is applied/used. If a machine learning model is used frequently, its rank will be high. Otherwise, if a machine learning model is used less frequently, it will be considered less useful and will be replaced with other desired models. Because the UE usually has limited storage space, the size of the table will be limited too. Therefore, the table should be maintained with important and useful machine learning models.
  • the models in the table are maintained with certain performance metric, for example, BER (bit error rate) /MSE (mean square error) /spectrum efficiency, etc., or any combination of them.
  • the performance of the conventional codebook type I or codebook type II can be the baseline for the auto-encoder model.
  • the performance of the auto-encoder model can be evaluated based on BER and compared with that of the conventional codebook. If the performance of the auto-encoder model is worse, it will be eliminated from the table in the UE and replaced by a new auto-encoder model.
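A hypothetical sketch of the baseline comparison just described (the metric names and the combination rule are assumptions, not part of the disclosure):

```python
# Hypothetical check: eliminate the auto-encoder model from the UE table when
# it performs worse than the conventional-codebook baseline on any metric.
# BER/MSE are lower-is-better; spectrum efficiency ("se") is higher-is-better.

def worse_than_baseline(candidate, baseline):
    for metric in ("ber", "mse"):                       # lower is better
        if metric in candidate and candidate[metric] > baseline.get(metric, float("inf")):
            return True
    if "se" in candidate and candidate["se"] < baseline.get("se", 0.0):
        return True                                     # higher is better
    return False
```

A model flagged by this check would be replaced by a new auto-encoder model, as in the bullet above.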
  • the machine learning model is usually trained with a dataset collected from a specific scenario. For example, the auto-encoder is trained with data of a specific SNR range.
  • when the input falls outside that scenario, the machine learning model may not be able to handle it, and the machine learning model should be updated in this case.
  • One option is to replace the current machine learning model with a backup machine learning model.
  • Another option is to fall back to the conventional codebook type I or the conventional codebook type II.
  • the machine learning model is updated based on a gNB indication, which can be a MAC-CE, RRC signaling, or a DCI field.
  • the specific signaling indicates the ID/name and/or group of the new machine learning model.
  • the model is updated.
  • if one or more criteria are not satisfied even once in a time window, the machine learning model will be updated.
  • for example, if the MSE or BER is higher than 10% even once, the machine learning model will be updated. If no such event is reported during the time window, the machine learning model will not be updated.
  • alternatively, if one or more criteria are not satisfied up to a maximum number of times during a time window, the machine learning model will be updated.
  • for example, if an MSE or BER higher than 10% is reported a maximum number of times, e.g., twice, the machine learning model will be updated.
  • when the time window expires, the counter is reset.
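The window-and-counter criterion above might be sketched as follows (the threshold, maximum count, and window size are illustrative defaults; whether the update fires at or strictly above the maximum is an assumption):

```python
class UpdateTrigger:
    """Counts threshold violations inside a time window and fires a model
    update when the count exceeds a configured maximum. The counter resets
    when the window expires. All parameters are illustrative."""

    def __init__(self, threshold=0.10, max_events=2, window=5.0):
        self.threshold = threshold
        self.max_events = max_events
        self.window = window
        self.window_start = None
        self.count = 0

    def report(self, metric_value, now):
        # Start a new window, or reset the counter if the old window expired.
        if self.window_start is None or now - self.window_start > self.window:
            self.window_start = now
            self.count = 0
        if metric_value > self.threshold:
            self.count += 1
        return self.count > self.max_events   # True -> update the model
```

In this sketch a BER/MSE report above 10% increments the counter, and the third violation within the window triggers the update.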
  • when a new machine learning model/auto-encoder is applied/functioning, the machine learning model will be updated.
  • the input inference data is out of the range of the training data set of the current machine learning model, one option is falling back to the conventional schemes, e.g., the conventional codebook type I or conventional codebook type II. Another option is to switch to a general model obtained from machine learning.
  • Some embodiments describe the feedback of machine learning performances.
  • the gNB needs to check the BER/throughput, which can be statistically known over a time duration. A new metric, e.g., the MSE (mean square error) , may be adopted.
  • in that case, the gNB needs to know the ground truth of the CSI-RS value at the input of the auto-encoder.
  • the ground truth value of CSI-RS should therefore be sent back to the gNB.
  • the ground truth value of CSI-RS is sent back to the gNB periodically, which is configured by RRC signaling.
  • the ground truth value of CSI-RS is sent back to the gNB semi-persistently. It is configured by MAC-CE or RRC signaling and triggered by a DCI field.
  • the ground truth value of CSI-RS is sent back to the gNB aperiodically. It is triggered by a DCI field.
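The three feedback modes for the CSI-RS ground truth can be summarized compactly (a descriptive sketch only; the dictionary layout is an assumption, while the signaling names follow the bullets above):

```python
# Sketch of how the CSI-RS ground-truth feedback is configured and triggered.
GROUND_TRUTH_FEEDBACK = {
    "periodic":        {"configured_by": ("RRC",),          "triggered_by": None},
    "semi-persistent": {"configured_by": ("MAC-CE", "RRC"), "triggered_by": "DCI"},
    "aperiodic":       {"configured_by": (),                "triggered_by": "DCI"},
}

def feedback_due(mode, dci_trigger_received):
    """Periodic feedback needs no dynamic trigger; the other modes wait for a DCI field."""
    cfg = GROUND_TRUTH_FEEDBACK[mode]
    return True if cfg["triggered_by"] is None else dci_trigger_received
```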
  • the model can be dropped.
  • a new model, e.g., with higher complexity or a different structure, is deployed. For example, the model is updated up to a maximum number of times, e.g., 10 times in 5 minutes.
  • in that case, the model will be replaced with a new model, e.g., with higher complexity or a different structure.
  • alternatively, the conventional schemes will be applied, e.g., the codebook type I or the codebook type II.
  • a new machine learning model will be deployed after it is trained and tested with satisfactory performance. After deployment, the empty period ends.
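The size-limited, usage-ranked model table of this embodiment could be sketched as below (the capacity value and least-used eviction policy are assumptions drawn from the description above):

```python
class ModelTable:
    """Sketch of a UE-side table of pre-downloaded models, ranked by how
    often each is applied. When storage is full, the least-used model is
    replaced, keeping the table filled with useful models."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.use_count = {}   # model id -> how often it has been applied

    def record_use(self, model_id):
        if model_id in self.use_count:
            self.use_count[model_id] += 1

    def add(self, model_id):
        """Pre-download a model; evict the least-used one if the table is full."""
        if model_id in self.use_count:
            return None
        evicted = None
        if len(self.use_count) >= self.capacity:
            evicted = min(self.use_count, key=self.use_count.get)
            del self.use_count[evicted]
        self.use_count[model_id] = 0
        return evicted
```

The model IDs here ("ae-indoor", etc., in the test) are hypothetical labels, not names from the disclosure.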
  • Embodiment 2 Auto-encoder switching
  • auto-encoder switching and auto-encoder updating both refer to the replacement of one auto-encoder by another auto-encoder.
  • by auto-encoder updating we mean a more general case of auto-encoder replacement, especially one auto-encoder replaced by another within the same group, as in Embodiment 1.
  • by auto-encoder switching we mean an auto-encoder replaced by one of another group, e.g., the indoor one is replaced by the outdoor one. In some cases, updating and switching may be used interchangeably.
  • the auto-encoder at the UE and the gNB respectively are a kind of new codebook.
  • a new RRC entry in the reportConfig of the CSI report configuration should be defined for the auto-encoder. If the new entry is configured for the new auto-encoder and the current auto-encoder, the new auto-encoder is applied with the pre-configured report settings, and the auto-encoder switching is completed. On the other hand, the reportConfiguration for the new auto-encoder may not be pre-configured. In one example, a new signaling of a new DCI field/a MAC-CE can be defined to activate the new auto-encoder.
  • the UE can fall back to the conventional codebook, with a flag (one bit) in the UCI indicating the type of the conventional codebook. Further, the RRC reconfiguration is initiated. If the new codebook is pre-stored/pre-downloaded at the UE, there will be no downloading procedure, and the RRC report configuration will proceed. Otherwise, if the new codebook is not pre-downloaded at the UE, the gNB will initiate the downloading process.
  • the auto-encoder of the UE side will be downloaded via the data channel from the gNB. In another example, the auto-encoder is downloaded from a third node.
  • the machine learning model switching is gNB specific, because generally the training of the machine learning model is at the gNB and the data collection sends the data to the gNB. As an example, gNB1 and gNB2 are able to judge whether the data collected by them contribute to the same training data set. Thus, when a UE is handed over from gNB1 to gNB2, there should be a judgement of whether the environment around this UE has changed, e.g., gNB1 is indoor and gNB2 is outdoor. In one example, this is determined by the UE: if the UE considers that the environment has changed, it will request the gNB to change the machine learning model. As an example, the request is a newly defined signaling in the UCI.
  • the gNB1 and gNB2 will inform the UE to pre-configure the new machine learning model, e.g., by DCI/MAC-CE or RRC signaling.
  • the new machine learning model will begin to function after handover. Otherwise, there will be an indication, by DCI/MAC-CE or RRC signaling, that there is no change of the machine learning model after handover.
  • when gNB1 and gNB2 are able to judge that the data collected by them do not contribute to the same training data set, gNB2 will inform gNB1, and then gNB1 will inform the UE to pre-arrange the new machine learning model, e.g., by DCI/MAC-CE or RRC signaling, if the UE is decided to be handed over to gNB2.
  • the new machine learning model will begin to function after handover. Otherwise, there will be an indication, by DCI/MAC-CE or RRC signaling, that there is no change of the machine learning model after handover.
  • the machine learning model switching is gNB specific. If the wireless environment is not changed, there is no codebook (auto-encoder model) switch. If the wireless environment has changed, the codebook (auto-encoder model) switch is performed.
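A compact way to read the handover logic of this embodiment (the function and argument names are hypothetical):

```python
# Sketch of the switching decision at handover: a new model is pre-arranged
# when the two gNBs' collected data do not contribute to the same training
# data set, or when the UE itself judges that the environment has changed.

def preconfigure_new_model(same_training_set, ue_sees_env_change):
    return (not same_training_set) or ue_sees_env_change
```

A True result would correspond to the gNB informing the UE by DCI/MAC-CE or RRC signaling; False corresponds to the indication that there is no model change after handover.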
  • Embodiment 3 The time window of the autoencoder switching
  • FIG. 8 is a schematic diagram illustrating an example of an indication of new auto-encoder and a switching time according to an embodiment of the present disclosure.
  • FIG. 8 illustrates that, as an example, there can be a time window beginning from the indication of the new codebook and ending at the time when the new codebook begins to function. The size of this time window is related to the UE ability. Some embodiments define this time window as the switching time. In one example, the beginning time is counted from the reception of the PDCCH indicating the auto-encoder model switching. Afterwards, if a report falls into the switching time window, the report data is processed using the current auto-encoder model, which is the one in use before the reception of the indicating DCI.
  • if the reporting time is later than the switching time, the reporting data is processed using the indicated new auto-encoder.
  • the switching time is configured by gNB to a UE.
  • portions of the switching time are configured by the connected gNB to a UE.
  • the switching time is reported by the UE as a UE capability.
  • portions of the switching time are reported by the UE as UE capability.
  • the making of the switching time: the DCI/MAC-CE processing time plus the data processing time of the auto-encoder input.
  • alternatively, the switching time is one value reported by the UE, or configured by the gNB.
  • after the switching time, the indicated new codebook/auto-encoder begins to function.
  • the codebook/auto-encoder model taking effects during the switching time are described.
  • the old/legacy auto-encoder model continues to take effect until the switching time ends, which is the case in FIG. 8.
  • alternatively, the legacy auto-encoder model falls back to the conventional codebook type I or codebook type II.
  • in that case, the conventional codebook/legacy model will take effect until the switching time ends.
  • the timeline follows the current 3GPP definitions of PDCCH processing time and PUCCH processing time (N1, N2) . If there is any uplink feedback before N2, the old auto-encoder model applies. After time N2, the conventional codebook type I or codebook type II applies. After the switching time (the switching time is larger than N2) , the indicated new auto-encoder begins to function.
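The N2-based timeline option can be expressed as a simple selection rule (the time values are abstract units; the real N1/N2 follow the 3GPP processing-time definitions):

```python
# Sketch of which codebook processes an uplink feedback at time t (counted
# from PDCCH reception), per the timeline option above. Assumes the switching
# time is larger than n2.

def codebook_for_feedback(t, n2, switching_time):
    if t < n2:
        return "old auto-encoder"
    if t < switching_time:
        return "conventional codebook type I/II"
    return "new auto-encoder"
```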
  • FIG. 9 is a schematic diagram illustrating an example of an auto-encoder switching from the current one to a general one, and further switching to a new one according to an embodiment of the present disclosure.
  • FIG. 9 illustrates that, as another option, the legacy auto-encoder model falls back to general one.
  • the general model (auto-encoder model) is trained based on a large amount of data. Its performance is a tradeoff of several machine learning models of low complexity. However, it has general applicability.
  • the general model also follows the switching time. The details are shown in FIG. 9.
  • Switching time 1 is the time lag of switching from the current auto-encoder model to a general auto-encoder model.
  • the switching time 2 is the time lag of switching from the general auto-encoder model to auto-encoder model 2.
  • the switching time 1 is the same as switching time 2.
  • the switching time is configured by gNB.
  • the switching times are reported by UE.
  • the switching times are configured by gNB.
  • the auto-encoder of the previous scenario will take effect until the auto-encoder model of the new scenario functions. If a UE moves into a building from outside, the auto-encoder model of the outdoor scenario will take effect during the switching time, until the auto-encoder model of the indoor scenario begins to function when the switching time ends and the UE has possibly entered the building.
  • switching from Autoencoder 1, to a fallback (the general one) , and then to Autoencoder 2 is provided.
  • an intermediate state is necessary.
  • the intermediate state comprises a fallback codebook.
  • a general codebook from machine learning is provided.
  • the fallback/non-fallback is configurable.
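The two-stage switch of FIG. 9 (current model, then the general model, then the new model, gated by switching times 1 and 2) might be sketched as follows (the time origin and variable names are assumptions):

```python
# Sketch of the two-stage switching timeline. t is measured from the first
# switching indication; second_indication is the time at which the switch
# to autoencoder 2 is indicated. All values are abstract time units.

def model_in_effect(t, switching_time_1, second_indication, switching_time_2):
    if t < switching_time_1:
        return "autoencoder 1"          # current model during switching time 1
    if t < second_indication + switching_time_2:
        return "general autoencoder"    # fallback/general model in between
    return "autoencoder 2"
```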
  • Embodiment 4 Indirective auto-encoder switching
  • when the UE moves into the new environment for the first time, the performance of the machine learning model is not guaranteed.
  • falling back to a conventional model ensures a performance baseline.
  • another reason is that the UE can be pre-connected to an indoor gNB when it moves into a building from outside, or pre-connected to an outdoor gNB when it moves outside from within a building.
  • in the worst case, neither the legacy machine learning model nor the target new machine learning model works; it is then wise to fall back to the conventional codebook.
  • alternatively, the UE falls back to a general machine learning model.
  • the general machine learning model (a general auto-encoder model) is trained based on a larger amount of data but with average performance. Its role is to take effect in the intermediate stage.
  • some embodiments of this disclosure are about the design of the machine learning model life cycle and the management of machine learning models.
  • the machine learning model, e.g., the auto-encoder for channel compression, should be updated based on certain criteria regarding the input data or the feedback measures. That ensures the performance of the auto-encoder model.
  • the second aspect in some embodiments of this disclosure is on the machine learning model switching, e.g., the auto-encoder model switching during the UE operations.
  • the machine learning model is trained based on the data of a specific scenario. When the scenario changes, the auto-encoder model should be changed and switched to another model accordingly. That is related to the tradeoff between the generalization and the performance of a machine learning model. Further, some embodiments of this disclosure solve both problems.
  • the first one is the update of the auto-encoder model.
  • the model should be updated based on certain criteria. That ensures the performance of the auto-encoder model.
  • the second one is the auto-encoder model switch during the UE operations.
  • the machine learning model is trained based on the data of a specific scenario. When the scenario changes, the auto-encoder model should be changed and switched to another model accordingly. That is a necessary step to guarantee the performance of the machine learning models.
  • FIG. 10 is a block diagram of an example system 700 for wireless communication according to an embodiment of the present disclosure. Embodiments described herein may be implemented into the system using any suitably configured hardware and/or software.
  • FIG. 10 illustrates the system 700 including a radio frequency (RF) circuitry 710, a baseband circuitry 720, an application circuitry 730, a memory/storage 740, a display 750, a camera 760, a sensor 770, and an input/output (I/O) interface 780, coupled with each other at least as illustrated.
  • the application circuitry 730 may include a circuitry such as, but not limited to, one or more single-core or multi-core processors.
  • the processors may include any combination of general-purpose processors and dedicated processors, such as graphics processors, application processors.
  • the processors may be coupled with the memory/storage and configured to execute instructions stored in the memory/storage to enable various applications and/or operating systems running on the system.


Abstract

A user equipment (UE), a base station, and wireless communication methods based on machine learning/artificial intelligence are provided. The wireless communication method based on machine learning/artificial intelligence performed by the UE includes maintaining one or more machine learning models by one or more tables, lists, or groups based on machine learning and performing a machine learning model updating or a machine learning model switching, wherein the machine learning model updating or the machine learning model switching comprises a replacement of a first machine learning model by a second machine learning model, and the machine learning model updating is based on one or more machine learning model performance conditions or an indication from a first base station explicitly or implicitly, or from a UE request.

Description

USER EQUIPMENT, BASE STATION, AND WIRELESS COMMUNICATION METHODS BASED ON MACHINE LEARNING/ARTIFICIAL INTELLIGENCE
BACKGROUND OF DISCLOSURE
1. Field of the Disclosure
The present disclosure relates to the field of wireless communication systems, and more particularly, to a user equipment (UE) , a base station, and wireless communication methods based on machine learning/artificial intelligence, for example, one or more mechanisms of updating/changing/switching one or more machine learning models for new radio (NR) air interface.
2. Description of the Related Art
Precoding and scheduling of new radio (NR) are both based on feedback information. A large amount of feedback of channel state information (CSI) for multi-user multiple-input multiple-output (MU-MIMO) is desired to improve the performance. Further, as the antenna size keeps growing, the antenna port number of MU-MIMO currently grows to 32. The codebook type II, introduced in Release 15, has higher precision compared with the codebook type I, which brings a larger amount of data to feed back. In a word, the CSI feedback in MU-MIMO is quite an overhead.
Therefore, there is a need for a user equipment (UE) , a base station, and wireless communication methods based on machine learning/artificial intelligence, which can solve the issues in the prior art, provide an update of one or more machine learning models/auto-encoder models, provide one or more machine learning models/auto-encoder models switching during UE operations, reduce system overhead, provide a good communication performance, and/or provide high reliability.
SUMMARY
An object of the present disclosure is to propose a user equipment (UE) , a base station, and wireless communication methods based on machine learning/artificial intelligence, which can solve the issues in the prior art, provide an update of one or more machine learning models/auto-encoder models, provide one or more machine learning models/auto-encoder models switching during UE operations, reduce system overhead, provide a good communication performance, and/or provide high reliability.
In a first aspect of the present disclosure, a wireless communication method based on machine learning/artificial intelligence performed by a UE includes maintaining one or more machine learning models by one or more tables, lists, or groups based on machine learning and performing a machine learning model updating or a machine learning model switching, wherein the machine learning model updating or the machine learning model switching comprises a replacement of a first machine learning model by a second machine learning model, and the machine learning model updating is based on one or more machine learning model performance conditions or an indication from a first base station explicitly or implicitly, or from a UE request.
In a second aspect of the present disclosure, a wireless communication method based on machine learning/artificial intelligence performed by a first base station includes maintaining or managing one or more machine learning models by one or more tables, lists, or groups based on machine learning and controlling a user equipment (UE) to perform a machine learning model updating or a machine learning model switching, wherein the machine learning model updating or the machine learning model switching comprises a replacement of a first machine learning model by a second machine learning model, and the machine learning model updating is based on one or more machine learning model performance conditions or an indication from a first base station explicitly or implicitly, or from a UE request.
In a third aspect of the present disclosure, a user equipment (UE) comprises a memory, a transceiver, and a processor coupled to the memory and the transceiver. The processor is configured to maintain one or more machine learning models by one or more tables, lists, or groups based on machine learning, and the processor is configured to perform a machine learning model updating or a machine learning model switching, wherein the machine learning model updating or the machine learning model switching comprises a replacement of a first machine learning model by a second machine learning model, and the machine learning model updating is based on one or more machine learning model performance conditions or an indication from a first base station explicitly or implicitly, or from a UE request.
In a fourth aspect of the present disclosure, a first base station comprises a memory, a transceiver, and a processor coupled to the memory and the transceiver. The processor is configured to maintain or manage one or more machine learning models by one or more tables, lists, or groups based on machine learning, and the processor is configured to control a user equipment (UE) to perform a machine learning model updating or a machine learning model switching, wherein the machine learning model updating or the machine learning model switching comprises a replacement of a first machine learning model by a second machine learning model, and the machine learning model updating is based on one or more machine learning model performance conditions or an indication from a first base station explicitly or implicitly, or from a UE request.
In a fifth aspect of the present disclosure, a non-transitory machine-readable storage medium has stored thereon instructions that, when executed by a computer, cause the computer to perform the above method.
In a sixth aspect of the present disclosure, a chip includes a processor, configured to call and run a computer program stored in a memory, to cause a device in which the chip is installed to execute the above method.
In a seventh aspect of the present disclosure, a computer readable storage medium, in which a computer program is stored, causes a computer to execute the above method.
In an eighth aspect of the present disclosure, a computer program product includes a computer program, and the computer program causes a computer to execute the above method.
In a ninth aspect of the present disclosure, a computer program causes a computer to execute the above method.
BRIEF DESCRIPTION OF DRAWINGS
In order to illustrate the embodiments of the present disclosure or the related art more clearly, the figures to be described in the embodiments are briefly introduced in the following. It is obvious that the drawings are merely some embodiments of the present disclosure; a person having ordinary skill in this field can obtain other figures according to these figures without paying creative effort.
FIG. 1 is a schematic diagram illustrating an example of a channel compression based on an auto-encoder according to an embodiment of the present disclosure.
FIG. 2 is a block diagram of one or more user equipments (UEs) and a base station (e.g., gNB) for communication in a communication network system according to an embodiment of the present disclosure.
FIG. 3 is a flowchart illustrating a wireless communication method based on machine learning/artificial intelligence performed by a UE according to an embodiment of the present disclosure.
FIG. 4 is a flowchart illustrating a wireless communication method based on machine learning/artificial intelligence performed by a first base station according to an embodiment of the present disclosure.
FIG. 5 is a schematic diagram illustrating an example of a life cycle of a machine learning model of AI for NR air-interface according to an embodiment of the present disclosure.
FIG. 6 is a schematic diagram illustrating an example of auto-encoders in a table according to an embodiment of the present disclosure.
FIG. 7 is a schematic diagram illustrating an example of grouping of auto-encoders according to an embodiment of the present disclosure.
FIG. 8 is a schematic diagram illustrating an example of an indication of new auto-encoder and a switching time according to an embodiment of the present disclosure.
FIG. 9 is a schematic diagram illustrating an example of an auto-encoder switching from the current one to a general one, and further switching to a new one according to an embodiment of the present disclosure.
FIG. 10 is a block diagram of a system for wireless communication according to an embodiment of the present disclosure.
DETAILED DESCRIPTION OF EMBODIMENTS
Embodiments of the present disclosure are described in detail with the technical matters, structural features, achieved objects, and effects with reference to the accompanying drawings as follows. Specifically, the terminologies in the embodiments of the present disclosure are merely for describing the purpose of the certain embodiment, but not to limit the disclosure.
FIG. 1 illustrates that, in some embodiments, machine learning is introduced. For channel compression, an auto-encoder structure is applied. The auto-encoder structure comprises two parts, one at a UE for data compressing, the other at a gNB for data decompressing. The data here is a general concept, which means estimated CSI-RS values, or the channel estimations at the UE side, etc. The two parts play the role of encoder and decoder, respectively. Current works on the channel compression are about the compression of the channel itself and the compression of the CSI-RS values. For the first scheme, the channel of the time and frequency domain is treated as an image with imaginary and real parts. It is sent into an auto-encoder at the UE side. The output of the auto-encoder is a lower-dimension representation of the compressed channel. The compressed channel is sent back to the gNB, and the gNB will decode the compressed channel. For the second scheme, the estimated raw CSI values (the channel estimation outcome at the CSI REs) are sent into an auto-encoder at the UE side. The output of the auto-encoder is a lower-dimension output, a vector. The compressed CSI values are sent back to the gNB, and the gNB will decode the compressed CSI values. No matter which of the above schemes is adopted by the 3GPP, the auto-encoder at the UE side and the auto-encoder at the gNB side need to be properly managed; each has a life cycle and a specific application scenario. The second scheme is more sensible, since it compresses less data and has the same baseline as the codebook-based method. This is in sharp contrast with the current general-purpose codebooks, i.e., the codebook type I and the codebook type II.
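To illustrate the second scheme only (raw CSI values compressed at the UE side and decoded at the gNB), the following uses a fixed linear projection as a stand-in for the trained auto-encoder; the dimensions and the basis are arbitrary choices, not values from the disclosure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "encoder": project the estimated CSI vector (split into real and
# imaginary parts) onto a low-dimensional subspace. A trained auto-encoder
# would learn this mapping; an orthonormal random basis merely illustrates
# the dimensionality reduction.
n_csi, n_code = 64, 8
basis, _ = np.linalg.qr(rng.standard_normal((2 * n_csi, n_code)))

def ue_encode(csi):
    x = np.concatenate([csi.real, csi.imag])   # treat complex CSI as 2*N reals
    return basis.T @ x                          # compressed feedback vector

def gnb_decode(code):
    x = basis @ code                            # reconstruction in the subspace
    return x[:n_csi] + 1j * x[n_csi:]

csi = rng.standard_normal(n_csi) + 1j * rng.standard_normal(n_csi)
code = ue_encode(csi)
recovered = gnb_decode(code)
```

Here 64 complex CSI values are fed back as an 8-dimensional real vector; the reconstruction is lossy except for inputs already lying in the chosen subspace, which is the kind of tradeoff the trained auto-encoder manages.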
FIG. 2 illustrates that, in some embodiments, one or more user equipments (UEs) 10 and a base station (e.g., gNB) 20 for communication in a communication network system 40 according to an embodiment of the present disclosure are provided. The communication network system 40 includes the one or more UEs 10 and the base station 20 (such as a first base station or a second base station) . The one or more UEs 10 may include a memory 12, a transceiver 13, and a processor 11 coupled to the memory 12 and the transceiver 13. The base station 20 may include a memory 22, a transceiver 23, and a processor 21 coupled to the memory 22 and the transceiver 23. The processor 11 or 21 may be configured to implement proposed functions, procedures and/or methods described in this description. Layers of radio interface protocol may be implemented in the processor 11 or 21. The memory 12 or 22 is operatively coupled with the processor 11 or 21 and stores a variety of information to operate the processor 11 or 21. The transceiver 13 or 23 is operatively coupled with the processor 11 or 21, and the transceiver 13 or 23 transmits and/or receives a radio signal.
The processor 11 or 21 may include an application-specific integrated circuit (ASIC) , other chipset, logic circuit and/or data processing device. The memory 12 or 22 may include read-only memory (ROM) , random access memory (RAM) , flash memory, memory card, storage medium and/or other storage device. The transceiver 13 or 23 may include baseband circuitry to process radio frequency signals. When the embodiments are implemented in software, the techniques described herein can be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. The modules can be stored in the memory 12 or 22 and executed by the processor 11 or 21. The memory 12 or 22 can be implemented within the processor 11 or 21 or external to the processor 11 or 21, in which case it can be communicatively coupled to the processor 11 or 21 via various means as is known in the art.
In the following embodiments, please note that the machine model updating is a common operation applicable to the use cases based on machine learning, such as beam management and positioning. It is not limited to the embodiment described with auto-encoder here. For the other use cases, the machine learning model can be deployed as one node only, either gNB or UE. The model is updated according to certain criteria, such as the performance metrics, BER, or spectrum efficiency, or MSE.
In some embodiments, the processor 11 is configured to maintain one or more machine learning models by one or more tables, lists, or groups based on machine learning, and the processor 11 is configured to perform a machine learning model updating or a machine learning model switching, wherein the machine learning model updating or the machine learning model switching comprises a replacement of a first machine learning model by a second machine learning model, and the machine learning model updating is based on one or more machine learning model performance conditions or an indication from the first base station explicitly or implicitly, or from a UE request. This can solve issues in the prior art, provide an update of one or more machine learning models/auto-encoder models, provide one or more machine learning models/auto-encoder models switching during UE operations, reduce system overhead, provide a good communication performance, and/or provide high reliability.
In some embodiments, the processor 21 is configured to maintain or manage one or more machine learning models by one or more tables, lists, or groups based on machine learning, and the processor 21 is configured to control the user equipment (UE) 10 to perform a machine learning model updating or a machine learning model switching, wherein the machine learning model updating or the machine learning model switching comprises a replacement of a first machine learning model by a second machine learning model, and the machine learning model updating is based on one or more machine learning model performance conditions or an indication from the first base station explicitly or implicitly, or from a UE request. This can solve issues in the prior art, provide an update of one or more machine learning models/auto-encoder models, provide one or more machine learning models/auto-encoder models switching during UE operations, reduce system overhead, provide a good communication performance, and/or provide high reliability.
FIG. 3 illustrates a wireless communication method 300 based on machine learning/artificial intelligence performed by a UE according to an embodiment of the present disclosure. In some embodiments, the method 300 includes: a block 302, maintaining one or more machine learning models by one or more tables, lists, or groups based on machine learning, and a block 304, performing a machine learning model updating or a machine learning model switching, wherein the machine learning model updating or the machine learning model switching comprises a replacement of a first machine learning model by a second machine learning model, and the machine learning model updating is based on one or more machine learning model performance conditions or an indication from a first base station explicitly or implicitly, or from a UE request. This can solve issues in the prior art, provide an update of one or more machine learning models/auto-encoder models, provide one or more machine learning models/auto-encoder models switching during UE operations, reduce system overhead, provide a good communication performance, and/or provide high reliability.
FIG. 4 illustrates a wireless communication method 400 based on machine learning/artificial intelligence performed by a base station according to an embodiment of the present disclosure. In some embodiments, the method 400 includes: a block 402, maintaining or managing one or more machine learning models by one or more tables, lists, or groups based on machine learning, and a block 404, controlling a user equipment (UE) to perform a machine learning model updating or a machine learning model switching, wherein the machine learning model updating or the machine learning model switching comprises a replacement of a first machine learning model by a second machine learning model, and the machine learning model updating is based on one or more machine learning model performance conditions or an indication from a first base station explicitly or implicitly, or from a UE request. This can solve issues in the prior art, provide an update of one or more machine learning models/auto-encoder models, provide one or more machine learning models/auto-encoder models switching during UE operations, reduce system overhead, provide a good communication performance, and/or provide high reliability.
In some embodiments, the machine learning models are maintained in groups. In some embodiments, the machine learning models are grouped based on one or any combination of the following factors: a channel model, channel parameters, a signal to noise ratio (SNR) range, a model complexity, a UE capability, a modulation order, a rank, a bandwidth part (BWP) size, a delay spread, a Doppler frequency shift, antenna ports, and/or an antenna geometry. In some embodiments, a grouping of the machine learning models is indicated to the UE by a downlink control information (DCI), a medium access control (MAC) control element (CE), a radio resource control (RRC) signaling, a bitmap, or a multiple-level bitmap. In some embodiments, the machine learning models in the table or the list based on machine learning models are updated based on usage frequencies of the machine learning models, performances of the machine learning models, and/or a UE request. In some embodiments, the one or more machine learning model performance conditions comprise: if a machine learning model performance metric is higher than a first value once, the UE performs the machine learning model updating; if the machine learning model performance metric is higher than the first value and the number of such occurrences exceeds a maximum number of times configured by the first base station, the UE performs the machine learning model updating; or if the UE enters a new scenario and a new machine learning model is applied/functioning/to be deployed in the new scenario, the UE performs the machine learning model updating.
In some embodiments, the indication from the first base station comprises a MAC-CE, an RRC signaling, or a DCI field used to indicate an identifier (ID)/a name of the second machine learning model, and/or one or more levels or bitmaps to retrieve one or more second machine learning models in a group. In some embodiments, the replacement of the first machine learning model by the second machine learning model comprises a replacement of a current machine learning model by a new machine learning model, a replacement of the current machine learning model by a backup machine learning model, falling back to a conventional processing method comprising a conventional codebook type I or a conventional codebook type II, or switching to a relative general model obtained from machine learning. The relative general model or general model refers to a model with more generality than some machine learning models; however, the relative general model or general model is not completely generic to every scenario/situation. In some embodiments, the wireless communication method based on machine learning by the UE further comprises providing feedback of an input of the machine learning model to the first base station, wherein the feedback comprises at least one of the following: the raw value of a channel state information reference signal (CSI-RS) is sent back to the first base station periodically and is configured by an RRC signaling; the raw value of the CSI-RS is sent back to the first base station semi-persistently and is configured by a MAC-CE or the RRC signaling and triggered by a DCI field; or the raw value of the CSI-RS is sent back to the first base station aperiodically and is triggered by the DCI field.
In some embodiments, the machine learning model updating comprises the replacement of the first machine learning model by the second machine learning model in a same group, and the machine learning model switching comprises the replacement of the first machine learning model by the second machine learning model in different groups.
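The distinction above between updating (replacement within a same group) and switching (replacement across different groups) can be sketched as follows. This is a minimal, non-normative illustration; the function, group names, and model names are hypothetical and not part of any embodiment or specification.

```python
def classify_replacement(groups, old_model, new_model):
    """Return 'updating' if both models share a group, else 'switching'.

    groups: mapping of group name -> set of model names (illustrative).
    """
    old_groups = {g for g, members in groups.items() if old_model in members}
    new_groups = {g for g, members in groups.items() if new_model in members}
    # A shared group means replacement within the same group, i.e. updating.
    return "updating" if old_groups & new_groups else "switching"

groups = {
    "group1": {"AE1", "AE2"},   # e.g. indoor-trained auto-encoders
    "group2": {"AE3", "AE4"},   # e.g. outdoor-trained auto-encoders
}
print(classify_replacement(groups, "AE1", "AE2"))  # updating (same group)
print(classify_replacement(groups, "AE1", "AE3"))  # switching (different groups)
```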
In some embodiments, the machine learning model switching comprises at least one of the following: new RRC entries at a CSI report configuration are defined for the machine learning models to indicate a selection of one machine learning model for the CSI report; if the new RRC entries are configured for both the new machine learning model and the current machine learning model as two report settings, the reporting is switched to that configured for the new machine learning model by the first base station using a MAC-CE or a DCI field; if the CSI report configuration for the new machine learning model is not pre-configured, a new signaling, e.g., a new DCI field and/or a MAC-CE, is defined to activate the new machine learning model as the new codebook and report the CSI based on the new codebook; if the CSI report configuration for the new machine learning model is not pre-configured and the new signaling is not defined, the UE falls back to a conventional codebook comprising the codebook type I, the codebook type II, or a relative general machine learning model, with a flag in an uplink control information (UCI) indicating the type of the codebook, and an RRC configuration for the deployment of the new machine learning model is initiated; or if the CSI report configuration for the new machine learning model is not pre-configured and the new signaling is not defined, the UE falls back to a predefined codebook comprising the codebook type I, the codebook type II, or the relative general machine learning model, and an RRC configuration for the deployment of the new machine learning model is initiated.
In some embodiments, if the RRC reconfiguration is initiated and the new codebook is pre-stored/pre-downloaded at the UE, there are no downloading procedures and the RRC for the CSI report configuration proceeds. In some embodiments, if the RRC reconfiguration is initiated and the new codebook is not pre-downloaded at the UE, the downloading process is initiated by the first base station. In some embodiments, the machine learning model of the UE is downloaded via a data channel from the first base station, or the machine learning model is downloaded from a third node. In some embodiments, when the UE is handed over from the first base station to a second base station, if the UE considers an environment around the UE is changed, the UE requests the second base station to change the machine learning model, and the request is in the UCI. The request is not limited to occur only in handover, and can also be suitable for more situations. For example, the request applies to all the embodiments mentioned in the present application. In another example, the UE may request another machine learning model with lower complexity, or request to fall back to conventional processing schemes (e.g., using the codebook type I), when its power becomes low, or when it is a UE with reduced capability. In another example, when the UE is going to speed up, it will request another machine learning model before acceleration. When the gNB receives the request, it will initiate machine learning model switching procedures. The machine learning model switching can follow the examples in this application.
In some embodiments, when the UE is handed over from the first base station to a second base station, if the first base station or the second base station considers that an environment around the UE is changed, for example, the first base station is indoor and the second base station is outdoor, and if the first base station and the second base station are able to judge that data collected by the first base station and the second base station do not contribute to a same training data set, the second base station informs the UE to pre-configure the new machine learning model by a DCI, a MAC-CE, or an RRC signaling. In some embodiments, when the UE is handed over from the first base station to a second base station, if the first base station or the second base station considers that an environment around the UE is changed, and the first base station and the second base station are able to judge that data collected by the first base station and the second base station do not contribute to a same training data set, the second base station informs the first base station to inform the UE to pre-arrange the new machine learning model by a DCI, a MAC-CE, or an RRC signaling.
In some embodiments, the machine learning model switching comprises a time window beginning from an indication of switching to a new codebook/machine learning model and ending at the time when the new codebook/machine learning model begins to function, and the time window is a switching time. In some embodiments, a size of the time window is related to the UE capability.
In some embodiments, if a beginning time of the switching time is counted from a receipt of a physical downlink control channel (PDCCH) indicating the machine learning model switching and if a CSI report falls into the switching time, the CSI report is processed by the current machine learning model, which is in use before the receipt of the PDCCH. In some embodiments, if a beginning time of the switching time is counted from the receipt of the PDCCH indicating the machine learning model switching and if a reporting time of a report is larger than the switching time, the report is obtained using the new machine learning model. In some embodiments, the switching time is configured by the first base station to the UE, or a portion of the switching time is configured by the first base station to the UE. In some embodiments, the switching time is reported by the UE as a UE capability, or a portion of the switching time is reported by the UE as the UE capability.
In some embodiments, the switching time comprises a DCI/MAC-CE processing time plus a data processing time of an input of the current machine learning model, or the switching time is a value reported by the UE or configured by the first base station. In some embodiments, an old/legacy machine learning model continues to take effect until the switching time ends, or the old/legacy machine learning model falls back to a conventional codebook type I or a conventional codebook type II, and the conventional codebook type I or the conventional codebook type II continues to take effect until the switching time ends. In some embodiments, a timeline of the switching time follows current 3GPP definitions of a PDCCH processing ability time (N1) and a PUCCH processing ability time (N2). In some embodiments, if there is any uplink feedback before the PUCCH processing ability time, the old/legacy machine learning model applies; after the PUCCH processing ability time, the conventional codebook type I or the conventional codebook type II applies. In some embodiments, the switching time is a time lag of switching from the current machine learning model to a general machine learning model, or the switching time is the time lag of switching from the general machine learning model to another machine learning model. In some embodiments, in a non-fallback scenario, the machine learning model of the previous scenario takes effect until the machine learning model of the new scenario functions. In some embodiments, the UE falls back to another machine learning model before switching to a target machine learning model.
The following specific examples are based on the auto-encoder model. However, the ideas and methods of this section are not limited to the auto-encoder model.
Embodiment 1: Life cycle of an auto-encoder model (new codebook)
FIG. 5 is a schematic diagram illustrating an example of a life cycle of a machine learning model of ML/AI for the NR air interface according to an embodiment of the present disclosure. FIG. 6 is a schematic diagram illustrating an example of auto-encoders in a table according to an embodiment of the present disclosure. Some blocks of FIG. 5 can be adapted from the functional framework for RAN intelligence of TR 37.817. The model drop block of FIG. 5 is newly added. The model drop can be based on a long-term assessment of machine learning model performances. The performance assessment can be based on metrics, for example, a bit error rate (BER), a mean square error (MSE), or a newly defined measure; however, the metrics are not limited to these. At first, some embodiments consider that the machine learning model is trained at the gNB, since the gNB has more computing power and is static, which benefits data collection and labeling. When a machine learning model is to be deployed at the UE, it will possibly experience the following procedures: train, test, deployment (download to the UE), retrain-test, deploy, retrain/drop, and so on. The working procedures in FIG. 5 are not limited to the auto-encoder model here. They are applicable to the use cases of machine learning/artificial intelligence for NR air interfaces, such as positioning and beam management. Machine learning models can be organized in a table. As an example, FIG. 6 illustrates that, in some embodiments, the UE keeps one or more tables, lists, or groups for the machine learning models. In an example, the UE keeps a table of auto-encoders, where the ID is a number and the rank is the preference of the auto-encoder.
FIG. 7 is a schematic diagram illustrating an example of grouping of auto-encoders according to an embodiment of the present disclosure. FIG. 7 illustrates that, in some embodiments, machine learning models are in groups. As an example, the UE keeps one or more tables, lists, or groups for the machine learning models, where the machine learning models/auto-encoders are maintained in groups. For example, in FIG. 7, there are two groups. Auto-encoder 1 and auto-encoder 2 are in group 1. Auto-encoder 1 has rank 1, which means auto-encoder 1 is preferred over auto-encoder 2. In a similar way, auto-encoder 3 and auto-encoder 4 are in group 2. Auto-encoder 3 has rank 1, which means auto-encoder 3 is preferred over auto-encoder 4. Further, the auto-encoders with ID 1 and ID 2 belong to group 1, and the auto-encoders with ID 3 and ID 4 belong to group 2.
Grouping methods are illustrated in the following examples. As an example, the grouping methods can be linked to the channel model, e.g., the auto-encoders of the CDL-A channel are in one group, the auto-encoders of the TDL-A channel are in one group, and so on. In another example, the grouping methods can be linked with channel parameters, such as delay spread, Doppler frequency shift, etc.; e.g., the auto-encoders of delay spread 10~50 ms are in one group, the auto-encoders of delay spread 50~100 ms are in one group, and so on. In another example, the grouping methods can be linked with the SNR (signal to noise ratio) range, e.g., the auto-encoders of the CDL-A channel with SNR range 10dB~20dB are in one group, the auto-encoders of the CDL-A channel with SNR range 20dB~30dB are in one group, and so on. Please note that the grouping range value is not necessarily 10 dB; several other values can be selected. The idea is to group the auto-encoders according to SNR values. In another example, the grouping methods can be based on model complexity, regarding the UE capability. For example, the group of models with less complexity is for reduced capability UEs, and the group of models with middle/high complexity is for high performance UEs (e.g., the high speed train). The definition of model complexity is based on parameter numbers and/or model structures, etc.
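The grouping methods above can be sketched as deriving a group key from a model's training metadata. The following is a hypothetical illustration assuming each model record carries a channel-type label and a training SNR value; the field names (`channel`, `snr_db`) and the 10 dB band width are assumptions for illustration, not requirements of the embodiments.

```python
from collections import defaultdict

def group_key(model):
    """Derive a group key from illustrative training metadata fields."""
    snr_band = (model["snr_db"] // 10) * 10   # bucket SNR into 10 dB bands
    return (model["channel"], snr_band)

def group_models(models):
    """Group model IDs by (channel type, SNR band)."""
    groups = defaultdict(list)
    for m in models:
        groups[group_key(m)].append(m["id"])
    return dict(groups)

models = [
    {"id": 1, "channel": "CDL-A", "snr_db": 12},
    {"id": 2, "channel": "CDL-A", "snr_db": 18},
    {"id": 3, "channel": "CDL-A", "snr_db": 25},
    {"id": 4, "channel": "TDL-A", "snr_db": 12},
]
print(group_models(models))
# → {('CDL-A', 10): [1, 2], ('CDL-A', 20): [3], ('TDL-A', 10): [4]}
```

The same shape of key function could bucket by delay spread, Doppler shift, or model complexity instead.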
In another example, the machine learning models are grouped based on at least one or any combination of the factors within the set {modulation order, rank, BWP size, delay spread, Doppler frequency shift, channel type (e.g., CDL-A), SNR range, model complexity, UE capability, antenna ports, antenna geometry}. In another example, the grouping of machine learning models is explicitly indicated to the UE by a DCI/MAC-CE/RRC signaling. In another example, the grouping of machine learning models is implicitly indicated to the UE. For example, the machine learning models which are trained with a UE moving speed less than 120 km/h belong to group 1, and the machine learning models which are trained with a UE moving speed greater than 120 km/h belong to group 2; the grouping is implicitly bound with the UE moving speed. In another example, the grouping can be implicitly bound with the antenna ports number, and/or the rank, and/or the antenna panels/modulation levels. As an alternative, the grouping and classification of auto-encoders at the UE side are not necessarily linked to a table/list structure. In an example, the grouping of the machine learning models is indicated with one bitmap, where the length of the bitmap is equal to the total number of machine learning models. If a bit in the bitmap is "1", the corresponding machine learning model applies. In another example, two levels of bitmaps are designed. One bitmap indicates the belonging group, i.e., the size of this bitmap is the total number of groups. The other bitmap indicates the corresponding machine learning model; the size of this bitmap equals the size of the group, where "1/0" in this bitmap indicates whether the corresponding machine learning model applies. In an example, the group ID of the machine learning models is indicated with binary values, and the ID of the machine learning model is indicated with binary values too.
For example, "0010" indicates group "2", and "0110" indicates machine learning model "6", when the size of the binary value is 4.
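The two-level bitmap retrieval and the binary ID indication described above can be illustrated with the following hypothetical sketch. The bitmap orderings, group contents, and the assumption that exactly one group bit is set are all illustrative; actual signaling formats would be defined by the relevant specification.

```python
def models_from_bitmaps(group_bitmap, model_bitmap, groups):
    """Resolve selected models from a two-level bitmap indication.

    group_bitmap : '0'/'1' string, one bit per group (exactly one bit set)
    model_bitmap : '0'/'1' string, one bit per model within the chosen group
    groups       : list of lists of model IDs (illustrative structure)
    """
    g = group_bitmap.index("1")             # position of the selected group
    members = groups[g]
    # A '1' bit means the corresponding model in the group applies.
    return [m for m, bit in zip(members, model_bitmap) if bit == "1"]

groups = [[1, 2], [3, 4, 5, 6]]
# Select group 2 (bitmap "01") and its 2nd and 4th models (bitmap "0101"):
print(models_from_bitmaps("01", "0101", groups))   # [4, 6]

# Binary-valued IDs, as in the 4-bit example above:
print(int("0010", 2), int("0110", 2))              # 2 6
```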
Generally, the machine learning model is trained at the gNB and downloaded to the UE. There are some criteria according to which the model will be dropped or replaced by another model. It is highly likely that the UE may keep a table of machine learning models/auto-encoders, where the auto-encoders are pre-downloaded to the UE. The models in the table can be updated based on the frequency with which the machine learning model/auto-encoder is applied/used. If a machine learning model is used frequently, its rank will be high. Otherwise, if a machine learning model is used less frequently, it will be considered less useful and will be replaced with another desired model. Because the UE usually has limited storage space, the size of the table will be limited too. Therefore, the table should be maintained with important and useful machine learning models.
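The usage-frequency-based maintenance of a size-limited model table described above can be sketched as follows. The class name, the least-frequently-used eviction policy, and the capacity are illustrative assumptions, not details of the embodiments.

```python
class ModelTable:
    """Fixed-size table of models ranked by usage frequency (illustrative)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.usage = {}                      # model_id -> use count

    def use(self, model_id):
        self.usage[model_id] = self.usage.get(model_id, 0) + 1

    def add(self, model_id):
        if len(self.usage) >= self.capacity:
            # Evict the least frequently used model to make room.
            least = min(self.usage, key=self.usage.get)
            del self.usage[least]
        self.usage[model_id] = 0

    def ranked(self):
        # Rank 1 = most frequently used model.
        return sorted(self.usage, key=self.usage.get, reverse=True)

table = ModelTable(capacity=2)
table.add("AE1"); table.add("AE2")
table.use("AE2"); table.use("AE2"); table.use("AE1")
print(table.ranked())        # ['AE2', 'AE1']
table.add("AE3")             # evicts AE1, the least-used model
print(table.ranked())        # ['AE2', 'AE3']
```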
As an alternative, the models in the table are maintained with certain performance metrics, for example, BER (bit error rate), MSE (mean square error), spectrum efficiency, etc., or any combination of them. The performance of the conventional codebook type I or the codebook type II can be the baseline for the auto-encoder model. For example, the performance of the auto-encoder model can be evaluated based on BER and compared with that of the conventional codebook. If the performance of the auto-encoder model is worse, it will be eliminated from the table in the UE and replaced by a new auto-encoder model. The machine learning model is usually trained with the dataset collected from a specific scenario. For example, the auto-encoder is trained with data of a specific SNR range. If there is suddenly high interference, the SNR becomes very small, and the machine learning model may not be able to handle it. The machine learning model should be updated in this case. One option is to replace the current machine learning model with a backup machine learning model. Another option is to fall back to the conventional codebook type I or the conventional codebook type II.
The machine learning model is updated based on a gNB indication, which can be a MAC-CE, an RRC signaling, or a DCI field. The specific signaling indicates the ID/name and/or group of the new machine learning model. As an example, if one or some criteria are not satisfied, the model is updated. If the one or some criteria are not satisfied once in a time window, the machine learning model will be updated. For example, if the MSE or BER is higher than 10% once, the machine learning model will be updated. If no event is reported during the time window, the machine learning model will not be updated. Alternatively, if the one or some criteria are not satisfied a maximum number of times during a time window, the machine learning model will be updated. For example, if the MSE or BER higher than 10% is reported a maximum of twice, the machine learning model will be updated. When the time window expires, the counter is reset. If the UE will enter a new scenario and a new machine learning model/auto-encoder is applied/functioning, the machine learning model will be updated. For example, when the input inference data is out of the range of the training data set of the current machine learning model, one option is falling back to the conventional schemes, e.g., the conventional codebook type I or the conventional codebook type II. Another option is to switch to a general model obtained from machine learning.
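The time-windowed updating criteria above (update on a single exceedance, or only after a configured maximum number of exceedances within the window) can be sketched as follows. The event values, the 10% threshold, and the function shape are illustrative assumptions.

```python
def needs_update(events, window_start, window_end, threshold, max_count):
    """Decide whether to update the model.

    events    : list of (time, metric) reports, e.g. BER or MSE values
    threshold : metric level whose exceedance counts as a criterion failure
    max_count : update only when exceedances exceed this count
                (max_count=0 reproduces the 'once' criterion above)
    """
    count = sum(1 for t, metric in events
                if window_start <= t <= window_end and metric > threshold)
    return count > max_count

# BER reports as (time, value); threshold 10% as in the example above.
events = [(1, 0.05), (3, 0.12), (7, 0.15)]
print(needs_update(events, 0, 10, 0.10, max_count=0))  # True: exceeded once
print(needs_update(events, 0, 10, 0.10, max_count=2))  # False: only 2 exceedances
```

A real implementation would also reset the counter when the window expires, as the text describes.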
Some embodiments describe the feedback of machine learning performances. To check the performance of the auto-encoder, the gNB needs to check the BER/throughput, which can be statistically known over a time duration. If a new metric, e.g., the MSE (mean square error), is adopted, the gNB needs to know the ground truth of the CSI-RS value at the input of the auto-encoder. The ground truth value of the CSI-RS should be sent back to the gNB. In an example, the ground truth value of the CSI-RS is sent back to the gNB periodically, which is configured by an RRC signaling. In another example, the ground truth value of the CSI-RS is sent back to the gNB semi-persistently; it is configured by a MAC-CE or an RRC signaling and triggered by a DCI field. In another example, the ground truth value of the CSI-RS is sent back to the gNB aperiodically; it is triggered by a DCI field.
In the long run (a longer time compared to the model updating cycle), if the model keeps being updated due to low performance, the model can be dropped, and a new model, e.g., with higher complexity or a different structure, is deployed. For example, if the model is updated a maximum number of times, e.g., 10 times in 5 minutes, the model will be replaced with a new model, e.g., with higher complexity or a different structure. After the drop of the machine learning model, there can be an empty period, during which no machine learning model is deployed, if no machine learning model has a satisfying performance. During the empty period, the conventional schemes will be applied, e.g., the codebook type I or the codebook type II. A new machine learning model will be deployed after it is trained and tested with a satisfying performance. After the deployment, the empty period ends.
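The long-run drop criterion above (e.g., updated 10 times within 5 minutes) can be sketched as counting recent update events over a sliding window. The function name, time units, and defaults are illustrative assumptions.

```python
def should_drop(update_times, now, window_s=300, max_updates=10):
    """Drop/replace the model if it was updated at least `max_updates`
    times within the last `window_s` seconds (e.g. 10 times in 5 minutes)."""
    recent = [t for t in update_times if now - window_s <= t <= now]
    return len(recent) >= max_updates

updates = [10 + 25 * i for i in range(10)]   # 10 updates within ~225 s
print(should_drop(updates, now=240))          # True → drop, enter empty period
print(should_drop(updates[:5], now=240))      # False → keep the model
```

During the resulting empty period, the conventional codebook type I or type II would serve as the fallback until a new model is trained and tested.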
Embodiment 2: Auto-encoder switching
Auto-encoder switching and auto-encoder updating both refer to the replacement of one auto-encoder by another auto-encoder. By auto-encoder updating, we mean a more general case of auto-encoder replacement; especially, it refers to one auto-encoder replaced by another one within the same group, as in Embodiment 1. On the other hand, by auto-encoder switching, we mean the auto-encoder replaced by another one of another group, e.g., the one of the indoor is replaced by the one of the outdoor. In some cases, the updating and the switching may be used interchangeably.
The auto-encoders at the UE and the gNB, respectively, are a kind of new codebook. A new RRC entry at the reportConfig of the CSI report configuration should be defined for the auto-encoder. If the new entry is configured for both the new auto-encoder and the current auto-encoder, the new auto-encoder is applied with the pre-configured report settings, and the auto-encoder switching is completed. On the other hand, if the report configuration for the new auto-encoder is not pre-configured, in one example, a new signaling of a new DCI field/a MAC-CE can be defined to activate the new auto-encoder. In another example, if a new signaling is not defined, the UE can fall back to the conventional codebook, with a flag (one bit) in the UCI indicating the type of the conventional codebook; further, the RRC reconfiguration is initiated. If the new codebook is pre-stored/pre-downloaded at the UE, there will be no downloading procedures, and the RRC for the report configuration will proceed. Otherwise, if the new codebook is not pre-downloaded at the UE, the gNB will initiate the downloading process. As an example, the auto-encoder of the UE side will be downloaded via the data channel from the gNB. In another example, the auto-encoder is downloaded from a third node.
As an example, the machine learning model switching is gNB specific, because the training of the machine learning model is generally at the gNB, and the data collection will send the data to the gNB. As an example, gNB1 and gNB2 are able to judge whether the data collected by them contribute to the same training data set. Thus, when a UE is handed over from gNB1 to gNB2, there should be a judgement whether the environment around this UE is changed, e.g., gNB1 is indoor and gNB2 is outdoor. In one case, it is determined by the UE: if the UE considers the environment has changed, it will request the gNB to change the machine learning model. As an example, the request is a newly defined signaling in the UCI. Otherwise, there will be an indication, e.g., in the UCI, that there is no change of the machine learning model. In another case, it is determined by the gNB: when gNB1 and gNB2 are able to judge that the data collected by them do not contribute to the same training data set, as an example, gNB2 will inform the UE to pre-configure the new machine learning model, e.g., by a DCI, a MAC-CE, or an RRC signaling. The new machine learning model will begin to function after handover. Otherwise, there will be an indication by a DCI, a MAC-CE, or an RRC signaling, after handover, that there is no change of the machine learning model.
As an alternative, when gNB1 and gNB2 are able to judge that the data collected by them do not contribute to the same training data set, gNB2 will inform gNB1, and gNB1 will then inform the UE to pre-arrange the new machine learning model, e.g., by a DCI, a MAC-CE, or an RRC signaling, if it is decided that the UE will be handed over to gNB2. The new machine learning model will begin to function after handover. Otherwise, there will be an indication by a DCI, a MAC-CE, or an RRC signaling, after handover, that there is no change of the machine learning model.
Further, as an example, the machine learning model switching is gNB specific. If the wireless environment is not changed, there is no codebook (auto-encoder model) switching. If the wireless environment has been changed, the codebook (auto-encoder model) switching is performed.
Embodiment 3: The time window of the auto-encoder switching
FIG. 8 is a schematic diagram illustrating an example of an indication of a new auto-encoder and a switching time according to an embodiment of the present disclosure. FIG. 8 illustrates that, as an example, there can be a time window, beginning from the indication of the new codebook and ending at the time when the new codebook begins to function. The size of this time window is related to the UE capability. Some embodiments define this time window as the switching time. The beginning time is counted from the receipt of the PDCCH indicating the auto-encoder model switching. Afterwards, if a report falls into the switching time window, the report data is processed using the current auto-encoder model, which is in use before the receipt of the indicating DCI. If the reporting time is larger than the switching time, the reporting data is processed using the indicated new auto-encoder. As an example, the switching time is configured by the gNB to a UE. As another example, portions of the switching time are configured by the connected gNB to a UE. As an example, the switching time is reported by the UE as a UE capability. As another example, portions of the switching time are reported by the UE as the UE capability. As an example, the switching time is made up of the DCI/MAC-CE processing time plus the data processing time of the auto-encoder input. As another example, the switching time is one value reported by the UE, or configured by the gNB.
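The routing of reports relative to the switching time window can be sketched as follows. The slot numbering and the 4-slot switching time are illustrative assumptions, not values from the embodiments.

```python
def model_for_report(report_time, pdcch_time, switching_time,
                     current_model, new_model):
    """During the switching window (starting at PDCCH receipt), reports
    are still produced with the current model; reports after the window
    use the indicated new model."""
    if report_time < pdcch_time + switching_time:
        return current_model     # report falls inside the switching window
    return new_model

# Switching indicated at slot 100, switching time of 4 slots (illustrative):
print(model_for_report(102, 100, 4, "AE_old", "AE_new"))  # AE_old
print(model_for_report(106, 100, 4, "AE_old", "AE_new"))  # AE_new
```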
After the switching time, the indicated new codebook/auto-encoder begins to function. The codebook/auto-encoder model taking effect during the switching time is described as follows. As an option, the old/legacy auto-encoder model continues to take effect until the switching time ends, which is the case in FIG. 8. As another option, the legacy auto-encoder model falls back to the conventional codebook type I or codebook type II, and the conventional codebook will take effect until the switching time ends. The timeline follows the current 3GPP definitions of the PDCCH processing ability time and the PUCCH processing ability time (N1, N2). If there is any uplink feedback before N2, the old auto-encoder model applies. After time N2, the conventional codebook type I or codebook type II applies. After the switching time (the switching time is larger than N2), the indicated new auto-encoder begins to function.
FIG. 9 is a schematic diagram illustrating an example of an auto-encoder switching from the current one to a general one, and further switching to a new one, according to an embodiment of the present disclosure. FIG. 9 illustrates that, as another option, the legacy auto-encoder model falls back to a general one. The general model (auto-encoder model) is trained based on a large amount of data. Its performance is a tradeoff of several machine learning models of low complexity; however, it has general applicability. The general model also follows the switching time. The details are shown in FIG. 9.
In FIG. 9, there are two switching times. Switching time 1 is the time lag of switching from the current auto-encoder model to a general auto-encoder model. Switching time 2 is the time lag of switching from the general auto-encoder model to auto-encoder model 2. In an example, switching time 1 is the same as switching time 2; there is only one switching time during the operation of a UE, which is reported by the UE and depends on the UE capability. In another example, the switching time is configured by the gNB. In another example, there is more than one switching time of a UE, that is, switching time 1 is different from switching time 2; it depends on the UE capability and the model complexity, and the switching times are reported by the UE. In another example, the switching times are configured by the gNB.
As an example of non-fallback, the auto-encoder of the previous scenario will take effect until the auto-encoder model of the new scenario functions. If a UE moves into a building from outside, the auto-encoder model of the outdoor scenario will take effect during the switching time, until the auto-encoder model of the indoor scenario begins to function when the switching time ends and the UE has possibly entered the building.
Further, switching from auto-encoder 1 to a fallback (a general one), and then to auto-encoder 2, is provided. In an example, an intermediate state is necessary. In an example, the intermediate state comprises a fallback codebook. As an alternative, a general codebook from machine learning is provided. In an example, the fallback/non-fallback is configurable.
Embodiment 4: Indirect auto-encoder switching
In the above embodiments, whether switching of auto-encoders with a fallback codebook is optional is not clarified. In this example, we consider it mandatory to fall back to another codebook/machine learning model before switching to the target machine learning model. The reason is as follows.
When the UE moves into a new environment for the first time, the performance of the machine learning model is not guaranteed. Falling back to a conventional model ensures a performance baseline. Another reason is that the UE can be pre-connected to an indoor gNB when it moves into a building from outside, or pre-connected to an outdoor gNB when it moves outside from within a building. In the intermediate stage, neither the legacy machine learning model nor the target new machine learning model works, so it is wise to fall back to a conventional codebook. Alternatively, the UE falls back to a general machine learning model. The general machine learning model (a general auto-encoder model) is trained based on a large amount of data but with average performance. Its role is to take effect in the intermediate stage.
As an example, when the UE moves into a new scenario, whether the machine learning model switching is direct or indirect is configurable by the gNB with a DCI field, a MAC-CE, or RRC signaling. We consider the replacement of machine learning model 1 by the target machine learning model 2 as direct switching. On the other hand, we consider the machine learning model switching with an intermediate stage as indirect switching, e.g., switching to a conventional codebook and then switching to the target machine learning model. As another example, whether direct or indirect switching is used is configurable by the gNB and decided by the UE speed. If a UE moves faster than a given speed, e.g., 120 km/h, the UE can only take direct auto-encoder switching.
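The selection logic above can be sketched as follows. The function name `select_switching_mode` and the way the gNB configuration and the speed rule are combined are illustrative assumptions, not normative behavior; the 120 km/h threshold is the example value from the text.

```python
# Sketch of choosing direct vs. indirect machine learning model
# switching. The speed rule and gNB configurability follow the text
# above; names such as select_switching_mode are hypothetical.

DIRECT, INDIRECT = "direct", "indirect"
SPEED_THRESHOLD_KMH = 120.0  # example threshold from the text

def select_switching_mode(gnb_configured_mode, ue_speed_kmh):
    """Pick the switching mode for a UE entering a new scenario.

    gnb_configured_mode: mode configured by the gNB via a DCI field,
    MAC-CE, or RRC signaling, or None if no mode is configured.
    """
    if ue_speed_kmh > SPEED_THRESHOLD_KMH:
        # A fast-moving UE can only take direct auto-encoder switching.
        return DIRECT
    if gnb_configured_mode is not None:
        return gnb_configured_mode
    # Default in this sketch: indirect switching through a fallback
    # codebook or a general model in the intermediate stage.
    return INDIRECT

assert select_switching_mode(None, 130.0) == "direct"
assert select_switching_mode("direct", 30.0) == "direct"
assert select_switching_mode(None, 30.0) == "indirect"
```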
In summary, some embodiments of this disclosure are about the design of the machine learning model life cycle and the management of machine learning models. The machine learning model, e.g., the auto-encoder for channel compression, should be updated based on certain criteria regarding the input data or the feedback measures. That ensures the performance of the auto-encoder model. The second aspect in some embodiments of this disclosure is the machine learning model switching, e.g., the auto-encoder model switching during UE operation. The machine learning model is trained based on the data of a specific scenario. When the scenario changes, the auto-encoder model should be switched to another model accordingly. That relates to the tradeoff between the generalization and the performance of a machine learning model. Further, some embodiments of this disclosure solve both problems. The first is the update of the auto-encoder model: the model should be updated based on certain criteria, which ensures the performance of the auto-encoder model. The second is the auto-encoder model switching during UE operation: because the machine learning model is trained on the data of a specific scenario, when the scenario changes, the auto-encoder model should be switched to another model accordingly. That is a necessary step to guarantee the performance of the machine learning models.
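One of the update criteria described in this disclosure (a performance metric exceeding a threshold more than a configured maximum number of times) can be sketched as a simple trigger. The class name `UpdateTrigger`, the metric, and the counting scheme are assumptions made for illustration only.

```python
# Illustrative update trigger for a machine learning model: the model
# is updated when a performance metric (e.g. the reconstruction error
# of the auto-encoder) exceeds a threshold more than a maximum number
# of times, which in the text is configured by the base station.

class UpdateTrigger:
    def __init__(self, threshold, max_occurrences):
        self.threshold = threshold              # the "first value"
        self.max_occurrences = max_occurrences  # configured by the gNB
        self.count = 0

    def report_metric(self, error):
        """Feed one performance sample; return True when an update of
        the machine learning model should be performed."""
        if error > self.threshold:
            self.count += 1
        return self.count > self.max_occurrences

trigger = UpdateTrigger(threshold=0.3, max_occurrences=2)
decisions = [trigger.report_metric(e) for e in (0.1, 0.4, 0.5, 0.6)]
assert decisions == [False, False, False, True]
```

Setting `max_occurrences=0` would correspond to the other criterion in the text, where a single occurrence above the threshold triggers the update.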
FIG. 10 is a block diagram of an example system 700 for wireless communication according to an embodiment of the present disclosure. Embodiments described herein may be implemented into the system using any suitably configured hardware and/or software. FIG. 10 illustrates the system 700 including a radio frequency (RF) circuitry 710, a baseband circuitry 720, an application circuitry 730, a memory/storage 740, a display 750, a camera 760, a sensor 770, and an input/output (I/O) interface 780, coupled with each other at least as illustrated. The application circuitry 730 may include circuitry such as, but not limited to, one or more single-core or multi-core processors. The processors may include any combination of general-purpose processors and dedicated processors, such as graphics processors and application processors. The processors may be coupled with the memory/storage and configured to execute instructions stored in the memory/storage to enable various applications and/or operating systems running on the system.
While the present disclosure has been described in connection with what is considered the most practical and preferred embodiments, it is understood that the present disclosure is not limited to the disclosed embodiments but is intended to cover various arrangements made without departing from the scope of the broadest interpretation of the appended claims.

Claims (53)

  1. A wireless communication method based on machine learning/artificial intelligence by a user equipment (UE) , comprising:
    maintaining one or more machine learning models by one or more tables, lists, or groups based on machine learning; and
    performing a machine learning model updating or a machine learning model switching, wherein the machine learning model updating or the machine learning model switching comprises a replacement of a first machine learning model by a second machine learning model, and the machine learning model updating is based on one or more machine learning model performance conditions or an indication from a first base station explicitly or implicitly, or from a UE request.
  2. The wireless communication method based on machine learning/artificial intelligence by the UE according to claim 1, wherein the machine learning models are maintained in groups.
  3. The wireless communication method based on machine learning/artificial intelligence by the UE according to claim 2, wherein the machine learning models are grouped based on one or any combination of the following factors: a channel model, channel parameters, a signal to noise ratio (SNR) range, a model complexity, a UE capability, a modulation order, a rank, a bandwidth part (BWP) size, a delay spread, a doppler frequency shift, antenna ports, and/or an antenna geometry.
  4. The wireless communication method based on machine learning/artificial intelligence by the UE according to claim 2 or 3, wherein a grouping of the machine learning models is indicated to the UE by a downlink control information (DCI) , a medium access control (MAC) control element (CE) , a radio resource control (RRC) signaling, a bitmap, or a multiple-level bitmap.
  5. The wireless communication method based on machine learning/artificial intelligence by the UE according to any one of claims 1 to 4, wherein the machine learning models in the table or the list based on machine learning are updated based on usage frequencies of the machine learning models, performances of the machine learning models, and/or the UE request.
  6. The wireless communication method based on machine learning/artificial intelligence by the UE according to any one of claims 1 to 5, wherein the one or more machine learning model performance conditions comprise: if a machine learning model performance is higher than a first value once, the UE performs the machine learning model updating;
    if the machine learning model performance is higher than the first value and if a number of occurrences of the first value exceeds a maximum number of times configured by the first base station, the UE performs the machine learning model updating; or
    if the UE enters a new scenario and a new machine learning model is applied/functioning/to be deployed in the new scenario, the UE performs the machine learning model updating.
  7. The wireless communication method based on machine learning/artificial intelligence by the UE according to any one of claims 1 to 6, wherein the indication from the first base station comprises a MAC-CE, an RRC signaling, or a DCI field used to indicate an identifier (ID) /a name of the second machine learning model, and/or one or more levels or bitmaps to retrieve one or more second machine learning models in a group.
  8. The wireless communication method based on machine learning/artificial intelligence by the UE according to  any one of claims 1 to 7, wherein the replacement of the first machine learning model by the second machine learning model comprises a replacement of a current machine learning model by a new machine learning model, a replacement of the current machine learning model by a backup machine learning model, falling back to a conventional processing method comprising a conventional codebook type I or a conventional codebook type II, or switching to a relative general model obtained from machine learning.
  9. The wireless communication method based on machine learning/artificial intelligence by the UE according to any one of claims 1 to 8, wherein the machine learning model updating comprises the replacement of the first machine learning model by the second machine learning model in a same group, and the machine learning model switching comprises the replacement of the first machine learning model by the second machine learning model in different groups.
  10. The wireless communication method based on machine learning/artificial intelligence by the UE according to any one of claims 1 to 9, wherein the machine learning model switching comprises at least one of the followings: new RRC entries at a CSI report configuration are defined for the machine learning models to indicate a selection of one machine learning model for the CSI report;
    if the new RRC entries are configured for both the new machine learning model and the current machine learning model as two report settings, the reporting is switched to that using the new machine learning model by the first base station using a MAC-CE or a DCI field;
    if the CSI report configuration for the new machine learning model is not pre-configured, a new signaling of a new DCI field and/or a MAC/CE is defined to activate the new machine learning model as the new codebook and report the CSI based on the new codebook;
    if the CSI report configuration for the new machine learning model is not pre-configured and the new signaling is not defined, the UE falls back to a conventional codebook comprising a codebook type I, a codebook type II, or a relative general machine learning model, with a flag in an uplink control information (UCI) indicating the type of the codebook, and an RRC configuration for the deployment of the new machine learning model is initiated; or
    if the CSI report configuration for the new machine learning model is not pre-configured and the new signaling is not defined, the UE falls back to a predefined codebook comprising the codebook type I, the codebook type II, or the relative general machine learning model, and an RRC configuration for the deployment of the new machine learning model is initiated.
  11. The wireless communication method based on machine learning/artificial intelligence by the UE according to claim 10, wherein if the RRC reconfiguration is initiated and the new codebook is not pre-downloaded at the UE, the downloading process is initiated by the first base station.
  12. The wireless communication method based on machine learning/artificial intelligence by the UE according to claim 11, wherein the machine learning model of the UE is downloaded via a data channel from the first base station, or the machine learning model is downloaded from a third node.
  13. The wireless communication method based on machine learning/artificial intelligence by the UE according to any one of claims 1 to 12, wherein when the UE is handed over from the first base station to a second base station, if the UE considers an environment around the UE is changed, the UE requests the second base station to change the machine learning model, and the request is in the UCI.
  14. The wireless communication method based on machine learning/artificial intelligence by the UE according to any one of claims 1 to 12, wherein when the UE is handed over from the first base station to a second base station, if the first base station or the second base station considers an environment around the UE is changed, the first base station and the second base station are able to judge that data collected by the first base station and the second base station do not contribute to a same training data set, and the second base station informs the UE to pre-arrange the new machine learning model by a DCI, a MAC-CE, or an RRC signaling.
  15. The wireless communication method based on machine learning/artificial intelligence by the UE according to any one of claims 1 to 12, wherein when the UE is handed over from the first base station to a second base station, if the first base station or the second base station considers an environment around the UE is changed, the first base station and the second base station are able to judge that data collected by the first base station and the second base station do not contribute to a same training data set, and the second base station informs the first base station to inform the UE to pre-arrange the new machine learning model by a DCI, a MAC-CE, or an RRC signaling.
  16. The wireless communication method based on machine learning/artificial intelligence by the UE according to any one of claims 1 to 15, wherein the machine learning model switching comprises a time window beginning from an indication of switching to a new codebook/machine learning model and ending at time when the new codebook/machine learning model begins to function, and the time window is a switching time.
  17. The wireless communication method based on machine learning/artificial intelligence by the UE according to claim 16, wherein a size of the time window is related to the UE capability.
  18. The wireless communication method based on machine learning/artificial intelligence by the UE according to claim 16 or 17, wherein if a beginning time of the switching time is accounted from a receipt of a physical downlink control channel (PDCCH) indicating the machine learning model switching and if a CSI report falls into the switching time, the CSI report is processed by the current machine learning model, which is in use before the receiving of the PDCCH.
  19. The wireless communication method based on machine learning/artificial intelligence by the UE according to claim 16 or 17, wherein if a beginning time of the switching time is accounted from the receiving of the PDCCH indicating the machine learning model switching and if a reporting time of a report is larger than the switching time, the report is obtained using a new machine learning model.
  20. The wireless communication method based on machine learning/artificial intelligence by the UE according to any one of claims 16 to 19, wherein the switching time is configured by the first base station to the UE, or a portion of the switching time is configured by the first base station to the UE.
  21. The wireless communication method based on machine learning/artificial intelligence by the UE according to any one of claims 16 to 19, wherein the switching time is reported by the UE as a UE capability, or a portion of the switching time is reported by the UE as the UE capability.
  22. The wireless communication method based on machine learning/artificial intelligence by the UE according to any one of claims 16 to 21, wherein the switching time comprises a DCI/MAC-CE processing time plus a data processing time of an input of the current machine learning model, or the switching time is a value reported by the UE or configured by the first base station.
  23. The wireless communication method based on machine learning/artificial intelligence by the UE according to any one of claims 16 to 22, wherein an old/legacy machine learning model continues to take effect until the switching time ends, or the old/legacy machine learning model falls back to a conventional codebook type I or a conventional codebook type II, and the conventional codebook type I or the conventional codebook type II continues to take effect until the switching time ends.
  24. A wireless communication method based on machine learning/artificial intelligence by a base station, comprising:
    maintaining or managing one or more machine learning models by one or more tables, lists, or groups based on machine learning; and
    controlling a user equipment (UE) to perform a machine learning model updating or a machine learning model switching, wherein the machine learning model updating or the machine learning model switching comprises a replacement of a first machine learning model by a second machine learning model, and the machine learning model updating is based on one or more machine learning model performance conditions or an indication from a first base station explicitly or implicitly, or from a UE request.
  25. The wireless communication method based on machine learning/artificial intelligence by the first base station according to claim 24, wherein the machine learning models are maintained in groups.
  26. The wireless communication method based on machine learning/artificial intelligence by the first base station according to claim 25, wherein the machine learning models are grouped based on one or any combination of the following factors: a channel model, channel parameters, a signal to noise ratio (SNR) range, a model complexity, a UE capability, a modulation order, a rank, a bandwidth part (BWP) size, a delay spread, a doppler frequency shift, antenna ports, and/or an antenna geometry.
  27. The wireless communication method based on machine learning/artificial intelligence by the first base station according to claim 25 or 26, wherein a grouping of the machine learning models is indicated to the UE by a downlink control information (DCI) , a medium access control (MAC) control element (CE) , a radio resource control (RRC) signaling, a bitmap, or a multiple-level bitmap.
  28. The wireless communication method based on machine learning/artificial intelligence by the first base station according to any one of claims 24 to 27, wherein the machine learning models in the table or the list based on machine learning are updated based on usage frequencies of the machine learning models, performances of the machine learning models, and/or the UE request.
  29. The wireless communication method based on machine learning/artificial intelligence by the first base station according to any one of claims 24 to 28, wherein the one or more machine learning model performance conditions comprise:
    if a machine learning model performance is higher than a first value once, the first base station controls the UE to perform the machine learning model updating;
    if the machine learning model performance is higher than the first value and if a number of occurrences of the first value exceeds a maximum number of times configured by the first base station, the first base station controls the UE to perform the machine learning model updating; or
    if the UE enters a new scenario and a new machine learning model is applied/functioning/to be deployed in the new scenario, the UE performs the machine learning model updating.
  30. The wireless communication method based on machine learning/artificial intelligence by the first base station according to any one of claims 24 to 29, wherein the indication from the first base station comprises a MAC-CE, an RRC signaling, or a DCI field used to indicate an identifier (ID) /a name of the second machine learning model, and/or one or more levels or bitmaps to retrieve one or more second machine learning models in a group.
  31. The wireless communication method based on machine learning/artificial intelligence by the first base station according to any one of claims 24 to 30, wherein the replacement of the first machine learning model by the second machine learning model comprises a replacement of a current machine learning model by a new machine learning model, a replacement of the current machine learning model by a backup machine learning model, falling back to a conventional processing method comprising a conventional codebook type I or a conventional codebook type II, or switching to a relative general model obtained from machine learning.
  32. The wireless communication method based on machine learning/artificial intelligence by the first base station according to any one of claims 24 to 31, wherein the machine learning model updating comprises the replacement of the first machine learning model by the second machine learning model in a same group, and the machine learning model switching comprises the replacement of the first machine learning model by the second machine learning model in different groups.
  33. The wireless communication method based on machine learning/artificial intelligence by the first base station according to any one of claims 24 to 32, wherein the machine learning model switching comprises at least one of the followings:
    new RRC entries at a CSI report configuration are defined for the machine learning models to indicate a selection of one machine learning model for the CSI report;
    if the new RRC entries are configured for both the new machine learning model and the current machine learning model as two report settings, the reporting is switched to that using the new machine learning model by the first base station using a MAC-CE or a DCI field;
    if the CSI report configuration for the new machine learning model is not pre-configured, a new signaling of a new DCI field and/or a MAC/CE is defined to activate the new machine learning model as the new codebook and report the CSI based on the new codebook;
    if the CSI report configuration for the new machine learning model is not pre-configured and the new signaling is not defined, the UE falls back to a conventional codebook comprising a codebook type I, a codebook type II, or a relative general machine learning model, with a flag in an uplink control information (UCI) indicating the type of the codebook, and an RRC configuration for the deployment of the new machine learning model is initiated; or
    if the CSI report configuration for the new machine learning model is not pre-configured and the new signaling is not defined, the UE falls back to a predefined codebook comprising the codebook type I, the codebook type II, or the relative general machine learning model, and an RRC configuration for the deployment of the new machine learning model is initiated.
  34. The wireless communication method based on machine learning/artificial intelligence by the first base station according to claim 33, wherein if the RRC reconfiguration is initiated and the new codebook is not pre-downloaded at the UE, the downloading process is initiated by the first base station.
  35. The wireless communication method based on machine learning/artificial intelligence by the first base station according to claim 34, wherein the machine learning model of the UE is downloaded via a data channel from  the first base station, or the machine learning model is downloaded from a third node.
  36. The wireless communication method based on machine learning/artificial intelligence by the first base station according to any one of claims 24 to 35, wherein when the UE is handed over from the first base station to a second base station, if the UE considers an environment around the UE is changed, the UE requests the second base station to change the machine learning model, and the request is in the UCI.
  37. The wireless communication method based on machine learning/artificial intelligence by the first base station according to any one of claims 24 to 35, wherein when the UE is handed over from the first base station to a second base station, if the first base station or the second base station considers an environment around the UE is changed, the first base station and the second base station are able to judge that data collected by the first base station and the second base station do not contribute to a same training data set, and the second base station informs the UE to pre-arrange the new machine learning model by a DCI, a MAC-CE, or an RRC signaling.
  38. The wireless communication method based on machine learning/artificial intelligence by the first base station according to any one of claims 24 to 35, wherein when the UE is handed over from the first base station to a second base station, if the first base station or the second base station considers an environment around the UE is changed, the first base station and the second base station are able to judge that data collected by the first base station and the second base station do not contribute to a same training data set, and the second base station informs the first base station to inform the UE to pre-arrange the new machine learning model by a DCI, a MAC-CE, or an RRC signaling.
  39. The wireless communication method based on machine learning/artificial intelligence by the first base station according to any one of claims 24 to 38, wherein the machine learning model switching comprises a time window beginning from an indication of switching to a new codebook/machine learning model and ending at time when the new codebook/machine learning model begins to function, and the time window is a switching time.
  40. The wireless communication method based on machine learning/artificial intelligence by the first base station according to claim 39, wherein a size of the time window is related to the UE capability.
  41. The wireless communication method based on machine learning/artificial intelligence by the first base station according to claim 39 or 40, wherein if a beginning time of the switching time is accounted from a receipt of a physical downlink control channel (PDCCH) indicating the machine learning model switching and if a CSI report falls into the switching time, the CSI report is processed by the current machine learning model, which is in use before the receiving of the PDCCH.
  42. The wireless communication method based on machine learning/artificial intelligence by the first base station according to claim 39 or 40, wherein if a beginning time of the switching time is accounted from the receiving of the PDCCH indicating the machine learning model switching and if a reporting time of a report is larger than the switching time, the report is obtained using a new machine learning model.
  43. The wireless communication method based on machine learning/artificial intelligence by the first base station according to any one of claims 39 to 42, wherein the switching time is configured by the first base station to the UE, or a portion of the switching time is configured by the first base station to the UE.
  44. The wireless communication method based on machine learning/artificial intelligence by the first base station according to any one of claims 39 to 42, wherein the switching time is reported by the UE as a UE capability, or a portion of the switching time is reported by the UE as the UE capability.
  45. The wireless communication method based on machine learning/artificial intelligence by the first base station according to any one of claims 39 to 44, wherein the switching time comprises a DCI/MAC-CE processing time plus a data processing time of an input of the current machine learning model, or the switching time is a value reported by the UE or configured by the first base station.
  46. The wireless communication method based on machine learning/artificial intelligence by the first base station according to any one of claims 39 to 45, wherein an old/legacy machine learning model continues to take effect until the switching time ends, or the old/legacy machine learning model falls back to a conventional codebook type I or a conventional codebook type II, and the conventional codebook type I or the conventional codebook type II continues to take effect until the switching time ends.
  47. A user equipment (UE) , comprising:
    a memory;
    a transceiver; and
    a processor coupled to the memory and the transceiver;
    wherein the processor is configured to execute the method of any one of claims 1 to 23.
  48. A base station, comprising:
    a memory;
    a transceiver; and
    a processor coupled to the memory and the transceiver;
    wherein the processor is configured to execute the method of any one of claims 24 to 46.
  49. A non-transitory machine-readable storage medium having stored thereon instructions that, when executed by a computer, cause the computer to perform the method of any one of claims 1 to 46.
  50. A chip, comprising:
    a processor, configured to call and run a computer program stored in a memory, to cause a device in which the chip is installed to execute the method of any one of claims 1 to 46.
  51. A computer readable storage medium, in which a computer program is stored, wherein the computer program causes a computer to execute the method of any one of claims 1 to 46.
  52. A computer program product, comprising a computer program, wherein the computer program causes a computer to execute the method of any one of claims 1 to 46.
  53. A computer program, wherein the computer program causes a computer to execute the method of any one of claims 1 to 46.
PCT/CN2022/072400 2022-01-17 2022-01-17 User equipment, base station, and wireless communication methods based on machine learning/artificial intelligence WO2023133897A1 (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021038382A1 (en) * 2019-08-26 2021-03-04 International Business Machines Corporation Generating environment information using wireless communication
US20210326701A1 (en) * 2020-04-16 2021-10-21 Qualcomm Incorporated Architecture for machine learning (ml) assisted communications networks
CN113570063A (en) * 2020-04-28 2021-10-29 大唐移动通信设备有限公司 Machine learning model parameter transmission method and device


JP2018534840A (en) Weight value acquisition method and apparatus
CN102843210B (en) Self-adaptive rank selection method

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 22919543

Country of ref document: EP

Kind code of ref document: A1