WO2023245515A1 - Monitoring method and wireless communication device - Google Patents
- Publication number: WO2023245515A1 (PCT/CN2022/100553)
- Authority: WIPO (PCT)
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W24/00—Supervisory, monitoring or testing arrangements
- H04W24/08—Testing, supervising or monitoring using real traffic
- H04W24/10—Scheduling measurement reports; Arrangements for measurement reports
Definitions
- the present disclosure relates to the field of communication systems, and more particularly, to a monitoring method and a wireless communication device.
- Wireless communication systems such as the third-generation (3G) of mobile telephone standards and technology are well known.
- 3G standards and technology have been developed by the Third Generation Partnership Project (3GPP) .
- the 3rd generation of wireless communications has generally been developed to support macro-cell mobile phone communications.
- Communication systems and networks have developed toward broadband, mobile systems.
- the RAN comprises a set of base stations (BSs) that provide wireless links to the UEs located in cells covered by the base station, and an interface to a core network (CN) which provides overall network control.
- the RAN and CN each conduct respective functions in relation to the overall network.
- LTE Long Term Evolution
- E-UTRAN Evolved Universal Mobile Telecommunication System Terrestrial Radio Access Network
- 5G or NR new radio
- AI Artificial Intelligence
- ML Machine Learning
- CSI channel state information
- the beam selection is based on the measurement of channel state information (CSI) -reference signal (CSI-RS) /synchronization signal block (SSB) .
- the AI/ML model should be monitored for proper operation. For example, the deployed AI/ML model is monitored for a determination as to whether a beam predicted by the AI/ML model is accurate for beam management, whether a positioning performed by the AI/ML model is still accurate, and/or whether reported CSI can be fully recovered.
- ML model monitoring is critical to ML model deployment, yet how to monitor an AI/ML model in telecommunication has received little discussion.
- a monitoring method for machine learning models in a wireless communication device for telecommunication is desired.
- An object of the present disclosure is to propose a wireless communication device, such as a user equipment (UE) or a base station, and a monitoring method.
- an embodiment of the invention provides a monitoring method for monitoring machine learning (ML) models, executable in at least one wireless communication device, comprising: using a first ML model as a monitored ML model to work for a cellular communication task; using a second ML model as a monitoring ML model for monitoring of the first ML model; and evaluating performance of the monitored ML model based on the monitoring.
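The claimed steps can be sketched as follows. This is a minimal illustrative sketch, not the disclosed implementation: the model callables, the mean-absolute-difference metric, and the threshold are all assumptions introduced for illustration.

```python
# Illustrative sketch: a second ML model (monitoring model) monitors a first
# ML model (monitored model) by comparing their outputs on the same inputs.
# All names and values here are hypothetical, not from the disclosure.

def evaluate_monitored_model(monitored_model, monitoring_model, inputs, threshold):
    """Return (ok, mean_diff): ok is True when the monitored model's outputs
    stay close to the monitoring model's outputs (mean absolute difference
    under the threshold)."""
    diffs = [abs(monitored_model(x) - monitoring_model(x)) for x in inputs]
    mean_diff = sum(diffs) / len(diffs)
    return mean_diff < threshold, mean_diff

# Toy stand-ins for deployed models working on a cellular communication task.
monitored = lambda x: 2.0 * x          # deployed model under monitoring
monitoring = lambda x: 2.0 * x + 0.1   # monitoring model

ok, diff = evaluate_monitored_model(monitored, monitoring, [1.0, 2.0, 3.0], threshold=0.5)
```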
- an embodiment of the invention provides a wireless communication device comprising a processor configured to call and run a computer program stored in a memory, to cause a device in which the processor is installed to execute the disclosed method.
- the disclosed method may be implemented in a chip.
- the chip may include a processor, configured to call and run a computer program stored in a memory, to cause a device in which the chip is installed to execute the disclosed method.
- the disclosed method may be programmed as computer-executable instructions stored in non-transitory computer-readable medium.
- the non-transitory computer-readable medium when loaded to a computer, directs a processor of the computer to execute the disclosed method.
- the non-transitory computer-readable medium may comprise at least one from a group consisting of: a hard disk, a CD-ROM, an optical storage device, a magnetic storage device, a Read Only Memory, a Programmable Read Only Memory, an Erasable Programmable Read Only Memory, EPROM, an Electrically Erasable Programmable Read Only Memory and a Flash memory.
- the disclosed method may be programmed as a computer program product, which causes a computer to execute the disclosed method.
- the disclosed method may be programmed as a computer program, which causes a computer to execute the disclosed method.
- the invention provides embodiments to address problems in the monitoring of AI/ML models.
- monitoring of an AI/ML model is no longer constrained by the time required for collecting ground truth data, and the AI/ML model can be monitored in a timely manner.
- monitoring an ML model is conventionally constrained by collecting the ground truth data at predetermined physical locations. The proposed methods break these constraints and enable ML model monitoring at any time and in any place.
- the embodiments of the disclosure can be applied to evaluating model generalization, including monitoring and evaluating the generalized AI/ML models.
- FIG. 1 illustrates a schematic view showing an example wireless communication system comprising a user equipment (UE) , a base station, and a network entity.
- FIG. 2 illustrates a schematic view showing the interaction of data collection, model training, model inference, and feedback, where the ML model is monitored during model inference and updated according to the monitoring results.
- FIG. 3 illustrates a schematic view showing an embodiment of the disclosed method.
- FIG. 6 illustrates a schematic view showing an example where a monitored model is deployed at gNB, and a monitoring model is deployed at UE.
- FIG. 9 illustrates a schematic view showing examples of a two-side model.
- FIG. 10 illustrates a schematic view showing examples of a two-side model.
- FIG. 11 illustrates a schematic view showing examples of two auto-encoder models for CSI feedback compression.
- all the beams are measured at a predicted time (or a predicted time window) .
- the beam index (or ID) of the beam with the strongest reference signal received power (RSRP) and this strongest RSRP is collected and compared with the AI/ML model output.
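The ground-truth comparison described above can be sketched as below; the dictionary of per-beam RSRP values and the function names are hypothetical.

```python
# Illustrative only: ground-truth collection for beam prediction measures all
# beams, takes the beam with the strongest RSRP, and compares it with the
# AI/ML model's predicted beam.

def strongest_beam(rsrp_by_beam):
    """Return (beam_index, rsrp) of the beam with the strongest measured RSRP."""
    beam_id = max(rsrp_by_beam, key=rsrp_by_beam.get)
    return beam_id, rsrp_by_beam[beam_id]

def beam_prediction_correct(predicted_beam, rsrp_by_beam):
    """True when the model's predicted beam matches the measured best beam."""
    true_beam, _ = strongest_beam(rsrp_by_beam)
    return predicted_beam == true_beam

# Toy measurements: RSRP in dBm per beam index.
measurements = {0: -85.0, 1: -78.5, 2: -90.2}
```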
- the ground truth data is the true position of the UE. It is, however, difficult to obtain the true position of the UE.
- the true position in the training data can be collected from a positioning reference unit (PRU) , or some tags from a third party. Collection of the ground truth data from PRUs or tags is not suitable for the timely monitoring of the AI/ML model in the field during the normal operation of the UE.
- beam prediction in time domain aims to reduce the delay.
- the collection of the ground truth needs the transmission and measurement of reference signals, which is both time-consuming and resource-consuming.
- Data collection for offline model training is not much of a problem.
- Data collection for AI/ML model monitoring, however, is subject to the amount of ground truth data that can be collected, and can therefore be challenging.
- a generalized AI/ML model is an AI/ML model trained to work for all sets of unseen data.
- generalization describes how well a trained model classifies or forecasts unseen data.
- the trained capability of a generalized AI/ML model may be referred to as generalization capability.
- a proper way to evaluate the trained capability of a generalized AI/ML model is to compare the performance of the generalized AI/ML model with the performance of a non-generalized AI/ML model which is scenario-specific.
- an AI/ML model is used to monitor another AI/ML model.
- the terms "AI/ML model" , "ML model" , and "model" are used interchangeably in the description.
- An AI/ML model that is used to monitor one or more AI/ML models is referred to as a monitoring model.
- An AI/ML model that is monitored by a monitoring AI/ML model is referred to as a monitored model.
- a telecommunication system including a UE 10a, a base station 20a, a base station 20b, and a network entity device 30 executes the disclosed method according to an embodiment of the present disclosure.
- FIG. 1 is illustrative, not limiting; the system may comprise more UEs, BSs, and CN entities. Connections between devices and device components are shown as lines and arrows in the FIGs.
- the UE 10a may include a processor 11a, a memory 12a, and a transceiver 13a.
- the base station 20a may include a processor 21a, a memory 22a, and a transceiver 23a.
- the base station 20b may include a processor 21b, a memory 22b, and a transceiver 23b.
- the network entity device 30 may include a processor 31, a memory 32, and a transceiver 33.
- Each of the processors 11a, 21a, 21b, and 31 may be configured to implement the proposed functions, procedures, and/or methods described in this description. Layers of radio interface protocol may be implemented in the processors 11a, 21a, 21b, and 31.
- Each of the memories 12a, 22a, 22b, and 32 operatively stores a variety of programs and information to operate a connected processor.
- Each of the transceivers 13a, 23a, 23b, and 33 is operatively coupled with a connected processor, and transmits and/or receives radio signals.
- Each of the base stations 20a and 20b may be an eNB, a gNB, or one of other radio nodes.
- Each of the processors 11a, 21a, 21b, and 31 may include a general-purpose central processing unit (CPU) , application-specific integrated circuits (ASICs) , other chipsets, logic circuits and/or data processing devices.
- Each of the memories 12a, 22a, 22b, and 32 may include read-only memory (ROM) , random-access memory (RAM) , flash memory, a memory card, a storage medium, and/or other storage devices.
- Each of the transceivers 13a, 23a, 23b, and 33 may include baseband circuitry and radio frequency (RF) circuitry to process radio frequency signals.
- RF radio frequency
- the techniques described herein can be implemented with modules, procedures, functions, entities and so on, that perform the functions described herein.
- the modules can be stored in a memory and executed by the processors.
- the memory can be implemented within a processor or external to the processor, in which case it can be communicatively coupled to the processor via various means known in the art.
- the network entity device 30 may be a node in a CN.
- the CN may include an LTE CN or a 5GC, which may include a user plane function (UPF) , a session management function (SMF) , an access and mobility management function (AMF) , unified data management (UDM) , a policy control function (PCF) , control plane (CP) /user plane (UP) separation (CUPS) , an authentication server function (AUSF) , a network slice selection function (NSSF) , and a network exposure function (NEF) .
- a system 100 for the general aspects of machine learning in NR or in the NR air interface comprises a data collection unit 101, a model training unit 102, an actor 103, and a model inference unit 104.
- FIG. 2 does not necessarily limit the monitoring method to the instant example.
- the monitoring method is applicable to any design based on machine learning.
- the general steps comprise data collection and/or model training and/or model inference and/or one or more actors.
- the data collection unit 101 is a function that provides input data to the model training unit 102 and the model inference unit 104.
- AI/ML algorithm-specific data preparation (e.g., data pre-processing and cleaning, formatting, and transformation)
- Training data is data needed as input for the AI/ML Model training unit 102.
- Inference data is data needed as input for the AI/ML Model inference unit 104.
- the model training unit 102 is a function that performs the ML model training, validation, and testing.
- the Model training unit 102 is also responsible for data preparation (e.g., data pre-processing and cleaning, formatting, and transformation) based on training data delivered by the data collection unit 101, if required.
- Model Deployment/Update between units 102 and 104 involves deployment or update of an AI/ML model (e.g., a trained machine learning model 105a or 105b) to the model inference unit 104.
- the model training unit 102 uses data units as training data to train a machine learning model 105a and generates a trained machine learning model 105b from the machine learning model 105a.
- the output shown between unit 103 and unit 104 is the inference output of the AI/ML model produced by the model inference unit 104.
- Some embodiments use at least one AI/ML model to assist or monitor at least another AI/ML model.
- the model monitoring can be a mode, which can be activated or deactivated by at least one or any combination of {RRC, MAC-CE, or DCI} .
- the ML model monitoring usually occurs during a time window. Within this time window, if the monitored model is continuously claimed as malfunctioning by a UE, a gNB, or a third node, the monitored model is decided as malfunctioning by a gNB, a third node, or a UE. In response, model switching or fallback to non-AI methods may be triggered.
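A hedged sketch of the window rule above: the monitored model is decided as malfunctioning only when every report within the monitoring window claims a malfunction, which then triggers model switching or fallback. The per-slot boolean claims and the action labels are assumptions, not signaling defined in the disclosure.

```python
# Illustrative decision logic for the monitoring time window. Each entry of
# claims_in_window is one report slot: True means the model was claimed as
# malfunctioning in that slot.

def decide_malfunction(claims_in_window):
    """Decided as malfunctioning only if the claim holds continuously
    (every slot) throughout the window."""
    return len(claims_in_window) > 0 and all(claims_in_window)

def select_action(claims_in_window):
    """Map the decision to a response: model switching / fallback vs. keep."""
    if decide_malfunction(claims_in_window):
        return "fallback_to_non_ai"
    return "keep_model"
```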
- At least one wireless communication device executes a monitoring method for monitoring machine learning (ML) models.
- the at least one wireless communication device may comprise a combination of a user equipment (UE) , a base station, or a third node.
- the at least one wireless communication device uses a second ML model as a monitoring ML model for monitoring of the first ML model (S012) .
- the at least one wireless communication device evaluates performance of the monitored ML model based on the monitoring (S014) .
- a monitored-model-deploying device in which the monitored ML model is deployed and activated is different from a monitored-model-training device in which the monitored ML model is trained.
- the monitored ML model is downloaded from the monitored-model-training device to the monitored-model-deploying device.
- the monitored ML model is deployed and activated at one of a user equipment (UE) , a base station, or a third node.
- the monitored-model-deploying device may be one of the user equipment, the base station, or the third node.
- the monitoring ML model is trained at one of the user equipment, the base station, or the third node.
- the monitored-model-training device may be one of the user equipment, the base station, or the third node.
- the monitored ML model may be activated by a downlink control information (DCI) signal, a radio resource control (RRC) signal, or a Medium Access Control (MAC) control element (CE) .
- a monitoring-model-deploying device in which the monitoring ML model is deployed and activated is different from a monitoring-model-training device in which the monitoring ML model is trained.
- the monitoring ML model is downloaded from the monitoring-model-training device to the monitoring-model-deploying device.
- the monitoring ML model is deployed and activated at one of a user equipment (UE) , a base station, or a third node.
- the monitoring-model-deploying device may be one of the user equipment, the base station, or the third node.
- the monitoring ML model is trained at one of the user equipment, the base station, or the third node.
- the monitoring-model-training device may be one of the user equipment, the base station, or the third node.
- the monitoring ML model is activated by a downlink control information (DCI) signal, a radio resource control (RRC) signal, or a Medium Access Control (MAC) control element (CE) .
- Both the monitored model and monitoring model are deployed and activated at a UE:
- both the monitored model and monitoring model are deployed and activated at UE 10.
- the monitored model and/or monitoring model are trained at gNB 20 or a third node.
- the gNB 20 and/or the third node trains both the monitoring model and the monitored model.
- Both the monitoring model and the monitored model are downloaded to the UE.
- Both the monitoring model and the monitored model are deployed at the UE.
- the deployment of the monitoring model and the monitored model is confirmed by the UE, for example by sending a message or a hybrid automatic repeat request (HARQ) acknowledgment (ACK) or a negative acknowledgment (NACK) indicating whether the deployment is successful.
- both the monitoring model and the monitored model are activated at the UE.
- the monitoring results are reported to gNB 20.
- the monitoring result can be the difference between the outputs of the monitored model and the outputs of the monitoring model.
- the report can be a measurement of the performance of the monitored model (referred to as model performance) , e.g., the confidence level or a probability indicating accuracy of the prediction of the monitored model.
- in one case, the report is a status confirmation of the monitored model, and no re-training or model switching is triggered. Otherwise, the report is a request for model re-training or model switching.
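The two report types can be sketched as a simple decision on the measured output difference; the tolerance value, field names, and report labels are illustrative assumptions, not signaling defined in the disclosure.

```python
# Illustrative report builder: when the output difference between the
# monitored and monitoring models stays within tolerance, the report is a
# status confirmation; otherwise it requests re-training or model switching.

def build_report(output_diff, tolerance):
    if output_diff <= tolerance:
        return {"type": "status_confirmation", "diff": output_diff}
    return {"type": "retraining_or_switch_request", "diff": output_diff}
```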
- the gNB 20 configures the UE 10 by providing configuration to the UE 10. In response to the configuration, the UE 10 uses the monitoring model to monitor the monitored model.
- the gNB 20 can send the configuration to the UE 10 in a downlink control information (DCI) signal, a radio resource control (RRC) signal, or a Medium Access Control (MAC) control element (CE) .
- the configuration may comprise a monitoring time window or a monitoring period.
- the configuration may comprise a monitoring mode.
- a monitoring mode indicates whether the monitoring is performed by the monitoring ML model or performed by collecting ground truth data for comparison with output of the monitored ML model.
- in one case, the monitoring mode indicates that the monitored model at the UE 10 is monitored by another AI/ML model (i.e., by the monitoring model) .
- in another case, the monitoring mode indicates that the monitored model at the UE 10 is monitored by collecting ground truth data without the help of the monitoring model.
- a default setting of the monitoring mode is monitoring with the configured monitoring model, when the monitoring model is provided by gNB 20 or a third node.
- a default setting of the monitoring mode is monitoring by collecting the ground truth data without the help of the monitoring model, when the monitoring model is not provided by gNB 20 and/or a third node.
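The two default settings above amount to a simple selection rule; a sketch, with illustrative mode labels:

```python
# Illustrative default-mode selection: monitor with the configured monitoring
# model when one is provided by the gNB or a third node, otherwise fall back
# to collecting ground truth data.

def default_monitoring_mode(monitoring_model_provided):
    if monitoring_model_provided:
        return "monitor_with_model"
    return "collect_ground_truth"
```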
- the monitoring mode, monitored model, and/or the monitoring model can be activated/deactivated jointly or separately.
- An activation signal is used for model activation (e.g., activating the monitoring mode, the monitored model, and/or the monitoring model) .
- A deactivation signal is used for model deactivation (e.g., deactivating the monitoring mode, the monitored model, and/or the monitoring model) .
- activating the monitoring model and activating the monitoring mode are two different operations. Once the monitoring model is activated, the monitored model is in the monitoring mode.
- Post-processing for an AI/ML model can comprise fine-tuning, model quantization, and model distillation, model pruning, etc.
- the post-processing for example, can further comprise adjusting or reducing AI/ML model complexity or customizing the AI/ML models.
- the monitored model and monitoring model are integrated into one model and are downloaded from gNB 20 to UE 10 through the air interface.
- the gNB configures the UE to perform post-processing by at least one of RRC signaling, MAC-CE, or DCI.
- the monitoring model can be obtained from post-processing of an ML model.
- the configuration signaling for obtaining {the monitored model and/or the monitoring model} via post-processing, for monitoring of a one-side model and a two-side model throughout the examples in FIG. 4 to FIG. 11, is at least one of {RRC, MAC-CE, or DCI} signaling.
- the UE 10 performs post-processing for the monitored model. That is, only the monitored model is obtained from post-processing at the UE 10.
- the UE 10 performs post-processing for the monitoring model. That is, only the monitoring model is obtained from post-processing at the UE 10.
- gNB 20 performs the post-processing.
- a third node performs the post-processing.
- the monitored model is deployed and activated at UE 10, and the monitoring model is deployed and activated at gNB 20.
- the monitored model can be trained or deployed at gNB 20 or a third node.
- a procedure of the embodiment is detailed in the following:
- the monitored model is downloaded to a UE 10.
- the monitored model is deployed at the UE 10.
- the monitoring mode and/or the monitoring model is activated by gNB 20.
- the UE 10 reports the input and output of the monitored model to the gNB 20.
- the input and output of the monitored model are further processed in the gNB 20 and fed into the monitoring model.
- the monitored model is activated at gNB while the monitoring model is activated at UE:
- the monitored model is activated at gNB 20 while the monitoring model is activated at UE 10.
- the monitored model can be trained or deployed at gNB 20 or a third node.
- the monitoring model is downloaded to a UE 10.
- the monitoring model is deployed at the UE 10.
- the deployment is confirmed by UE 10, for example, by sending a message or a hybrid automatic repeat request (HARQ) acknowledgment (ACK) or a negative acknowledgment (NACK) to report whether the deployment is successful.
- the monitoring mode and/or the monitoring model is activated by gNB 20.
- the gNB 20 can re-deploy the monitoring model or deploy another monitoring model.
- output of the monitoring model is reported to the gNB 20, e.g., position information (coordinates) , beam information (beam index, RSRP) , and confidence level.
- Examples of signaling between the gNB 20 and the UE 10 are detailed in the following.
- Monitored model is activated at gNB and monitoring model is activated at gNB:
- the monitored model can be trained or deployed at gNB 20 or a third node.
- the gNB 20 informs the UE 10 that the monitored model is activated.
- the UE 10 reports the input of the monitored model to the gNB 20.
- the gNB 20 informs the UE 10 that the monitoring model is activated.
- the input of the monitored model and the input of the monitoring model can be same.
- the input of the monitored model and the input of the monitoring model can be different.
- the input of the monitored model and the input of the monitoring model can have different amounts of input beam indices and/or L1-RSRP.
- Examples of signaling between the gNB 20 and the UE 10 are detailed in the following.
- the monitoring mode, monitored model, and/or the monitoring model can be activated/deactivated jointly or separately.
- An activation signal is used for model activation (e.g., activating the monitoring mode, the monitored model, and/or the monitoring model) .
- A deactivation signal is used for model deactivation (e.g., deactivating the monitoring mode, the monitored model, and/or the monitoring model) .
- the reports of the monitored model during the monitoring mode can be assistant information, at least one of {time of arrival (TOA) , time difference of arrival (TDOA) , channel conditions, Doppler, L1-SINR} .
- the AI/ML model can be running at UE 10/gNB 20 and monitored by a third node.
- At least one of the monitoring model and the monitored model is trained at the third node.
- the monitoring model and the monitored model are delivered to the UE 10/gNB 20, respectively.
- the UE 10 requests the third node for the monitoring model and/or the monitored model
- the third node may send the AI/ML models (the monitoring model and/or the monitored model) to the UE 10, and the UE 10 sends the confirmation to the third node.
- the gNB 20 requests the third node for the monitoring model and/or the monitored model.
- the third node may send the AI/ML models (the monitoring model and/or the monitored model) to the gNB 20, and the gNB 20 sends the confirmation to the third node.
- the UE 10 may send the report to the third node, instead of the gNB 20.
- the UE 10 may send the report to the gNB 20.
- the difference between the monitoring model and the monitored model is a difference in complexity level.
- the monitoring ML model has a first complexity level
- the monitored ML model has a second complexity level
- the first complexity level is greater than the second complexity level.
- the monitored model is a model with low complexity, e.g., after post-processing.
- the monitoring model with high complexity has advanced performance.
- the UE 10 can execute AI/ML models of different complexity levels according to the capability of the UE 10.
- the UE 10 can run a low complexity AI/ML model frequently and a high complexity model less frequently just for model monitoring.
- the generalized AI/ML model can be deployed at the UE 10 or the gNB 20 for inference, and the scenario-specific AI/ML model can be deployed at the UE 10 or the gNB 20 for measuring the generalization capability of the generalized AI/ML model in these scenarios. If the outputs of these two models differ too much, e.g., by more than a threshold, the generalized AI/ML model is not considered feasible.
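The feasibility check above can be sketched as a per-sample output comparison against a threshold; the sample outputs and the threshold value are toy assumptions.

```python
# Illustrative generalization check: compare the generalized model's outputs
# with a scenario-specific model's outputs sample by sample; if the mean
# difference exceeds the threshold, the generalized model is not considered
# feasible in that scenario.

def generalization_feasible(generalized_outputs, specific_outputs, threshold):
    diffs = [abs(g - s) for g, s in zip(generalized_outputs, specific_outputs)]
    return (sum(diffs) / len(diffs)) <= threshold
```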
- Output of the monitoring AI/ML model: the synthesized data for replacing ground truth.
- Model monitoring usually works in a reactive manner. That is, the model becomes unreliable after the performance degradation occurs.
- the ground truth data can be replaced by synthesized data generated by an AI/ML model.
- the input of the AI/ML model can be the assistant information (e.g., channel conditions, beam index, L1-RSRP, L1-SINR, CIR) , UE reported data/noisy reference signals.
- the output of the AI/ML model is the synthesized data.
- the input of the AI/ML model can be historical data, such as the historical measurement results or monitoring results.
- the output of the AI/ML model can be the model performance at the current time.
- Output of the monitoring AI/ML model: confidence level.
- the input of the AI/ML model can be the assistant information (e.g., channel conditions, a beam index, L1-RSRP, L1-SINR, CIR) , UE reported data/noisy reference signals.
- the output of the AI/ML model is the synthesized data.
- the output of the monitoring model is confidence level.
- the historical data can be the confidence level.
- the AI/ML model predicts a confidence level indicating whether the monitored model may work properly in at least the subsequent time unit.
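A minimal sketch of predicting a confidence level from historical monitoring results, assuming a simple moving average in place of a trained monitoring model; the window length and confidence floor are illustrative assumptions.

```python
# Illustrative confidence predictor: take historical confidence levels as
# input and output a confidence level for the subsequent time unit. A deployed
# system would use a trained model rather than a moving average.

def predict_confidence(historical_confidences, window=3):
    recent = historical_confidences[-window:]
    return sum(recent) / len(recent)

def model_expected_to_work(historical_confidences, min_confidence=0.8):
    """True when the predicted confidence meets the minimum floor,
    i.e., the monitored model is expected to work properly."""
    return predict_confidence(historical_confidences) >= min_confidence
```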
- the monitoring model comprises a two-side model and has a first monitoring sub-model that works in a first model-deploying device and a second monitoring sub-model that works in a second model-deploying device.
- the monitored model comprises another two-side model and has a first monitored sub-model that works in the first model-deploying device and a second monitored sub-model that works in the second model-deploying device.
- the first model-deploying device comprises a user equipment (UE)
- the second model-deploying device comprises a base station.
- compared with the first monitoring sub-model NN2_1 and the second monitoring sub-model NN2_2, the first monitored sub-model NN1_1 and the second monitored sub-model NN1_2 have better performance, for example, a more sophisticated model structure and/or higher complexity.
- the monitoring can be performed in at least one of the following ways.
- the model monitoring can be jointly performed at UE 10 side.
- the second monitoring sub-model and the second monitored sub-model are integrated into one sub-model.
- FIG. 9 shows a two-side model for positioning.
- a two-side model (the first monitoring sub-model NN2_1 and a second sub-model NN12) is deployed for monitoring of another two-side model (the first monitored sub-model NN1_1 and the second sub-model NN12) .
- the second sub-model NN12 can process both the output of the first monitored sub-model NN1_1 and the second monitored sub-model NN1_2.
- the monitoring can be performed in at least one of the following ways.
- the monitoring can be performed for the difference between the output of the first monitored sub-model NN1_1 and the output of the second monitoring sub-model NN2_2.
- the KPI for measuring the difference between the output of the first monitored sub-model NN1_1 and the output of the second monitoring sub-model NN2_2 can be at least one of the following: MSE, NMSE, cosine similarity, confidence level, and accuracy.
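The difference KPIs named above can be sketched as follows; these are standard textbook definitions written out for illustration, not an implementation mandated by the disclosure.

```python
import math

def mse(a, b):
    """Mean squared error between two equally sized output vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def nmse(a, b):
    """MSE normalized by the power of the reference vector a."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / sum(x ** 2 for x in a)

def cosine_similarity(a, b):
    """Cosine of the angle between the two output vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))
```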
- the first monitoring sub-model and the first monitored sub-model may be integrated into one sub-model.
- FIG. 10 shows a two-side model for positioning.
- a two-side model (the first sub-model NN3_1 and the second monitoring sub-model NN3_2’) is deployed for monitoring another two-side model (the first sub-model NN3_1 and the second monitored sub-model NN3_2) .
- the second monitored sub-model NN3_2 receives the output of the first sub-model NN3_1 as input of the second monitored sub-model NN3_2.
- the second monitoring sub-model NN3_2’ receives the output of the first sub-model NN3_1 as input of the second monitoring sub-model NN3_2’.
- the second monitoring sub-model NN3_2’ serves as a monitoring model to monitor the performance of the second monitored sub-model NN3_2.
- the gNB 20 informs the UE 10 of the deployment of the second monitoring sub-model NN3_2’ as the monitoring model.
- the monitoring of the first sub-model NN3_1 and the second monitored sub-model NN3_2 is performed at the gNB 20.
- the monitoring model comprises a first auto-encoder for reporting channel state information (CSI)
- the first monitoring sub-model serves as a first encoder of the first auto-encoder operable to compress CSI
- the second monitoring sub-model serves as a first decoder of the first auto-encoder operable to decompress the compressed CSI from the first encoder.
- the monitored model comprises a second auto-encoder for reporting channel state information (CSI)
- the first monitored sub-model serves as a second encoder of the second auto-encoder operable to compress CSI
- the second monitored sub-model serves as a second decoder of the second auto-encoder operable to decompress the compressed CSI from the second encoder.
- the first auto-encoder and the second auto-encoder report CSI according to a configured monitoring period.
- the monitoring period is reported by a UE.
- a base station determines the monitoring period based on UE capability.
- a second model is deployed to measure the performance of the first model.
- the first model and the second model can have different complexity.
- the first model and the second model can have different quantization levels of encoder output.
- the monitoring ML model has a first quantization level
- the monitored ML model has a second quantization level
- the first quantization level is greater than the second quantization level.
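The different quantization levels of the encoder output can be illustrated with the hypothetical uniform quantizer below. A greater quantization level is represented here as a larger number of bits (a finer grid); the value range, bit counts, and function name are assumptions for illustration only.

```python
def quantize(values, bits, lo=-1.0, hi=1.0):
    """Uniformly quantize encoder outputs onto 2**bits grid points over
    [lo, hi]. A monitoring model may use more bits (a greater, i.e. finer,
    quantization level) than the monitored model (illustrative)."""
    levels = (1 << bits) - 1          # number of quantization steps
    step = (hi - lo) / levels
    out = []
    for v in values:
        v = min(max(v, lo), hi)       # clip to the representable range
        out.append(lo + round((v - lo) / step) * step)
    return out

latent = [0.13, -0.52, 0.97]
monitored = quantize(latent, bits=2)   # coarse: 4 grid points
monitoring = quantize(latent, bits=6)  # fine: 64 grid points
```

With more bits, the monitoring model's quantization error is smaller, so its reconstruction can serve as a stricter reference.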
- FIG. 11 shows two auto-encoder models for CSI feedback compression.
- One auto-encoder (referred to as a first auto-encoder) may comprise encoder A10 and decoder A11.
- the other auto-encoder (referred to as a second auto-encoder) may comprise encoder A20 and decoder A21.
- the principle for determining an AI/ML model as the touchstone can be one or more stricter performance measures, such as stricter cosine similarity, MMSE/throughput, and/or prediction accuracy (e.g., 95% prediction accuracy) .
- the compressed vectors are reported by UE 10.
- a procedure of configuring UE 10 to run two AI/ML models is detailed in the following.
- the gNB 20 may configure the UE 10, and the UE 10 reports the two compressed vectors.
- the UE 10 reports the compressed vectors in one shot. When radio resources are limited, the UE 10 reports the compressed vectors according to a priority rule.
- the compressed vector with the larger size is dropped.
- if the compressed vectors have the same size, the one with the greater age is dropped.
- the UE 10 randomly drops one vector.
- the compressed vector with the greater age is dropped.
- if the compressed vectors have the same age, the one with the larger size is dropped.
- the UE 10 randomly drops one vector.
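The two priority rules above can be sketched as follows. Each compressed vector is represented here as a hypothetical (size, age) pair; in practice the UE would apply the rule to real report payloads, and a random choice would resolve any remaining tie.

```python
def pick_vector_to_drop(vectors, rule="size_first"):
    """Select which compressed vector to drop when radio resources are
    limited. Each vector is a (size_in_bits, age) tuple (illustrative).
    'size_first': drop the larger vector, break ties by greater age.
    'age_first':  drop the older vector, break ties by larger size."""
    if rule == "size_first":
        key = lambda v: (v[0], v[1])
    else:
        key = lambda v: (v[1], v[0])
    return max(vectors, key=key)
```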
- the UE 10 can report the assistant information to the gNB 20/third node.
- the input data in the synthesized data may comprise one or more of CIR, RS, TDOA, AOA, angle of departure (AOD) , and channel conditions.
- the output data in the synthesized data may comprise position information.
- the synthesized data comprises input data for the monitoring model associated with the output data for the monitoring model.
- a format of the synthesized data comprises:
- the input data in the synthesized data may comprise one or more subsets of all beams.
- the output data in the synthesized data may comprise one or more predicted beams.
- the gNB 20 configures the UE 10 by providing configuration regarding one or more monitoring schemes to the UE 10.
- This configuration is carried in at least one of {RRC signaling, MAC-CE, or DCI signaling} .
- the UE 10 uses one or more monitoring schemes.
- the one or more monitoring schemes may be selected from one or more of:
- the monitoring ML model monitors the first ML model according to an activated monitoring scheme among a plurality of monitoring schemes, and the plurality of monitoring schemes comprises:
- One of the plurality of monitoring schemes is activated to be the activated monitoring scheme according to a configuration.
- the configuration can be carried in a downlink control information (DCI) signal, a radio resource control (RRC) signal, or a Medium Access Control (MAC) control element (CE) .
- the configuration is configured by an RRC signal or a MAC-CE and activated by DCI signaling.
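The configure-then-activate pattern above (configuration via RRC or MAC-CE, activation via DCI) can be sketched with the hypothetical state holder below; the class and scheme names are illustrative, not signaling definitions from the disclosure.

```python
class MonitoringSchemeState:
    """Track which monitoring scheme is active at a device. Schemes are
    first configured (e.g., via RRC signaling or a MAC-CE) and one of them
    is later activated (e.g., via DCI signaling). Illustrative only."""

    def __init__(self):
        self.configured = set()
        self.active = None

    def configure(self, schemes):
        """Record the configured monitoring schemes (RRC/MAC-CE step)."""
        self.configured.update(schemes)

    def activate(self, scheme):
        """Activate one configured scheme (DCI step)."""
        if scheme not in self.configured:
            raise ValueError("scheme not configured")
        self.active = scheme
```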
- the processing unit 730 may include circuitry, such as, but not limited to, one or more single-core or multi-core processors.
- the processors may include any combinations of general-purpose processors and dedicated processors, such as graphics processors and application processors.
- the processors may be coupled with the memory/storage and configured to execute instructions stored in the memory/storage to enable various applications and/or operating systems running on the system.
- the baseband circuitry 720 may include circuitry to operate with signals that are not strictly considered as being in a baseband frequency.
- baseband circuitry may include circuitry to operate with signals having an intermediate frequency, which is between a baseband frequency and a radio frequency.
- the disclosure provides a monitoring method for monitoring ML models.
- the invention provides embodiments to address problems in monitoring of AI/ML model.
- monitoring of AI/ML model is no longer constrained by the time required for collecting ground truth data, and the AI/ML model can be monitored timely.
Abstract
The disclosure provides a monitoring method for monitoring machine learning (ML) models. At least one wireless communication device uses a first ML model as a monitored ML model to work for a cellular communication task and uses a second ML model as a monitoring ML model for monitoring of the first ML model. The at least one wireless communication device evaluates performance of the monitored ML model based on the monitoring.
Description
The present disclosure relates to the field of communication systems, and more particularly, to a monitoring method and a wireless communication device.
Background Art
Wireless communication systems, such as the third-generation (3G) of mobile telephone standards and technology, are well known. Such 3G standards and technology have been developed by the Third Generation Partnership Project (3GPP) . The 3rd generation of wireless communications has generally been developed to support macro-cell mobile phone communications. Communication systems and networks have developed towards broadband and mobile systems. In cellular wireless communication systems, user equipment (UE) is connected by a wireless link to a radio access network (RAN) . The RAN comprises a set of base stations (BSs) that provide wireless links to the UEs located in cells covered by the base stations, and an interface to a core network (CN) which provides overall network control. As will be appreciated, the RAN and CN each conduct respective functions in relation to the overall network. The 3rd Generation Partnership Project has developed the so-called Long Term Evolution (LTE) system, namely, an Evolved Universal Mobile Telecommunication System Terrestrial Radio Access Network (E-UTRAN) , for a mobile access network where one or more macro-cells are supported by a base station known as an eNodeB or eNB (evolved NodeB) . More recently, LTE is evolving further towards the so-called 5G or NR (new radio) systems where one or more cells are supported by a base station known as a gNB.
In 3GPP Rel-18, a study item (SI) “Artificial Intelligence (AI) /Machine Learning (ML) for NR Air Interface” has been initiated. AI/ML is applied to the 3GPP telecommunication system, and several use cases are investigated and studied, including channel state information (CSI) feedback compression, beam management, and positioning.
Typically, the beam selection is based on the measurement of channel state information (CSI) -reference signal (CSI-RS) /synchronization signal block (SSB) . This process consumes a large number of reference signals and introduces delay. Thus, predictive beam switching is proposed to reduce the delay. Applying ML to beam management is to be studied.
After an AI/ML model is deployed, the AI/ML model should be monitored for proper operation. For example, the deployed AI/ML model is monitored for a determination as to whether a beam predicted by the AI/ML model is accurate for beam management, whether a positioning performed by the AI/ML model is still accurate, and/or whether reported CSI can be fully recovered.
ML model monitoring is critical to ML model deployment. How to monitor an AI/ML model for telecommunication is still not much discussed.
Hence, a monitoring method for machine learning models in a wireless communication device for telecommunication is desired.
An object of the present disclosure is to propose a wireless communication device, such as a user equipment (UE) or a base station, and a monitoring method.
In a first aspect, an embodiment of the invention provides a monitoring method for monitoring machine learning (ML) models, executable in at least one wireless communication device, comprising: using a first ML model as a monitored ML model to work for a cellular communication task; using a second ML model as a monitoring ML model for monitoring of the first ML model; and evaluating performance of the monitored ML model based on the monitoring.
In a second aspect, an embodiment of the invention provides a wireless communication device comprising a processor configured to call and run a computer program stored in a memory, to cause a device in which the processor is installed to execute the disclosed method.
The disclosed method may be implemented in a chip. The chip may include a processor, configured to call and run a computer program stored in a memory, to cause a device in which the chip is installed to execute the disclosed method.
The disclosed method may be programmed as computer-executable instructions stored in non-transitory computer-readable medium. The non-transitory computer-readable medium, when loaded to a computer, directs a processor of the computer to execute the disclosed method.
The non-transitory computer-readable medium may comprise at least one from a group consisting of: a hard disk, a CD-ROM, an optical storage device, a magnetic storage device, a Read-Only Memory, a Programmable Read-Only Memory, an Erasable Programmable Read-Only Memory (EPROM) , an Electrically Erasable Programmable Read-Only Memory, and a Flash memory.
The disclosed method may be programmed as a computer program product, which causes a computer to execute the disclosed method.
The disclosed method may be programmed as a computer program, which causes a computer to execute the disclosed method.
The invention provides embodiments to address problems in the monitoring of AI/ML models. In some embodiments of the disclosure, the monitoring of an AI/ML model is no longer constrained by the time required for collecting ground truth data, and the AI/ML model can be monitored in a timely manner. In some embodiments, monitoring an ML model is constrained by collecting the ground truth data in predetermined physical locations. The proposed methods break these constraints, enabling ML model monitoring at any time and in any place.
The embodiments of the disclosure can be applied to evaluating model generalization, including monitoring and evaluating the generalized AI/ML models.
In some embodiments of the disclosure, synthetic ground truth data is synthesized and treated as a kind of substitute for the ground truth data to assist in proactive model monitoring, model switching, or pre-training of AI/ML models. System performance can thus be improved.
Description of Drawings
In order to more clearly illustrate the embodiments of the present disclosure or the related art, the drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are merely some embodiments of the present disclosure. A person having ordinary skill in the art can obtain other drawings according to these drawings without creative effort.
FIG. 1 illustrates a schematic view showing an example wireless communication system comprising a user equipment (UE) , a base station, and a network entity.
FIG. 2 illustrates a schematic view showing the interaction of data collection, model training, model inference, and feedback, where the ML model is monitored during model inference and updated accordingly.
FIG. 3 illustrates a schematic view showing an embodiment of the disclosed method.
FIG. 4 illustrates a schematic view showing an example where both a monitoring model and a monitored model are deployed at a UE.
FIG. 5 illustrates a schematic view showing an example where a monitored model is deployed at UE, and a monitoring model is deployed at gNB.
FIG. 6 illustrates a schematic view showing an example where a monitored model is deployed at gNB, and a monitoring model is deployed at UE.
FIG. 7 illustrates a schematic view showing an example where both a monitoring model and a monitored model are deployed at a gNB.
FIG. 8 illustrates a schematic view showing examples of a two-side model.
FIG. 9 illustrates a schematic view showing examples of a two-side model.
FIG. 10 illustrates a schematic view showing examples of a two-side model.
FIG. 11 illustrates a schematic view showing examples of two auto-encoder models for CSI feedback compression.
FIG. 12 illustrates a schematic view showing a system for wireless communication according to an embodiment of the present disclosure.
DETAILED DESCRIPTION OF EMBODIMENTS
Embodiments of the disclosure are described in detail with the technical matters, structural features, achieved objects, and effects with reference to the accompanying drawings as follows. Specifically, the terminologies in the embodiments of the present disclosure are merely for the purpose of describing certain embodiments, and are not intended to limit the disclosure.
Embodiments of the disclosure are related to artificial intelligence (AI) and machine learning (ML) for the new radio (NR) air interface and address problems of data collection, model monitoring, and model generalization.
In some embodiments, the performance of an AI/ML model is monitored as a part of life cycle management for the AI/ML model. If the performance of the AI/ML model is degraded, the AI/ML model can be re-trained or switched to another AI/ML model. In some embodiments of the disclosure, an AI/ML model is used to monitor the performance of another AI/ML model.
After an AI/ML model is deployed, the AI/ML model should be monitored for proper operation. For example, an AI/ML model monitors the deployed AI/ML model to determine whether a beam predicted by the AI/ML model is accurate for beam management, whether a positioning performed by the AI/ML model is still accurate, and/or whether reported CSI can be fully recovered.
In one scheme of AI/ML model monitoring, in order to proceed with monitoring the AI/ML model, ground truth is collected, reported and compared with the AI/ML model output.
For example, to decide whether a predicted beam is accurate, all the beams are measured at a predicted time (or a predicted time window) . The beam index (or ID) of the beam with the strongest reference signal received power (RSRP) , together with this strongest RSRP, is collected and compared with the AI/ML model output.
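The beam accuracy check described above can be sketched as follows. The function name, the dictionary representation of beam measurements, and the RSRP margin are hypothetical; in practice the comparison rule is a design choice.

```python
def beam_prediction_accurate(measured_rsrp, predicted_beam, rsrp_margin_db=1.0):
    """Compare the AI/ML-predicted beam against ground truth obtained by
    measuring all beams: accurate if the predicted beam's RSRP is within a
    margin of the strongest beam's RSRP. measured_rsrp maps beam index to
    RSRP in dBm (illustrative)."""
    best_beam = max(measured_rsrp, key=measured_rsrp.get)
    best_rsrp = measured_rsrp[best_beam]
    return measured_rsrp[predicted_beam] >= best_rsrp - rsrp_margin_db
```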
For example, for an AI/ML model trained to provide channel state information (CSI) feedback, the raw CSI (by raw CSI we mean the raw CSI-RS values) is selected as the input of the AI/ML model, and a recovered CSI is the output of a decoder of the auto-encoder AI/ML model. The raw CSI itself has to be reported to a gNB as ground truth and compared with the decoder output. If the difference between the two exceeds a level, the AI/ML model does not work properly, and should be re-trained, switched to another model, or triggered to fall back to a traditional non-AI method, such as codebook type I and codebook type II.
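The CSI comparison and fallback decision above can be sketched as follows. NMSE is used here as the difference measure and the threshold value is an assumption for illustration; the disclosure does not fix either choice.

```python
def csi_monitoring_decision(raw_csi, recovered_csi, nmse_threshold=0.1):
    """Compare the decoder's recovered CSI with the reported raw CSI (ground
    truth). If the NMSE exceeds the threshold, the model should be
    re-trained, switched, or fallen back from to a non-AI codebook method
    (illustrative decision rule)."""
    err = sum((r - d) ** 2 for r, d in zip(raw_csi, recovered_csi))
    power = sum(r ** 2 for r in raw_csi)
    return "ok" if err / power <= nmse_threshold else "fallback"
```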
In another scheme of AI/ML model monitoring, the confidence level is output as a measure of the AI/ML model output. For example, for an AI/ML model trained to provide positioning, a confidence level is calculated for positioning results output by the AI/ML model.
For AI/ML model monitoring, collecting ground truth is needed, while the ground truth data sometimes is difficult to obtain, or it is cumbersome to be collected.
For example, for the AI/ML model trained to provide positioning of a UE, the ground truth data is the true position of the UE. It is, however, difficult to obtain the true position of the UE. The true position in the training data can be collected from a positioning reference unit (PRU) , or some tags from a third party. Collection of the ground truth data from PRUs or tags is not suitable for the timely monitoring of the AI/ML model in the field during the normal operation of the UE.
For example, for the AI/ML model trained to provide beam management case, beam prediction in time domain aims to reduce the delay. The collection of the ground truth needs the transmission and measurement of reference signals, which is both time-consuming and resource-consuming.
Data collection for offline model training is not much of a problem. Data collection for AI/ML model monitoring, however, being subject to the amount of ground truth data that can be collected in time, sometimes poses challenges.
On the other hand, if the confidence level of monitored AI/ML model is selected as the monitoring key performance indicator (KPI) , when the monitored AI/ML model becomes unreliable, this confidence level also becomes unreliable. Hence, the calculation of a confidence level needs a more sophisticated AI/ML model or a sophisticated method.
A generalized AI/ML model is an AI/ML model trained to work for all sets of unseen data. In machine learning, generalization describes how well a trained model classifies or forecasts unseen data. The trained capability of a generalized AI/ML model may be referred to as generalization capability. A proper way to evaluate the trained capability of a generalized AI/ML model is to compare the performance of the generalized AI/ML model with the performance of a non-generalized AI/ML model which is scenario-specific.
In some embodiments of the disclosure, an AI/ML model is used to monitor another AI/ML model. For simplicity, the terms “AI/ML model” , “AI model” , “ML model” , and “model” are used interchangeably in the description. An AI/ML model that is used to monitor one or more AI/ML models is referred to as a monitoring model. An AI/ML model that is monitored by a monitoring AI/ML model is referred to as a monitored model.
The monitoring model can be deployed at different nodes, including a UE, a base station (e.g., gNB) , or a third node. Each of the monitoring model and the monitored model can be a one-side model or a two-side model.
In the description of embodiments of the disclosure, model switching comprises switching off or deactivating a model and switching on or activating another model.
A third node may comprise an application server, a gNB, or a UE.
With reference to FIG. 1, a telecommunication system including a UE 10a, a base station 20a, a base station 20b, and a network entity device 30 executes the disclosed method according to an embodiment of the present disclosure. FIG. 1 is illustrative rather than limiting, and the system may comprise more UEs, BSs, and CN entities. Connections between devices and device components are shown as lines and arrows in the FIGs. The UE 10a may include a processor 11a, a memory 12a, and a transceiver 13a. The base station 20a may include a processor 21a, a memory 22a, and a transceiver 23a. The base station 20b may include a processor 21b, a memory 22b, and a transceiver 23b. The network entity device 30 may include a processor 31, a memory 32, and a transceiver 33. Each of the processors 11a, 21a, 21b, and 31 may be configured to implement the proposed functions, procedures, and/or methods described in this description. Layers of radio interface protocol may be implemented in the processors 11a, 21a, 21b, and 31. Each of the memory 12a, 22a, 22b, and 32 operatively stores a variety of programs and information to operate a connected processor. Each of the transceivers 13a, 23a, 23b, and 33 is operatively coupled with a connected processor, and transmits and/or receives radio signals. Each of the base stations 20a and 20b may be an eNB, a gNB, or one of other radio nodes.
Each of the processors 11a, 21a, 21b, and 31 may include a general-purpose central processing unit (CPU) , application-specific integrated circuits (ASICs) , other chipsets, logic circuits and/or data processing devices. Each of the memory 12a, 22a, 22b, and 32 may include read-only memory (ROM) , a random-access memory (RAM) , a flash memory, a memory card, a storage medium and/or other storage devices. Each of the transceivers 13a, 23a, 23b, and 33 may include baseband circuitry and radio frequency (RF) circuitry to process radio frequency signals. When the embodiments are implemented in software, the techniques described herein can be implemented with modules, procedures, functions, entities and so on, that perform the functions described herein. The modules can be stored in a memory and executed by the processors. The memory can be implemented within a processor or external to the processor, and can be communicatively coupled to the processor via various means known in the art.
The network entity device 30 may be a node in a CN. The CN may include an LTE CN or a 5G core (5GC) , which may include a user plane function (UPF) , a session management function (SMF) , an access and mobility management function (AMF) , unified data management (UDM) , a policy control function (PCF) , control plane (CP) /user plane (UP) separation (CUPS) , an authentication server function (AUSF) , a network slice selection function (NSSF) , and a network exposure function (NEF) .
With reference to FIG. 2, a system 100 for the general aspects of machine learning in NR or in the NR air interface comprises a data collection unit 101, a model training unit 102, an actor 103, and a model inference unit 104. Please note that FIG. 2 does not necessarily limit the monitoring method to the instant example. The monitoring method is applicable to any design based on machine learning. The general steps comprise data collection and/or model training and/or model inference and/or an actor.
The data collection unit 101 is a function that provides input data to the model training unit 102 and the model inference unit 104. AI/ML algorithm specific data preparation (e.g., data pre-processing and cleaning, formatting, and transformation) is not carried out in the data collection unit 101.
Examples of input data may include measurements from UEs or different network entities, feedback from Actor 103, and output from an AI/ML model.
Training data is data needed as input for the AI/ML Model training unit 102.
Inference data is data needed as input for the AI/ML Model inference unit 104.
The model training unit 102 is a function that performs the ML model training, validation, and testing. The Model training unit 102 is also responsible for data preparation (e.g., data pre-processing and cleaning, formatting, and transformation) based on training data delivered by the data collection unit 101, if required.
Model Deployment/Update between units 102 and 104 involves deployment or update of an AI/ML model (e.g., a trained machine learning model 105a or 105b) to the model inference unit 104. The model training unit 102 uses data units as training data to train a machine learning model 105a and generates a trained machine learning model 105b from the machine learning model 105a.
The model inference unit 104 is a function that provides AI/ML model inference output (e.g., predictions or decisions) . The AI/ML model inference output is the output of the machine learning model 105b. The Model inference unit 104 is also responsible for data preparation (e.g., data pre-processing and cleaning, formatting, and transformation) based on inference data delivered by the data collection unit 101, if required.
The output shown between unit 103 and unit 104 is the inference output of the AI/ML model produced by the model inference unit 104.
Feedback between unit 103 and unit 101 is information that may be needed to derive training or inference data or performance feedback.
General aspects of using AI for AI/ML model monitoring
General aspects of an embodiment of the method are detailed in the following. Some embodiments use at least one AI/ML model to assist or monitor at least one other AI/ML model. The model monitoring can be operated as a mode, which can be activated or deactivated by at least one or any combination of {RRC, MAC-CE or DCI} . The ML model monitoring usually occurs during a time window. Within this time window, if the monitored model is continuously claimed as malfunctioning by a UE, a gNB, or a third node, the monitored model is decided as malfunctioning by a gNB, a third node, or a UE. In response, model switching or a fallback to non-AI methods may be triggered. As an alternative, if the monitored model is at least once claimed as malfunctioning, the ML model is decided as malfunctioning. As another alternative, if the number of times the monitored model is claimed as malfunctioning is greater than a threshold, the ML model is decided as malfunctioning. The threshold is configured for a UE/gNB by a gNB or a third node.
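The three alternative malfunction-decision rules within the monitoring window can be sketched as one hypothetical function; mode names and the default threshold are illustrative labels, not terms defined by the disclosure.

```python
def decide_malfunction(claims, mode="count", threshold=3):
    """Decide whether the monitored model is malfunctioning from the
    per-time-unit claims collected within the monitoring window
    (True = claimed as malfunctioning in that time unit).
    Modes mirror the alternatives above: 'continuous' (claimed in every
    time unit), 'once' (claimed at least once), or 'count' (claimed more
    than a configured threshold number of times)."""
    if mode == "continuous":
        return all(claims)
    if mode == "once":
        return any(claims)
    return sum(claims) > threshold
```

A positive decision would then trigger model switching or a fallback to non-AI methods.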
With reference to FIG. 3 to FIG. 7, an example of a UE 10 in the description may include the UE 10a. Examples of a gNB 20 in the description may include the base station 20a or 20b. Note that even though the gNB is described as an example of a base station in the following, the disclosed method may be implemented in any other type of base station, such as an eNB or a base station for beyond 5G. Uplink (UL) transmission of a control signal or data may be a transmission operation from a UE to a base station. Downlink (DL) transmission of a control signal or data may be a transmission operation from a base station to a UE. The disclosed method is detailed in the following. The UE 10 and a base station, such as a gNB 20, execute the monitoring method based on machine learning.
FIG. 3 shows an embodiment of the disclosed method. At least one wireless communication device executes a monitoring method based on machine learning. In an embodiment, the at least one wireless communication device may comprise a user equipment (UE) . In another embodiment, the at least one wireless communication device may comprise a base station. In still another embodiment, the at least one wireless communication device may comprise a combination of UEs and base stations.
With reference to FIG. 3, at least one wireless communication device executes a monitoring method for monitoring machine learning (ML) models. The at least one wireless communication device may comprise a combination of a user equipment (UE) , a base station, or a third node.
The at least one wireless communication device uses a first ML model as a monitored ML model to work for a cellular communication task (S010) . For example, the cellular communication task comprises one or more of channel state information (CSI) reporting, beam prediction in a time domain, beam prediction in a spatial domain, and positioning for a user equipment (UE) .
The at least one wireless communication device uses a second ML model as a monitoring ML model for monitoring of the first ML model (S012) .
The at least one wireless communication device evaluates performance of the monitored ML model based on the monitoring (S014) .
In an embodiment, a monitored-model-deploying device in which the monitored ML model is deployed and activated is different from a monitored-model-training device in which the monitored ML model is trained. The monitored ML model is downloaded from the monitored-model-training device to the monitored-model-deploying device. The monitored ML model is deployed and activated at one of a user equipment (UE) , a base station, or a third node. The monitored-model-deploying device may be one of the user equipment, the base station, or the third node. The monitoring ML model is trained at one of the user equipment, the base station, or the third node. The monitored-model-training device may be one of the user equipment, the base station, or the third node. The monitored ML model may be activated by a downlink control information (DCI) signal, a radio resource control (RRC) signal, or a Medium Access Control (MAC) control element (CE) .
In an embodiment, a monitored-model-deploying device in which the monitored ML model is deployed and activated is different from a model-evaluating device in which the evaluating is performed. A result of the monitoring is reported from the monitored-model-deploying device to the model-evaluating device. The monitored ML model is deployed and activated at one of a user equipment (UE) , a base station, or a third node. The monitored-model-deploying device may be one of the user equipment, the base station, or the third node. The evaluating is performed at one of the user equipment, the base station, or the third node. The model-evaluating device may be one of the user equipment, the base station, or the third node. The result of the monitoring comprises at least a combination of input and output of the monitored ML model.
In an embodiment, a monitoring-model-deploying device in which the monitoring ML model is deployed and activated is different from a monitoring-model-training device in which the monitoring ML model is trained. The monitoring ML model is downloaded from the monitoring-model-training device to the monitoring-model-deploying device. The monitoring ML model is deployed and activated at one of a user equipment (UE) , a base station, or a third node. The monitoring-model-deploying device may be one of the user equipment, the base station, or the third node. The monitoring ML model is trained at one of the user equipment, the base station, or the third node. The monitoring-model-training device may be one of the user equipment, the base station, or the third node. The monitoring ML model is activated by a downlink control information (DCI) signal, a radio resource control (RRC) signal, or a Medium Access Control (MAC) control element (CE) .
In an embodiment, a monitoring-model-deploying device in which the monitoring ML model is deployed and activated is different from a model-evaluating device in which the evaluating is performed. A result of the monitoring is reported from the monitoring-model-deploying device to the model-evaluating device. The monitoring ML model is deployed and activated at one of a user equipment (UE) , a base station, or a third node. The monitoring-model-deploying device may be one of the user equipment, the base station, or the third node. The evaluating is performed at one of the user equipment, the base station, or the third node. The model-evaluating device may be one of the user equipment, the base station, or the third node. The result of the monitoring comprises at least a combination of input and output of the monitoring ML model.
Both the monitored model and monitoring model are deployed and activated at a UE:
With reference to FIG. 4, both the monitored model and monitoring model are deployed and activated at UE 10. The monitored model and/or monitoring model are trained at gNB 20 or a third node.
A procedure of the embodiment is detailed in the following:
The gNB 20 and/or the third node trains both the monitoring model and the monitored model.
Both the monitoring model and the monitored model are downloaded to the UE.
Both the monitoring model and the monitored model are deployed at the UE.
The deployment of the monitoring model and the monitored model is confirmed by the UE 10, for example, by sending a message, a hybrid automatic repeat request (HARQ) acknowledgment (ACK), or a negative acknowledgment (NACK) indicating whether the deployment is successful.
When the deployment is successful, both the monitoring model and the monitored model are activated at the UE.
After a time duration, results of the monitoring (referred to as the monitoring results) are reported to the gNB 20. The monitoring result can be the difference between the output of the monitored model and the output of the monitoring model.
Alternatively, the report can be a measurement of the performance of the monitored model (referred to as model performance) , e.g., the confidence level or a probability indicating accuracy of the prediction of the monitored model.
Alternatively, if the measurement for monitoring is continuously qualified within a time window, the report is a status confirmation of the monitored model, and no re-training or model switching is triggered. Otherwise, the report is a request for model re-training or model switching.
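The reporting logic described above (a status confirmation when the monitoring measurement stays qualified within the time window, otherwise a request for re-training or model switching) can be sketched as follows. The mean-squared difference metric, the threshold, and all names here are illustrative assumptions, not part of the disclosure.

```python
def monitoring_report(monitored_out, monitoring_out, diff_threshold, window_ok):
    """Form a UE monitoring report from the two models' outputs.

    window_ok: True if the monitoring measurement stayed qualified for the
    whole monitoring time window (hypothetical flag for illustration).
    """
    # Difference between outputs of the monitored and monitoring models,
    # measured here as a mean squared error (one possible choice).
    diff = sum((a - b) ** 2 for a, b in zip(monitored_out, monitoring_out)) / len(monitored_out)
    if window_ok and diff <= diff_threshold:
        # Continuously qualified: confirm the monitored model's status.
        return {"type": "status_confirmation", "diff": diff}
    # Otherwise request model re-training or model switching.
    return {"type": "retrain_or_switch_request", "diff": diff}
```

A UE-side implementation would report this structure to the gNB over the configured uplink signaling.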
Examples of signaling between the gNB 20 and the UE 10 are detailed in the following. The gNB 20 configures the UE 10 by providing configuration to the UE 10. In response to the configuration, the UE 10 uses the monitoring model to monitor the monitored model. The gNB 20 can send the configuration to the UE 10 in a downlink control information (DCI) signal, a radio resource control (RRC) signal, or a Medium Access Control (MAC) control element (CE) .
The configuration may comprise a monitoring time window or a monitoring period.
The configuration may comprise a monitoring mode. A monitoring mode indicates whether the monitoring is performed by the monitoring ML model or performed by collecting ground truth data for comparison with output of the monitored ML model. In an embodiment, the monitoring mode indicates that UE 10 is monitored by another AI/ML model (i.e., monitored by the monitoring model) . In some embodiments, the monitoring mode indicates the UE 10 is monitored by collecting ground truth data without the help of the monitoring model.
Alternatively, a default setting of the monitoring mode is monitoring with the configured monitoring model, when the monitoring model is provided by gNB 20 or a third node.
Alternatively, a default setting of the monitoring mode is monitoring by collecting the ground truth data without the help of the monitoring model, when the monitoring model is not provided by gNB 20 and/or a third node.
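The default monitoring-mode rule in the two preceding paragraphs can be illustrated with a minimal sketch; the mode names are assumptions made for illustration only.

```python
def default_monitoring_mode(monitoring_model_provided: bool) -> str:
    """Select the default monitoring mode per the rule described above."""
    if monitoring_model_provided:
        # A monitoring model was provided by the gNB or a third node:
        # default to monitoring with the configured monitoring model.
        return "monitor_with_model"
    # No monitoring model provided: default to collecting ground truth data.
    return "monitor_with_ground_truth"
```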
The monitoring mode, monitored model, and/or the monitoring model can be activated/deactivated jointly or separately. An activation signal for model activation (e.g., activating the monitoring mode, the monitored model and/or the monitoring model) or a deactivation signal for model deactivation (e.g., deactivating the monitoring mode, the monitored model and/or the monitoring model) can be carried in a DCI signal, an RRC signal, or a MAC-CE.
In an embodiment, activating the monitoring model and activating the monitoring mode are two different operations. Once the monitoring model is activated, the monitored model is in the monitoring mode.
Post-processing for an AI/ML model, such as the monitored model and/or the monitoring model, can comprise fine-tuning, model quantization, model distillation, model pruning, etc. The post-processing, for example, can further comprise adjusting or reducing AI/ML model complexity or customizing the AI/ML models.
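As one illustration of the post-processing steps named above, a minimal uniform weight-quantization sketch is given below. It assumes weights are plain Python floats over a symmetric range; a real deployment would use a framework's quantization tooling, so this is only a stand-in.

```python
def quantize_weights(weights, num_levels=16, w_min=-1.0, w_max=1.0):
    """Uniformly quantize weights into num_levels levels over [w_min, w_max]."""
    step = (w_max - w_min) / (num_levels - 1)
    out = []
    for w in weights:
        w = min(max(w, w_min), w_max)    # clip to the representable range
        idx = round((w - w_min) / step)  # index of the nearest level
        out.append(w_min + idx * step)   # reconstruct the quantized value
    return out
```

Model pruning and distillation would similarly reduce model complexity, trading accuracy for a cheaper monitored or monitoring model.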
In an embodiment, the monitored model and the monitoring model are integrated into one model and are downloaded from gNB 20 to UE 10 through the air interface. The gNB configures the UE to perform post-processing by at least one of RRC signaling, MAC-CE, or DCI. Please note that at least one of {the monitored model, the monitoring model} can be obtained from post-processing of an ML model. Throughout the examples in FIG. 4 to FIG. 11, the configuration signaling for obtaining the monitored model and/or the monitoring model via post-processing, for monitoring of a one-side model or a two-side model, is at least one of {RRC, MAC-CE, or DCI signaling} .
In an embodiment, the UE 10 performs post-processing for the monitored model. That is, only the monitored model is obtained from post-processing at UE 10.
In an embodiment, the UE 10 performs post-processing for the monitoring model. That is, only the monitoring model is obtained from post-processing at UE 10.
In an embodiment, the UE 10 performs post-processing for both the monitoring model and the monitored model. That is, both the monitoring model and the monitored model are obtained from post-processing at UE 10.
Alternatively, gNB 20 performs the post-processing.
Alternatively, a third node performs the post-processing.
Alternatively, both the monitored model and the monitoring model are trained at a third node.
The monitored model is deployed and activated at UE, and the monitoring model is deployed and activated at gNB:
With reference to FIG. 5, the monitored model is deployed and activated at UE 10, and the monitoring model is deployed and activated at gNB 20.
The monitored model can be trained or deployed at gNB 20 or a third node. A procedure of the embodiment is detailed in the following:
The monitored model is downloaded to a UE 10.
The monitored model is deployed at the UE 10.
The deployment is confirmed by UE 10, for example, by sending a message or a hybrid automatic repeat request (HARQ) acknowledgment (ACK) or a negative acknowledgment (NACK) .
The monitored model is activated by UE 10 or gNB 20.
After a time duration, the output of the monitored model is reported to gNB 20. For example, the output of the monitored model may comprise CSI, position information (coordinates) , and/or beam information (beam index, RSRP) .
The monitoring mode and/or the monitoring model is activated by gNB 20.
The UE 10 reports the input and output of the monitored model to the gNB 20. The input and output of the monitored model are further processed in the gNB 20 and fed into the monitoring model.
In an embodiment, the report may comprise at least one of {the ground truth, the input of monitored model, the output of the monitored model, assistant information for the monitoring model} .
Examples of signaling between the gNB 20 and the UE 10 are detailed in the following.
The monitoring mode, monitored model, and/or the monitoring model can be activated/deactivated jointly or separately. An activation signal for model activation (e.g., activating the monitoring mode, the monitored model and/or the monitoring model) or a deactivation signal for model deactivation (e.g., deactivating the monitoring mode, the monitored model and/or the monitoring model) can be carried in a DCI signal, an RRC signal, or a MAC-CE.
The assistant information is sent from UE 10 to the monitoring model of the gNB 20 and may comprise at least one of the following: current channel conditions, L1-RSRPs, L1-SINRs, etc. RSRP stands for reference signal received power, and SINR stands for signal-to-interference plus noise ratio.
The monitored model is activated at gNB while the monitoring model is activated at UE:
With reference to FIG. 6, the monitored model is activated at gNB 20 while the monitoring model is activated at UE 10. The monitored model can be trained or deployed at gNB 20 or a third node.
A procedure of the embodiment is detailed in the following.
The monitoring model is downloaded to a UE 10.
The monitoring model is deployed at the UE 10.
The deployment is confirmed by UE 10, for example, by sending a message or a hybrid automatic repeat request (HARQ) acknowledgment (ACK) or a negative acknowledgment (NACK) to report whether the deployment is successful.
When the deployment is successful, the monitoring mode and/or the monitoring model is activated by gNB 20.
If the deployment is not successful, the gNB 20 can re-deploy the monitoring model or deploy another monitoring model.
After a time duration, output of the monitoring model is reported to the gNB 20, e.g., position information (coordinates) , beam information (beam index, RSRP) , and confidence level.
Examples of signaling between the gNB 20 and the UE 10 are detailed in the following.
The monitoring mode, monitored model, and/or the monitoring model can be activated/deactivated jointly or separately. An activation signal for model activation (e.g., activating the monitoring mode, the monitored model and/or the monitoring model) or a deactivation signal for model deactivation (e.g., deactivating the monitoring mode, the monitored model and/or the monitoring model) can be carried in a DCI signal, an RRC signal, or a MAC-CE.
The UE 10 sends the reports during the monitoring mode.
Monitored model is activated at gNB and monitoring model is activated at gNB:
With reference to FIG. 7, both the monitored model and the monitoring model are deployed at gNB 20.
The monitored model can be trained or deployed at gNB 20 or a third node.
A procedure of the embodiment is detailed in the following:
The gNB 20 informs the UE 10 that the monitored model is activated.
The UE 10 reports the input of the monitored model to the gNB 20.
The gNB 20 informs the UE 10 that the monitoring model is activated.
The UE 10 reports the input of the monitoring model to the gNB 20.
In some examples, the input of the monitored model and the input of the monitoring model can be the same.
In some examples, the input of the monitored model and the input of the monitoring model can be different. For example, the input of the monitored model and the input of the monitoring model can be with different amounts of input beam indices and/or L1-RSRP.
Examples of signaling between the gNB 20 and the UE 10 are detailed in the following.
The monitoring mode, monitored model, and/or the monitoring model can be activated/deactivated jointly or separately. An activation signal for model activation (e.g., activating the monitoring mode, the monitored model and/or the monitoring model) or a deactivation signal for model deactivation (e.g., deactivating the monitoring mode, the monitored model and/or the monitoring model) can be carried in a DCI signal, an RRC signal, or a MAC-CE.
As an example, the reports of the monitored model during the monitoring mode can be assistant information, comprising at least one of {time of arrival (TOA) , time difference of arrival (TDOA) , channel conditions, Doppler, L1-SINR} .
Alternatively, the AI/ML model can be running at UE 10/gNB 20 and monitored by a third node.
At least one of the monitoring model and the monitored model is trained at the third node.
The monitoring model and the monitored model are delivered to the UE 10/gNB 20, respectively.
The UE 10 requests the third node for the monitoring model and/or the monitored model.
The third node may send the AI/ML models (the monitoring model and/or the monitored model) to the UE 10, and the UE 10 sends the confirmation to the third node.
The gNB 20 requests the third node for the monitoring model and/or the monitored model. The third node may send the AI/ML models (the monitoring model and/or the monitored model) to the gNB 20, and the gNB 20 sends the confirmation to the third node.
Alternatively, if the monitored model resides in the third node, the UE 10 may send the report to the third node, instead of the gNB 20.
Alternatively, if the monitored model resides in the third node, the UE 10 may send the report to the gNB 20.
The input and output of monitoring AI/ML model:
As an example, the input and output of the monitoring model have the same quantities/metrics as the monitored model. For example, the monitoring model and the monitored model both output predictions of the best beams in a time domain or a spatial domain from the same input. For example, the monitoring model and the monitored model both output positions of the UE 10 from the same input information (e.g., reference signals, TDOA, angle of arrival (AOA) , channel impulse response (CIR) ) .
The difference between the monitoring model and the monitored model is a difference in complexity level. For example, the monitoring ML model has a first complexity level, the monitored ML model has a second complexity level, and the first complexity level is greater than the second complexity level.
As an example, the monitored model is a model with low complexity, e.g., after post-processing. Usually, the monitoring model with high complexity has advanced performance. The UE 10 can execute AI/ML models of different complexity levels according to UE capability of the UE 10. The UE 10 can run a low complexity AI/ML model frequently and a high complexity model less frequently just for model monitoring.
Alternatively, the UE 10 can deploy a low complexity model as a monitored model. A high complexity model runs at gNB 20 as a monitoring model.
Another example is model generalization:
A generalized model is known as an AI/ML model that works for all subsets of unseen data. Usually, a generalized AI/ML model has qualified performance in a range of settings and scenarios. A scenario-specific AI/ML model has better performance in its specific scenario. To measure the performance of a generalized AI/ML model under certain settings, a scenario-specific AI/ML model can be selected as the monitoring model or the baseline model in these settings.
The generalized AI/ML model can be deployed at UE 10 or gNB 20 for inference, and the scenario-specific AI/ML model can be deployed at UE 10 or gNB 20 for measuring the generalization capability of the generalized AI/ML model in these scenarios. If the outputs of these two models differ too much, e.g., by more than a threshold, the generalized AI/ML model is not considered feasible.
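The feasibility check described above can be sketched as follows, assuming the difference between the two models' outputs is measured as a mean squared error against a configured threshold; the metric choice and all names are illustrative assumptions.

```python
def generalized_model_feasible(general_out, specific_out, threshold):
    """Return True if the generalized model's output is close enough to the
    scenario-specific (baseline) model's output in this scenario."""
    # Mean squared difference between the two models' outputs.
    mse = sum((g - s) ** 2 for g, s in zip(general_out, specific_out)) / len(general_out)
    # Too large a difference means the generalized model is not feasible here.
    return mse <= threshold
```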
Output of monitoring AI/ML model: the synthesized data for replacing ground truth.
A monitoring mode indicates whether the monitoring is performed by the monitoring ML model or performed by collecting ground truth data for comparison with output of the monitored ML model. The monitoring ML model generates synthesized data from assistant information, and the synthesized data replaces the ground truth data in comparison with output of the monitored ML model in the evaluating. The assistant information comprises at least one of channel conditions, a beam index, reference signal received power (RSRP) , signal-to-interference plus noise ratio (SINR) , and channel impulse response (CIR) .
The ground truth can be difficult to obtain (e.g., the true position of UE 10) or tedious to obtain (e.g., the best beam) . Model monitoring usually works in a reactive manner. That is, the model becomes unreliable after the performance degradation occurs.
The ground truth data can be replaced by synthesized data generated by an AI/ML model. The input of the AI/ML model can be the assistant information (e.g., channel conditions, beam index, L1-RSRP, L1-SINR, CIR) , UE reported data/noisy reference signals. The output of the AI/ML model is the synthesized data.
The input of the AI/ML model can be historical data, such as the historical measurement results or monitoring results. The output of the AI/ML model can be the model performance at the current time.
Output of monitoring AI/ML model: confidence level.
The input of the monitoring AI/ML model can be the assistant information (e.g., channel conditions, a beam index, L1-RSRP, L1-SINR, CIR) or UE reported data/noisy reference signals. The output of the monitoring model is a confidence level.
In an example, the historical data can be the confidence level. The AI/ML model predicts a confidence level indicating whether the monitored model may work properly in at least the subsequent time unit.
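The confidence-level prediction above can be illustrated with a simple stand-in: an exponential moving average over historical confidence levels, compared against a "works properly" threshold. The smoothing constant and threshold are assumptions; the text envisions an AI/ML model here, so this is only a sketch of the interface, not of the model itself.

```python
def predict_confidence(history, alpha=0.5):
    """Predict the next-time-unit confidence level from historical values
    via an exponential moving average (illustrative stand-in for an AI/ML
    predictor)."""
    conf = history[0]
    for h in history[1:]:
        conf = alpha * h + (1 - alpha) * conf
    return conf

def monitored_model_ok(history, threshold=0.8):
    """True if the monitored model is predicted to work properly in at
    least the subsequent time unit."""
    return predict_confidence(history) >= threshold
```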
Two-side model:
In an embodiment, the monitoring model comprises a two-side model and has a first monitoring sub-model that works in a first model-deploying device and a second monitoring sub-model that works in a second model-deploying device.
The monitored model comprises another two-side model and has a first monitored sub-model that works in the first model-deploying device and a second monitored sub-model that works in the second model-deploying device.
The first model-deploying device comprises a user equipment (UE) , and the second model-deploying device comprises a base station.
In an embodiment, the monitoring is performed to monitor the output of the first monitoring sub-model and output of the first monitored sub-model, and the evaluating is performed to evaluate the difference between the output of the first monitoring sub-model and the output of the first monitored sub-model.
In an embodiment, the monitoring is performed to monitor output of the second monitoring sub-model and output of the second monitored sub-model, and the evaluating is performed to evaluate the difference between the output of the second monitoring sub-model and the output of the second monitored sub-model.
The difference between the output of the first monitoring sub-model and the output of the first monitored sub-model or the difference between the output of the second monitoring sub-model and the output of the second monitored sub-model is calculated using at least one of mean square error (MSE) , normalized mean square error (NMSE) , cosine similarity, confidence level, and accuracy.
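The difference metrics listed above (MSE, NMSE, and cosine similarity) can be sketched in pure Python for two sub-model output vectors; a real implementation would likely use a numerical library such as NumPy. Confidence level and accuracy depend on the task and are omitted here.

```python
import math

def mse(a, b):
    """Mean square error between two output vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def nmse(a, b):
    """MSE normalized by the mean power of the reference vector a."""
    power = sum(x ** 2 for x in a) / len(a)
    return mse(a, b) / power

def cosine_similarity(a, b):
    """Cosine of the angle between the two output vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x ** 2 for x in a))
    nb = math.sqrt(sum(y ** 2 for y in b))
    return dot / (na * nb)
```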
Two-sided model for positioning:
For the positioning case, the two-side model works as follows: the first part (a first monitored sub-model NN1_1 or a first monitoring sub-model NN2_1) extracts features (time differences, AOA, TOA, TDOA, etc.) from input (channel impulse response/reference signals) . The features are sent to the gNB 20 for further processing to get the output (UE 10 position information/coordinates) .
The first monitored sub-model NN1_1 is coupled with a second monitored sub-model NN1_2. The first monitoring sub-model NN2_1 is coupled with a second monitoring sub-model NN2_2.
The first monitored sub-model NN1_1 and the second monitored sub-model NN1_2, compared with the first monitoring sub-model NN2_1 and the second monitoring sub-model NN2_2, have better performance, for example with a more sophisticated model structure and/or higher complexity.
For the two-side model, the model monitoring can be separately performed.
FIG. 8 shows an example of a two-side model for position. A second two-side model (a first monitoring sub-model NN2_1 and a second monitoring sub-model NN2_2) serving as the monitoring model is deployed for monitoring of the first two-side model (the first monitored sub-model NN1_1 and the second monitored sub-model NN1_2) that serves as the monitored model.
The monitoring can be performed with at least one of the following ways.
The monitoring can be at the outputs of the first monitored sub-model NN1_1 and the first monitoring sub-model NN2_1. The measured KPI can be at least one of the following: mean square error (MSE) , normalized mean square error (NMSE) , cosine similarity, confidence level, and accuracy.
The monitoring can be at the outputs of the second monitored sub-model NN1_2 and the second monitoring sub-model NN2_2. The measured KPI can be at least one of the following: MSE, NMSE, cosine similarity, confidence level, and accuracy.
Alternatively, at least one of {the first monitoring sub-model NN2_1 and the second monitoring sub-model NN2_2} are deployed at a third node.
In another example, for the two-side model, the model monitoring can be jointly performed at UE 10 side.
In an embodiment, the second monitoring sub-model and the second monitored sub-model are integrated into one sub-model.
FIG. 9 shows a two-side model for positioning. A two-side model (the first monitoring sub-model NN2_1 and a second sub-model NN12) is deployed for monitoring of another two-side model (the first monitored sub-model NN1_1 and the second sub-model NN12) . At the gNB side, the second sub-model NN12 can process both the output of the first monitored sub-model NN1_1 and the second monitored sub-model NN1_2.
The monitoring can be performed in at least one of the following ways.
The monitoring can be performed for the difference between the output of the first monitored sub-model NN1_1 and the output of the first monitoring sub-model NN2_1. The KPI for measuring the difference between the output of the first monitored sub-model NN1_1 and the output of the first monitoring sub-model NN2_1 can be at least one of the following: MSE, NMSE, cosine similarity, confidence level, and accuracy.
The monitoring can be performed for the difference between the output of the second sub-model NN12 obtained from the output of the first monitored sub-model NN1_1 and the output of the second sub-model NN12 obtained from the output of the first monitoring sub-model NN2_1. The KPI for measuring the difference can be at least one of the following: MSE, NMSE, cosine similarity, confidence level, and accuracy.
Alternatively, the second sub-model NN12 is deployed at a third node.
In another example, for the two-side model, the monitoring can be jointly performed at gNB side.
In an embodiment, the first monitoring sub-model and the first monitored sub-model may be integrated into one sub-model.
FIG. 10 shows a two-side model for positioning. A two-side model (the first sub-model NN3_1 and the second monitoring sub-model NN3_2’) is deployed for monitoring another two-side model (the first sub-model NN3_1 and the second monitored sub-model NN3_2) . At the gNB side, the second monitored sub-model NN3_2 receives the output of the first sub-model NN3_1 as its input, and the second monitoring sub-model NN3_2’ also receives the output of the first sub-model NN3_1 as its input. The second monitoring sub-model NN3_2’ serves as a monitoring model to monitor the performance of the second monitored sub-model NN3_2.
A procedure of the embodiment is detailed in the following:
The gNB 20 informs the UE 10 of the deployment of the second monitoring sub-model NN3_2’ as the monitoring model. The monitoring of the first sub-model NN3_1 and the second monitored sub-model NN3_2 is performed at gNB 20.
In another example, the second monitoring sub-model NN3_2’ is deployed at a third node. The output of the first sub-model NN3_1 is reported to the gNB 20 and the third node simultaneously.
CSI feedback compression:
In an embodiment, the monitoring model comprises a first auto-encoder for reporting channel state information (CSI) , the first monitoring sub-model serves as a first encoder of the first auto-encoder operable to compress CSI, and the second monitoring sub-model serves as a first decoder of the first auto-encoder operable to decompress the compressed CSI from the first encoder.
In an embodiment, the monitored model comprises a second auto-encoder for reporting channel state information (CSI) , the first monitored sub-model serves as a second encoder of the second auto-encoder operable to compress CSI, and the second monitored sub-model serves as a second decoder of the second auto-encoder operable to decompress the compressed CSI from the second encoder.
In an embodiment, the first auto-encoder and the second auto-encoder report CSI according to a configured monitoring period. In an embodiment, the monitoring period is reported by a UE. In an embodiment, a base station determines the monitoring period based on UE capability.
For the two-side model, e.g., the auto-encoder, a second model is deployed to measure the performance of the first model.
In an embodiment, the first model and the second model can have different complexity.
Alternatively, the first model and the second model can have different quantization levels of encoder output. For example, the monitoring ML model has a first quantization level, the monitored ML model has a second quantization level, and the first quantization level is greater than the second quantization level.
FIG. 11 shows two auto-encoder models for CSI feedback compression and CSI feedback. One auto-encoder (referred to as a first auto-encoder) may comprise encoder A10 and decoder A11. The other auto-encoder (referred to as a second auto-encoder) may comprise encoder A20 and decoder A21.
The first auto-encoder can be a more sophisticated model, while the second auto-encoder is a simplified model. The first auto-encoder can be a touchstone or a benchmark for the performance of the second auto-encoder.
Usually, the raw CSI and/or eigenvectors need to be reported by the UE with high precision as the ground truth for model monitoring and/or data collection. With two auto-encoder models deployed in this way, the ground truth is no longer needed for model monitoring. Thus, the feedback overhead of reporting raw CSI and/or eigenvectors as the ground truth for model monitoring and/or data collection is reduced.
The principle for selecting an AI/ML model as the touchstone (monitoring model) can be one or more stricter performance measures, such as stricter cosine similarity, MMSE/throughput, and/or prediction accuracy (e.g., 95%prediction accuracy) .
As another example, the second model, or both models, can be obtained, e.g., by post-processing, from the same AI/ML model. Thus, initially, one model is downloaded from the gNB 20 to the UE 10 if the auto-encoder model is trained at gNB 20.
The compressed vectors are reported by UE 10. A procedure of configuring UE 10 to run two AI/ML models is detailed in the following. The gNB 20 may configure the UE 10, and the UE 10 reports the two compressed vectors.
The UE 10 reports the compressed vectors in one shot. When radio resources are limited, the UE 10 reports the compressed vectors according to a priority rule.
In an embodiment, when multiple compressed vectors to be reported overlap, the compressed vector with the larger size is dropped. When the compressed vectors have the same size, the one with a greater age is dropped. Alternatively, the UE 10 randomly drops one vector.
Alternatively, the compressed vector with the smaller size is dropped to ensure performance, since a lower compression ratio gives better performance. When the compressed vectors have the same size, the one with a greater age is dropped. Alternatively, the UE 10 randomly drops one vector.
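The first priority rule above (drop the compressed vector with the larger size; on a size tie, drop the one with greater age) can be sketched as follows. The report representation and field names are assumptions made for illustration.

```python
def choose_vector_to_drop(reports):
    """Pick which overlapping compressed-vector report to drop.

    reports: list of dicts with 'size' (e.g., bits) and 'age' (e.g., slots).
    Returns the index of the report to drop: largest size first, and on a
    size tie, the greatest age.
    """
    worst = 0
    for i, r in enumerate(reports):
        w = reports[worst]
        # Lexicographic comparison implements: size first, then age.
        if (r["size"], r["age"]) > (w["size"], w["age"]):
            worst = i
    return worst
```

The alternative rules (drop the smaller vector, or drop randomly) would only change the comparison key.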
The monitoring period is a UE capability of the UE 10 and can be reported by the UE 10. Alternatively, the gNB 20 can explicitly determine the monitoring period based on the UE capability (storage, CPU, memory, floating-point operations per second (FLOPS) , …) of the UE 10. After receiving the UE capability from the UE 10, the gNB 20 configures the monitoring period.
Alternatively, the gNB 20 configures the monitoring period regardless of the UE capability of the UE 10.
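A gNB deriving the monitoring period from a reported UE capability, as described above, can be illustrated with a minimal sketch; the capability fields, thresholds, and period values are all assumptions, not values from any specification.

```python
def monitoring_period_ms(ue_capability):
    """Derive a monitoring period (ms) from a UE capability report
    (hypothetical fields such as 'flops')."""
    flops = ue_capability.get("flops", 0)
    # A more capable UE can run the monitoring model more often,
    # i.e., with a shorter monitoring period.
    if flops >= 1e12:
        return 10
    if flops >= 1e9:
        return 100
    return 1000
```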
In an embodiment, the UE 10 reports the compressed vectors with different periods.
Please note that the method in this embodiment is not limited to the model monitoring. When multiple compressed vectors have to be reported with a limited resource, the UE 10 reports the compressed vector according to a priority rule.
In one embodiment, the compressed vector with a greater age is dropped. When the compressed vectors have the same age, the one with a larger size is dropped. Alternatively, the UE 10 randomly drops one vector.
Please note that the deployments of the monitoring model and the monitored model for CSI feedback compression may follow similar approaches to those of the two-side positioning model, such as being jointly or separately performed at the gNB 20/UE 10 side, or even at a third node. The related signaling may be reported to the third node, or reported to the gNB 20 and forwarded to the third node.
ML assisted non-AI for all use cases:
In another example, the monitoring model outputs the synthesized data for replacing ground truth. The synthesized data is produced for further processing. One usage of the synthesized data is model monitoring.
The synthesized data is generated according to assistant information, e.g., current channel conditions, an indoor environment, an outdoor environment, a moving speed of the UE 10, Doppler shift, and others.
The assistant information and synthesized data can be collected and sent to a node as hyper-parameters for an AI/ML model. In an example, this AI/ML model is the monitoring model that can generate the synthesized data.
When a monitoring model is deployed at UE 10, the UE 10 can obtain the assistant information, such as current channel conditions, for the monitoring model.
When a monitoring model is deployed at gNB 20/third node, the UE 10 can report the assistant information to the gNB 20/third node.
In the example of a monitored model trained to provide positioning, the ground truth of the true position is difficult to obtain. The UE 10 may need to get close to a positioning reference unit (PRU) or some tag to obtain its true position at a given time; that is, the UE 10 can only be monitored with reference to some tags or PRUs, which limits model monitoring at UE 10. An AI/ML model for generating data (referred to as a data generation neural network) can be deployed to generate the input data (e.g., the channel impulse response (CIR) , TOA, reference signals) and output data. The input data and the output data of the data generation neural network are used by the monitoring model as corresponding synthesized data for replacing ground truth, to monitor whether the monitored model works properly with the synthesized data.
For example, the input data in the synthesized data may comprise one or more of CIR, RS, TDOA, AOA, angle of departure (AOD), and channel conditions. For example, the output data in the synthesized data may comprise position information. The synthesized data comprises input data for the monitoring model associated with the output data for the monitoring model. For example, a format of the synthesized data comprises:
{input [CIR/RS/TDOA/AOA/AOD/channel conditions], output [position information]}.
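As a sketch of this positioning record, the following hedged Python example (field names and the stand-in monitored model are illustrative assumptions only) packages a synthesized sample in the {input, output} format and compares a monitored model's output against the synthesized "ground truth" position:

```python
# Illustrative sketch: a synthesized positioning sample in the
# {input, output} format, used to check a monitored positioning model.
# All field names and values are hypothetical placeholders.

def make_positioning_sample(cir, toa, position):
    return {"input": {"CIR": cir, "TOA": toa}, "output": {"position": position}}

def position_error(predicted, truth):
    """Euclidean distance between predicted and synthesized-truth positions."""
    return ((predicted[0] - truth[0]) ** 2 + (predicted[1] - truth[1]) ** 2) ** 0.5

sample = make_positioning_sample(cir=[0.9, 0.4, 0.1], toa=1.2e-6, position=(10.0, 20.0))

# A stand-in monitored model output that is slightly off the synthesized truth.
monitored_output = (10.5, 20.5)
err = position_error(monitored_output, sample["output"]["position"])
assert abs(err - 0.5 * 2 ** 0.5) < 1e-9  # ~0.707 m error
```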
In an example of a monitored model trained to generate beam management data, collection of ground truth data costs both resources and time, so synthesized data can be used to replace the ground truth.
For example, the input data in the synthesized data may comprise one or more subsets of all beams. For example, the output data in the synthesized data may comprise one or more predicted beams.
The synthesized data comprises input data associated with the output data, for example in one of the following formats:
{input [subset of all beams], output [predicted beam(s)]};
{input [the measurements of the subset of all beams], output [the measurements of the predicted beam(s) in the time or spatial domain]}; or
{input [the measurements of the subset of all beams, and the beam IDs of these subset beams], output [the measurements of the predicted beam(s) in the time or spatial domain, and the beam IDs of these predicted beams]}.
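A hedged sketch of such a beam-management record follows; the field names and the RSRP values are illustrative placeholders, not part of the disclosure:

```python
# Illustrative sketch of the synthesized beam-management record described
# above: a measured beam subset (with IDs) as input, predicted beam
# measurements (with IDs) as output. Values are placeholders.

def make_beam_sample(subset_measurements, subset_ids,
                     predicted_measurements, predicted_ids):
    return {
        "input": {"measurements": subset_measurements, "beam_ids": subset_ids},
        "output": {"measurements": predicted_measurements, "beam_ids": predicted_ids},
    }

sample = make_beam_sample(
    subset_measurements=[-80.0, -85.5, -90.2],  # RSRP (dBm) of a measured beam subset
    subset_ids=[3, 7, 12],
    predicted_measurements=[-78.3],             # predicted best-beam RSRP
    predicted_ids=[5],
)
best_measured = max(sample["input"]["measurements"])
assert best_measured == -80.0
assert sample["output"]["beam_ids"] == [5]
```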
Monitoring schemes:
The gNB 20 configures the UE 10 by providing configuration regarding one or more monitoring schemes to the UE 10. This configuration is carried in at least one of RRC signaling, a MAC-CE, or DCI signaling. In response to the configuration, the UE 10 uses one or more monitoring schemes. The one or more monitoring schemes may be selected from one or more of:
● collecting and/or reporting the ground truth;
● collecting or generating and/or reporting synthesized data for replacing ground truth; and
● using the same type of input and/or output of the monitoring model and the monitored model.
More specifically, the monitoring ML model monitors the first ML model according to an activated monitoring scheme among a plurality of monitoring schemes, and the plurality of monitoring schemes comprises:
● comparing the output of the monitored ML model with ground truth for the monitored ML model;
● comparing the output of the monitored ML model with synthesized data for replacing the ground truth; and
● comparing the output of the monitored ML model with the output of the monitoring model based on the same input of the monitored ML model and the monitoring model.
One of the plurality of monitoring schemes is activated to be the activated monitoring scheme according to a configuration. The configuration can be carried in a downlink control information (DCI) signal, a radio resource control (RRC) signal, or a Medium Access Control (MAC) control element (CE) .
Alternatively, the configuration is configured by an RRC signal or a MAC-CE and activated by DCI signaling.
Alternatively, the configuration is configured by an RRC signal or a MAC-CE and deactivated by DCI signaling. For multiple monitoring models, the particular locations of the monitoring model and the monitored model can be any combination of some or all of the examples in FIG. 4 to FIG. 11.
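The three monitoring schemes can be sketched as a simple dispatch on the activated scheme. The threshold-based ok/malfunction decision below is an illustrative assumption, not part of the disclosure:

```python
# Hedged sketch: dispatch on the activated monitoring scheme — compare the
# monitored model's output against ground truth, against synthesized data,
# or against the monitoring model's own output on the same input.
# The scalar outputs and fixed threshold are illustrative assumptions.

def monitor(scheme, monitored_out, *, ground_truth=None, synthesized=None,
            monitoring_out=None, threshold=1.0):
    if scheme == "ground_truth":
        reference = ground_truth
    elif scheme == "synthesized":
        reference = synthesized
    elif scheme == "monitoring_model":
        reference = monitoring_out
    else:
        raise ValueError(f"unknown scheme: {scheme}")
    error = abs(monitored_out - reference)
    return "ok" if error <= threshold else "malfunction"

# The activated scheme would be switched by configuration (RRC/MAC-CE/DCI).
assert monitor("ground_truth", 5.2, ground_truth=5.0) == "ok"
assert monitor("synthesized", 5.2, synthesized=9.0) == "malfunction"
assert monitor("monitoring_model", 5.2, monitoring_out=5.1) == "ok"
```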
Multiple Models:
In an embodiment, the monitoring model comprises multiple component monitoring models rather than a single model, similar to a random forest in ML. The outputs of the multiple component monitoring models are mathematically processed, e.g., by averaging, maximum, minimum, etc., as a benchmark of the component monitoring models. The benchmark is treated as the output of the monitoring model or as a result of the monitoring. The outcomes of the monitoring models are combined either by averaging or by voting. Voting means majority wins: when a majority of the monitoring models (or a majority of the monitoring sub-models) indicate that the monitored model malfunctions, the monitored model is decided as malfunctioning. Alternatively, if at least one scheme indicates that the monitored model malfunctions, the monitored model is decided as malfunctioning; or, if all schemes indicate that the monitored model malfunctions, the monitored model is decided as malfunctioning.
The monitoring model, as configured by the gNB 20 or the third node, is configurable to include only one component monitoring model or multiple component monitoring models according to a configuration. The benchmark of the monitoring models can be carried in a downlink control information (DCI) signal, a radio resource control (RRC) signal, or a Medium Access Control (MAC) control element (CE). The configuration can likewise be carried in a DCI signal, an RRC signal, or a MAC CE to inform the UE 10.
By default, the configuration specifies a single AI/ML model as the one monitoring model.
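The averaging and voting rules above can be sketched as follows; the decision helpers and boolean verdicts are illustrative stand-ins for component monitoring model outcomes:

```python
# Hedged sketch of combining multiple component monitoring models, per the
# description above: average the outputs as a benchmark, or decide
# malfunction by majority voting, by "any", or by "all".

def combine_average(outputs):
    """Average the component outputs as a benchmark of the monitoring model."""
    return sum(outputs) / len(outputs)

def decide_by_voting(verdicts):
    """Majority wins: malfunction if most component models flag it."""
    return sum(verdicts) > len(verdicts) / 2

def decide_any(verdicts):
    """Malfunction if at least one component model flags it."""
    return any(verdicts)

def decide_all(verdicts):
    """Malfunction only if every component model flags it."""
    return all(verdicts)

verdicts = [True, True, False]  # True = "monitored model malfunctions"
assert decide_by_voting(verdicts) is True
assert decide_any(verdicts) is True
assert decide_all(verdicts) is False
assert abs(combine_average([0.2, 0.4, 0.6]) - 0.4) < 1e-9
```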
FIG. 12 is a block diagram of an example system 700 for wireless communication according to an embodiment of the present disclosure. Embodiments described herein may be implemented into the system using any suitably configured hardware and/or software. FIG. 12 illustrates the system 700 including a radio frequency (RF) circuitry 710, a baseband circuitry 720, a processing unit 730, a memory/storage 740, a display 750, a camera 760, a sensor 770, and an input/output (I/O) interface 780, coupled with each other as illustrated.
The processing unit 730 may include circuitry, such as, but not limited to, one or more single-core or multi-core processors. The processors may include any combinations of general-purpose processors and dedicated processors, such as graphics processors and application processors. The processors may be coupled with the memory/storage and configured to execute instructions stored in the memory/storage to enable various applications and/or operating systems running on the system.
The radio control functions may include, but are not limited to, signal modulation, encoding, decoding, radio frequency shifting, etc. In some embodiments, the baseband circuitry may provide for communication compatible with one or more radio technologies. For example, in some embodiments, the baseband circuitry may support communication with 5G NR, LTE, an evolved universal terrestrial radio access network (EUTRAN) and/or other wireless metropolitan area networks (WMAN) , a wireless local area network (WLAN) , a wireless personal area network (WPAN) . Embodiments in which the baseband circuitry is configured to support radio communications of more than one wireless protocol may be referred to as multi-mode baseband circuitry. In various embodiments, the baseband circuitry 720 may include circuitry to operate with signals that are not strictly considered as being in a baseband frequency. For example, in some embodiments, baseband circuitry may include circuitry to operate with signals having an intermediate frequency, which is between a baseband frequency and a radio frequency.
In various embodiments, the system 700 may be a mobile computing device such as, but not limited to, a laptop computing device, a tablet computing device, a netbook, an ultrabook, a smartphone, etc. In various embodiments, the system may have more or fewer components, and/or different architectures. Where appropriate, the methods described herein may be implemented as a computer program. The computer program may be stored on a storage medium, such as a non-transitory storage medium.
The embodiment of the present disclosure is a combination of techniques/processes that can be adopted in 3GPP specification to create an end product.
If the software function unit is realized, used, and sold as a product, it can be stored in a computer-readable storage medium. Based on this understanding, the technical solution proposed by the present disclosure can be realized essentially or partially in the form of a software product, or the part of the technical solution that is beneficial over the conventional technology can be realized in the form of a software product. The software product is stored in a storage medium and includes a plurality of commands for a computational device (such as a personal computer, a server, or a network device) to run all or some of the steps disclosed by the embodiments of the present disclosure. The storage medium includes a USB disk, a mobile hard disk, a read-only memory (ROM), a random-access memory (RAM), a floppy disk, or other kinds of media capable of storing program codes.
The disclosure provides a monitoring method for monitoring ML models, and embodiments that address problems in the monitoring of AI/ML models. In some embodiments of the disclosure, monitoring of an AI/ML model is no longer constrained by the time required for collecting ground truth data, and the AI/ML model can be monitored in a timely manner.
The embodiments of the disclosure can be applied to evaluating model generalization, including monitoring and evaluating the generalized AI/ML models.
In some embodiments of the disclosure, synthetic ground truth data is synthesized and treated as ground truth data to assist proactive switching or pre-training of AI/ML models. System performance can thus be improved.
While the present disclosure has been described in connection with what is considered the most practical and preferred embodiments, it is understood that the present disclosure is not limited to the disclosed embodiments but is intended to cover various arrangements made without departing from the scope of the broadest interpretation of the appended claims.
Claims (44)
- A monitoring method for monitoring machine learning (ML) models, executable in at least one wireless communication device, comprising:
using a first ML model as a monitored ML model to work for a cellular communication task;
using a second ML model as a monitoring ML model for monitoring of the first ML model; and
evaluating performance of the monitored ML model based on the monitoring.
- The method of claim 1, wherein the cellular communication task comprises one or more of channel state information (CSI) reporting, beam prediction in a time domain, beam prediction in a spatial domain, and positioning for a user equipment (UE) .
- The method of claim 1, wherein a monitored-model-deploying device in which the monitored ML model is deployed and activated is different from a monitored-model-training device in which the monitored ML model is trained; and
the monitored ML model is downloaded from the monitored-model-training device to the monitored-model-deploying device.
- The method of claim 3, wherein the monitored ML model is deployed and activated at one of a user equipment (UE), a base station, or a third node; and
the monitored ML model is trained at one of the user equipment, the base station, or the third node.
- The method of claim 4, wherein the monitored ML model is activated by a downlink control information (DCI) signal, a radio resource control (RRC) signal, or a Medium Access Control (MAC) control element (CE) .
- The method of claim 1, wherein a monitored-model-deploying device in which the monitored ML model is deployed and activated is different from a model-evaluating device in which the evaluating is performed; and
a result of the monitoring is reported from the monitored-model-deploying device to the model-evaluating device.
- The method of claim 6, wherein the monitored ML model is deployed and activated at one of a user equipment (UE), a base station, or a third node; and
the evaluating is performed at one of the user equipment, the base station, or the third node.
- The method of claim 6, wherein the result of the monitoring comprises at least a combination of input and output of the monitored ML model.
- The method of claim 1, wherein a monitoring-model-deploying device in which the monitoring ML model is deployed and activated is different from a monitoring-model-training device in which the monitoring ML model is trained; and
the monitoring ML model is downloaded from the monitoring-model-training device to the monitoring-model-deploying device.
- The method of claim 9, wherein the monitoring ML model is deployed and activated at one of a user equipment (UE), a base station, or a third node; and
the monitoring ML model is trained at one of the user equipment, the base station, or the third node.
- The method of claim 10, wherein the monitoring ML model is activated by a downlink control information (DCI) signal, a radio resource control (RRC) signal, or a Medium Access Control (MAC) control element (CE) .
- The method of claim 1, wherein a monitoring-model-deploying device in which the monitoring ML model is deployed and activated is different from a model-evaluating device in which the evaluating is performed; and
a result of the monitoring is reported from the monitoring-model-deploying device to the model-evaluating device.
- The method of claim 12, wherein the monitoring ML model is deployed and activated at one of a user equipment (UE), a base station, or a third node; and
the evaluating is performed at one of the user equipment, the base station, or the third node.
- The method of claim 12, wherein the result of the monitoring comprises at least a combination of input and output of the monitoring ML model.
- The method of claim 1, wherein a monitoring mode indicates whether the monitoring is performed by the monitoring ML model or performed by collecting ground truth data for comparison with output of the monitored ML model.
- The method of claim 15, wherein the monitoring ML model generates synthesized data from assistant information, and the synthesized data replaces the ground truth data in comparison with output of the monitored ML model in the evaluating.
- The method of claim 16, wherein the assistant information comprises at least one of channel condition, a beam index, reference signal received power (RSRP) , signal-to-interference plus noise ratio (SINR) , channel impulse response (CIR) , an indoor environment, an outdoor environment, a moving speed of a UE, and Doppler shift.
- The method of claim 16, wherein the synthesized data comprises input data for the monitoring model associated with output data for the monitoring model.
- The method of claim 1, wherein the monitoring ML model has a first complexity level, the monitored ML model has a second complexity level, and the first complexity level is greater than the second complexity level.
- The method of claim 1, wherein the monitoring model comprises a scenario-specific ML model and the monitored model comprises a generalized ML model.
- The method of claim 1, wherein the monitoring model and the monitored model both output prediction of the best beams in a time domain or a spatial domain from the same input.
- The method of claim 1, wherein the monitoring model and the monitored model both output UE positions from the same input.
- The method of claim 1, wherein the monitoring model comprises a two-side model and has a first monitoring sub-model that works in a first model-deploying device and a second monitoring sub-model that works in a second model-deploying device; and
the monitored model comprises another two-side model and has a first monitored sub-model that works in the first model-deploying device and a second monitored sub-model that works in the second model-deploying device.
- The method of claim 23, wherein the first model-deploying device comprises a user equipment (UE) , and the second model-deploying device comprises a base station.
- The method of claim 23, wherein the monitoring is performed to monitor output of the first monitoring sub-model and output of the first monitored sub-model, and the evaluating is performed to evaluate difference between the output of the first monitoring sub-model and the output of the first monitored sub-model; or
the monitoring is performed to monitor output of the second monitoring sub-model and output of the second monitored sub-model, and the evaluating is performed to evaluate difference between the output of the second monitoring sub-model and the output of the second monitored sub-model.
- The method of claim 25, wherein the first monitoring sub-model and the first monitored sub-model are integrated into one sub-model; or
the second monitoring sub-model and the second monitored sub-model are integrated into one sub-model.
- The method of claim 25, wherein the difference between the output of the first monitoring sub-model and the output of the first monitored sub-model or the difference between the output of the second monitoring sub-model and the output of the second monitored sub-model is calculated using at least one of mean square error (MSE) , normalized mean square error (NMSE) , cosine similarity, confidence level, and accuracy.
- The method of claim 23, wherein the monitoring model comprises a first auto-encoder for reporting channel state information (CSI), the first monitoring sub-model serves as a first encoder of the first auto-encoder operable to compress CSI, and the second monitoring sub-model serves as a first decoder of the first auto-encoder operable to decompress the compressed CSI from the first encoder; and
the monitored model comprises a second auto-encoder for reporting channel state information (CSI), the first monitored sub-model serves as a second encoder of the second auto-encoder operable to compress CSI, and the second monitored sub-model serves as a second decoder of the second auto-encoder operable to decompress the compressed CSI from the second encoder.
- The method of claim 28, wherein the monitoring ML model has a first quantization level, the monitored ML model has a second quantization level, and the first quantization level is greater than the second quantization level.
- The method of claim 28, wherein the first auto-encoder and the second auto-encoder report CSI according to a configured monitoring period.
- The method of claim 30, wherein the monitoring period is reported by a UE; ora base station determines the monitoring period based on UE capability.
- The method of claim 1, wherein the monitoring ML model monitors the first ML model according to an activated monitoring scheme among a plurality of monitoring schemes, and the plurality of monitoring schemes comprises:
comparing output of the monitored ML model with ground truth for the monitored ML model;
comparing output of the monitored ML model with synthesized data for replacing the ground truth; and
comparing output of the monitored ML model with output of the monitoring model based on the same input of the monitored ML model and the monitoring model.
- The method of claim 32, wherein one of the plurality of monitoring schemes is activated to be the activated monitoring scheme according to a configuration.
- The method of claim 33, wherein the configuration is carried in a downlink control information (DCI) signal, a radio resource control (RRC) signal, or a Medium Access Control (MAC) control element (CE) .
- The method of claim 33, wherein the configuration is configured by an RRC signal or a MAC-CE and activated by DCI signaling.
- The method of claim 33, wherein the configuration is configured by an RRC signal or a MAC-CE and deactivated by DCI signaling.
- The method of claim 1, wherein the monitoring model comprises multiple component monitoring models, and outputs of the multiple component monitoring models are mathematically processed as a benchmark of the component monitoring models.
- The method of claim 1, wherein the monitoring model is configurable to include only one component monitoring model or multiple component monitoring models according to a configuration.
- The method of claim 38, wherein the configuration is carried in a downlink control information (DCI) signal, a radio resource control (RRC) signal, or a Medium Access Control (MAC) control element (CE) .
- A wireless communication device, comprising:
a processor, configured to call and run a computer program stored in a memory, to cause a device in which the processor is installed to execute the method of any of claims 1 to 39.
- A chip, comprising:
a processor, configured to call and run a computer program stored in a memory, to cause a device in which the chip is installed to execute the method of any of claims 1 to 39.
- A computer-readable storage medium, in which a computer program is stored, wherein the computer program causes a computer to execute the method of any of claims 1 to 39.
- A computer program product, comprising a computer program, wherein the computer program causes a computer to execute the method of any of claims 1 to 39.
- A computer program, wherein the computer program causes a computer to execute the method of any of claims 1 to 39.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2022/100553 WO2023245515A1 (en) | 2022-06-22 | 2022-06-22 | Monitoring method and wireless communication device |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023245515A1 true WO2023245515A1 (en) | 2023-12-28 |
Family
ID=89378874
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190370218A1 (en) * | 2018-06-01 | 2019-12-05 | Cisco Technology, Inc. | On-premise machine learning model selection in a network assurance service |
WO2020234902A1 (en) * | 2019-05-20 | 2020-11-26 | Saankhya Labs Pvt. Ltd. | Radio mapping architecture for applying machine learning techniques to wireless radio access networks |
CN113473511A (en) * | 2020-03-31 | 2021-10-01 | 瞻博网络公司 | Network system troubleshooting via machine learning models |
US20220150221A1 (en) * | 2020-11-10 | 2022-05-12 | Accenture Global Solutions Limited | Utilizing machine learning models to determine customer care actions for telecommunications network providers |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 22947292; Country of ref document: EP; Kind code of ref document: A1 |