WO2024092716A1 - Information transmitting and receiving method and apparatus - Google Patents

Information transmitting and receiving method and apparatus

Info

Publication number
WO2024092716A1
WO2024092716A1 (PCT/CN2022/129856)
Authority
WO
WIPO (PCT)
Prior art keywords
model
performance
value
performance metric
configuration parameters
Prior art date
Application number
PCT/CN2022/129856
Other languages
English (en)
French (fr)
Inventor
孙刚
王昕
Original Assignee
富士通株式会社 (FUJITSU LIMITED)
孙刚
王昕
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 富士通株式会社, 孙刚 and 王昕
Priority to PCT/CN2022/129856
Publication of WO2024092716A1

Links

Images

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 24/00: Supervisory, monitoring or testing arrangements
    • H04W 24/08: Testing, supervising or monitoring using real traffic

Definitions

  • the embodiments of the present application relate to the field of communication technologies.
  • the millimeter wave band can provide a larger bandwidth and has become an important frequency band for the 5G NR (New Radio) system. Due to its shorter wavelength, millimeter waves have different propagation characteristics from traditional low-frequency bands, such as higher propagation loss, poor reflection and diffraction performance, etc. Therefore, a larger-scale antenna array is usually used to form a shaped beam with greater gain, overcome propagation loss, and ensure system coverage.
  • the 5G NR standard has designed a series of solutions for beam management, including beam scanning, beam measurement, beam reporting, and beam indication. However, when the number of transmit and receive beams is relatively large, the system load and delay will be greatly increased.
  • suppose the transmitting end of a communication system has M beams and the receiving end has N beams.
  • to find the optimal beam pair, M*N beam pairs need to be measured.
  • measuring M*N beam pairs leads to a large system load and long delay.
  • Using a model (for example, an AI model) to predict the optimal beam pair from a small number of beam measurement results can greatly reduce the system load and delay caused by beam measurement.
  • the above process relies on the performance monitoring of the AI model in different communication environments. Therefore, how to monitor the performance of the AI model has become an urgent problem to be solved, and there is no relevant discussion at present.
  • an embodiment of the present application provides a method and device for sending and receiving information.
  • an information transceiver device which is applied to a network device, and the device includes:
  • a first sending unit which sends configuration parameters for monitoring the performance of one or more AI models to a terminal device
  • a first receiving unit, which receives performance monitoring results of the one or more AI models sent by the terminal device.
  • an information transceiver device which is applied to a terminal device, and the device includes:
  • a second receiving unit which receives configuration parameters sent by a network device for monitoring the performance of one or more AI models
  • a second sending unit, which sends performance monitoring results of the one or more AI models to the network device.
  • a communication system including a terminal device and/or a network device, wherein the terminal device includes the information transceiver device of the aforementioned aspect, and the network device includes the information transceiver device of the aforementioned aspect.
  • the network device provides the terminal device with configuration parameters for monitoring the performance of the AI model, so that the terminal device can generate performance monitoring results based on these configuration parameters. The performance of the AI model can then be effectively monitored based on the performance monitoring results, which in turn helps to select a beam management method suited to the current communication environment and to reduce system load and delay.
  • FIG1 is a schematic diagram of a communication system of the present application.
  • FIG2 is a schematic diagram of a transmit beam and a receive beam in a communication system according to an embodiment of the present application
  • FIG3 is a schematic diagram of a method for sending and receiving information according to an embodiment of the present application.
  • FIG4 is a schematic diagram of a transmit beam and a receive beam according to an embodiment of the present application.
  • FIG5 is a schematic diagram of a method for sending and receiving information according to an embodiment of the present application.
  • FIG6 is a schematic diagram of a method for sending and receiving information according to an embodiment of the present application.
  • FIG7 is a schematic diagram of an information transceiver device according to an embodiment of the present application.
  • FIG8 is a schematic diagram of an information transceiver device according to an embodiment of the present application.
  • FIG9 is a schematic diagram of a network device according to an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a terminal device according to an embodiment of the present application.
  • the terms “first”, “second”, etc. are used to distinguish different elements from the title, but do not indicate the spatial arrangement or time order of these elements, etc., and these elements should not be limited by these terms.
  • the term “and/or” includes any one and all combinations of one or more of the terms listed in association.
  • the terms “comprising”, “including”, “having”, etc. refer to the existence of the stated features, elements, components or components, but do not exclude the existence or addition of one or more other features, elements, components or components.
  • the term “communication network” or “wireless communication network” may refer to a network that complies with any of the following communication standards, such as Long Term Evolution (LTE), enhanced Long Term Evolution (LTE-A), Wideband Code Division Multiple Access (WCDMA), High-Speed Packet Access (HSPA), and the like.
  • communication between devices in the communication system may be carried out according to communication protocols of any stage, such as but not limited to the following communication protocols: 1G (generation), 2G, 2.5G, 2.75G, 3G, 4G, 4.5G and 5G, New Radio (NR), future 6G, etc., and/or other communication protocols currently known or to be developed in the future.
  • the term "network device” refers to, for example, a device in a communication system that connects a terminal device to a communication network and provides services for the terminal device.
  • the network device may include, but is not limited to, the following devices: base station (BS), access point (AP), transmission reception point (TRP), broadcast transmitter, mobile management entity (MME), gateway, server, radio network controller (RNC), base station controller (BSC), etc.
  • base stations may include but are not limited to: Node B (NodeB or NB), evolved Node B (eNodeB or eNB) and 5G base station (gNB), etc., and may also include remote radio heads (RRH, Remote Radio Head), remote radio units (RRU, Remote Radio Unit), relays or low-power nodes (such as femto, pico, etc.).
  • the above devices may include some or all of the functions of a base station, and each base station can provide communication coverage for a specific geographic area.
  • the term "cell” can refer
  • the term "user equipment” (UE) or “terminal equipment” (TE) refers to, for example, a device that accesses a communication network through a network device and receives network services.
  • the terminal device may be fixed or mobile, and may also be referred to as a mobile station (MS), a terminal, a subscriber station (SS), an access terminal (AT), a station, and the like.
  • terminal devices may include but are not limited to the following devices: cellular phones, personal digital assistants (PDA, Personal Digital Assistant), wireless modems, wireless communication devices, handheld devices, machine-type communication devices, laptop computers, cordless phones, smart phones, smart watches, digital cameras, etc.
  • the terminal device can also be a machine or device for monitoring or measuring, such as but not limited to: machine type communication (MTC) terminal, vehicle-mounted communication terminal, device to device (D2D) terminal, machine to machine (M2M) terminal, and so on.
  • the term "network side" refers to one side of the network, which may be a certain base station, or may include one or more network devices as above.
  • the term "user side", "terminal side" or "terminal device side" refers to one side of the user or terminal, which may be a certain UE, or may include one or more terminal devices as above.
  • the term "device" may refer to either a network device or a terminal device.
  • the terms "uplink control signal" and "uplink control information (UCI)" or "physical uplink control channel (PUCCH)" are interchangeable, and the terms "uplink data signal" and "uplink data information" or "physical uplink shared channel (PUSCH)" are interchangeable, if no confusion is caused;
  • the terms "downlink control signal" and "downlink control information (DCI)" or "physical downlink control channel (PDCCH)" are interchangeable, and the terms "downlink data signal" and "downlink data information" or "physical downlink shared channel (PDSCH)" are interchangeable.
  • sending or receiving PUSCH can be understood as sending or receiving uplink data carried by PUSCH
  • sending or receiving PUCCH can be understood as sending or receiving uplink information carried by PUCCH
  • sending or receiving PRACH can be understood as sending or receiving preamble carried by PRACH
  • uplink signals can include uplink data signals and/or uplink control signals, etc., and can also be called uplink transmission (UL transmission) or uplink information or uplink channel.
  • Sending uplink transmission on uplink resources can be understood as sending the uplink transmission using the uplink resources.
  • downlink data/signal/channel/information can be understood accordingly.
  • the high-level signaling may be, for example, a radio resource control (RRC) signaling; for example, an RRC message (RRC message), including, for example, MIB, system information (system information), a dedicated RRC message; or an RRC IE (RRC information element).
  • the high-level signaling may also be, for example, a MAC (Medium Access Control) signaling; or a MAC CE (MAC control element).
  • FIG1 is a schematic diagram of a communication system according to an embodiment of the present application, schematically illustrating a situation taking a terminal device and a network device as an example.
  • a communication system 100 may include a network device 101 and terminal devices 102 and 103.
  • FIG1 only illustrates two terminal devices and one network device as an example, but the embodiment of the present application is not limited thereto.
  • existing services or future services can be sent between the network device 101 and the terminal devices 102 and 103.
  • these services may include but are not limited to: enhanced mobile broadband (eMBB), massive machine type communication (mMTC), and ultra-reliable and low-latency communication (URLLC), etc.
  • the terminal device 102 may send data to the network device 101, for example using a grant-based or grant-free (configured grant) transmission mode.
  • the network device 101 may receive data sent by one or more terminal devices 102, and feed back information to the terminal device 102, such as acknowledgement (ACK)/negative acknowledgement (NACK) information, etc.
  • the terminal device 102 may confirm the end of the transmission process, or may perform new data transmission, or may retransmit the data according to the feedback information.
  • FIG1 shows that both terminal devices 102 and 103 are within the coverage of the network device 101, but the present application is not limited thereto: both terminal devices 102 and 103 may be outside the coverage of the network device 101, or one terminal device 102 may be within the coverage of the network device 101 while the other terminal device 103 is outside it.
  • the AI model includes but is not limited to: an input layer (input), multiple convolutional layers, a connection layer (concat), a fully connected layer (FC), and a quantizer, etc. Among them, the processing results of multiple convolutional layers are merged in the connection layer.
  • the specific structure of the AI model can be referred to the existing technology and will not be repeated here.
  • Fig. 2 is a schematic diagram of the transmission beam and the reception beam in the communication system of each embodiment of the present application.
  • the network device 101 may have M1 downlink transmission beams DL TX
  • the terminal device 102 may have N1 downlink reception beams DL RX.
  • a model 201 for predicting beam measurement results may be deployed in a network device 101 or a terminal device 102.
  • the model 201 may predict the measurement results of M1*N1 beams based on the measurement results of some beams.
  • the model 201 may be, for example, an AI model.
  • the network device 101 can have N2 uplink receive beams (not shown in Figure 2), and the terminal device 102 can have M2 uplink transmit beams UL TX (not shown in Figure 2).
  • An embodiment of the present application provides a method for sending and receiving information, which is described from the perspective of a network device.
  • FIG3 is a schematic diagram of a method for sending and receiving information according to an embodiment of the present application. As shown in FIG3 , the method includes:
  • the network device sends configuration parameters for monitoring the performance of one or more AI models to the terminal device;
  • the network device receives the performance monitoring results of one or more AI models sent by the terminal device.
  • FIG. 3 is only a schematic illustration of the embodiment of the present application, but the present application is not limited thereto.
  • the execution order between the various operations can be appropriately adjusted, and other operations can be added or some operations can be reduced.
  • Those skilled in the art can make appropriate modifications based on the above content, and are not limited to the description of the above FIG. 3.
  • an AI model for beam prediction is deployed in a terminal device.
  • the AI model is used to predict the optimal beam pair through a small number of beam pair measurement results.
  • the input parameters of the AI model are the RSRP (Reference Signal Received Power) values of some beam pairs, or alternatively the SINR (Signal to Interference plus Noise Ratio) values of some beam pairs.
  • the physical quantity of the output parameter is the RSRP or SINR of all beam pairs.
  • FIG4 is a schematic diagram of the transmitting beam and receiving beam and the AI model in an embodiment of the present application. As shown in FIG4, for example, there are 12 downlink transmitting beams and 8 downlink receiving beams, with a total of 96 beam pairs.
  • the terminal device only measures the RSRP of 24 beam pairs (6 downlink transmitting beams and 4 downlink receiving beams).
  • the dimension of the input parameter of the AI model is 24, the physical quantity is RSRP or SINR, the dimension of the output parameter is 96, and the physical quantity is also RSRP or SINR.
  • the optimal beam pair can be selected from the prediction results.
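The dimensioning in the FIG. 4 example above (12 x 8 = 96 beam pairs predicted from a 6 x 4 = 24-pair measurement) can be sketched as follows. The model stub `predict_all_rsrp`, all variable names, and the tiling used to produce an output of the right size are illustrative assumptions, not the patent's actual network:

```python
import numpy as np

N_TX, N_RX = 12, 8            # all downlink transmit / receive beams (96 pairs)
MEAS_TX, MEAS_RX = 6, 4       # subset of beam pairs actually measured (24)

def predict_all_rsrp(measured_rsrp: np.ndarray) -> np.ndarray:
    """Stand-in for the AI model: 24 measured RSRP values in,
    96 predicted RSRP values out (one per beam pair)."""
    assert measured_rsrp.shape == (MEAS_TX * MEAS_RX,)
    # A real model would be a trained network; tiling merely produces
    # an output vector of the correct dimension for this sketch.
    return np.resize(measured_rsrp, N_TX * N_RX)

def best_beam_pair(predicted: np.ndarray) -> tuple:
    """Select the optimal (TX, RX) beam pair among the 96 predictions."""
    idx = int(np.argmax(predicted))
    return divmod(idx, N_RX)  # flat index -> (tx_beam, rx_beam)

measured = np.random.default_rng(0).uniform(-100.0, -60.0, MEAS_TX * MEAS_RX)
tx, rx = best_beam_pair(predict_all_rsrp(measured))
```

The same flat-index convention (TX-major order) is assumed throughout; only 24 of the 96 pairs ever need to be swept over the air.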
  • one or more AI models may be pre-deployed in the terminal device.
  • the network device sends configuration parameters for monitoring the performance of one or more AI models to the terminal device.
  • the terminal device can generate performance monitoring results based on the configuration parameters, and the performance of the AI model can be effectively monitored based on the performance monitoring results, which helps to select a beam management method that meets the current communication environment and reduce system load and delay.
  • the method further includes (not shown): activating the AI model or deactivating the AI model or selecting the AI model or switching the AI model according to the performance monitoring result.
  • the configuration parameters include a first configuration parameter for determining whether to activate the AI model, and/or a second configuration parameter for determining whether to deactivate the AI model, and/or a third configuration parameter for selecting the AI model and/or switching the AI model. That is, the network device can send the above three configuration parameters at the same time, or can send at least one of the three configuration parameters according to the state of the system.
  • the first configuration parameter, the second configuration parameter and the third configuration parameter are the same or different, which are described in detail below.
  • when an AI model is pre-deployed in a terminal device and the system is in an AI model activation state (that is, the AI model is used for beam prediction, in other words the AI model is working), the network device may send configuration parameters for monitoring the performance of the AI model (hereinafter also referred to as configuration parameters for determining deactivation, or second configuration parameters); the terminal device may generate a performance monitoring result based on the configuration parameters, and the network device may determine, based on the performance monitoring result, whether to deactivate the AI model (for example, to restore the traditional beam management state).
  • when an AI model is pre-deployed in a terminal device and the system is in an AI model deactivated state (the traditional beam management state), the network device may send configuration parameters for monitoring the performance of the AI model (hereinafter also referred to as configuration parameters for determining activation, or first configuration parameters); the terminal device may generate performance monitoring results based on the configuration parameters, and the network device determines, based on the performance monitoring results, whether to activate the AI model (for example, to restore the state where the AI model is used for beam prediction, i.e. the system enters the AI model activation state or AI model working state).
  • when multiple AI models are pre-deployed in the terminal device, the system can be in an AI model deactivated state (the traditional beam management state) or an AI model activated state (using an AI model for beam prediction, i.e. an AI model is working); the network device can send configuration parameters for monitoring the performance of one of the AI models (including the aforementioned first configuration parameters or second configuration parameters), and the terminal device can generate performance monitoring results based on the configuration parameters.
  • the network device determines whether to deactivate the AI model based on the performance monitoring results (for example, restore to the traditional beam management state, based on the second configuration parameters), or determines whether to activate the AI model based on the performance monitoring results (for example, restore to the state of using the AI model for beam prediction or the system enters the AI model activation state or the AI model working state, based on the first configuration parameters).
  • the first configuration parameter and the second configuration parameter can be sent to the terminal device at the same time, or only one of the configuration parameters can be sent according to the state of the system, for example, when the system is in the AI model activation state, the second configuration parameter is sent, and when the system is in the AI model deactivation state, the first configuration parameter is sent, and the embodiments of the present application are not limited to this.
  • the information sent by the network device to the terminal device is: the configuration parameters of an AI model
  • the aforementioned configuration parameters may include the first configuration parameter and/or the second configuration parameter
  • the first configuration parameter and the second configuration parameter may be the same or different.
  • when multiple AI models are pre-deployed in the terminal device, the system can be in an AI model deactivated state (the traditional beam management state) or an AI model activated state (using one AI model for beam prediction, i.e. one of the AI models is working); the network device can send configuration parameters for monitoring the performance of multiple AI models (including second configuration parameters, or configuration parameters for selecting and/or switching AI models, i.e. third configuration parameters), and the terminal device can generate performance monitoring results based on the configuration parameters.
  • the network device determines whether to select an AI model to enter a working state (originally in a deactivated state, based on the third configuration parameter) based on the performance monitoring results, or determines whether it is necessary to switch the AI model for beam prediction (originally in an AI model activated state, based on the third configuration parameter), or whether it is necessary to deactivate the AI model (for example, restore to the traditional beam management state, based on the second configuration parameter).
  • the third configuration parameter and the second configuration parameter can be sent to the terminal device at the same time, or only one of the configuration parameters can be sent according to the state of the system; on the other hand, the configuration parameters for monitoring performance of each AI model in multiple AI models can be sent to the terminal device at the same time, or can be sent to the terminal device separately in multiple times, and examples are not given one by one here.
  • the configuration parameters for monitoring the performance of different AI models are the same or different.
  • when the configuration parameters are the same, only one set of configuration parameters for all AI models needs to be sent, that is, the information sent by the network device to the terminal device is: the configuration parameters; when the configuration parameters are different, the network device can also send the identifier of the AI model corresponding to each set of configuration parameters, to distinguish the configuration parameters configured for different AI models.
  • the information sent by the network device to the terminal device is: identifier of AI model 1, configuration parameter 1 of AI model 1, identifier of AI model 2, configuration parameter 2 of AI model 2, identifier of AI model 3, configuration parameter 3 of AI model 3...
  • the above information can be sent simultaneously, or sent multiple times, for example, the identifier of AI model 1 and configuration parameter 1 of AI model 1 are sent for the first time, the identifier of AI model 2 and configuration parameter 2 of AI model 2 are sent for the second time, the identifier of AI model 3 and configuration parameter 3 of AI model 3 are sent for the third time, and so on.
  • the aforementioned configuration parameters may include the second configuration parameter and/or the third configuration parameter, and the third configuration parameter and the second configuration parameter may be the same or different.
  • the following description takes the configuration parameters used to monitor the performance of an AI model as an example. It should be noted that the following description is applicable to the first configuration parameter, the second configuration parameter and the third configuration parameter, and is also applicable to the configuration parameters for monitoring the performance of other AI models.
  • the configuration parameters include a threshold value of a performance metric, and/or a filter coefficient of a performance metric, and/or a counter for counting the monitoring results of a performance metric.
  • the above performance metrics may include a prediction error of a beam measurement result, and/or a prediction accuracy of a beam measurement result, and/or a throughput, and/or a frame error rate, which are described below.
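The configuration parameters and metrics listed above can be gathered into a small container; the field names below are illustrative assumptions, not signalling field names from any specification:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PerfMonitoringConfig:
    metric: str                      # e.g. "prediction_error", "prediction_accuracy",
                                     # "throughput", "frame_error_rate"
    threshold: float                 # compared against the computed metric value
    filter_coefficient: float = 0.5  # smoothing factor applied to the raw metric
    counter_max: int = 4             # monitoring results counted before a decision
    model_id: Optional[int] = None   # identifies the AI model when configured per model

cfg = PerfMonitoringConfig(metric="prediction_error", threshold=3.0, model_id=1)
```

A per-model configuration simply carries a distinct `model_id`, matching the identifier-plus-parameters pairing described later for multiple AI models.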
  • the input of the AI model is the measurement results of some beams (pairs)
  • the output of the AI model may include the predicted values of the measurement results of each beam (pair) and the identification information of the corresponding beam (pair).
  • the prediction error or prediction accuracy can be used as a performance metric.
  • the prediction error includes the difference between the predicted measurement result corresponding to the optimal beam (pair) output by the AI model and the actual measurement result of the optimal beam (pair).
  • the prediction error is the difference between the predicted measurement result output by the AI model corresponding to the same optimal beam and the actual measurement result.
  • the prediction error includes the difference between the predicted measurement result corresponding to the first optimal beam (pair) output by the AI model and the measurement result corresponding to the second optimal beam (pair) actually measured.
  • the first optimal beam (pair) is the optimal beam (pair) output by the AI model
  • the second optimal beam (pair) is the optimal beam (pair) actually measured.
  • the first optimal beam (pair) and the second optimal beam (pair) may be the same or different.
  • the prediction error includes the average value of the difference between the predicted measurement results corresponding to the multiple beams (pairs) output by the AI model and the actual measurement results of the multiple beams (pairs).
  • the prediction error is the average value of the difference between the predicted measurement results and the actual measurement results output by the AI model corresponding to the same multiple beams (pairs), and the multiple beams can be the K beams (pairs) with the largest measurement results, but the embodiments of the present application are not limited to this.
  • the predicted measurement results of multiple beams (pairs) are B1, B2, ..., BK
  • the actual measurement results are A1, A2, ..., AK
  • the prediction error is (B1-A1+B2-A2+...+BK-AK)/K.
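The averaged prediction error (B1-A1+B2-A2+...+BK-AK)/K just defined can be computed directly; the function name and the example values are illustrative:

```python
def prediction_error(predicted, actual):
    """Average signed difference between predicted and actual
    measurement results (e.g. L1-RSRP in dB) of K beam pairs."""
    assert len(predicted) == len(actual) and len(predicted) > 0
    k = len(predicted)
    return sum(b - a for b, a in zip(predicted, actual)) / k

# Example with K = 3 beam pairs:
B = [-70.0, -72.0, -75.0]     # predicted RSRP (dB)
A = [-71.0, -71.0, -76.0]     # actually measured RSRP (dB)
err = prediction_error(B, A)  # (1.0 - 1.0 + 1.0) / 3
```

Because the differences are signed, over- and under-predictions can cancel; an implementation could equally average absolute differences, but the formula in the text is the signed average.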
  • the prediction error includes the average value of the difference between the predicted measurement results corresponding to the multiple first beams (pairs) output by the AI model and the measurement results of the multiple second beams (pairs) actually measured.
  • the first beams (pairs) are the multiple (K) beams (pairs) output by the AI model, for example those with the largest measurement results;
  • the second beams (pairs) are the multiple (K) beams (pairs) actually measured, for example those with the largest measurement results.
  • the first beam (pair) and the second beam (pair) may be the same or different.
  • the predicted measurement results of the first beam (pair) are B1, B2, ..., BK
  • the actual measurement results of the second beam (pair) are A1, A2, ..., AK
  • the prediction error is (B1-A1+B2-A2+...+BK-AK)/K.
  • the prediction accuracy includes a first probability that the first optimal beam (pair) output by the AI model is the same as the second optimal beam (pair) actually measured, or a second probability that the first number of first beams (pairs) output by the AI model contains the optimal beam (pair) actually measured, or a third probability that the optimal beam (pair) output by the AI model is contained in the second number of second beams (pairs) actually measured.
  • the description of the first beam (pair) and the second beam (pair) is as described above and will not be repeated here.
  • the above probabilities can be counted over multiple AI model inference cycles.
  • For example, suppose the first probability is counted over M AI model inference cycles. If, among the M inference cycles, there are M-2 cycles in which the first optimal beam (pair) and the second optimal beam (pair) are the same, and 2 cycles in which they are different, then the first probability is (M-2)/M. The second probability and the third probability are calculated similarly; examples are not given here one by one.
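As a hedged sketch, counting the first probability (whether the model's optimal beam matches the measured optimal beam over M inference cycles) could look like the following; the function name and per-cycle input lists are illustrative assumptions:

```python
# Illustrative sketch: over M inference cycles, the fraction of cycles in
# which the model's predicted optimal beam (pair) equals the actually
# measured optimal beam (pair) -- the "first probability".
def first_probability(predicted_best, measured_best):
    m = len(predicted_best)
    matches = sum(1 for p, a in zip(predicted_best, measured_best) if p == a)
    return matches / m
```

With M = 5 cycles and 3 matching cycles, the first probability is 3/5 = 0.6; the (M-2)/M example above follows the same count.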
  • the above-mentioned measurement results include L1-RSRP or SINR.
  • the embodiments of the present application are not limited to this.
  • the above-mentioned actual measurement results are determined according to the training data of the AI model, or according to measurement results actually measured when the AI model is not applied. That is, the above-mentioned actually measured L1-RSRP or SINR can be the label data measured during the AI model training period, or historical real measurement data (for example, measurement data obtained under traditional beam management when the AI model is not applied).
  • the performance metric may also be throughput and/or frame error rate.
  • the calculation methods of throughput and frame error rate may refer to the prior art and are not given one by one here.
  • the threshold value in the configuration parameter is used to compare with the value of the performance metric calculated by the terminal device to determine whether the AI model performance can meet the performance requirements and realize the monitoring of the AI model performance.
  • If the communication environment changes frequently or there is a calculation error in the value of the performance metric calculated on the terminal side, the system's beam management will frequently switch between judgments of whether the current AI model is suitable (AI model activation/deactivation), thereby increasing the system signaling load and the communication delay.
  • Filter coefficients and/or counters are used to filter the calculated performance metric values, and determine whether the AI model performance can meet the performance requirements based on the value of the performance metric after filtering. Thus, frequent switching of the AI model between activation and deactivation can be avoided. How the threshold value, filter coefficient, and counter are applied will be described in the embodiment of the second aspect described later.
  • the configuration parameter is carried by RRC or MAC CE or DCI.
  • the threshold value, filter coefficient and/or counter can be represented by a bit sequence with a predetermined number of bits, and the decimal value corresponding to the bit sequence is the threshold value, filter coefficient and/or counter.
  • a new information element can be added to the existing RRC or MAC CE or DCI to carry the configuration parameter, or an existing information element in the existing RRC or MAC CE or DCI can be used to carry the configuration parameter.
  • a new RRC or MAC CE or DCI can also be designed to carry the configuration parameter. The embodiments of the present application are not limited to this.
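For illustration only, reading a fixed-length bit field carried in RRC, MAC CE or DCI as the decimal value of a configuration parameter might be sketched as follows; the field layout and helper name are assumptions, not a defined signaling format:

```python
# Illustrative sketch: a configuration parameter (threshold, filter
# coefficient or counter value) carried as an N-bit sequence is read as
# the decimal value corresponding to that bit sequence.
def decode_config_field(bits):
    if not bits or any(b not in "01" for b in bits):
        raise ValueError("expected a non-empty binary bit sequence")
    return int(bits, 2)
```

For example, a 4-bit field "0101" would be interpreted as the decimal value 5.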
  • the terminal device can generate a performance monitoring result based on the configuration parameter and report it to the network device.
  • the performance monitoring result includes AI model performance indication information, and/or the value of the AI model performance metric, and/or the identifier of the AI model, which will be specifically described in the embodiments of the second aspect.
  • the network device provides the terminal device with configuration parameters for detecting the performance of the AI model, so that the terminal device can generate performance monitoring results based on the configuration parameters. According to the performance monitoring results, the performance of the AI model can be effectively monitored, which in turn helps to select a beam management method that meets the current communication environment and reduce system load and delay.
  • An embodiment of the present application provides a method for sending and receiving information, which is described from the terminal device side, and the contents that are the same as the embodiment of the first aspect are not repeated.
  • FIG. 5 is a schematic diagram of a method for sending and receiving information according to an embodiment of the present application. As shown in FIG. 5 , the method includes:
  • the terminal device receives configuration parameters sent by the network device for monitoring the performance of one or more AI models;
  • the terminal device sends the performance monitoring results of one or more AI models to the network device.
  • FIG. 5 is only a schematic illustration of the embodiment of the present application, but the present application is not limited thereto.
  • the execution order between the various operations can be appropriately adjusted, and other operations can be added or some operations can be reduced.
  • Those skilled in the art can make appropriate modifications based on the above content, and are not limited to the description of the above FIG. 5.
  • the implementation of the above configuration parameters can refer to the embodiment of the first aspect, and the implementation of 501 corresponds to 301, which will not be repeated here.
  • the threshold value in the configuration parameter is used to compare with the value of the performance metric calculated by the terminal device to determine whether the AI model performance can meet the performance requirements and realize the monitoring of AI model performance.
  • For example, when the performance metric is the prediction error: if the value of the prediction error is greater than the threshold value for the prediction error, it indicates that the AI model performance does not meet the performance requirements; if the value of the prediction error is less than the threshold value for the prediction error, it indicates that the AI model performance meets the performance requirements.
  • When the performance metric is the prediction accuracy: if the value of the prediction accuracy is greater than the threshold value for the prediction accuracy, it indicates that the AI model performance meets the performance requirements; if the value of the prediction accuracy is less than the threshold value for the prediction accuracy, it indicates that the AI model performance does not meet the performance requirements.
  • the filter coefficients in the configuration parameters are used to filter the values of the calculated performance metrics, and determine whether the AI model performance can meet the performance requirements based on the values of the performance metrics after filtering. This can avoid frequent switching of the AI model between activation and deactivation.
  • the filtering formula is as follows:
  • Fn = (1 - a) × Fn-1 + a × Mn
  • where a is the filter coefficient in the configuration parameters sent by the network side, Mn is the value of the performance metric calculated by the most recent AI model inference, Fn is the value of the updated filtered performance metric, Fn-1 is the value of the last filtered performance metric, and n represents the number of filtering times.
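Assuming a first-order filter of the form Fn = (1 - a)·Fn-1 + a·Mn, consistent with the symbol descriptions above (this exact form is an assumption reconstructed from those descriptions), one filtering step might be sketched as:

```python
# Illustrative sketch of one step of the assumed first-order filter
#   F_n = (1 - a) * F_{n-1} + a * M_n
# where a is the network-configured filter coefficient, M_n the latest
# metric value, and F_{n-1} the previous filtered value.
def filter_step(f_prev, m_n, a):
    return (1 - a) * f_prev + a * m_n
```

With a = 0.5 and an initial filtered value of 0, a constant metric Mn = 1 yields F1 = 0.5 and F2 = 0.75: a single outlier moves the filtered value only gradually, which is what suppresses frequent activation/deactivation switching.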
  • When the performance metric Mn is the prediction error: if the value of the filtered prediction error Fn is greater than the threshold value for the prediction error, it indicates that the AI model performance does not meet the performance requirements; if the value of the filtered prediction error Fn is less than the threshold value for the prediction error, it indicates that the AI model performance meets the performance requirements.
  • When the performance metric Mn is the prediction accuracy: if the value of the filtered prediction accuracy Fn is greater than the threshold value for the prediction accuracy, it indicates that the AI model performance meets the performance requirements; if the value of the filtered prediction accuracy Fn is less than the threshold value for the prediction accuracy, it indicates that the AI model performance does not meet the performance requirements.
  • the counter in the configuration parameters is used to filter the value of the calculated performance metric, and determine whether the AI model performance can meet the performance requirements based on the value of the performance metric after filtering. This can avoid frequent switching of the AI model between activation and deactivation.
  • the counter in the configuration parameter sent by the network device is an N-bit counter, and the maximum count value is set to Y.
  • For example, when the performance metric Mn is the prediction error: when the value of the prediction error Mn is greater than the threshold value for the prediction error, the counter is incremented by 1; when the value of the prediction error Mn is less than the threshold value for the prediction error, the counter is decremented by 1; and so on, until the counter reaches the set maximum count value, or vice versa (depending on the system state);
  • when the performance metric Mn is the prediction accuracy: when the value of the prediction accuracy Mn is greater than the threshold value for the prediction accuracy, the counter is incremented by 1; when the value of the prediction accuracy Mn is less than the threshold value for the prediction accuracy, the counter is decremented by 1; and so on, until the counter reaches the set maximum count value, or vice versa (depending on the system state).
  • the performance monitoring results include AI model performance indication information, and/or the value of the AI model performance metric, and/or the identification of the AI model.
  • the terminal device may calculate the value of the performance metric, and send the performance monitoring result including the value of the performance metric to the network device, and the network device determines whether the performance of the AI model can meet the performance requirements, and then activates the AI model or deactivates the AI model or selects the AI model or switches the AI model; or, the terminal device may calculate the value of the performance metric, and the terminal device determines whether the performance of the AI model can meet the performance requirements, generates a performance monitoring result including the AI model performance indication information and sends it to the network device, and the network device activates the AI model or deactivates the AI model or selects the AI model or switches the AI model according to the performance monitoring result, wherein, when multiple AI models are deployed on the terminal device, and the performance monitoring results for the multiple AI models are different, the performance monitoring result may also include the identifier of the AI model to distinguish the performance monitoring results of different AI models. The following are explained separately.
  • the terminal device determines whether the AI model performance meets the performance requirements
  • the terminal device may filter the value of the performance metric according to the configuration parameters (filter coefficients and/or counters), and/or compare the value of the performance metric (before or after filtering) with the configuration parameters (threshold values), and generate the performance monitoring result containing the AI model performance indication information based on the filtering result and/or comparison result.
  • the configuration parameters include the threshold value
  • the terminal device can calculate the value of the performance metric
  • the terminal device compares the value with the threshold value
  • the AI model performance indication information is used to indicate whether the AI model performance can meet the performance requirements.
  • the AI model performance indication information can be represented by 1 bit of information.
  • For example, the configuration parameter is the second configuration parameter, and the performance metric is the prediction error. When the value of the prediction error is greater than the threshold value for the prediction error, it indicates that the AI model performance does not meet the performance requirements, that is, the current AI model is no longer adapted to the communication environment; the terminal device generates AI model performance indication information (bit value 1) as a performance monitoring result to indicate that the AI model performance does not meet the performance requirements. In other words, when the value of the prediction error is greater than the threshold value for the prediction error, a performance monitoring result report is triggered.
  • After learning from the performance monitoring result that the AI model performance does not meet the performance requirements, the network device determines whether to deactivate the AI model. In other words, when the value of the prediction error is less than the threshold value, the AI model performance indication information is not generated, and the performance monitoring result is not reported.
  • For example, the configuration parameter is the first configuration parameter, and the performance metric is the prediction error. When the value of the prediction error is less than the threshold value for the prediction error, it indicates that the AI model performance meets the performance requirements, that is, the current AI model adapts to the communication environment; the terminal device generates AI model performance indication information (bit value 1) as a performance monitoring result to indicate that the AI model performance meets the performance requirements. In other words, when the value of the prediction error is less than the threshold value for the prediction error, a performance monitoring result report is triggered.
  • The network device thus learns that the AI model performance meets the performance requirements, and therefore determines whether to activate the AI model (enter the AI model activation state). In other words, when the value of the prediction error is greater than the threshold value, the AI model performance indication information is not generated, and the performance monitoring result is not reported.
  • For example, the configuration parameter is the second configuration parameter, and the performance metric is the prediction accuracy. When the value of the prediction accuracy is less than the threshold value for the prediction accuracy, it indicates that the AI model performance does not meet the performance requirements, that is, the current AI model is no longer adapted to the communication environment; the terminal device generates AI model performance indication information (bit value 1) as a performance monitoring result to indicate that the AI model performance does not meet the performance requirements. In other words, when the value of the prediction accuracy is less than the threshold value for the prediction accuracy, a performance monitoring result report is triggered.
  • The network device learns from the performance monitoring result that the AI model performance does not meet the performance requirements, and therefore determines whether to deactivate the AI model. In other words, when the value of the prediction accuracy is greater than the threshold value, the AI model performance indication information is not generated, and the performance monitoring result is not reported.
  • For example, the configuration parameter is the first configuration parameter, and the performance metric is the prediction accuracy. When the value of the prediction accuracy is greater than the threshold value for the prediction accuracy, it indicates that the AI model performance meets the performance requirements, that is, the current AI model adapts to the communication environment; the terminal device generates AI model performance indication information (bit value 1) as a performance monitoring result to indicate that the AI model performance meets the performance requirements. In other words, when the value of the prediction accuracy is greater than the threshold value for the prediction accuracy, a performance monitoring result report is triggered.
  • The network device learns from the performance monitoring result that the AI model performance meets the performance requirements, and therefore determines whether to activate the AI model. In other words, when the value of the prediction accuracy is less than the threshold value, the AI model performance indication information is not generated, and the performance monitoring result is not reported.
  • In another example, the terminal device uses the third configuration parameter, and the configuration parameters include the threshold value and the filter coefficient.
  • In this case, the terminal device can calculate the value of the performance metric Mn, filter the value of the performance metric according to the filter coefficient, compare the filtered value Fn with the threshold value, and generate AI model performance indication information according to the comparison result.
  • For example, the configuration parameter is the second configuration parameter, and the performance metric Mn is the prediction error. When the value of the filtered prediction error Fn is greater than the threshold value for the prediction error, it indicates that the AI model performance does not meet the performance requirements, that is, the current AI model is no longer adapted to the communication environment; the terminal device generates AI model performance indication information (bit value 1) as a performance monitoring result to indicate that the AI model performance does not meet the performance requirements (triggering the reporting of a performance monitoring result).
  • The network device learns from the performance monitoring result that the AI model performance does not meet the performance requirements, and therefore determines whether to deactivate the AI model. In other words, when the value of the filtered prediction error Fn is less than the threshold value, the AI model performance indication information is not generated, and the performance monitoring result is not reported.
  • For example, the configuration parameter is the first configuration parameter, and the performance metric Mn is the prediction error. When the value of the filtered prediction error Fn is less than the threshold value for the prediction error, it indicates that the AI model performance meets the performance requirements, that is, the current AI model adapts to the communication environment; the terminal device generates AI model performance indication information (bit value 1) as a performance monitoring result to indicate that the AI model performance meets the performance requirements (triggering the reporting of a performance monitoring result).
  • The network device learns from the performance monitoring result that the AI model performance meets the performance requirements, and therefore determines whether to activate the AI model. In other words, when the value of the filtered prediction error Fn is greater than the threshold value, the AI model performance indication information is not generated, and the performance monitoring result is not reported.
  • For example, the configuration parameter is the second configuration parameter, and the performance metric Mn is the prediction accuracy. When the value of the filtered prediction accuracy Fn is less than the threshold value for the prediction accuracy, it indicates that the AI model performance does not meet the performance requirements, that is, the current AI model is no longer adapted to the communication environment; the terminal device generates AI model performance indication information (bit value 1) as a performance monitoring result to indicate that the AI model performance does not meet the performance requirements (triggering a performance monitoring result report). The network device receives the performance monitoring result and learns that the AI model performance does not meet the performance requirements, so it determines whether to deactivate the AI model.
  • In other words, when the value of the filtered prediction accuracy Fn is greater than the threshold value, the AI model performance indication information is not generated, and the performance monitoring result is not reported.
  • For example, the configuration parameter is the first configuration parameter, and the performance metric Mn is the prediction accuracy. When the value of the filtered prediction accuracy Fn is greater than the threshold value for the prediction accuracy, it indicates that the AI model performance meets the performance requirements, that is, the current AI model adapts to the communication environment; the terminal device generates AI model performance indication information (bit value 1) as a performance monitoring result to indicate that the AI model performance meets the performance requirements (triggering the reporting of a performance monitoring result).
  • The network device learns from the performance monitoring result that the AI model performance meets the performance requirements, and therefore determines whether to activate the AI model. In other words, when the value of the filtered prediction accuracy is less than the threshold value, the AI model performance indication information is not generated, and the performance monitoring result is not reported.
  • In the initial state, Fn is set to 0.
  • In another example, the terminal device uses the third configuration parameter, and the configuration parameters include the threshold value and the counter.
  • In this case, the terminal device can calculate the value of the performance metric, compare the calculated value with the threshold value, use the counter to count the comparison results, and generate AI model performance indication information based on the statistical results.
  • For example, the configuration parameter is the second configuration parameter, and the performance metric is the prediction error. When the value of the prediction error is greater than the threshold value for the prediction error, the counter is incremented by 1; when the value of the prediction error is less than the threshold value for the prediction error, the counter is decremented by 1. After the next AI model inference calculates the value of the performance metric, this operation is repeated until the counter reaches the set maximum count value, indicating that the AI model performance does not meet the performance requirements, that is, the current AI model is no longer adapted to the communication environment; the terminal device generates AI model performance indication information (bit value 1) as a performance monitoring result to indicate that the AI model performance does not meet the performance requirements (triggering a performance monitoring result report).
  • The network device receives the performance monitoring result and learns that the AI model performance does not meet the performance requirements, so it determines whether to deactivate the AI model. In other words, when the counter does not reach the set maximum count value, the AI model performance indication information is not generated, and the performance monitoring result is not reported.
  • For example, the configuration parameter is the first configuration parameter, and the performance metric is the prediction error. When the value of the prediction error is less than the threshold value for the prediction error, the counter is incremented by 1; when the value of the prediction error is greater than the threshold value for the prediction error, the counter is decremented by 1. After the next AI model inference calculates the value of the performance metric, this operation is repeated until the counter reaches the set maximum count value, indicating that the AI model performance meets the performance requirements, that is, the current AI model adapts to the communication environment; the terminal device generates AI model performance indication information (bit value 1) as a performance monitoring result to indicate that the AI model performance meets the performance requirements (triggering a performance monitoring result report).
  • The network device receives the performance monitoring result and learns that the AI model performance meets the performance requirements, so it determines whether to activate the AI model. In other words, when the counter does not reach the set maximum count value, the AI model performance indication information is not generated, and the performance monitoring result is not reported.
  • For example, the configuration parameter is the second configuration parameter, and the performance metric is the prediction accuracy. When the value of the prediction accuracy is less than the threshold value for the prediction accuracy, the counter is incremented by 1; when the value of the prediction accuracy is greater than the threshold value for the prediction accuracy, the counter is decremented by 1. After the next AI model inference calculates the value of the performance metric, this operation is repeated until the counter reaches the set maximum count value, indicating that the AI model performance does not meet the performance requirements, that is, the current AI model is no longer adapted to the communication environment; the terminal device generates AI model performance indication information (bit value 1) as a performance monitoring result to indicate that the AI model performance does not meet the performance requirements (triggering a performance monitoring result report).
  • The network device receives the performance monitoring result and learns that the AI model performance does not meet the performance requirements, so it determines whether to deactivate the AI model. In other words, when the counter does not reach the set maximum count value, the AI model performance indication information is not generated, and the performance monitoring result is not reported.
  • For example, the configuration parameter is the first configuration parameter, and the performance metric is the prediction accuracy. When the value of the prediction accuracy is greater than the threshold value for the prediction accuracy, the counter is incremented by 1; when the value of the prediction accuracy is less than the threshold value for the prediction accuracy, the counter is decremented by 1. This operation is repeated until the counter reaches the set maximum count value, indicating that the AI model performance meets the performance requirements, that is, the current AI model adapts to the communication environment; the terminal device generates AI model performance indication information (bit value 1) as a performance monitoring result to indicate that the AI model performance meets the performance requirements (triggering a performance monitoring result report).
  • The network device receives the performance monitoring result and learns that the AI model performance meets the performance requirements, so it determines whether to activate the AI model. In other words, when the counter does not reach the set maximum count value, the AI model performance indication information is not generated, and the performance monitoring result is not reported.
  • In the initial state, the counter is reset.
  • In another example, the terminal device uses the third configuration parameter, and the configuration parameters include the threshold value, the filter coefficient and the counter.
  • In this case, the terminal device can calculate the value of the performance metric, filter the value of the performance metric according to the filter coefficient, compare the filtered value with the threshold value, use the counter to count the comparison results, and generate AI model performance indication information based on the statistical results.
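Combining the three configured quantities (filter coefficient, threshold value, counter), one monitoring step for the prediction-error case might be sketched as follows. The state dictionary, the first-order filter form, and the step directions are assumptions for illustration:

```python
# Illustrative sketch: filter the new metric value, compare the filtered
# value against the threshold, and update the counter; a performance
# monitoring report is triggered when the counter reaches the configured
# maximum count value (prediction-error convention: higher is worse).
def monitor_step(state, m_n, a, threshold, max_count):
    state["f"] = (1 - a) * state["f"] + a * m_n      # filtering step
    if state["f"] > threshold:                        # filtered error too high
        state["count"] = min(state["count"] + 1, max_count)
    else:
        state["count"] = max(state["count"] - 1, 0)
    return state["count"] >= max_count                # True => trigger report
```

Because both the filter and the counter must agree over several inference cycles before a report is triggered, a transient degradation does not immediately cause an activation/deactivation switch.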
  • For example, the configuration parameter is the second configuration parameter, and the performance metric is the prediction error. When the value of the filtered prediction error Fn is greater than the threshold value for the prediction error, the counter is incremented by 1; when the value of the filtered prediction error Fn is less than the threshold value for the prediction error, the counter is decremented by 1. After the next AI model inference calculates the value of the performance metric, this operation is repeated until the counter reaches the set maximum count value, indicating that the AI model performance does not meet the performance requirements, that is, the current AI model is no longer suitable for the communication environment; the terminal device generates AI model performance indication information (bit value 1) as a performance monitoring result to indicate that the AI model performance does not meet the performance requirements (triggering a performance monitoring result report).
  • The network device receives the performance monitoring result and learns that the AI model performance does not meet the performance requirements, so it determines whether to deactivate the AI model. In other words, when the counter does not reach the set maximum count value, the AI model performance indication information is not generated, and the performance monitoring result is not reported.
  • For example, the configuration parameter is the first configuration parameter, and the performance metric is the prediction error. When the value of the filtered prediction error Fn is less than the threshold value for the prediction error, the counter is incremented by 1; when the value of the filtered prediction error Fn is greater than the threshold value for the prediction error, the counter is decremented by 1. After the next AI model inference calculates the value of the performance metric, this operation is repeated until the counter reaches the set maximum count value, indicating that the AI model performance meets the performance requirements, that is, the current AI model adapts to the communication environment; the terminal device generates AI model performance indication information (bit value 1) as a performance monitoring result to indicate that the AI model performance meets the performance requirements (triggering a performance monitoring result report).
  • The network device receives the performance monitoring result and learns that the AI model performance meets the performance requirements, so it determines whether to activate the AI model. In other words, when the counter does not reach the set maximum count value, the AI model performance indication information is not generated, and the performance monitoring result is not reported.
  • For example, the configuration parameter is the second configuration parameter, and the performance metric is the prediction accuracy. When the value of the filtered prediction accuracy Fn is less than the threshold value for the prediction accuracy, the counter is incremented by 1; when the value of the filtered prediction accuracy Fn is greater than the threshold value for the prediction accuracy, the counter is decremented by 1. After the next AI model inference calculates the value of the performance metric, this operation is repeated until the counter reaches the set maximum count value, indicating that the AI model performance does not meet the performance requirements, that is, the current AI model is no longer adapted to the communication environment; the terminal device generates AI model performance indication information (bit value 1) as a performance monitoring result to indicate that the AI model performance does not meet the performance requirements (triggering a performance monitoring result report).
  • The network device learns from the performance monitoring result that the AI model performance does not meet the performance requirements, and therefore determines whether to deactivate the AI model. In other words, when the counter does not reach the set maximum count value, the AI model performance indication information is not generated, and the performance monitoring result is not reported.
  • for example, the configuration parameter is the first configuration parameter (for determining whether to activate the AI model), and the performance metric is prediction accuracy;
  • when the value of the filtered prediction accuracy Fn is greater than the threshold value for prediction accuracy, the counter is increased by 1;
  • when the value of the filtered prediction accuracy Fn is less than the threshold value for prediction accuracy, the counter is decreased by 1;
  • this operation is repeated until the counter reaches the set maximum count value, indicating that the AI model performance meets the performance requirements, that is, the current AI model is suited to the communication environment; the terminal device then generates AI model performance indication information (bit value 1) as the performance monitoring result to indicate that the AI model performance meets the performance requirements (triggering a performance monitoring result report);
  • the network device receives the performance monitoring result and learns that the AI model performance meets the performance requirements, so it decides whether to activate the AI model. In other words, when the counter does not reach the set maximum count value, the AI model performance indication information is not generated, and the performance monitoring result is not reported.
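The two counter procedures above (deactivation monitoring and activation monitoring) can be sketched as follows. This is an illustrative Python sketch, not part of the claimed method; the names `monitor`, `threshold`, and `max_count` are placeholders, and the choice to floor the counter at zero is our assumption:

```python
def monitor(filtered_values, threshold, max_count, trigger_when_below):
    """Counter-based trigger over a stream of filtered metric values Fn.

    trigger_when_below=True models deactivation monitoring (accuracy below
    the threshold counts toward the trigger); False models activation
    monitoring. Returns the 1-based sample index at which the performance
    monitoring report would be triggered, or None if it never triggers.
    """
    counter = 0
    for n, fn in enumerate(filtered_values, start=1):
        below = fn < threshold
        if below == trigger_when_below:
            counter += 1
        else:
            counter = max(0, counter - 1)  # floor at zero (assumption)
        if counter >= max_count:
            return n  # indication bit (value 1) would be reported here
    return None
```

For instance, with a threshold of 0.5 and a maximum count of 3, the filtered accuracies [0.9, 0.4, 0.3, 0.2] trigger a deactivation report at the fourth sample, while a consistently good sequence never triggers it.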
  • for example, the terminal device uses the third configuration parameter (for selecting an AI model and/or switching an AI model);
  • the terminal device can calculate the value of the AI model performance metric and filter the value of the performance metric according to the configuration parameters (filter coefficients);
  • that is, the terminal device can calculate the value of the performance metric according to the configuration parameters (filter coefficients), include the calculated (e.g., filtered) value of the performance metric in the performance monitoring result, and report it to the network device.
  • the configuration parameters include filter coefficients
  • the terminal device can calculate the value Mn of the performance metric, filter the value Mn of the performance metric according to the filter coefficient, obtain the value Fn of the filtered performance metric, and report the Fn to the network device as the performance monitoring result.
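The embodiment states only that the raw metric value Mn is filtered with a configured filter coefficient to obtain Fn; it does not fix the filter form. A minimal sketch, assuming a first-order IIR filter of the kind commonly used for layer-3 measurement filtering (the filter equation and the meaning of the coefficient `a` are assumptions):

```python
def filter_metric(samples, a):
    """First-order IIR filtering of raw metric values Mn.

    Fn = (1 - a) * F(n-1) + a * Mn, initialized with F1 = M1.
    Returns the list of filtered values Fn, the last of which would be
    reported to the network device as the performance monitoring result.
    """
    fn = None
    out = []
    for mn in samples:
        fn = mn if fn is None else (1 - a) * fn + a * mn
        out.append(fn)
    return out
```

A larger coefficient `a` weights the newest sample more heavily; `a = 1` disables smoothing entirely.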
  • after receiving the performance monitoring result, the network device determines, in combination with the threshold value (and, optionally, a counter), whether the performance of the AI model meets the performance requirements and whether the current AI model is suited to the current communication environment, and decides whether to activate the AI model, deactivate the AI model, select an AI model, or switch the AI model.
  • the specific method of determining whether the performance of the AI model can meet the performance requirements is the same as that of the terminal device side, and will not be repeated here.
  • the threshold values in the above first configuration parameter, the second configuration parameter and the third configuration parameter are the same or different
  • the filter coefficients in the first configuration parameter, the second configuration parameter and the third configuration parameter are the same or different
  • the counters in the first configuration parameter, the second configuration parameter and the third configuration parameter are the same or different.
  • the above uses the performance monitoring result of a single AI model as an example.
  • the above method is equally applicable to the performance monitoring result of each AI model among multiple AI models, which will not be repeated here.
  • when a terminal device deploys multiple AI models, for example, when a network device sends configuration parameters for monitoring the performance of multiple AI models, the performance monitoring result also includes an identifier of the AI model to distinguish the performance monitoring results of different AI models.
  • the performance monitoring results are carried by UCI, for example, the AI model performance indication information, and/or the value of the AI model performance metric, and/or the identifier of the AI model can be represented by a bit sequence with a predetermined number of bits.
  • a new information element can be added to the existing UCI to carry the performance monitoring results, or an existing information element in the existing UCI can be used to carry the performance monitoring results.
  • a new UCI can also be designed to carry the performance monitoring results. The embodiments of the present application are not limited to this.
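As a rough illustration of carrying the performance monitoring result as a bit sequence in UCI, the sketch below packs a model identifier, the performance indication bit, and a quantized metric value into a fixed-width bit string. The field order and widths are placeholders, not defined by the embodiments:

```python
def pack_monitoring_report(model_id, perf_ok, metric_q, id_bits=4, metric_bits=7):
    """Pack a performance monitoring result into a UCI-style bit string.

    Illustrative layout only: [model id | performance indication bit |
    quantized metric value]. All field widths are assumptions.
    """
    assert 0 <= model_id < 2 ** id_bits
    assert 0 <= metric_q < 2 ** metric_bits
    return (f"{model_id:0{id_bits}b}"
            + ("1" if perf_ok else "0")
            + f"{metric_q:0{metric_bits}b}")
```

With the default widths, each report occupies 12 bits regardless of its content, which keeps the UCI payload size fixed.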
  • the network device provides the terminal device with configuration parameters for detecting the performance of the AI model, so that the terminal device can generate performance monitoring results based on the configuration parameters. According to the performance monitoring results, the performance of the AI model can be effectively monitored, which in turn helps to select a beam management method that meets the current communication environment and reduce system load and delay.
  • FIG6 is a schematic diagram of a method for sending and receiving information according to an embodiment of the present application. As shown in FIG6 , the method includes:
  • the network device sends configuration parameters for monitoring AI model performance to the terminal device;
  • the terminal device uses the AI model to perform beam prediction or uses a traditional method to perform beam management to obtain a beam measurement result;
  • the terminal device calculates a value of the performance metric according to the beam measurement result
  • the terminal device generates a performance monitoring result according to the configuration parameters
  • the terminal device reports the performance monitoring result to the network device;
  • the network device activates the AI model or deactivates the AI model or selects the AI model or switches the AI model according to the performance monitoring result.
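The decision in the last step can be illustrated as follows; the mapping from the reported indication bit to a concrete action is an assumption, since the embodiments leave the final decision to the network device:

```python
def network_decision(ai_active, report_bit):
    """Illustrative network-side decision on receiving a monitoring report.

    report_bit == 1 means the condition configured for the current
    monitoring procedure was met: under deactivation monitoring it means
    the requirements are NOT met, under activation monitoring it means
    they ARE met.
    """
    if ai_active and report_bit == 1:
        return "deactivate"   # fall back to traditional beam management
    if not ai_active and report_bit == 1:
        return "activate"     # enter AI-based beam prediction
    return "keep"             # no report triggered; keep current mode
```

A real network device could equally choose to select or switch to another AI model at this point instead of toggling activation.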
  • the implementation of 601-606 can refer to 301-302 and 501-502, and the overlapping content will not be repeated here.
  • the embodiment of the present application provides an information transceiver device.
  • the device may be, for example, a terminal device, or may be one or more components or assemblies configured in the terminal device, and the contents that are the same as those in the embodiment of the second aspect will not be repeated.
  • FIG7 is a schematic diagram of an information transceiver device according to an embodiment of the present application. As shown in FIG7 , the information transceiver device 700 includes:
  • a second receiving unit 701 receives configuration parameters sent by a network device for monitoring the performance of one or more AI models
  • the second sending unit 702 sends the performance monitoring results of one or more AI models to the network device.
  • the performance monitoring results include AI model performance indication information, and/or the value of the AI model performance metric, and/or the identification of the AI model.
  • the apparatus further comprises:
  • a first computing unit (not shown) is used to calculate the value of the AI model performance metric.
  • the first calculation unit calculates the value of the performance metric according to the configuration parameters, and the performance monitoring result includes the value of the performance metric.
  • the apparatus further comprises:
  • a second processing unit filters the value of the performance metric according to the configuration parameters (filter coefficients and/or counters), and/or compares the value of the performance metric with the configuration parameters (threshold values), and generates the performance monitoring result containing the AI model performance indication information according to the filtering result and/or comparison result.
  • the second processing unit processes the value of the performance metric according to the configured filter coefficient of the performance metric to generate a filtered performance metric value, compares the filtered performance metric value with the configured performance metric threshold value, and generates the performance monitoring result including the AI model performance indication information when the filtered performance metric value is greater than the configured performance metric threshold value; or generates the performance monitoring result including the AI model performance indication information when the filtered performance metric value is less than the configured performance metric threshold value; or,
  • the second processing unit compares the value of the performance metric with the configured threshold value of the performance metric: when the value of the performance metric is greater than the configured threshold value of the performance metric, the counter of the monitoring result of the statistical performance metric is increased by 1; when the value of the performance metric is less than the configured threshold value of the performance metric, the counter is decreased by 1; and when the value of the counter reaches a maximum count value, the performance monitoring result containing the AI model performance indication information is generated; or, when the value of the performance metric is less than the configured threshold value of the performance metric, the counter is increased by 1; when the value of the performance metric is greater than the configured threshold value of the performance metric, the counter is decreased by 1; and when the value of the counter reaches a maximum count value, the performance monitoring result containing the AI model performance indication information is generated.
  • the implementation of the second receiving unit 701 and the second sending unit 702 corresponds to 501 - 502 and will not be repeated here.
  • the information transceiver device 700 may also include other components or modules, and the specific contents of these components or modules may refer to the relevant technology.
  • FIG. 7 only exemplifies the connection relationship or signal direction between various components or modules, but it should be clear to those skilled in the art that various related technologies such as bus connection can be used.
  • the above-mentioned various components or modules can be implemented by hardware facilities such as processors, memories, transmitters, receivers, etc.; the implementation of this application is not limited to this.
  • the network device sends configuration parameters for detecting the performance of the AI model to the terminal device, so that the terminal device can generate performance monitoring results based on the configuration parameters. According to the performance monitoring results, the performance of the AI model can be effectively monitored, which in turn helps to select a beam management method that meets the current communication environment and reduce system load and delay.
  • the embodiment of the present application provides an information transceiver device.
  • the device may be, for example, a network device, or may be one or more components or assemblies configured in the network device, and the same contents as those in the embodiment of the first aspect will not be repeated.
  • FIG8 is a schematic diagram of an information transceiver device according to an embodiment of the present application. As shown in FIG8 , the information transceiver device 800 includes:
  • a first sending unit 801 sends configuration parameters for monitoring the performance of one or more AI models to a terminal device
  • the first receiving unit 802 receives the performance monitoring results of one or more AI models sent by the terminal device.
  • the configuration parameters include a threshold value of the performance metric and/or a filter coefficient of the performance metric and/or a counter of a monitoring result of a statistical performance metric.
  • the performance metric includes a prediction error of a beam measurement result, and/or a prediction accuracy of a beam measurement result, and/or a throughput, and/or a frame error rate.
  • the prediction error includes the difference between the predicted measurement result corresponding to the optimal beam (pair) output by the AI model and the actual measurement result of the optimal beam (pair), or the difference between the predicted measurement result corresponding to the first optimal beam (pair) output by the AI model and the measurement result corresponding to the second optimal beam (pair) actually measured, or the average value of the difference between the predicted measurement results corresponding to multiple beams (pairs) output by the AI model and the actual measurement results of the multiple beams (pairs), or the average value of the difference between the predicted measurement results corresponding to multiple first beams (pairs) output by the AI model and the measurement results of multiple second beams (pairs) actually measured.
  • the prediction accuracy includes the probability of whether the first optimal beam (pair) output by the AI model and the second optimal beam (pair) actually measured are the same, or the probability that the first number of first beams (pairs) output by the AI model contains the optimal beam (pair) actually measured, or the probability that the optimal beam (pair) output by the AI model is contained in the second number of second beams (pairs) actually measured.
  • the measurement result includes L1-RSRP or SINR.
  • the measurement result of the actual measurement is determined based on the training data of the AI model or based on the measurement result of the actual measurement when the AI model is not applied.
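The prediction-accuracy and prediction-error metrics defined above can be read, for example, as top-1/top-K hit rates over monitoring samples and as a mean absolute difference of L1-RSRP. A hypothetical Python sketch (beam/beam-pair indexing and dB units are assumptions):

```python
def top1_accuracy(predicted_best, measured_best):
    """Fraction of samples where the model's top-1 beam (pair) equals
    the actually measured optimal beam (pair)."""
    hits = sum(p == m for p, m in zip(predicted_best, measured_best))
    return hits / len(measured_best)

def topk_accuracy(predicted_topk, measured_best):
    """Fraction of samples where the measured optimal beam (pair) is
    contained in the model's top-K output set."""
    hits = sum(m in topk for topk, m in zip(predicted_topk, measured_best))
    return hits / len(measured_best)

def prediction_error_db(predicted_rsrp, measured_rsrp):
    """Mean absolute difference (e.g., in dB) between predicted and
    actually measured L1-RSRP values over multiple beams (pairs)."""
    diffs = [abs(p - m) for p, m in zip(predicted_rsrp, measured_rsrp)]
    return sum(diffs) / len(diffs)
```

Any of these values could then serve as the metric Mn that is filtered and compared against the configured threshold.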
  • the apparatus further comprises:
  • a first processing unit (not shown) activates the AI model or deactivates the AI model or selects the AI model or switches the AI model according to the performance monitoring result.
  • the configuration parameters include a first configuration parameter for determining whether to activate an AI model, and/or a second configuration parameter for determining whether to deactivate an AI model, and/or a third configuration parameter for selecting an AI model and/or switching an AI model.
  • the first configuration parameter, the second configuration parameter, and the third configuration parameter are the same or different.
  • the configuration parameters used to monitor the performance of different AI models are the same or different.
  • when the first sending unit sends configuration parameters for monitoring the performance of multiple AI models, the first sending unit is also used to send the identifier of the AI model corresponding to the configuration parameters, and the performance monitoring result also includes the identifier of the corresponding AI model.
  • the configuration parameters are carried by RRC or MAC CE or DCI, and the performance monitoring results are carried by UCI.
  • the AI model is deployed on the terminal device side.
  • the implementation of the first sending unit 801 and the first receiving unit 802 corresponds to 301 - 302 and will not be repeated here.
  • the information transceiver device 800 may also include other components or modules, and the specific contents of these components or modules may refer to the relevant technology.
  • FIG8 only exemplifies the connection relationship or signal direction between various components or modules, but it should be clear to those skilled in the art that various related technologies such as bus connection can be used.
  • the above-mentioned various components or modules can be implemented by hardware facilities such as processors, memories, transmitters, and receivers; the implementation of this application is not limited to this.
  • the network device provides the terminal device with configuration parameters for detecting the performance of the AI model, so that the terminal device can generate performance monitoring results based on the configuration parameters. According to the performance monitoring results, the performance of the AI model can be effectively monitored, which in turn helps to select a beam management method that meets the current communication environment and reduce system load and delay.
  • An embodiment of the present application also provides a communication system, and reference may be made to FIG1 .
  • the contents that are the same as those in the first to fourth embodiments will not be repeated herein.
  • the communication system 100 may include at least: a network device 101 and/or a terminal device 102, the network device sends configuration parameters for monitoring the performance of one or more AI models to the terminal device; the network device receives performance monitoring results of one or more AI models sent by the terminal device.
  • the implementation of the above configuration parameters and performance monitoring results can refer to the embodiments of the first aspect and the second aspect, which will not be repeated here.
  • An embodiment of the present application further provides a network device, which may be, for example, a base station, but the present application is not limited thereto and may also be other network devices.
  • FIG9 is a schematic diagram of the composition of a network device according to an embodiment of the present application.
  • the network device 900 may include: a processor 910 (e.g., a central processing unit CPU) and a memory 920; the memory 920 is coupled to the processor 910.
  • the memory 920 may store various data; in addition, it may store a program 930 for information processing, and the program 930 may be executed under the control of the processor 910.
  • the processor 910 may be configured to execute a program to implement the information sending and receiving method as described in the embodiment of the first aspect.
  • the processor 910 may be configured to perform the following control: sending configuration parameters for monitoring the performance of one or more AI models to a terminal device; receiving performance monitoring results of one or more AI models sent by the terminal device.
  • the network device 900 may further include: a transceiver 940 and an antenna 950, etc.; wherein the functions of the above components are similar to those of the prior art and are not described in detail here. It is worth noting that the network device 900 does not necessarily include all the components shown in FIG9 ; in addition, the network device 900 may also include components not shown in FIG9 , which may refer to the prior art.
  • the embodiment of the present application also provides a terminal device, but the present application is not limited thereto and may also be other devices.
  • FIG10 is a schematic diagram of a terminal device according to an embodiment of the present application.
  • the terminal device 1000 may include a processor 1010 and a memory 1020; the memory 1020 stores data and programs and is coupled to the processor 1010. It is worth noting that the figure is exemplary; other types of structures may also be used to supplement or replace the structure to implement telecommunication functions or other functions.
  • the processor 1010 may be configured to execute a program to implement the information sending and receiving method as described in the embodiment of the second aspect.
  • the processor 1010 may be configured to perform the following control: receiving configuration parameters sent by a network device for monitoring the performance of one or more AI models; sending performance monitoring information of one or more AI models to the network device.
  • the terminal device 1000 may further include: a communication module 1030, an input unit 1040, a display 1050, and a power supply 1060.
  • the functions of the above components are similar to those in the prior art and are not described in detail here. It is worth noting that the terminal device 1000 does not necessarily include all the components shown in FIG10 , and the above components are not necessary; in addition, the terminal device 1000 may also include components not shown in FIG10 , and reference may be made to the prior art.
  • An embodiment of the present application also provides a computer program, wherein when the program is executed in a terminal device, the program enables the terminal device to execute the information sending and receiving method described in the embodiment of the second aspect.
  • An embodiment of the present application also provides a storage medium storing a computer program, wherein the computer program enables a terminal device to execute the information sending and receiving method described in the embodiment of the second aspect.
  • An embodiment of the present application also provides a computer program, wherein when the program is executed in a network device, the program enables the network device to execute the information sending and receiving method described in the embodiment of the first aspect.
  • An embodiment of the present application also provides a storage medium storing a computer program, wherein the computer program enables a network device to execute the information sending and receiving method described in the embodiment of the first aspect.
  • the above devices and methods of the present application can be implemented by hardware, or by hardware combined with software.
  • the present application relates to such a computer-readable program, which, when executed by a logic component, enables the logic component to implement the above-mentioned devices or components, or enables the logic component to implement the various methods or steps described above.
  • the present application also relates to a storage medium for storing the above program, such as a hard disk, a magnetic disk, an optical disk, a DVD, a flash memory, etc.
  • the method/device described in conjunction with the embodiments of the present application may be directly embodied as hardware, a software module executed by a processor, or a combination of the two.
  • one or more of the functional block diagrams shown in the figure and/or one or more combinations of the functional block diagrams may correspond to various software modules of the computer program flow or to various hardware modules.
  • These software modules may correspond to the various steps shown in the figure, respectively.
  • These hardware modules may be implemented by solidifying these software modules, for example, using a field programmable gate array (FPGA).
  • the software module may be located in a RAM memory, a flash memory, a ROM memory, an EPROM memory, an EEPROM memory, a register, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
  • a storage medium may be coupled to a processor so that the processor can read information from the storage medium and write information to the storage medium; or the storage medium may be an integral part of the processor.
  • the processor and the storage medium may be located in an ASIC.
  • the software module may be stored in a memory of a mobile terminal or in a memory card that can be inserted into the mobile terminal.
  • the software module may be stored in the MEGA-SIM card or the large-capacity flash memory device.
  • one or more of the functional blocks described in the drawings and/or one or more combinations of the functional blocks may be implemented as a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or any appropriate combination thereof for performing the functions described in the present application.
  • it can also be implemented as a combination of computing devices, such as a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in communication with a DSP, or any other such configuration.
  • a method for sending and receiving information, applied to a network device, characterized in that the method comprises:
  • the network device sends configuration parameters for monitoring performance of one or more AI models to the terminal device;
  • the network device receives performance monitoring results of one or more AI models sent by the terminal device.
  • the configuration parameters include a threshold value of a performance metric and/or a filter coefficient of a performance metric and/or a counter of a monitoring result of a statistical performance metric.
  • the performance metric includes a prediction error of a beam measurement result, and/or a prediction accuracy of a beam measurement result, and/or throughput, and/or a frame error rate.
  • the prediction error includes the difference between the predicted measurement result corresponding to the optimal beam (pair) output by the AI model and the actual measurement result of the optimal beam (pair), or the difference between the predicted measurement result corresponding to the first optimal beam (pair) output by the AI model and the measurement result corresponding to the actually measured second optimal beam (pair), or the average value of the difference between the predicted measurement results corresponding to multiple beams (pairs) output by the AI model and the actual measurement results of the multiple beams (pairs), or the average value of the difference between the predicted measurement results corresponding to multiple first beams (pairs) output by the AI model and the measurement results of multiple second beams (pairs) actually measured.
  • the prediction accuracy includes the probability of whether the first optimal beam (pair) output by the AI model and the second optimal beam (pair) actually measured are the same, or the probability that the first number of first beams (pairs) output by the AI model contains the optimal beam (pair) actually measured, or the probability that the optimal beam (pair) output by the AI model is contained in the second number of second beams (pairs) actually measured.
  • the network device activates the AI model or deactivates the AI model or selects the AI model or switches the AI model according to the performance monitoring result.
  • the configuration parameters include a first configuration parameter for determining whether to activate an AI model, and/or a second configuration parameter for determining whether to deactivate an AI model, and/or a third configuration parameter for selecting an AI model and/or switching an AI model.
  • when the network device sends configuration parameters for monitoring the performance of multiple AI models, the network device also sends the identifier of the AI model corresponding to the configuration parameters, and the performance monitoring result also includes the identifier of the corresponding AI model.
  • a method for sending and receiving information, applied to a terminal device, characterized in that the method comprises:
  • the terminal device receives configuration parameters sent by the network device for monitoring the performance of one or more AI models
  • the terminal device sends the performance monitoring results of one or more AI models to the network device.
  • the performance monitoring results include AI model performance indication information, and/or the value of the AI model performance metric, and/or the identifier of the AI model.
  • the terminal device calculates the value of the AI model performance metric.
  • the terminal device filters the value of the performance metric according to the configuration parameters (filter coefficients and/or counters), and/or compares the value of the performance metric with the configuration parameters (threshold values), and generates the performance monitoring result containing AI model performance indication information based on the filtering result and/or comparison result.
  • the terminal device processes the value of the performance metric according to the configured filter coefficient of the performance metric to generate a filtered performance metric value, compares the filtered performance metric value with the configured performance metric threshold value, and generates the performance monitoring result containing the AI model performance indication information when the filtered performance metric value is greater than the configured performance metric threshold value; or generates the performance monitoring result containing the AI model performance indication information when the filtered performance metric value is less than the configured performance metric threshold value; or,
  • the terminal device compares the value of the performance metric with the configured threshold value of the performance metric: when the value of the performance metric is greater than the configured threshold value of the performance metric, the counter of the monitoring result of the statistical performance metric is increased by 1; when the value of the performance metric is less than the configured threshold value of the performance metric, the counter is decreased by 1; and when the value of the counter reaches a maximum count value, the performance monitoring result containing the AI model performance indication information is generated; or, when the value of the performance metric is less than the configured threshold value of the performance metric, the counter is increased by 1; when the value of the performance metric is greater than the configured threshold value of the performance metric, the counter is decreased by 1; and when the value of the counter reaches the maximum count value, the performance monitoring result containing the AI model performance indication information is generated.
  • a network device comprising a memory and a processor, wherein the memory stores a computer program, and the processor is configured to execute the computer program to implement the method as described in any one of Notes 1 to 14.
  • a terminal device comprising a memory and a processor, wherein the memory stores a computer program, and the processor is configured to execute the computer program to implement the method as described in any one of Notes 15 to 20.
  • a communication system comprising the network device described in Note 21 and/or the terminal device described in Note 22.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

Embodiments of the present application provide an information sending and receiving method and apparatus. The method includes: a network device sends, to a terminal device, configuration parameters for monitoring the performance of one or more AI models; and receives performance monitoring results of the one or more AI models sent by the terminal device.

Description

Information sending and receiving method and apparatus — Technical Field

The embodiments of the present application relate to the field of communication technologies.

Background

As spectrum resources in the low-frequency bands become scarce, the millimeter-wave band, which can provide larger bandwidth, has become an important frequency band for the 5G NR (New Radio) system. Owing to their shorter wavelength, millimeter waves have propagation characteristics different from those of traditional low-frequency bands, such as higher propagation loss and poor reflection and diffraction performance. Therefore, larger-scale antenna arrays are usually used to form shaped beams with greater gain, overcome the propagation loss, and ensure system coverage. The 5G NR standard has designed a series of beam management solutions, including beam sweeping, beam measurement, beam reporting, and beam indication. However, when the number of transmit and receive beams is large, the system load and delay increase greatly.

With the development of artificial intelligence (AI) technology, applying AI to the wireless physical layer to overcome the difficulties of traditional methods has become a current technical direction. For beam management, using an AI model to predict the spatially optimal beam pair from a small number of beam measurement results can greatly reduce system load and delay.

It should be noted that the above introduction to the technical background is merely provided to facilitate a clear and complete description of the technical solutions of the present application and to facilitate understanding by those skilled in the art. The above technical solutions should not be considered well known to those skilled in the art merely because they are described in the background section of the present application.

Summary

Assume that the transmitting end of a communication system has M beams and the receiving end has N beams. Under the existing standard, M*N beams need to be measured; when M and N are large, measuring M*N beams leads to heavy system load and long delay. Using a model (e.g., an AI model) to predict the optimal beam pair from a small number of beam measurement results can greatly reduce the system load and delay caused by beam measurement.

The inventors found that, when an AI model is used for beam prediction, the communication environment often changes; if the AI model is not suited to the current communication environment, communication performance degrades. In this case, the system can fall back to the traditional beam management method, reselect the AI model, or retrain the AI model. In addition, if the system is operating under traditional beam management, the system can enter the state of using the AI model for beam prediction when the AI model is suited to the current communication environment. The above process depends on monitoring the performance of the AI model in different communication environments; therefore, how to monitor the performance of the AI model has become an urgent problem to be solved, and there has been no related discussion so far.
In view of at least one of the above problems, the embodiments of the present application provide an information sending and receiving method and apparatus.

According to one aspect of the embodiments of the present application, an information transceiver apparatus applied to a network device is provided, the apparatus comprising:

a first sending unit, which sends, to a terminal device, configuration parameters for monitoring the performance of one or more AI models;

a first receiving unit, which receives performance monitoring results of one or more AI models sent by the terminal device.

According to another aspect of the embodiments of the present application, an information transceiver apparatus applied to a terminal device is provided, the apparatus comprising:

a second receiving unit, which receives configuration parameters sent by a network device for monitoring the performance of one or more AI models;

a second sending unit, which sends performance monitoring results of one or more AI models to the network device.

According to another aspect of the embodiments of the present application, a communication system is provided, comprising a terminal device and/or a network device, wherein the terminal device comprises the information transceiver apparatus of the aforementioned one aspect, and the network device comprises the information transceiver apparatus of the aforementioned other aspect.

One of the beneficial effects of the embodiments of the present application is that the network device sends the terminal device configuration parameters for detecting the performance of the AI model, so that the terminal device can generate performance monitoring results based on the configuration parameters. According to the performance monitoring results, the performance of the AI model can be effectively monitored, which in turn helps to select a beam management method suited to the current communication environment and to reduce system load and delay.
With reference to the following description and drawings, specific embodiments of the present application are disclosed in detail, indicating the ways in which the principles of the present application may be employed. It should be understood that the embodiments of the present application are not thereby limited in scope. Within the spirit and terms of the appended claims, the embodiments of the present application include many changes, modifications, and equivalents.

Features described and/or illustrated with respect to one embodiment may be used in the same or a similar way in one or more other embodiments, combined with features in other embodiments, or substituted for features in other embodiments.

It should be emphasized that the term "comprise/include", when used herein, refers to the presence of features, integers, steps, or components, but does not exclude the presence or addition of one or more other features, integers, steps, or components.
附图说明
在本申请实施例的一个附图或一种实施方式中描述的元素和特征可以与一个或更多个其它附图或实施方式中示出的元素和特征相结合。此外,在附图中,类似的标号表示几个附图中对应的部件,并可用于指示多于一种实施方式中使用的对应部件。
图1是本申请的通信系统的一示意图;
图2是本申请实施例的通信系统中发送波束和接收波束的一个示意图;
图3是本申请实施例的信息收发方法的示意图;
图4是本申请实施例的发送波束和接收波束的示意图;
图5是本申请实施例的信息收发方法的示意图;
图6是本申请实施例的信息收发方法的示意图;
图7是本申请实施例的信息收发装置的示意图;
图8是本申请实施例的信息收发装置的示意图;
图9是本申请实施例的网络设备的示意图;
图10是本申请实施例的终端设备的示意图。
具体实施方式
参照附图,通过下面的说明书,本申请的前述以及其它特征将变得明显。在说明书和附图中,具体公开了本申请的特定实施方式,其表明了其中可以采用本申请的原则的部分实施方式,应了解的是,本申请不限于所描述的实施方式,相反,本申请包括落入所附权利要求的范围内的全部修改、变型以及等同物。
在本申请实施例中,术语“第一”、“第二”等用于对不同元素从称谓上进行区分,但并不表示这些元素的空间排列或时间顺序等,这些元素不应被这些术语所限制。术语“和/或”包括相关联列出的术语的一种或多个中的任何一个和所有组合。术语“包含”、“包括”、“具有”等是指所陈述的特征、元素、元件或组件的存在,但并不排除存在或添加一个或多个其他特征、元素、元件或组件。
在本申请实施例中,单数形式“一”、“该”等包括复数形式,应广义地理解为“一种”或“一类”而并不是限定为“一个”的含义;此外术语“所述”应理解为既包括单数形式也包括复数形式,除非上下文另外明确指出。此外术语“根据”应理解为“至少部分根据……”,术语“基于”应理解为“至少部分基于……”,除非上下文另外明确指出。
在本申请实施例中，术语“通信网络”或“无线通信网络”可以指符合如下任意通信标准的网络，例如长期演进（LTE，Long Term Evolution）、增强的长期演进（LTE-A，LTE-Advanced）、宽带码分多址接入（WCDMA，Wideband Code Division Multiple Access）、高速分组接入（HSPA，High-Speed Packet Access）等等。
并且,通信系统中设备之间的通信可以根据任意阶段的通信协议进行,例如可以包括但不限于如下通信协议:1G(generation)、2G、2.5G、2.75G、3G、4G、4.5G以及5G、新无线(NR,New Radio)、未来的6G等等,和/或其他目前已知或未来将被开发的通信协议。
在本申请实施例中,术语“网络设备”例如是指通信系统中将终端设备接入通信网络并为该终端设备提供服务的设备。网络设备可以包括但不限于如下设备:基站(BS,Base Station)、接入点(AP、Access Point)、发送接收点(TRP,Transmission Reception Point)、广播发射机、移动管理实体(MME、Mobile Management Entity)、网关、服务器、无线网络控制器(RNC,Radio Network Controller)、基站控制器(BSC,Base Station Controller)等等。
其中,基站可以包括但不限于:节点B(NodeB或NB)、演进节点B(eNodeB或eNB)以及5G基站(gNB),等等,此外还可包括远端无线头(RRH,Remote Radio Head)、远端无线单元(RRU,Remote Radio Unit)、中继(relay)或者低功率节点(例如femeto、pico等等)。并且术语“基站”可以包括它们的一些或所有功能,每个基站可以对特定的地理区域提供通信覆盖。术语“小区”可以指的是基站和/或其覆盖区域,这取决于使用该术语的上下文。
在本申请实施例中,术语“用户设备”(UE,User Equipment)或者“终端设备”(TE,Terminal Equipment或Terminal Device)例如是指通过网络设备接入通信网络并接收网络服务的设备。终端设备可以是固定的或移动的,并且也可以称为移动台(MS,Mobile Station)、终端、用户台(SS,Subscriber Station)、接入终端(AT,Access Terminal)、站,等等。
其中,终端设备可以包括但不限于如下设备:蜂窝电话(Cellular Phone)、个人数字助理(PDA,Personal Digital Assistant)、无线调制解调器、无线通信设备、手持设备、机器型通信设备、膝上型计算机、无绳电话、智能手机、智能手表、数字相机,等等。
再例如,在物联网(IoT,Internet of Things)等场景下,终端设备还可以是进行监控或测量的机器或装置,例如可以包括但不限于:机器类通信(MTC,Machine Type Communication)终端、车载通信终端、设备到设备(D2D,Device to Device)终端、机器到机器(M2M,Machine to Machine)终端,等等。
此外,术语“网络侧”或“网络设备侧”是指网络的一侧,可以是某一基站,也可以包括如上的一个或多个网络设备。术语“用户侧”或“终端侧”或“终端设备侧”是指用户或终端的一侧,可以是某一UE,也可以包括如上的一个或多个终端设备。本文在没有特别指出的情况下,“设备”可以指网络设备,也可以指终端设备。
在以下的说明中,在不引起混淆的情况下,术语“上行控制信号”和“上行控制信息(UCI,Uplink Control Information)”或“物理上行控制信道(PUCCH,Physical Uplink Control Channel)”可以互换,术语“上行数据信号”和“上行数据信息”或“物理上行共享信道(PUSCH,Physical Uplink Shared Channel)”可以互换;
术语“下行控制信号”和“下行控制信息(DCI,Downlink Control Information)”或“物理下行控制信道(PDCCH,Physical Downlink Control Channel)”可以互换,术语“下行数据信号”和“下行数据信息”或“物理下行共享信道(PDSCH,Physical Downlink Shared Channel)”可以互换。
另外,发送或接收PUSCH可以理解为发送或接收由PUSCH承载的上行数据,发送或接收PUCCH可以理解为发送或接收由PUCCH承载的上行信息,发送或接收PRACH可以理解为发送或接收由PRACH承载的preamble;上行信号可以包括上行数据信号和/或上行控制信号等,也可以称为上行传输(UL transmission)或上行信息或上行信道。在上行资源上发送上行传输可以理解为使用该上行资源发送该上行传输。类似地,可以相应地理解下行数据/信号/信道/信息。
在本申请实施例中,高层信令例如可以是无线资源控制(RRC)信令;例如称为RRC消息(RRC message),例如包括MIB、系统信息(system information)、专用RRC消息;或者称为RRC IE(RRC information element)。高层信令例如还可以是MAC(Medium Access Control)信令;或者称为MAC CE(MAC control element)。但本申请不限于此。
以下通过示例对本申请实施例的场景进行说明,但本申请不限于此。
图1是本申请实施例的通信系统的示意图,示意性说明了以终端设备和网络设备 为例的情况,如图1所示,通信系统100可以包括网络设备101和终端设备102、103。为简单起见,图1仅以两个终端设备和一个网络设备为例进行说明,但本申请实施例不限于此。
在本申请实施例中,网络设备101和终端设备102、103之间可以进行现有的业务或者未来可实施的业务发送。例如,这些业务可以包括但不限于:增强的移动宽带(eMBB,enhanced Mobile Broadband)、大规模机器类型通信(mMTC,massive Machine Type Communication)和高可靠低时延通信(URLLC,Ultra-Reliable and Low-Latency Communication),等等。
其中,终端设备102可以向网络设备101发送数据,例如使用授权或免授权传输方式。网络设备101可以接收一个或多个终端设备102发送的数据,并向终端设备102反馈信息,例如确认ACK/非确认NACK信息等,终端设备102根据反馈信息可以确认结束传输过程、或者还可以再进行新的数据传输,或者可以进行数据重传。
值得注意的是,图1示出了两个终端设备102、103均处于网络设备101的覆盖范围内,但本申请不限于此。两个终端设备102、103可以均不在网络设备101的覆盖范围内,或者一个终端设备102在网络设备101的覆盖范围之内而另一个终端设备103在网络设备101的覆盖范围之外。
AI模型(或ML模型)包括但不限于:输入层(input)、多个卷积层、连接层(concat)、全连接层(FC)以及量化器等。其中,多个卷积层的处理结果在连接层进行合并,关于AI模型的具体结构可以参考现有技术,此处不再赘述。
图2是本申请各实施例的通信系统中发送波束和接收波束的一个示意图。如图2所示,在通信系统100中,以下行信道为例,网络设备101可以具有M1个下行发送波束DL TX,终端设备102可以具有N1个下行接收波束DL RX。
在本申请实施例中,如图2所示,用于预测波束测量结果的模型201可以被部署于网络设备101或终端设备102。模型201可以根据部分波束的测量结果,预测M1*N1个波束的测量结果。其中,模型201例如可以是AI模型。
此外,针对上行信道,网络设备101可以具有N2个上行接收波束(图2未示出),终端设备102可以具有M2个上行发送波束UL TX(图2未示出)。
以下结合附图和实施例进行说明。
第一方面的实施例
本申请实施例提供一种信息收发方法,从网络设备侧进行说明。
图3是本申请实施例的信息收发方法的一示意图,如图3所示,该方法包括:
301,网络设备向终端设备发送用于监测一个或多个AI模型性能的配置参数;
302,网络设备接收所述终端设备发送的一个或多个AI模型的性能监测结果。
值得注意的是,以上附图3仅对本申请实施例进行了示意性说明,但本申请不限于此。例如可以适当地调整各个操作之间的执行顺序,此外还可以增加其他的一些操作或者减少其中的某些操作。本领域的技术人员可以根据上述内容进行适当地变型,而不仅限于上述附图3的记载。
在一些实施例中，用于波束预测的AI模型部署在终端设备中，利用该AI模型，通过少量的波束对测量结果预测最优波束对，该AI模型输入参数为部分波束对的RSRP（Reference Signal Receiving Power，参考信号接收功率）值，也可以为部分波束对的SINR（Signal to Interference plus Noise Ratio，信号与干扰加噪声比）值，输出参数的物理量为所有波束对的RSRP或者SINR。图4是本申请实施例中发送波束和接收波束以及AI模型示意图，如图4所示，例如下行发送波束有12个，下行接收波束有8个，总共有96个波束对。通过配置，终端设备只测量了其中24个波束对的RSRP（6个下行发送波束和4个下行接收波束）。此时AI模型的输入参数的维度为24，物理量为RSRP或SINR，输出参数的维度为96，物理量也为RSRP或SINR，可以从预测结果中选出最优的波束对。
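上述输入/输出维度可以用下面的Python片段加以示意。这只是在上述假设下的一个最小草图：波束对按“发送波束索引×接收波束数+接收波束索引”平铺，函数名与具体数值均为本文的示例性假设，并非本申请方案的实际实现。

```python
# 示意：12个下行发送波束 × 8个下行接收波束 = 96个波束对，
# 模型输出96个预测RSRP值，最优波束对即预测值最大的那一对。
N_TX, N_RX = 12, 8
N_PAIRS = N_TX * N_RX           # 96

def best_beam_pair(predicted_rsrp):
    """返回预测RSRP（dBm）最大的 (发送波束索引, 接收波束索引)。"""
    assert len(predicted_rsrp) == N_PAIRS
    idx = max(range(N_PAIRS), key=lambda i: predicted_rsrp[i])
    return divmod(idx, N_RX)    # 平铺索引 -> (tx, rx)

# 玩具例子：令波束对 (tx=3, rx=5) 的预测RSRP最大
pred = [-100.0] * N_PAIRS
pred[3 * N_RX + 5] = -60.0
print(best_beam_pair(pred))     # -> (3, 5)
```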
在一些实施例中,在终端设备中可以预先部署一个或多个AI模型,为了监测部署的AI模型的性能,网络设备向终端设备发送用于监测一个或多个AI模型性能的配置参数。由此,终端设备能够根据该配置参数生成性能监测结果,根据该性能监测结果能够有效的监测AI模型的性能,进而有助于选择符合当前通信环境的波束管理方式,减少系统负荷和延迟。
在一些实施例中,该方法还包括(未图示):根据所述性能监测结果激活AI模型或去激活AI模型或选择AI模型或者切换AI模型。该配置参数包括用于判断激活AI模型的第一配置参数,和/或用于判断去激活AI模型的第二配置参数,和/或用于选择AI模型和/或切换AI模型的第三配置参数,也就是说,网络设备可以同时发送上述三种配置参数,或者也可以根据系统的状态发送三种配置参数中的至少一种,所 述第一配置参数、所述第二配置参数和所述第三配置参数相同或不同,以下详细说明。
在一些实施例中,在终端设备中预先部署一个AI模型时,系统可以处于AI模型激活状态(也就是使用AI模型进行波束预测,或者说AI模型正在工作),网络设备可以发送用于监测该AI模型性能的配置参数(以下也可以称为用于判断去激活的配置参数,或者第二配置参数),终端设备可以根据该配置参数生成性能监测结果,网络设备基于该性能监测结果确定是否去激活AI模型(例如恢复为传统的波束管理状态)。
在一些实施例中,在终端设备中预先部署一个AI模型时,系统可以处于AI模型去激活状态(传统的波束管理状态),网络设备可以发送用于监测该AI模型性能的配置参数(以下也可以称为用于判断激活的配置参数,或者第一配置参数),终端设备可以根据该配置参数生成性能监测结果,网络设备基于该性能监测结果确定是否激活AI模型(例如恢复为使用AI模型进行波束预测的状态或者说系统进入AI模型激活状态或AI模型工作状态)。
在一些实施例中,在终端设备中预先部署多个AI模型时,系统可以处于AI模型去激活状态(传统的波束管理状态)或AI模型激活状态(使用AI模型进行波束预测,或者说AI模型正在工作),网络设备可以发送用于监测其中一个AI模型性能的配置参数(包括前述第一配置参数或第二配置参数),终端设备可以根据该配置参数生成性能监测结果,网络设备基于该性能监测结果确定是否需要去激活AI模型(例如恢复为传统的波束管理状态,基于第二配置参数),或者基于该性能监测结果确定是否激活AI模型(例如恢复为使用AI模型进行波束预测的状态或者说系统进入AI模型激活状态或AI模型工作状态,基于第一配置参数)。
在以上实施例中,第一配置参数和第二配置参数可以同时发送给终端设备,或者根据系统处于的状态仅发送其中的一种配置参数,例如系统处于AI模型激活状态时,发送第二配置参数,系统处于AI模型去激活状态时,发送第一配置参数,本申请实施例并不以此作为限制。例如,网络设备向终端设备发送的信息为:一个AI模型的配置参数,前述配置参数可以包括第一配置参数和/或第二配置参数,第一配置参数和第二配置参数可以相同或不同。
在一些实施例中,在终端设备中预先部署多个AI模型时,系统可以处于AI模型去激活状态(传统的波束管理状态)或AI模型激活状态(使用一个AI模型进行波束 预测,或者说其中一个AI模型正在工作),网络设备可以发送用于监测多个AI模型性能的配置参数(包括第二配置参数或用于选择AI模型和/或切换AI模型的配置参数,即第三配置参数),终端设备可以根据该配置参数生成性能监测结果,网络设备基于该性能监测结果确定是否选择一个AI模型进入工作状态(原来处于去激活状态,基于第三配置参数),或者确定是否需要切换AI模型进行波束预测(原来处于一个AI模型激活状态,基于第三配置参数),或者是否需要去激活AI模型(例如恢复为传统的波束管理状态,基于第二配置参数)。
在该实施例中,第三配置参数和第二配置参数可以同时发送给终端设备,或者根据系统处于的状态仅发送其中的一种配置参数;另一方面,多个AI模型中各个AI模型的用于监测性能的配置参数可以同时发送给终端设备,或者,也可以分多次分别发送给终端设备,此处不再一一举例。
在该实施例中,在网络设备发送用于监测多个AI模型性能的配置参数时,用于监测不同AI模型性能的配置参数相同或不同,例如,在配置参数相同时,可以仅发送针对所有AI模型的一组配置参数,即网络设备向终端设备发送的信息为:配置参数;在配置参数不同时,网络设备还可以发送与所述配置参数对应的AI模型的标识,以区分为不同AI模型配置的配置参数。例如,网络设备向终端设备发送的信息为:AI模型1的标识,AI模型1的配置参数1,AI模型2的标识,AI模型2的配置参数2,AI模型3的标识,AI模型3的配置参数3......,上述信息可以同时发送,或者分多次发送,例如第一次发送AI模型1的标识,AI模型1的配置参数1,第二次发送AI模型2的标识,AI模型2的配置参数2,第三次发送AI模型3的标识,AI模型3的配置参数3,以此类推。前述配置参数可以包括第二配置参数和/或第三配置参数,第三配置参数和第二配置参数可以相同或不同。
以下以用于监测一个AI模型性能的配置参数为例进行说明,需要说明的是,以下说明适应于第一配置参数、第二配置参数和第三配置参数,也适用于其他AI模型监测性能的配置参数。
以下详细说明。
在一些实施例中,所述配置参数包括性能度量的门限值和/或性能度量的滤波器系数和/或统计性能度量的监测结果的计数器。以上性能度量可以包括波束测量结果的预测误差,和/或波束测量结果的预测精度,和/或吞吐量,和/或误帧率,以下分别 说明。
在一些实施例中,AI模型的输入为部分波束(对)的测量结果,AI模型的输出可以包括各个波束(对)的测量结果的预测值和对应的波束(对)的标识信息,预测值和实际测量的结果可能存在误差,该误差可以用于评价AI模型的好坏,因此,可以将预测误差或预测精度作为性能度量。
例如，所述预测误差包括AI模型输出的最优波束(对)对应的预测测量结果和所述最优波束(对)的实际测量结果的差值。也就是说，预测误差为对应相同最优波束的AI模型输出的预测测量结果和实际测量结果的差值。
例如,所述预测误差包括AI模型输出的第一最优波束(对)对应的预测测量结果和实际测量的第二最优波束(对)对应的测量结果的差值。该第一最优波束(对)是AI模型输出的最优波束(对),该第二最优波束(对)是实际测量的最优波束(对),该第一最优波束(对)和第二最优波束(对)可能相同或不同。
例如,所述预测误差包括AI模型输出的多个波束(对)对应的预测测量结果与所述多个波束(对)的实际测量结果的差值的平均值。也就是说,预测误差为对应相同多个波束(对)的AI模型输出的预测测量结果和实际测量结果的差值的平均值,该多个波束可以为测量结果最大的K个波束(对),但本申请实施例并不以此作为限制。例如,多个波束(对)的预测测量结果为B1,B2,...,BK,实际测量结果为A1,A2,...,AK,该预测误差为(B1-A1+B2-A2+...+BK-AK)/K。
例如,所述预测误差包括AI模型输出的多个第一波束(对)对应的预测测量结果与实际测量的多个第二波束(对)的测量结果的差值的平均值。该第一波束(对)是AI模型输出的例如测量结果最大的多(K)个波束(对),该第二波束(对)是实际测量的例如测量结果最大的多(K)个波束(对),该第一波束(对)和第二波束(对)可能相同或不同。例如,第一波束(对)的预测测量结果为B1,B2,...,BK,第二波束(对)的实际测量结果为A1,A2,...,AK,该预测误差为(B1-A1+B2-A2+...+BK-AK)/K。
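上段的平均预测误差可以示意如下。这是一个示例性草图：假设对相同的K个波束对比较预测值B与实测值A，RSRP数值均为虚构。

```python
# 平均预测误差：(B1-A1 + B2-A2 + ... + BK-AK) / K
def avg_prediction_error(predicted, actual):
    assert len(predicted) == len(actual) and predicted
    return sum(b - a for b, a in zip(predicted, actual)) / len(predicted)

# K = 3 个波束对的玩具RSRP值（dBm）
B = [-60.0, -65.0, -70.0]   # 预测测量结果
A = [-62.0, -64.0, -71.0]   # 实际测量结果
print(avg_prediction_error(B, A))   # -> 0.666...（即 2/3）
```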
例如,所述预测精度包括AI模型输出的第一最优波束(对)和实际测量的第二最优波束(对)相同的第一概率,或者AI模型输出的第一数量个第一波束(对)中包含实际测量的最优波束(对)的第二概率,或者AI模型输出的最优波束(对)包含于实际测量的第二数量个第二波束(对)的第三概率。关于第一波束(对)和第二 波束(对)的描述如前所述,此处不再重复。其中,可以通过多个AI模型推理周期来统计上述概率。例如,通过M个AI模型推理周期统计第一概率,例如在M个AI模型推理周期中,有M-2个推理周期第一最优波束(对)和第二最优波束(对)相同,有2个推理周期第一最优波束(对)和第二最优波束(对)不相同,则第一概率为(M-2)/M,该第二概率和第三概率的计算方法类似,此处不再一一举例。
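上述第一概率（Top-1意义下的预测精度）可以示意如下。这是一个示例性草图，统计窗口M与各周期的波束对均为假设数据。

```python
# 在M个AI模型推理周期中统计：预测最优波束对与实测最优波束对相同的比例
def top1_accuracy(predicted_best, measured_best):
    assert len(predicted_best) == len(measured_best)
    hits = sum(p == a for p, a in zip(predicted_best, measured_best))
    return hits / len(predicted_best)

# M = 5 个周期，其中2个周期预测与实测不一致 -> (M-2)/M = 0.6
pred_best = [(3, 5), (3, 5), (0, 1), (7, 2), (3, 5)]
meas_best = [(3, 5), (3, 5), (0, 2), (7, 3), (3, 5)]
print(top1_accuracy(pred_best, meas_best))   # -> 0.6
```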
在一些实施例中,上述测量结果包括L1-RSRP或SINR,本申请实施例并不以此作为限制,上述实际测量的测量结果根据AI模型的训练数据确定或者根据未应用AI模型时实际测量的测量结果确定,即上述实际测量的L1-RSRP或SINR可以是AI模型训练周期测量的标签数据,或者也可以是历史上真实的测量数据(例如未应用AI模型时使用传统波束管理状态时的测量数据)。
在一些实施例中,该性能度量还可以是吞吐量,和/或误帧率,吞吐量和误帧率的计算方法可以参考现有技术,此处不再一一举例。
在一些实施例中,配置参数中的该门限值用于与终端设备计算的性能度量的值进行比较,以确定AI模型性能能否满足性能需求,实现AI模型性能的监测。另外,当通信的环境变化频繁或者由于终端侧在计算性能度量的值存在计算误差时,会造成系统的波束管理在当前AI模型是否适合的判断之间频繁切换(AI模型激活/去激活),从而对系统信令的负载和通信的延时带来影响,使用滤波器系数和/或计数器用于对计算的性能度量的值进行滤波处理,并根据滤波处理的后性能度量的值确定AI模型性能能否满足性能需求,由此,可以避免AI模型在激活和去激活之间频繁切换。关于该门限值、滤波器系数以及计数器如何应用将在后述第二方面的实施例进行说明。
在一些实施例中,该配置参数由RRC或MAC CE或DCI承载,例如,该门限值、滤波器系数和/或计数器可以使用预定比特数量的比特序列来表示,比特序列对应的十进制的值即为该门限值、滤波器系数和/或计数器,可以在现有RRC或MAC CE或DCI中新增信息元来承载该配置参数,也可以在现有RRC或MAC CE或DCI中现有信息元来承载该配置参数,也可以设计新的RRC或MAC CE或DCI来承载该配置参数,本申请实施例并不以此作为限制。
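“比特序列对应的十进制的值即为配置值”这一点可以示意如下。仅为示例性草图：字段宽度与取值均为本文假设，并非任何RRC、MAC CE或DCI字段的实际定义。

```python
# 将固定宽度的比特序列按无符号整数解释为配置值
def decode_field(bits):
    """例如 '0101' -> 5。"""
    return int(bits, 2)

print(decode_field("0101"))   # -> 5
```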
在一些实施例中,在302中,终端设备可以基于该配置参数生成性能监测结果,并上报给网络设备,该性能监测结果包括AI模型性能指示信息,和/或AI模型性能度量的值,和/或AI模型的标识,具体将在第二方面的实施例进行说明。
以上各个实施例仅对本申请实施例进行了示例性说明,但本申请不限于此,还可以在以上各个实施例的基础上进行适当的变型。例如,可以单独使用上述各个实施例,也可以将以上各个实施例中的一种或多种结合起来。
由此，网络设备向终端设备发送用于监测AI模型性能的配置参数，终端设备能够根据该配置参数生成性能监测结果，根据该性能监测结果能够有效地监测AI模型的性能，进而有助于选择符合当前通信环境的波束管理方式，减少系统负荷和延迟。
第二方面的实施例
本申请实施例提供一种信息收发方法,从终端设备侧进行说明,与第一方面的实施例相同的内容不再赘述。
图5是本申请实施例的信息收发方法的一示意图,如图5所示,该方法包括:
501,终端设备接收网络设备发送的用于监测一个或多个AI模型性能的配置参数;
502,该终端设备向所述网络设备发送一个或多个AI模型的性能监测结果。
值得注意的是,以上附图5仅对本申请实施例进行了示意性说明,但本申请不限于此。例如可以适当地调整各个操作之间的执行顺序,此外还可以增加其他的一些操作或者减少其中的某些操作。本领域的技术人员可以根据上述内容进行适当地变型,而不仅限于上述附图5的记载。
在一些实施例中,上述配置参数的实施方式可以参考第一方面的实施例,501的实施方式与301对应,此处不再赘述。
如前所述,配置参数中的该门限值用于与终端设备计算的性能度量的值进行比较,以确定AI模型性能能否满足性能需求,实现AI模型性能的监测。
例如,性能度量为预测误差时,预测误差的值大于针对预测误差的门限值时,表示AI模型性能不满足性能需求,预测误差的值小于针对预测误差的门限值时,表示AI模型性能满足性能需求;性能度量为预测精度时,预测精度的值大于针对预测精度的门限值时,表示AI模型性能满足性能需求,预测精度的值小于针对预测精度的门限值时,表示AI模型性能不满足性能需求。
如前所述,配置参数中的滤波器系数用于对计算的性能度量的值进行滤波处理,并根据滤波处理的后性能度量的值确定AI模型性能能否满足性能需求,由此,可以避免AI模型在激活和去激活之间频繁切换。
例如，滤波公式如下：
F_n = (1 - a) × F_{n-1} + a × M_n
其中，a为网络侧发送的配置参数中的滤波器系数，M_n为最近一次AI模型推理所计算的性能度量的值，F_n为更新后的滤波后的性能度量的值，F_{n-1}为上次的滤波后的性能度量的值，n表示滤波的次数。
例如，性能度量M_n为预测误差时，滤波后的预测误差F_n的值大于针对预测误差的门限值时，表示AI模型性能不满足性能需求，滤波后的预测误差F_n的值小于针对预测误差的门限值时，表示AI模型性能满足性能需求；性能度量M_n为预测精度时，滤波后的预测精度F_n的值大于针对预测精度的门限值时，表示AI模型性能满足性能需求，滤波后的预测精度F_n的值小于针对预测精度的门限值时，表示AI模型性能不满足性能需求。
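上述一阶滤波可以示意如下。示例性草图：滤波器系数a与输入的度量值均为假设数值。

```python
# 一阶IIR滤波：F_n = (1 - a) * F_{n-1} + a * M_n
def filter_metric(prev_f, metric, a):
    return (1 - a) * prev_f + a * metric

f = 0.0                       # F_0；每次触发上报后复位为0
for m in [2.0, 2.0, 2.0]:     # 连续三次推理得到的预测误差（dB）
    f = filter_metric(f, m, a=0.5)
print(f)                      # -> 1.75
```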
如前所述,配置参数中的计数器用于对计算的性能度量的值进行滤波处理,并根据滤波处理的后性能度量的值确定AI模型性能能否满足性能需求,由此,可以避免AI模型在激活和去激活之间频繁切换。
例如，网络设备发送的配置参数中的计数器为N比特的计数器，并设置最高计数值为Y。性能度量M_n为预测误差时，预测误差M_n的值大于针对预测误差的门限值时，计数器加1，预测误差M_n的值小于针对预测误差的门限值时，计数器减1，以此类推，直至计数器到达所设置的最高计数值，反之亦可（具体与系统状态有关）；性能度量M_n为预测精度时，预测精度M_n的值大于针对预测精度的门限值时，计数器加1，预测精度M_n的值小于针对预测精度的门限值时，计数器减1，以此类推，直至计数器到达所设置的最高计数值，反之亦可（具体与系统状态有关）。
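上述计数器行为可以示意如下。示例性草图：这里假设系统处于AI模型激活状态、性能度量为预测误差，即超过门限则加1，否则减1且不低于0，计数器到达最高计数值Y时触发一次上报；门限与Y的取值均为本文假设。

```python
def run_counter(errors, threshold, max_count):
    """逐周期更新计数器，到达最高计数值时返回True（触发上报）。"""
    count = 0
    for e in errors:            # 每个推理周期一个度量值
        count = count + 1 if e > threshold else max(0, count - 1)
        if count >= max_count:
            return True
    return False

# 门限3dB、Y=3：连续三个“坏”周期触发上报；好坏交替则不触发
print(run_counter([4.0, 5.0, 6.0], threshold=3.0, max_count=3))        # -> True
print(run_counter([4.0, 1.0, 4.0, 1.0], threshold=3.0, max_count=3))   # -> False
```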
以上说明了上述配置参数的含义,以下详细说明如何生成性能监测结果。
在一些实施例中,性能监测结果包括AI模型性能指示信息,和/或AI模型性能度量的值,和/或AI模型的标识。
在一些实施例中,终端设备可以计算性能度量的值,并将包含性能度量的值的性能监测结果发送给网络设备,由网络设备判断AI模型性能的能否满足性能需求,进而激活AI模型或去激活AI模型或选择AI模型或者切换AI模型;或者,终端设备可以计算性能度量的值,并由终端设备判断AI模型性能能否满足性能需求,生成包含AI模型性能指示信息的性能监测结果发送给网络设备,网络设备根据该性能监测 结果激活AI模型或去激活AI模型或选择AI模型或者切换AI模型,其中,在终端设备部署了多个AI模型,且针对多个AI模型的性能监测结果不同时,该性能监测结果中还可以包括AI模型的标识,以区分不同AI模型的性能监测结果。以下分别说明。
(一)终端设备判断AI模型性能能否满足性能需求
在一些实施例中,终端设备可以根据所述配置参数(滤波器系数和/或计数器)对所述性能度量的值进行滤波处理,和/或将(滤波前或滤波后)性能度量的值与所述配置参数(门限值)进行比较,根据所述滤波处理结果和/或比较结果生成包含AI模型性能指示信息的所述性能监测结果。
在一些实施例中,配置参数包括该门限值,终端设备可以计算性能度量的值,终端设备将该值与门限值进行比较,根据比较结果生成AI模型性能指示信息,该AI模型性能指示信息用于指示AI模型性能能否满足性能需求,该AI模型性能指示信息可以用1比特信息表示。
例如,在系统处于AI模型激活状态时,该配置参数为第二配置参数,在性能度量为预测误差时,预测误差的值大于针对预测误差的门限值时,表示AI模型性能不满足性能需求,即当前AI模型不再适应通信环境,终端设备生成AI模型性能指示信息(比特值为1)作为性能监测结果指示AI模型性能不满足性能需求,或者说,预测误差的值大于针对预测误差的门限值时触发一次性能监测结果的上报。网络设备在接收到该性能监测结果获知AI模型性能不满足性能需求的信息,因此会判断是否去激活该AI模型。也就是说,预测误差的值小于门限值时,不生成AI模型性能指示信息,不上报性能监测结果。
例如,在系统处于AI模型去激活状态时,该配置参数为第一配置参数,在性能度量为预测误差时,预测误差的值小于针对预测误差的门限值时,表示AI模型性能满足性能需求,即当前AI模型适应通信环境,终端设备生成AI模型性能指示信息(比特值为1)作为性能监测结果指示AI模型性能满足性能需求信息,或者说,预测误差的值小于针对预测误差的门限值时触发一次性能监测结果的上报。网络设备在接收到该性能监测结果获知AI模型性能满足性能需求的信息,因此判断是否激活该AI模型(进入AI模型激活状态)。也就是说,预测误差的值大于门限值时,不生成AI模型性能指示信息,不上报性能监测结果。
例如,在系统处于AI模型激活状态时,该配置参数为第二配置参数,在性能度 量为预测精度时,预测精度的值小于针对预测精度的门限值时,表示AI模型性能不满足性能需求,即当前AI模型不再适应通信环境,终端设备生成AI模型性能指示信息(比特值为1)作为性能监测结果指示AI模型性能不满足性能需求,或者说,预测精度的值小于针对预测精度的门限值时触发一次性能监测结果的上报。网络设备在接收到该性能监测结果获知AI模型性能不满足性能需求的信息,因此判断是否去激活该AI模型。也就是说,预测精度的值大于门限值时,不生成AI模型性能指示信息,不上报性能监测结果。
例如，在系统处于AI模型去激活状态时，该配置参数为第一配置参数，在性能度量为预测精度时，预测精度的值大于针对预测精度的门限值时，表示AI模型性能满足性能需求，即当前AI模型适应通信环境，终端设备生成AI模型性能指示信息（比特值为1）作为性能监测结果指示AI模型性能满足性能需求，或者说，预测精度的值大于针对预测精度的门限值时触发一次性能监测结果的上报。网络设备在接收到该性能监测结果获知AI模型性能满足性能需求的信息，因此判断是否激活该AI模型。也就是说，预测精度的值小于门限值时，不生成AI模型性能指示信息，不上报性能监测结果。
关于终端设备使用第三配置参数的方式可以参考第一配置参数,本申请实施例并不以此作为限制。
在一些实施例中，配置参数包括该门限值和滤波器系数，终端设备可以计算性能度量的值M_n，并根据该滤波器系数对性能度量的值进行滤波处理，将该滤波后的值F_n与门限值进行比较，根据比较结果生成AI模型性能指示信息。
例如,在系统处于AI模型激活状态时,该配置参数为第二配置参数,在性能度量M n为预测误差时,滤波后的预测误差F n的值大于针对预测误差的门限值时,表示AI模型性能不满足性能需求,即当前AI模型不再适应通信环境,终端设备生成AI模型性能指示信息(比特值为1)作为性能监测结果指示AI模型性能不满足性能需求(触发一次性能监测结果的上报),网络设备在接收到该性能监测结果获知AI模型性能不满足通信的性能需求的信息,因此会判断是否去激活该AI模型。也就是说,滤波后的预测误差F n的值小于门限值时,不生成AI模型性能指示信息,不上报性能监测结果。
例如,在系统处于AI模型去激活状态时,该配置参数为第一配置参数,在性能 度量M n为预测误差时,滤波后的预测误差F n的值小于针对预测误差的门限值时,表示AI模型性能满足性能需求,即当前AI模型适应通信环境,终端设备生成AI模型性能指示信息(比特值为1)作为性能监测结果指示AI模型性能满足性能需求(触发一次性能监测结果的上报),网络设备在接收到该性能监测结果获知AI模型性能满足通信的性能需求的信息,因此判断是否激活该AI模型。也就是说,滤波后的预测误差F n的值大于门限值时,不生成AI模型性能指示信息,不上报性能监测结果。
例如,在系统处于AI模型激活状态时,该配置参数为第二配置参数,在性能度量M n为预测精度时,滤波后的预测精度F n的值小于针对预测精度的门限值时,表示AI模型性能不满足性能需求,即当前AI模型不再适应通信环境,终端设备生成AI模型性能指示信息(比特值为1)作为性能监测结果指示AI模型性能不满足性能需求(触发一次性能监测结果的上报),网络设备在接收到该性能监测结果获知AI模型性能不满足性能需求的信息,因此判断是否去激活该AI模型。也就是说,滤波后的预测精度的值大于门限值时,不生成AI模型性能指示信息,不上报性能监测结果。
例如,在系统处于AI模型去激活状态时,该配置参数为第一配置参数,在性能度量M n为预测精度时,滤波后的预测精度F n的值大于针对预测精度的门限值时,表示AI模型性能满足性能需求,即当前AI模型适应通信环境,终端设备生成AI模型性能指示信息(比特值为1)作为性能监测结果指示AI模型性能满足通信的性能需求(触发一次性能监测结果的上报),网络设备在接收到该性能监测结果获知AI模型性能满足性能需求的信息,因此判断是否激活该AI模型。也就是说,滤波后的预测精度的值小于门限值时,不生成AI模型性能指示信息,不上报性能监测结果。
在以上实施例中，在触发一次性能监测结果上报后，将F_n置为0。
关于终端设备使用第三配置参数的方式可以参考第一配置参数,本申请实施例并不以此作为限制。
在一些实施例中,配置参数包括该门限值和计数器,终端设备可以计算性能度量的值,将计算的值与门限值进行比较,使用计数器对比较结果进行统计,根据统计结果生成AI模型性能指示信息。
例如,在系统处于AI模型激活状态时,该配置参数为第二配置参数,在性能度量为预测误差时,预测误差M n的值大于针对预测误差的门限值时,计数器加1,在预测误差的值小于针对预测误差的门限值时,计数器减1,下一次AI模型推理计算 性能度量的值后,重复此操作,直至计数器到达所设置的最高计数值,表示AI模型性能不满足性能需求,即当前AI模型不再适应通信环境,终端设备生成AI模型性能指示信息(比特值为1)作为性能监测结果指示AI模型性能不满足性能需求(触发一次性能监测结果的上报),网络设备在接收到该性能监测结果获知AI模型性能不满足性能需求信息,因此判断是否去激活该AI模型。也就是说,计数器未到达所设置的最高计数值时,不生成AI模型性能指示信息,不上报性能监测结果。
例如,在系统处于AI模型去激活状态时,该配置参数为第一配置参数,在性能度量为预测误差时,预测误差M n的值小于针对预测误差的门限值时,计数器加1,在预测误差的值大于针对预测误差的门限值时,计数器减1,下一次AI模型推理计算性能度量的值后,重复此操作,直至计数器到达所设置的最高计数值,表示AI模型性能满足性能需求,即当前AI模型适应通信环境,终端设备生成AI模型性能指示信息(比特值为1)作为性能监测结果指示AI模型性能满足性能需求(触发一次性能监测结果的上报),网络设备在接收到该性能监测结果获知AI模型性能满足性能需求的信息,因此判断是否激活该AI模型。也就是说,计数器未到达所设置的最高计数值时,不生成AI模型性能指示信息,不上报性能监测结果。
例如,在系统处于AI模型激活状态时,该配置参数为第二配置参数,在性能度量为预测精度时,预测精度M n的值小于针对预测精度的门限值时,计数器加1,在预测精度的值大于针对预测精度的门限值时,计数器减1,下一次AI模型推理计算性能度量的值后,重复此操作,直至计数器到达所设置的最高计数值,表示AI模型性能不满足性能需求,即当前AI模型不再适应通信环境,终端设备生成AI模型性能指示信息(比特值为1)作为性能监测结果指示AI模型性能不满足性能需求(触发一次性能监测结果的上报),网络设备在接收到该性能监测结果获知AI模型性能不满足性能需求的信息,因此判断是否去激活该AI模型。也就是说,计数器未到达所设置的最高计数值时,不生成AI模型性能指示信息,不上报性能监测结果。
例如,在系统处于AI模型去激活状态时,该配置参数为第一配置参数,在性能度量为预测精度时,预测精度M n的值大于针对预测精度的门限值时,计数器加1,在预测精度的值小于针对预测精度的门限值时,计数器减1,下一次AI模型推理计算性能度量的值后,重复此操作,直至计数器到达所设置的最高计数值,表示AI模型性能满足性能需求,即当前AI模型适应通信环境,终端设备生成AI模型性能指示 信息(比特值为1)作为性能监测结果指示AI模型性能满足性能需求(触发一次性能监测结果的上报),网络设备在接收到该性能监测结果获知AI模型性能满足性能需求的信息,因此判断是否激活该AI模型。也就是说,计数器未到达所设置的最高计数值时,不生成AI模型性能指示信息,不上报性能监测结果。
在以上实施例中,在触发一次性能监测结果上报后,将计数器重置。
关于终端设备使用第三配置参数的方式可以参考第一配置参数,本申请实施例并不以此作为限制。
在一些实施例中,配置参数包括该门限值、滤波器系数和计数器,终端设备可以计算性能度量的值,并根据该滤波器系数对性能度量的值进行滤波处理,将该滤波后的值与门限值进行比较,使用计数器对比较结果进行统计,根据统计结果生成AI模型性能指示信息。
例如,在系统处于AI模型激活状态时,该配置参数为第二配置参数,在性能度量为预测误差时,滤波后的预测误差F n的值大于针对预测误差的门限值时,计数器加1,在滤波后的预测误差F n的值小于针对预测误差的门限值时,计数器减1,下一次AI模型推理计算性能度量的值后,重复此操作,直至计数器到达所设置的最高计数值,表示AI模型性能不满足性能需求,即当前AI模型不再适应通信环境,终端设备生成AI模型性能指示信息(比特值为1)作为性能监测结果指示AI模型性能不满足性能需求(触发一次性能监测结果的上报),网络设备在接收到该性能监测结果获知AI模型性能不满足性能需求的信息,因此判断是否去激活该AI模型。也就是说,计数器未到达所设置的最高计数值时,不生成AI模型性能指示信息,不上报性能监测结果。
例如,在系统处于AI模型去激活状态时,该配置参数为第一配置参数,在性能度量为预测误差时,滤波后的预测误差F n的值小于针对预测误差的门限值时,计数器加1,在滤波后的预测误差F n的值大于针对预测误差的门限值时,计数器减1,下一次AI模型推理计算性能度量的值后,重复此操作,直至计数器到达所设置的最高计数值,表示AI模型性能满足性能需求,即当前AI模型适应通信环境,终端设备生成AI模型性能指示信息(比特值为1)作为性能监测结果指示AI模型性能满足性能需求(触发一次性能监测结果的上报),网络设备在接收到该性能监测结果获知AI模型性能满足性能需求的信息,因此判断是否激活该AI模型。也就是说,计数器未 到达所设置的最高计数值时,不生成AI模型性能指示信息,不上报性能监测结果。
例如,在系统处于AI模型激活状态时,该配置参数为第二配置参数,在性能度量为预测精度时,滤波后的预测精度F n的值小于针对预测精度的门限值时,计数器加1,在滤波后的预测精度F n的值大于针对预测精度的门限值时,计数器减1,下一次AI模型推理计算性能度量的值后,重复此操作,直至计数器到达所设置的最高计数值,表示AI模型性能不满足性能需求,即当前AI模型不再适应通信环境,终端设备生成AI模型性能指示信息(比特值为1)作为性能监测结果指示AI模型性能不满足性能需求(触发一次性能监测结果的上报),网络设备在接收到该性能监测结果获知AI模型性能不满足性能需求的信息,因此判断是否去激活该AI模型。也就是说,计数器未到达所设置的最高计数值时,不生成AI模型性能指示信息,不上报性能监测结果。
例如，在系统处于AI模型去激活状态时，该配置参数为第一配置参数，在性能度量为预测精度时，滤波后的预测精度F_n的值大于针对预测精度的门限值时，计数器加1，在滤波后的预测精度F_n的值小于针对预测精度的门限值时，计数器减1，下一次AI模型推理计算性能度量的值后，重复此操作，直至计数器到达所设置的最高计数值，表示AI模型性能满足性能需求，即当前AI模型适应通信环境，终端设备生成AI模型性能指示信息（比特值为1）作为性能监测结果指示AI模型性能满足性能需求（触发一次性能监测结果的上报），网络设备在接收到该性能监测结果获知AI模型性能满足性能需求的信息，因此判断是否激活该AI模型。也就是说，计数器未到达所设置的最高计数值时，不生成AI模型性能指示信息，不上报性能监测结果。
在以上实施例中，在触发一次性能监测结果上报后，将计数器重置，将F_n置为0。
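以上“滤波+门限+计数器”的组合流程可以整体示意如下。示例性草图：假设AI模型处于激活状态、性能度量为预测误差，触发一次上报后同时复位计数器并将滤波值置0；类名与参数取值均为本文假设，并非本申请的实际实现。

```python
class PerfMonitor:
    """对每个推理周期的性能度量做滤波、门限比较与计数统计。"""

    def __init__(self, threshold, a, max_count):
        self.threshold, self.a, self.max_count = threshold, a, max_count
        self.f = 0.0        # 滤波后的性能度量 F_n
        self.count = 0      # 计数器

    def update(self, metric):
        """输入一次推理的度量值；若触发上报则返回True。"""
        self.f = (1 - self.a) * self.f + self.a * metric
        if self.f > self.threshold:
            self.count += 1
        else:
            self.count = max(0, self.count - 1)
        if self.count >= self.max_count:
            self.f, self.count = 0.0, 0   # 触发上报后复位
            return True
        return False

mon = PerfMonitor(threshold=2.0, a=0.5, max_count=2)
fired = [mon.update(m) for m in [6.0, 6.0, 6.0]]
print(fired)   # -> [False, True, False]
```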
关于终端设备使用第三配置参数的方式可以参考第一配置参数,本申请实施例并不以此作为限制。
(二)网络设备判断AI模型性能能否满足性能需求
在一些实施例中,终端设备可以计算AI模型性能度量的值,根据所述配置参数(滤波器系数)对所述性能度量的值进行滤波处理,或者说,终端设备可以根据所述配置参数(滤波器系数)计算所述性能度量的值,将计算的性能度量的值(例如滤波后的)包含在性能监测结果中上报给网络设备。
在一些实施例中，配置参数包括滤波器系数，终端设备可以计算性能度量的值M_n，根据滤波器系数对性能度量的值M_n进行滤波处理，得到滤波后的性能度量的值F_n，将该F_n作为性能监测结果上报给网络设备。网络设备在收到性能监测结果后，会结合门限值（可选的，还可以包括计数器）确定AI模型性能能否满足性能需求，确定当前AI模型是否适合当前通信环境，以判断是否激活AI模型或去激活AI模型或选择AI模型或者切换AI模型，具体确定AI模型性能能否满足性能需求的方式与终端设备侧相同，此处不再重复。
如前所述,以上第一配置参数、第二配置参数和第三配置参数中的门限值相同或不同,第一配置参数、第二配置参数和第三配置参数中的滤波器系数相同或不同,第一配置参数、第二配置参数和第三配置参数中的计数器相同或不同。
另外,以上以一个AI模型的性能监测结果为例进行说明,在终端设备部署多个AI模型时,多个AI模型中各个AI模型的性能监测结果同样适用前述方法,此处不再重复。
在一些实施例中,在终端设备部署多个AI模型时,例如,网络设备发送用于监测多个AI模型性能的配置参数时,该性能监测结果还包括AI模型的标识,以区分不同AI模型的性能监测结果。
在一些实施例中,所述性能监测结果由UCI承载,例如该AI模型性能指示信息,和/或AI模型性能度量的值,和/或AI模型的标识可以使用预定比特数量的比特序列来表示,可以在现有UCI中新增信息元来承载该性能监测结果,也可以在现有UCI中现有信息元来承载该性能监测结果,也可以设计新的UCI来承载该性能监测结果,本申请实施例并不以此作为限制。
以上各个实施例仅对本申请实施例进行了示例性说明,但本申请不限于此,还可以在以上各个实施例的基础上进行适当的变型。例如,可以单独使用上述各个实施例,也可以将以上各个实施例中的一种或多种结合起来。
由此，网络设备向终端设备发送用于监测AI模型性能的配置参数，终端设备能够根据该配置参数生成性能监测结果，根据该性能监测结果能够有效地监测AI模型的性能，进而有助于选择符合当前通信环境的波束管理方式，减少系统负荷和延迟。
图6是本申请实施例的信息收发方法示意图,如图6所示,该方法包括:
601,网络设备向终端设备发送用于监测AI模型性能的配置参数;
602,终端设备使用AI模型进行波束预测或使用传统的方法进行波束管理,得到波束测量结果;
603,终端设备根据波束测量结果计算性能度量的值;
604,终端设备根据配置参数生成性能监测结果;
605,终端设备向网络设备上报性能监测结果;
606,网络设备根据该性能监测结果激活AI模型或去激活AI模型或选择AI模型或者切换AI模型。
在一些实施例中,601-606的实施方式可以参考301-302以及501-502,重复之处不再赘述。
第三方面的实施例
本申请实施例提供一种信息收发装置。该装置例如可以是终端设备,也可以是配置于终端设备的某个或某些部件或者组件,与第二方面的实施例相同的内容不再赘述。
图7是本申请实施例的信息收发装置的一示意图。如图7所示,信息收发装置700包括:
第二接收单元701,其接收网络设备发送的用于监测一个或多个AI模型性能的配置参数;
第二发送单元702,其向所述网络设备发送一个或多个AI模型的性能监测结果。
在一些实施例中,性能监测结果包括AI模型性能指示信息,和/或AI模型性能度量的值,和/或AI模型的标识。
在一些实施例中,所述装置还包括:
第一计算单元(未图示),其用于计算AI模型性能度量的值。
在一些实施例中,所述第一计算单元根据所述配置参数计算所述性能度量的值,并且所述性能监测结果包括所述性能度量的值。
在一些实施例中,所述装置还包括:
第二处理单元(未图示),其根据所述配置参数(滤波器系数和/或计数器)对所述性能度量的值进行滤波处理,和/或将性能度量的值与所述配置参数(门限值)进行比较,根据所述滤波处理结果和/或比较结果生成包含AI模型性能指示信息的所述 性能监测结果。
在一些实施例中，所述第二处理单元根据配置的性能度量的滤波器系数对所述性能度量的值进行处理，生成滤波后的性能度量的值，将所述滤波后的性能度量的值与配置的性能度量的门限值进行比较，在所述滤波后的性能度量的值大于配置的性能度量的门限值时，生成包含AI模型性能指示信息的所述性能监测结果；或者所述滤波后的性能度量的值小于配置的性能度量的门限值时，生成包含AI模型性能指示信息的所述性能监测结果；或者，
所述第二处理单元将所述性能度量的值与配置的性能度量的门限值进行比较,在所述性能度量的值大于配置的性能度量的门限值时,统计性能度量的监测结果的计数器加1,在所述性能度量的值小于配置的性能度量的门限值时,统计性能度量的监测结果的计数器减1,在所述计数器的值达到最高计数值时生成包含AI模型性能指示信息的所述性能监测结果;或者在所述性能度量的值小于配置的性能度量的门限值时,统计性能度量的监测结果的计数器加1,在所述性能度量的值大于配置的性能度量的门限值时,统计性能度量的监测结果的计数器减1,在所述计数器的值达到最高计数值时生成包含AI模型性能指示信息的所述性能监测结果。
在一些实施例中,第二接收单元701和第二发送单元702的实施方式与501-502对应,此处不再赘述。
以上各个实施例仅对本申请实施例进行了示例性说明,但本申请不限于此,还可以在以上各个实施例的基础上进行适当的变型。例如,可以单独使用上述各个实施例,也可以将以上各个实施例中的一种或多种结合起来。
值得注意的是,以上仅对与本申请相关的各部件或模块进行了说明,但本申请不限于此。信息收发装置700还可以包括其他部件或者模块,关于这些部件或者模块的具体内容,可以参考相关技术。
此外,为了简单起见,图7中仅示例性示出了各个部件或模块之间的连接关系或信号走向,但是本领域技术人员应该清楚的是,可以采用总线连接等各种相关技术。上述各个部件或模块可以通过例如处理器、存储器、发射机、接收机等硬件设施来实现;本申请实施并不对此进行限制。
由此，网络设备向终端设备发送用于监测AI模型性能的配置参数，终端设备能够根据该配置参数生成性能监测结果，根据该性能监测结果能够有效地监测AI模型的性能，进而有助于选择符合当前通信环境的波束管理方式，减少系统负荷和延迟。
第四方面的实施例
本申请实施例提供一种信息收发装置。该装置例如可以是网络设备,也可以是配置于网络设备的某个或某些部件或者组件,与第一方面的实施例相同的内容不再赘述。
图8是本申请实施例的信息收发装置的一示意图。如图8所示,信息收发装置800包括:
第一发送单元801,其向终端设备发送用于监测一个或多个AI模型性能的配置参数;
第一接收单元802,其接收所述终端设备发送的一个或多个AI模型的性能监测结果。
在一些实施例中,所述配置参数包括性能度量的门限值和/或性能度量的滤波器系数和/或统计性能度量的监测结果的计数器。
在一些实施例中,所述性能度量包括波束测量结果的预测误差,和/或波束测量结果的预测精度,和/或吞吐量,和/或误帧率。
在一些实施例中，所述预测误差包括AI模型输出的最优波束(对)对应的预测测量结果和所述最优波束(对)的实际测量结果的差值，或者AI模型输出的第一最优波束(对)对应的预测测量结果和实际测量的第二最优波束(对)对应的测量结果的差值，或者AI模型输出的多个波束(对)对应的预测测量结果与所述多个波束(对)的实际测量结果的差值的平均值，或者AI模型输出的多个第一波束(对)对应的预测测量结果与实际测量的多个第二波束(对)的测量结果的差值的平均值。
在一些实施例中,所述预测精度包括AI模型输出的第一最优波束(对)和实际测量的第二最优波束(对)是否相同的概率,或者AI模型输出的第一数量个第一波束(对)中包含实际测量的最优波束(对)的概率,或者AI模型输出的最优波束(对)包含于实际测量的第二数量个第二波束(对)的概率。
在一些实施例中,所述测量结果包括L1-RSRP或SINR。
在一些实施例中,所述实际测量的测量结果根据AI模型的训练数据确定或者根据未应用AI模型时实际测量的测量结果确定。
在一些实施例中,所述装置还包括:
第一处理单元(未图示),其根据所述性能监测结果激活AI模型或去激活AI模型或选择AI模型或者切换AI模型。
在一些实施例中,所述配置参数包括用于判断激活AI模型的第一配置参数,和/或用于判断去激活AI模型的第二配置参数,和/或用于选择AI模型和/或切换AI模型的第三配置参数。
在一些实施例中,所述第一配置参数、所述第二配置参数和所述第三配置参数相同或不同。
在一些实施例中,用于监测不同AI模型性能的配置参数相同或不同。
在一些实施例中,在所述第一发送单元发送用于监测多个AI模型性能的配置参数时,所述第一发送单元还用于发送与所述配置参数对应的AI模型的标识,所述性能监测结果还包括对应的AI模型的标识。
在一些实施例中,所述配置参数由RRC或MAC CE或DCI承载,所述性能监测结果由UCI承载。
在一些实施例中,所述AI模型部署在所述终端设备侧。
在一些实施例中，第一发送单元801和第一接收单元802的实施方式与301-302对应，此处不再赘述。
以上各个实施例仅对本申请实施例进行了示例性说明,但本申请不限于此,还可以在以上各个实施例的基础上进行适当的变型。例如,可以单独使用上述各个实施例,也可以将以上各个实施例中的一种或多种结合起来。
值得注意的是,以上仅对与本申请相关的各部件或模块进行了说明,但本申请不限于此。信息收发装置800还可以包括其他部件或者模块,关于这些部件或者模块的具体内容,可以参考相关技术。
此外,为了简单起见,图8中仅示例性示出了各个部件或模块之间的连接关系或信号走向,但是本领域技术人员应该清楚的是,可以采用总线连接等各种相关技术。上述各个部件或模块可以通过例如处理器、存储器、发射机、接收机等硬件设施来实现;本申请实施并不对此进行限制。
由此，网络设备向终端设备发送用于监测AI模型性能的配置参数，终端设备能够根据该配置参数生成性能监测结果，根据该性能监测结果能够有效地监测AI模型的性能，进而有助于选择符合当前通信环境的波束管理方式，减少系统负荷和延迟。
第五方面的实施例
本申请实施例还提供一种通信系统,可以参考图1,与第一至四方面的实施例相同的内容不再赘述。
在一些实施例中,通信系统100至少可以包括:网络设备101和/或终端设备102,网络设备向终端设备发送用于监测一个或多个AI模型性能的配置参数;网络设备接收所述终端设备发送的一个或多个AI模型的性能监测结果。
在一些实施例中,上述配置参数和性能监测结果的实施方式可以参考第一方面和第二方面的实施例,此处不再赘述。
本申请实施例还提供一种网络设备,例如可以是基站,但本申请不限于此,还可以是其他的网络设备。
图9是本申请实施例的网络设备的构成示意图。如图9所示,网络设备900可以包括:处理器910(例如中央处理器CPU)和存储器920;存储器920耦合到处理器910。其中该存储器920可存储各种数据;此外还存储信息处理的程序930,并且在处理器910的控制下执行该程序930。
例如,处理器910可以被配置为执行程序而实现如第一方面的实施例所述的信息收发方法。例如处理器910可以被配置为进行如下的控制:向终端设备发送用于监测一个或多个AI模型性能的配置参数;接收所述终端设备发送的一个或多个AI模型的性能监测结果。
此外,如图9所示,网络设备900还可以包括:收发机940和天线950等;其中,上述部件的功能与现有技术类似,此处不再赘述。值得注意的是,网络设备900也并不是必须要包括图9中所示的所有部件;此外,网络设备900还可以包括图9中没有示出的部件,可以参考现有技术。
本申请实施例还提供一种终端设备,但本申请不限于此,还可以是其他的设备。
图10是本申请实施例的终端设备的示意图。如图10所示,该终端设备1000可以包括处理器1010和存储器1020;存储器1020存储有数据和程序,并耦合到处理器1010。值得注意的是,该图是示例性的;还可以使用其他类型的结构,来补充或代替该结构,以实现电信功能或其他功能。
例如，处理器1010可以被配置为执行程序而实现如第二方面的实施例所述的信息收发方法。例如处理器1010可以被配置为进行如下的控制：接收网络设备发送的用于监测一个或多个AI模型性能的配置参数；向所述网络设备发送一个或多个AI模型的性能监测结果。
如图10所示,该终端设备1000还可以包括:通信模块1030、输入单元1040、显示器1050、电源1060。其中,上述部件的功能与现有技术类似,此处不再赘述。值得注意的是,终端设备1000也并不是必须要包括图10中所示的所有部件,上述部件并不是必需的;此外,终端设备1000还可以包括图10中没有示出的部件,可以参考现有技术。
本申请实施例还提供一种计算机程序,其中当在终端设备中执行所述程序时,所述程序使得所述终端设备执行第二方面的实施例所述的信息收发方法。
本申请实施例还提供一种存储有计算机程序的存储介质,其中所述计算机程序使得终端设备执行第二方面的实施例所述的信息收发方法。
本申请实施例还提供一种计算机程序,其中当在网络设备中执行所述程序时,所述程序使得所述网络设备执行第一方面的实施例所述的信息收发方法。
本申请实施例还提供一种存储有计算机程序的存储介质,其中所述计算机程序使得网络设备执行第一方面的实施例所述的信息收发方法。
本申请以上的装置和方法可以由硬件实现,也可以由硬件结合软件实现。本申请涉及这样的计算机可读程序,当该程序被逻辑部件所执行时,能够使该逻辑部件实现上文所述的装置或构成部件,或使该逻辑部件实现上文所述的各种方法或步骤。本申请还涉及用于存储以上程序的存储介质,如硬盘、磁盘、光盘、DVD、flash存储器等。
结合本申请实施例描述的方法/装置可直接体现为硬件、由处理器执行的软件模块或二者组合。例如,图中所示的功能框图中的一个或多个和/或功能框图的一个或多个组合,既可以对应于计算机程序流程的各个软件模块,亦可以对应于各个硬件模块。这些软件模块,可以分别对应于图中所示的各个步骤。这些硬件模块例如可利用现场可编程门阵列(FPGA)将这些软件模块固化而实现。
软件模块可以位于RAM存储器、闪存、ROM存储器、EPROM存储器、EEPROM存储器、寄存器、硬盘、移动磁盘、CD-ROM或者本领域已知的任何其它形式的存储介质。可以将一种存储介质耦接至处理器,从而使处理器能够从该存储介质读取信息,且可向该存储介质写入信息;或者该存储介质可以是处理器的组成部分。处理器 和存储介质可以位于ASIC中。该软件模块可以存储在移动终端的存储器中,也可以存储在可插入移动终端的存储卡中。例如,若设备(如移动终端)采用的是较大容量的MEGA-SIM卡或者大容量的闪存装置,则该软件模块可存储在该MEGA-SIM卡或者大容量的闪存装置中。
针对附图中描述的功能方框中的一个或多个和/或功能方框的一个或多个组合,可以实现为用于执行本申请所描述功能的通用处理器、数字信号处理器(DSP)、专用集成电路(ASIC)、现场可编程门阵列(FPGA)或者其它可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件或者其任意适当组合。针对附图描述的功能方框中的一个或多个和/或功能方框的一个或多个组合,还可以实现为计算设备的组合,例如,DSP和微处理器的组合、多个微处理器、与DSP通信结合的一个或多个微处理器或者任何其它这种配置。
以上结合具体的实施方式对本申请进行了描述,但本领域技术人员应该清楚,这些描述都是示例性的,并不是对本申请保护范围的限制。本领域技术人员可以根据本申请的精神和原理对本申请做出各种变型和修改,这些变型和修改也在本申请的范围内。
关于包括以上实施例的实施方式,还公开下述的附记:
1.一种信息收发方法,应用于网络设备,其特征在于,所述方法包括:
所述网络设备向终端设备发送用于监测一个或多个AI模型性能的配置参数;
所述网络设备接收所述终端设备发送的一个或多个AI模型的性能监测结果。
2.根据附记1所述的方法,其中,所述配置参数包括性能度量的门限值和/或性能度量的滤波器系数和/或统计性能度量的监测结果的计数器。
3.根据附记2所述的方法,其中,所述性能度量包括波束测量结果的预测误差,和/或波束测量结果的预测精度,和/或吞吐量,和/或误帧率。
4.根据附记3所述的方法，其中，所述预测误差包括AI模型输出的最优波束(对)对应的预测测量结果和所述最优波束(对)的实际测量结果的差值，或者AI模型输出的第一最优波束(对)对应的预测测量结果和实际测量的第二最优波束(对)对应的测量结果的差值，或者AI模型输出的多个波束(对)对应的预测测量结果与所述多个波束(对)的实际测量结果的差值的平均值，或者AI模型输出的多个第一波束(对)对应的预测测量结果与实际测量的多个第二波束(对)的测量结果的差值的平均值。
5.根据附记3所述的方法,其中,所述预测精度包括AI模型输出的第一最优波束(对)和实际测量的第二最优波束(对)是否相同的概率,或者AI模型输出的第一数量个第一波束(对)中包含实际测量的最优波束(对)的概率,或者AI模型输出的最优波束(对)包含于实际测量的第二数量个第二波束(对)的概率。
6.根据附记4所述的方法,其中,所述测量结果包括L1-RSRP或SINR。
7.根据附记4所述的方法,其中,所述实际测量的测量结果根据AI模型的训练数据确定或者根据未应用AI模型时实际测量的测量结果确定。
8.根据附记1所述的方法,其中,所述方法还包括:
所述网络设备根据所述性能监测结果激活AI模型或去激活AI模型或选择AI模型或者切换AI模型。
9.根据附记8所述的方法,其中,所述配置参数包括用于判断激活AI模型的第一配置参数,和/或用于判断去激活AI模型的第二配置参数,和/或用于选择AI模型和/或切换AI模型的第三配置参数。
10.根据附记9所述的方法,其中,所述第一配置参数、所述第二配置参数和所述第三配置参数相同或不同。
11.根据附记1所述的方法,其中,用于监测不同AI模型性能的配置参数相同或不同。
12.根据附记1所述的方法,其中,在所述网络设备发送用于监测多个AI模型性能的配置参数时,所述网络设备还发送与所述配置参数对应的AI模型的标识,所述性能监测结果还包括对应的AI模型的标识。
13.根据附记1所述的方法,其中,所述配置参数由RRC或MAC CE或DCI承载,所述性能监测结果由UCI承载。
14.根据附记1所述的方法,其中,所述AI模型部署在所述终端设备侧。
15.一种信息收发方法,应用于终端设备,其特征在于,所述方法包括:
所述终端设备接收网络设备发送的用于监测一个或多个AI模型性能的配置参数;
所述终端设备向所述网络设备发送一个或多个AI模型的性能监测结果。
16.根据附记15所述的方法,其中,所述性能监测结果包括AI模型性能指示信息,和/或AI模型性能度量的值,和/或AI模型的标识。
17.根据附记15所述的方法,其中,所述方法还包括:
所述终端设备计算AI模型性能度量的值。
18.根据附记17所述的方法,其中,所述终端设备根据所述配置参数计算所述性能度量的值,并且所述性能监测结果包括所述性能度量的值。
19.根据附记17所述的方法,其中,所述方法还包括:
所述终端设备根据所述配置参数(滤波器系数和/或计数器)对所述性能度量的值进行滤波处理,和/或将性能度量的值与所述配置参数(门限值)进行比较,根据所述滤波处理结果和/或比较结果生成包含AI模型性能指示信息的所述性能监测结果。
20.根据附记19所述的方法，其中，所述终端设备根据配置的性能度量的滤波器系数对所述性能度量的值进行处理，生成滤波后的性能度量的值，将所述滤波后的性能度量的值与配置的性能度量的门限值进行比较，在所述滤波后的性能度量的值大于配置的性能度量的门限值时，生成包含AI模型性能指示信息的所述性能监测结果；或者所述滤波后的性能度量的值小于配置的性能度量的门限值时，生成包含AI模型性能指示信息的所述性能监测结果；或者，
所述终端设备将所述性能度量的值与配置的性能度量的门限值进行比较,在所述性能度量的值大于配置的性能度量的门限值时,统计性能度量的监测结果的计数器加1,在所述性能度量的值小于配置的性能度量的门限值时,统计性能度量的监测结果的计数器减1,在所述计数器的值达到最高计数值时生成包含AI模型性能指示信息的所述性能监测结果;或者在所述性能度量的值小于配置的性能度量的门限值时,统计性能度量的监测结果的计数器加1,在所述性能度量的值大于配置的性能度量的门限值时,统计性能度量的监测结果的计数器减1,在所述计数器的值达到最高计数值时生成包含AI模型性能指示信息的所述性能监测结果。
21.一种网络设备,包括存储器和处理器,所述存储器存储有计算机程序,所述处理器被配置为执行所述计算机程序而实现如附记1至14任一项所述的方法。
22.一种终端设备,包括存储器和处理器,所述存储器存储有计算机程序,所述处理器被配置为执行所述计算机程序而实现如附记15至20任一项所述的方法。
23.一种通信系统,包括附记21所述的网络设备和/或附记22所述的终端设备。

Claims (20)

  1. 一种信息收发装置,应用于网络设备,其特征在于,所述装置包括:
    第一发送单元,其向终端设备发送用于监测一个或多个AI模型性能的配置参数;
    第一接收单元,其接收所述终端设备发送的一个或多个AI模型的性能监测结果。
  2. 根据权利要求1所述的装置,其中,所述配置参数包括性能度量的门限值和/或性能度量的滤波器系数和/或统计性能度量的监测结果的计数器。
  3. 根据权利要求2所述的装置,其中,所述性能度量包括波束测量结果的预测误差,和/或波束测量结果的预测精度,和/或吞吐量,和/或误帧率。
  4. 根据权利要求3所述的装置，其中，所述预测误差包括AI模型输出的最优波束(对)对应的预测测量结果和所述最优波束(对)的实际测量结果的差值，或者AI模型输出的第一最优波束(对)对应的预测测量结果和实际测量的第二最优波束(对)对应的测量结果的差值，或者AI模型输出的多个波束(对)对应的预测测量结果与所述多个波束(对)的实际测量结果的差值的平均值，或者AI模型输出的多个第一波束(对)对应的预测测量结果与实际测量的多个第二波束(对)的测量结果的差值的平均值。
  5. 根据权利要求3所述的装置,其中,所述预测精度包括AI模型输出的第一最优波束(对)和实际测量的第二最优波束(对)是否相同的概率,或者AI模型输出的第一数量个第一波束(对)中包含实际测量的最优波束(对)的概率,或者AI模型输出的最优波束(对)包含于实际测量的第二数量个第二波束(对)的概率。
  6. 根据权利要求4所述的装置,其中,所述测量结果包括L1-RSRP或SINR。
  7. 根据权利要求4所述的装置,其中,所述实际测量的测量结果根据AI模型的训练数据确定或者根据未应用AI模型时实际测量的测量结果确定。
  8. 根据权利要求1所述的装置,其中,所述装置还包括:
    第一处理单元,其根据所述性能监测结果激活AI模型或去激活AI模型或选择AI模型或者切换AI模型。
  9. 根据权利要求8所述的装置,其中,所述配置参数包括用于判断激活AI模型的第一配置参数,和/或用于判断去激活AI模型的第二配置参数,和/或用于选择AI模型和/或切换AI模型的第三配置参数。
  10. 根据权利要求9所述的装置,其中,所述第一配置参数、所述第二配置参数和所述第三配置参数相同或不同。
  11. 根据权利要求1所述的装置,其中,用于监测不同AI模型性能的配置参数相同或不同。
  12. 根据权利要求1所述的装置,其中,在所述第一发送单元发送用于监测多个AI模型性能的配置参数时,所述第一发送单元还用于发送与所述配置参数对应的AI模型的标识,所述性能监测结果还包括对应的AI模型的标识。
  13. 根据权利要求1所述的装置,其中,所述配置参数由RRC或MAC CE或DCI承载,所述性能监测结果由UCI承载。
  14. 根据权利要求1所述的装置,其中,所述AI模型部署在所述终端设备侧。
  15. 一种信息收发装置,应用于终端设备,其特征在于,所述装置包括:
    第二接收单元,其接收网络设备发送的用于监测一个或多个AI模型性能的配置参数;
    第二发送单元,其向所述网络设备发送一个或多个AI模型的性能监测结果。
  16. 根据权利要求15所述的装置,其中,所述性能监测结果包括AI模型性能指示信息,和/或AI模型性能度量的值,和/或AI模型的标识。
  17. 根据权利要求15所述的装置,其中,所述装置还包括:
    第一计算单元,其用于计算AI模型性能度量的值。
  18. 根据权利要求17所述的装置,其中,所述第一计算单元根据所述配置参数计算所述性能度量的值,并且所述性能监测结果包括所述性能度量的值。
  19. 根据权利要求17所述的装置,其中,所述装置还包括:
    第二处理单元,其根据所述配置参数(滤波器系数和/或计数器)对所述性能度量的值进行滤波处理,和/或将性能度量的值与所述配置参数(门限值)进行比较,根据所述滤波处理结果和/或比较结果生成包含AI模型性能指示信息的所述性能监测结果。
  20. 根据权利要求19所述的装置，其中，所述第二处理单元根据配置的性能度量的滤波器系数对所述性能度量的值进行处理，生成滤波后的性能度量的值，将所述滤波后的性能度量的值与配置的性能度量的门限值进行比较，在所述滤波后的性能度量的值大于配置的性能度量的门限值时，生成包含AI模型性能指示信息的所述性能监测结果；或者所述滤波后的性能度量的值小于配置的性能度量的门限值时，生成包含AI模型性能指示信息的所述性能监测结果；或者，
    所述第二处理单元将所述性能度量的值与配置的性能度量的门限值进行比较,在所述性能度量的值大于配置的性能度量的门限值时,统计性能度量的监测结果的计数器加1,在所述性能度量的值小于配置的性能度量的门限值时,统计性能度量的监测结果的计数器减1,在所述计数器的值达到最高计数值时生成包含AI模型性能指示信息的所述性能监测结果;或者在所述性能度量的值小于配置的性能度量的门限值时,统计性能度量的监测结果的计数器加1,在所述性能度量的值大于配置的性能度量的门限值时,统计性能度量的监测结果的计数器减1,在所述计数器的值达到最高计数值时生成包含AI模型性能指示信息的所述性能监测结果。
PCT/CN2022/129856 2022-11-04 2022-11-04 信息收发方法与装置 WO2024092716A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/129856 WO2024092716A1 (zh) 2022-11-04 2022-11-04 信息收发方法与装置

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/129856 WO2024092716A1 (zh) 2022-11-04 2022-11-04 信息收发方法与装置

Publications (1)

Publication Number Publication Date
WO2024092716A1 true WO2024092716A1 (zh) 2024-05-10

Family

ID=90929412

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/129856 WO2024092716A1 (zh) 2022-11-04 2022-11-04 信息收发方法与装置

Country Status (1)

Country Link
WO (1) WO2024092716A1 (zh)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110443284A (zh) * 2019-07-15 2019-11-12 超参数科技(深圳)有限公司 Ai模型的训练方法、调用方法、服务器及可读存储介质
CN112508044A (zh) * 2019-09-16 2021-03-16 华为技术有限公司 人工智能ai模型的评估方法、系统及设备
WO2022047320A1 (en) * 2020-08-31 2022-03-03 Intel Corporation Ran node and ue configured for beam failure detection reporting to support to ai and ml based beam management


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HUAWEI, HISILICON: "Discussion on general aspects of AI/ML framework", 3GPP DRAFT; R1-2203139, 3RD GENERATION PARTNERSHIP PROJECT (3GPP), MOBILE COMPETENCE CENTRE ; 650, ROUTE DES LUCIOLES ; F-06921 SOPHIA-ANTIPOLIS CEDEX ; FRANCE, vol. RAN WG1, no. e-Meeting; 20220509 - 20220520, 29 April 2022 (2022-04-29), Mobile Competence Centre ; 650, route des Lucioles ; F-06921 Sophia-Antipolis Cedex ; France, XP052143957 *
VIVO: "Consideration of use case specific aspects", 3GPP DRAFT; R2-2209565, 3RD GENERATION PARTNERSHIP PROJECT (3GPP), MOBILE COMPETENCE CENTRE ; 650, ROUTE DES LUCIOLES ; F-06921 SOPHIA-ANTIPOLIS CEDEX ; FRANCE, vol. RAN WG2, no. electronic; 20221010 - 20221019, 30 September 2022 (2022-09-30), Mobile Competence Centre ; 650, route des Lucioles ; F-06921 Sophia-Antipolis Cedex ; France, XP052262894 *
ZTE CORPORATION: "Discussion on potential enhancements for AI/ML based beam management", 3GPP DRAFT; R1-2203251, 3RD GENERATION PARTNERSHIP PROJECT (3GPP), MOBILE COMPETENCE CENTRE ; 650, ROUTE DES LUCIOLES ; F-06921 SOPHIA-ANTIPOLIS CEDEX ; FRANCE, vol. RAN WG1, no. e-Meeting; 20220509 - 20220520, 29 April 2022 (2022-04-29), Mobile Competence Centre ; 650, route des Lucioles ; F-06921 Sophia-Antipolis Cedex ; France, XP052152893 *


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22964025

Country of ref document: EP

Kind code of ref document: A1