WO2024104126A1 - Method, apparatus and communication device for updating an AI network model - Google Patents

Method, apparatus and communication device for updating an AI network model

Info

Publication number
WO2024104126A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
quantization
terminal
side device
network model
Application number
PCT/CN2023/128033
Other languages
English (en)
French (fr)
Inventor
Ren Qianyao
Xie Tian
Original Assignee
Vivo Mobile Communication Co., Ltd.
Application filed by Vivo Mobile Communication Co., Ltd.
Publication of WO2024104126A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08 Configuration management of networks or network elements
    • H04L41/0803 Configuration setting
    • H04L41/0813 Configuration setting characterised by the conditions triggering a change of settings
    • H04L41/082 Configuration setting characterised by the conditions triggering a change of settings, the condition being updates or upgrades of network functionality
    • H04L41/14 Network analysis or design
    • H04L41/145 Network analysis or design involving simulating, designing, planning or modelling of a network
    • H04L5/00 Arrangements affording multiple use of the transmission path
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W16/00 Network planning, e.g. coverage or traffic planning tools; Network deployment, e.g. resource partitioning or cells structures
    • H04W16/22 Traffic simulation tools or models

Definitions

  • the present application belongs to the field of communication technology, and specifically relates to a method, device and communication equipment for updating an AI network model.
  • the AI network model includes a coding AI network model applied to the terminal and a decoding AI network model applied to the base station, wherein the coding AI network model needs to match the decoding AI network model.
  • the coding AI network model and the decoding AI network model can be trained by online joint training, that is, the terminal side calculates the forward information of the coding AI network model and sends it to the base station, and the base station calculates the reverse gradient information and sends it to the terminal, so that the terminal and the base station update the coding AI network model and the decoding AI network model according to the forward information and the reverse gradient information, respectively, and iterate in this way until the training obtains mutually matching coding AI network models and decoding AI network models.
  • the encoding AI network model can be divided into an encoding part and a quantization part.
  • the quantization methods include scalar quantization and vector quantization.
  • the terminal obtains forward information in floating point format based on the encoding part, converts the forward information into a bit stream through the quantization part, and then transmits the forward information in bit stream form to the base station;
  • the decoding AI network model can be divided into a dequantization part and a decoding part.
  • when the base station receives the forward information in bit stream form, it converts it into forward information in floating point format based on the dequantization part, calculates the reverse gradient information in floating point format from the forward information through the decoding part, and then converts the reverse gradient information in floating point format into reverse gradient information in bit stream format before transmitting it to the terminal.
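The floating-point-to-bit-stream conversion described above can be illustrated with a minimal uniform scalar quantizer and its inverse. This is a sketch under invented assumptions (2-bit codewords, a fixed value range of [-1, 1]); the patent does not fix a particular scheme:

```python
import numpy as np

def quantize(x, n_bits=2, lo=-1.0, hi=1.0):
    """Uniform scalar quantization: floating-point values -> a bit stream."""
    levels = 2 ** n_bits
    # Clip to the quantization range and map each value to a level index.
    scaled = (np.clip(x, lo, hi) - lo) / (hi - lo) * (levels - 1)
    idx = np.round(scaled).astype(int)
    # Serialize each level index as an n_bits-wide binary string.
    return "".join(format(i, f"0{n_bits}b") for i in idx)

def dequantize(bits, n_bits=2, lo=-1.0, hi=1.0):
    """Inverse mapping: bit stream -> floating-point values."""
    levels = 2 ** n_bits
    idx = [int(bits[k:k + n_bits], 2) for k in range(0, len(bits), n_bits)]
    return np.array(idx) / (levels - 1) * (hi - lo) + lo
```

Note that with 2-bit codewords the value 0.0 dequantizes to 1/3 rather than 0; this quantization error is exactly what the joint training has to tolerate, and why both sides must agree on the quantization information.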
  • the decoding AI network model of the base station needs to be jointly trained with the encoding AI network models of multiple terminals.
  • different terminals can use different quantization methods, or the base station may not know the quantization method of each terminal. This will make it impossible for the base station and the terminal to perform online joint training.
  • the embodiments of the present application provide a method, apparatus, and communication device for updating an AI network model, so that a terminal quantizes forward information in a joint training encoding and decoding AI network model process according to a specific quantization method, and transmits the quantized forward information to a base station, so that the base station can accurately dequantize the forward information and calculate the corresponding reverse gradient information accordingly.
  • the information is then transmitted to the terminal, allowing online joint training to proceed smoothly and improving the matching degree between the trained encoding AI network model and the decoding AI network model.
  • a method for updating an AI network model comprising:
  • the terminal performs quantization processing on the first information according to the first quantization information to obtain second information, wherein the terminal has a first AI network model, and the first information is related to a first processing result of the first AI network model on the first channel information;
  • the terminal sends the second information to the network side device, or sends the second information and second channel information, where the second channel information is related to the first channel information;
  • the terminal receives third information from the network side device, wherein the third information is determined according to third channel information and the second channel information, the third channel information is related to a second processing result of the second AI network model on fourth information, and the fourth information is information obtained by dequantizing the second information;
  • the terminal updates the first AI network model according to the third information.
  • a device for updating an AI network model which is applied to a terminal, and the device includes:
  • a first processing module configured to perform quantization processing on the first information according to the first quantization information to obtain second information, wherein the terminal has a first AI network model, and the first information is related to a first processing result of the first AI network model on the first channel information;
  • a first sending module configured to send the second information to a network side device or send the second information and second channel information, where the second channel information is related to the first channel information;
  • a first receiving module configured to receive third information from the network side device, wherein the third information is determined according to third channel information and the second channel information, the third channel information is related to a second processing result of the second AI network model on fourth information, and the fourth information is information obtained by dequantizing the second information;
  • a first updating module is used to update the first AI network model according to the third information.
  • a method for updating an AI network model comprising:
  • the network side device receives the second information from the terminal or receives the second information and second channel information from the terminal, wherein the second information is information obtained by quantizing the first information, the first information is related to a first processing result of the first AI network model of the terminal on the first channel information, and the second channel information is related to the first channel information;
  • the network side device dequantizes the second information according to the first quantization information to obtain fourth information
  • the network side device determines third channel information based on a second processing result of the fourth information by the second AI network model
  • the network side device updates the second AI network model according to the third channel information and the second channel information, and determines third information
  • the network side device sends the third information to the terminal.
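The terminal-side and network-side method flows above can be pictured end to end. The following is a toy sketch, not the patent's method: plain linear maps stand in for the first (encoding) and second (decoding) AI network models, a uniform step quantizer stands in for the bit-stream quantization, and the second channel information is taken to be the first channel information itself; all names and values are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, code_dim, lr, step = 8, 4, 0.01, 0.05

# Toy stand-ins for the first (encoding) and second (decoding) AI network models.
W_enc = rng.normal(size=(code_dim, dim)) * 0.1   # held by the terminal
W_dec = rng.normal(size=(dim, code_dim)) * 0.1   # held by the network side device

h1 = rng.normal(size=dim)                        # first channel information

# Terminal: first processing result, quantized into the second information.
first_info = W_enc @ h1
second_info = np.round(first_info / step).astype(int)   # stands in for the bit stream

# Network side: dequantize (fourth information) and decode (third channel information).
fourth_info = second_info * step
h3 = W_dec @ fourth_info
err = h3 - h1                                    # third vs. second channel information

# Network side: reverse gradient (third information), then decoder update.
third_info = W_dec.T @ err
W_dec -= lr * np.outer(err, fourth_info)

# Terminal: update the first AI network model using the third information.
W_enc -= lr * np.outer(third_info, h1)
```

Iterating this exchange is the online joint training: each round the network side device refines the decoder and returns a reverse gradient, and the terminal refines the encoder.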
  • a device for updating an AI network model which is applied to a network side device, and the device includes:
  • the second receiving module is used to receive the second information from the terminal or receive the second information and the second channel information from the terminal, wherein the second information is information obtained by quantizing the first information, the first information is related to a first processing result of the first AI network model of the terminal on the first channel information, and the second channel information is related to the first channel information;
  • a second processing module configured to perform dequantization processing on the second information according to the first quantization information to obtain fourth information
  • a first determination module configured to determine third channel information based on a second processing result of the fourth information by a second AI network model
  • a second updating module configured to update the second AI network model and determine third information according to the third channel information and the second channel information
  • the second sending module is used to send the third information to the terminal.
  • a communication device which includes a processor and a memory, wherein the memory stores a program or instruction that can be run on the processor, and when the program or instruction is executed by the processor, the steps of the method described in the first aspect or the third aspect are implemented.
  • a terminal comprising a processor and a communication interface, wherein the processor is used to perform quantization processing on first information according to first quantization information to obtain second information, wherein the terminal has a first AI network model, and the first information is related to a first processing result of the first AI network model on first channel information; the communication interface is used to send the second information to a network side device, or to send the second information and second channel information, and the second channel information is related to the first channel information; the communication interface is also used to receive third information from the network side device, wherein the third information is determined according to the third channel information and the second channel information, and the third channel information is related to the second processing result of the second AI network model on fourth information, and the fourth information is information obtained by dequantizing the second information; the processor is also used to update the first AI network model according to the third information.
  • a network side device including a processor and a communication interface, wherein the communication interface is used to receive second information from a terminal or receive second information and second channel information from the terminal, wherein the second information is information obtained after quantizing the first information, and the first information is related to a first processing result of the first AI network model of the terminal on the first channel information, and the second channel information is related to the first channel information; the processor is used to dequantize the second information according to the first quantization information to obtain fourth information, and determine third channel information based on the second processing result of the fourth information by the second AI network model, and update the second AI network model according to the third channel information and the second channel information, and determine the third information; the communication interface is also used to send the third information to the terminal.
  • a communication system comprising: a terminal and a network side device, wherein the terminal can be used to execute the steps of the method for updating the AI network model as described in the first aspect, and the network side device can be used to execute the steps of the method for updating the AI network model as described in the third aspect.
  • a readable storage medium wherein a program or instruction is stored on the readable storage medium, and when the program or instruction is executed by a processor, the steps of the method described in the first aspect are implemented, or the steps of the method described in the third aspect are implemented.
  • a chip comprising a processor and a communication interface, wherein the communication interface is coupled to the processor, and the processor is used to run a program or instruction to implement the method described in the first aspect, or to implement the method described in the third aspect.
  • a computer program/program product is provided, wherein the computer program/program product is stored in a storage medium, and the computer program/program product is executed by at least one processor to implement the steps of the method for updating the AI network model as described in the first aspect, or the computer program/program product is executed by at least one processor to implement the steps of the method for updating the AI network model as described in the third aspect.
  • the terminal can quantize the forward information output by the coding AI network model, that is, the first information, according to the first quantization information, so as to convert the forward information in floating point format into a bit stream, and send the bit stream to the base station, and the network side device can dequantize the bit stream into forward information in floating point format according to the first quantization information, and calculate the reverse gradient information of the decoding AI network model and the reverse gradient information of the coding AI network model corresponding to the forward information based on the decoding AI network model.
  • the network side device can update the decoding AI network model based on the reverse gradient information of the decoding AI network model, and quantize the reverse gradient information of the coding AI network model and send it to the terminal, so that the terminal updates the coding AI network model accordingly, so as to realize the online joint training of the coding AI network model and the decoding AI network model.
  • the problem that the online joint training of the coding AI network model and the decoding AI network model cannot be implemented due to the network side device not knowing the first quantization information used by the terminal and being unable to dequantize the forward information in the form of a bit stream reported by the terminal is solved.
  • FIG1 is a schematic diagram of the structure of a wireless communication system to which an embodiment of the present application can be applied;
  • FIG2 is a flow chart of a method for updating an AI network model provided in an embodiment of the present application.
  • FIG3 is a flow chart of a method for updating an AI network model provided in an embodiment of the present application.
  • FIG4 is a schematic diagram of the structure of a device for updating an AI network model provided in an embodiment of the present application.
  • FIG5 is a schematic diagram of the structure of a device for updating an AI network model provided in an embodiment of the present application.
  • FIG6 is a schematic diagram of the structure of a communication device provided in an embodiment of the present application.
  • FIG. 7 is a schematic diagram of the hardware structure of a terminal provided in an embodiment of the present application.
  • FIG8 is a schematic diagram of the structure of a network side device provided in an embodiment of the present application.
  • the terms “first”, “second” and the like in the specification and claims of this application are used to distinguish similar objects, not to describe a specific order or sequence. It should be understood that the terms used in this way are interchangeable where appropriate, so that the embodiments of the present application can be implemented in an order other than those illustrated or described herein; the objects distinguished by “first” and “second” are generally of the same type, and the number of objects is not limited.
  • the first object can be one or more.
  • “and/or” in the specification and claims means at least one of the connected objects, and the character “/” generally indicates that the associated objects are in an “or” relationship.
  • LTE Long Term Evolution
  • LTE-A Long Term Evolution-Advanced
  • CDMA Code Division Multiple Access
  • TDMA Time Division Multiple Access
  • FDMA Frequency Division Multiple Access
  • OFDMA Orthogonal Frequency Division Multiple Access
  • SC-FDMA Single-carrier Frequency Division Multiple Access
  • NR New Radio
  • FIG1 shows a block diagram of a wireless communication system applicable to an embodiment of the present application.
  • the wireless communication system includes a terminal 11 and a network side device 12 .
  • the terminal 11 may be a mobile phone, a tablet computer, a laptop or notebook computer, a personal digital assistant (PDA), a handheld computer, a netbook, an ultra-mobile personal computer (UMPC), a mobile Internet device (MID), an augmented reality (AR)/virtual reality (VR) device, a robot, a wearable device, a vehicle user equipment (VUE), a pedestrian user equipment (PUE), a smart home device (a home appliance with a wireless communication function, such as a refrigerator, a television, a washing machine or furniture), a game console, a personal computer (PC), a teller machine, a self-service machine or another terminal side device, and the wearable device includes: a smart watch, a smart bracelet, a smart headset, smart glasses, smart jewelry (such as a smart bangle, a smart bracelet or a smart ring), etc.
  • the network side device 12 may include an access network device or a core network device, wherein the access network device may also be referred to as a radio access network device, a radio access network (RAN), a radio access network function or a radio access network unit.
  • the access network device may include a base station, a wireless local area network (WLAN) access point or a WiFi node, etc.
  • WLAN wireless local area network
  • the base station may be referred to as a Node B, an evolved Node B (eNB), an access point, a base transceiver station (BTS), a radio base station, a radio transceiver, a basic service set (BSS), an extended service set (ESS), a home Node B, a home evolved Node B, a transmitting and receiving point (TRP) or other appropriate terms in the field; as long as the same technical effect is achieved, the base station is not limited to a specific technical vocabulary. It should be noted that in the embodiments of the present application, only the base station in the NR system is used as an example for introduction, and the specific type of the base station is not limited.
  • the transmitter can optimize signal transmission based on CSI so that it is better adapted to the channel state.
  • CQI channel quality indicator
  • MCS modulation and coding scheme
  • PMI precoding matrix indicator
  • MIMO multi-input multi-output
  • the base station sends a CSI Reference Signal (CSI-RS) on certain time-frequency resources in a certain time slot.
  • CSI-RS CSI Reference Signal
  • the terminal performs channel estimation based on the CSI-RS, calculates the channel information on this slot, and feeds back the PMI to the base station through the codebook.
  • the base station combines the channel information based on the codebook information fed back by the terminal, and uses this to perform data precoding and multi-user scheduling before the next CSI report.
  • the terminal can change from reporting the PMI of each subband to reporting the PMI according to delay. Since the channels in the delay domain are more concentrated, the PMI of a small number of delays can approximately represent the PMI of all subbands, that is, the delay domain information is compressed before reporting.
  • the base station can pre-code the CSI-RS in advance and send the encoded CSI-RS to the terminal.
  • the terminal sees the channel corresponding to the encoded CSI-RS.
  • the terminal only needs to select several ports with higher strength from the ports indicated by the network side and report the coefficients corresponding to these ports.
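This port selection can be sketched in a few lines. The coefficient values and the number of reported ports below are invented purely for illustration:

```python
import numpy as np

# Hypothetical per-port coefficients estimated by the terminal on the precoded CSI-RS.
coeffs = np.array([0.10 + 0.20j, 0.90 - 0.10j, 0.05 + 0.00j, 0.70 + 0.40j, 0.20 + 0.00j])
k = 2  # number of ports to report (invented)

# Select the k ports with the highest strength and report only their coefficients.
ports = np.argsort(np.abs(coeffs))[-k:][::-1]
report = coeffs[ports]
```

Reporting only the strongest ports' coefficients, rather than the full channel, is what keeps the feedback overhead small.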
  • neural network or machine learning methods can be used.
  • AI modules such as neural networks, decision trees, support vector machines, Bayesian classifiers, etc. This application uses neural networks as an example for illustration, but does not limit the specific type of AI modules.
  • the parameters of the neural network are optimized through optimization algorithms.
  • An optimization algorithm is a type of algorithm that can help us minimize or maximize an objective function (sometimes called a loss function).
  • the objective function is often a mathematical combination of model parameters and data. For example, given data X and its corresponding label Y, we build a neural network model f(.). With the model, we can get the predicted output f(x) based on the input x, and we can calculate the difference between the predicted value and the true value (f(x)-Y), which is the loss function. Our goal is to find the right weights and biases to minimize the value of the above loss function. The smaller the loss value, the closer our model is to the real situation.
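The loss-minimization idea in the paragraph above can be made concrete with a one-weight toy model (data, target and learning rate are invented for illustration):

```python
import numpy as np

# Fit y = 2x with a single weight w by gradient descent on the mean squared loss.
x = np.array([1.0, 2.0, 3.0])
y = 2.0 * x

w, lr = 0.0, 0.05
for _ in range(200):
    pred = w * x                          # predicted output f(x)
    grad = np.mean(2.0 * (pred - y) * x)  # d/dw of mean (w*x - y)^2
    w -= lr * grad                        # adjust the weight to reduce the loss
```

After a few hundred steps w is essentially 2, the loss is near zero, and the model matches the real situation, which is exactly the behavior the paragraph describes.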
  • the common optimization algorithms are basically based on the error back propagation (BP) algorithm.
  • BP error back propagation
  • the basic idea of the BP algorithm is that the learning process consists of two processes: forward propagation of the signal and back propagation of the error.
  • the input sample is passed from the input layer, processed by each hidden layer layer by layer, and then passed to the output layer. If the actual output of the output layer does not match the expected output, it enters the error back propagation stage.
  • Error back propagation is to propagate the output error layer by layer through the hidden layer to the input layer in some form, and distribute the error to all units in each layer, so as to obtain the error signal of each layer unit. This error signal is used as the basis for correcting the weights of each unit.
  • This process of adjusting the weights of each layer of forward signal propagation and back propagation of error is repeated over and over again.
  • the process of continuous adjustment of weights is also the learning training of the network. This process continues until the error of the network output is reduced to an acceptable level, or until the preset number of learning times is reached.
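A minimal sketch of this forward-propagation / error-back-propagation cycle, assuming a tiny two-layer network with invented data (this illustrates the BP algorithm in general, not the patent's specific models):

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented training data: a linear target for the network to learn.
X = rng.normal(size=(16, 4))
Y = X @ rng.normal(size=(4, 2))

W1 = rng.normal(size=(4, 8)) * 0.1   # input layer -> hidden layer
W2 = rng.normal(size=(8, 2)) * 0.1   # hidden layer -> output layer
lr = 0.05

def mse():
    return np.mean((np.tanh(X @ W1) @ W2 - Y) ** 2)

loss_before = mse()
for _ in range(500):
    # Forward propagation: input layer -> hidden layer -> output layer.
    H = np.tanh(X @ W1)
    err = H @ W2 - Y                        # output error
    # Back propagation: distribute the error layer by layer to obtain each
    # layer's error signal, which is the basis for correcting its weights.
    grad_W2 = H.T @ err / len(X)
    grad_H = (err @ W2.T) * (1.0 - H ** 2)  # error signal at the hidden layer
    grad_W1 = X.T @ grad_H / len(X)
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1
loss_after = mse()
```

The loop repeats the two processes until the output error is driven down, matching the training procedure described above.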
  • the CSI compression recovery process is as follows: the terminal estimates the CSI-RS, calculates the channel information, obtains the encoding result of the calculated channel information or the original estimated channel information through the encoding AI network model, and sends the encoding result to the base station.
  • the base station receives the encoded result and inputs it into the decoding AI network model to recover the channel information.
  • the neural network-based CSI compression feedback solution is to compress and encode the channel information at the terminal, send the compressed content to the base station, and decode the compressed content at the base station to restore the channel information.
  • the decoding AI network model of the base station and the encoding AI network model of the terminal need to be jointly trained to achieve a reasonable match.
  • the input of the encoding AI network model is channel information
  • the output is encoding information, that is, channel feature information.
  • the input of the decoding AI network model is encoding information, and the output is restored channel information.
  • the online joint training of the encoding AI network model and the decoding AI network model is realized, that is, the terminal sends the forward information in the joint training process to the network side device, and the network side device sends the corresponding reverse information to the terminal, so that the network side device updates the decoding AI network model according to the forward information, and the terminal updates the encoding AI network model according to the reverse information.
  • the embodiments of the present application mainly propose improvements for the online joint training of the encoding AI network model and the decoding AI network model in the above method 3).
  • the decoding AI network model of the base station needs to match the encoding AI network model of all users.
  • each user calculates its own forward information and sends it to the base station.
  • the base station calculates the reverse gradient information of each user and sends it to the corresponding user.
  • the AI network model in the embodiment of the present application includes a first AI network model and a second AI network model.
  • the first AI network model may have compression and/or encoding functions, that is, the first AI network model may be any one of an encoding AI network model, a compression and encoding AI network model, and a compression AI network model.
  • the second AI network model may have a decompression and/or decoding function, that is, the second AI network model may be any one of a decoding AI network model, a decompression and decoding AI network model, and a decompression AI network model.
  • the first AI network model is an encoding AI network model
  • the second AI network model is a decoding AI network model as an example for illustration, which does not constitute a specific limitation herein.
  • the encoding AI network model may include an encoding part and a quantization part, that is, the encoding AI network model may also be referred to as a quantization AI network model or an encoding and quantization AI network model; correspondingly, the decoding AI network model may include a decoding part and a dequantization part, that is, the decoding AI network model may also be referred to as a dequantization AI network model or a decoding and dequantization AI network model.
  • the quantization and dequantization in the embodiments of the present application may also adopt a non-AI method, or adopt an AI network model independent of the encoding AI network model and the decoding AI network model.
  • the embodiments of the present application are illustrated by taking the encoding AI network model as an example, in which the encoding AI network model may include an encoding part and a quantization part, and the decoding AI network model may include a decoding part and a dequantization part. No specific limitation is constituted herein.
  • the channel characteristic information output by the encoding part and the input of the decoding part are data in floating point format, and during the data transmission process, data in bit stream format is usually transmitted.
  • the terminal converts the data in floating point format into data in bit stream format by quantization and then transmits it, and after receiving the data in bit stream format, the network side device performs corresponding dequantization on the data in bit stream format to restore the data in floating point format, so as to input the data in floating point format into the decoding part to obtain the corresponding reverse information, wherein the reverse information can also be called backward information or reverse gradient information, etc., which is not specifically limited here.
  • since quantization information such as quantization codebooks is constantly trained and updated during the training of AI network models, the base station may not know the quantization information actually used by the terminal, resulting in the base station being unable to accurately dequantize the forward information in the form of a bit stream sent by the terminal, making it impossible to achieve online joint training of the encoding AI network model and the decoding AI network model.
  • the terminal quantizes the forward information based on the first quantization information known to the network side device, and the network side device can accurately dequantize the forward information in the form of a bit stream sent by the terminal, and/or the network side device adopts a fixed or a quantization method known to the terminal to quantize the reverse information, so that the terminal can accurately dequantize the reverse information in the form of a bit stream sent by the network side device, so that online joint training of the encoding AI network model and the decoding AI network model can be realized, and finally the training obtains mutually matching encoding AI network model and decoding AI network model.
  • a method for updating an AI network model provided in an embodiment of the present application is executed by a terminal. As shown in FIG. 2 , the method for updating an AI network model executed by the terminal may include the following steps:
  • Step 201 The terminal performs quantization processing on first information according to first quantization information to obtain second information, wherein the terminal has a first AI network model, and the first information is related to a first processing result of the first AI network model on first channel information.
  • Step 202 The terminal sends the second information to the network side device or sends the second information and the second channel information, where the second channel information is related to the first channel information.
  • Step 203 The terminal receives third information from the network side device, wherein the third information is determined based on third channel information and the second channel information, the third channel information is related to the second processing result of the second AI network model on the fourth information, and the fourth information is information obtained by dequantizing the second information.
  • Step 204 The terminal updates the first AI network model according to the third information.
  • the first quantization information is quantization information known to both the terminal and the network side device.
  • the first quantization information satisfies at least one of the following:
  • the first quantization information is selected by the terminal and reported to the network side device;
  • the first quantization information is associated with the first AI network model, and the first quantization information may be trained together with the first AI network model.
  • the terminal may report the number of the first AI network model, or the network side device may indicate or configure the initial AI network model of the first AI network model in advance. In this way, when the network side device learns the first AI network model used by the terminal, it also learns the first quantization information associated with the first AI network model.
  • the first AI network model is used to compress and/or encode the first channel information to obtain channel characteristic information, that is, the first processing is compression and/or encoding processing, and the first processing result may be channel characteristic information.
  • the first information is related to the first processing result of the first AI network model on the first channel information, and the first information may include the channel characteristic information output by the first AI network model, or the first information includes information obtained after certain processing of the channel characteristic information output by the first AI network model, such as channel characteristic information after segmentation processing.
  • the first information may be understood as forward information in a floating point format
  • the second information may be forward information obtained by quantizing the forward information in a floating point format.
  • the second information may be forward information in a bitstream format or forward information in a floating point format.
  • the second information is generally described as forward information in a bitstream format in the embodiments of the present application.
  • the quantization method in the embodiment of the present application may include at least one of scalar quantization and vector quantization. In scalar quantization, each floating point number is quantized using a fixed number of bits: a table lookup determines the value corresponding to each bit combination, so that each floating point number is converted into its corresponding bit combination; when dequantizing, the same table or rule is looked up to convert each bit combination back into the corresponding floating point number.
  • In vector quantization, multiple floating point numbers are quantized together into a bit stream of a certain length based on a quantization codebook; similarly, the corresponding quantization codebook is required for dequantization.
  • the table used for scalar quantization can be used as a quantization codebook for vector quantization.
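As an illustration of the scalar and vector quantization described above, the following is a minimal sketch in Python; the uniform table over [-1, 1], the two-codeword codebook, and the function names are assumptions for illustration, not part of the embodiment:

```python
import numpy as np

def scalar_quantize(x, bits, lo=-1.0, hi=1.0):
    # Table of 2**bits representative values; each float is mapped to the
    # index (bit combination) of its nearest table entry.
    levels = np.linspace(lo, hi, 2 ** bits)
    idx = np.argmin(np.abs(x[:, None] - levels[None, :]), axis=1)
    return idx, levels

def scalar_dequantize(idx, levels):
    # Dequantization looks up the same table to recover the floats.
    return levels[idx]

def vector_quantize(x, codebook):
    # Quantize a whole float vector into the index of the nearest codeword.
    return int(np.argmin(np.linalg.norm(codebook - x, axis=1)))

def vector_dequantize(idx, codebook):
    return codebook[idx]

x = np.array([-0.8, 0.1, 0.7])
idx, levels = scalar_quantize(x, bits=2)           # 2 bits -> 4 table entries
x_hat = scalar_dequantize(idx, levels)

codebook = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
code = vector_quantize(np.array([0.9, 0.8, 1.1]), codebook)
```

Consistent with the point above, the scalar table could equally be treated as a vector-quantization codebook of length-1 codewords.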
  • the quantization codebook is also continuously trained and updated.
  • both the terminal and the network side devices need to know the quantization codebook to be used. Therefore, after the terminal updates the quantization codebook, it needs to synchronize the quantization codebook with the network side device, otherwise the forward information cannot be transmitted correctly.
  • the reverse information is usually sent in the form of a quantized bit stream when it is transmitted. In this case, a fixed quantization method, or a quantization method known to both parties, must be used to quantize and dequantize the reverse information.
  • the network side device will dequantize the second information according to the above-mentioned first quantization information to obtain the fourth information, wherein the fourth information can be forward information in a floating point format that is completely or partially the same as the first information. Then, the network side device decodes and/or decompresses the fourth information based on the second AI network model to restore the third channel information, and reversely transmits the gradient difference between the third channel information and the second channel information to the input layer of the second AI network model to obtain the reverse information of the first AI network model.
  • the forward information transmission process is: input -> A -> B -> C -> X -> D -> E -> F -> output, wherein input is the input of the encoding AI network model, that is, the first channel information; output is the output of the decoding AI network model, that is, the third channel information; and X covers the quantization process on the terminal side and the dequantization process on the base station side. The output of C is the forward information, and X quantizes it: the terminal reports the quantized bit stream to the base station, and the base station dequantizes the bit stream to restore the forward information, which then passes through D, E and F to obtain the corresponding output. A gradient difference is made between this output and the second channel information to obtain the reverse information of F, which is transmitted to E to calculate the reverse information of E, and then to D to calculate the reverse information of D, that is, the third information.
  • the base station quantizes the third information and transmits it to C, that is, to the encoding AI network model side, wherein the quantization of the third information may be a quantization process independent of the above-mentioned X; for example, the third information is quantized using a 16-bit floating point (float16) representation.
  • the terminal can perform back propagation through C, B, and A in sequence to adjust the weights of C, B, and A in sequence to complete an update of the first AI network model.
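The input -> A -> B -> C -> X -> D -> E -> F -> output chain and one joint update can be sketched end to end. This is only a sketch under strong assumptions: the encoding model (A, B, C) and the decoding model (D, E, F) are each collapsed into a single hypothetical linear layer, X is a uniform scalar quantizer, the reverse information is quantized with float16, and the terminal applies the returned gradient straight through the quantizer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-layer stand-ins for the encoding model (A->B->C)
# and the decoding model (D->E->F).
W_enc = rng.normal(size=(4, 8)) * 0.1
W_dec = rng.normal(size=(8, 4)) * 0.1

def quantize(z, bits=8, lo=-2.0, hi=2.0):
    """The X step: scalar-quantize each float to one of 2**bits levels."""
    levels = np.linspace(lo, hi, 2 ** bits)
    return np.argmin(np.abs(z[:, None] - levels[None, :]), axis=1), levels

def joint_step(x, lr=0.05):
    """One online joint-training step over the whole chain."""
    global W_enc, W_dec
    z = W_enc @ x                                  # forward information (floating point)
    idx, levels = quantize(z)                      # terminal quantizes -> bit stream
    z_hat = levels[idx]                            # base station dequantizes (fourth information)
    y = W_dec @ z_hat                              # third channel information
    err = y - x                                    # gradient difference vs. the sample label
    grad_z = (W_dec.T @ err).astype(np.float16)    # reverse information, quantized (e.g. float16)
    W_dec -= lr * np.outer(err, z_hat)             # network side updates the decoding model
    # Terminal dequantizes the reverse information and applies it through its
    # own layer (a straight-through assumption across the quantizer).
    W_enc -= lr * np.outer(grad_z.astype(np.float64), x)
    return float(err @ err)
```

In practice A through F are neural-network layers and the gradient is propagated through each of them in turn; the sketch only shows where quantization and dequantization sit in the training loop.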
  • the terminal may report the second information to the network side device without reporting the second channel information.
  • the first channel information is agreed upon by the protocol, or is offline channel information determined by negotiation between the network side device and the terminal.
  • the terminal does not need to report the second channel information, wherein the second channel information is used as a sample label for the network side device to determine the reverse information based on the decoding result of the second AI network model and the sample label.
  • the terminal may report the second information and the second channel information to the network side device.
  • the first channel information is the channel information measured by the terminal, and the network side device does not know the channel information of the terminal.
  • the terminal reports the second information and the second channel information to the network side device, so that the network side device determines the reverse information based on the decoding result of the second AI network model and the second channel information.
  • the second channel information is related to the first channel information, and the second channel information may be the same as the first channel information.
  • the network side device may make a gradient difference based on the second channel information and the third channel information to determine the reverse information of the first AI network model.
  • the second channel information is related to the first channel information, and the second channel information may be channel information obtained after encoding the first channel information.
  • the network side device may decode the second channel information to obtain channel information that is completely or partially the same as the first channel information, and make a gradient difference based on the decoded second channel information and the third channel information to determine the reverse information of the first AI network model.
  • the third information can be understood as reverse information, for example, reverse information in bitstream format.
  • the network side device can quantize the reverse information in floating point format output by the second AI network model into reverse information in bitstream format and transmit it to the terminal.
  • the quantization method used by the network side device may be the same as the method used by the terminal to quantize the first information, or the network side device may adopt a fixed quantization method, or another quantization method known to the terminal; this is not specifically limited here.
  • the fourth information can be understood as forward information in a floating point format obtained by dequantizing the second information.
  • the network side device can dequantize the second information based on the known first quantization information to obtain forward information in a floating point format, and input the forward information in a floating point format into the second AI network model to obtain the third channel information.
  • the second AI network model is used to decode and/or decompress the fourth information to restore the channel information.
  • the third channel information is related to the second processing result of the second AI network model on the fourth information.
  • the third channel information may be the channel information after decoding and/or decompression processing output by the second AI network model, or the third channel information is the channel information obtained after certain processing of the channel information after decoding and/or decompression processing output by the second AI network model, for example: the third channel information is the channel information obtained by splicing at least two segmented channel information after decoding and/or decompression processing output by the second AI network model, wherein the second processing is the above-mentioned decoding and/or decompression processing.
  • the first quantization information includes at least one of the following:
  • the quantization method includes scalar quantization and/or vector quantization
  • the indication of the quantization information may specify the number of quantization bits used for each floating-point number. It may use a vector indication with the same length as the number of floating-point numbers (for example, the vector [2,2,2,3,3,3] indicates that the first three floating-point numbers are quantized using 2 bits and the last three using 3 bits), or it may collectively indicate each set of consecutive floating-point numbers quantized with the same number of bits (for example, it is agreed that the first k1 consecutive floating-point numbers are quantized using 1 bit, the next k2 using 2 bits, the next k3 using 3 bits, and the remaining k4 using 4 bits; indicating [0,3,3,0] then means that the first three floating-point numbers are quantized using 2 bits and the last three using 3 bits).
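As a sketch, the grouped indication in this example can be expanded into the equivalent per-float indication vector; the function name is hypothetical:

```python
def expand_counts(counts):
    """Expand a count vector [k1, k2, k3, k4] (numbers of consecutive
    floating-point numbers quantized with 1, 2, 3 and 4 bits, respectively)
    into the equivalent per-float indication vector."""
    alloc = []
    for bits, k in enumerate(counts, start=1):
        alloc += [bits] * k
    return alloc

per_float = expand_counts([0, 3, 3, 0])   # the grouped indication in the example
```

Here `expand_counts([0, 3, 3, 0])` yields `[2, 2, 2, 3, 3, 3]`, i.e. the same allocation as the per-float vector indication, for a total of 15 quantization bits.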
  • the indication of quantization information may indicate a vector quantization codebook and the floating-point subsequence that each codebook quantizes. For example, a floating-point sequence of length 6 is divided into two subsequences of length 3, quantized using a codebook of size 2^6 and a codebook of size 2^9, respectively; it is then necessary to indicate the two codebooks and that the first codebook quantizes the first three floating-point numbers while the second codebook quantizes the last three.
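A sketch of this subsequence-wise vector quantization, with randomly generated stand-ins for the 2^6- and 2^9-entry codebooks (the codebook contents and names are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
codebook_a = rng.normal(size=(2 ** 6, 3))    # quantizes the first three floats (6 bits)
codebook_b = rng.normal(size=(2 ** 9, 3))    # quantizes the last three floats (9 bits)

def vq_per_segment(x, codebooks):
    # Quantize consecutive segments of x, one codebook per segment;
    # each index costs log2(len(codebook)) bits.
    indices, pos = [], 0
    for cb in codebooks:
        seg = x[pos:pos + cb.shape[1]]
        indices.append(int(np.argmin(np.linalg.norm(cb - seg, axis=1))))
        pos += cb.shape[1]
    return indices

x = rng.normal(size=6)
idx_a, idx_b = vq_per_segment(x, [codebook_a, codebook_b])   # 6 + 9 = 15 bits in total
restored = np.concatenate([codebook_a[idx_a], codebook_b[idx_b]])
```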
  • the method for indicating the codebook weight includes: 1) issuing a complete codebook; 2) predefining some candidate codebooks, and selecting and indicating the corresponding serial number from the candidate codebook when the codebook needs to be indicated.
  • the update strategy may be an update strategy for a quantization codebook and/or an update strategy for a quantization method. For example: periodically updating the quantization codebook and/or quantization method, synchronously updating the quantization codebook and/or quantization method according to the training of the AI network model, updating the quantization codebook and/or quantization method according to the instructions of the network side device, and other update strategies.
  • the terminal may adopt quantization parameters specifying how many floating-point numbers are quantized into how many bits.
  • the above-mentioned quantization bit number may be the quantization bit number in scalar quantization, that is, the number of bits into which each floating point number is quantized.
  • the segmented method of quantization processing may be to quantize the first information in segments, for example, by dividing the first information into at least two segments according to port grouping, layer grouping, etc., and then each segment is quantized using its corresponding quantization information.
  • the method for updating the AI network model further includes:
  • the terminal receives training information from the network side device, where the training information includes at least one of the following:
  • the reporting method of the second information, or the reporting method of the second information and the second channel information; and the receiving method of the third information.
  • the above training information may be configured in a CSI report configuration, or indicated to the terminal through signaling.
  • the reporting method of the second information, or the reporting method of the second information and the second channel information may include time-frequency resources for the terminal to send the second information or the second information and the second channel information.
  • the manner of receiving the third information may include time-frequency resources used by a terminal to receive the third information.
  • the terminal may determine how to report the second information and/or how to receive the third information according to the configuration or instruction of the network-side device.
  • the reporting method of the second information may include the content of the second information, wherein the content of the second information includes at least one of the following:
  • the first processing result, wherein the first processing result is information in a floating point format before quantization;
  • information in binary format obtained by quantizing the first processing result;
  • information in floating point format obtained by sequentially quantizing and dequantizing the first processing result.
  • when the content of the second information includes the first processing result, this may indicate that the quantization method used in the process of training the AI network model differs from the quantization method used when the AI network model is used for CSI reporting. In the process of training the AI network model, the first AI network model encodes the first channel information on the terminal side to obtain the forward information, and on the network side the forward information can be quantized, dequantized and decoded using the second AI network model to obtain the reverse information.
  • the quantization method in the process of training the AI network model is determined by the network side device.
  • when the content of the second information includes information in binary format obtained by quantizing the first processing result, this can indicate that the quantization method used in the process of training the AI network model is the same as the quantization method used when the AI network model is used for CSI reporting. In the process of training the AI network model, the first AI network model is used to encode and quantize the first channel information on the terminal side to obtain forward information in binary format, that is, in bit stream format.
  • the second AI network model can be used to dequantize and decode the forward information on the network side to obtain reverse information.
  • the content of the second information includes information in floating point format obtained by sequentially quantizing and dequantizing the first processing result, which can indicate that the quantization method used in the process of training the AI network model may be different from the quantization method used in the process of CSI reporting using the AI network model, wherein, in the process of training the AI network model, the first AI network model is used to encode, quantize and dequantize the first channel information on the terminal side to obtain forward information in floating point format, and the second AI network model can be used to decode the forward information in floating point format on the network side to obtain reverse information.
  • the quantization method in the process of training the AI network model is determined by the terminal, and the terminal may not report the first quantization information corresponding to the quantization method to the network side device.
  • the content of the second information in the embodiment of the present application can be any one of the above options one to three.
  • the embodiment of the present application is usually illustrated by taking the quantization method used in the process of training the AI network model as the quantization method in the above option two as an example.
  • the method before the terminal performs quantization processing on the first information according to the first quantization information, the method further includes:
  • the terminal receives first indication information from the network side device
  • the terminal determines the first quantization information according to the first indication information
  • the first indication information indicates at least one of the following:
  • a first identifier, wherein the first identifier corresponds to the first quantization information;
  • the second identifier is used to identify a quantization codebook in a quantization codebook pool, and the first quantization information includes a quantization codebook corresponding to the second identifier;
  • a third identifier is used to identify a quantization codebook group, the quantization codebook group includes at least two quantization codebooks, and the first quantization information includes a quantization codebook in the quantization codebook group corresponding to the third identifier.
  • the correspondence between at least one of the first identifier, the second identifier and the third identifier and the first quantization information can be made known to the terminal and the network side device through a protocol agreement, or through a pre-indication or configuration by the network side device. The network side device then indicates the first identifier, the second identifier or the third identifier through the first indication information, so that the terminal uses the first quantization information corresponding to that identifier.
  • the protocol stipulates some fixed quantization information and the identifier (index) of each quantization information.
  • the base station indicates the index to the terminal, and the terminal uses the corresponding quantization information according to the index indicated by the base station.
  • the base station indicates or the protocol stipulates the use of scalar quantization or vector quantization.
  • the quantization information may include the number of quantization bits, each number of quantization bits corresponds to a fixed quantization table, and the base station may indicate the quantization index corresponding to the number of quantization bits or directly indicate the number of quantization bits.
  • some quantization codebooks and the identifier of each quantization codebook can be pre-agreed in the protocol; the base station can then indicate the index of a quantization codebook so that the terminal uses that quantization codebook, or the base station can indicate the quantization codebook itself. Alternatively, the base station can indicate a quantization codebook group index and a quantization codebook length, and the terminal selects a quantization codebook of the corresponding length from the quantization codebook group corresponding to that index, wherein the quantization codebooks within each quantization codebook group have lengths different from each other.
  • the quantization codebook pool may be agreed upon by the protocol, or pre-indicated or configured by the network side device, or associated with the first AI network model.
  • the first AI network model is associated with at least two quantization information, each of which corresponds to a second identifier, and the network side device indicates one of them as the first quantization information.
  • the quantization codebook at the beginning of training may be indicated by the base station or reported by the terminal to the base station, and subsequent quantization codebooks may be updated by updating the training of the AI network model.
  • the first quantization information is indicated by a network side device.
  • the method before the terminal receives the third information from the network side device, the method further includes:
  • the terminal sends second indication information to the network side device, wherein the second indication information indicates at least one of the following:
  • a first identifier, wherein the first identifier corresponds to the first quantization information;
  • the second identifier is used to identify a quantization codebook in a quantization codebook pool, and the first quantization information includes a quantization codebook corresponding to the second identifier;
  • the third identifier is used to identify a quantization codebook group, where the quantization codebook group includes at least two quantization codebooks, and the first quantization information includes a quantization codebook in the quantization codebook group corresponding to the third identifier;
  • a fourth identifier where the fourth identifier is used to identify a target quantization codebook in the quantization codebook group corresponding to the third identifier, and the first quantization information includes the target quantization codebook.
  • the terminal selects and reports the first quantization information, wherein the difference between the second indication information and the first indication information includes that the second indication information can also indicate the identifier of the quantization codebook selected by the terminal in the quantization codebook group corresponding to the quantization codebook group index.
  • the method further includes:
  • the terminal updates the quantization codebook, and the first quantization information includes the updated quantization codebook
  • the terminal sends third indication information to the network side device, wherein the third indication information indicates the updated quantization codebook or an identifier of the updated quantization codebook.
  • the terminal can update the quantization codebook during the training of the AI network model and report the updated quantization codebook or quantization codebook identifier to the network side device, such as: the terminal updates the quantization table used in scalar quantization or the quantization codebook in vector quantization, where the quantization table is used to determine each bit group and the corresponding floating point number.
  • the terminal updates the quantization codebook, including at least one of the following:
  • the terminal periodically updates the quantization codebook
  • the terminal updates the quantization codebook based on an instruction of the network side device.
  • the period for updating the quantization codebook may be agreed upon by a protocol or configured by a network-side device.
  • the terminal can periodically update the quantization codebook, for example, the quantization codebook is updated once every Q updates of the first AI network model.
  • the terminal can also update the quantization codebook based on the instruction of the network side device.
  • the terminal may decide whether to update the quantization codebook based on a conditional trigger. For example, when the terminal trains a better quantization codebook by itself or finds that the channel quality changes and the original quantization codebook is not suitable, the quantization codebook may be updated.
  • the method before the terminal updates the quantization codebook, the method further includes:
  • the terminal sends target request information to the network side device, where the target request information is used to request an update of a quantization codebook of the terminal;
  • the terminal receives target response information from the network-side device, where the target response information is used to allow updating of a quantization codebook of the terminal.
  • the terminal can request the network side device to update the quantization codebook.
  • the terminal updates the quantization codebook. For example, when the terminal finds that the channel quality has changed and the original quantization codebook is not suitable, it sends the target request information to the network side device. If the terminal receives the target response information from the network side device, it updates the quantization codebook.
  • the method before the terminal updates the quantization codebook, the method further includes:
  • the terminal receives fourth indication information, where the fourth indication information indicates to update the quantization codebook.
  • the fourth indication information may be indication information sent by a network side device.
  • the terminal is a terminal in a target group
  • the target group includes at least two terminals
  • the first AI network models of the terminals in the target group correspond to the same second AI network model.
  • the terminals in the target group may be the terminals where all the first AI network models corresponding to a second AI network model of the network side device are located.
  • a second AI network model of the network side device may be a decoding AI network model shared by the encoding AI network models of multiple terminals.
  • the second AI network model of the network side device needs to be jointly trained with the first AI network models of all the terminals in the target group.
  • the network side device may receive forward information from all terminals in the target group, dequantize the forward information according to the first quantization information corresponding to each of them, and then determine the reverse information of each terminal in the target group based on the dequantized forward information of all terminals in the target group, and send it to the corresponding terminal, and determine the reverse information of the second AI network model based on the dequantized forward information of all terminals in the target group and the corresponding sample labels to update the second AI network model.
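The group-wide update at the network side can be sketched as follows, again collapsing the shared second AI network model into one hypothetical linear layer (the names and shapes are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
W_dec = rng.normal(size=(8, 4)) * 0.1   # shared decoding model for the target group

def group_update(reports, lr=0.05):
    """reports: one (z_hat, label) pair per terminal in the target group,
    where z_hat is that terminal's dequantized forward information and
    label is the corresponding sample label.  Returns the per-terminal
    reverse information and applies one update to the shared model."""
    global W_dec
    reverse, grad_dec = [], np.zeros_like(W_dec)
    for z_hat, label in reports:
        err = W_dec @ z_hat - label           # gradient difference for this terminal
        reverse.append(W_dec.T @ err)         # reverse information sent back to it
        grad_dec += np.outer(err, z_hat)      # accumulate the shared model's gradient
    W_dec -= lr * grad_dec / len(reports)
    return reverse

reports = [(rng.normal(size=4), rng.normal(size=8)) for _ in range(3)]
per_terminal_reverse = group_update(reports)
```

In the embodiment, each terminal's forward information would first be dequantized with that terminal's own first quantization information before entering this step.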
  • the method before the terminal performs quantization processing on the first information according to the first quantization information, the method further includes:
  • the terminal receives fifth indication information, wherein the fifth indication information indicates to update the quantization codebooks of all terminals in the target group.
  • the network side device may instruct all terminals in the target group to update their respective quantization codebooks.
  • the network side device may directly indicate a quantization codebook update strategy to all terminals in the target group. Based on this strategy, the network side device can obtain the updated quantization codebook of each terminal in the target group and dequantize the second information reported by each terminal accordingly.
  • the method further comprises:
  • the terminal updates the quantization information according to the fifth indication information, and sends the updated quantization codebook to the network side device.
  • each terminal in the target group reports the updated quantization codebook to the network side device, so that the network side device can know the updated quantization codebook of each terminal in the target group, and dequantize the second information reported by each terminal accordingly.
  • if the terminal finds that the quantization codebook does not need to be updated, the terminal does not report an updated quantization codebook, or reports indication information indicating that the quantization codebook does not need to be updated.
  • the method further comprises:
  • the terminal receives a target quantization codebook from the network side device, wherein the target quantization codebook is a quantization codebook determined according to quantization codebooks of all terminals in the target group;
  • the terminal determines that the first quantization information includes the target quantization codebook.
  • the network side device determines the target quantization codebook according to the updated quantization codebook reported by all the terminals in the target group, and sends it to each terminal in the target group, so that all the terminals in the target group subsequently use a unified target quantization codebook to quantize the first information.
  • the network side device can use the same dequantization method to perform dequantization processing on the second information reported by the terminals in the target group.
  • the first quantization information of all terminals in the target group is the same.
  • all terminals in the target group subsequently use the same first quantization information to quantize their respective first information, so that the network side device can use the same dequantization method to dequantize the second information reported by the terminals in the target group.
  • the terminals in the target group may also use different first quantization information.
  • the terminals in the target group quantize their first information according to their respective quantization methods.
  • the network side device also needs to dequantize the corresponding second information according to the dequantization method corresponding to each terminal in the target group.
  • the method further includes:
  • the terminal sends target capability information to the network side device, where the target capability information indicates at least one of the following:
  • the terminal sends the target capability information to the network side device, and the target capability information can be used as a basis for the network side device to configure or indicate the first quantization information for the terminal, so that the network side device configures or indicates the first quantization information for the terminal.
  • the quantization information usable by the terminal. For example, for a terminal that does not support updating of the quantization codebook, the network side device indicates or configures the terminal's quantization codebook update strategy as not updating the quantization codebook.
  • the third information is information quantized based on second quantization information, and the terminal updates the first AI network model according to the third information, including:
  • the terminal performs dequantization processing on the third information based on the second quantization information to obtain fifth information;
  • the terminal updates the first AI network model according to the fifth information.
  • the fifth information is the reverse information recovered by dequantization; for example, the degree to which the fifth information approximates the fourth information is determined by the quantization and dequantization accuracy corresponding to the second quantization information.
  • the second quantization information may be fixed quantization information, for example, the second quantization information includes a target quantization bit number, and the target quantization bit number is indicated by the network side device or agreed upon by a protocol.
  • the second quantization information may be any quantization information known by the terminal and the network side device.
  • the second quantization information may be the same quantization information as the first quantization information.
  • the third information may be quantized reverse information
  • the terminal may dequantize the third information based on its quantization method, and update the first AI network model according to the dequantized reverse information.
  • the terminal can quantize the forward information output by the coding AI network model, that is, the first information, according to the first quantization information, so as to convert the forward information in floating-point format into a bit stream and send the bit stream to the base station. The network side device can then dequantize the bit stream back into forward information in floating-point format according to the first quantization information, and calculate, based on the decoding AI network model, the reverse gradient information of the decoding AI network model and the reverse gradient information of the coding AI network model corresponding to the forward information.
  • the network side device can update the decoding AI network model based on the reverse gradient information of the decoding AI network model, and quantize the reverse gradient information of the coding AI network model and send it to the terminal, so that the terminal updates the coding AI network model accordingly, so as to realize the online joint training of the coding AI network model and the decoding AI network model.
  • this solves the problem that online joint training of the coding AI network model and the decoding AI network model cannot be implemented because the network side device does not know the first quantization information used by the terminal and therefore cannot dequantize the forward information reported by the terminal in the form of a bit stream.
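The quantize/dequantize round trip described above can be sketched as a uniform scalar quantizer. The function names, the clipping range and the 4-bit width below are illustrative assumptions, not details fixed by the application:

```python
import numpy as np

def quantize(x, n_bits, lo=-1.0, hi=1.0):
    """Map floating-point forward information onto 2**n_bits uniform levels (the bit stream)."""
    levels = 2 ** n_bits - 1
    x = np.clip(x, lo, hi)
    return np.round((x - lo) / (hi - lo) * levels).astype(np.int64)

def dequantize(q, n_bits, lo=-1.0, hi=1.0):
    """Recover floating-point forward information from the reported levels."""
    levels = 2 ** n_bits - 1
    return lo + q.astype(np.float64) / levels * (hi - lo)

first_info = np.array([-0.8, 0.05, 0.7])         # forward information (floating point)
second_info = quantize(first_info, n_bits=4)     # what the terminal reports
fourth_info = dequantize(second_info, n_bits=4)  # what the network side recovers
```

As long as both sides share the same first quantization information (here: bit width and range), the recovered fourth information stays within half a quantization step of the first information.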
  • the method for updating the AI network model provided in the embodiment of the present application may be executed by a network-side device. As shown in FIG3 , the method for updating the AI network model may include the following steps:
  • Step 301: The network side device receives second information from the terminal, or receives second information and second channel information from the terminal, wherein the second information is information obtained after quantization processing of the first information, the first information is related to the first processing result of the first AI network model of the terminal on the first channel information, and the second channel information is related to the first channel information.
  • the second information is information obtained after quantization processing of the first information
  • the first information is related to the first processing result of the first AI network model of the terminal on the first channel information
  • the second channel information is related to the first channel information.
  • Step 302: The network side device dequantizes the second information according to the first quantization information to obtain fourth information.
  • Step 303: The network side device determines third channel information based on a second processing result of the fourth information by the second AI network model.
  • Step 304: The network side device updates the second AI network model and determines third information according to the third channel information and the second channel information.
  • Step 305: The network side device sends the third information to the terminal.
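Steps 301 to 305 can be sketched with a toy linear layer standing in for the second AI network model; all names, shapes and the squared-error training signal are assumptions made only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-in for the second (decoding) AI network model: one linear layer.
W = rng.standard_normal((4, 8)) * 0.1

def dequantize(second_info, n_bits=4, lo=-1.0, hi=1.0):
    # Step 302: recover the fourth information from the reported bit stream.
    levels = 2 ** n_bits - 1
    return lo + np.asarray(second_info, dtype=np.float64) / levels * (hi - lo)

def network_side_round(W, second_info, second_channel_info, lr=0.01):
    fourth_info = dequantize(second_info)              # Step 302
    third_channel_info = fourth_info @ W               # Step 303: second processing result
    error = third_channel_info - second_channel_info   # compare with the reported label
    W_updated = W - lr * np.outer(fourth_info, error)  # Step 304: update second AI model
    third_info = error @ W.T                           # Step 304: reverse gradient for terminal
    return W_updated, third_info                       # Step 305: third_info is sent back

W_updated, third_info = network_side_round(W, np.array([0, 5, 10, 15]), np.zeros(8))
```

The returned third information is the gradient with respect to the terminal-side output, which the terminal dequantizes and back-propagates through the first AI network model.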
  • the method for updating the AI network model provided in the embodiment of the present application is different from the method for updating the AI network model shown in FIG. 2 in that:
  • the executor of the method for updating the AI network model as shown in Figure 2 is the terminal
  • the executor of the method for updating the AI network model as shown in Figure 3 is the network side device
  • the various steps executed by the network side device in the method for updating the AI network model correspond to the various steps executed by the terminal in the method for updating the AI network model.
  • the meaning and function of the various steps executed by the network side device in the method for updating the AI network model can refer to the meaning and function of the various steps in the method for updating the AI network model as shown in Figure 2; the two cooperate with each other to jointly realize the online joint training of the first AI network model on the terminal side and the second AI network model on the network side device, which will not be repeated here.
  • the first quantization information includes at least one of the following:
  • the quantization method includes scalar quantization and/or vector quantization
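As a rough illustration of the two quantization methods named above: scalar quantization maps each element independently to a codeword, while vector quantization maps a whole vector to a single codeword. The codebooks and values below are invented for the example:

```python
import numpy as np

def scalar_quantize(x, codebook):
    """Scalar quantization: each element independently maps to its nearest codeword."""
    idx = np.abs(x[:, None] - codebook[None, :]).argmin(axis=1)
    return idx, codebook[idx]

def vector_quantize(x, codebook):
    """Vector quantization: the whole vector maps to the nearest codebook row."""
    i = int(np.linalg.norm(codebook - x, axis=1).argmin())
    return i, codebook[i]

x = np.array([0.2, -0.4])                           # first information (illustrative)
scalar_cb = np.array([-0.5, 0.0, 0.5])              # assumed scalar codebook
vector_cb = np.array([[0.25, -0.5], [-0.25, 0.5]])  # assumed vector codebook

scalar_idx, _ = scalar_quantize(x, scalar_cb)       # one index reported per element
vector_idx, _ = vector_quantize(x, vector_cb)       # one index reported per vector
```

Vector quantization trades a larger codebook for fewer reported indices, which is why the identifiers and codebook pools discussed below matter for keeping both sides aligned.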
  • the method further includes:
  • the network side device sends training information to the terminal, where the training information includes at least one of the following:
  • the receiving method of the third information.
  • the reporting method of the second information includes the content of the second information, wherein the content of the second information includes at least one of the following:
  • the first processing result, wherein the first processing result is information in floating-point format before quantization;
  • information in floating-point format obtained by sequentially quantizing and dequantizing the first processing result.
  • the first quantization information satisfies at least one of the following:
  • selected by the terminal and reported to the network side device
  • the method before the network side device receives the second information from the terminal, the method further includes:
  • the network side device sends first indication information to the terminal, wherein the first indication information indicates at least one of the following:
  • a first identifier, wherein the first identifier corresponds to the first quantization information;
  • the second identifier is used to identify a quantization codebook in a quantization codebook pool, and the first quantization information includes a quantization codebook corresponding to the second identifier;
  • a third identifier is used to identify a quantization codebook group, the quantization codebook group includes at least two quantization codebooks, and the first quantization information includes a quantization codebook in the quantization codebook group corresponding to the third identifier.
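A minimal sketch of how the indicated identifiers might resolve to the first quantization information on the terminal side. The table contents, identifier values and function name are hypothetical:

```python
# Hypothetical tables; the identifiers and codebook contents are illustrative only.
quantization_codebook_pool = {     # second identifier -> one codebook in the pool
    0: [-0.5, 0.0, 0.5],
    1: [-0.75, -0.25, 0.25, 0.75],
}
quantization_codebook_groups = {   # third identifier -> a group of >= 2 codebooks
    7: [[-1.0, 1.0], [-0.5, 0.5]],
}

def resolve_first_quantization_info(second_id=None, third_id=None, fourth_id=None):
    """Resolve the first quantization information from the indicated identifier(s)."""
    if second_id is not None:                       # second identifier: pick from the pool
        return quantization_codebook_pool[second_id]
    group = quantization_codebook_groups[third_id]  # third identifier: pick the group
    if fourth_id is not None:                       # fourth identifier: target codebook in it
        return group[fourth_id]
    return group                                    # a codebook in the indicated group
```

Because both sides hold the same tables, signaling a short identifier is enough to align the terminal's quantization with the network side device's dequantization.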
  • the method before the network side device performs dequantization processing on the second information according to the first quantization information, the method further includes:
  • the network side device receives second indication information from the terminal
  • the network side device determines the first quantization information according to the second indication information
  • the second indication information indicates at least one of the following:
  • a first identifier, wherein the first identifier corresponds to the first quantization information;
  • the second identifier is used to identify a quantization codebook in a quantization codebook pool, and the first quantization information includes a quantization codebook corresponding to the second identifier;
  • the third identifier is used to identify a quantization codebook group, where the quantization codebook group includes at least two quantization codebooks, and the first quantization information includes a quantization codebook in the quantization codebook group corresponding to the third identifier;
  • a fourth identifier, wherein the fourth identifier is used to identify a target quantization codebook in the quantization codebook group corresponding to the third identifier, and the first quantization information includes the target quantization codebook.
  • the method further includes:
  • the network-side device receives third indication information from the terminal, wherein the third indication information indicates an updated quantization codebook of the terminal or an identifier of the updated quantization codebook.
  • the method before the network side device receives the third indication information from the terminal, the method further includes:
  • the network side device receives target request information from the terminal, where the target request information is used to request an update of a quantization codebook of the terminal;
  • the network side device sends target response information to the terminal, where the target response information is used to allow updating of a quantization codebook of the terminal.
  • the method before the network side device receives the third indication information from the terminal, the method further includes:
  • the network side device sends fourth indication information to the terminal, wherein the fourth indication information instructs the terminal to update the quantization codebook.
  • the network side device receives the second information from the terminal, including:
  • the network side device receives second information from each terminal in a target group, wherein the target group includes at least two terminals, and the first AI network models of the terminals in the target group correspond to the same second AI network model.
  • the method before the network side device receives the second information from each terminal in the target group, the method further includes:
  • the network-side device sends fifth indication information to each terminal in the target group, wherein the fifth indication information instructs all terminals in the target group to update their respective quantization codebooks.
  • the method further includes:
  • the network side device receives the updated quantization codebook for each terminal in the target group.
  • the method further includes:
  • the network side device determines a target quantization codebook according to quantization codebooks of all terminals in the target group
  • the network-side device sends the target quantization codebook to each terminal in the target group, wherein the first quantization information includes the target quantization codebook.
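The application leaves open how the target quantization codebook is derived from the terminals' reported codebooks. One plausible rule, shown purely as an assumption, is an element-wise average of equally sized codebooks so that every terminal in the target group can share one codebook:

```python
import numpy as np

def determine_target_codebook(terminal_codebooks):
    """One possible rule (an assumption, not fixed by the application):
    sort each terminal's codebook and average them element-wise, yielding a
    single shared codebook of the same size. Assumes equally sized codebooks."""
    stacked = np.stack([np.sort(np.asarray(cb, dtype=np.float64))
                        for cb in terminal_codebooks])
    return stacked.mean(axis=0)

group_codebooks = [[-0.6, 0.0, 0.6], [0.4, -0.4, 0.0]]  # reported by the terminals
target_codebook = determine_target_codebook(group_codebooks)
```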
  • the first quantization information of all terminals in the target group is the same.
  • the method further includes:
  • the network side device receives target capability information from the terminal
  • the network side device determines the first quantization information according to the target capability information
  • the target capability information indicates at least one of the following:
  • the network side device determines the third information based on a second processing result of the first information by the second AI network model, including:
  • the network-side device obtains fourth information based on a second processing result of the first information by the second AI network model
  • the network side device quantizes the fourth information based on the second quantization information to obtain the third information.
  • the second quantization information includes a target quantization bit number, and the target quantization bit number is indicated by the network side device or agreed upon by a protocol.
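The fixed second quantization information can be sketched as a symmetric quantizer parameterized only by the target quantization bit number. The shared scale factor and the function names below are assumptions for illustration:

```python
import numpy as np

TARGET_QUANT_BITS = 4  # target quantization bit number (network-indicated or protocol-agreed)

def quantize_reverse_info(fourth_info, n_bits=TARGET_QUANT_BITS):
    """Network side: quantize the fourth information (reverse gradients) into third information.
    Sharing a per-message scale factor is an assumption, not specified by the application."""
    scale = float(np.max(np.abs(fourth_info))) or 1.0
    levels = 2 ** (n_bits - 1) - 1  # symmetric signed grid
    q = np.round(fourth_info / scale * levels).astype(np.int64)
    return q, scale

def dequantize_reverse_info(q, scale, n_bits=TARGET_QUANT_BITS):
    """Terminal side: recover the fifth information used to update the first AI network model."""
    levels = 2 ** (n_bits - 1) - 1
    return q.astype(np.float64) / levels * scale

fourth_info = np.array([0.3, -0.1, 0.05])            # reverse gradient (floating point)
third_info, scale = quantize_reverse_info(fourth_info)
fifth_info = dequantize_reverse_info(third_info, scale)
```

Because the target quantization bit number is known to both sides, no extra signaling is needed for the terminal to dequantize the third information into the fifth information.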
  • the method for updating the AI network model performed by the network side device provided in the embodiment of the present application is used to cooperate with the method for updating the AI network model performed by the terminal as shown in Figure 2, to quantize, dequantize and transmit the forward information, reverse information and sample labels over the air, so as to jointly realize the online joint training of the first AI network model and the second AI network model.
  • the method for updating the AI network model provided in the embodiment of the present application can be executed by a device for updating the AI network model.
  • the device for updating the AI network model executing the method for updating the AI network model is taken as an example to illustrate the device for updating the AI network model provided in the embodiment of the present application.
  • an apparatus for updating an AI network model provided in an embodiment of the present application may be a device in a terminal.
  • the device 400 for updating the AI network model may include the following modules:
  • a first processing module 401 is configured to perform quantization processing on the first information according to the first quantization information to obtain second information, wherein the terminal has a first AI network model, and the first information is related to a first processing result of the first AI network model on the first channel information;
  • a first sending module 402 configured to send the second information to a network side device or send the second information and second channel information, where the second channel information is related to the first channel information;
  • a first receiving module 403 configured to receive third information from the network side device, wherein the third information is determined according to third channel information and the second channel information, the third channel information is related to a second processing result of the second AI network model on fourth information, and the fourth information is information obtained by dequantizing the second information;
  • the first updating module 404 is configured to update the first AI network model according to the third information.
  • the first quantization information includes at least one of the following:
  • the quantization method includes scalar quantization and/or vector quantization
  • the device 400 for updating the AI network model further includes:
  • the third receiving module is configured to receive training information from the network side device, where the training information includes at least one of the following:
  • the receiving method of the third information.
  • the reporting method of the second information includes the content of the second information, wherein the content of the second information includes at least one of the following:
  • the first processing result, wherein the first processing result is information in floating-point format before quantization;
  • information in floating-point format obtained by sequentially quantizing and dequantizing the first processing result.
  • the first quantization information satisfies at least one of the following:
  • selected by the terminal and reported to the network side device
  • the device 400 for updating the AI network model further includes:
  • a fourth receiving module configured to receive first indication information from the network side device
  • a second determining module configured to determine the first quantization information according to the first indication information
  • the first indication information indicates at least one of the following:
  • a first identifier, wherein the first identifier corresponds to the first quantization information;
  • the second identifier is used to identify a quantization codebook in a quantization codebook pool, and the first quantization information includes a quantization codebook corresponding to the second identifier;
  • a third identifier is used to identify a quantization codebook group, the quantization codebook group includes at least two quantization codebooks, and the first quantization information includes a quantization codebook in the quantization codebook group corresponding to the third identifier.
  • the device 400 for updating the AI network model further includes:
  • the third sending module is configured to send second indication information to the network side device, wherein the second indication information indicates at least one of the following:
  • a first identifier, wherein the first identifier corresponds to the first quantization information;
  • the second identifier is used to identify a quantization codebook in a quantization codebook pool, and the first quantization information includes a quantization codebook corresponding to the second identifier;
  • the third identifier is used to identify a quantization codebook group, where the quantization codebook group includes at least two quantization codebooks, and the first quantization information includes a quantization codebook in the quantization codebook group corresponding to the third identifier;
  • a fourth identifier, wherein the fourth identifier is used to identify a target quantization codebook in the quantization codebook group corresponding to the third identifier, and the first quantization information includes the target quantization codebook.
  • the device 400 for updating the AI network model further includes:
  • a third updating module configured to update the quantization codebook, wherein the first quantization information includes an updated quantization codebook
  • the fourth sending module is configured to send third indication information to the network side device, wherein the third indication information indicates the updated quantization codebook or an identifier of the updated quantization codebook.
  • the third updating module is specifically configured to perform at least one of the following:
  • the quantization codebook is updated based on the instruction of the network side device.
  • the device 400 for updating the AI network model further includes:
  • a fifth sending module configured to send target request information to the network side device, wherein the target request information is used to request an update of a quantization codebook of the terminal;
  • the fifth receiving module is used to receive target response information from the network side device, where the target response information is used to allow the quantization codebook of the terminal to be updated.
  • the device 400 for updating the AI network model further includes:
  • the sixth receiving module is configured to receive fourth indication information, wherein the fourth indication information indicates to update the quantization codebook.
  • the terminal is a terminal in a target group, the target group includes at least two terminals, and the first AI network models of the terminals in the target group correspond to the same second AI network model.
  • the device 400 for updating the AI network model further includes:
  • the seventh receiving module is configured to receive fifth indication information, wherein the fifth indication information indicates updating the quantization codebooks of all terminals in the target group.
  • the device 400 for updating the AI network model further includes:
  • a fourth updating module configured to update the quantization information according to the fifth indication information
  • a sixth sending module is configured to send an updated quantization codebook to the network side device.
  • the device 400 for updating the AI network model further includes:
  • an eighth receiving module configured to receive a target quantization codebook from the network side device, wherein the target quantization codebook is a quantization codebook determined according to quantization codebooks of all terminals in the target group;
  • the third determining module is configured to determine that the first quantization information includes the target quantization codebook.
  • the first quantization information of all terminals in the target group is the same.
  • the device 400 for updating the AI network model further includes:
  • a seventh sending module is configured to send target capability information to the network side device, where the target capability information indicates at least one of the following:
  • the third information is information quantized based on the second quantization information
  • the first updating module 404 is specifically configured to:
  • the second quantization information includes a target quantization bit number, and the target quantization bit number is indicated by the network side device or agreed upon by a protocol.
  • the device for updating the AI network model in the embodiment of the present application can be an electronic device, such as an electronic device with an operating system, or a component in an electronic device, such as an integrated circuit or a chip.
  • the electronic device can be a terminal, or it can be other devices other than a terminal.
  • the terminal can include but is not limited to the types of terminal 11 listed above, and other devices can be servers, network attached storage (NAS), etc., which are not specifically limited in the embodiment of the present application.
  • the device 400 for updating the AI network model provided in the embodiment of the present application can implement each process implemented by the terminal in the method embodiment shown in Figure 2, and can achieve the same beneficial effects. To avoid repetition, it will not be described here.
  • an apparatus for updating an AI network model provided in an embodiment of the present application may be a network side device
  • the device 500 for updating the AI network model may include the following modules:
  • a second receiving module 501 is used to receive second information from a terminal or receive second information and second channel information from a terminal, wherein the second information is information obtained by quantizing the first information, the first information is related to a first processing result of the first AI network model of the terminal on the first channel information, and the second channel information is related to the first channel information;
  • a first determination module 503, configured to determine third channel information based on a second processing result of the fourth information by a second AI network model
  • a second updating module 504, configured to update the second AI network model and determine third information according to the third channel information and the second channel information
  • the second sending module 505 is configured to send the third information to the terminal.
  • the first quantization information includes at least one of the following:
  • the quantization method includes scalar quantization and/or vector quantization
  • the device 500 for updating the AI network model further includes:
  • an eighth sending module configured to send training information to the terminal, where the training information includes at least one of the following:
  • the receiving method of the third information.
  • the reporting method of the second information includes the content of the second information, wherein the content of the second information includes at least one of the following:
  • the first processing result, wherein the first processing result is information in floating-point format before quantization;
  • information in floating-point format obtained by sequentially quantizing and dequantizing the first processing result.
  • the first quantization information satisfies at least one of the following:
  • selected by the terminal and reported to the network side device
  • the device 500 for updating the AI network model further includes:
  • a ninth sending module configured to send first indication information to the terminal, wherein the first indication information indicates at least one of the following:
  • a first identifier, wherein the first identifier corresponds to the first quantization information;
  • the second identifier is used to identify a quantization codebook in a quantization codebook pool, and the first quantization information includes a quantization codebook corresponding to the second identifier;
  • a third identifier is used to identify a quantization codebook group, the quantization codebook group includes at least two quantization codebooks, and the first quantization information includes a quantization codebook in the quantization codebook group corresponding to the third identifier.
  • the device 500 for updating the AI network model further includes:
  • a ninth receiving module configured to receive second indication information from the terminal
  • a fourth determining module configured to determine the first quantization information according to the second indication information
  • the second indication information indicates at least one of the following:
  • a first identifier, wherein the first identifier corresponds to the first quantization information;
  • the second identifier is used to identify a quantization codebook in a quantization codebook pool, and the first quantization information includes a quantization codebook corresponding to the second identifier;
  • the third identifier is used to identify a quantization codebook group, where the quantization codebook group includes at least two quantization codebooks, and the first quantization information includes a quantization codebook in the quantization codebook group corresponding to the third identifier;
  • a fourth identifier, wherein the fourth identifier is used to identify a target quantization codebook in the quantization codebook group corresponding to the third identifier, and the first quantization information includes the target quantization codebook.
  • the device 500 for updating the AI network model further includes:
  • a tenth receiving module is configured to receive third indication information from the terminal, wherein the third indication information indicates an updated quantization codebook of the terminal or an identifier of the updated quantization codebook.
  • the device 500 for updating the AI network model further includes:
  • an eleventh receiving module configured to receive target request information from the terminal, wherein the target request information is used to request an update of a quantization codebook of the terminal;
  • a tenth sending module is used to send target response information to the terminal, where the target response information is used to allow updating of a quantization codebook of the terminal.
  • the device 500 for updating the AI network model further includes:
  • An eleventh sending module is configured to send fourth indication information to the terminal, wherein the fourth indication information instructs the terminal to update the quantization codebook.
  • the second receiving module 501 is specifically configured to:
  • Second information is received from each terminal in a target group, wherein the target group includes at least two terminals, and the first AI network models of the terminals in the target group correspond to the same second AI network model.
  • the device 500 for updating the AI network model further includes:
  • the twelfth sending module is configured to send fifth indication information to each terminal in the target group, wherein the fifth indication information instructs all terminals in the target group to update their respective quantization codebooks.
  • the device 500 for updating the AI network model further includes:
  • the twelfth receiving module is configured to receive the updated quantization codebook of each terminal in the target group.
  • the device 500 for updating the AI network model further includes:
  • a fifth determination module configured to determine a target quantization codebook according to quantization codebooks of all terminals in the target group
  • a thirteenth sending module is configured to send the target quantization codebook to each terminal in the target group, wherein the first quantization information includes the target quantization codebook.
  • the first quantization information of all terminals in the target group is the same.
  • the device 500 for updating the AI network model further includes:
  • a thirteenth receiving module used to receive target capability information from the terminal
  • a sixth determining module configured to determine the first quantization information according to the target capability information
  • the target capability information indicates at least one of the following:
  • the first determining module 503 is specifically configured to:
  • the fourth information is quantized based on the second quantization information to obtain the third information.
  • the second quantization information includes a target quantization bit number, and the target quantization bit number is indicated by the network side device or agreed upon by a protocol.
  • the device 500 for updating the AI network model provided in the embodiment of the present application can implement each process implemented by the network-side device in the method embodiment shown in Figure 3, and can achieve the same beneficial effects. To avoid repetition, it will not be described here.
  • the embodiment of the present application further provides a communication device 600, including a processor 601 and a memory 602, wherein the memory 602 stores a program or instruction that can be run on the processor 601.
  • the communication device 600 is a terminal
  • the program or instruction is executed by the processor 601 to implement the various steps of the method embodiment shown in FIG2, and the same technical effect can be achieved.
  • the communication device 600 is a network side device
  • the program or instruction is executed by the processor 601 to implement the various steps of the method embodiment shown in FIG3, and the same technical effect can be achieved. To avoid repetition, it will not be repeated here.
  • the embodiment of the present application also provides a terminal, including a processor and a communication interface, wherein the processor is used to quantize the first information according to the first quantization information to obtain the second information, wherein the terminal has a first AI network model, and the first information is related to the first processing result of the first AI network model on the first channel information; the communication interface is used to send the second information to the network side device or send the second information and second channel information, and the second channel information is related to the first channel information; the communication interface is also used to receive third information from the network side device.
  • the processor is used to quantize the first information according to the first quantization information to obtain the second information
  • the terminal has a first AI network model, and the first information is related to the first processing result of the first AI network model on the first channel information
  • the communication interface is used to send the second information to the network side device or send the second information and second channel information, and the second channel information is related to the first channel information
  • the communication interface is also used to receive third information from the network side device.
  • the third information is determined according to the third channel information and the second channel information, the third channel information is related to the second processing result of the second AI network model on the fourth information, and the fourth information is information obtained by dequantizing the second information; the processor is also used to update the first AI network model according to the third information.
  • FIG7 is a schematic diagram of the hardware structure of a terminal implementing an embodiment of the present application.
  • the terminal 700 includes, but is not limited to, at least some of the following components: a radio frequency unit 701, a network module 702, an audio output unit 703, an input unit 704, a sensor 705, a display unit 706, a user input unit 707, an interface unit 708, a memory 709, and a processor 710.
  • the terminal 700 may also include a power source (such as a battery) for supplying power to each component, and the power source may be logically connected to the processor 710 through a power management system, so as to implement functions such as managing charging, discharging, and power consumption management through the power management system.
  • the terminal structure shown in FIG7 does not constitute a limitation on the terminal, and the terminal may include more or fewer components than shown in the figure, or combine certain components, or arrange components differently, which will not be described in detail here.
  • the input unit 704 may include a graphics processing unit (GPU) 7041 and a microphone 7042, and the GPU 7041 processes image data of a still picture or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode.
  • the display unit 706 may include a display panel 7061, and the display panel 7061 may be configured in the form of a liquid crystal display, an organic light emitting diode, etc.
  • the user input unit 707 includes a touch panel 7071 and at least one of other input devices 7072.
  • the touch panel 7071 is also called a touch screen.
  • the touch panel 7071 may include two parts: a touch detection device and a touch controller.
  • Other input devices 7072 may include, but are not limited to, a physical keyboard, function keys (such as a volume control key, a switch key, etc.), a trackball, a mouse, and a joystick, which will not be repeated here.
  • after receiving downlink data from a network side device, the RF unit 701 can transmit the data to the processor 710 for processing; in addition, the RF unit 701 can send uplink data to the network side device.
  • the RF unit 701 includes but is not limited to an antenna, an amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, etc.
  • the memory 709 can be used to store software programs or instructions and various data.
  • the memory 709 may mainly include a first storage area for storing programs or instructions and a second storage area for storing data, wherein the first storage area may store an operating system, an application program or instruction required for at least one function (such as a sound playback function, an image playback function, etc.), etc.
  • the memory 709 may include a volatile memory or a non-volatile memory, or the memory 709 may include both volatile and non-volatile memories.
  • the non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or a flash memory.
  • the volatile memory may be a random access memory (RAM), a static random access memory (SRAM), a dynamic random access memory (DRAM), a synchronous dynamic random access memory (SDRAM), a double data rate synchronous dynamic random access memory (DDR SDRAM), an enhanced synchronous dynamic random access memory (ESDRAM), a synch-link dynamic random access memory (SLDRAM), or a direct rambus random access memory (DRRAM).
  • the memory 709 in the embodiment of the present application includes but is not limited to these and any other suitable types of memory.
  • the processor 710 may include one or more processing units; optionally, the processor 710 integrates an application processor and a modem processor, wherein the application processor mainly processes operations related to an operating system, a user interface, and application programs, and the modem processor mainly processes wireless communication signals, such as a baseband processor. It is understandable that the modem processor may not be integrated into the processor 710.
  • the processor 710 is configured to perform quantization processing on the first information according to the first quantization information to obtain second information, wherein the terminal has a first AI network model, and the first information is related to a first processing result of the first AI network model on the first channel information;
  • a radio frequency unit 701 is configured to send the second information to a network side device or send the second information and second channel information, where the second channel information is related to the first channel information;
  • the radio frequency unit 701 is further configured to receive third information from the network side device, wherein the third information is determined according to the third channel information and the second channel information, the third channel information is related to the second processing result of the second AI network model on the fourth information, and the fourth information is information obtained by dequantizing the second information;
  • the processor 710 is further configured to update the first AI network model according to the third information.
  • the first quantization information includes at least one of the following:
  • the quantization method includes scalar quantization and/or vector quantization
  • the radio frequency unit 701 is further configured to receive training information from the network side device, where the training information includes at least one of the following:
  • the receiving method of the third information.
  • the reporting method of the second information includes the content of the second information, wherein the content of the second information includes at least one of the following:
  • the first processing result, where the first processing result is information in floating-point format before quantization;
  • information in floating-point format obtained by sequentially quantizing and dequantizing the first processing result.
  • the first quantization information satisfies at least one of the following:
  • selected by the terminal and reported to the network side device
  • before the processor 710 performs the quantization processing on the first information according to the first quantization information:
  • the radio frequency unit 701 is further configured to receive first indication information from the network side device;
  • the processor 710 is further configured to determine the first quantization information according to the first indication information
  • the first indication information indicates at least one of the following:
  • a first identifier, where the first identifier corresponds to the first quantization information;
  • the second identifier is used to identify a quantization codebook in a quantization codebook pool, and the first quantization information includes a quantization codebook corresponding to the second identifier;
  • a third identifier is used to identify a quantization codebook group, the quantization codebook group includes at least two quantization codebooks, and the first quantization information includes a quantization codebook in the quantization codebook group corresponding to the third identifier.
  • the radio frequency unit 701 is further configured to send second indication information to the network side device, wherein the second indication information indicates at least one of the following:
  • a first identifier, where the first identifier corresponds to the first quantization information;
  • the second identifier is used to identify a quantization codebook in a quantization codebook pool, and the first quantization information includes a quantization codebook corresponding to the second identifier;
  • the third identifier is used to identify a quantization codebook group, where the quantization codebook group includes at least two quantization codebooks, and the first quantization information includes a quantization codebook in the quantization codebook group corresponding to the third identifier;
  • a fourth identifier where the fourth identifier is used to identify a target quantization codebook in the quantization codebook group corresponding to the third identifier, and the first quantization information includes the target quantization codebook.
  • the processor 710 is further configured to update the quantization codebook, wherein the first quantization information includes an updated quantization codebook;
  • the radio frequency unit 701 is further configured to send third indication information to the network side device, wherein the third indication information indicates the updated quantization codebook or an identifier of the updated quantization codebook.
  • the updating of the quantization codebook performed by the processor 710 includes at least one of the following:
  • the quantization codebook is updated based on the instruction of the network side device.
  • the radio frequency unit 701 is further configured to:
  • sending target request information to the network side device, where the target request information is used to request an update of a quantization codebook of the terminal;
  • the radio frequency unit 701 is further configured to receive fourth indication information, wherein the fourth indication information indicates updating the quantization codebook.
  • the terminal is a terminal in a target group, the target group includes at least two terminals, and the first AI network models of the terminals in the target group correspond to the same second AI network model.
  • the radio frequency unit 701 is further used to receive fifth indication information, wherein the fifth indication information indicates to update the quantization codebook of all terminals in the target group.
  • the processor 710 is further configured to update the quantization codebook according to the fifth indication information
  • the radio frequency unit 701 is further configured to send an updated quantization codebook to the network side device.
  • the radio frequency unit 701 is further configured to receive a target quantization codebook from the network side device, wherein the target quantization codebook is a quantization codebook determined according to quantization codebooks of all terminals in the target group;
  • the processor 710 is further configured to determine that the first quantization information includes the target quantization codebook.
  • the first quantization information of all terminals in the target group is the same.
  • the radio frequency unit 701 is further configured to send target capability information to the network side device, where the target capability information indicates at least one of the following:
  • the second quantization information includes a target quantization bit number, and the target quantization bit number is indicated by the network side device or agreed upon by a protocol.
  • the terminal 700 provided in the embodiment of the present application can implement the various processes performed by the device 400 for updating the AI network model as shown in Figure 4, and can achieve the same beneficial effects. To avoid repetition, they will not be described here.
  • the embodiment of the present application also provides a network side device, including a processor and a communication interface, wherein the communication interface is configured to receive second information from a terminal, or second information and second channel information from a terminal, wherein the second information is obtained by quantizing first information, the first information is related to a first processing result of a first AI network model of the terminal on first channel information, and the second channel information is related to the first channel information; the processor is configured to dequantize the second information according to first quantization information to obtain fourth information, determine third channel information based on a second processing result of a second AI network model on the fourth information, update the second AI network model according to the third channel information and the second channel information, and determine third information; the communication interface is further configured to send the third information to the terminal.
  • the network side device embodiment can implement the various processes performed by the device 500 for updating the AI network model as shown in Figure 5, and can achieve the same technical effect, which will not be repeated here.
  • the embodiment of the present application also provides a network side device.
  • the network side device 800 includes: an antenna 801, a radio frequency device 802, a baseband device 803, a processor 804 and a memory 805.
  • Antenna 801 is connected to the radio frequency device 802.
  • the radio frequency device 802 receives information through the antenna 801 and sends the received information to the baseband device 803 for processing.
  • the baseband device 803 processes the information to be sent and sends it to the radio frequency device 802.
  • the radio frequency device 802 processes the received information and sends it out through the antenna 801.
  • the method executed by the network-side device in the above embodiment may be implemented in the baseband device 803, which includes a baseband processor.
  • the baseband device 803 may include, for example, at least one baseband board, on which multiple chips are arranged, as shown in Figure 8, one of which is, for example, a baseband processor, which is connected to the memory 805 through a bus interface to call the program in the memory 805 and execute the network device operations shown in the above method embodiment.
  • the network side device may also include a network interface 806, which is, for example, a Common Public Radio Interface (CPRI).
  • the network side device 800 of the embodiment of the present application also includes: instructions or programs stored in the memory 805 and executable on the processor 804.
  • the processor 804 calls the instructions or programs in the memory 805 to execute the methods executed by the modules shown in Figure 5 and achieve the same technical effect. To avoid repetition, it will not be repeated here.
  • An embodiment of the present application also provides a readable storage medium, on which a program or instruction is stored.
  • the program or instruction is executed by a processor, each process of the method embodiment shown in Figure 2 or Figure 3 is implemented, and the same technical effect can be achieved. To avoid repetition, it will not be repeated here.
  • the processor is the processor in the terminal described in the above embodiment.
  • the readable storage medium includes a computer readable storage medium, such as a computer read-only memory ROM, a random access memory RAM, a magnetic disk or an optical disk.
  • An embodiment of the present application further provides a chip, which includes a processor and a communication interface, wherein the communication interface is coupled to the processor, and the processor is used to run programs or instructions to implement the various processes of the method embodiment shown in Figure 2 or Figure 3, and can achieve the same technical effect. To avoid repetition, it will not be repeated here.
  • the chip mentioned in the embodiments of the present application can also be called a system-level chip, a system chip, a chip system or a system-on-chip chip, etc.
  • the embodiments of the present application further provide a computer program/program product, which is stored in a storage medium, and is executed by at least one processor to implement the various processes of the method embodiment shown in Figure 2 or Figure 3, and can achieve the same technical effect. To avoid repetition, it will not be repeated here.
  • the present application also provides a communication system, including a terminal and a network side device, wherein the terminal can be used to execute the steps of the method for updating the AI network model as shown in FIG. 2, and the network side device can be used to execute the steps of the method for updating the AI network model as shown in FIG. 3.
  • the technical solution of the present application can be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disk), and includes a number of instructions for enabling a terminal (which can be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to execute the methods described in each embodiment of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

The present application discloses a method and apparatus for updating an AI network model, and a communication device, belonging to the field of communication technology. The method of the embodiments of the present application includes: a terminal quantizes first information according to first quantization information to obtain second information (201), where the terminal has a first AI network model and the first information is related to a first processing result of the first AI network model on first channel information; the terminal sends the second information, or the second information and second channel information, to a network side device (202), where the second channel information is related to the first channel information; the terminal receives third information from the network side device (203), where the third information is determined according to third channel information and the second channel information, the third channel information is related to a second processing result of a second AI network model on fourth information, and the fourth information is obtained by dequantizing the second information; the terminal updates the first AI network model according to the third information (204).

Description

Method and apparatus for updating an AI network model, and communication device
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to Chinese Patent Application No. 202211426189.1 filed in China on November 14, 2022, the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present application belongs to the field of communication technology, and specifically relates to a method and apparatus for updating an AI network model, and a communication device.
BACKGROUND
In the related art, methods for transmitting channel feature information with the aid of an artificial intelligence (AI) network model have been studied.
The AI network model includes an encoding AI network model applied to a terminal and a decoding AI network model applied to a base station, where the encoding AI network model needs to match the decoding AI network model. The encoding AI network model and the decoding AI network model can be trained by online joint training: the terminal side computes forward information of the encoding AI network model and sends it to the base station, and the base station computes backward gradient information and sends it to the terminal, so that the terminal and the base station update the encoding AI network model and the decoding AI network model according to the forward information and the backward gradient information respectively; this loop is iterated until mutually matched encoding and decoding AI network models are obtained.
Specifically, the encoding AI network model can be divided into an encoding part and a quantization part, and the encoding method includes scalar encoding and vector encoding. The terminal obtains forward information in floating-point format from the encoding part, converts the forward information into bitstream form through the quantization part, and then transmits the forward information in bitstream form to the base station. The decoding AI network model can be divided into a dequantization part and a decoding part. When receiving the forward information in bitstream form, the base station converts it into forward information in floating-point format through the dequantization part, computes backward gradient information in floating-point format from the floating-point forward information through the decoding part, converts the backward gradient information into bitstream format, and then transmits it to the terminal.
Since one base station can obtain channel state information (CSI) feedback from multiple terminals, the decoding AI network model of the base station needs to be jointly trained with the encoding AI network models of multiple terminals. In this case, different terminals may use different quantization methods, or the base station may not know the quantization method of each terminal, so the base station and the terminals cannot perform online joint training.
SUMMARY
Embodiments of the present application provide a method and apparatus for updating an AI network model, and a communication device, so that a terminal quantizes, in a specific quantization manner, the forward information produced in the process of jointly training the encoding and decoding AI network models, and transmits the quantized forward information to the base station. The base station can thus accurately dequantize the forward information, compute the corresponding backward gradient information accordingly, and transmit it to the terminal, so that online joint training can proceed smoothly, and the degree of matching between the trained encoding AI network model and decoding AI network model is improved.
According to a first aspect, a method for updating an AI network model is provided, the method including:
quantizing, by a terminal, first information according to first quantization information to obtain second information, where the terminal has a first AI network model, and the first information is related to a first processing result of the first AI network model on first channel information;
sending, by the terminal, the second information, or the second information and second channel information, to a network side device, where the second channel information is related to the first channel information;
receiving, by the terminal, third information from the network side device, where the third information is determined according to third channel information and the second channel information, the third channel information is related to a second processing result of a second AI network model on fourth information, and the fourth information is obtained by dequantizing the second information;
updating, by the terminal, the first AI network model according to the third information.
According to a second aspect, an apparatus for updating an AI network model is provided, applied to a terminal, the apparatus including:
a first processing module, configured to quantize first information according to first quantization information to obtain second information, where the terminal has a first AI network model, and the first information is related to a first processing result of the first AI network model on first channel information;
a first sending module, configured to send the second information, or the second information and second channel information, to a network side device, where the second channel information is related to the first channel information;
a first receiving module, configured to receive third information from the network side device, where the third information is determined according to third channel information and the second channel information, the third channel information is related to a second processing result of a second AI network model on fourth information, and the fourth information is obtained by dequantizing the second information;
a first updating module, configured to update the first AI network model according to the third information.
According to a third aspect, a method for updating an AI network model is provided, including:
receiving, by a network side device, second information from a terminal, or second information and second channel information from a terminal, where the second information is obtained by quantizing first information, the first information is related to a first processing result of a first AI network model of the terminal on first channel information, and the second channel information is related to the first channel information;
dequantizing, by the network side device, the second information according to first quantization information to obtain fourth information;
determining, by the network side device, third channel information based on a second processing result of a second AI network model on the fourth information;
updating, by the network side device, the second AI network model according to the third channel information and the second channel information, and determining third information;
sending, by the network side device, the third information to the terminal.
According to a fourth aspect, an apparatus for updating an AI network model is provided, applied to a network side device, the apparatus including:
a second receiving module, configured to receive second information from a terminal, or second information and second channel information from a terminal, where the second information is obtained by quantizing first information, the first information is related to a first processing result of a first AI network model of the terminal on first channel information, and the second channel information is related to the first channel information;
a second processing module, configured to dequantize the second information according to first quantization information to obtain fourth information;
a first determining module, configured to determine third channel information based on a second processing result of a second AI network model on the fourth information;
a second updating module, configured to update the second AI network model according to the third channel information and the second channel information, and to determine third information;
a second sending module, configured to send the third information to the terminal.
According to a fifth aspect, a communication device is provided, the communication device including a processor and a memory, where the memory stores a program or instructions executable on the processor, and the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect or the third aspect.
According to a sixth aspect, a terminal is provided, including a processor and a communication interface, where the processor is configured to quantize first information according to first quantization information to obtain second information, where the terminal has a first AI network model, and the first information is related to a first processing result of the first AI network model on first channel information; the communication interface is configured to send the second information, or the second information and second channel information, to a network side device, where the second channel information is related to the first channel information; the communication interface is further configured to receive third information from the network side device, where the third information is determined according to third channel information and the second channel information, the third channel information is related to a second processing result of a second AI network model on fourth information, and the fourth information is obtained by dequantizing the second information; the processor is further configured to update the first AI network model according to the third information.
According to a seventh aspect, a network side device is provided, including a processor and a communication interface, where the communication interface is configured to receive second information from a terminal, or second information and second channel information from a terminal, where the second information is obtained by quantizing first information, the first information is related to a first processing result of a first AI network model of the terminal on first channel information, and the second channel information is related to the first channel information; the processor is configured to dequantize the second information according to first quantization information to obtain fourth information, determine third channel information based on a second processing result of a second AI network model on the fourth information, update the second AI network model according to the third channel information and the second channel information, and determine third information; the communication interface is further configured to send the third information to the terminal.
According to an eighth aspect, a communication system is provided, including a terminal and a network side device, where the terminal can be used to execute the steps of the method for updating an AI network model according to the first aspect, and the network side device can be used to execute the steps of the method for updating an AI network model according to the third aspect.
According to a ninth aspect, a readable storage medium is provided, where a program or instructions are stored on the readable storage medium, and the program or instructions, when executed by a processor, implement the steps of the method according to the first aspect or the steps of the method according to the third aspect.
According to a tenth aspect, a chip is provided, the chip including a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to run a program or instructions to implement the method according to the first aspect or the method according to the third aspect.
According to an eleventh aspect, a computer program/program product is provided, where the computer program/program product is stored in a storage medium, and the computer program/program product is executed by at least one processor to implement the steps of the method for updating an AI network model according to the first aspect, or the steps of the method for updating an AI network model according to the third aspect.
In the embodiments of the present application, the terminal can quantize, according to the first quantization information, the forward information output by the encoding AI network model, i.e., the first information, so as to convert the forward information in floating-point format into a bitstream, and send the bitstream to the base station; the network side device can dequantize the bitstream into forward information in floating-point format according to the first quantization information, and compute, based on the decoding AI network model, the backward gradient information of the decoding AI network model and the backward gradient information of the encoding AI network model corresponding to this forward information. Finally, the network side device can update the decoding AI network model based on the backward gradient information of the decoding AI network model, quantize the backward gradient information of the encoding AI network model, and send it to the terminal, so that the terminal updates the encoding AI network model accordingly, thereby realizing online joint training of the encoding AI network model and the decoding AI network model. This solves the problem that, because the network side device does not know the first quantization information used by the terminal, it cannot dequantize the forward information in bitstream form reported by the terminal, so that online joint training of the encoding and decoding AI network models cannot be carried out.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic structural diagram of a wireless communication system to which embodiments of the present application are applicable;
FIG. 2 is a flowchart of a method for updating an AI network model provided by an embodiment of the present application;
FIG. 3 is a flowchart of a method for updating an AI network model provided by an embodiment of the present application;
FIG. 4 is a schematic structural diagram of an apparatus for updating an AI network model provided by an embodiment of the present application;
FIG. 5 is a schematic structural diagram of an apparatus for updating an AI network model provided by an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a communication device provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of a hardware structure of a terminal provided by an embodiment of the present application;
FIG. 8 is a schematic structural diagram of a network side device provided by an embodiment of the present application.
DETAILED DESCRIPTION
The technical solutions in the embodiments of the present application will be described clearly below with reference to the accompanying drawings in the embodiments of the present application. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application fall within the protection scope of the present application.
The terms "first", "second" and the like in the specification and claims of the present application are used to distinguish similar objects, rather than to describe a specific order or sequence. It should be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the present application can be implemented in orders other than those illustrated or described here; moreover, the objects distinguished by "first" and "second" are usually of one class, and the number of objects is not limited, e.g., there may be one or more first objects. In addition, "and/or" in the specification and claims indicates at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
It is worth pointing out that the techniques described in the embodiments of the present application are not limited to Long Term Evolution (LTE)/LTE-Advanced (LTE-A) systems, and can also be used in other wireless communication systems, such as Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Frequency Division Multiple Access (FDMA), Orthogonal Frequency Division Multiple Access (OFDMA), Single-carrier Frequency Division Multiple Access (SC-FDMA) and other systems. The terms "system" and "network" in the embodiments of the present application are often used interchangeably, and the described techniques can be used for the systems and radio technologies mentioned above as well as for other systems and radio technologies. The following description describes a New Radio (NR) system for exemplary purposes, and NR terminology is used in most of the following description, but these techniques can also be applied to applications other than NR system applications, such as 6th Generation (6G) communication systems.
FIG. 1 shows a block diagram of a wireless communication system to which embodiments of the present application are applicable. The wireless communication system includes a terminal 11 and a network side device 12. The terminal 11 may be a mobile phone, a tablet personal computer, a laptop computer (also called a notebook computer), a personal digital assistant (PDA), a palmtop computer, a netbook, an ultra-mobile personal computer (UMPC), a mobile Internet device (MID), an augmented reality (AR)/virtual reality (VR) device, a robot, a wearable device, vehicle user equipment (VUE), pedestrian user equipment (PUE), a smart home device (a home device with a wireless communication function, such as a refrigerator, a television, a washing machine, or furniture), a game console, a personal computer (PC), a teller machine, a self-service machine, or other terminal side device; the wearable device includes a smart watch, a smart band, smart earphones, smart glasses, smart jewelry (a smart bracelet, smart bangle, smart ring, smart necklace, smart anklet, smart ankle chain, etc.), a smart wristband, smart clothing, and the like. It should be noted that the embodiments of the present application do not limit the specific type of the terminal 11. The network side device 12 may include an access network device or a core network device, where the access network device may also be called a radio access network device, a radio access network (RAN), a radio access network function, or a radio access network unit. The access network device may include a base station, a wireless local area network (WLAN) access point, a WiFi node, or the like; the base station may be called a Node B, an evolved Node B (eNB), an access point, a base transceiver station (BTS), a radio base station, a radio transceiver, a basic service set (BSS), an extended service set (ESS), a home Node B, a home evolved Node B, a transmitting receiving point (TRP), or some other suitable term in the field. As long as the same technical effect is achieved, the base station is not limited to a specific technical term. It should be noted that in the embodiments of the present application, only the base station in an NR system is taken as an example for introduction, and the specific type of the base station is not limited.
It is known from information theory that accurate channel state information (CSI) is crucial to channel capacity. Especially for multi-antenna systems, the transmitting end can optimize signal transmission according to the CSI so that it better matches the channel state. For example, a channel quality indicator (CQI) can be used to select an appropriate modulation and coding scheme (MCS) to achieve link adaptation; a precoding matrix indicator (PMI) can be used to implement eigen beamforming to maximize the strength of the received signal, or to suppress interference (such as inter-cell interference and inter-user interference). Therefore, since multi-antenna technology (multi-input multi-output, MIMO) was proposed, CSI acquisition has always been a research hotspot.
Usually, the base station sends a CSI reference signal (CSI-RS) on certain time-frequency resources in a certain slot; the terminal performs channel estimation according to the CSI-RS, computes the channel information on this slot, and feeds back the PMI to the base station through a codebook. The base station combines the channel information according to the codebook information fed back by the terminal, and before the next CSI report, the base station performs data precoding and multi-user scheduling accordingly.
To further reduce CSI feedback overhead, the terminal can change from reporting a PMI per subband to reporting a PMI per delay. Since the channel in the delay domain is more concentrated, the PMIs of fewer delays can approximately represent the PMIs of all subbands, i.e., the delay-domain information is compressed before reporting.
Similarly, to reduce overhead, the base station can precode the CSI-RS in advance and send the encoded CSI-RS to the terminal. What the terminal sees is the channel corresponding to the encoded CSI-RS; the terminal only needs to select several ports with larger strength among the ports indicated by the network side and report the coefficients corresponding to these ports.
Further, to better compress the channel information, neural network or machine learning methods can be used.
Artificial intelligence is currently widely applied in various fields. There are multiple implementations of an AI module, such as a neural network, a decision tree, a support vector machine, and a Bayesian classifier. The present application takes a neural network as an example for description, but does not limit the specific type of the AI module.
The parameters of a neural network are optimized by an optimization algorithm. An optimization algorithm is a class of algorithms that can help us minimize or maximize an objective function (sometimes also called a loss function). The objective function is often a mathematical combination of the model parameters and the data. For example, given data X and its corresponding label Y, we construct a neural network model f(.). With the model, the predicted output f(x) can be obtained from the input x, and the difference between the predicted value and the true value, (f(x)-Y), can be computed; this is the loss function. Our purpose is to find suitable weights and biases that minimize the value of the above loss function; the smaller the loss, the closer our model is to the real situation.
Current common optimization algorithms are basically based on the error back propagation (BP) algorithm. The basic idea of the BP algorithm is that the learning process consists of two processes: forward propagation of the signal and backward propagation of the error. In forward propagation, an input sample enters from the input layer, is processed layer by layer through the hidden layers, and is passed to the output layer. If the actual output of the output layer does not match the expected output, the process turns to the backward propagation phase of the error. Error back propagation passes the output error back layer by layer, in some form, through the hidden layers to the input layer, and apportions the error to all units of each layer, thereby obtaining the error signal of each layer's units; this error signal serves as the basis for correcting the weight of each unit. This process of adjusting the weights of each layer through forward signal propagation and backward error propagation is carried out over and over again. The process of continuously adjusting the weights is the learning and training process of the network, which continues until the error of the network output is reduced to an acceptable level, or until a preset number of learning iterations is reached.
Common optimization algorithms include gradient descent, stochastic gradient descent (SGD), mini-batch gradient descent, momentum, Nesterov (i.e., stochastic gradient descent with momentum), adaptive gradient descent (Adagrad), adaptive learning rate adjustment (Adadelta), root mean square prop (RMSprop), adaptive moment estimation (Adam), etc.
During error back propagation, these optimization algorithms all take the error/loss obtained from the loss function, compute the derivative/partial derivative of the current neuron, add influences such as the learning rate and the previous gradients/derivatives/partial derivatives to obtain the gradient, and pass the gradient to the previous layer.
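The layer-by-layer gradient propagation described above can be illustrated with a toy example. This is a minimal sketch under assumed conditions: a two-layer scalar network f(x) = w2 * relu(w1 * x), a squared-error loss, and a fixed learning rate; none of these specifics come from the embodiments themselves.

```python
# A minimal sketch of error back propagation (BP) with one SGD step.
# Assumptions (illustrative only): scalar two-layer network, squared loss.
def relu(z):
    return z if z > 0.0 else 0.0

def forward(x, w1, w2):
    h = relu(w1 * x)          # hidden-layer activation
    return h, w2 * h          # prediction f(x)

def bp_step(x, y, w1, w2, lr=0.1):
    h, pred = forward(x, w1, w2)
    err = pred - y                                   # loss = (pred - y)^2
    grad_w2 = 2.0 * err * h                          # dL/dw2 at the output layer
    grad_h = 2.0 * err * w2                          # error passed back to the hidden layer
    grad_w1 = grad_h * (x if w1 * x > 0 else 0.0)    # dL/dw1 via the chain rule
    return w1 - lr * grad_w1, w2 - lr * grad_w2      # SGD weight update

w1, w2 = 0.5, 0.5
x, y = 1.0, 2.0
_, before = forward(x, w1, w2)
w1, w2 = bp_step(x, y, w1, w2)
_, after = forward(x, w1, w2)
```

After one BP step the loss decreases, mirroring the iterative weight-adjustment process described above.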
The CSI compression and recovery procedure is as follows: the terminal estimates the CSI-RS, computes the channel information, passes the computed channel information or the originally estimated channel information through the encoding AI network model to obtain an encoding result, and sends the encoding result to the base station; the base station receives the encoded result, inputs it to the decoding AI network model, and recovers the channel information.
Specifically, in the neural-network-based CSI compression feedback scheme, the channel information is compressed and encoded at the terminal, the compressed content is sent to the base station, and the base station decodes the compressed content to recover the channel information. In this case, the decoding AI network model of the base station and the encoding AI network model of the terminal need to be jointly trained to achieve a reasonable degree of matching. The input of the encoding AI network model is the channel information, and the output is the encoded information, i.e., the channel feature information; the input of the decoding AI network model is the encoded information, and the output is the recovered channel information.
In the related art, there are the following three methods for training the encoding AI network model and the decoding AI network model:
1) training the encoding AI network model and the decoding AI network model on one side of the terminal and the network side device, and passing the training result to the other side;
2) training the encoding AI network model at the terminal and the decoding AI network model at the network side device, and then matching the independently trained encoding AI network model and decoding AI network model;
3) realizing online joint training of the encoding AI network model and the decoding AI network model through online interaction between the terminal and the network side device, i.e., the terminal sends the forward information in the joint training process to the network side device, and the network side device sends the corresponding backward information to the terminal, so that the network side device updates the decoding AI network model according to the forward information and the terminal updates the encoding AI network model according to the backward information.
The embodiments of the present application mainly propose improvements to the online joint training of the encoding AI network model and the decoding AI network model in method 3) above.
In the related art, when a base station has multiple users, the decoding AI network model of the base station needs to match the encoding AI network models of all users. In this case, each user computes its own forward information and sends it to the base station, and the base station computes the backward gradient information of each user and sends it to the corresponding user.
The AI network models in the embodiments of the present application include a first AI network model and a second AI network model.
The first AI network model may have a compression and/or encoding function, i.e., the first AI network model may be any one of an encoding AI network model, a compression and encoding AI network model, and a compression AI network model. Correspondingly, the second AI network model may have a decompression and/or decoding function, i.e., the second AI network model may be any one of a decoding AI network model, a decompression and decoding AI network model, and a decompression AI network model. For ease of description, the embodiments of the present application take the first AI network model being an encoding AI network model and the second AI network model being a decoding AI network model as an example, which does not constitute a specific limitation.
In an implementation, the encoding AI network model may in turn include an encoding part and a quantization part, i.e., the encoding AI network model may also be called a quantization AI network model or an encoding and quantization AI network model; correspondingly, the decoding AI network model may in turn include a decoding part and a dequantization part, i.e., the decoding AI network model may also be called a dequantization AI network model or a decoding and dequantization AI network model.
Of course, the quantization and dequantization in the embodiments of the present application may also be implemented in a non-AI manner, or by AI network models independent of the encoding AI network model and the decoding AI network model. For ease of description, the embodiments of the present application take as an example the case where the encoding AI network model includes an encoding part and a quantization part and the decoding AI network model includes a decoding part and a dequantization part, which does not constitute a specific limitation.
The channel feature information output by the encoding part and the input of the decoding part are data in floating-point format, while in data transmission, data in bitstream format is usually transmitted. In the embodiments of the present application, the terminal converts the floating-point data into bitstream data through quantization before transmission, and the network side device, after receiving the bitstream data, performs corresponding dequantization on the bitstream data to restore the floating-point data, so that the floating-point data can be input to the decoding part to obtain the corresponding backward information, where the backward information may also be called backward-pass information or backward gradient information, which is not specifically limited here.
In the related art, since quantization information such as the quantization codebook is also continuously trained and updated in the process of training the AI network model, the base station may not know the quantization information actually used by the terminal, so the base station cannot accurately dequantize the forward information in bitstream form sent by the terminal, and the online joint training of the encoding AI network model and the decoding AI network model cannot be realized.
In the embodiments of the present application, the terminal quantizes the forward information based on first quantization information known to both the terminal and the network side device, so that the network side device can accurately dequantize the forward information in bitstream form sent by the terminal; and/or, the network side device quantizes the backward information using a fixed quantization method or a quantization method known to both sides, so that the terminal can accurately dequantize the backward information in bitstream form sent by the network side device. Thus, the online joint training of the encoding AI network model and the decoding AI network model can be realized, and mutually matched encoding and decoding AI network models are finally obtained by training.
The method and apparatus for updating an AI network model and the communication device provided by the embodiments of the present application are described in detail below through some embodiments and their application scenarios with reference to the accompanying drawings.
Referring to FIG. 2, an embodiment of the present application provides a method for updating an AI network model, executed by a terminal. As shown in FIG. 2, the method for updating an AI network model executed by the terminal may include the following steps:
Step 201: the terminal quantizes first information according to first quantization information to obtain second information, where the terminal has a first AI network model, and the first information is related to a first processing result of the first AI network model on first channel information.
Step 202: the terminal sends the second information, or the second information and second channel information, to a network side device, where the second channel information is related to the first channel information.
Step 203: the terminal receives third information from the network side device, where the third information is determined according to third channel information and the second channel information, the third channel information is related to a second processing result of a second AI network model on fourth information, and the fourth information is obtained by dequantizing the second information.
Step 204: the terminal updates the first AI network model according to the third information.
The first quantization information is quantization information known to both the terminal and the network side device. For example, the first quantization information satisfies at least one of the following:
indicated by the network side device;
agreed upon by a protocol;
selected by the terminal and reported to the network side device;
associated with the first AI network model.
In an implementation, the first quantization information being associated with the first AI network model may mean that the first quantization information is trained together with the first AI network model. In this case, the terminal may report the number of the first AI network model, or the network side device may indicate or configure the initial AI network model of the first AI network model in advance. Thus, when the network side device learns which first AI network model the terminal uses, it also learns the first quantization information associated with that first AI network model.
In an implementation, the first AI network model is used to compress and/or encode the first channel information to obtain channel feature information, i.e., the first processing is compression and/or encoding, and the first processing result may be the channel feature information. In this case, the first information being related to the first processing result of the first AI network model on the first channel information may mean that the first information includes the channel feature information output by the first AI network model, or that the first information includes information obtained by further processing the channel feature information output by the first AI network model, such as segmented channel feature information.
In an implementation, the first information can be understood as forward information in floating-point format, and the second information may be forward information obtained by quantizing the forward information in floating-point format. The second information may be forward information in bitstream format or in floating-point format; for ease of description, the embodiments of the present application usually take the second information being forward information in bitstream format as an example.
The quantization method in the embodiments of the present application may include at least one of scalar quantization and vector quantization. Scalar quantization quantizes each floating-point number using a fixed number of bits, and a table lookup is needed to determine the value corresponding to each bit combination, so as to convert the floating-point number into the corresponding bit combination; during dequantization, a lookup based on the same table or rule is needed to dequantize the bit combination into the corresponding floating-point number. Vector quantization quantizes multiple floating-point numbers together into a bit stream of a certain length and needs to be performed based on a quantization codebook; likewise, dequantization needs the corresponding quantization codebook. For ease of description, in the embodiments of the present application, the table used by scalar quantization may be regarded as one quantization codebook of vector quantization.
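The two quantization methods described above can be sketched as follows. This is a minimal illustration under assumed conditions: uniform scalar quantization over [0, 1) and a hypothetical 2-bit codebook for 2-float vectors; the actual tables and codebooks in the embodiments are trained, not fixed like this.

```python
# Minimal sketch (illustrative assumptions: values in [0, 1), toy codebook).
# Scalar quantization: each float maps to a fixed-bit index via a table/rule.
def scalar_quantize(x, bits):
    levels = 1 << bits
    return min(int(x * levels), levels - 1)      # float -> bit combination

def scalar_dequantize(idx, bits):
    return (idx + 0.5) / (1 << bits)             # table lookup: index -> float

# Vector quantization: a group of floats maps to the index of the nearest
# codeword in a quantization codebook.
def vector_quantize(vec, codebook):
    dists = [sum((a - b) ** 2 for a, b in zip(vec, cw)) for cw in codebook]
    return min(range(len(codebook)), key=dists.__getitem__)

codebook = [[0.1, 0.1], [0.1, 0.9], [0.9, 0.1], [0.9, 0.9]]  # 2 bits per 2 floats
i = scalar_quantize(0.3, bits=2)
x_hat = scalar_dequantize(i, bits=2)             # round-trip introduces error
j = vector_quantize([0.2, 0.8], codebook)
```

The dequantizer must use the same table or codebook as the quantizer, which is exactly why the first quantization information has to be known to both sides.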
In the process of training the AI network model, the quantization codebook is also continuously trained and updated. In order to perform quantization at the terminal side and corresponding dequantization at the network side, both the terminal and the network side device need to know the quantization codebook in use. Therefore, after the terminal updates the quantization codebook, the quantization codebook needs to be synchronized with the network side device, otherwise the forward information cannot be transferred correctly. Likewise, the backward information is usually also sent in bitstream form through quantization, in which case a fixed quantization method or a quantization method known to both sides also needs to be used to quantize and dequantize the backward information.
The network side device dequantizes the second information according to the above first quantization information to obtain fourth information, where the fourth information may be forward information in floating-point format that is completely or partially identical to the first information. Then, the network side device decodes and/or decompresses the fourth information based on the second AI network model to recover the third channel information, and propagates the gradient difference between the third channel information and the second channel information backward to the input layer of the second AI network model to obtain the backward information of the first AI network model. For example, assume the encoding AI network model includes three layers A, B and C, and the decoding AI network model includes three layers D, E and F. The transfer process of the forward information is: Input->A->B->C->X->D->E->F->output, where input is the input of the encoding AI network model, i.e., the first channel information, output is the output of the decoding AI network model, i.e., the third channel information, and X includes the quantization process at the terminal side and the dequantization process at the base station side. The output of C is the forward information, and X includes the quantization of this forward information; the terminal reports the quantized bitstream to the base station, and the base station dequantizes the bitstream to recover the forward information, which then passes through D, E and F to obtain the corresponding output. The gradient difference between this output and the second channel information gives the backward information of F, which is passed to E to compute the backward information of E, which is then passed to D to compute the backward information of D, i.e., the third information before quantization. The base station quantizes the third information and passes it to C, i.e., to the encoding AI network model side, where the quantization of the third information may be a quantization process unrelated to the above X, for example, quantizing the third information in a 16-bit floating-point (float16) manner. After receiving the third information, the terminal can perform backward propagation through C, B and A in turn, so as to adjust the weights of C, B and A in turn and complete one update of the first AI network model.
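The Input->A->B->C->X->D->E->F flow in the example above can be sketched numerically. This is a minimal illustration under assumed conditions: each "layer" is reduced to a scalar gain and the X step to 2-bit uniform quantization/dequantization over [0, 1); the layer values and quantizer are illustrative only.

```python
# Minimal sketch of the forward pass through encoder (A, B, C), the quantize/
# dequantize step X, and decoder (D, E, F); all numbers are illustrative.
def quantize(x, bits=2):             # terminal side: float -> bit index
    levels = 1 << bits
    return min(int(x * levels), levels - 1)

def dequantize(idx, bits=2):         # network side: bit index -> float
    return (idx + 0.5) / (1 << bits)

A = B = C = 0.9                      # encoder layers held by the terminal
D = E = F = 1.1                      # decoder layers held by the network side

def joint_forward(channel_info):
    fwd = channel_info * A * B * C   # first processing result (floating point)
    bits = quantize(fwd)             # second information (bitstream)
    fwd_hat = dequantize(bits)       # fourth information (dequantized forward info)
    recovered = fwd_hat * D * E * F  # third channel information
    return fwd, bits, fwd_hat, recovered

fwd, bits, fwd_hat, recovered = joint_forward(0.8)
# gradient difference with the second channel information (squared loss)
grad = 2.0 * (recovered - 0.8)
```

The gradient `grad` is what would be propagated back through F, E and D to produce the third information, then quantized and returned to the terminal.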
In an implementation, the terminal may report the second information to the network side device without reporting the second channel information. For example, the first channel information is agreed upon by a protocol, or is offline channel information negotiated between the network side device and the terminal; in this case, the terminal does not need to report the second channel information, where the second channel information serves as a sample label for the network side device to determine the backward information according to the decoding result of the second AI network model and this sample label.
In an implementation, the terminal may report the second information and the second channel information to the network side device. For example, the first channel information is channel information measured by the terminal, and the network side device does not know the channel information of the terminal; in this case, the terminal reports the second information and the second channel information to the network side device, so that the network side device determines the backward information according to the decoding result of the second AI network model and the second channel information.
In an implementation, the second channel information being related to the first channel information may mean that the second channel information is identical to the first channel information. In this case, the network side device may take the gradient difference between the second channel information and the third channel information to determine the backward information of the first AI network model.
In an implementation, the second channel information being related to the first channel information may mean that the second channel information is channel information obtained by encoding or otherwise processing the first channel information. In this case, the network side device may decode the second channel information to obtain channel information completely or partially identical to the first channel information, and take the gradient difference between the decoded second channel information and the third channel information to determine the backward information of the first AI network model.
In an implementation, the third information can be understood as backward information, for example, backward information in bitstream format. The network side device may quantize the backward information in floating-point format output by the second AI network model into backward information in bitstream format and then transmit it to the terminal, where the quantization method used by the network side device may be the same as the method by which the terminal quantizes the first information, or the network side device may use a fixed quantization method, or the network side device may use another quantization method known to the terminal, which is not specifically limited here.
In an implementation, the fourth information can be understood as forward information in floating-point format obtained by dequantizing the second information. The network side device may dequantize the second information based on the known first quantization information to obtain forward information in floating-point format, and input the forward information in floating-point format to the second AI network model to obtain the third channel information.
In an implementation, the second AI network model is used to decode and/or decompress the fourth information to recover channel information. The third channel information being related to the second processing result of the second AI network model on the fourth information may mean that the third channel information is the decoded and/or decompressed channel information output by the second AI network model, or that the third channel information is channel information obtained by further processing the decoded and/or decompressed channel information output by the second AI network model, for example, channel information obtained by concatenating at least two segments of decoded and/or decompressed channel information output by the second AI network model, where the second processing is the above decoding and/or decompression.
As an optional implementation, the first quantization information includes at least one of the following:
a quantization method, where the quantization method includes scalar quantization and/or vector quantization;
an update strategy;
the number of floating-point numbers in the first processing result of the first AI network model;
the number of bits of the second information;
a quantization bit number;
a quantization codebook;
a segmentation manner of the quantization processing.
In an implementation, for scalar quantization, the indication of the quantization information may target the quantization bit number used for each floating-point number. A vector whose length equals the number of floating-point numbers may be used for indication (for example, the vector [2,2,2,3,3,3] indicates that the first three floating-point numbers are quantized with 2 bits and the last three with 3 bits), or the sets of floating-point numbers quantized with the same bit number may be indicated collectively (for example, it is agreed that the first k1 consecutive floating-point numbers are quantized with 1 bit, the next k2 consecutive floating-point numbers with 2 bits, the next k3 consecutive floating-point numbers with 3 bits, and the remaining k4 consecutive floating-point numbers with 4 bits; in this case, [0,3,3,0] may be indicated to express that the first 3 floating-point numbers are quantized with 2 bits and the last three with 3 bits).
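The two indication formats in the example above can be illustrated as follows; the helper name `expand_runlength` is hypothetical, and the formats themselves follow the [2,2,2,3,3,3] and [0,3,3,0] examples in the text.

```python
# Sketch of the two scalar-quantization indication formats described above:
# an explicit per-float vector of bit numbers, and a run-length form
# [k1, k2, k3, k4] giving how many consecutive floats use 1/2/3/4 bits.
def expand_runlength(counts):
    bits_per_float = []
    for bits, k in enumerate(counts, start=1):   # 1 bit, 2 bits, 3 bits, 4 bits
        bits_per_float.extend([bits] * k)
    return bits_per_float

per_float = [2, 2, 2, 3, 3, 3]                   # explicit vector indication
expanded = expand_runlength([0, 3, 3, 0])        # run-length indication
total_bits = sum(per_float)                      # payload size of the second information
```

Both indications describe the same bit allocation, so the receiver can reconstruct the per-float table from either form.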
In an implementation, for vector quantization, the indication of the quantization information may indicate the vector quantization codebooks and the floating-point sequence quantized by each codebook. For example, a floating-point sequence of length 6 is divided into two subsequences of length 3, and the two subsequences are quantized with a codebook of size 2^6 and a codebook of size 2^9 respectively. Then the two codebooks need to be indicated, together with the fact that the first codebook quantizes the first 3 floating-point numbers and the second codebook quantizes the last three. Methods of indicating the codebook weights include: 1) delivering the complete codebook; 2) predefining some candidate codebooks and, when a codebook needs to be indicated, selecting from the candidate codebooks and indicating the corresponding index.
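Segmented vector quantization as in the example above can be sketched as follows. This is a minimal illustration: tiny codebooks of sizes 4 and 8 stand in for the 2^6 and 2^9 codebooks of the text, and the codeword values are arbitrary placeholders.

```python
# Sketch of segmented vector quantization: a length-6 float sequence split
# into two length-3 segments, each quantized against its own codebook
# (sizes 4 and 8 here, standing in for 2^6 and 2^9 in the example).
import itertools
import math

def nearest(vec, codebook):
    # index of the codeword with the smallest squared distance
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(vec, codebook[i])))

cb1 = [list(t) for t in itertools.product([0.25, 0.75], repeat=3)][:4]
cb2 = [list(t) for t in itertools.product([0.2, 0.5, 0.8], repeat=3)][:8]

seq = [0.3, 0.7, 0.8, 0.1, 0.5, 0.9]
idx1 = nearest(seq[:3], cb1)                       # log2(4) = 2 bits for segment 1
idx2 = nearest(seq[3:], cb2)                       # log2(8) = 3 bits for segment 2
payload_bits = int(math.log2(len(cb1))) + int(math.log2(len(cb2)))
```

The indication then only needs to convey the two codebooks (or their indexes among predefined candidates) and which segment each codebook quantizes.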
In an implementation, the update strategy may be an update strategy of the quantization codebook and/or of the quantization method, such as: periodically updating the quantization codebook and/or quantization method, updating the quantization codebook and/or quantization method synchronously with the training of the AI network model, or updating the quantization codebook and/or quantization method according to an indication of the network side device.
In an implementation, after determining the number of floating-point numbers in the first processing result of the first AI network model and the number of bits of the second information, the terminal may use quantization parameters capable of quantizing that number of floating-point numbers into that number of bits.
In an implementation, the above quantization bit number may be the quantization bit number in scalar quantization, i.e., into how many bits one floating-point number is quantized as a bit group.
In an implementation, the segmentation manner of the quantization processing may be segment-wise quantization of the first information, for example: dividing the first information into at least two segments by port grouping, layer grouping, or the like, and then quantizing each segment with its own corresponding quantization information.
As an optional implementation, the method for updating an AI network model further includes:
receiving, by the terminal, training information from the network side device, where the training information includes at least one of the following:
a reporting method of the second information, or a reporting method of the second information and the second channel information;
a receiving method of the third information.
In an implementation, the above training information may be configured in a CSI report configuration, or indicated to the terminal through signaling.
In an implementation, the reporting method of the second information, or of the second information and the second channel information, may include the time-frequency resources on which the terminal sends the second information, or the second information and the second channel information.
In an implementation, the receiving method of the third information may include the time-frequency resources on which the terminal receives the third information.
In this implementation, the terminal can determine how to report the second information and/or how to receive the third information according to the configuration or indication of the network side device.
可选地,第二信息的上报方式,可以包括所述第二信息的内容,其中,所述第二信息的内容包括以下至少一项:
所述第一处理结果,所述第一处理结果为量化前的浮点数格式的信息;
对所述第一处理结果进行量化处理得到的二进制格式的信息;
对所述第一处理结果依次进行量化处理和解量化处理后得到的浮点数格式的信息。
选项一,第二信息的内容包括所述第一处理结果可以表示,训练AI网络模型的过程中使用的量化方式,与使用AI网络模型进行CSI上报过程中的量化方式可以不同,其中,在训练AI网络模型的过程中,第一AI网络模型用于在终端侧对第一信道信息进行编码处理,得到前向信息,第二AI网络模型可以用于在网络侧对前向信息进行量化、解量化和解码处理,得到反向信息。此时,在训练AI网络模型的过程中的量化方式由网络侧设备决定。
选项二:第二信息的内容包括对所述第一处理结果进行量化处理得到的二进制格式的信息,这可以表示训练AI网络模型的过程中使用的量化方式与使用AI网络模型进行CSI上报过程中的量化方式可以相同,且在训练AI网络模型的过程中,第一AI网络模型用于在终端侧对第一信道信息进行编码处理和量化处理,得到二进制格式(即比特流格式)的前向信息,第二AI网络模型可以用于在网络侧对前向信息进行解量化和解码处理,得到反向信息。
选项三:第二信息的内容包括对所述第一处理结果依次进行量化处理和解量化处理后得到的浮点数格式的信息,这可以表示训练AI网络模型的过程中使用的量化方式与使用AI网络模型进行CSI上报过程中的量化方式可以不同,其中,在训练AI网络模型的过程中,第一AI网络模型用于在终端侧对第一信道信息进行编码、量化和解量化处理,得到浮点数格式的前向信息,第二AI网络模型可以用于在网络侧对该浮点数格式的前向信息进行解码处理,得到反向信息。此时,在训练AI网络模型的过程中的量化方式由终端决定,且终端可以不向网络侧设备上报该量化方式对应的第一量化信息。
需要说明的是,本申请实施例中的第二信息的内容可以是以上选项一至选项三中的任一项,为了便于说明,本申请实施例中通常以训练AI网络模型的过程中使用的量化方式为上述选项二中的量化方式为例进行举例说明。
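上述三个选项对应的第二信息内容可以用同一段示例代码对照说明(均匀标量量化仅为示意,假设第一处理结果已归一化到 [-1, 1),具体数值均为本文假设):

```python
import numpy as np

def quantize(x, nbits):
    """均匀标量量化:浮点数 -> 比特序号。"""
    levels = 2 ** nbits
    return np.clip(((x + 1) / 2 * levels).astype(int), 0, levels - 1)

def dequantize(idx, nbits):
    """均匀标量解量化:比特序号 -> 量化电平中点对应的浮点数。"""
    levels = 2 ** nbits
    return (idx + 0.5) / levels * 2 - 1

first_result = np.array([-0.5, 0.0, 0.5])  # 第一处理结果(量化前的浮点数格式)

option1 = first_result                     # 选项一:直接上报浮点数格式的信息
option2 = quantize(first_result, 3)        # 选项二:量化后的二进制(比特流)格式
option3 = dequantize(option2, 3)           # 选项三:量化再解量化后的浮点数格式
```

其中选项三让网络侧直接看到带有终端量化损失的浮点数,从而无需网络侧获知第一量化信息。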
作为一种可选的实施方式,在所述终端根据第一量化信息对第一信息进行量化处理之前,所述方法还包括:
所述终端接收来自所述网络侧设备的第一指示信息;
所述终端根据所述第一指示信息确定所述第一量化信息;
其中,所述第一指示信息指示以下至少一项:
所述第一量化信息;
第一标识,所述第一标识与所述第一量化信息对应;
第二标识,所述第二标识用于标识量化码本池中的量化码本,所述第一量化信息包括所述第二标识对应的量化码本;
第三标识,所述第三标识用于标识量化码本组,所述量化码本组包括至少两个量化码本,所述第一量化信息包括所述第三标识对应的量化码本组中的量化码本。
一种实施方式中,可以通过协议约定、网络侧设备预先指示或配置的方式,使终端和网络侧设备获知第一标识、第二标识、第三标识中的至少一项与第一量化信息之间的对应关系,然后,网络侧设备通过第一指示信息指示第一标识或第二标识或第三标识,便可以使终端使用该第一标识或第二标识或第三标识对应的第一量化信息。
例如:协议约定一些固定的量化信息以及各个量化信息的标识(index),基站向终端指示其中的index,终端根据基站指示的index使用对应的量化信息。
再例如:基站指示或协议约定使用标量量化或矢量量化。
对于标量量化,其量化信息可以包括量化比特数,每个量化比特数对应一个固定的量化表格,基站可以指示与量化比特数对应的量化index或者直接指示量化比特数。
对于矢量量化,可以在协议中预先约定一些量化码本,以及每一个量化码本的标识;然后,基站可以指示量化码本的index,以使终端使用该量化码本,或者,基站指示量化码本组(group)index和量化码本的长度,终端在量化码本group index对应的量化码本group中选择对应的该长度的量化码本,其中,同一量化码本group内各量化码本的长度互不相同。
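基于标识(index)的量化信息指示可以示意如下,其中的表项内容均为本文假设,仅说明"index → 量化信息"与"group index + 码本长度 → 码本"这两级查找关系:

```python
# 协议约定的量化信息表:index -> 量化信息(内容为假设示例)
QUANT_TABLE = {
    0: {"type": "scalar", "nbits": 2},
    1: {"type": "scalar", "nbits": 4},
    2: {"type": "vector", "group": 0, "length": 3},
}

# 量化码本 group:同一 group 内各码本的长度互不相同,因此长度可唯一选中码本
CODEBOOK_GROUPS = {
    0: {3: "codebook_len3_a", 6: "codebook_len6_a"},
    1: {3: "codebook_len3_b", 6: "codebook_len6_b"},
}

def resolve(index):
    """终端按基站指示的 index 查量化信息;矢量量化时再按
    group index 与码本长度在对应 group 中选取码本。"""
    info = QUANT_TABLE[index]
    if info["type"] == "vector":
        return CODEBOOK_GROUPS[info["group"]][info["length"]]
    return info
```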
一种实施方式中,量化码本池可以是协议约定的,或者是网络侧设备预先指示或配置的,或者是与第一AI网络模型关联的。例如:第一AI网络模型关联了至少两个量化信息,每一个量化信息对应一个第二标识,则网络侧设备指示其中一个作为第一量化信息。
一种实施方式中,在训练开始时的量化码本可以由基站指示或者由终端上报给基站,后续的量化码本可以随AI网络模型的训练而更新。
本实施方式中,第一量化信息由网络侧设备指示。
作为一种可选的实施方式,在所述终端接收来自所述网络侧设备的第三信息之前,所述方法还包括:
所述终端向所述网络侧设备发送第二指示信息,其中,所述第二指示信息指示以下至少一项:
所述第一量化信息;
第一标识,所述第一标识与所述第一量化信息对应;
第二标识,所述第二标识用于标识量化码本池中的量化码本,所述第一量化信息包括所述第二标识对应的量化码本;
第三标识,所述第三标识用于标识量化码本组,所述量化码本组包括至少两个量化码本,所述第一量化信息包括所述第三标识对应的量化码本组中的量化码本;
第四标识,所述第四标识用于标识所述第三标识对应的量化码本组内的目标量化码本,所述第一量化信息包括所述目标量化码本。
本实施方式中,由终端选择并上报第一量化信息,其中,第二指示信息与第一指示信息的区别包括,第二指示信息还可以指示终端在量化码本group index对应的量化码本group中选择的量化码本的标识。
作为一种可选的实施方式,在所述第一量化信息包括量化码本的情况下,所述方法还包括:
所述终端对所述量化码本进行更新,所述第一量化信息包括更新后的量化码本;
所述终端向所述网络侧设备发送第三指示信息,其中,所述第三指示信息指示所述更新后的量化码本或所述更新后的量化码本的标识。
本实施方式中,终端可以在训练AI网络模型的过程中更新量化码本并向网络侧设备上报更新后的量化码本或量化码本标识,如:终端更新标量量化中使用的量化表格或矢量量化中的量化码本,其中,量化表格用于确定每一个比特组和对应的浮点数。
一种实施方式中,所述终端对所述量化码本进行更新,包括以下至少一项:
所述终端周期性地更新所述量化码本;
所述终端基于所述网络侧设备的指示更新所述量化码本。
其中,在所述终端周期性地更新所述量化码本的情况下,更新所述量化码本的周期可以是协议约定的或网络侧设备配置的。
这样,终端可以周期性地更新量化码本,例如:每更新Q次第一AI网络模型,就更新一次量化码本。或者,终端也可以基于网络侧设备的指示来更新量化码本。
一种实施方式中,终端可以基于条件触发来决定是否更新量化码本,例如:当终端自行训练了更好的量化码本或者终端发现信道质量变化,原先的量化码本不适合的情况下,可以更新量化码本。
一种实施方式中,在所述终端对所述量化码本进行更新之前,所述方法还包括:
所述终端向所述网络侧设备发送目标请求信息,所述目标请求信息用于请求对所述终端的量化码本进行更新;
所述终端接收来自所述网络侧设备的目标响应信息,所述目标响应信息用于允许对所述终端的量化码本进行更新。
这样,终端可以向网络侧设备请求更新量化码本,当网络侧设备同意终端更新并反馈目标响应信息时,终端更新量化码本。例如:终端发现信道质量变化,原先的量化码本不适合时,向网络侧设备发送目标请求信息,若终端接收到来自所述网络侧设备的目标响应信息,则更新量化码本。
一种实施方式中,在所述终端对所述量化码本进行更新之前,所述方法还包括:
所述终端接收第四指示信息,其中,所述第四指示信息指示对所述量化码本进行更新。
其中,第四指示信息可以是网络侧设备发送的指示信息。
作为一种可选的实施方式,所述终端为目标组内的终端,所述目标组包括至少两个终端,所述目标组内的终端的第一AI网络模型对应同一个第二AI网络模型。
本实施方式中,目标组内的终端可以是网络侧设备的一个第二AI网络模型对应的全部第一AI网络模型所在的终端。也就是说,网络侧设备的一个第二AI网络模型可以是多个终端的编码AI网络模型共用的解码AI网络模型。此时,网络侧设备的第二AI网络模型需要与目标组内的全部终端的第一AI网络模型进行联合训练。
一种实施方式中,网络侧设备可以接收来自目标组内的全部终端的前向信息,分别按照各自对应的第一量化信息对前向信息进行解量化,然后基于目标组内的全部终端的解量化后的前向信息确定目标组内的每一个终端的反向信息,并发送给对应的终端,以及基于目标组内的全部终端的解量化后的前向信息和对应的样本标签,确定第二AI网络模型的反向信息,以更新该第二AI网络模型。
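网络侧按各终端各自的第一量化信息分别解量化、再确定各终端反向信息的过程可示意如下(其中的反向信息用"2×前向信息"这一虚构表达式占位,真实的反向梯度应由第二AI网络模型反向传播得到):

```python
import numpy as np

def dequantize(idx, nbits):
    """均匀标量解量化(示例):比特序号 -> 浮点数。"""
    levels = 2 ** nbits
    return (idx + 0.5) / levels * 2 - 1

# 目标组内两个终端上报的第二信息(比特序号形式)及各自的量化比特数(假设示例)
reports = {"ue1": (np.array([2, 4, 6]), 3), "ue2": (np.array([1, 3]), 2)}

# 分别按各自对应的第一量化信息解量化,得到各终端的前向信息
forward = {ue: dequantize(idx, nbits) for ue, (idx, nbits) in reports.items()}

# 基于全部终端的解量化后的前向信息确定每一个终端的反向信息(此处仅为占位示例)
backward = {ue: 2 * f for ue, f in forward.items()}
```

可以看出,只要网络侧获知每个终端的第一量化信息,即使各终端量化比特数不同,也能分别恢复前向信息并下发对应的反向信息。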
可选地,在所述终端根据第一量化信息对第一信息进行量化处理之前,所述方法还包括:
所述终端接收第五指示信息,其中,所述第五指示信息指示对所述目标组内的全部终端的量化码本进行更新。
本实施方式中,网络侧设备可以指示目标组内的全部终端对各自的量化码本进行更新,在实施中,网络侧设备可以直接指示目标组内的全部终端更新量化码本的策略,基于该策略,网络侧设备可以获知目标组内的每一个终端更新后的量化码本,以据此解量化各个终端上报的第二信息。
可选地,所述方法还包括:
所述终端根据所述第五指示信息更新所述量化码本,并向所述网络侧设备发送更新后的量化码本。
本实施方式中,目标组内的每一个终端在更新各自的量化码本后,将更新的量化码本上报给网络侧设备,以使网络侧设备可以获知目标组内的每一个终端更新后的量化码本,以据此解量化各个终端上报的第二信息。
一种实施方式中,如果终端发现量化码本不用更新,则不上报更新后的量化码本,或者上报一个不用更新量化码本的指示信息。
可选地,所述方法还包括:
所述终端接收来自所述网络侧设备的目标量化码本,其中,所述目标量化码本是根据所述目标组内的全部终端的量化码本确定的量化码本;
所述终端确定所述第一量化信息包括所述目标量化码本。
本实施方式中,目标组内的每一个终端在更新各自的量化码本并将其上报给网络侧设备之后,网络侧设备根据目标组内的全部终端上报的更新后的量化码本决定目标量化码本,并下发给目标组内的每一个终端,以使目标组内的全部终端后续使用统一的目标量化码本量化第一信息,这样,网络侧设备对目标组内的终端上报的第二信息可以采用相同的解量化方式进行解量化处理。
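网络侧根据目标组内全部终端的码本确定目标量化码本的具体方式,本申请并未限定;下面以"逐元素取平均"这一假设方式示意该流程:

```python
import numpy as np

def merge_codebooks(codebooks):
    """一种可能的确定方式(仅为假设示例):对形状相同的各终端码本逐元素取平均,
    作为下发给目标组内每一个终端的目标量化码本。"""
    return np.mean(np.stack(codebooks), axis=0)

cb_ue1 = np.array([[0.0, 1.0], [1.0, 0.0]])   # 终端 1 更新后上报的量化码本
cb_ue2 = np.array([[0.2, 0.8], [0.8, 0.2]])   # 终端 2 更新后上报的量化码本
target = merge_codebooks([cb_ue1, cb_ue2])    # 目标量化码本,统一下发给目标组
```

统一下发目标量化码本后,网络侧即可对组内全部终端的第二信息采用同一套解量化处理。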
可选地,所述目标组内的全部终端的第一量化信息相同。
本实施方式中,目标组内的全部终端后续使用相同的第一量化信息对各自的第一信息进行量化处理,这样,网络侧设备对目标组内的终端上报的第二信息可以采用相同的解量化方式进行解量化处理。
需要说明的是,在一种可能的实现方式中,目标组内的终端也可能使用不同的第一量化信息,例如:目标组内的终端按照各自的量化方式对其第一信息进行量化处理,这样,网络侧设备也需要按照目标组内的终端各自对应的解量化方式对对应的第二信息进行解量化处理。
作为一种可选的实施方式,所述方法还包括:
所述终端向所述网络侧设备发送目标能力信息,所述目标能力信息指示以下至少一项:
所述终端是否支持量化码本;
所述终端是否支持更新量化码本;
所述终端支持的量化码本的标识。
本实施方式中,终端向所述网络侧设备发送目标能力信息,该目标能力信息可以作为网络侧设备为终端配置或指示第一量化信息的依据,以使网络侧设备为终端配置或指示的第一量化信息是终端能够使用的。例如:对于不支持更新量化码本的终端,网络侧设备指示或配置终端的量化码本的更新策略为不更新量化码本。
作为一种可选的实施方式,所述第三信息为基于第二量化信息量化后的信息,所述终端根据所述第三信息更新所述第一AI网络模型,包括:
所述终端基于所述第二量化信息对所述第三信息进行解量化处理,得到第五信息;
所述终端根据所述第五信息更新所述第一AI网络模型。
一种实施方式中,第五信息为解量化后的反向信息,其中,第五信息与第四信息的相似程度由第二量化信息对应的量化和解量化精度确定。
一种实施方式中,第二量化信息可以是固定的量化信息,例如:所述第二量化信息包括目标量化比特数,所述目标量化比特数由所述网络侧设备指示或由协议约定。
一种实施方式中,第二量化信息可以是终端和网络侧设备已知的任意量化信息。
一种实施方式中,第二量化信息可以是与第一量化信息相同的量化信息。
本实施方式中,第三信息可以是量化后的反向信息,终端可以基于其量化方式对第三信息进行解量化处理,并根据解量化后的反向信息来更新第一AI网络模型。
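终端侧解量化第三信息并据此更新第一AI网络模型的过程可示意如下(均匀标量解量化与"按学习率做一步梯度更新"均为简化假设,实际的解量化方式由第二量化信息决定,实际的更新方式取决于具体训练算法):

```python
import numpy as np

def dequantize(idx, nbits):
    """均匀标量解量化(示例):比特序号 -> 浮点数。"""
    levels = 2 ** nbits
    return (idx + 0.5) / levels * 2 - 1

third_info = np.array([1, 6, 3])  # 量化后的反向信息(比特序号形式,数值为假设)
target_nbits = 3                  # 第二量化信息:目标量化比特数(网络指示或协议约定)

fifth_info = dequantize(third_info, target_nbits)  # 第五信息:解量化后的反向梯度

weights = np.zeros(3)             # 第一AI网络模型的部分参数(示意)
weights -= 0.01 * fifth_info      # 按学习率 0.01 做一步梯度更新(示意)
```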
在本申请实施例中,终端能够按照第一量化信息对编码AI网络模型输出的前向信息即第一信息进行量化处理,以将浮点数格式的前向信息转化为比特流,并向基站发送该比特流,且网络侧设备能够根据第一量化信息将该比特流解量化为浮点数格式的前向信息,并基于解码AI网络模型计算该前向信息对应的解码AI网络模型的反向梯度信息和编码AI网络模型的反向梯度信息,最终网络侧设备能够基于解码AI网络模型的反向梯度信息更新解码AI网络模型,并将编码AI网络模型的反向梯度信息量化后发送给终端,以使终端据此更新编码AI网络模型,至此便实现了编码AI网络模型和解码AI网络模型的线上联合训练。解决了因网络侧设备不知道终端使用的第一量化信息,而无法解量化该终端上报的比特流形式的前向信息,造成编码AI网络模型和解码AI网络模型的线上联合训练无法实施的问题。
请参阅图3,本申请实施例提供的更新AI网络模型的方法,其执行主体可以是网络侧设备,如图3所示,该更新AI网络模型的方法可以包括以下步骤:
步骤301、网络侧设备接收来自终端的第二信息,或者接收来自终端的第二信息和第二信道信息,其中,所述第二信息为对第一信息进行量化处理后得到的信息,所述第一信息与所述终端具有的第一AI网络模型对第一信道信息的第一处理结果相关,所述第二信道信息与所述第一信道信息相关。
步骤302、所述网络侧设备根据第一量化信息对所述第二信息进行解量化处理,得到第四信息。
步骤303、所述网络侧设备基于第二AI网络模型对所述第四信息的第二处理结果,确定第三信道信息。
步骤304、所述网络侧设备根据所述第三信道信息和所述第二信道信息,更新所述第二AI网络模型,以及确定第三信息。
步骤305、所述网络侧设备向所述终端发送所述第三信息。
本申请实施例提供的更新AI网络模型的方法与如图2所示的更新AI网络模型的方法的区别包括:
如图2所示的更新AI网络模型的方法的执行主体是终端,如图3所示的更新AI网络模型的方法的执行主体是网络侧设备,且网络侧设备在更新AI网络模型的方法中执行的各个步骤与终端在更新AI网络模型的方法中执行的各个步骤相对应,网络侧设备在更新AI网络模型的方法中执行的各个步骤的含义和作用,可以参考如图2所示的更新AI网络模型的方法中的各个步骤的含义和作用,且两者相互配合以共同实现终端侧的第一AI网络模型与网络侧设备的第二AI网络模型的线上联合训练,在此不再赘述。
作为一种可选的实施方式,所述第一量化信息包括以下至少一项:
量化方式,所述量化方式包括标量量化和/或矢量量化;
更新策略;
所述第一AI网络模型的第一处理结果的浮点数个数;
所述第二信息的比特数;
量化比特数;
量化码本;
量化处理的分段方式。
作为一种可选的实施方式,所述方法还包括:
所述网络侧设备向所述终端发送训练信息,所述训练信息包括以下至少一项:
所述第二信息的上报方式,或,所述第二信息和所述第二信道信息的上报方式;
所述第三信息的接收方式。
作为一种可选的实施方式,所述第二信息的上报方式,包括所述第二信息的内容,其中,所述第二信息的内容包括以下至少一项:
所述第一处理结果,所述第一处理结果为量化前的浮点数格式的信息;
对所述第一处理结果进行量化处理得到的二进制格式的信息;
对所述第一处理结果依次进行量化处理和解量化处理后得到的浮点数格式的信息。
作为一种可选的实施方式,所述第一量化信息满足以下至少一项:
由所述网络侧设备指示;
由协议约定;
由所述终端选择并上报所述网络侧设备;
与所述第一AI网络模型关联。
作为一种可选的实施方式,在所述网络侧设备接收来自终端的第二信息之前,所述方法还包括:
所述网络侧设备向所述终端发送第一指示信息,其中,所述第一指示信息指示以下至少一项:
所述第一量化信息;
第一标识,所述第一标识与所述第一量化信息对应;
第二标识,所述第二标识用于标识量化码本池中的量化码本,所述第一量化信息包括所述第二标识对应的量化码本;
第三标识,所述第三标识用于标识量化码本组,所述量化码本组包括至少两个量化码本,所述第一量化信息包括所述第三标识对应的量化码本组中的量化码本。
作为一种可选的实施方式,在所述网络侧设备根据第一量化信息对所述第二信息进行解量化处理之前,所述方法还包括:
所述网络侧设备接收来自所述终端的第二指示信息;
所述网络侧设备根据所述第二指示信息确定所述第一量化信息;
其中,所述第二指示信息指示以下至少一项:
所述第一量化信息;
第一标识,所述第一标识与所述第一量化信息对应;
第二标识,所述第二标识用于标识量化码本池中的量化码本,所述第一量化信息包括所述第二标识对应的量化码本;
第三标识,所述第三标识用于标识量化码本组,所述量化码本组包括至少两个量化码本,所述第一量化信息包括所述第三标识对应的量化码本组中的量化码本;
第四标识,所述第四标识用于标识所述第三标识对应的量化码本组内的目标量化码本,所述第一量化信息包括所述目标量化码本。
作为一种可选的实施方式,在所述第一量化信息包括量化码本的情况下,所述方法还包括:
所述网络侧设备接收来自所述终端的第三指示信息,其中,所述第三指示信息指示所述终端更新后的量化码本或所述更新后的量化码本的标识。
作为一种可选的实施方式,在所述网络侧设备接收来自所述终端的第三指示信息之前,所述方法还包括:
所述网络侧设备接收来自所述终端的目标请求信息,所述目标请求信息用于请求对所述终端的量化码本进行更新;
所述网络侧设备向所述终端发送目标响应信息,所述目标响应信息用于允许对所述终端的量化码本进行更新。
作为一种可选的实施方式,在所述网络侧设备接收来自所述终端的第三指示信息之前,所述方法还包括:
所述网络侧设备向所述终端发送第四指示信息,其中,所述第四指示信息指示终端对所述量化码本进行更新。
作为一种可选的实施方式,所述网络侧设备接收来自终端的第二信息,包括:
所述网络侧设备接收来自目标组内的每一个终端的第二信息,其中,所述目标组包括至少两个终端,所述目标组内的终端的第一AI网络模型对应同一个第二AI网络模型。
作为一种可选的实施方式,在所述网络侧设备接收来自目标组内的每一个终端的第二信息之前,所述方法还包括:
所述网络侧设备向所述目标组内的每一个终端发送第五指示信息,其中,所述第五指示信息指示所述目标组内的全部终端更新各自的量化码本。
作为一种可选的实施方式,所述方法还包括:
所述网络侧设备接收所述目标组内的每一个终端各自更新后的量化码本。
作为一种可选的实施方式,所述方法还包括:
所述网络侧设备根据所述目标组内的全部终端的量化码本,确定目标量化码本;
所述网络侧设备向所述目标组内的每一个终端发送所述目标量化码本,其中,所述第一量化信息包括所述目标量化码本。
作为一种可选的实施方式,所述目标组内的全部终端的第一量化信息相同。
作为一种可选的实施方式,所述方法还包括:
所述网络侧设备接收来自所述终端的目标能力信息;
所述网络侧设备根据所述目标能力信息,确定所述第一量化信息;
其中,所述目标能力信息指示以下至少一项:
所述终端是否支持量化码本;
所述终端是否支持更新量化码本;
所述终端支持的量化码本的标识。
作为一种可选的实施方式,所述网络侧设备基于第二AI网络模型对所述第一信息的第二处理结果,确定第三信息包括:
所述网络侧设备基于第二AI网络模型对所述第一信息的第二处理结果,得到第四信息;
所述网络侧设备基于第二量化信息对所述第四信息进行量化处理,得到所述第三信息。
作为一种可选的实施方式,所述第二量化信息包括目标量化比特数,所述目标量化比特数由所述网络侧设备指示或由协议约定。
本申请实施例提供的网络侧设备执行的更新AI网络模型的方法,用于与如图2所示终端执行的更新AI网络模型的方法相配合,对前向信息、反向信息以及样本标签进行量化、解量化和空中传递,以共同实现对第一AI网络模型和第二AI网络模型的线上联合训练。
本申请实施例提供的更新AI网络模型的方法,执行主体可以为更新AI网络模型的装置。本申请实施例中以更新AI网络模型的装置执行更新AI网络模型的方法为例,说明本申请实施例提供的更新AI网络模型的装置。
请参阅图4,本申请实施例提供的一种更新AI网络模型的装置,可以是终端内的装置,如图4所示,该更新AI网络模型的装置400可以包括以下模块:
第一处理模块401,用于根据第一量化信息对第一信息进行量化处理,得到第二信息,其中,所述终端具有第一AI网络模型,所述第一信息与所述第一AI网络模型对第一信道信息的第一处理结果相关;
第一发送模块402,用于向网络侧设备发送所述第二信息,或者发送所述第二信息和第二信道信息,所述第二信道信息与所述第一信道信息相关;
第一接收模块403,用于接收来自所述网络侧设备的第三信息,其中,所述第三信息是根据第三信道信息和所述第二信道信息确定的,所述第三信道信息与第二AI网络模型对第四信息的第二处理结果相关,所述第四信息为对所述第二信息进行解量化处理得到的信息;
第一更新模块404,用于根据所述第三信息更新所述第一AI网络模型。
可选地,所述第一量化信息包括以下至少一项:
量化方式,所述量化方式包括标量量化和/或矢量量化;
更新策略;
所述第一AI网络模型的第一处理结果的浮点数个数;
所述第二信息的比特数;
量化比特数;
量化码本;
量化处理的分段方式。
可选地,更新AI网络模型的装置400还包括:
第三接收模块,用于接收来自所述网络侧设备的训练信息,所述训练信息包括以下至少一项:
所述第二信息的上报方式,或,所述第二信息和所述第二信道信息的上报方式;
所述第三信息的接收方式。
可选地,所述第二信息的上报方式,包括所述第二信息的内容,其中,所述第二信息的内容包括以下至少一项:
所述第一处理结果,所述第一处理结果为量化前的浮点数格式的信息;
对所述第一处理结果进行量化处理得到的二进制格式的信息;
对所述第一处理结果依次进行量化处理和解量化处理后得到的浮点数格式的信息。
可选地,所述第一量化信息满足以下至少一项:
由所述网络侧设备指示;
由协议约定;
由所述终端选择并上报所述网络侧设备;
与所述第一AI网络模型关联。
可选地,更新AI网络模型的装置400还包括:
第四接收模块,用于接收来自所述网络侧设备的第一指示信息;
第二确定模块,用于根据所述第一指示信息确定所述第一量化信息;
其中,所述第一指示信息指示以下至少一项:
所述第一量化信息;
第一标识,所述第一标识与所述第一量化信息对应;
第二标识,所述第二标识用于标识量化码本池中的量化码本,所述第一量化信息包括所述第二标识对应的量化码本;
第三标识,所述第三标识用于标识量化码本组,所述量化码本组包括至少两个量化码本,所述第一量化信息包括所述第三标识对应的量化码本组中的量化码本。
可选地,更新AI网络模型的装置400还包括:
第三发送模块,用于向所述网络侧设备发送第二指示信息,其中,所述第二指示信息指示以下至少一项:
所述第一量化信息;
第一标识,所述第一标识与所述第一量化信息对应;
第二标识,所述第二标识用于标识量化码本池中的量化码本,所述第一量化信息包括所述第二标识对应的量化码本;
第三标识,所述第三标识用于标识量化码本组,所述量化码本组包括至少两个量化码本,所述第一量化信息包括所述第三标识对应的量化码本组中的量化码本;
第四标识,所述第四标识用于标识所述第三标识对应的量化码本组内的目标量化码本,所述第一量化信息包括所述目标量化码本。
可选地,在所述第一量化信息包括量化码本的情况下,更新AI网络模型的装置400还包括:
第三更新模块,用于对所述量化码本进行更新,所述第一量化信息包括更新后的量化码本;
第四发送模块,用于向所述网络侧设备发送第三指示信息,其中,所述第三指示信息指示所述更新后的量化码本或所述更新后的量化码本的标识。
可选地,所述第三更新模块,具体用于执行以下至少一项:
周期性地更新所述量化码本;
基于所述网络侧设备的指示更新所述量化码本。
可选地,更新AI网络模型的装置400还包括:
第五发送模块,用于向所述网络侧设备发送目标请求信息,所述目标请求信息用于请求对所述终端的量化码本进行更新;
第五接收模块,用于接收来自所述网络侧设备的目标响应信息,所述目标响应信息用于允许对所述终端的量化码本进行更新。
可选地,更新AI网络模型的装置400还包括:
第六接收模块,用于接收第四指示信息,其中,所述第四指示信息指示对所述量化码本进行更新。
可选地,所述终端为目标组内的终端,所述目标组包括至少两个终端,所述目标组内的终端的第一AI网络模型对应同一个第二AI网络模型。
可选地,更新AI网络模型的装置400还包括:
第七接收模块,用于接收第五指示信息,其中,所述第五指示信息指示对所述目标组内的全部终端的量化码本进行更新。
可选地,更新AI网络模型的装置400还包括:
第四更新模块,用于根据所述第五指示信息更新所述量化码本;
第六发送模块,用于向所述网络侧设备发送更新后的量化码本。
可选地,更新AI网络模型的装置400还包括:
第八接收模块,用于接收来自所述网络侧设备的目标量化码本,其中,所述目标量化码本是根据所述目标组内的全部终端的量化码本确定的量化码本;
第三确定模块,用于确定所述第一量化信息包括所述目标量化码本。
可选地,所述目标组内的全部终端的第一量化信息相同。
可选地,更新AI网络模型的装置400还包括:
第七发送模块,用于向所述网络侧设备发送目标能力信息,所述目标能力信息指示以下至少一项:
所述终端是否支持量化码本;
所述终端是否支持更新量化码本;
所述终端支持的量化码本的标识。
可选地,所述第三信息为基于第二量化信息量化后的信息,第一更新模块404,具体用于:
基于所述第二量化信息对所述第三信息进行解量化处理,得到第五信息;
根据所述第五信息更新所述第一AI网络模型。
可选地,所述第二量化信息包括目标量化比特数,所述目标量化比特数由所述网络侧设备指示或由协议约定。
本申请实施例中的更新AI网络模型的装置可以是电子设备,例如具有操作系统的电子设备,也可以是电子设备中的部件,例如集成电路或芯片。该电子设备可以是终端,也可以为除终端之外的其他设备。示例性的,终端可以包括但不限于上述所列举的终端11的类型,其他设备可以为服务器、网络附属存储器(Network Attached Storage,NAS)等,本申请实施例不作具体限定。
本申请实施例提供的更新AI网络模型的装置400,能够实现如图2所示方法实施例中终端实现的各个过程,且能够取得相同的有益效果,为避免重复,在此不再赘述。
请参阅图5,本申请实施例提供的一种更新AI网络模型的装置,可以是网络侧设备内的装置,如图5所示,该更新AI网络模型的装置500可以包括以下模块:
第二接收模块501,用于接收来自终端的第二信息,或者接收来自终端的第二信息和第二信道信息,其中,所述第二信息为对第一信息进行量化处理后得到的信息,所述第一信息与所述终端具有的第一AI网络模型对第一信道信息的第一处理结果相关,所述第二信道信息与所述第一信道信息相关;
第二处理模块502,用于根据第一量化信息对所述第二信息进行解量化处理,得到第四信息;
第一确定模块503,用于基于第二AI网络模型对所述第四信息的第二处理结果,确定第三信道信息;
第二更新模块504,用于根据所述第三信道信息和所述第二信道信息,更新所述第二AI网络模型,以及确定第三信息;
第二发送模块505,用于向所述终端发送所述第三信息。
可选地,所述第一量化信息包括以下至少一项:
量化方式,所述量化方式包括标量量化和/或矢量量化;
更新策略;
所述第一AI网络模型的第一处理结果的浮点数个数;
所述第二信息的比特数;
量化比特数;
量化码本;
量化处理的分段方式。
可选地,更新AI网络模型的装置500还包括:
第八发送模块,用于向所述终端发送训练信息,所述训练信息包括以下至少一项:
所述第二信息的上报方式,或,所述第二信息和所述第二信道信息的上报方式;
所述第三信息的接收方式。
可选地,所述第二信息的上报方式,包括所述第二信息的内容,其中,所述第二信息的内容包括以下至少一项:
所述第一处理结果,所述第一处理结果为量化前的浮点数格式的信息;
对所述第一处理结果进行量化处理得到的二进制格式的信息;
对所述第一处理结果依次进行量化处理和解量化处理后得到的浮点数格式的信息。
可选地,所述第一量化信息满足以下至少一项:
由所述网络侧设备指示;
由协议约定;
由所述终端选择并上报所述网络侧设备;
与所述第一AI网络模型关联。
可选地,更新AI网络模型的装置500还包括:
第九发送模块,用于向所述终端发送第一指示信息,其中,所述第一指示信息指示以下至少一项:
所述第一量化信息;
第一标识,所述第一标识与所述第一量化信息对应;
第二标识,所述第二标识用于标识量化码本池中的量化码本,所述第一量化信息包括所述第二标识对应的量化码本;
第三标识,所述第三标识用于标识量化码本组,所述量化码本组包括至少两个量化码本,所述第一量化信息包括所述第三标识对应的量化码本组中的量化码本。
可选地,更新AI网络模型的装置500还包括:
第九接收模块,用于接收来自所述终端的第二指示信息;
第四确定模块,用于根据所述第二指示信息确定所述第一量化信息;
其中,所述第二指示信息指示以下至少一项:
所述第一量化信息;
第一标识,所述第一标识与所述第一量化信息对应;
第二标识,所述第二标识用于标识量化码本池中的量化码本,所述第一量化信息包括所述第二标识对应的量化码本;
第三标识,所述第三标识用于标识量化码本组,所述量化码本组包括至少两个量化码本,所述第一量化信息包括所述第三标识对应的量化码本组中的量化码本;
第四标识,所述第四标识用于标识所述第三标识对应的量化码本组内的目标量化码本,所述第一量化信息包括所述目标量化码本。
可选地,在所述第一量化信息包括量化码本的情况下,更新AI网络模型的装置500还包括:
第十接收模块,用于接收来自所述终端的第三指示信息,其中,所述第三指示信息指示所述终端更新后的量化码本或所述更新后的量化码本的标识。
可选地,更新AI网络模型的装置500还包括:
第十一接收模块,用于接收来自所述终端的目标请求信息,所述目标请求信息用于请求对所述终端的量化码本进行更新;
第十发送模块,用于向所述终端发送目标响应信息,所述目标响应信息用于允许对所述终端的量化码本进行更新。
可选地,更新AI网络模型的装置500还包括:
第十一发送模块,用于向所述终端发送第四指示信息,其中,所述第四指示信息指示终端对所述量化码本进行更新。
可选地,第二接收模块501,具体用于:
接收来自目标组内的每一个终端的第二信息,其中,所述目标组包括至少两个终端,所述目标组内的终端的第一AI网络模型对应同一个第二AI网络模型。
可选地,更新AI网络模型的装置500还包括:
第十二发送模块,用于向所述目标组内的每一个终端发送第五指示信息,其中,所述第五指示信息指示所述目标组内的全部终端更新各自的量化码本。
可选地,更新AI网络模型的装置500还包括:
第十二接收模块,用于接收所述目标组内的每一个终端各自更新后的量化码本。
可选地,更新AI网络模型的装置500还包括:
第五确定模块,用于根据所述目标组内的全部终端的量化码本,确定目标量化码本;
第十三发送模块,用于向所述目标组内的每一个终端发送所述目标量化码本,其中,所述第一量化信息包括所述目标量化码本。
可选地,所述目标组内的全部终端的第一量化信息相同。
可选地,更新AI网络模型的装置500还包括:
第十三接收模块,用于接收来自所述终端的目标能力信息;
第六确定模块,用于根据所述目标能力信息,确定所述第一量化信息;
其中,所述目标能力信息指示以下至少一项:
所述终端是否支持量化码本;
所述终端是否支持更新量化码本;
所述终端支持的量化码本的标识。
可选地,第一确定模块503具体用于:
基于第二AI网络模型对所述第一信息的第二处理结果,得到第四信息;
基于第二量化信息对所述第四信息进行量化处理,得到所述第三信息。
可选地,所述第二量化信息包括目标量化比特数,所述目标量化比特数由所述网络侧设备指示或由协议约定。
本申请实施例提供的更新AI网络模型的装置500,能够实现如图3所示方法实施例中网络侧设备实现的各个过程,且能够取得相同的有益效果,为避免重复,在此不再赘述。
可选的,如图6所示,本申请实施例还提供一种通信设备600,包括处理器601和存储器602,存储器602上存储有可在所述处理器601上运行的程序或指令,例如,该通信设备600为终端时,该程序或指令被处理器601执行时实现如图2所示方法实施例的各个步骤,且能达到相同的技术效果。该通信设备600为网络侧设备时,该程序或指令被处理器601执行时实现如图3所示方法实施例的各个步骤,且能达到相同的技术效果,为避免重复,这里不再赘述。
本申请实施例还提供一种终端,包括处理器和通信接口,所述处理器用于根据第一量化信息对第一信息进行量化处理,得到第二信息,其中,所述终端具有第一AI网络模型,所述第一信息与所述第一AI网络模型对第一信道信息的第一处理结果相关;所述通信接口用于向网络侧设备发送所述第二信息,或者发送所述第二信息和第二信道信息,所述第二信道信息与所述第一信道信息相关;所述通信接口还用于接收来自所述网络侧设备的第三信息,其中,所述第三信息是根据第三信道信息和所述第二信道信息确定的,所述第三信道信息与第二AI网络模型对第四信息的第二处理结果相关,所述第四信息为对所述第二信息进行解量化处理得到的信息;所述处理器还用于根据所述第三信息更新所述第一AI网络模型。
该终端实施例能够实现如图4所示更新AI网络模型的装置400执行的各个过程,且能达到相同的技术效果,在此不再赘述。具体地,图7为实现本申请实施例的一种终端的硬件结构示意图。
该终端700包括但不限于:射频单元701、网络模块702、音频输出单元703、输入单元704、传感器705、显示单元706、用户输入单元707、接口单元708、存储器709以及处理器710等中的至少部分部件。
本领域技术人员可以理解,终端700还可以包括给各个部件供电的电源(比如电池),电源可以通过电源管理系统与处理器710逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。图7中示出的终端结构并不构成对终端的限定,终端可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置,在此不再赘述。
应理解的是,本申请实施例中,输入单元704可以包括图形处理单元(Graphics Processing Unit,GPU)7041和麦克风7042,图形处理器7041对在视频捕获模式或图像捕获模式中由图像捕获装置(如摄像头)获得的静态图片或视频的图像数据进行处理。显示单元706可包括显示面板7061,可以采用液晶显示器、有机发光二极管等形式来配置显示面板7061。用户输入单元707包括触控面板7071以及其他输入设备7072中的至少一种。触控面板7071,也称为触摸屏。触控面板7071可包括触摸检测装置和触摸控制器两个部分。其他输入设备7072可以包括但不限于物理键盘、功能键(比如音量控制按键、开关按键等)、轨迹球、鼠标、操作杆,在此不再赘述。
本申请实施例中,射频单元701接收来自网络侧设备的下行数据后,可以传输给处理器710进行处理;另外,射频单元701可以向网络侧设备发送上行数据。通常,射频单元701包括但不限于天线、放大器、收发信机、耦合器、低噪声放大器、双工器等。
存储器709可用于存储软件程序或指令以及各种数据。存储器709可主要包括存储程序或指令的第一存储区和存储数据的第二存储区,其中,第一存储区可存储操作系统、至少一个功能所需的应用程序或指令(比如声音播放功能、图像播放功能等)等。此外,存储器709可以包括易失性存储器或非易失性存储器,或者,存储器709可以包括易失性和非易失性存储器两者。其中,非易失性存储器可以是只读存储器(Read-Only Memory,ROM)、可编程只读存储器(Programmable ROM,PROM)、可擦除可编程只读存储器(Erasable PROM,EPROM)、电可擦除可编程只读存储器(Electrically EPROM,EEPROM)或闪存。易失性存储器可以是随机存取存储器(Random Access Memory,RAM)、静态随机存取存储器(Static RAM,SRAM)、动态随机存取存储器(Dynamic RAM,DRAM)、同步动态随机存取存储器(Synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(Double Data Rate SDRAM,DDRSDRAM)、增强型同步动态随机存取存储器(Enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(Synch link DRAM,SLDRAM)和直接内存总线随机存取存储器(Direct Rambus RAM,DRRAM)。本申请实施例中的存储器709包括但不限于这些和任意其它适合类型的存储器。
处理器710可包括一个或多个处理单元;可选地,处理器710集成应用处理器和调制解调处理器,其中,应用处理器主要处理涉及操作系统、用户界面和应用程序等的操作,调制解调处理器主要处理无线通信信号,如基带处理器。可以理解的是,上述调制解调处理器也可以不集成到处理器710中。
其中,处理器710,用于根据第一量化信息对第一信息进行量化处理,得到第二信息,其中,所述终端具有第一AI网络模型,所述第一信息与所述第一AI网络模型对第一信道信息的第一处理结果相关;
射频单元701,用于向网络侧设备发送所述第二信息,或者发送所述第二信息和第二信道信息,所述第二信道信息与所述第一信道信息相关;
射频单元701,还用于接收来自所述网络侧设备的第三信息,其中,所述第三信息是根据第三信道信息和所述第二信道信息确定的,所述第三信道信息与第二AI网络模型对第四信息的第二处理结果相关,所述第四信息为对所述第二信息进行解量化处理得到的信息;
处理器710,还用于根据所述第三信息更新所述第一AI网络模型。
可选地,所述第一量化信息包括以下至少一项:
量化方式,所述量化方式包括标量量化和/或矢量量化;
更新策略;
所述第一AI网络模型的第一处理结果的浮点数个数;
所述第二信息的比特数;
量化比特数;
量化码本;
量化处理的分段方式。
可选地,射频单元701,还用于接收来自所述网络侧设备的训练信息,所述训练信息包括以下至少一项:
所述第二信息的上报方式,或,所述第二信息和所述第二信道信息的上报方式;
所述第三信息的接收方式。
可选地,所述第二信息的上报方式,包括所述第二信息的内容,其中,所述第二信息的内容包括以下至少一项:
所述第一处理结果,所述第一处理结果为量化前的浮点数格式的信息;
对所述第一处理结果进行量化处理得到的二进制格式的信息;
对所述第一处理结果依次进行量化处理和解量化处理后得到的浮点数格式的信息。
可选地,所述第一量化信息满足以下至少一项:
由所述网络侧设备指示;
由协议约定;
由所述终端选择并上报所述网络侧设备;
与所述第一AI网络模型关联。
可选地,在处理器710执行所述根据第一量化信息对第一信息进行量化处理之前:
射频单元701,还用于接收来自所述网络侧设备的第一指示信息;
处理器710,还用于根据所述第一指示信息确定所述第一量化信息;
其中,所述第一指示信息指示以下至少一项:
所述第一量化信息;
第一标识,所述第一标识与所述第一量化信息对应;
第二标识,所述第二标识用于标识量化码本池中的量化码本,所述第一量化信息包括所述第二标识对应的量化码本;
第三标识,所述第三标识用于标识量化码本组,所述量化码本组包括至少两个量化码本,所述第一量化信息包括所述第三标识对应的量化码本组中的量化码本。
可选地,射频单元701在执行所述接收来自所述网络侧设备的第三信息之前,还用于向所述网络侧设备发送第二指示信息,其中,所述第二指示信息指示以下至少一项:
所述第一量化信息;
第一标识,所述第一标识与所述第一量化信息对应;
第二标识,所述第二标识用于标识量化码本池中的量化码本,所述第一量化信息包括所述第二标识对应的量化码本;
第三标识,所述第三标识用于标识量化码本组,所述量化码本组包括至少两个量化码本,所述第一量化信息包括所述第三标识对应的量化码本组中的量化码本;
第四标识,所述第四标识用于标识所述第三标识对应的量化码本组内的目标量化码本,所述第一量化信息包括所述目标量化码本。
可选地,在所述第一量化信息包括量化码本的情况下:
处理器710,还用于对所述量化码本进行更新,所述第一量化信息包括更新后的量化码本;
射频单元701,还用于向所述网络侧设备发送第三指示信息,其中,所述第三指示信息指示所述更新后的量化码本或所述更新后的量化码本的标识。
可选地,处理器710执行的所述对所述量化码本进行更新,包括以下至少一项:
周期性地更新所述量化码本;
基于所述网络侧设备的指示更新所述量化码本。
可选地,在处理器710执行所述对所述量化码本进行更新之前,射频单元701,还用于:
向所述网络侧设备发送目标请求信息,所述目标请求信息用于请求对所述终端的量化码本进行更新;
接收来自所述网络侧设备的目标响应信息,所述目标响应信息用于允许对所述终端的量化码本进行更新。
可选地,在处理器710执行所述对所述量化码本进行更新之前,射频单元701,还用于接收第四指示信息,其中,所述第四指示信息指示对所述量化码本进行更新。
可选地,所述终端为目标组内的终端,所述目标组包括至少两个终端,所述目标组内的终端的第一AI网络模型对应同一个第二AI网络模型。
可选地,在处理器710执行所述根据第一量化信息对第一信息进行量化处理之前,射频单元701,还用于接收第五指示信息,其中,所述第五指示信息指示对所述目标组内的全部终端的量化码本进行更新。
可选地,处理器710,还用于根据所述第五指示信息更新所述量化码本;
射频单元701,还用于向所述网络侧设备发送更新后的量化码本。
可选地,射频单元701,还用于接收来自所述网络侧设备的目标量化码本,其中,所述目标量化码本是根据所述目标组内的全部终端的量化码本确定的量化码本;
处理器710,还用于确定所述第一量化信息包括所述目标量化码本。
可选地,所述目标组内的全部终端的第一量化信息相同。
可选地,射频单元701,还用于向所述网络侧设备发送目标能力信息,所述目标能力信息指示以下至少一项:
所述终端是否支持量化码本;
所述终端是否支持更新量化码本;
所述终端支持的量化码本的标识。
可选地,所述第三信息为基于第二量化信息量化后的信息,处理器710执行的所述根据所述第三信息更新所述第一AI网络模型,包括:
基于所述第二量化信息对所述第三信息进行解量化处理,得到第五信息;
根据所述第五信息更新所述第一AI网络模型。
可选地,所述第二量化信息包括目标量化比特数,所述目标量化比特数由所述网络侧设备指示或由协议约定。
本申请实施例提供的终端700能够实现如图4所示更新AI网络模型的装置400执行的各个过程,且能够取得相同的有益效果,为避免重复,在此不再赘述。
本申请实施例还提供一种网络侧设备,包括处理器和通信接口,所述通信接口用于接收来自终端的第二信息,或者接收来自终端的第二信息和第二信道信息,其中,所述第二信息为对第一信息进行量化处理后得到的信息,所述第一信息与所述终端具有的第一AI网络模型对第一信道信息的第一处理结果相关,所述第二信道信息与所述第一信道信息相关;所述处理器用于根据第一量化信息对所述第二信息进行解量化处理,得到第四信息,以及基于第二AI网络模型对所述第四信息的第二处理结果,确定第三信道信息,以及根据所述第三信道信息和所述第二信道信息,更新所述第二AI网络模型,以及确定第三信息;所述通信接口还用于向所述终端发送所述第三信息。
该网络侧设备实施例能够实现如图5所示更新AI网络模型的装置500执行的各个过程,且能达到相同的技术效果,在此不再赘述。具体地,本申请实施例还提供了一种网络侧设备。如图8所示,该网络侧设备800包括:天线801、射频装置802、基带装置803、处理器804和存储器805。天线801与射频装置802连接。在上行方向上,射频装置802通过天线801接收信息,将接收的信息发送给基带装置803进行处理。在下行方向上,基带装置803对要发送的信息进行处理,并发送给射频装置802,射频装置802对收到的信息进行处理后经过天线801发送出去。
以上实施例中网络侧设备执行的方法可以在基带装置803中实现,该基带装置803包括基带处理器。
基带装置803例如可以包括至少一个基带板,该基带板上设置有多个芯片,如图8所示,其中一个芯片例如为基带处理器,通过总线接口与存储器805连接,以调用存储器805中的程序,执行以上方法实施例中所示的网络设备操作。
该网络侧设备还可以包括网络接口806,该接口例如为通用公共无线接口(Common Public Radio Interface,CPRI)。
具体地,本申请实施例的网络侧设备800还包括:存储在存储器805上并可在处理器804上运行的指令或程序,处理器804调用存储器805中的指令或程序执行图5所示各模块执行的方法,并达到相同的技术效果,为避免重复,故不在此赘述。
本申请实施例还提供一种可读存储介质,所述可读存储介质上存储有程序或指令,该程序或指令被处理器执行时实现如图2或图3所示方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
其中,所述处理器为上述实施例中所述的终端中的处理器。所述可读存储介质,包括计算机可读存储介质,如计算机只读存储器ROM、随机存取存储器RAM、磁碟或者光盘等。
本申请实施例另提供了一种芯片,所述芯片包括处理器和通信接口,所述通信接口和所述处理器耦合,所述处理器用于运行程序或指令,实现如图2或图3所示方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
应理解,本申请实施例提到的芯片还可以称为系统级芯片,系统芯片,芯片系统或片上系统芯片等。
本申请实施例另提供了一种计算机程序/程序产品,所述计算机程序/程序产品被存储在存储介质中,所述计算机程序/程序产品被至少一个处理器执行以实现如图2或图3所示方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
本申请实施例还提供了一种通信系统,包括:终端和网络侧设备,所述终端可用于执 行如图2所示的更新AI网络模型的方法的步骤,所述网络侧设备可用于执行如图3所示的更新AI网络模型的方法的步骤。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。此外,需要指出的是,本申请实施方式中的方法和装置的范围不限按示出或讨论的顺序来执行功能,还可包括根据所涉及的功能按基本同时的方式或按相反的顺序来执行功能,例如,可以按不同于所描述的次序来执行所描述的方法,并且还可以添加、省去、或组合各种步骤。另外,参照某些示例所描述的特征可在其他示例中被组合。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以计算机软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端(可以是手机,计算机,服务器,空调器,或者网络设备等)执行本申请各个实施例所述的方法。
上面结合附图对本申请的实施例进行了描述,但是本申请并不局限于上述的具体实施方式,上述的具体实施方式仅仅是示意性的,而不是限制性的,本领域的普通技术人员在本申请的启示下,在不脱离本申请宗旨和权利要求所保护的范围情况下,还可做出很多形式,均属于本申请的保护之内。

Claims (41)

  1. 一种更新AI网络模型的方法,包括:
    终端根据第一量化信息对第一信息进行量化处理,得到第二信息,其中,所述终端具有第一AI网络模型,所述第一信息与所述第一AI网络模型对第一信道信息的第一处理结果相关;
    所述终端向网络侧设备发送所述第二信息,或者发送所述第二信息和第二信道信息,所述第二信道信息与所述第一信道信息相关;
    所述终端接收来自所述网络侧设备的第三信息,其中,所述第三信息是根据第三信道信息和所述第二信道信息确定的,所述第三信道信息与第二AI网络模型对第四信息的第二处理结果相关,所述第四信息为对所述第二信息进行解量化处理得到的信息;
    所述终端根据所述第三信息更新所述第一AI网络模型。
  2. 根据权利要求1所述的方法,其中,所述第一量化信息包括以下至少一项:
    量化方式,所述量化方式包括标量量化和/或矢量量化;
    更新策略;
    所述第一AI网络模型的第一处理结果的浮点数个数;
    所述第二信息的比特数;
    量化比特数;
    量化码本;
    量化处理的分段方式。
  3. 根据权利要求2所述的方法,其中,所述方法还包括:
    所述终端接收来自所述网络侧设备的训练信息,所述训练信息包括以下至少一项:
    所述第二信息的上报方式,或,所述第二信息和所述第二信道信息的上报方式;
    所述第三信息的接收方式。
  4. 根据权利要求3所述的方法,其中,所述第二信息的上报方式,包括所述第二信息的内容,其中,所述第二信息的内容包括以下至少一项:
    所述第一处理结果,所述第一处理结果为量化前的浮点数格式的信息;
    对所述第一处理结果进行量化处理得到的二进制格式的信息;
    对所述第一处理结果依次进行量化处理和解量化处理后得到的浮点数格式的信息。
  5. 根据权利要求1所述的方法,其中,所述第一量化信息满足以下至少一项:
    由所述网络侧设备指示;
    由协议约定;
    由所述终端选择并上报所述网络侧设备;
    与所述第一AI网络模型关联。
  6. 根据权利要求1至5中任一项所述的方法,其中,在所述终端根据第一量化信息对第一信息进行量化处理之前,所述方法还包括:
    所述终端接收来自所述网络侧设备的第一指示信息;
    所述终端根据所述第一指示信息确定所述第一量化信息;
    其中,所述第一指示信息指示以下至少一项:
    所述第一量化信息;
    第一标识,所述第一标识与所述第一量化信息对应;
    第二标识,所述第二标识用于标识量化码本池中的量化码本,所述第一量化信息包括所述第二标识对应的量化码本;
    第三标识,所述第三标识用于标识量化码本组,所述量化码本组包括至少两个量化码本,所述第一量化信息包括所述第三标识对应的量化码本组中的量化码本。
  7. 根据权利要求1至5中任一项所述的方法,其中,在所述终端接收来自所述网络侧设备的第三信息之前,所述方法还包括:
    所述终端向所述网络侧设备发送第二指示信息,其中,所述第二指示信息指示以下至少一项:
    所述第一量化信息;
    第一标识,所述第一标识与所述第一量化信息对应;
    第二标识,所述第二标识用于标识量化码本池中的量化码本,所述第一量化信息包括所述第二标识对应的量化码本;
    第三标识,所述第三标识用于标识量化码本组,所述量化码本组包括至少两个量化码本,所述第一量化信息包括所述第三标识对应的量化码本组中的量化码本;
    第四标识,所述第四标识用于标识所述第三标识对应的量化码本组内的目标量化码本,所述第一量化信息包括所述目标量化码本。
  8. 根据权利要求1至5中任一项所述的方法,其中,在所述第一量化信息包括量化码本的情况下,所述方法还包括:
    所述终端对所述量化码本进行更新,所述第一量化信息包括更新后的量化码本;
    所述终端向所述网络侧设备发送第三指示信息,其中,所述第三指示信息指示所述更新后的量化码本或所述更新后的量化码本的标识。
  9. 根据权利要求8所述的方法,其中,所述终端对所述量化码本进行更新,包括以下至少一项:
    所述终端周期性地更新所述量化码本;
    所述终端基于所述网络侧设备的指示更新所述量化码本。
  10. 根据权利要求8所述的方法,其中,在所述终端对所述量化码本进行更新之前,所述方法还包括:
    所述终端向所述网络侧设备发送目标请求信息,所述目标请求信息用于请求对所述终端的量化码本进行更新;
    所述终端接收来自所述网络侧设备的目标响应信息,所述目标响应信息用于允许对所述终端的量化码本进行更新。
  11. 根据权利要求8所述的方法,其中,在所述终端对所述量化码本进行更新之前,所述方法还包括:
    所述终端接收第四指示信息,其中,所述第四指示信息指示对所述量化码本进行更新。
  12. 根据权利要求1至5中任一项所述的方法,其中,所述终端为目标组内的终端,所述目标组包括至少两个终端,所述目标组内的终端的第一AI网络模型对应同一个第二AI网络模型。
  13. 根据权利要求12所述的方法,其中,在所述终端根据第一量化信息对第一信息进行量化处理之前,所述方法还包括:
    所述终端接收第五指示信息,其中,所述第五指示信息指示对所述目标组内的全部终端的量化码本进行更新。
  14. 根据权利要求13所述的方法,其中,所述方法还包括:
    所述终端根据所述第五指示信息更新所述量化码本,并向所述网络侧设备发送更新后的量化码本。
  15. 根据权利要求14所述的方法,其中,所述方法还包括:
    所述终端接收来自所述网络侧设备的目标量化码本,其中,所述目标量化码本是根据所述目标组内的全部终端的量化码本确定的量化码本;
    所述终端确定所述第一量化信息包括所述目标量化码本。
  16. 根据权利要求12所述的方法,其中,所述目标组内的全部终端的第一量化信息相同。
  17. 根据权利要求1至5中任一项所述的方法,其中,所述方法还包括:
    所述终端向所述网络侧设备发送目标能力信息,所述目标能力信息指示以下至少一项:
    所述终端是否支持量化码本;
    所述终端是否支持更新量化码本;
    所述终端支持的量化码本的标识。
  18. 根据权利要求1所述的方法,其中,所述第三信息为基于第二量化信息量化后的信息,所述终端根据所述第三信息更新所述第一AI网络模型,包括:
    所述终端基于所述第二量化信息对所述第三信息进行解量化处理,得到第五信息;
    所述终端根据所述第五信息更新所述第一AI网络模型。
  19. 根据权利要求18所述的方法,其中,所述第二量化信息包括目标量化比特数,所述目标量化比特数由所述网络侧设备指示或由协议约定。
  20. 一种更新AI网络模型的方法,包括:
    网络侧设备接收来自终端的第二信息,或者接收来自终端的第二信息和第二信道信息,其中,所述第二信息为对第一信息进行量化处理后得到的信息,所述第一信息与所述终端具有的第一AI网络模型对第一信道信息的第一处理结果相关,所述第二信道信息与所述第一信道信息相关;
    所述网络侧设备根据第一量化信息对所述第二信息进行解量化处理,得到第四信息;
    所述网络侧设备基于第二AI网络模型对所述第四信息的第二处理结果,确定第三信道信息;
    所述网络侧设备根据所述第三信道信息和所述第二信道信息,更新所述第二AI网络模型,以及确定第三信息;
    所述网络侧设备向所述终端发送所述第三信息。
  21. 根据权利要求20所述的方法,其中,所述第一量化信息包括以下至少一项:
    量化方式,所述量化方式包括标量量化和/或矢量量化;
    更新策略;
    所述第一AI网络模型的第一处理结果的浮点数个数;
    所述第二信息的比特数;
    量化比特数;
    量化码本;
    量化处理的分段方式。
  22. 根据权利要求21所述的方法,其中,所述方法还包括:
    所述网络侧设备向所述终端发送训练信息,所述训练信息包括以下至少一项:
    所述第二信息的上报方式,或,所述第二信息和所述第二信道信息的上报方式;
    所述第三信息的接收方式。
  23. 根据权利要求22所述的方法,其中,所述第二信息的上报方式,包括所述第二信息的内容,其中,所述第二信息的内容包括以下至少一项:
    所述第一处理结果,所述第一处理结果为量化前的浮点数格式的信息;
    对所述第一处理结果进行量化处理得到的二进制格式的信息;
    对所述第一处理结果依次进行量化处理和解量化处理后得到的浮点数格式的信息。
  24. 根据权利要求20所述的方法,其中,所述第一量化信息满足以下至少一项:
    由所述网络侧设备指示;
    由协议约定;
    由所述终端选择并上报所述网络侧设备;
    与所述第一AI网络模型关联。
  25. 根据权利要求20至24中任一项所述的方法,其中,在所述网络侧设备接收来自终端的第二信息之前,所述方法还包括:
    所述网络侧设备向所述终端发送第一指示信息,其中,所述第一指示信息指示以下至少一项:
    所述第一量化信息;
    第一标识,所述第一标识与所述第一量化信息对应;
    第二标识,所述第二标识用于标识量化码本池中的量化码本,所述第一量化信息包括所述第二标识对应的量化码本;
    第三标识,所述第三标识用于标识量化码本组,所述量化码本组包括至少两个量化码本,所述第一量化信息包括所述第三标识对应的量化码本组中的量化码本。
  26. 根据权利要求20至24中任一项所述的方法,其中,在所述网络侧设备根据第一量化信息对所述第二信息进行解量化处理之前,所述方法还包括:
    所述网络侧设备接收来自所述终端的第二指示信息;
    所述网络侧设备根据所述第二指示信息确定所述第一量化信息;
    其中,所述第二指示信息指示以下至少一项:
    所述第一量化信息;
    第一标识,所述第一标识与所述第一量化信息对应;
    第二标识,所述第二标识用于标识量化码本池中的量化码本,所述第一量化信息包括所述第二标识对应的量化码本;
    第三标识,所述第三标识用于标识量化码本组,所述量化码本组包括至少两个量化码本,所述第一量化信息包括所述第三标识对应的量化码本组中的量化码本;
    第四标识,所述第四标识用于标识所述第三标识对应的量化码本组内的目标量化码本,所述第一量化信息包括所述目标量化码本。
  27. 根据权利要求20至24中任一项所述的方法,其中,在所述第一量化信息包括量化码本的情况下,所述方法还包括:
    所述网络侧设备接收来自所述终端的第三指示信息,其中,所述第三指示信息指示所述终端更新后的量化码本或所述更新后的量化码本的标识。
  28. 根据权利要求27所述的方法,其中,在所述网络侧设备接收来自所述终端的第三指示信息之前,所述方法还包括:
    所述网络侧设备接收来自所述终端的目标请求信息,所述目标请求信息用于请求对所述终端的量化码本进行更新;
    所述网络侧设备向所述终端发送目标响应信息,所述目标响应信息用于允许对所述终端的量化码本进行更新。
  29. 根据权利要求27所述的方法,其中,在所述网络侧设备接收来自所述终端的第三指示信息之前,所述方法还包括:
    所述网络侧设备向所述终端发送第四指示信息,其中,所述第四指示信息指示终端对所述量化码本进行更新。
  30. 根据权利要求20至24中任一项所述的方法,其中,所述网络侧设备接收来自终端的第二信息,包括:
    所述网络侧设备接收来自目标组内的每一个终端的第二信息,其中,所述目标组包括至少两个终端,所述目标组内的终端的第一AI网络模型对应同一个第二AI网络模型。
  31. 根据权利要求30所述的方法,其中,在所述网络侧设备接收来自目标组内的每一个终端的第二信息之前,所述方法还包括:
    所述网络侧设备向所述目标组内的每一个终端发送第五指示信息,其中,所述第五指示信息指示所述目标组内的全部终端更新各自的量化码本。
  32. 根据权利要求31所述的方法,其中,所述方法还包括:
    所述网络侧设备接收所述目标组内的每一个终端各自更新后的量化码本。
  33. 根据权利要求32所述的方法,其中,所述方法还包括:
    所述网络侧设备根据所述目标组内的全部终端的量化码本,确定目标量化码本;
    所述网络侧设备向所述目标组内的每一个终端发送所述目标量化码本,其中,所述第一量化信息包括所述目标量化码本。
  34. 根据权利要求30所述的方法,其中,所述目标组内的全部终端的第一量化信息相同。
  35. 根据权利要求20至24中任一项所述的方法,其中,所述方法还包括:
    所述网络侧设备接收来自所述终端的目标能力信息;
    所述网络侧设备根据所述目标能力信息,确定所述第一量化信息;
    其中,所述目标能力信息指示以下至少一项:
    所述终端是否支持量化码本;
    所述终端是否支持更新量化码本;
    所述终端支持的量化码本的标识。
  36. 根据权利要求20所述的方法,其中,所述网络侧设备基于第二AI网络模型对所述第一信息的第二处理结果,确定第三信息包括:
    所述网络侧设备基于第二AI网络模型对所述第一信息的第二处理结果,得到第四信息;
    所述网络侧设备基于第二量化信息对所述第四信息进行量化处理,得到所述第三信息。
  37. 根据权利要求36所述的方法,其中,所述第二量化信息包括目标量化比特数,所述目标量化比特数由所述网络侧设备指示或由协议约定。
  38. 一种更新AI网络模型的装置,应用于终端,所述装置包括:
    第一处理模块,用于根据第一量化信息对第一信息进行量化处理,得到第二信息,其中,所述终端具有第一AI网络模型,所述第一信息与所述第一AI网络模型对第一信道信息的第一处理结果相关;
    第一发送模块,用于向网络侧设备发送所述第二信息,或者发送所述第二信息和第二信道信息,所述第二信道信息与所述第一信道信息相关;
    第一接收模块,用于接收来自所述网络侧设备的第三信息,其中,所述第三信息是根据第三信道信息和所述第二信道信息确定的,所述第三信道信息与第二AI网络模型对第四信息的第二处理结果相关,所述第四信息为对所述第二信息进行解量化处理得到的信息;
    第一更新模块,用于根据所述第三信息更新所述第一AI网络模型。
  39. 一种更新AI网络模型的装置,应用于网络侧设备,所述装置包括:
    第二接收模块,用于接收来自终端的第二信息,或者接收来自终端的第二信息和第二信道信息,其中,所述第二信息为对第一信息进行量化处理后得到的信息,所述第一信息与所述终端具有的第一AI网络模型对第一信道信息的第一处理结果相关,所述第二信道信息与所述第一信道信息相关;
    第二处理模块,用于根据第一量化信息对所述第二信息进行解量化处理,得到第四信息;
    第一确定模块,用于基于第二AI网络模型对所述第四信息的第二处理结果,确定第三信道信息;
    第二更新模块,用于根据所述第三信道信息和所述第二信道信息,更新所述第二AI网络模型,以及确定第三信息;
    第二发送模块,用于向所述终端发送所述第三信息。
  40. 一种通信设备,包括处理器和存储器,所述存储器存储可在所述处理器上运行的程序或指令,所述程序或指令被所述处理器执行时实现如权利要求1至19中任一项所述的更新AI网络模型的方法的步骤,或者实现如权利要求20至37中任一项所述的更新AI网络模型的方法的步骤。
  41. 一种可读存储介质,所述可读存储介质上存储程序或指令,所述程序或指令被处理器执行时实现如权利要求1至19中任一项所述的更新AI网络模型的方法的步骤,或者实现如权利要求20至37中任一项所述的更新AI网络模型的方法的步骤。
PCT/CN2023/128033 2022-11-14 2023-10-31 更新ai网络模型的方法、装置和通信设备 WO2024104126A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211426189.1A CN118075120A (zh) 2022-11-14 2022-11-14 更新ai网络模型的方法、装置和通信设备
CN202211426189.1 2022-11-14

Publications (1)

Publication Number Publication Date
WO2024104126A1 true WO2024104126A1 (zh) 2024-05-23

Family

ID=91083761

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/128033 WO2024104126A1 (zh) 2022-11-14 2023-10-31 更新ai网络模型的方法、装置和通信设备

Country Status (2)

Country Link
CN (1) CN118075120A (zh)
WO (1) WO2024104126A1 (zh)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109525292A (zh) * 2018-12-24 2019-03-26 东南大学 一种采用比特级优化网络的信道信息压缩反馈方法
CN113810086A (zh) * 2020-06-12 2021-12-17 华为技术有限公司 信道信息反馈方法、通信装置及存储介质
WO2021253936A1 (zh) * 2020-06-19 2021-12-23 株式会社Ntt都科摩 用户设备、基站、用户设备和基站的信道估计和反馈系统
CN114337911A (zh) * 2020-09-30 2022-04-12 华为技术有限公司 一种基于神经网络的通信方法以及相关装置
US20220116764A1 (en) * 2020-10-09 2022-04-14 Qualcomm Incorporated User equipment (ue) capability report for machine learning applications
WO2022151084A1 (zh) * 2021-01-13 2022-07-21 Oppo广东移动通信有限公司 信息量化方法、装置、通信设备及存储介质
CN115037608A (zh) * 2021-03-04 2022-09-09 维沃移动通信有限公司 量化的方法、装置、设备及可读存储介质


Also Published As

Publication number Publication date
CN118075120A (zh) 2024-05-24


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23890562

Country of ref document: EP

Kind code of ref document: A1