WO2023168718A1 - Model training and deployment method, apparatus, device and storage medium (一种模型训练部署方法/装置/设备及存储介质) - Google Patents

Model training and deployment method, apparatus, device and storage medium

Info

Publication number
WO2023168718A1
Authority
WO
WIPO (PCT)
Prior art keywords
model
information
encoder
decoder
encoder model
Prior art date
Application number
PCT/CN2022/080478
Other languages
English (en)
French (fr)
Inventor
池连刚
许威
Original Assignee
北京小米移动软件有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京小米移动软件有限公司 (Beijing Xiaomi Mobile Software Co., Ltd.)
Priority to PCT/CN2022/080478 priority Critical patent/WO2023168718A1/zh
Publication of WO2023168718A1 publication Critical patent/WO2023168718A1/zh


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 8/00: Network data management
    • H04W 8/22: Processing or transfer of terminal data, e.g. status or physical capabilities
    • H04W 8/24: Transfer of terminal data

Definitions

  • the present disclosure relates to the field of communication technology, and in particular, to a model training deployment method/device/equipment and a storage medium.
  • AI: Artificial Intelligence
  • ML: Machine Learning
  • the model training deployment method/device/equipment and storage medium proposed in this disclosure are used to train and deploy AI/ML models.
  • the method proposed in one aspect of the present disclosure is applied to a network device and includes:
  • acquiring capability information reported by the UE, where the capability information is used to indicate the AI and/or ML support capabilities of the UE;
  • generating an encoder model and a decoder model based on the capability information;
  • sending the model information of the encoder model to the UE, where the model information of the encoder model is used to deploy the encoder model.
  • the method proposed by another aspect of the present disclosure is applied to a UE and includes:
  • reporting capability information to the network device, where the capability information is used to indicate the AI and/or ML support capabilities of the UE;
  • acquiring the model information of the encoder model sent by the network device, where the model information of the encoder model is used to deploy the encoder model;
  • generating an encoder model based on the model information of the encoder model.
  • Another aspect of the present disclosure provides a device, including:
  • An acquisition module configured to acquire capability information reported by the UE, where the capability information is used to indicate the AI and/or ML support capabilities of the UE;
  • a generation module for generating an encoder model and a decoder model based on the capability information
  • a sending module configured to send model information of the encoder model to the UE, where the model information of the encoder model is used to deploy the encoder model.
  • Another aspect of the present disclosure provides a device, including:
  • a reporting module configured to report capability information to the network device, where the capability information is used to indicate the AI and/or ML support capabilities of the UE;
  • An acquisition module configured to acquire the model information of the encoder model sent by the network device, and the model information of the encoder model is used to deploy the encoder model;
  • a generating module configured to generate an encoder model based on the model information of the encoder model.
  • the device includes a processor and a memory.
  • a computer program is stored in the memory.
  • the processor executes the computer program stored in the memory, so that the device performs the method proposed in the embodiment of the above aspect.
  • the device includes a processor and a memory.
  • a computer program is stored in the memory.
  • the processor executes the computer program stored in the memory, so that the device performs the method proposed in the above embodiment.
  • a communication device provided by another embodiment of the present disclosure includes: a processor and an interface circuit
  • the interface circuit is used to receive code instructions and transmit them to the processor
  • the processor is configured to run the code instructions to perform the method proposed in the embodiment of one aspect.
  • a communication device provided by another embodiment of the present disclosure includes: a processor and an interface circuit
  • the interface circuit is used to receive code instructions and transmit them to the processor
  • the processor is configured to run the code instructions to perform the method proposed in another embodiment.
  • a computer-readable storage medium provided by an embodiment of another aspect of the present disclosure is used to store instructions. When the instructions are executed, the method proposed by the embodiment of the present disclosure is implemented.
  • a computer-readable storage medium provided by an embodiment of another aspect of the present disclosure is used to store instructions. When the instructions are executed, the method proposed by the embodiment of another aspect is implemented.
  • the network device will first obtain the capability information reported by the UE, where the capability information is used to indicate the AI and/or ML support capabilities of the UE.
  • the network device will then generate an encoder model and a decoder model based on the capability information, and send the model information of the encoder model to the UE.
  • the model information of the encoder model is used to deploy the encoder model, so the UE can generate an encoder model based on the model information of the encoder model. Therefore, embodiments of the present disclosure provide a method for generating and deploying models, which can be used to train and deploy AI/ML models.
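The reporting-selection-deployment exchange summarized above can be sketched as follows (all class, field, and method names here are hypothetical; the disclosure does not define a concrete API, and matching on a single "max layers" capability field is an illustrative assumption):

```python
# Toy sketch of the capability report / model deployment exchange.
class NetworkDevice:
    def __init__(self, model_zoo):
        # model_zoo: list of (encoder_info, decoder_info) pairs; each encoder
        # is tagged with the number of layers the UE must be able to support.
        self.model_zoo = model_zoo
        self.decoder_info = None

    def on_capability_report(self, capability):
        # Pick the first encoder/decoder pair the reporting UE can support;
        # the decoder stays on the network side, the encoder info is sent out.
        for enc_info, dec_info in self.model_zoo:
            if enc_info["num_layers"] <= capability["max_layers"]:
                self.decoder_info = dec_info
                return enc_info
        return None

class UE:
    def __init__(self, max_layers):
        self.capability = {"max_layers": max_layers}
        self.encoder_info = None

    def deploy_encoder(self, enc_info):
        # The UE rebuilds (deploys) the encoder from the received model info.
        self.encoder_info = enc_info
        return enc_info is not None

zoo = [({"num_layers": 8}, {"num_layers": 8}),
       ({"num_layers": 4}, {"num_layers": 4})]
ue, nw = UE(max_layers=4), NetworkDevice(zoo)
enc_info = nw.on_capability_report(ue.capability)
assert ue.deploy_encoder(enc_info)          # the 4-layer pair was selected
```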
  • Figure 1 is a schematic flowchart of a method provided by an embodiment of the present disclosure
  • Figure 2a is a schematic flowchart of a method provided by another embodiment of the present disclosure.
  • Figure 2b is a schematic flowchart of a method provided by yet another embodiment of the present disclosure.
  • Figure 2c is a schematic flowchart of a method provided by yet another embodiment of the present disclosure.
  • Figure 2d is a schematic flowchart of a method provided by yet another embodiment of the present disclosure.
  • Figure 2e is a schematic flowchart of a method provided by yet another embodiment of the present disclosure.
  • Figure 3a is a schematic flowchart of a method provided by yet another embodiment of the present disclosure.
  • Figure 3b is a schematic flowchart of a method provided by yet another embodiment of the present disclosure.
  • Figure 3c is a schematic flowchart of a method provided by yet another embodiment of the present disclosure.
  • Figure 3d is a schematic flowchart of a method provided by yet another embodiment of the present disclosure.
  • Figure 3e is a schematic flowchart of a method provided by yet another embodiment of the present disclosure.
  • Figure 4 is a schematic flowchart of a method provided by yet another embodiment of the present disclosure.
  • Figure 5 is a schematic flowchart of a method provided by yet another embodiment of the present disclosure.
  • Figure 6a is a schematic flowchart of a method provided by yet another embodiment of the present disclosure.
  • Figure 6b is a schematic flowchart of a method provided by yet another embodiment of the present disclosure.
  • Figure 6c is a schematic flowchart of a method provided by yet another embodiment of the present disclosure.
  • Figure 6d is a schematic flowchart of a method provided by yet another embodiment of the present disclosure.
  • Figure 7 is a schematic structural diagram of a device provided by an embodiment of the present disclosure.
  • Figure 8 is a schematic structural diagram of a device provided by another embodiment of the present disclosure.
  • Figure 9 is a block diagram of a user equipment provided by an embodiment of the present disclosure.
  • Figure 10 is a block diagram of a network side device provided by an embodiment of the present disclosure.
  • Although the terms first, second, third, etc. may be used to describe various information in the embodiments of the present disclosure, the information should not be limited by these terms. These terms are only used to distinguish information of the same type from each other.
  • first information may also be called second information, and similarly, the second information may also be called first information.
  • the word "if" as used herein may be interpreted as "when," "upon," or "in response to determining."
  • Figure 1 is a schematic flowchart of a model training and deployment method provided by an embodiment of the present disclosure. The method is executed by a network device. As shown in Figure 1, the method may include the following steps:
  • Step 101 Obtain the capability information reported by the UE (User Equipment).
  • a UE may be a device that provides voice and/or data connectivity to users.
  • Terminal devices can communicate with one or more core networks via RAN (Radio Access Network).
  • A UE may be an IoT terminal, such as a sensor device, a mobile phone (or "cellular" phone), or a computer with an IoT terminal function, which may, for example, be a fixed, portable, pocket-sized, handheld, computer-built-in, or vehicle-mounted device.
  • A UE may also be called a station (STA), subscriber unit, subscriber station, mobile station, remote station, remote terminal, access point, access terminal, user terminal, or user agent.
  • the UE may also be a device of an unmanned aerial vehicle.
  • the UE may also be a vehicle-mounted device, for example, it may be a driving computer with a wireless communication function, or a wireless terminal connected to an external driving computer.
  • the UE may also be a roadside device, for example, it may be a streetlight, a signal light, or other roadside device with wireless communication functions.
  • the capability information may be used to indicate the UE's AI (Artificial Intelligence) and/or ML (Machine Learning) support capabilities.
  • the model may include at least one of the following:
  • the above capability information may include at least one of the following:
  • the maximum support capability information of the UE for the model includes the structural information of the most complex model supported by the UE.
  • the above-mentioned structural information may include, for example, the number of layers of the model.
  • Step 102 Generate an encoder model and a decoder model based on the capability information.
  • multiple different encoder models to be trained and/or multiple different decoder models to be trained are stored in the network device, where there is a correspondence between the encoder models and the decoder models.
  • the network device can, based on the capability information, select an encoder model to be trained that matches the AI and/or ML support capabilities of the UE, and/or select a decoder model to be trained that matches the network device's own AI and/or ML support capabilities, and then train the encoder model to be trained and/or the decoder model to be trained to generate the encoder model and the decoder model.
  • Step 103 Send the model information of the encoder model to the UE, and the model information of the encoder model can be used to deploy the encoder model.
  • the model information of the above-mentioned encoder model may include at least one of the following:
  • the types of the above-mentioned encoder models may include: a CNN (Convolutional Neural Network) model, a fully connected DNN (Deep Neural Network) model, or a combined CNN and fully connected DNN model.
  • when the encoder model is a CNN model, its model parameters may include at least one of: the compression rate of the CNN model, the number of convolutional layers, the arrangement information between the convolutional layers, the weight information of each convolutional layer, the convolution kernel size of each convolutional layer, and the normalization layer and activation function type applied to each convolutional layer.
  • when the encoder model is a fully connected DNN model, its model parameters may include at least one of: the compression rate of the fully connected DNN model, the number of fully connected layers, the arrangement information between the fully connected layers, the weight information of each fully connected layer, the number of nodes in each fully connected layer, and the normalization layer and activation function type applied to each fully connected layer.
  • when the encoder model is a combined CNN and fully connected DNN model, its model parameters may include at least one of: the compression rate of the combined model, the numbers of convolutional layers and fully connected layers, the matching mode between them, the weight information and convolution kernel size of each convolutional layer, the number of nodes and weight information of each fully connected layer, and the normalization layer and activation function type applied to each convolutional and fully connected layer.
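As an illustration, the model information for a CNN encoder could be packaged from the parameters listed above roughly as follows (the field names and values are assumptions for illustration only; the disclosure does not specify a serialization format, and the weight entries are deliberately elided):

```python
# Hypothetical "model information" payload for a 3-layer CNN encoder model.
cnn_encoder_info = {
    "model_type": "CNN",
    "compression_rate": 1 / 16,          # ratio of code size to input size
    "num_conv_layers": 3,
    "layer_order": ["conv1", "conv2", "conv3"],
    "layers": [
        {"name": "conv1", "kernel_size": 3, "weights": None,  # weights elided
         "normalization": "batch_norm", "activation": "relu"},
        {"name": "conv2", "kernel_size": 3, "weights": None,
         "normalization": "batch_norm", "activation": "relu"},
        {"name": "conv3", "kernel_size": 1, "weights": None,
         "normalization": None, "activation": "tanh"},
    ],
}

# A UE receiving this payload has everything structural it needs to rebuild
# the encoder: layer count, ordering, kernel sizes, and per-layer norm/activation.
assert len(cnn_encoder_info["layers"]) == cnn_encoder_info["num_conv_layers"]
```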
  • In this way, the network device first obtains the capability information reported by the UE, where the capability information is used to indicate the AI and/or ML support capabilities of the UE; the network device then generates an encoder model and a decoder model based on the capability information, and sends the model information of the encoder model to the UE. The model information of the encoder model is used to deploy the encoder model, so the UE can generate the encoder model based on that model information. Therefore, embodiments of the present disclosure provide a method for generating and deploying models, which can be used to train and deploy AI/ML models.
  • Figure 2a is a schematic flowchart of a model training and deployment method provided by an embodiment of the present disclosure. The method is executed by a network device. As shown in Figure 2a, the method may include the following steps:
  • Step 201a Obtain the capability information reported by the UE.
  • For a detailed introduction to step 201a, reference may be made to the description of the above embodiments, which will not be repeated here.
  • Step 202a Select the encoder model to be trained and the decoder model to be trained based on the capability information.
  • the network device can, based on the capability information, select from the multiple different stored encoder models to be trained and decoder models to be trained an encoder model to be trained that is supported by the UE and a decoder model to be trained that is supported by the network device.
  • the encoder model to be trained and the decoder model to be trained selected by the network device correspond to each other.
  • Step 203a Determine sample data based on at least one of the information reported by the UE and the stored information of the network device.
  • the above-mentioned information reported by the UE may be information currently reported by the UE, or information previously reported by the UE.
  • the above information may be information reported by the UE without encoding and compression, and/or information reported by the UE after encoding and compression.
  • Step 204a Train the encoder model to be trained and the decoder model to be trained based on the sample data to generate an encoder model and a decoder model.
  • Step 205a Send the model information of the encoder model to the UE, and the model information of the encoder model is used to deploy the encoder model.
  • For a detailed introduction to step 205a, reference may be made to the description of the above embodiments, which will not be repeated here.
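The joint training in step 204a can be sketched as a toy linear autoencoder fitted to the sample data (plain NumPy, a linear architecture, and mean-squared-error loss are all assumptions for illustration; the disclosure does not prescribe a framework, architecture, or loss function):

```python
import numpy as np

# Sample data stands in for the information determined in step 203a.
rng = np.random.default_rng(0)
samples = rng.standard_normal((200, 8))

# Encoder compresses 8 values to a 2-value code; decoder reconstructs 8 values.
W_enc = rng.standard_normal((8, 2)) * 0.1
W_dec = rng.standard_normal((2, 8)) * 0.1

def mse(x, x_hat):
    return float(np.mean((x - x_hat) ** 2))

lr, n = 0.01, len(samples)
loss_before = mse(samples, samples @ W_enc @ W_dec)
for _ in range(500):
    code = samples @ W_enc                    # encoder forward pass
    recon = code @ W_dec                      # decoder forward pass
    err = recon - samples
    g_dec = code.T @ err / n                  # gradient w.r.t. decoder weights
    g_enc = samples.T @ (err @ W_dec.T) / n   # gradient w.r.t. encoder weights
    W_dec -= lr * g_dec                       # both models updated jointly
    W_enc -= lr * g_enc
loss_after = mse(samples, samples @ W_enc @ W_dec)
assert loss_after < loss_before               # joint training reduced the error
```

Training the two halves together, as in step 204a, is what ties the UE-side encoder to the network-side decoder: the decoder only learns to invert the specific code the encoder produces.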
  • In this way, the network device first obtains the capability information reported by the UE, where the capability information is used to indicate the AI and/or ML support capabilities of the UE; the network device then generates an encoder model and a decoder model based on the capability information, and sends the model information of the encoder model to the UE. The model information of the encoder model is used to deploy the encoder model, so the UE can generate the encoder model based on that model information. Therefore, embodiments of the present disclosure provide a method for generating and deploying models, which can be used to train and deploy AI/ML models.
  • Figure 2b is a schematic flowchart of a model training and deployment method provided by an embodiment of the present disclosure. The method is executed by a network device. As shown in Figure 2b, the method may include the following steps:
  • Step 201b Obtain the capability information reported by the UE.
  • For a detailed introduction to step 201b, reference may be made to the description of the above embodiments, which will not be repeated here.
  • Step 202b Select the encoder model to be trained based on the capability information.
  • the network device can select the encoder model to be trained supported by the UE based on the capability information from a plurality of different encoder models to be trained stored therein.
  • the above-mentioned encoder model to be trained should meet the following conditions:
  • the selected encoder model to be trained should be a model that matches the capability information of the UE (that is, the model supported by the UE);
  • the decoder model to be trained corresponding to the selected encoder model to be trained should be a model supported by the network device.
  • Step 203b Determine a decoder model to be trained that matches the encoder model to be trained based on the encoder model to be trained.
  • the model information of the decoder model to be trained that matches the encoder model to be trained can be determined based on the model information of the encoder model to be trained, and then the decoder model to be trained can be deployed based on that model information.
  • Step 204b Determine sample data based on at least one of the information reported by the UE and the stored information of the network device.
  • the above-mentioned information reported by the UE may be information currently reported by the UE, or information previously reported by the UE.
  • the above information may be information reported by the UE without encoding and compression, and/or information reported by the UE after encoding and compression.
  • Step 205b Train the encoder model to be trained and the decoder model to be trained based on the sample data to generate an encoder model and a decoder model.
  • Step 206b Send the model information of the encoder model to the UE, and the model information of the encoder model is used to deploy the encoder model.
  • For a detailed introduction to step 206b, reference may be made to the description of the above embodiments, which will not be repeated here.
  • In this way, the network device first obtains the capability information reported by the UE, where the capability information is used to indicate the AI and/or ML support capabilities of the UE; the network device then generates an encoder model and a decoder model based on the capability information, and sends the model information of the encoder model to the UE. The model information of the encoder model is used to deploy the encoder model, so the UE can generate the encoder model based on that model information. Therefore, embodiments of the present disclosure provide a method for generating and deploying models, which can be used to train and deploy AI/ML models.
  • Figure 2c is a schematic flowchart of a model training and deployment method provided by an embodiment of the present disclosure. The method is executed by a network device. As shown in Figure 2c, the method may include the following steps:
  • Step 201c Obtain the capability information reported by the UE.
  • For a detailed introduction to step 201c, reference may be made to the description of the above embodiments, which will not be repeated here.
  • Step 202c Select the encoder model to be trained based on the capability information.
  • the network device can select the encoder model to be trained supported by the UE based on the capability information from a plurality of different encoder models to be trained stored therein.
  • the above-mentioned encoder model to be trained should meet the following conditions:
  • the selected encoder model to be trained should be a model that matches the capability information of the UE (that is, the model supported by the UE);
  • the decoder model to be trained corresponding to the selected encoder model to be trained should be a model supported by the network device.
  • Step 203c Determine sample data based on at least one of the information reported by the UE and the stored information of the network device.
  • the above-mentioned information reported by the UE may be information currently reported by the UE, or information previously reported by the UE.
  • the above information may be information reported by the UE without encoding and compression, and/or information reported by the UE after encoding and compression.
  • Step 204c Train the encoder model to be trained based on the sample data to generate an encoder model.
  • Step 205c Determine a decoder model that matches the encoder model based on the encoder model.
  • model information of a decoder model that matches the encoder model can be determined based on the model information of the encoder model, and then the decoder model can be deployed based on the model information of the decoder model.
  • Step 206c Send the model information of the encoder model to the UE, and the model information of the encoder model is used to deploy the encoder model.
  • For a detailed introduction to step 206c, reference may be made to the description of the above embodiments, which will not be repeated here.
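The matching in step 205c can be sketched by mirroring the encoder's structure (reversing the layer sizes is an assumption for illustration; the disclosure only states that the matching decoder's model information can be determined from the encoder's model information):

```python
# Hypothetical sketch: derive a matching decoder's model information by
# mirroring the encoder's fully connected layer sizes, so the decoder
# expands the code back through the same dimensions the encoder compressed.
def matching_decoder_info(encoder_info):
    layers = encoder_info["layer_sizes"]          # e.g. [256, 64, 16]
    return {"type": encoder_info["type"],
            "layer_sizes": list(reversed(layers))}

enc = {"type": "fully_connected_dnn", "layer_sizes": [256, 64, 16]}
dec = matching_decoder_info(enc)
assert dec["layer_sizes"] == [16, 64, 256]        # decoder reverses the encoder
```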
  • In this way, the network device first obtains the capability information reported by the UE, where the capability information is used to indicate the AI and/or ML support capabilities of the UE; the network device then generates an encoder model and a decoder model based on the capability information, and sends the model information of the encoder model to the UE. The model information of the encoder model is used to deploy the encoder model, so the UE can generate the encoder model based on that model information. Therefore, embodiments of the present disclosure provide a method for generating and deploying models, which can be used to train and deploy AI/ML models.
  • Figure 2d is a schematic flowchart of a model training and deployment method provided by an embodiment of the present disclosure. The method is executed by a network device. As shown in Figure 2d, the method may include the following steps:
  • Step 201d Obtain the capability information reported by the UE.
  • For a detailed introduction to step 201d, reference may be made to the description of the above embodiments, which will not be repeated here.
  • Step 202d Select the decoder model to be trained based on the capability information.
  • the network device can select a decoder model to be trained based on the capability information from a plurality of different decoder models to be trained stored therein.
  • the above-mentioned "selecting a decoder model to be trained based on capability information" specifically means that the selected decoder model to be trained must meet the following conditions:
  • the selected decoder model to be trained should be a model supported by the network device
  • the encoder model to be trained corresponding to the selected decoder model to be trained should be a model that matches the capability information of the UE (that is, a model supported by the UE).
  • Step 203d Determine an encoder model to be trained that matches the decoder model to be trained based on the decoder model to be trained.
  • the model information of the encoder model to be trained that matches the decoder model to be trained can be determined based on the model information of the decoder model to be trained, and then the encoder model to be trained can be deployed based on that model information.
  • the determined encoder model to be trained is specifically a model supported by the UE.
  • Step 204d Determine sample data based on at least one of the information reported by the UE and the stored information of the network device.
  • the above-mentioned information reported by the UE may be information currently reported by the UE, or information previously reported by the UE.
  • the above information may be information reported by the UE without encoding and compression, and/or information reported by the UE after encoding and compression.
  • Step 205d Train the encoder model to be trained and the decoder model to be trained based on the sample data to generate an encoder model and a decoder model.
  • Step 206d Send the model information of the encoder model to the UE, and the model information of the encoder model is used to deploy the encoder model.
  • For a detailed introduction to step 206d, reference may be made to the description of the above embodiments, which will not be repeated here.
  • Figure 2e is a schematic flowchart of a model training and deployment method provided by an embodiment of the present disclosure. The method is executed by a network device. As shown in Figure 2e, the method may include the following steps:
  • Step 201e Obtain the capability information reported by the UE.
  • For a detailed introduction to step 201e, reference may be made to the description of the above embodiments, which will not be repeated here.
  • Step 202e Select the decoder model to be trained based on the capability information.
  • the network device can select a decoder model to be trained based on the capability information from a plurality of different decoder models to be trained stored therein.
  • the above-mentioned "selecting a decoder model to be trained based on capability information" specifically means that the selected decoder model to be trained must meet the following conditions:
  • the selected decoder model to be trained should be a model supported by the network device
  • the encoder model to be trained corresponding to the selected decoder model to be trained should be a model that matches the capability information of the UE (that is, a model supported by the UE).
  • Step 203e Determine sample data based on at least one of the information reported by the UE and the stored information of the network device.
  • the above-mentioned information reported by the UE may be information currently reported by the UE, or information previously reported by the UE.
  • the above information may be information reported by the UE without encoding and compression, and/or information reported by the UE after encoding and compression.
  • Step 204e Train the decoder model to be trained based on the sample data to generate a decoder model.
  • Step 205e Determine an encoder model that matches the decoder model based on the decoder model.
  • the model information of the encoder model matching the decoder model can be determined based on the model information of the decoder model, and then the encoder model can be deployed based on the model information of the encoder model.
  • Step 206e Send the model information of the encoder model to the UE, and the model information of the encoder model is used to deploy the encoder model.
  • For a detailed introduction to step 206e, reference may be made to the description of the above embodiments, which will not be repeated here.
  • Figure 3a is a schematic flowchart of a model training and deployment method provided by an embodiment of the present disclosure. The method is executed by a network device. As shown in Figure 3a, the method may include the following steps:
  • Step 301a Obtain the capability information reported by the UE.
  • Step 302a Generate an encoder model and a decoder model based on the capability information.
  • Step 303a Send the model information of the encoder model to the UE, and the model information of the encoder model is used to deploy the encoder model.
  • For steps 301a to 303a, please refer to the description of the above embodiments, which will not be repeated here.
  • Step 304a Send indication information to the UE.
  • the indication information may be used to indicate the type of information reported by the UE to the network device.
  • the information type may include at least one of the following:
  • information obtained after the original reported information is encoded by the encoder model.
  • the reported information may be information to be reported by the UE to the network device.
  • the reported information may include CSI (Channel State Information);
  • the CSI information may include at least one of the following:
  • PMI (Precoding Matrix Indicator)
  • CQI (Channel Quality Indicator)
  • RI (Rank Indicator)
  • RSRP (Reference Signal Received Power)
  • RSRQ (Reference Signal Received Quality)
  • SINR (Signal-to-Interference-plus-Noise Ratio)
  • the network device may send indication information to the UE through signaling.
  • In this way, the network device first obtains the capability information reported by the UE, where the capability information is used to indicate the AI and/or ML support capabilities of the UE; the network device then generates an encoder model and a decoder model based on the capability information, and sends the model information of the encoder model to the UE. The model information of the encoder model is used to deploy the encoder model, so the UE can generate the encoder model based on that model information. Therefore, embodiments of the present disclosure provide a method for generating and deploying models, which can be used to train and deploy AI/ML models.
  • Figure 3b is a schematic flowchart of a model training and deployment method provided by an embodiment of the present disclosure. The method is executed by a network device. As shown in Figure 3b, the method may include the following steps:
  • Step 301b Obtain the capability information reported by the UE.
  • Step 302b Generate an encoder model and a decoder model based on the capability information.
  • Step 303b Send the model information of the encoder model to the UE, and the model information of the encoder model is used to deploy the encoder model.
  • Step 304b Send indication information to the UE.
  • the indication information is used to indicate that the type of information reported by the UE to the network device includes: information after the original reported information has been encoded by the encoder model.
• For steps 301b to 304b, please refer to the description of the above embodiments, and the embodiments of the present disclosure will not be described in detail here.
  • Step 305b When receiving the information reported by the UE, use the decoder model to decode the information reported by the UE.
• It should be noted that the information reported by the UE and received by the network device is essentially information that has been encoded by the encoder model. Based on this, the network device needs to use the decoder model (such as the decoder model generated in step 302b above) to decode the information reported by the UE to obtain the original reported information.
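The encode-at-UE / decode-at-network pairing described above can be sketched as follows; the scaling functions are deliberately trivial stand-ins for the AI/ML models, whose structure the disclosure does not specify:

```python
# Toy stand-ins for the encoder (UE side) and decoder (network side).
# Real models would be neural networks; scaling by a factor is only a sketch.
def encoder_model(x):               # deployed at the UE
    return [v * 0.5 for v in x]     # "compress" the reported information

def decoder_model(code):            # kept at the network device
    return [v / 0.5 for v in code]  # recover the original reported information

original_report = [1.0, -2.0, 3.5]
received = encoder_model(original_report)   # what the network actually receives
recovered = decoder_model(received)         # step 305b: decode on reception
print(recovered)
```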
• In summary, in the embodiments of the present disclosure, the network device first obtains the capability information reported by the UE, where the capability information is used to indicate the AI and/or ML support capabilities of the UE; the network device then generates an encoder model and a decoder model based on the capability information, and sends the model information of the encoder model to the UE, where the model information of the encoder model is used to deploy the encoder model, so that the UE can generate the encoder model based on the model information of the encoder model. Therefore, embodiments of the present disclosure provide a method for generating and deploying models, which can be used to train and deploy AI/ML models.
  • Figure 3c is a schematic flowchart of a model training and deployment method provided by an embodiment of the present disclosure. The method is executed by a network device. As shown in Figure 3c, the method may include the following steps:
  • Step 301c Obtain the capability information reported by the UE.
  • Step 302c Generate an encoder model and a decoder model based on the capability information.
  • Step 303c Send the model information of the encoder model to the UE, and the model information of the encoder model is used to deploy the encoder model.
  • Step 304c Update the encoder model and the decoder model to generate an updated encoder model and an updated decoder model.
  • model updating of the encoder model and the decoder model may specifically include the following steps:
  • Step 1 Determine a new encoder model and a new decoder model based on the original encoder model and the original decoder model.
  • a new encoder model and a new decoder model may be determined.
  • the above-mentioned method of determining a new encoder model and a new decoder model may include:
  • a new encoder model and a new decoder model that are different from the original encoder model and the original decoder model are re-selected based on the capability information.
  • the new encoder model is a model supported by the UE, and the new decoder model is a model supported by the network device.
  • Step 2 Retrain the new encoder model and the new decoder model to obtain an updated encoder model and an updated decoder model.
  • model update of the encoder model and the decoder model may specifically include the following steps:
  • Step a Monitor the distortion of the original encoder model and the original decoder model.
  • the distortion degree of the original encoder model and the original decoder model can be monitored in real time.
• the uncoded, uncompressed information reported by the UE can be used as input information and sequentially input into the original encoder model and the original decoder model to perform the encoding and decoding operations, thereby obtaining output information; the degree of matching between the output information and the input information is then calculated to determine the distortion of the original encoder model and the original decoder model.
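Step a above can be sketched as a reconstruction-error check; the mean squared error used here is one plausible choice of "distortion degree", which the disclosure does not pin down:

```python
def distortion(input_info, output_info):
    """Mean squared error between the model input and the
    encoder->decoder output, used here as the distortion degree."""
    assert len(input_info) == len(output_info)
    return sum((a - b) ** 2 for a, b in zip(input_info, output_info)) / len(input_info)

# Uncoded, uncompressed UE report used as input information:
x = [1.0, 2.0, 3.0]
# Suppose the encoder->decoder chain returned a slightly distorted output:
y = [1.1, 1.9, 3.0]
d = distortion(x, y)
print(d)
```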
• Step b When the distortion exceeds the first threshold, determine a new encoder model and a new decoder model based on the original encoder model and the original decoder model, and retrain the new encoder model and the new decoder model to obtain an updated encoder model and an updated decoder model, wherein the distortion of the updated encoder model and the updated decoder model is lower than the second threshold, and the second threshold is less than or equal to the first threshold.
• Among them, when the distortion exceeds the first threshold, it means that the encoding and decoding accuracy of the original encoder model and the original decoder model is low, which will affect subsequent signal processing accuracy. Therefore, it is necessary to determine a new encoder model and a new decoder model, and retrain them to obtain an updated encoder model and an updated decoder model. Moreover, it should be ensured that the distortion of the updated encoder model and the updated decoder model is lower than the second threshold, so as to guarantee the encoding and decoding accuracy of the model.
  • the above-mentioned first threshold and second threshold may be set in advance.
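The two-threshold decision logic can be sketched as below; the threshold values are illustrative, since the disclosure only states that they may be set in advance:

```python
def needs_update(distortion, first_threshold):
    """Step b trigger: retrain when distortion exceeds the first threshold."""
    return distortion > first_threshold

def accept_updated_model(new_distortion, second_threshold, first_threshold):
    """The retrained pair is acceptable only if its distortion is below the
    second threshold, which must not exceed the first threshold."""
    assert second_threshold <= first_threshold
    return new_distortion < second_threshold

first_t, second_t = 0.10, 0.05   # illustrative preset values
print(needs_update(0.12, first_t))                    # distortion too high -> retrain
print(accept_updated_model(0.03, second_t, first_t))  # updated pair acceptable
```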
  • Step 305c Directly replace the original decoder model with the updated decoder model.
• the network device can use the updated decoder model to perform decoding, where the decoding accuracy based on the updated decoder model is higher, which can ensure the accuracy of subsequent signal processing.
• In summary, in the embodiments of the present disclosure, the network device first obtains the capability information reported by the UE, where the capability information is used to indicate the AI and/or ML support capabilities of the UE; the network device then generates an encoder model and a decoder model based on the capability information, and sends the model information of the encoder model to the UE, where the model information of the encoder model is used to deploy the encoder model, so that the UE can generate the encoder model based on the model information of the encoder model. Therefore, embodiments of the present disclosure provide a method for generating and deploying models, which can be used to train and deploy AI/ML models.
  • Figure 3d is a schematic flowchart of a model training and deployment method provided by an embodiment of the present disclosure. The method is executed by a network device. As shown in Figure 3d, the method may include the following steps:
  • Step 301d Obtain the capability information reported by the UE.
  • Step 302d Generate an encoder model and a decoder model based on the capability information.
  • Step 303d Send the model information of the encoder model to the UE, and the model information of the encoder model is used to deploy the encoder model.
  • Step 304d Update the encoder model and the decoder model to generate an updated encoder model and an updated decoder model.
• For steps 301d to 304d, please refer to the description of the above embodiments, and the embodiments of the present disclosure will not be described in detail here.
  • Step 305d Determine the difference model information between the model information of the updated decoder model and the model information of the original decoder model.
  • Step 306d Optimize the original decoder model based on the difference model information.
• the model information of the optimized and adjusted decoder model can be consistent with the model information of the updated decoder model generated in step 304d above, so that the network device can subsequently use the updated decoder model to perform decoding; since the decoding accuracy based on the updated decoder model is higher, the subsequent signal processing accuracy can be ensured.
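Assuming the difference model information is a set of additive per-parameter deltas (the disclosure does not specify its form), optimizing the original decoder model based on it could look like:

```python
def apply_difference(original_weights, difference_weights):
    """Optimize the original decoder model by adding the per-parameter
    difference model information (assumed here to be additive deltas)."""
    return [w + d for w, d in zip(original_weights, difference_weights)]

original_decoder = [0.25, -0.5, 1.0]   # illustrative parameters
diff_info = [0.25, 0.25, -0.5]         # updated minus original
updated_decoder = apply_difference(original_decoder, diff_info)
print(updated_decoder)
```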
• In summary, in the embodiments of the present disclosure, the network device first obtains the capability information reported by the UE, where the capability information is used to indicate the AI and/or ML support capabilities of the UE; the network device then generates an encoder model and a decoder model based on the capability information, and sends the model information of the encoder model to the UE, where the model information of the encoder model is used to deploy the encoder model, so that the UE can generate the encoder model based on the model information of the encoder model. Therefore, embodiments of the present disclosure provide a method for generating and deploying models, which can be used to train and deploy AI/ML models.
  • Figure 3e is a schematic flowchart of a model training and deployment method provided by an embodiment of the present disclosure. The method is executed by a network device. As shown in Figure 3e, the method may include the following steps:
  • Step 301e Obtain the capability information reported by the UE.
  • Step 302e Generate an encoder model and a decoder model based on the capability information.
  • Step 303e Send the model information of the encoder model to the UE, and the model information of the encoder model is used to deploy the encoder model.
  • Step 304e Update the encoder model and the decoder model to generate an updated encoder model and an updated decoder model.
  • Step 305e Send the updated model information of the encoder model to the UE.
  • model information of the updated encoder model may include:
• all model information of the updated encoder model; or difference model information between the model information of the updated encoder model and the model information of the original encoder model.
• the network device can cause the UE to use the updated encoder model to encode, where the coding accuracy based on the updated encoder model is higher, which can ensure the accuracy of subsequent signal processing.
• In summary, in the embodiments of the present disclosure, the network device first obtains the capability information reported by the UE, where the capability information is used to indicate the AI and/or ML support capabilities of the UE; the network device then generates an encoder model and a decoder model based on the capability information, and sends the model information of the encoder model to the UE, where the model information of the encoder model is used to deploy the encoder model, so that the UE can generate the encoder model based on the model information of the encoder model. Therefore, embodiments of the present disclosure provide a method for generating and deploying models, which can be used to train and deploy AI/ML models.
  • Figure 4 is a schematic flowchart of a model training deployment method provided by an embodiment of the present disclosure. The method is executed by a UE. As shown in Figure 4, the method may include the following steps:
  • Step 401 Report capability information to the network device.
  • the capability information is used to indicate the AI and/or ML support capabilities of the UE.
  • the model may include at least one of the following:
  • the capability information may include at least one of the following:
  • the maximum support capability information of the UE for the model includes the structural information of the most complex model supported by the UE.
  • the above-mentioned structural information may include, for example, the number of layers of the model.
  • Step 402 Obtain the model information of the encoder model sent by the network device.
  • the model information of the encoder model is used to deploy the encoder model.
  • model information of the encoder model may include at least one of the following:
  • Step 403 Generate an encoder model based on the model information of the encoder model.
• In summary, in the embodiments of the present disclosure, the network device first obtains the capability information reported by the UE, where the capability information is used to indicate the AI and/or ML support capabilities of the UE; the network device then generates an encoder model and a decoder model based on the capability information, and sends the model information of the encoder model to the UE, where the model information of the encoder model is used to deploy the encoder model, so that the UE can generate the encoder model based on the model information of the encoder model. Therefore, embodiments of the present disclosure provide a method for generating and deploying models, which can be used to train and deploy AI/ML models.
  • Figure 5 is a schematic flowchart of a model training deployment method provided by an embodiment of the present disclosure. The method is executed by a UE. As shown in Figure 5, the method may include the following steps:
  • Step 501 Report capability information to the network device.
  • Step 502 Obtain the model information of the encoder model sent by the network device.
  • Step 503 Generate an encoder model based on the model information of the encoder model.
• For steps 501 to 503, please refer to the description of the above embodiments, and the embodiments of the present disclosure will not be described again here.
• Step 504 Obtain the indication information sent by the network device.
  • the indication information is used to indicate the type of information reported by the UE to the network device.
  • the information type may include at least one of original reported information without encoding by the encoder model and information after the original reported information has been encoded by the encoder model.
  • the reported information is information to be reported by the UE to the network device.
  • the reported information may include CSI information;
  • the CSI information may include at least one of the following:
  • Step 505 Report to the network device based on the indication information.
• Among them, when the indication information in the above step 504 indicates that the type of information reported by the UE to the network device is the original reported information that has not been encoded by the encoder model, then when the UE reports to the network device in step 505, it can directly send the original reported information to the network device without encoding. When the indication information in the above step 504 indicates that the type of information reported by the UE to the network device is the information obtained after the original reported information is encoded by the encoder model, then when the UE reports to the network device in step 505, it must first use the encoder model to encode the reported information, and then report the encoded information to the network device.
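Step 505 can be sketched as a simple branch on the indication information; the string values and the toy encoder below are our own stand-ins, not signaling formats from the disclosure:

```python
def encoder_model(info):
    """Toy encoder standing in for the deployed encoder model."""
    return [v * 0.5 for v in info]

def build_report(original_info, indication):
    """Step 505: follow the network device's indication information.
    indication is 'raw' (report as-is) or 'encoded' (encode first)."""
    if indication == "encoded":
        return encoder_model(original_info)
    return original_info

print(build_report([2.0, 4.0], "raw"))
print(build_report([2.0, 4.0], "encoded"))
```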
• In summary, in the embodiments of the present disclosure, the network device first obtains the capability information reported by the UE, where the capability information is used to indicate the AI and/or ML support capabilities of the UE; the network device then generates an encoder model and a decoder model based on the capability information, and sends the model information of the encoder model to the UE, where the model information of the encoder model is used to deploy the encoder model, so that the UE can generate the encoder model based on the model information of the encoder model. Therefore, embodiments of the present disclosure provide a method for generating and deploying models, which can be used to train and deploy AI/ML models.
• Figure 6a is a schematic flowchart of a model training deployment method provided by an embodiment of the present disclosure. The method is executed by a UE. As shown in Figure 6a, the method may include the following steps:
  • Step 601a Report capability information to the network device.
  • Step 602a Obtain the model information of the encoder model sent by the network device.
  • Step 603a Generate an encoder model based on the model information of the encoder model.
• For steps 601a to 603a, please refer to the description of the above embodiments, and the embodiments of the disclosure will not be described again here.
  • Step 604a Obtain instruction information sent by the network device.
  • the information type indicated by the instruction information may include: information after the original reported information has been encoded by the encoder model.
  • Step 605a Use the encoder model to encode the reported information.
  • Step 606a Report the encoded information to the network device.
• In summary, in the embodiments of the present disclosure, the network device first obtains the capability information reported by the UE, where the capability information is used to indicate the AI and/or ML support capabilities of the UE; the network device then generates an encoder model and a decoder model based on the capability information, and sends the model information of the encoder model to the UE, where the model information of the encoder model is used to deploy the encoder model, so that the UE can generate the encoder model based on the model information of the encoder model. Therefore, embodiments of the present disclosure provide a method for generating and deploying models, which can be used to train and deploy AI/ML models.
• Figure 6b is a schematic flowchart of a model training and deployment method provided by an embodiment of the present disclosure. The method is executed by a UE. As shown in Figure 6b, the method may include the following steps:
  • Step 601b Report capability information to the network device.
  • Step 602b Obtain the model information of the encoder model sent by the network device.
  • Step 603b Generate an encoder model based on the model information of the encoder model.
• For steps 601b to 603b, please refer to the description of the above embodiments, and the embodiments of this disclosure will not be described again here.
  • Step 604b Receive the updated model information of the encoder model sent by the network device.
  • model information of the updated encoder model may include:
• all model information of the updated encoder model; or difference model information between the model information of the updated encoder model and the model information of the original encoder model.
  • Step 605b Update the model based on the updated model information of the encoder model.
  • model update based on the model information of the updated encoder model will be introduced in subsequent embodiments.
• In summary, in the embodiments of the present disclosure, the network device first obtains the capability information reported by the UE, where the capability information is used to indicate the AI and/or ML support capabilities of the UE; the network device then generates an encoder model and a decoder model based on the capability information, and sends the model information of the encoder model to the UE, where the model information of the encoder model is used to deploy the encoder model, so that the UE can generate the encoder model based on the model information of the encoder model. Therefore, embodiments of the present disclosure provide a method for generating and deploying models, which can be used to train and deploy AI/ML models.
• Figure 6c is a schematic flowchart of a model training and deployment method provided by an embodiment of the present disclosure. The method is executed by a UE. As shown in Figure 6c, the method may include the following steps:
  • Step 601c Report capability information to the network device.
  • Step 602c Obtain the model information of the encoder model sent by the network device.
  • Step 603c Generate an encoder model based on the model information of the encoder model.
• For steps 601c to 603c, please refer to the description of the above embodiments, and the embodiments of this disclosure will not be described again here.
  • Step 604c Receive the updated model information of the encoder model sent by the network device.
  • model information of the updated encoder model may include:
• all model information of the updated encoder model; or difference model information between the model information of the updated encoder model and the model information of the original encoder model.
  • Step 605c Generate an updated encoder model based on the model information of the updated encoder model.
• Among them, when the model information of the updated encoder model received in the above step 604c is all model information of the updated encoder model, the UE can directly generate the updated encoder model based on that model information.
• When the model information of the updated encoder model received in the above step 604c is the difference model information between the model information of the updated encoder model and the model information of the original encoder model, the UE can first determine the model information of its own original encoder model, then determine the model information of the updated encoder model based on the model information of the original encoder model and the difference model information, and finally generate the updated encoder model based on the model information of the updated encoder model.
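The difference-information case of step 605c can be sketched as below, assuming the difference model information consists of additive per-parameter deltas, which the disclosure does not mandate:

```python
def reconstruct_updated_info(original_info, difference_info):
    """Difference case of step 605c: combine the model information of the
    UE's own original encoder model with the received difference model
    information (assumed additive) to recover the updated model information."""
    return [w + d for w, d in zip(original_info, difference_info)]

original_encoder_info = [1.0, 2.0, -1.5]   # illustrative parameters
difference_info = [0.5, -1.0, 0.25]        # updated minus original
updated_encoder_info = reconstruct_updated_info(original_encoder_info, difference_info)
print(updated_encoder_info)
```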
  • Step 606c Use the updated encoder model to replace the original encoder model to update the model.
• Among them, after replacing the original encoder model with the updated encoder model, the UE can use the updated encoder model to perform encoding, where the encoding accuracy based on the updated encoder model is higher, which can ensure the accuracy of subsequent signal processing.
• In summary, in the embodiments of the present disclosure, the network device first obtains the capability information reported by the UE, where the capability information is used to indicate the AI and/or ML support capabilities of the UE; the network device then generates an encoder model and a decoder model based on the capability information, and sends the model information of the encoder model to the UE, where the model information of the encoder model is used to deploy the encoder model, so that the UE can generate the encoder model based on the model information of the encoder model. Therefore, embodiments of the present disclosure provide a method for generating and deploying models, which can be used to train and deploy AI/ML models.
• Figure 6d is a schematic flowchart of a model training and deployment method provided by an embodiment of the present disclosure. The method is executed by a UE. As shown in Figure 6d, the method may include the following steps:
  • Step 601d Report capability information to the network device.
  • Step 602d Obtain the model information of the encoder model sent by the network device.
  • Step 603d Generate an encoder model based on the model information of the encoder model.
  • Step 604d Receive the updated model information of the encoder model sent by the network device.
• For steps 601d to 604d, please refer to the description of the above embodiments, and the embodiments of this disclosure will not be described again here.
• Step 605d Optimize the original encoder model based on the model information of the updated encoder model to update the model.
• Among them, when the model information of the updated encoder model received in the above step 604d is all model information of the updated encoder model, the UE may first determine the model difference information between that model information and the model information of the original encoder model, and then optimize the original encoder model based on the model difference information to perform the model update.
• When the model information of the updated encoder model received in the above step 604d is the difference model information between the model information of the updated encoder model and the model information of the original encoder model, the UE can directly optimize the original encoder model based on the difference model information to perform the model update.
• Among them, the model information of the optimized and adjusted encoder model can be consistent with the model information of the updated encoder model, so that the UE can subsequently use the updated encoder model to perform encoding; since the encoding accuracy based on the updated encoder model is higher, the subsequent signal processing accuracy can be ensured.
• In summary, in the embodiments of the present disclosure, the network device first obtains the capability information reported by the UE, where the capability information is used to indicate the AI and/or ML support capabilities of the UE; the network device then generates an encoder model and a decoder model based on the capability information, and sends the model information of the encoder model to the UE, where the model information of the encoder model is used to deploy the encoder model, so that the UE can generate the encoder model based on the model information of the encoder model. Therefore, embodiments of the present disclosure provide a method for generating and deploying models, which can be used to train and deploy AI/ML models.
  • Figure 7 is a schematic structural diagram of a model training and deployment device provided by an embodiment of the present disclosure. As shown in Figure 7, the device may include:
  • An acquisition module configured to acquire capability information reported by the UE, where the capability information is used to indicate the AI and/or ML support capabilities of the UE;
  • a generation module for generating an encoder model and a decoder model based on the capability information
  • a sending module configured to send model information of the encoder model to the UE, where the model information of the encoder model is used to deploy the encoder model.
• In summary, in the embodiments of the present disclosure, the network device first obtains the capability information reported by the UE, where the capability information is used to indicate the AI and/or ML support capabilities of the UE; the network device then generates an encoder model and a decoder model based on the capability information, and sends the model information of the encoder model to the UE, where the model information of the encoder model is used to deploy the encoder model, so that the UE can generate the encoder model based on the model information of the encoder model. Therefore, embodiments of the present disclosure provide a method for generating and deploying models, which can be used to train and deploy AI/ML models.
  • the model includes at least one of the following:
  • the capability information includes at least one of the following:
  • the maximum support capability information of the UE for the model includes the structural information of the most complex model supported by the UE.
  • the generation module is also used to:
• an encoder model to be trained and/or a decoder model to be trained is determined based on the capability information; wherein the encoder model to be trained is a model supported by the UE, and the decoder model to be trained is a model supported by the network device;
  • the encoder model to be trained and/or the decoder model to be trained are trained based on the sample data to generate an encoder model and a decoder model.
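The joint training on sample data described above can be illustrated with a deliberately tiny one-parameter encoder/decoder pair trained by gradient descent; real models would be neural networks with many parameters, so this is only a sketch of the idea:

```python
# Minimal sketch: jointly train a one-parameter "encoder" (y = a*x) and
# "decoder" (x_hat = b*y) on sample data so that x_hat approximates x.
samples = [1.0, 2.0, 3.0]   # stand-in for the sample data
a, b = 0.3, 0.3             # untrained encoder/decoder parameters
lr = 0.01                   # learning rate
for _ in range(2000):
    for x in samples:
        x_hat = b * (a * x)
        err = x_hat - x
        grad_a = 2 * err * b * x   # d(err^2)/da
        grad_b = 2 * err * a * x   # d(err^2)/db
        a -= lr * grad_a
        b -= lr * grad_b
loss = sum((b * a * x - x) ** 2 for x in samples) / len(samples)
print(loss < 1e-6)
```

After training, the product a*b approaches 1, i.e. the decoder inverts the encoder on the sample data.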
  • the model information of the encoder model includes at least one of the following:
  • the device is also used for:
  • the indication information is used to indicate the type of information when the UE reports to the network device;
  • the information type includes at least one of the following:
• information obtained after the original reported information is encoded by the encoder model.
  • the reported information is information to be reported by the UE to the network device; the reported information includes CSI information;
  • the CSI information includes at least one of the following:
  • the information type indicated by the indication information includes information after the original reported information has been encoded by the encoder model
  • the device is also used for:
  • the decoder model is used to decode the information reported by the UE.
  • the device is also used for:
  • the device is also used for:
  • the new encoder model and the new decoder model are retrained to obtain an updated encoder model and an updated decoder model.
  • the device is also used for:
  • the device is also used for:
  • a new encoder model and a new decoder model that are different from the original encoder model and the original decoder model are re-selected based on the capability information.
  • the device is also used for:
  • the original decoder model is directly replaced with the updated decoder model.
  • the device is also used for:
  • the original decoder model is optimized based on the difference model information.
  • the device is also used for:
  • the model information of the updated encoder model is sent to the UE.
  • the model information of the updated encoder model includes:
• all model information of the updated encoder model; or difference model information between the model information of the updated encoder model and the model information of the original encoder model.
  • Figure 8 is a schematic structural diagram of a model training and deployment device provided by an embodiment of the present disclosure. As shown in Figure 8, the device may include:
  • a reporting module configured to report capability information to the network device, where the capability information is used to indicate the AI and/or ML support capabilities of the UE;
  • An acquisition module configured to acquire the model information of the encoder model sent by the network device, and the model information of the encoder model is used to deploy the encoder model;
  • a generating module configured to generate an encoder model based on the model information of the encoder model.
• In summary, in the embodiments of the present disclosure, the network device first obtains the capability information reported by the UE, where the capability information is used to indicate the AI and/or ML support capabilities of the UE; the network device then generates an encoder model and a decoder model based on the capability information, and sends the model information of the encoder model to the UE, where the model information of the encoder model is used to deploy the encoder model, so that the UE can generate the encoder model based on the model information of the encoder model. Therefore, embodiments of the present disclosure provide a method for generating and deploying models, which can be used to train and deploy AI/ML models.
  • the model includes at least one of the following:
  • the capability information includes at least one of the following:
  • the maximum support capability information of the UE for the model includes the structural information of the most complex model supported by the UE.
  • the model information of the encoder model includes at least one of the following:
  • the device is also used for:
• the indication information is used to indicate the type of information when the UE reports to the network device; the information type includes at least one of the original reported information that has not been encoded by the encoder model and the information obtained after the original reported information is encoded by the encoder model;
  • the reported information is information to be reported by the UE to the network device; the reported information includes CSI information;
  • the CSI information includes at least one of the following:
  • the information type indicated by the indication information includes information after the original reported information has been encoded by the encoder model
  • the device is also used for:
  • the device is also used for:
  • Model updating is performed based on the model information of the updated encoder model.
  • the device is also used for:
  • difference model information between the model information of the updated encoder model and the model information of the original encoder model.
  • the device is also used for:
  • the model update based on the model information of the updated encoder model includes:
  • the updated encoder model is used to replace the original encoder model to perform model updating.
  • the model information of the updated encoder model includes:
  • the original decoder model is optimized to perform model update.
  • FIG. 9 is a block diagram of a user equipment UE900 provided by an embodiment of the present disclosure.
  • UE 900 can be a mobile phone, computer, digital broadcast terminal device, messaging device, game console, tablet device, medical device, fitness device, personal digital assistant, etc.
  • UE 900 may include at least one of the following components: a processing component 902, a memory 904, a power supply component 906, a multimedia component 908, an audio component 910, an input/output (I/O) interface 912, a sensor component 913, and a communication component 916.
  • Processing component 902 generally controls the overall operations of UE 900, such as operations associated with display, phone calls, data communications, camera operations, and recording operations.
  • the processing component 902 may include at least one processor 920 to execute instructions to complete all or part of the steps of the above method. Additionally, processing component 902 may include at least one module that facilitates interaction between processing component 902 and other components. For example, processing component 902 may include a multimedia module to facilitate interaction between multimedia component 908 and processing component 902.
  • Memory 904 is configured to store various types of data to support operations at UE 900. Examples of such data include instructions for any application or method operating on the UE 900, contact data, phonebook data, messages, pictures, videos, etc.
  • Memory 904 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
  • Power supply component 906 provides power to various components of UE 900.
  • Power component 906 may include a power management system, at least one power supply, and other components associated with generating, managing, and distributing power to UE 900.
  • Multimedia component 908 includes a screen that provides an output interface between the UE 900 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes at least one touch sensor to sense touches, swipes, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe action.
  • multimedia component 908 includes a front-facing camera and/or a rear-facing camera. When UE 900 is in an operating mode, such as shooting mode or video mode, the front-facing camera and/or the rear-facing camera can receive external multimedia data.
  • Each front-facing camera and rear-facing camera can be a fixed optical lens system or have focusing and optical zoom capability.
  • Audio component 910 is configured to output and/or input audio signals.
  • audio component 910 includes a microphone (MIC) configured to receive external audio signals when UE 900 is in operating modes, such as call mode, recording mode, and voice recognition mode. The received audio signals may be further stored in memory 904 or sent via communications component 916 .
  • audio component 910 also includes a speaker for outputting audio signals.
  • the I/O interface 912 provides an interface between the processing component 902 and a peripheral interface module, which may be a keyboard, a click wheel, a button, etc. These buttons may include, but are not limited to: Home button, Volume buttons, Start button, and Lock button.
  • the sensor component 913 includes at least one sensor for providing various aspects of status assessment for the UE 900 .
  • the sensor component 913 can detect the on/off state of the UE 900 and the relative positioning of components (for example, the display and keypad of the UE 900); the sensor component 913 can also detect a change in position of the UE 900 or of a component of the UE 900, the presence or absence of user contact with the UE 900, the orientation or acceleration/deceleration of the UE 900, and temperature changes of the UE 900.
  • Sensor assembly 913 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
  • Sensor assembly 913 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor component 913 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • Communication component 916 is configured to facilitate wired or wireless communication between UE 900 and other devices.
  • UE 900 can access wireless networks based on communication standards, such as WiFi, 2G, or 3G, or a combination thereof.
  • the communication component 916 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communications component 916 also includes a near field communications (NFC) module to facilitate short-range communications.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
  • UE 900 may be implemented by at least one application specific integrated circuit (ASIC), digital signal processor (DSP), digital signal processing device (DSPD), programmable logic device (PLD), field programmable gate array (FPGA), controller, microcontroller, microprocessor, or other electronic components, for executing the above method.
  • FIG. 10 is a block diagram of a network side device 1000 provided by an embodiment of the present disclosure.
  • the network side device 1000 may be provided as a network side device.
  • the network side device 1000 includes a processing component 1022, which further includes at least one processor, and memory resources represented by a memory 1032 for storing instructions (for example, application programs) executable by the processing component 1022.
  • the application program stored in memory 1032 may include one or more modules, each corresponding to a set of instructions.
  • the processing component 1022 is configured to execute instructions to perform any of the foregoing methods applied to the network side device, for example, the method shown in FIG. 1.
  • the network side device 1000 may also include a power supply component 1026 configured to perform power management of the network side device 1000, a wired or wireless network interface 1050 configured to connect the network side device 1000 to a network, and an input/output (I/O) interface 1058.
  • the network side device 1000 can operate based on an operating system stored in the memory 1032, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or similar.
  • the methods provided by the embodiments of the present disclosure are introduced from the perspectives of network side equipment and UE respectively.
  • the network side device and the UE may include a hardware structure and a software module to implement the above functions in the form of a hardware structure, a software module, or a hardware structure plus a software module.
  • a certain function among the above functions can be executed by a hardware structure, a software module, or a hardware structure plus a software module.
  • the communication device may include a transceiver module and a processing module.
  • the transceiver module may include a sending module and/or a receiving module.
  • the sending module is used to implement the sending function
  • the receiving module is used to implement the receiving function.
  • the transceiving module may implement the sending function and/or the receiving function.
  • the communication device may be a terminal device (such as the terminal device in the foregoing method embodiment), a device in the terminal device, or a device that can be used in conjunction with the terminal device.
  • the communication device may be a network device, a device in a network device, or a device that can be used in conjunction with the network device.
  • the communication device may be a network device, or a terminal device (such as the terminal device in the foregoing method embodiments), or a chip, chip system, or processor that supports the network device in implementing the above method, or a chip, chip system, or processor that supports the terminal device in implementing the above method.
  • the device can be used to implement the method described in the above method embodiment. For details, please refer to the description in the above method embodiment.
  • a communications device may include one or more processors.
  • the processor may be a general-purpose processor or a special-purpose processor, etc.
  • it can be a baseband processor or a central processing unit.
  • the baseband processor can be used to process communication protocols and communication data
  • the central processing unit can be used to control the communication device (such as a network side device, baseband chip, terminal device, terminal device chip, DU or CU, etc.), execute computer programs, and process data for computer programs.
  • the communication device may also include one or more memories, on which a computer program may be stored, and the processor executes the computer program, so that the communication device executes the method described in the above method embodiment.
  • data may also be stored in the memory.
  • the communication device and the memory can be provided separately or integrated together.
  • the communication device may also include a transceiver and an antenna.
  • the transceiver can be called a transceiver unit, a transceiver, or a transceiver circuit, etc., and is used to implement transceiver functions.
  • the transceiver can include a receiver and a transmitter.
  • the receiver can be called a receiver or a receiving circuit, etc., and is used to implement the receiving function;
  • the transmitter can be called a transmitter or a transmitting circuit, etc., and is used to implement the transmitting function.
  • the communication device may also include one or more interface circuits.
  • Interface circuitry is used to receive code instructions and transmit them to the processor.
  • the processor executes the code instructions to cause the communication device to perform the method described in the above method embodiment.
  • the communication device is a terminal device (such as the terminal device in the foregoing method embodiment): the processor is configured to execute the method shown in any one of Figures 1-4.
  • the communication device is a network device: a transceiver is used to perform the method shown in any one of Figures 5-7.
  • a transceiver for implementing receiving and transmitting functions may be included in the processor.
  • the transceiver may be a transceiver circuit, an interface, or an interface circuit.
  • the transceiver circuits, interfaces or interface circuits used to implement the receiving and transmitting functions can be separate or integrated together.
  • the above-mentioned transceiver circuit, interface or interface circuit can be used for reading and writing codes/data, or the above-mentioned transceiver circuit, interface or interface circuit can be used for signal transmission or transfer.
  • the processor may store a computer program, and the computer program runs on the processor, which can cause the communication device to perform the method described in the above method embodiment.
  • the computer program may be embedded in the processor, in which case the processor may be implemented in hardware.
  • the communication device may include a circuit, and the circuit may implement the functions of sending or receiving or communicating in the foregoing method embodiments.
  • the processors and transceivers described in this disclosure can be implemented in integrated circuits (ICs), analog ICs, radio frequency integrated circuits (RFICs), mixed signal ICs, application specific integrated circuits (ASICs), printed circuit boards (PCBs), electronic equipment, etc.
  • the processor and transceiver can also be manufactured using various IC process technologies, such as complementary metal oxide semiconductor (CMOS), n-type metal oxide semiconductor (NMOS), p-type metal oxide semiconductor (positive channel metal oxide semiconductor, PMOS), bipolar junction transistor (BJT), bipolar CMOS (BiCMOS), silicon germanium (SiGe), gallium arsenide (GaAs), etc.
  • the communication device described in the above embodiments may be a network device or a terminal device (such as the terminal device in the foregoing method embodiments), but the scope of the communication device described in the present disclosure is not limited thereto, and the structure of the communication device is not limited.
  • the communication device may be a stand-alone device or may be part of a larger device.
  • the communication device may be:
  • the IC collection may also include storage components for storing data and computer programs;
  • the communication device may be a chip or a system on a chip
  • the chip includes a processor and an interface.
  • the number of processors may be one or more, and the number of interfaces may be multiple.
  • the chip also includes a memory, which is used to store necessary computer programs and data.
  • Embodiments of the present disclosure also provide a system for determining side link duration.
  • the system includes a communication device as a terminal device in the foregoing embodiment (such as the first terminal device in the foregoing method embodiment) and a communication device as a network device.
  • the present disclosure also provides a readable storage medium on which instructions are stored, and when the instructions are executed by a computer, the functions of any of the above method embodiments are implemented.
  • the present disclosure also provides a computer program product, which, when executed by a computer, implements the functions of any of the above method embodiments.
  • the above embodiments it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
  • software it may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer programs.
  • the computer program When the computer program is loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the present disclosure are generated in whole or in part.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable device.
  • the computer program may be stored in, or transmitted from one computer-readable storage medium to another; for example, the computer program may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (such as infrared, radio, or microwave) means.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that contains one or more available media integrated.
  • the usable media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., high-density digital video discs (DVDs)), or semiconductor media (e.g., solid state disks (SSDs)), etc.
  • "At least one" in the present disclosure can also be described as one or more, and "a plurality" can be two, three, four, or more; the present disclosure is not limited in this respect.
  • In the embodiments of the present disclosure, technical features are distinguished by "first", "second", "third", "A", "B", "C", "D", etc.
  • The technical features described by "first", "second", "third", "A", "B", "C", "D", etc. are in no particular order of precedence.

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The present disclosure proposes a method/device/equipment/storage medium, belonging to the field of communication technology. A network device may acquire capability information reported by a UE, where the capability information is used to indicate the AI and/or ML support capabilities of the UE; then generate an encoder model and a decoder model based on the capability information; and finally send the model information of the encoder model to the UE, where the model information of the encoder model is used to deploy the encoder model. Accordingly, the embodiments of the present disclosure provide a method for generating and deploying models, which can be used to train and deploy AI/ML models.

Description

A model training and deployment method/device/equipment and storage medium

Technical Field
The present disclosure relates to the field of communication technology, and in particular to a model training and deployment method/device/equipment and a storage medium.
Background
With the continuous development of AI (Artificial Intelligence) technology and ML (Machine Learning) technology, the application fields of AI and ML technologies (such as image recognition, speech processing, natural language processing, games, etc.) are becoming increasingly broad. When using AI and ML technologies, AI/ML models are usually needed to encode and decode information. Therefore, a model training and deployment method for AI/ML is urgently needed.
Summary
The model training and deployment method/device/equipment and storage medium proposed in the present disclosure are used to train and deploy AI/ML models.
The method proposed in an embodiment of one aspect of the present disclosure is applied to a network device and includes:
acquiring capability information reported by a UE, where the capability information is used to indicate the AI and/or ML support capabilities of the UE;
generating an encoder model and a decoder model based on the capability information;
sending the model information of the encoder model to the UE, where the model information of the encoder model is used to deploy the encoder model.
The method proposed in an embodiment of another aspect of the present disclosure is applied to a UE and includes:
reporting capability information to a network device, where the capability information is used to indicate the AI and/or ML support capabilities of the UE;
acquiring the model information of the encoder model sent by the network device, where the model information of the encoder model is used to deploy the encoder model;
generating an encoder model based on the model information of the encoder model.
An apparatus proposed in an embodiment of yet another aspect of the present disclosure includes:
an acquisition module configured to acquire capability information reported by a UE, where the capability information is used to indicate the AI and/or ML support capabilities of the UE;
a generating module configured to generate an encoder model and a decoder model based on the capability information;
a sending module configured to send the model information of the encoder model to the UE, where the model information of the encoder model is used to deploy the encoder model.
An apparatus proposed in an embodiment of yet another aspect of the present disclosure includes:
a reporting module configured to report capability information to a network device, where the capability information is used to indicate the AI and/or ML support capabilities of the UE;
an acquisition module configured to acquire the model information of the encoder model sent by the network device, where the model information of the encoder model is used to deploy the encoder model;
a generating module configured to generate an encoder model based on the model information of the encoder model.
A communication apparatus proposed in an embodiment of yet another aspect of the present disclosure includes a processor and a memory, where a computer program is stored in the memory, and the processor executes the computer program stored in the memory to cause the apparatus to perform the method proposed in the embodiment of the above one aspect.
A communication apparatus proposed in an embodiment of yet another aspect of the present disclosure includes a processor and a memory, where a computer program is stored in the memory, and the processor executes the computer program stored in the memory to cause the apparatus to perform the method proposed in the embodiment of the above other aspect.
A communication apparatus proposed in an embodiment of yet another aspect of the present disclosure includes: a processor and an interface circuit;
the interface circuit is configured to receive code instructions and transmit them to the processor;
the processor is configured to run the code instructions to perform the method proposed in the embodiment of the one aspect.
A communication apparatus proposed in an embodiment of yet another aspect of the present disclosure includes: a processor and an interface circuit;
the interface circuit is configured to receive code instructions and transmit them to the processor;
the processor is configured to run the code instructions to perform the method proposed in the embodiment of the other aspect.
A computer-readable storage medium proposed in an embodiment of yet another aspect of the present disclosure is used to store instructions which, when executed, cause the method proposed in the embodiment of the one aspect to be implemented.
A computer-readable storage medium proposed in an embodiment of yet another aspect of the present disclosure is used to store instructions which, when executed, cause the method proposed in the embodiment of the other aspect to be implemented.
To sum up, in the model training and deployment method/device/equipment and storage medium provided by the embodiments of the present disclosure, the network device first acquires the capability information reported by the UE, where the capability information is used to indicate the AI and/or ML support capabilities of the UE; the network device then generates an encoder model and a decoder model based on the capability information and sends the model information of the encoder model to the UE, where the model information of the encoder model is used to deploy the encoder model, so that the UE can generate the encoder model based on the model information of the encoder model. Thus, the embodiments of the present disclosure provide a method for generating and deploying models, which can be used to train and deploy AI/ML models.
Brief Description of the Drawings
The above and/or additional aspects and advantages of the present disclosure will become apparent and readily understood from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic flowchart of a method provided by an embodiment of the present disclosure;
FIG. 2a is a schematic flowchart of a method provided by another embodiment of the present disclosure;
FIG. 2b is a schematic flowchart of a method provided by still another embodiment of the present disclosure;
FIG. 2c is a schematic flowchart of a method provided by yet another embodiment of the present disclosure;
FIG. 2d is a schematic flowchart of a method provided by yet another embodiment of the present disclosure;
FIG. 2e is a schematic flowchart of a method provided by yet another embodiment of the present disclosure;
FIG. 3a is a schematic flowchart of a method provided by yet another embodiment of the present disclosure;
FIG. 3b is a schematic flowchart of a method provided by yet another embodiment of the present disclosure;
FIG. 3c is a schematic flowchart of a method provided by yet another embodiment of the present disclosure;
FIG. 3d is a schematic flowchart of a method provided by yet another embodiment of the present disclosure;
FIG. 3e is a schematic flowchart of a method provided by yet another embodiment of the present disclosure;
FIG. 4 is a schematic flowchart of a method provided by yet another embodiment of the present disclosure;
FIG. 5 is a schematic flowchart of a method provided by yet another embodiment of the present disclosure;
FIG. 6a is a schematic flowchart of a method provided by yet another embodiment of the present disclosure;
FIG. 6b is a schematic flowchart of a method provided by yet another embodiment of the present disclosure;
FIG. 6c is a schematic flowchart of a method provided by yet another embodiment of the present disclosure;
FIG. 6d is a schematic flowchart of a method provided by yet another embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of an apparatus provided by an embodiment of the present disclosure;
FIG. 8 is a schematic structural diagram of an apparatus provided by another embodiment of the present disclosure;
FIG. 9 is a block diagram of a user equipment provided by an embodiment of the present disclosure;
FIG. 10 is a block diagram of a network side device provided by an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments will be described in detail here, examples of which are shown in the accompanying drawings. Where the following description refers to the drawings, unless otherwise indicated, the same numerals in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the embodiments of the present disclosure; rather, they are merely examples of apparatuses and methods consistent with some aspects of the embodiments of the present disclosure as detailed in the appended claims.
The terms used in the embodiments of the present disclosure are for the purpose of describing particular embodiments only, and are not intended to limit the embodiments of the present disclosure. The singular forms "a" and "the" used in the embodiments of the present disclosure and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and includes any or all possible combinations of one or more of the associated listed items.
It should be understood that, although the terms first, second, third, etc. may be used in the embodiments of the present disclosure to describe various pieces of information, such information should not be limited to these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the embodiments of the present disclosure, first information may also be called second information; similarly, second information may also be called first information. Depending on the context, the word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining".
The model training and deployment method/device/equipment and storage medium provided by the embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
FIG. 1 is a schematic flowchart of a model training and deployment method provided by an embodiment of the present disclosure. The method is performed by a network device and, as shown in FIG. 1, may include the following steps:
Step 101: acquire capability information reported by a UE (User Equipment).
It should be noted that, in an embodiment of the present disclosure, a UE may be a device that provides voice and/or data connectivity to a user. A terminal device may communicate with one or more core networks via a RAN (Radio Access Network). The UE may be an Internet of Things terminal, such as a sensor device, a mobile phone (or "cellular" phone), or a computer with an Internet of Things terminal, which may be, for example, a fixed, portable, pocket-sized, handheld, computer-built-in, or vehicle-mounted device; for example, a station (STA), a subscriber unit, a subscriber station, a mobile station, a mobile, a remote station, an access point, a remote terminal, an access terminal, a user terminal, or a user agent. Alternatively, the UE may be a device of an unmanned aerial vehicle. Alternatively, the UE may be a vehicle-mounted device, for example, an on-board computer with a wireless communication function, or a wireless terminal externally connected to an on-board computer. Alternatively, the UE may be a roadside device, for example, a street lamp, a signal lamp, or another roadside device with a wireless communication function.
In an embodiment of the present disclosure, the capability information may be used to indicate the AI (Artificial Intelligence) and/or ML (Machine Learning) support capabilities of the UE.
In an embodiment of the present disclosure, the model may include at least one of the following:
an AI model;
an ML model.
Further, in an embodiment of the present disclosure, the above capability information may include at least one of the following:
whether the UE supports AI;
whether the UE supports ML;
the kinds of AI and/or ML models supported by the UE;
the maximum support capability information of the UE for models, where the maximum support capability information includes the structural information of the most complex model supported by the UE.
In an embodiment of the present disclosure, the above structural information may include, for example, the number of layers of the model.
Step 102: generate an encoder model and a decoder model based on the capability information.
In an embodiment of the present disclosure, the network device stores multiple different encoder models to be trained and/or multiple different decoder models to be trained, where correspondences exist between the encoder models and the decoder models. After acquiring the capability information sent by the UE, the network device may, based on the capability information, select an encoder model to be trained that matches the AI and/or ML support capabilities of the UE, and/or select a decoder model to be trained that matches the AI and/or ML support capabilities of the network device itself; afterwards, the encoder model to be trained and/or the decoder model to be trained are trained to generate the encoder model and the decoder model.
The specific detailed method of generating an encoder model and a decoder model based on the capability information will be described in subsequent embodiments.
Step 103: send the model information of the encoder model to the UE, where the model information of the encoder model may be used to deploy the encoder model.
In an embodiment of the present disclosure, the above model information of the encoder model may include at least one of the following:
the kind of the encoder model;
the model parameters of the encoder model.
In an embodiment of the present disclosure, the kinds of the encoder model may include:
a CNN (Convolutional Neural Network) model;
a fully-connected DNN (Deep Neural Network) model;
a model combining a CNN and a fully-connected DNN.
It should also be noted that, in an embodiment of the present disclosure, when the kind of the encoder model differs, the model parameters of the encoder model will also differ.
Specifically, in an embodiment of the present disclosure, when the kind of the encoder model is a CNN model, the model parameters of the encoder model may include at least one of: the compression rate of the CNN model, the number of convolutional layers of the CNN model, the arrangement information between the convolutional layers, the weight information of each convolutional layer, the convolution kernel size of each convolutional layer, and the normalization layer and activation function type applied by each convolutional layer.
In another embodiment of the present disclosure, when the kind of the encoder model is a fully-connected DNN model, the model parameters of the encoder model may include at least one of: the compression rate of the fully-connected DNN model, the number of fully-connected layers, the arrangement information between the fully-connected layers, the weight information of each fully-connected layer, the number of nodes of each fully-connected layer, and the normalization layer and activation function type applied by each fully-connected layer.
In yet another embodiment of the present disclosure, when the kind of the encoder model is a model combining a CNN and a fully-connected DNN, the model parameters of the encoder model may include at least one of: the compression rate of the combined model, the numbers of convolutional layers and fully-connected layers, the combination pattern, the weight information of the convolutional layers, the convolution kernel size, the number of nodes of the fully-connected layers, the weight information of the fully-connected layers, and the normalization layer and activation function type applied by each fully-connected layer and convolutional layer.
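As one illustration of how a network device might match a candidate encoder to the reported capability information (steps 101 and 102), here is a minimal sketch; the candidate pool, capability fields, and selection rule are assumptions for the example, not part of the disclosure.

```python
# Hypothetical candidate pool kept by the network device, ordered from
# most complex to least complex (all names are illustrative).
CANDIDATE_ENCODERS = [
    {"kind": "cnn", "conv_layers": 8, "compression": 1 / 16},
    {"kind": "cnn", "conv_layers": 4, "compression": 1 / 8},
    {"kind": "dnn", "fc_layers": 3, "compression": 1 / 4},
]

def select_encoder(capability):
    """Return the most complex candidate the UE's capability still supports."""
    if not capability.get("supports_ai_ml", False):
        return None
    for cand in CANDIDATE_ENCODERS:
        if cand["kind"] not in capability["model_kinds"]:
            continue
        layers = cand.get("conv_layers", cand.get("fc_layers", 0))
        if layers <= capability["max_layers"]:
            return cand
    return None

# A UE reporting CNN support with at most 5 layers gets the 4-layer CNN.
ue_cap = {"supports_ai_ml": True, "model_kinds": ["cnn"], "max_layers": 5}
chosen = select_encoder(ue_cap)
```

The ordering of the pool encodes the (assumed) policy of preferring the most capable model the UE can run.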
To sum up, in the method provided by the embodiments of the present disclosure, the network device first acquires the capability information reported by the UE, where the capability information is used to indicate the AI and/or ML support capabilities of the UE; the network device then generates an encoder model and a decoder model based on the capability information and sends the model information of the encoder model to the UE, where the model information of the encoder model is used to deploy the encoder model, so that the UE can generate the encoder model based on the model information of the encoder model. Thus, the embodiments of the present disclosure provide a method for generating and deploying models, which can be used to train and deploy AI/ML models.
FIG. 2a is a schematic flowchart of a model training and deployment method provided by an embodiment of the present disclosure. The method is performed by a network device and, as shown in FIG. 2a, may include the following steps:
Step 201a: acquire capability information reported by a UE.
For a detailed introduction to step 201a, reference may be made to the description of the above embodiments, which is not repeated here.
Step 202a: select an encoder model to be trained and a decoder model to be trained based on the capability information.
In an embodiment of the present disclosure, the network device may, from the multiple different encoder models to be trained and the multiple different decoder models to be trained that it stores, select, based on the capability information, an encoder model to be trained that is supported by the UE and a decoder model to be trained that is supported by the network device. The encoder model to be trained and the decoder model to be trained selected by the network device correspond to and match each other.
Step 203a: determine sample data based on at least one of the information reported by the UE and the stored information of the network device.
In an embodiment of the present disclosure, the above information reported by the UE may be information currently reported by the UE, or information historically reported by the UE.
In an embodiment of the present disclosure, the above information may be information reported by the UE that has not been encoded and compressed, and/or information reported by the UE that has been encoded and compressed.
Step 204a: train the encoder model to be trained and the decoder model to be trained based on the sample data, to generate an encoder model and a decoder model.
Step 205a: send the model information of the encoder model to the UE, where the model information of the encoder model is used to deploy the encoder model.
For a detailed introduction to step 205a, reference may be made to the description of the above embodiments, which is not repeated here.
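The joint training in steps 203a and 204a can be illustrated with a toy example. The following is a minimal sketch under assumed conditions (a linear autoencoder, synthetic 8-dimensional sample vectors, plain gradient descent on reconstruction error); the disclosure does not specify a particular architecture or optimizer.

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.standard_normal((256, 8))    # sample data (e.g., reported CSI)
W_enc = rng.standard_normal((8, 2)) * 0.1  # encoder: compress 8 dims -> 2
W_dec = rng.standard_normal((2, 8)) * 0.1  # decoder: reconstruct 2 dims -> 8

def mse(x, y):
    return float(np.mean((x - y) ** 2))

lr = 0.01
loss_before = mse(samples, samples @ W_enc @ W_dec)
for _ in range(500):
    code = samples @ W_enc        # UE-side encoding (compression)
    recon = code @ W_dec          # network-side decoding (reconstruction)
    err = recon - samples         # reconstruction error
    # Gradient descent on both models jointly (constant factors folded into lr).
    grad_dec = code.T @ err / len(samples)
    grad_enc = samples.T @ (err @ W_dec.T) / len(samples)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc
loss_after = mse(samples, samples @ W_enc @ W_dec)
# Joint training reduces the reconstruction distortion: loss_after < loss_before.
```

The point of the sketch is only the training loop shape: encoder and decoder are updated together against the same reconstruction objective before the encoder's model information is sent to the UE.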
To sum up, in the method provided by the embodiments of the present disclosure, the network device first acquires the capability information reported by the UE, where the capability information is used to indicate the AI and/or ML support capabilities of the UE; the network device then generates an encoder model and a decoder model based on the capability information and sends the model information of the encoder model to the UE, where the model information of the encoder model is used to deploy the encoder model, so that the UE can generate the encoder model based on the model information of the encoder model. Thus, the embodiments of the present disclosure provide a method for generating and deploying models, which can be used to train and deploy AI/ML models.
FIG. 2b is a schematic flowchart of a model training and deployment method provided by an embodiment of the present disclosure. The method is performed by a network device and, as shown in FIG. 2b, may include the following steps:
Step 201b: acquire capability information reported by a UE.
For a detailed introduction to step 201b, reference may be made to the description of the above embodiments, which is not repeated here.
Step 202b: select an encoder model to be trained based on the capability information.
In an embodiment of the present disclosure, the network device may, from the multiple different encoder models to be trained that it stores, select, based on the capability information, an encoder model to be trained that is supported by the UE.
In addition, it should be noted that, in an embodiment of the present disclosure, the above encoder model to be trained should satisfy the following conditions:
1. the selected encoder model to be trained should be a model matching the capability information of the UE (i.e., a model supported by the UE);
2. the decoder model to be trained corresponding to the selected encoder model to be trained should be a model supported by the network device.
Step 203b: determine, based on the encoder model to be trained, a decoder model to be trained that matches the encoder model to be trained.
Specifically, in an embodiment of the present disclosure, the model information of the decoder model to be trained that matches the encoder model to be trained may be determined based on the model information of the encoder model to be trained; afterwards, the decoder model to be trained is deployed based on the model information of the decoder model to be trained.
Step 204b: determine sample data based on at least one of the information reported by the UE and the stored information of the network device.
In an embodiment of the present disclosure, the above information reported by the UE may be information currently reported by the UE, or information historically reported by the UE.
In an embodiment of the present disclosure, the above information may be information reported by the UE that has not been encoded and compressed, and/or information reported by the UE that has been encoded and compressed.
Step 205b: train the encoder model to be trained and the decoder model to be trained based on the sample data, to generate an encoder model and a decoder model.
Step 206b: send the model information of the encoder model to the UE, where the model information of the encoder model is used to deploy the encoder model.
For a detailed introduction to step 206b, reference may be made to the description of the above embodiments, which is not repeated here.
To sum up, in the method provided by the embodiments of the present disclosure, the network device first acquires the capability information reported by the UE, where the capability information is used to indicate the AI and/or ML support capabilities of the UE; the network device then generates an encoder model and a decoder model based on the capability information and sends the model information of the encoder model to the UE, where the model information of the encoder model is used to deploy the encoder model, so that the UE can generate the encoder model based on the model information of the encoder model. Thus, the embodiments of the present disclosure provide a method for generating and deploying models, which can be used to train and deploy AI/ML models.
FIG. 2c is a schematic flowchart of a model training and deployment method provided by an embodiment of the present disclosure. The method is performed by a network device and, as shown in FIG. 2c, may include the following steps:
Step 201c: acquire capability information reported by a UE.
For a detailed introduction to step 201c, reference may be made to the description of the above embodiments, which is not repeated here.
Step 202c: select an encoder model to be trained based on the capability information.
In an embodiment of the present disclosure, the network device may, from the multiple different encoder models to be trained that it stores, select, based on the capability information, an encoder model to be trained that is supported by the UE.
In addition, it should be noted that, in an embodiment of the present disclosure, the above encoder model to be trained should satisfy the following conditions:
1. the selected encoder model to be trained should be a model matching the capability information of the UE (i.e., a model supported by the UE);
2. the decoder model to be trained corresponding to the selected encoder model to be trained should be a model supported by the network device.
Step 203c: determine sample data based on at least one of the information reported by the UE and the stored information of the network device.
In an embodiment of the present disclosure, the above information reported by the UE may be information currently reported by the UE, or information historically reported by the UE.
In an embodiment of the present disclosure, the above information may be information reported by the UE that has not been encoded and compressed, and/or information reported by the UE that has been encoded and compressed.
Step 204c: train the encoder model to be trained based on the sample data, to generate an encoder model.
Step 205c: determine, based on the encoder model, a decoder model that matches the encoder model.
Specifically, in an embodiment of the present disclosure, the model information of the decoder model that matches the encoder model may be determined based on the model information of the encoder model; afterwards, the decoder model is deployed based on the model information of the decoder model.
Step 206c: send the model information of the encoder model to the UE, where the model information of the encoder model is used to deploy the encoder model.
For a detailed introduction to step 206c, reference may be made to the description of the above embodiments, which is not repeated here.
To sum up, in the method provided by the embodiments of the present disclosure, the network device first acquires the capability information reported by the UE, where the capability information is used to indicate the AI and/or ML support capabilities of the UE; the network device then generates an encoder model and a decoder model based on the capability information and sends the model information of the encoder model to the UE, where the model information of the encoder model is used to deploy the encoder model, so that the UE can generate the encoder model based on the model information of the encoder model. Thus, the embodiments of the present disclosure provide a method for generating and deploying models, which can be used to train and deploy AI/ML models.
FIG. 2d is a schematic flowchart of a model training and deployment method provided by an embodiment of the present disclosure. The method is performed by a network device and, as shown in FIG. 2d, may include the following steps:
Step 201d: acquire capability information reported by a UE.
For a detailed introduction to step 201d, reference may be made to the description of the above embodiments, which is not repeated here.
Step 202d: select a decoder model to be trained based on the capability information.
In an embodiment of the present disclosure, the network device may, from the multiple different decoder models to be trained that it stores, select a decoder model to be trained based on the capability information.
It should be noted that, in an embodiment of the present disclosure, the above "selecting a decoder model to be trained based on the capability information" specifically means that, when selecting a decoder model to be trained based on the capability information, the selected decoder model to be trained must satisfy the following conditions:
1. the selected decoder model to be trained should be a model supported by the network device;
2. the encoder model to be trained corresponding to the selected decoder model to be trained should be a model matching the capability information of the UE (i.e., a model supported by the UE).
Step 203d: determine, based on the decoder model to be trained, an encoder model to be trained that matches the decoder model to be trained.
Specifically, in an embodiment of the present disclosure, the model information of the encoder model to be trained that matches the decoder model to be trained may be determined based on the model information of the decoder model to be trained; afterwards, the encoder model to be trained is deployed based on the model information of the encoder model to be trained.
In an embodiment of the present disclosure, the determined encoder model to be trained is specifically a model supported by the UE.
Step 204d: determine sample data based on at least one of the information reported by the UE and the stored information of the network device.
In an embodiment of the present disclosure, the above information reported by the UE may be information currently reported by the UE, or information historically reported by the UE.
In an embodiment of the present disclosure, the above information may be information reported by the UE that has not been encoded and compressed, and/or information reported by the UE that has been encoded and compressed.
Step 205d: train the encoder model to be trained and the decoder model to be trained based on the sample data, to generate an encoder model and a decoder model.
Step 206d: send the model information of the encoder model to the UE, where the model information of the encoder model is used to deploy the encoder model.
For a detailed introduction to step 206d, reference may be made to the description of the above embodiments, which is not repeated here.
FIG. 2e is a schematic flowchart of a model training and deployment method provided by an embodiment of the present disclosure. The method is performed by a network device and, as shown in FIG. 2e, may include the following steps:
Step 201e: acquire capability information reported by a UE.
For a detailed introduction to step 201e, reference may be made to the description of the above embodiments, which is not repeated here.
Step 202e: select a decoder model to be trained based on the capability information.
In an embodiment of the present disclosure, the network device may, from the multiple different decoder models to be trained that it stores, select a decoder model to be trained based on the capability information.
It should be noted that, in an embodiment of the present disclosure, the above "selecting a decoder model to be trained based on the capability information" specifically means that, when selecting a decoder model to be trained based on the capability information, the selected decoder model to be trained must satisfy the following conditions:
1. the selected decoder model to be trained should be a model supported by the network device;
2. the encoder model to be trained corresponding to the selected decoder model to be trained should be a model matching the capability information of the UE (i.e., a model supported by the UE).
Step 203e: determine sample data based on at least one of the information reported by the UE and the stored information of the network device.
In an embodiment of the present disclosure, the above information reported by the UE may be information currently reported by the UE, or information historically reported by the UE.
In an embodiment of the present disclosure, the above information may be information reported by the UE that has not been encoded and compressed, and/or information reported by the UE that has been encoded and compressed.
Step 204e: train the decoder model to be trained based on the sample data, to generate a decoder model.
Step 205e: determine, based on the decoder model, an encoder model that matches the decoder model.
Specifically, in an embodiment of the present disclosure, the model information of the encoder model that matches the decoder model may be determined based on the model information of the decoder model; afterwards, the encoder model is deployed based on the model information of the encoder model.
Step 206e: send the model information of the encoder model to the UE, where the model information of the encoder model is used to deploy the encoder model.
For a detailed introduction to step 206e, reference may be made to the description of the above embodiments, which is not repeated here.
FIG. 3a is a schematic flowchart of a model training and deployment method provided by an embodiment of the present disclosure. The method is performed by a network device and, as shown in FIG. 3a, may include the following steps:
Step 301a: acquire capability information reported by a UE.
Step 302a: generate an encoder model and a decoder model based on the capability information.
Step 303a: send the model information of the encoder model to the UE, where the model information of the encoder model is used to deploy the encoder model.
For other detailed introductions to steps 301a to 303a, reference may be made to the description of the above embodiments, which is not repeated here.
Step 304a: send indication information to the UE.
In an embodiment of the present disclosure, the indication information may be used to indicate the information type used when the UE reports to the network device.
In an embodiment of the present disclosure, the information type may include at least one of the following:
original reporting information that has not been encoded by the encoder model;
information obtained after the original reporting information has been encoded by the encoder model.
Further, in an embodiment of the present disclosure, the reporting information may be the information to be reported by the UE to the network device, and the reporting information may include CSI (Channel State Information).
In an embodiment of the present disclosure, the CSI information may include at least one of the following:
channel information;
eigenmatrix information of the channel;
eigenvector information of the channel;
PMI (Precoding Matrix Indicator);
CQI (Channel Quality Indicator);
RI (Rank Indicator);
RSRP (Reference Signal Received Power);
RSRQ (Reference Signal Received Quality);
SINR (Signal-to-Interference plus Noise Ratio);
reference signal resource indication.
In addition, in an embodiment of the present disclosure, the network device may send the indication information to the UE via signaling.
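The indication mechanism can be sketched from the UE side as follows; the field names and the toy "encoder" are assumptions for illustration, not signaling defined by the disclosure.

```python
# Information types the network device may indicate (labels are illustrative).
RAW = "raw"
ENCODED = "encoded"

def build_report(csi, indication, encoder=None):
    """Build the UE report according to the indicated information type."""
    if indication == ENCODED:
        if encoder is None:
            raise ValueError("no deployed encoder model")
        return {"type": ENCODED, "payload": encoder(csi)}
    return {"type": RAW, "payload": csi}

# Toy stand-in for the deployed encoder model: keep every other element
# as a 2x "compression" of the CSI vector.
toy_encoder = lambda v: v[::2]

report = build_report([0.1, 0.2, 0.3, 0.4], ENCODED, toy_encoder)
```

Here the UE applies its deployed encoder model only when the indication asks for encoded reporting; otherwise the original reporting information is sent unchanged.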
To sum up, in the method provided by the embodiments of the present disclosure, the network device first acquires the capability information reported by the UE, where the capability information is used to indicate the AI and/or ML support capabilities of the UE; the network device then generates an encoder model and a decoder model based on the capability information and sends the model information of the encoder model to the UE, where the model information of the encoder model is used to deploy the encoder model, so that the UE can generate the encoder model based on the model information of the encoder model. Thus, the embodiments of the present disclosure provide a method for generating and deploying models, which can be used to train and deploy AI/ML models.
FIG. 3b is a schematic flowchart of a model training and deployment method provided by an embodiment of the present disclosure. The method is performed by a network device and, as shown in FIG. 3b, may include the following steps:
Step 301b: acquire capability information reported by a UE.
Step 302b: generate an encoder model and a decoder model based on the capability information.
Step 303b: send the model information of the encoder model to the UE, where the model information of the encoder model is used to deploy the encoder model.
Step 304b: send indication information to the UE, where the indication information is used to indicate that the information type used when the UE reports to the network device includes: information obtained after the original reporting information has been encoded by the encoder model.
For other detailed introductions to steps 303b to 304b, reference may be made to the description of the above embodiments, which is not repeated here.
Step 305b: when the information reported by the UE is received, decode the information reported by the UE by using the decoder model.
In an embodiment of the present disclosure, since the information type indicated for UE reporting in step 304b above is information that has been encoded by the encoder model, the information reported by the UE that the network device receives in this step 305b is in essence information that has been encoded by the encoder model. On this basis, the network device needs to use the decoder model (such as the decoder model generated in step 302b above) to decode the information reported by the UE, so as to acquire the original reporting information.
To sum up, in the method provided by the embodiments of the present disclosure, the network device first acquires the capability information reported by the UE, where the capability information is used to indicate the AI and/or ML support capabilities of the UE; the network device then generates an encoder model and a decoder model based on the capability information and sends the model information of the encoder model to the UE, where the model information of the encoder model is used to deploy the encoder model, so that the UE can generate the encoder model based on the model information of the encoder model. Thus, the embodiments of the present disclosure provide a method for generating and deploying models, which can be used to train and deploy AI/ML models.
图3c为本公开实施例所提供的一种模型训练部署方法的流程示意图,该方法由网络设备执行,如图3c所示,该方法可以包括以下步骤:
步骤301c、获取UE上报的能力信息。
步骤302c、基于能力信息生成编码器模型和译码器模型。
步骤303c、将编码器模型的模型信息发送至UE,该编码器模型的模型信息用于部署编码器模型。
其中,关于步骤301c-步骤303c的其他详细介绍可以参考上述实施例描述,本公开实施例在此不做赘述。
步骤304c、对编码器模型和译码器模型进行模型更新,生成更新后的编码器模型和更新后的译码器模型。
其中,在本公开的一个实施例之中,对编码器模型和译码器模型进行模型更新具体可以包括以下步骤:
步骤1、基于原有的编码器模型和原有的译码器模型确定新的编码器模型和新的译码器模型。
其中,在本公开的一个实施例之中,当网络设备需要调整编码器模型和译码器模型的压缩率时,可以确定新的编码器模型和新的译码器模型。
以及,在本公开的一个实施例之中,上述的确定新的编码器模型和新的译码器模型的方法可以包括:
调整原有的编码器模型和原有的译码器模型的模型参数得到所述新的编码器模型和新的译码器模型;或者
基于所述能力信息重新选择与原有的编码器模型和原有的译码器模型的种类不同的新的编码器模型和新的译码器模型。其中,该新的编码器模型为UE支持的模型,该新的译码器模型为网络设备支持的模型。
步骤2、重新训练所述新的编码器模型和新的译码器模型得到更新后的编码器模型和更新后的译码器模型。
其中,重新训练过程具体可以参考上述实施例描述,本公开实施例在此不做赘述。
进一步地,在本公开的另一个实施例之中,对编码器模型和译码器模型进行模型更新具体可以包括以下步骤:
步骤a、监控原有的编码器模型和原有的译码器模型的失真度。
其中,在本公开的一个实施例之中,可以实时监控原有的编码器模型和原有的译码器模型的失真度。具体的,可以将UE上报的未经过编码压缩的信息作为输入信息依次输入至原有的编码器模型和原有的译码器模型中依次进行编解码操作得到输出信息,以及通过计算该输出信息和输入信息的匹配度来确定原有的编码器模型和原有的译码器模型的失真度。
步骤b、当失真度超出第一阈值,基于原有的编码器模型和原有的译码器模型确定新的编码器模型和新的译码器模型,并重新训练新的编码器模型和新的译码器模型得到更新后的编码器模型和更新后的译码器模型,其中,更新后的编码器模型和更新后的译码器模型的失真度低于第二阈值,第二阈值小于等于第一阈值。
其中,在本公开的一个实施例之中,当失真度超出第一阈值时,说明该原有的编码器模型和原有的译码器模型的编解码精度较低,则会影响信号后续的处理精度。由此,需要确定新的编码器模型和新的译码器模型,并重新训练新的编码器模型和新的译码器模型得到更新后的编码器模型和更新后的译码器模型。以及,应确保更新后的编码器模型和更新后的译码器模型的失真度较低,低于第二阈值,以此来保证模型的编解码精度。
以及,关于上述的确定新的编码器模型和新的译码器模型的方法以及重新训练的方法具体可以参考上述实施例描述,本公开实施例在此不做赘述。
此外,在本公开的一个实施例之中,上述第一阈值和第二阈值可以是预先设置的。
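上述步骤a-步骤b中的失真度监控与阈值判断过程可以用如下示意性代码加以说明(仅为便于理解的假设性示例:其中以归一化均方误差作为输出信息与输入信息匹配度的一种度量,阈值取值、函数名均为假设,并非对实现方式的限定):

```python
FIRST_THRESHOLD = 0.1    # 第一阈值(取值为假设,可预先设置)
SECOND_THRESHOLD = 0.05  # 第二阈值,小于等于第一阈值

def distortion(original, encoder_model, decoder_model):
    # 将未经编码压缩的上报信息依次输入编码器模型和译码器模型,
    # 依次进行编解码操作得到输出信息
    recon = decoder_model(encoder_model(original))
    # 以归一化均方误差衡量输出信息与输入信息的匹配度(失真度)
    num = sum((o - r) ** 2 for o, r in zip(original, recon))
    den = sum(o * o for o in original) or 1.0
    return num / den

def needs_update(original, encoder_model, decoder_model):
    # 当失真度超出第一阈值时,触发确定并重新训练新的编码器/译码器模型
    return distortion(original, encoder_model, decoder_model) > FIRST_THRESHOLD

identity = lambda x: list(x)          # 无失真的编/译码器(示例)
zeroing = lambda x: [0.0 for _ in x]  # 完全失真的编码器(示例)
```

其中,更新后的模型应满足失真度低于第二阈值,以保证模型的编解码精度。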
步骤305c、直接利用更新后的译码器模型替换原有的译码器模型。
其中,在本公开的一个实施例之中,利用更新后的译码器模型替换原有的译码器模型后,网络设备即可利用该更新后的译码器模型来进行译码,其中,基于该更新后的译码器模型的译码精度较高,则可以确保信号后续的处理精度。
综上所述,在本公开实施例提供的方法之中,网络设备会先获取UE上报的能力信息,该能力信息用于指示UE的AI和/或ML的支持能力,之后网络设备会基于能力信息生成编码器模型和译码器模型,并将编码器模型的模型信息发送至UE,该编码器模型的模型信息用于部署编码器模型,则UE可以基于该编码器模型的模型信息生成编码器模型。由此,本公开实施例提供了一种生成和部署模型的方法,可以用于训练和部署AI/ML模型。
图3d为本公开实施例所提供的一种模型训练部署方法的流程示意图,该方法由网络设备执行,如图3d所示,该方法可以包括以下步骤:
步骤301d、获取UE上报的能力信息。
步骤302d、基于能力信息生成编码器模型和译码器模型。
步骤303d、将编码器模型的模型信息发送至UE,该编码器模型的模型信息用于部署编码器模型。
步骤304d、对编码器模型和译码器模型进行模型更新,生成更新后的编码器模型和更新后的译码器模型。
其中,关于步骤301d-步骤304d的其他详细介绍可以参考上述实施例描述,本公开实施例在此不做赘述。
步骤305d、确定更新后的译码器模型的模型信息与原有的译码器模型的模型信息之间的差异模型信息。
其中,关于模型信息的相关介绍可以参考上述实施例描述,本公开实施例在此不做赘述。
步骤306d、基于差异模型信息对原有的译码器模型进行优化。
具体的,在本公开的一个实施例之中,基于该差异模型信息优化调整原有的译码器模型之后,则可以使得优化调整之后的译码器模型的模型信息与上述步骤304d中生成的更新后的译码器模型的模型信息一致,从而网络设备后续即可利用该更新后的译码器模型来进行译码,其中,基于该更新后的译码器模型的译码精度较高,则可以确保信号后续的处理精度。
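步骤305d-步骤306d中"确定差异模型信息并据此优化原有模型"的过程可以用如下示意性代码加以说明(仅为便于理解的假设性示例,假设模型信息以"参数名到参数值"的字典形式表示,函数名均为假设):

```python
def diff_model_info(updated_info, original_info):
    # 确定更新后的译码器模型的模型信息与原有的译码器模型的模型信息
    # 之间的差异模型信息:仅保留发生变化的参数
    return {k: v for k, v in updated_info.items() if original_info.get(k) != v}

def apply_diff(original_info, diff_info):
    # 基于差异模型信息对原有模型进行优化,
    # 使优化之后的模型信息与更新后的模型信息一致
    merged = dict(original_info)
    merged.update(diff_info)
    return merged
```

由于差异模型信息通常小于全部模型信息,这种方式可以减少传输与处理的开销。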
综上所述,在本公开实施例提供的方法之中,网络设备会先获取UE上报的能力信息,该能力信息用于指示UE的AI和/或ML的支持能力,之后网络设备会基于能力信息生成编码器模型和译码器模型,并将编码器模型的模型信息发送至UE,该编码器模型的模型信息用于部署编码器模型,则UE可以基于该编码器模型的模型信息生成编码器模型。由此,本公开实施例提供了一种生成和部署模型的方法,可以用于训练和部署AI/ML模型。
图3e为本公开实施例所提供的一种模型训练部署方法的流程示意图,该方法由网络设备执行,如图3e所示,该方法可以包括以下步骤:
步骤301e、获取UE上报的能力信息。
步骤302e、基于能力信息生成编码器模型和译码器模型。
步骤303e、将编码器模型的模型信息发送至UE,该编码器模型的模型信息用于部署编码器模型。
步骤304e、对编码器模型和译码器模型进行模型更新,生成更新后的编码器模型和更新后的译码器模型。
其中,关于步骤301e-步骤304e的其他详细介绍可以参考上述实施例描述,本公开实施例在此不做赘述。
步骤305e、将更新后的编码器模型的模型信息发送至UE。
其中,在本公开的一个实施例之中,该更新后的编码器模型的模型信息可以包括:
更新后的编码器模型的全部模型信息;或者
更新后的编码器模型的模型信息与原有的编码器模型的模型信息之间的差异模型信息。
以及,在本公开的一个实施例之中,网络设备将更新后的编码器模型的模型信息发送至UE后,可以使得UE利用更新后的编码器模型来对上报信息进行编码,其中,基于该更新后的编码器模型的编码精度较高,则可以确保信号后续的处理精度。
综上所述,在本公开实施例提供的方法之中,网络设备会先获取UE上报的能力信息,该能力信息用于指示UE的AI和/或ML的支持能力,之后网络设备会基于能力信息生成编码器模型和译码器模型,并将编码器模型的模型信息发送至UE,该编码器模型的模型信息用于部署编码器模型,则UE可以基于该编码器模型的模型信息生成编码器模型。由此,本公开实施例提供了一种生成和部署模型的方法,可以用于训练和部署AI/ML模型。
图4为本公开实施例所提供的一种模型训练部署方法的流程示意图,该方法由UE执行,如图4所示,该方法可以包括以下步骤:
步骤401、向网络设备上报能力信息。
其中,在本公开的一个实施例之中,该能力信息用于指示UE的AI和/或ML的支持能力。
以及,在本公开的一个实施例之中,该模型可以包括以下至少一种:
AI模型;
ML模型。
进一步的,在本公开的一个实施例之中,该能力信息可以包括以下至少一种:
UE是否支持AI;
UE是否支持ML;
UE支持的AI和/或ML的模型的种类;
UE对于模型的最大支持能力信息,该最大支持能力信息包括UE支持的最复杂的模型的结构信息。
其中,在本公开的一个实施例之中,上述的结构信息例如可以包括模型的层数等。
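上述能力信息所包含的各字段可以用如下示意性结构加以说明(仅为便于理解的假设性示例,其中的字段名与取值均为假设,并非协议规定):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class UECapabilityInfo:
    # UE上报的AI/ML能力信息的示意性结构(字段名为假设)
    supports_ai: bool                 # UE是否支持AI
    supports_ml: bool                 # UE是否支持ML
    supported_model_types: List[str] = field(default_factory=list)  # 支持的AI/ML模型的种类
    max_model_layers: int = 0         # 最大支持能力:UE支持的最复杂模型的层数

cap = UECapabilityInfo(supports_ai=True, supports_ml=True,
                       supported_model_types=["CNN", "Transformer"],
                       max_model_layers=8)
```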
步骤402、获取网络设备发送的编码器模型的模型信息。
其中,在本公开的一个实施例之中,该编码器模型的模型信息用于部署编码器模型。
以及,在本公开的一个实施例之中,该编码器模型的模型信息可以包括以下至少一种:
编码器模型的种类;
编码器模型的模型参数。
步骤403、基于编码器模型的模型信息生成编码器模型。
其中,关于步骤401-步骤403的其他详细介绍可以参考上述实施例描述,本公开实施例在此不做赘述。
综上所述,在本公开实施例提供的方法之中,网络设备会先获取UE上报的能力信息,该能力信息用于指示UE的AI和/或ML的支持能力,之后网络设备会基于能力信息生成编码器模型和译码器模型,并将编码器模型的模型信息发送至UE,该编码器模型的模型信息用于部署编码器模型,则UE可以基于该编码器模型的模型信息生成编码器模型。由此,本公开实施例提供了一种生成和部署模型的方法,可以用于训练和部署AI/ML模型。
图5为本公开实施例所提供的一种模型训练部署方法的流程示意图,该方法由UE执行,如图5所示,该方法可以包括以下步骤:
步骤501、向网络设备上报能力信息。
步骤502、获取网络设备发送的编码器模型的模型信息。
步骤503、基于编码器模型的模型信息生成编码器模型。
其中,关于步骤501-步骤503的其他详细介绍可以参考上述实施例描述,本公开实施例在此不做赘述。
步骤504、获取网络设备发送的指示信息。
其中,本公开的一个实施例之中,该指示信息用于指示UE向网络设备上报时的信息类型。
以及,本公开的一个实施例之中,该信息类型可以包括未经编码器模型编码的原始的上报信息和原始的上报信息经过编码器模型编码之后的信息中的至少一种。
进一步的,在本公开的一个实施例之中,该上报信息为UE要向网络设备上报的信息。以及,该上报信息可以包括CSI信息;
以及,该CSI信息可以包括以下至少一种:
信道信息;
信道的特征矩阵信息;
信道的特征向量信息;
PMI;
CQI;
RI;
RSRP;
RSRQ;
SINR;
参考信号资源指示。
步骤505、基于指示信息向网络设备进行上报。
其中,在本公开的一个实施例之中,当上述步骤504中的指示信息用于指示UE向网络设备上报时的信息类型为:未经编码器模型编码的原始的上报信息时,则本步骤505中UE要向网络设备进行上报时,可以无需编码而直接将原始的上报信息发送至网络设备;以及,当上述步骤504中的指示信息用于指示UE向网络设备上报时的信息类型为:原始的上报信息经过编码器模型编码之后的信息时,则本步骤505中UE要向网络设备进行上报时,需先利用编码器模型对上报信息进行编码,并将编码之后的信息上报至网络设备。
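上述基于指示信息进行上报的流程可以用如下示意性代码加以说明(仅为便于理解的假设性示例,其中信息类型的取值、函数名与编码器模型均为假设):

```python
RAW = "raw"          # 未经编码器模型编码的原始的上报信息
ENCODED = "encoded"  # 原始的上报信息经过编码器模型编码之后的信息

def build_report(report_info, indicated_type, encoder_model):
    if indicated_type == ENCODED:
        # 需先利用编码器模型对上报信息进行编码,再上报编码之后的信息
        return encoder_model(report_info)
    # 无需编码,直接将原始的上报信息发送至网络设备
    return report_info

toy_encoder = lambda info: "enc(" + info + ")"  # 假设的编码器模型
```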
综上所述,在本公开实施例提供的方法之中,网络设备会先获取UE上报的能力信息,该能力信息用于指示UE的AI和/或ML的支持能力,之后网络设备会基于能力信息生成编码器模型和译码器模型,并将编码器模型的模型信息发送至UE,该编码器模型的模型信息用于部署编码器模型,则UE可以基于该编码器模型的模型信息生成编码器模型。由此,本公开实施例提供了一种生成和部署模型的方法,可以用于训练和部署AI/ML模型。
图6a为本公开实施例所提供的一种模型训练部署方法的流程示意图,该方法由UE执行,如图6a所示,该方法可以包括以下步骤:
步骤601a、向网络设备上报能力信息。
步骤602a、获取网络设备发送的编码器模型的模型信息。
步骤603a、基于编码器模型的模型信息生成编码器模型。
其中,关于步骤601a-步骤603a的其他详细介绍可以参考上述实施例描述,本公开实施例在此不做赘述。
步骤604a、获取网络设备发送的指示信息,该指示信息指示的信息类型可以包括:原始的上报信息经过编码器模型编码之后的信息。
步骤605a、利用编码器模型对上报信息进行编码。
步骤606a、将编码之后的信息上报至网络设备。
综上所述,在本公开实施例提供的方法之中,网络设备会先获取UE上报的能力信息,该能力信息用于指示UE的AI和/或ML的支持能力,之后网络设备会基于能力信息生成编码器模型和译码器模型,并将编码器模型的模型信息发送至UE,该编码器模型的模型信息用于部署编码器模型,则UE可以基于该编码器模型的模型信息生成编码器模型。由此,本公开实施例提供了一种生成和部署模型的方法,可以用于训练和部署AI/ML模型。
图6b为本公开实施例所提供的一种模型训练部署方法的流程示意图,该方法由UE执行,如图6b所示,该方法可以包括以下步骤:
步骤601b、向网络设备上报能力信息。
步骤602b、获取网络设备发送的编码器模型的模型信息。
步骤603b、基于编码器模型的模型信息生成编码器模型。
其中,关于步骤601b-步骤603b的其他详细介绍可以参考上述实施例描述,本公开实施例在此不做赘述。
步骤604b、接收网络设备发送的更新后的编码器模型的模型信息。
其中,在本公开的一个实施例之中,该更新后的编码器模型的模型信息可以包括:
更新后的编码器模型的全部模型信息;或者
更新后的编码器模型的模型信息与原有的编码器模型的模型信息之间的差异模型信息。
步骤605b、基于更新后的编码器模型的模型信息进行模型更新。
其中,在本公开的一个实施例之中,关于“基于更新后的编码器模型的模型信息进行模型更新”的详细内容会在后续实施例进行介绍。
综上所述,在本公开实施例提供的方法之中,网络设备会先获取UE上报的能力信息,该能力信息用于指示UE的AI和/或ML的支持能力,之后网络设备会基于能力信息生成编码器模型和译码器模型,并将编码器模型的模型信息发送至UE,该编码器模型的模型信息用于部署编码器模型,则UE可以基于该编码器模型的模型信息生成编码器模型。由此,本公开实施例提供了一种生成和部署模型的方法,可以用于训练和部署AI/ML模型。
图6c为本公开实施例所提供的一种模型训练部署方法的流程示意图,该方法由UE执行,如图6c所示,该方法可以包括以下步骤:
步骤601c、向网络设备上报能力信息。
步骤602c、获取网络设备发送的编码器模型的模型信息。
步骤603c、基于编码器模型的模型信息生成编码器模型。
其中,关于步骤601c-步骤603c的其他详细介绍可以参考上述实施例描述,本公开实施例在此不做赘述。
步骤604c、接收网络设备发送的更新后的编码器模型的模型信息。
其中,在本公开的一个实施例之中,该更新后的编码器模型的模型信息可以包括:
更新后的编码器模型的全部模型信息;或者
更新后的编码器模型的模型信息与原有的编码器模型的模型信息之间的差异模型信息。
步骤605c、基于更新后的编码器模型的模型信息生成更新后的编码器模型。
其中,在本公开的一个实施例之中,当上述步骤604c中接收到的更新后的编码器模型的模型信息为“更新后的编码器模型的全部模型信息”时,则UE可以直接基于该全部模型信息来生成更新后的编码器模型。
以及,在本公开的另一个实施例之中,当上述步骤604c中接收到的更新后的编码器模型的模型信息为“更新后的编码器模型的模型信息与原有的编码器模型的模型信息之间的差异模型信息”时,则UE可以先确定出自身的原有的编码器模型的模型信息,再基于该原有的编码器模型的模型信息和差异模型信息确定出更新后的编码器模型的模型信息,之后再基于该更新后的编码器模型的模型信息来生成更新后的编码器模型。
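上述两种情况下UE得到更新后的编码器模型的模型信息的过程可以用如下示意性代码加以说明(仅为便于理解的假设性示例,假设模型信息以字典表示,is_diff标识收到的是否为差异模型信息,函数名为假设):

```python
def resolve_updated_info(received_info, is_diff, original_info):
    if is_diff:
        # 收到差异模型信息:先与自身原有的编码器模型的模型信息合并,
        # 确定出完整的更新后的编码器模型的模型信息
        info = dict(original_info)
        info.update(received_info)
        return info
    # 收到全部模型信息:可直接据此生成更新后的编码器模型
    return dict(received_info)
```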
步骤606c、利用更新后的编码器模型替换原有的编码器模型以进行模型更新。
其中,在本公开的一个实施例之中,利用更新后的编码器模型替换原有的编码器模型后,UE即可利用该更新后的编码器模型来进行编码,其中,基于该更新后的编码器模型的编码精度较高,则可以确保信号后续的处理精度。
综上所述,在本公开实施例提供的方法之中,网络设备会先获取UE上报的能力信息,该能力信息用于指示UE的AI和/或ML的支持能力,之后网络设备会基于能力信息生成编码器模型和译码器模型,并将编码器模型的模型信息发送至UE,该编码器模型的模型信息用于部署编码器模型,则UE可以基于该编码器模型的模型信息生成编码器模型。由此,本公开实施例提供了一种生成和部署模型的方法,可以用于训练和部署AI/ML模型。
图6d为本公开实施例所提供的一种模型训练部署方法的流程示意图,该方法由UE执行,如图6d所示,该方法可以包括以下步骤:
步骤601d、向网络设备上报能力信息。
步骤602d、获取网络设备发送的编码器模型的模型信息。
步骤603d、基于编码器模型的模型信息生成编码器模型。
步骤604d、接收网络设备发送的更新后的编码器模型的模型信息。
其中,关于步骤601d-步骤604d的其他详细介绍可以参考上述实施例描述,本公开实施例在此不做赘述。
步骤605d、基于更新后的编码器模型的模型信息对原有的译码器模型进行优化以进行模型更新。
其中,在本公开的一个实施例之中,当上述步骤604d中接收到的更新后的编码器模型的模型信息为“更新后的编码器模型的全部模型信息”时,则UE可以先确定出该全部模型信息和原有的编码器模型的模型信息之间的模型差异信息,之后,再基于模型差异信息来对原有的译码器模型进行优化以进行模型更新。
以及,在本公开的另一个实施例之中,当上述步骤604d中接收到的更新后的编码器模型的模型信息为“更新后的编码器模型的模型信息与原有的编码器模型的模型信息之间的差异模型信息”时,则UE可以直接基于模型差异信息来对原有的译码器模型进行优化以进行模型更新。
具体的,在本公开的一个实施例之中,基于该差异模型信息优化调整原有的编码器模型之后,则可以使得优化调整之后的编码器模型的模型信息与更新后的编码器模型的模型信息一致,从而UE后续即可利用该更新后的编码器模型来进行编码,其中,基于该更新后的编码器模型的编码精度较高,则可以确保信号后续的处理精度。
综上所述,在本公开实施例提供的方法之中,网络设备会先获取UE上报的能力信息,该能力信息用于指示UE的AI和/或ML的支持能力,之后网络设备会基于能力信息生成编码器模型和译码器模型,并将编码器模型的模型信息发送至UE,该编码器模型的模型信息用于部署编码器模型,则UE可以基于该编码器模型的模型信息生成编码器模型。由此,本公开实施例提供了一种生成和部署模型的方法,可以用于训练和部署AI/ML模型。
图7为本公开实施例所提供的一种模型训练部署装置的结构示意图,如图7所示,装置可以包括:
获取模块,用于获取UE上报的能力信息,所述能力信息用于指示所述UE的AI和/或ML的支持能力;
生成模块,用于基于所述能力信息生成编码器模型和译码器模型;
发送模块,用于将所述编码器模型的模型信息发送至所述UE,所述编码器模型的模型信息用于部署编码器模型。
综上所述,在本公开实施例提供的装置之中,网络设备会先获取UE上报的能力信息,该能力信息用于指示UE的AI和/或ML的支持能力,之后网络设备会基于能力信息生成编码器模型和译码器模型,并将编码器模型的模型信息发送至UE,该编码器模型的模型信息用于部署编码器模型,则UE可以基于该编码器模型的模型信息生成编码器模型。由此,本公开实施例提供了一种生成和部署模型的方法,可以用于训练和部署AI/ML模型。
可选地,在本公开的一个实施例之中,所述模型包括以下至少一种:
AI模型;
ML模型。
可选地,在本公开的一个实施例之中,所述能力信息包括以下至少一种:
所述UE是否支持AI;
所述UE是否支持ML;
所述UE支持的AI和/或ML的模型的种类;
所述UE对于模型的最大支持能力信息,所述最大支持能力信息包括所述UE支持的最复杂的模型的结构信息。
可选地,在本公开的一个实施例之中,所述生成模块,还用于:
基于所述能力信息选择待训练编码器模型和/或待训练译码器模型;其中,所述待训练编码器模型为所述UE所支持的模型,所述待训练译码器模型为所述网络设备所支持的模型;
基于所述UE上报的信息和所述网络设备的存储信息中的至少一种确定样本数据;
基于所述样本数据对所述待训练编码器模型和/或待训练译码器模型进行训练,以生成编码器模型和译码器模型。
可选地,在本公开的一个实施例之中,所述编码器模型的模型信息包括以下至少一种:
所述编码器模型的种类;
所述编码器模型的模型参数。
可选地,在本公开的一个实施例之中,所述装置,还用于:
向所述UE发送指示信息,所述指示信息用于指示所述UE向所述网络设备上报时的信息类型;
所述信息类型包括以下至少一种:
未经编码器模型编码的原始的上报信息;
原始的上报信息经过编码器模型编码之后的信息。
可选地,在本公开的一个实施例之中,所述上报信息为所述UE要向所述网络设备上报的信息;所述上报信息包括CSI信息;
所述的CSI信息包括以下至少一种:
信道信息;
信道的特征矩阵信息;
信道的特征向量信息;
PMI;
CQI;
RI;
RSRP;
RSRQ;
SINR;
参考信号资源指示。
可选地,在本公开的一个实施例之中,所述指示信息指示的信息类型包括原始的上报信息经过编码器模型编码之后的信息;
所述装置,还用于:
当接收到所述UE上报的信息,利用所述译码器模型对所述UE上报的信息进行译码。
可选地,在本公开的一个实施例之中,所述装置,还用于:
对编码器模型和译码器模型进行模型更新,生成更新后的编码器模型和更新后的译码器模型。
可选地,在本公开的一个实施例之中,所述装置,还用于:
基于原有的编码器模型和原有的译码器模型确定新的编码器模型和新的译码器模型;
重新训练所述新的编码器模型和新的译码器模型得到更新后的编码器模型和更新后的译码器模型。
可选地,在本公开的一个实施例之中,所述装置,还用于:
监控原有的编码器模型和原有的译码器模型的失真度;
当所述失真度超出第一阈值,基于原有的编码器模型和原有的译码器模型确定新的编码器模型和新的译码器模型;
重新训练所述新的编码器模型和新的译码器模型得到更新后的编码器模型和更新后的译码器模型,其中,所述更新后的编码器模型和更新后的译码器模型的失真度低于第二阈值,所述第二阈值小于等于第一阈值。
可选地,在本公开的一个实施例之中,所述装置,还用于:
调整原有的编码器模型和原有的译码器模型的模型参数得到所述新的编码器模型和新的译码器模型;或者
基于所述能力信息重新选择与原有的编码器模型和原有的译码器模型的种类不同的新的编码器模型和新的译码器模型。
可选地,在本公开的一个实施例之中,所述装置,还用于:
直接利用所述更新后的译码器模型替换原有的译码器模型。
可选地,在本公开的一个实施例之中,所述装置,还用于:
确定所述更新后的译码器模型的模型信息与原有的译码器模型的模型信息之间的差异模型信息;
基于所述差异模型信息对原有的译码器模型进行优化。
可选地,在本公开的一个实施例之中,所述装置,还用于:
将所述更新后的编码器模型的模型信息发送至所述UE。
可选地,在本公开的一个实施例之中,所述更新后的编码器模型的模型信息,包括:
所述更新后的编码器模型的全部模型信息;或者
所述更新后的编码器模型的模型信息与原有的编码器模型的模型信息之间的差异模型信息。
图8为本公开实施例所提供的一种模型训练部署装置的结构示意图,如图8所示,装置可以包括:
上报模块,用于向网络设备上报能力信息,所述能力信息用于指示所述UE的AI和/或ML的支持能力;
获取模块,用于获取所述网络设备发送的编码器模型的模型信息,所述编码器模型的模型信息用于部署编码器模型;
生成模块,用于基于所述编码器模型的模型信息生成编码器模型。
综上所述,在本公开实施例提供的装置之中,网络设备会先获取UE上报的能力信息,该能力信息用于指示UE的AI和/或ML的支持能力,之后网络设备会基于能力信息生成编码器模型和译码器模型,并将编码器模型的模型信息发送至UE,该编码器模型的模型信息用于部署编码器模型,则UE可以基于该编码器模型的模型信息生成编码器模型。由此,本公开实施例提供了一种生成和部署模型的方法,可以用于训练和部署AI/ML模型。
可选地,在本公开的一个实施例之中,所述模型包括以下至少一种:
AI模型;
ML模型。
可选地,在本公开的一个实施例之中,所述能力信息包括以下至少一种:
所述UE是否支持AI;
所述UE是否支持ML;
所述UE支持的AI和/或ML的模型的种类;
所述UE对于模型的最大支持能力信息,所述最大支持能力信息包括所述UE支持的最复杂的模型的结构信息。
可选地,在本公开的一个实施例之中,所述编码器模型的模型信息包括以下至少一种:
所述编码器模型的种类;
所述编码器模型的模型参数。
可选地,在本公开的一个实施例之中,所述装置,还用于:
获取所述网络设备发送的指示信息,所述指示信息用于指示所述UE向所述网络设备上报时的信息类型;所述信息类型包括未经编码器模型编码的原始的上报信息和原始的上报信息经过编码器模型编码之后的信息中的至少一种;
基于所述指示信息向所述网络设备进行上报。
可选地,在本公开的一个实施例之中,所述上报信息为所述UE要向所述网络设备上报的信息;所述上报信息包括CSI信息;
所述的CSI信息包括以下至少一种:
信道信息;
信道的特征矩阵信息;
信道的特征向量信息;
PMI;
CQI;
RI;
RSRP;
RSRQ;
SINR;
参考信号资源指示。
可选地,在本公开的一个实施例之中,所述指示信息指示的信息类型包括原始的上报信息经过编码器模型编码之后的信息;
所述装置,还用于:
利用所述编码器模型对所述上报信息进行编码;
将编码之后的信息上报至所述网络设备。
可选地,在本公开的一个实施例之中,所述装置,还用于:
接收所述网络设备发送的更新后的编码器模型的模型信息;
基于所述更新后的编码器模型的模型信息进行模型更新。
可选地,在本公开的一个实施例之中,所述更新后的编码器模型的模型信息包括:
所述更新后的编码器模型的全部模型信息;或者
所述更新后的编码器模型的模型信息与原有的编码器模型的模型信息之间的差异模型信息。
可选地,在本公开的一个实施例之中,所述基于所述更新后的编码器模型的模型信息进行模型更新,包括:
基于所述更新后的编码器模型的模型信息生成更新后的编码器模型;
利用所述更新后的编码器模型替换原有的编码器模型以进行模型更新。
可选地,在本公开的一个实施例之中,所述装置,还用于:
基于所述更新后的编码器模型的模型信息对原有的译码器模型进行优化以进行模型更新。
图9是本公开一个实施例所提供的一种用户设备UE900的框图。例如,UE900可以是移动电话,计算机,数字广播终端设备,消息收发设备,游戏控制台,平板设备,医疗设备,健身设备,个人数字助理等。
参照图9,UE900可以包括以下至少一个组件:处理组件902,存储器904,电源组件906,多媒体组件908,音频组件910,输入/输出(I/O)的接口912,传感器组件913,以及通信组件916。
处理组件902通常控制UE900的整体操作,诸如与显示,电话呼叫,数据通信,相机操作和记录操作相关联的操作。处理组件902可以包括至少一个处理器920来执行指令,以完成上述的方法的全部或部分步骤。此外,处理组件902可以包括至少一个模块,便于处理组件902和其他组件之间的交互。例如,处理组件902可以包括多媒体模块,以方便多媒体组件908和处理组件902之间的交互。
存储器904被配置为存储各种类型的数据以支持在UE900的操作。这些数据的示例包括用于在UE900上操作的任何应用程序或方法的指令,联系人数据,电话簿数据,消息,图片,视频等。存储器904可以由任何类型的易失性或非易失性存储设备或者它们的组合实现,如静态随机存取存储器(SRAM),电可擦除可编程只读存储器(EEPROM),可擦除可编程只读存储器(EPROM),可编程只读存储器(PROM),只读存储器(ROM),磁存储器,快闪存储器,磁盘或光盘。
电源组件906为UE900的各种组件提供电力。电源组件906可以包括电源管理系统,至少一个电源,及其他与为UE900生成、管理和分配电力相关联的组件。
多媒体组件908包括在所述UE900和用户之间提供一个输出接口的屏幕。在一些实施例中,屏幕可以包括液晶显示器(LCD)和触摸面板(TP)。如果屏幕包括触摸面板,屏幕可以被实现为触摸屏,以接收来自用户的输入信号。触摸面板包括至少一个触摸传感器以感测触摸、滑动和触摸面板上的手势。所述触摸传感器可以不仅感测触摸或滑动动作的边界,而且还检测与所述触摸或滑动操作相关的持续时间和压力。在一些实施例中,多媒体组件908包括一个前置摄像头和/或后置摄像头。当UE900处于操作模式,如拍摄模式或视频模式时,前置摄像头和/或后置摄像头可以接收外部的多媒体数据。每个前置摄像头和后置摄像头可以是一个固定的光学透镜系统或具有焦距和光学变焦能力。
音频组件910被配置为输出和/或输入音频信号。例如,音频组件910包括一个麦克风(MIC),当UE900处于操作模式,如呼叫模式、记录模式和语音识别模式时,麦克风被配置为接收外部音频信号。所接收的音频信号可以被进一步存储在存储器904或经由通信组件916发送。在一些实施例中,音频组件910还包括一个扬声器,用于输出音频信号。
I/O接口912为处理组件902和外围接口模块之间提供接口,上述外围接口模块可以是键盘,点击轮,按钮等。这些按钮可包括但不限于:主页按钮、音量按钮、启动按钮和锁定按钮。
传感器组件913包括至少一个传感器,用于为UE900提供各个方面的状态评估。例如,传感器组件913可以检测到UE900的打开/关闭状态,组件的相对定位,例如所述组件为UE900的显示器和小键盘,传感器组件913还可以检测UE900或UE900一个组件的位置改变,用户与UE900接触的存在或不存在,UE900方位或加速/减速和UE900的温度变化。传感器组件913可以包括接近传感器,被配置用来在没有任何的物理接触时检测附近物体的存在。传感器组件913还可以包括光传感器,如CMOS或CCD图像传感器,用于在成像应用中使用。在一些实施例中,该传感器组件913还可以包括加速度传感器,陀螺仪传感器,磁传感器,压力传感器或温度传感器。
通信组件916被配置为便于UE900和其他设备之间有线或无线方式的通信。UE900可以接入基于通信标准的无线网络,如WiFi,2G或3G,或它们的组合。在一个示例性实施例中,通信组件916经由广播信道接收来自外部广播管理系统的广播信号或广播相关信息。在一个示例性实施例中,所述通信组件916还包括近场通信(NFC)模块,以促进短程通信。例如,在NFC模块可基于射频识别(RFID)技术,红外数据协会(IrDA)技术,超宽带(UWB)技术,蓝牙(BT)技术和其他技术来实现。
在示例性实施例中,UE900可以被至少一个应用专用集成电路(ASIC)、数字信号处理器(DSP)、数字信号处理设备(DSPD)、可编程逻辑器件(PLD)、现场可编程门阵列(FPGA)、控制器、微控制器、微处理器或其他电子元件实现,用于执行上述方法。
图10是本公开实施例所提供的一种网络侧设备1000的框图。例如,网络侧设备1000可以被提供为一网络侧设备。参照图10,网络侧设备1000包括处理组件1011,其进一步包括至少一个处理器,以及由存储器1032所代表的存储器资源,用于存储可由处理组件1011执行的指令,例如应用程序。存储器1032中存储的应用程序可以包括一个或一个以上的每一个对应于一组指令的模块。此外,处理组件1011被配置为执行指令,以执行前述应用在所述网络侧设备的任意方法,例如图1所示的方法。
网络侧设备1000还可以包括一个电源组件1026被配置为执行网络侧设备1000的电源管理,一个有线或无线网络接口1050被配置为将网络侧设备1000连接到网络,和一个输入输出(I/O)接口1058。网络侧设备1000可以操作基于存储在存储器1032的操作系统,例如Windows ServerTM、Mac OS XTM、UnixTM、LinuxTM、FreeBSDTM或类似。
上述本公开提供的实施例中,分别从网络侧设备、UE的角度对本公开实施例提供的方法进行了介绍。为了实现上述本公开实施例提供的方法中的各功能,网络侧设备和UE可以包括硬件结构、软件模块,以硬件结构、软件模块、或硬件结构加软件模块的形式来实现上述各功能。上述各功能中的某个功能可以以硬件结构、软件模块、或者硬件结构加软件模块的方式来执行。
本公开实施例提供的一种通信装置。通信装置可包括收发模块和处理模块。收发模块可包括发送模块和/或接收模块,发送模块用于实现发送功能,接收模块用于实现接收功能,收发模块可以实现发送功能和/或接收功能。
通信装置可以是终端设备(如前述方法实施例中的终端设备),也可以是终端设备中的装置,还可以是能够与终端设备匹配使用的装置。或者,通信装置可以是网络设备,也可以是网络设备中的装置,还可以是能够与网络设备匹配使用的装置。
本公开实施例提供的另一种通信装置。通信装置可以是网络设备,也可以是终端设备(如前述方法实施例中的终端设备),也可以是支持网络设备实现上述方法的芯片、芯片系统、或处理器等,还可以是支持终端设备实现上述方法的芯片、芯片系统、或处理器等。该装置可用于实现上述方法实施例中描述的方法,具体可以参见上述方法实施例中的说明。
通信装置可以包括一个或多个处理器。处理器可以是通用处理器或者专用处理器等。例如可以是基带处理器或中央处理器。基带处理器可以用于对通信协议以及通信数据进行处理,中央处理器可以用于对通信装置(如,网络侧设备、基带芯片,终端设备、终端设备芯片,DU或CU等)进行控制,执行计算机程序,处理计算机程序的数据。
可选的,通信装置中还可以包括一个或多个存储器,其上可以存有计算机程序,处理器执行所述计算机程序,以使得通信装置执行上述方法实施例中描述的方法。可选的,所述存储器中还可以存储有数据。通信装置和存储器可以单独设置,也可以集成在一起。
可选的,通信装置还可以包括收发器、天线。收发器可以称为收发单元、收发机、或收发电路等,用于实现收发功能。收发器可以包括接收器和发送器,接收器可以称为接收机或接收电路等,用于实现接收功能;发送器可以称为发送机或发送电路等,用于实现发送功能。
可选的,通信装置中还可以包括一个或多个接口电路。接口电路用于接收代码指令并传输至处理器。处理器运行所述代码指令以使通信装置执行上述方法实施例中描述的方法。
通信装置为终端设备(如前述方法实施例中的终端设备):处理器用于执行图4-图6任一所示的方法。
通信装置为网络设备:处理器用于执行图1-图3任一所示的方法。
在一种实现方式中,处理器中可以包括用于实现接收和发送功能的收发器。例如该收发器可以是收发电路,或者是接口,或者是接口电路。用于实现接收和发送功能的收发电路、接口或接口电路可以是分开的,也可以集成在一起。上述收发电路、接口或接口电路可以用于代码/数据的读写,或者,上述收发电路、接口或接口电路可以用于信号的传输或传递。
在一种实现方式中,处理器可以存有计算机程序,计算机程序在处理器上运行,可使得通信装置执行上述方法实施例中描述的方法。计算机程序可能固化在处理器中,该种情况下,处理器可能由硬件实现。
在一种实现方式中,通信装置可以包括电路,所述电路可以实现前述方法实施例中发送或接收或者通信的功能。本公开中描述的处理器和收发器可实现在集成电路(integrated circuit,IC)、模拟IC、射频集成电路RFIC、混合信号IC、专用集成电路(application specific integrated circuit,ASIC)、印刷电路板(printed circuit board,PCB)、电子设备等上。该处理器和收发器也可以用各种IC工艺技术来制造,例如互补金属氧化物半导体(complementary metal oxide semiconductor,CMOS)、N型金属氧化物半导体(nMetal-oxide-semiconductor,NMOS)、P型金属氧化物半导体(positive channel metal oxide semiconductor,PMOS)、双极结型晶体管(bipolar junction transistor,BJT)、双极CMOS(BiCMOS)、硅锗(SiGe)、砷化镓(GaAs)等。
以上实施例描述中的通信装置可以是网络设备或者终端设备(如前述方法实施例中的终端设备),但本公开中描述的通信装置的范围并不限于此,而且通信装置的结构也不受此限制。通信装置可以是独立的设备或者可以是较大设备的一部分。例如所述通信装置可以是:
(1)独立的集成电路IC,或芯片,或,芯片系统或子系统;
(2)具有一个或多个IC的集合,可选的,该IC集合也可以包括用于存储数据,计算机程序的存储部件;
(3)ASIC,例如调制解调器(Modem);
(4)可嵌入在其他设备内的模块;
(5)接收机、终端设备、智能终端设备、蜂窝电话、无线设备、手持机、移动单元、车载设备、网络设备、云设备、人工智能设备等等;
(6)其他等等。
对于通信装置可以是芯片或芯片系统的情况,芯片包括处理器和接口。其中,处理器的数量可以是一个或多个,接口的数量可以是多个。
可选的,芯片还包括存储器,存储器用于存储必要的计算机程序和数据。
本领域技术人员还可以了解到本公开实施例列出的各种说明性逻辑块(illustrative logical block)和步骤(step)可以通过电子硬件、电脑软件,或两者的结合进行实现。这样的功能是通过硬件还是软件来实现取决于特定的应用和整个系统的设计要求。本领域技术人员可以针对每种特定的应用使用各种方法实现所述的功能,但这种实现不应被理解为超出本公开实施例保护的范围。
本公开实施例还提供一种模型训练部署系统,该系统包括前述实施例中作为终端设备(如前述方法实施例中的终端设备)的通信装置和作为网络设备的通信装置。
本公开还提供一种可读存储介质,其上存储有指令,该指令被计算机执行时实现上述任一方法实施例的功能。
本公开还提供一种计算机程序产品,该计算机程序产品被计算机执行时实现上述任一方法实施例的功能。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机程序。在计算机上加载和执行所述计算机程序时,全部或部分地产生按照本公开实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机程序可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机程序可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(digital subscriber line,DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质(例如,软盘、硬盘、磁带)、光介质(例如,高密度数字视频光盘(digital video disc,DVD))、或者半导体介质(例如,固态硬盘(solid state disk,SSD))等。
本领域普通技术人员可以理解:本公开中涉及的第一、第二等各种数字编号仅为描述方便进行的区分,并不用来限制本公开实施例的范围,也不表示先后顺序。
本公开中的至少一个还可以描述为一个或多个,多个可以是两个、三个、四个或者更多个,本公开不做限制。在本公开实施例中,对于一种技术特征,通过“第一”、“第二”、“第三”、“A”、“B”、“C”和“D”等区分该种技术特征中的技术特征,该“第一”、“第二”、“第三”、“A”、“B”、“C”和“D”描述的技术特征间无先后顺序或者大小顺序。
本领域技术人员在考虑说明书及实践这里公开的发明后,将容易想到本发明的其它实施方案。本公开旨在涵盖本发明的任何变型、用途或者适应性变化,这些变型、用途或者适应性变化遵循本发明的一般性原理并包括本公开未公开的本技术领域中的公知常识或惯用技术手段。说明书和实施例仅被视为示例性的,本公开的真正范围和精神由下面的权利要求指出。
应当理解的是,本公开并不局限于上面已经描述并在附图中示出的精确结构,并且可以在不脱离其范围进行各种修改和改变。本公开的范围仅由所附的权利要求来限制。

Claims (35)

  1. 一种模型训练部署方法,其特征在于,被网络设备执行,包括:
    获取用户设备UE上报的能力信息,所述能力信息用于指示所述UE的人工智能AI和/或机器学习ML的支持能力;
    基于所述能力信息生成编码器模型和译码器模型;
    将所述编码器模型的模型信息发送至所述UE,所述编码器模型的模型信息用于部署编码器模型。
  2. 如权利要求1所述的方法,其特征在于,所述模型包括以下至少一种:
    AI模型;
    ML模型。
  3. 如权利要求1或2所述的方法,其特征在于,所述能力信息包括以下至少一种:
    所述UE是否支持AI;
    所述UE是否支持ML;
    所述UE支持的AI和/或ML的模型的种类;
    所述UE对于模型的最大支持能力信息,所述最大支持能力信息包括所述UE支持的最复杂的模型的结构信息。
  4. 如权利要求1或2所述的方法,其特征在于,所述基于所述能力信息生成编码器模型和译码器模型,包括:
    基于所述能力信息选择待训练编码器模型和/或待训练译码器模型;其中,所述待训练编码器模型为所述UE所支持的模型,所述待训练译码器模型为所述网络设备所支持的模型;
    基于所述UE上报的信息和所述网络设备的存储信息中的至少一种确定样本数据;
    基于所述样本数据对所述待训练编码器模型和/或待训练译码器模型进行训练,以生成编码器模型和译码器模型。
  5. 如权利要求1所述的方法,其特征在于,所述编码器模型的模型信息包括以下至少一种:
    所述编码器模型的种类;
    所述编码器模型的模型参数。
  6. 如权利要求1所述的方法,其特征在于,所述方法还包括:
    向所述UE发送指示信息,所述指示信息用于指示所述UE向所述网络设备上报时的信息类型;
    所述信息类型包括以下至少一种:
    未经编码器模型编码的原始的上报信息;
    原始的上报信息经过编码器模型编码之后的信息。
  7. 如权利要求6所述的方法,其特征在于,所述上报信息为所述UE要向所述网络设备上报的信息;所述上报信息包括信道状态信息CSI信息;
    所述的CSI信息包括以下至少一种:
    信道信息;
    信道的特征矩阵信息;
    信道的特征向量信息;
    预编码矩阵指示信息PMI;
    信道质量指示信息CQI;
    信道秩指示信息RI;
    参考信号接收功率RSRP;
    参考信号接收质量RSRQ;
    信干噪比SINR;
    参考信号资源指示。
  8. 如权利要求6所述的方法,其特征在于,所述指示信息指示的信息类型包括原始的上报信息经过编码器模型编码之后的信息;
    所述方法还包括:
    当接收到所述UE上报的信息,利用所述译码器模型对所述UE上报的信息进行译码。
  9. 如权利要求1所述的方法,其特征在于,所述方法还包括:
    对编码器模型和译码器模型进行模型更新,生成更新后的编码器模型和更新后的译码器模型。
  10. 如权利要求9所述的方法,其特征在于,所述对编码器模型和译码器模型进行模型更新,包括:
    基于原有的编码器模型和原有的译码器模型确定新的编码器模型和新的译码器模型;
    重新训练所述新的编码器模型和新的译码器模型得到更新后的编码器模型和更新后的译码器模型。
  11. 如权利要求9所述的方法,其特征在于,所述对编码器模型和译码器模型进行模型更新,包括:
    监控原有的编码器模型和原有的译码器模型的失真度;
    当所述失真度超出第一阈值,基于原有的编码器模型和原有的译码器模型确定新的编码器模型和新的译码器模型;并重新训练所述新的编码器模型和新的译码器模型得到更新后的编码器模型和更新后的译码器模型,其中,所述更新后的编码器模型和更新后的译码器模型的失真度低于第二阈值,所述第二阈值小于等于第一阈值。
  12. 如权利要求10或11所述的方法,其特征在于,所述基于原有的编码器模型和原有的译码器模型确定新的编码器模型和新的译码器模型,包括:
    调整原有的编码器模型和原有的译码器模型的模型参数得到所述新的编码器模型和新的译码器模型;或者
    基于所述能力信息重新选择与原有的编码器模型和原有的译码器模型的种类不同的新的编码器模型和新的译码器模型。
  13. 如权利要求9-12任一所述的方法,其特征在于,所述方法还包括:
    直接利用所述更新后的译码器模型替换原有的译码器模型。
  14. 如权利要求9-12任一所述的方法,其特征在于,所述方法还包括:
    确定所述更新后的译码器模型的模型信息与原有的译码器模型的模型信息之间的差异模型信息;
    基于所述差异模型信息对原有的译码器模型进行优化。
  15. 如权利要求9-12任一所述的方法,其特征在于,所述方法还包括:
    将所述更新后的编码器模型的模型信息发送至所述UE。
  16. 如权利要求15所述的方法,其特征在于,所述更新后的编码器模型的模型信息包括:
    所述更新后的编码器模型的全部模型信息;或者
    所述更新后的编码器模型的模型信息与原有的编码器模型的模型信息之间的差异模型信息。
  17. 一种模型训练部署方法,其特征在于,被UE执行,包括:
    向网络设备上报能力信息,所述能力信息用于指示所述UE的AI和/或ML的支持能力;
    获取所述网络设备发送的编码器模型的模型信息,所述编码器模型的模型信息用于部署编码器模型;
    基于所述编码器模型的模型信息生成编码器模型。
  18. 如权利要求17所述的方法,其特征在于,所述模型包括以下至少一种:
    AI模型;
    ML模型。
  19. 如权利要求17或18所述的方法,其特征在于,所述能力信息包括以下至少一种:
    所述UE是否支持AI;
    所述UE是否支持ML;
    所述UE支持的AI和/或ML的模型的种类;
    所述UE对于模型的最大支持能力信息,所述最大支持能力信息包括所述UE支持的最复杂的模型的结构信息。
  20. 如权利要求17所述的方法,所述编码器模型的模型信息包括以下至少一种:
    所述编码器模型的种类;
    所述编码器模型的模型参数。
  21. 如权利要求17所述的方法,其特征在于,所述方法还包括:
    获取所述网络设备发送的指示信息,所述指示信息用于指示所述UE向所述网络设备上报时的信息类型;所述信息类型包括未经编码器模型编码的原始的上报信息和原始的上报信息经过编码器模型编码之后的信息中的至少一种;
    基于所述指示信息向所述网络设备进行上报。
  22. 如权利要求21所述的方法,其特征在于,所述上报信息为所述UE要向所述网络设备上报的信息;所述上报信息包括CSI信息;
    所述的CSI信息包括以下至少一种:
    信道信息;
    信道的特征矩阵信息;
    信道的特征向量信息;
    PMI;
    CQI;
    RI;
    RSRP;
    RSRQ;
    SINR;
    参考信号资源指示。
  23. 如权利要求21所述的方法,其特征在于,所述指示信息指示的信息类型包括原始的上报信息经过编码器模型编码之后的信息;
    所述基于所述指示信息向所述网络设备进行上报,包括:
    利用所述编码器模型对所述上报信息进行编码;
    将编码之后的信息上报至所述网络设备。
  24. 如权利要求17所述的方法,其特征在于,所述方法还包括:
    接收所述网络设备发送的更新后的编码器模型的模型信息;
    基于所述更新后的编码器模型的模型信息进行模型更新。
  25. 如权利要求24所述的方法,其特征在于,所述更新后的编码器模型的模型信息包括:
    所述更新后的编码器模型的全部模型信息;或者
    所述更新后的编码器模型的模型信息与原有的编码器模型的模型信息之间的差异模型信息。
  26. 如权利要求24或25所述的方法,其特征在于,所述基于所述更新后的编码器模型的模型信息进行模型更新,包括:
    基于所述更新后的编码器模型的模型信息生成更新后的编码器模型;
    利用所述更新后的编码器模型替换原有的编码器模型以进行模型更新。
  27. 如权利要求24或25所述的方法,其特征在于,所述基于所述更新后的编码器模型的模型信息进行模型更新,包括:
    基于所述更新后的编码器模型的模型信息对原有的译码器模型进行优化以进行模型更新。
  28. 一种模型训练部署装置,其特征在于,包括:
    获取模块,用于获取UE上报的能力信息,所述能力信息用于指示所述UE的AI和/或ML的支持能力;
    生成模块,用于基于所述能力信息生成编码器模型和译码器模型;
    发送模块,用于将所述编码器模型的模型信息发送至所述UE,所述编码器模型的模型信息用于部署编码器模型。
  29. 一种模型训练部署装置,其特征在于,包括:
    上报模块,用于向网络设备上报能力信息,所述能力信息用于指示所述UE的AI和/或ML的支持能力;
    获取模块,用于获取所述网络设备发送的编码器模型的模型信息,所述编码器模型的模型信息用于部署编码器模型;
    生成模块,用于基于所述编码器模型的模型信息生成编码器模型。
  30. 一种通信装置,其特征在于,所述装置包括处理器和存储器,其中,所述存储器中存储有计算机程序,所述处理器执行所述存储器中存储的计算机程序,以使所述装置执行如权利要求1至16中任一项所述的方法。
  31. 一种通信装置,其特征在于,所述装置包括处理器和存储器,其中,所述存储器中存储有计算机程序,所述处理器执行所述存储器中存储的计算机程序,以使所述装置执行如权利要求17至27中任一项所述的方法。
  32. 一种通信装置,其特征在于,包括:处理器和接口电路,其中
    所述接口电路,用于接收代码指令并传输至所述处理器;
    所述处理器,用于运行所述代码指令以执行如权利要求1至16中任一项所述的方法。
  33. 一种通信装置,其特征在于,包括:处理器和接口电路,其中
    所述接口电路,用于接收代码指令并传输至所述处理器;
    所述处理器,用于运行所述代码指令以执行如权利要求17至27中任一项所述的方法。
  34. 一种计算机可读存储介质,用于存储有指令,当所述指令被执行时,使如权利要求1至16中任一项所述的方法被实现。
  35. 一种计算机可读存储介质,用于存储有指令,当所述指令被执行时,使如权利要求17至27中任一项所述的方法被实现。
PCT/CN2022/080478 2022-03-11 2022-03-11 一种模型训练部署方法/装置/设备及存储介质 WO2023168718A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/080478 WO2023168718A1 (zh) 2022-03-11 2022-03-11 一种模型训练部署方法/装置/设备及存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/080478 WO2023168718A1 (zh) 2022-03-11 2022-03-11 一种模型训练部署方法/装置/设备及存储介质

Publications (1)

Publication Number Publication Date
WO2023168718A1 true WO2023168718A1 (zh) 2023-09-14

Family

ID=87936981

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/080478 WO2023168718A1 (zh) 2022-03-11 2022-03-11 一种模型训练部署方法/装置/设备及存储介质

Country Status (1)

Country Link
WO (1) WO2023168718A1 (zh)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111819872A (zh) * 2020-06-03 2020-10-23 北京小米移动软件有限公司 信息传输方法、装置、通信设备及存储介质
CN114070676A (zh) * 2020-08-05 2022-02-18 展讯半导体(南京)有限公司 Ai网络模型支持能力上报、接收方法及装置、存储介质、用户设备、基站


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CATT: "On AI/ML study for physical layer in Rel-18", 3GPP DRAFT; RP-212255, 3RD GENERATION PARTNERSHIP PROJECT (3GPP), MOBILE COMPETENCE CENTRE ; 650, ROUTE DES LUCIOLES ; F-06921 SOPHIA-ANTIPOLIS CEDEX ; FRANCE, vol. TSG RAN, no. Electronic Meeting; 20210913 - 20210917, 6 September 2021 (2021-09-06), Mobile Competence Centre ; 650, route des Lucioles ; F-06921 Sophia-Antipolis Cedex ; France, XP052049530 *
QUALCOMM: "Email discussion summary for [RAN-R18-WS-crossFunc-Qualcomm]", 3GPP DRAFT; RWS-210637, 3RD GENERATION PARTNERSHIP PROJECT (3GPP), MOBILE COMPETENCE CENTRE ; 650, ROUTE DES LUCIOLES ; F-06921 SOPHIA-ANTIPOLIS CEDEX ; FRANCE, vol. TSG RAN, no. Electronic Meeting; 20210628 - 20210702, 25 June 2021 (2021-06-25), Mobile Competence Centre ; 650, route des Lucioles ; F-06921 Sophia-Antipolis Cedex ; France , XP052029085 *


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22930356

Country of ref document: EP

Kind code of ref document: A1