WO2023168717A1 - Model training and deployment method, apparatus, device, and storage medium - Google Patents

Model training and deployment method, apparatus, device, and storage medium

Info

Publication number
WO2023168717A1
WO2023168717A1 (PCT/CN2022/080477)
Authority
WO
WIPO (PCT)
Prior art keywords
model
information
decoder
encoder
trained
Prior art date
Application number
PCT/CN2022/080477
Other languages
English (en)
French (fr)
Inventor
池连刚
Original Assignee
北京小米移动软件有限公司 (Beijing Xiaomi Mobile Software Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 北京小米移动软件有限公司
Priority to PCT/CN2022/080477
Publication of WO2023168717A1


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 92/00: Interfaces specially adapted for wireless communication networks
    • H04W 92/04: Interfaces between hierarchically different network devices
    • H04W 92/10: Interfaces between hierarchically different network devices, between terminal device and access point, i.e. wireless air interface

Definitions

  • the present disclosure relates to the field of communication technology, and in particular, to a model training deployment method/device/equipment and a storage medium.
  • AI: Artificial Intelligence
  • ML: Machine Learning
  • the model training deployment method/device/equipment and storage medium proposed in this disclosure are used to train and deploy AI/ML models.
  • Model information of the decoder model is sent to the network device, and the model information of the decoder model is used to deploy the decoder model.
  • the model training and deployment method proposed by another embodiment of the present disclosure is applied to a network device, and includes:
  • acquiring the capability information reported by the UE, where the capability information is used to indicate the AI and/or ML support capabilities of the UE;
  • the decoder model is deployed based on model information of the decoder model.
  • a model training and deployment device proposed by another aspect of the present disclosure includes:
  • a reporting module configured to report capability information to the network device, where the capability information is used to indicate the AI and/or ML support capabilities of the UE;
  • An acquisition module configured to acquire the model information of the encoder model to be trained and/or the model information of the decoder model to be trained sent by the network device;
  • a generation module configured to generate an encoder model and a decoder model based on the model information of the encoder model to be trained and/or the model information of the decoder model to be trained;
  • a sending module configured to send model information of the decoder model to the network device, where the model information of the decoder model is used to deploy the decoder model.
  • a model training and deployment device proposed by another aspect of the present disclosure includes:
  • a first acquisition module configured to acquire the capability information reported by the UE, where the capability information is used to indicate the AI and/or ML support capabilities of the UE;
  • a sending module configured to send model information of the encoder model to be trained and/or model information of the decoder model to be trained to the UE based on the capability information;
  • a second acquisition module configured to acquire the model information of the decoder model sent by the UE, where the model information of the decoder model is used to deploy the decoder model;
  • a generating module configured to generate the decoder model based on model information of the decoder model.
  • the device includes a processor and a memory.
  • a computer program is stored in the memory.
  • the processor executes the computer program stored in the memory, so that the device performs the method proposed in the embodiment of the above aspect.
  • the device includes a processor and a memory.
  • a computer program is stored in the memory.
  • the processor executes the computer program stored in the memory, so that the device performs the method proposed in the above embodiment.
  • a communication device provided by another embodiment of the present disclosure includes: a processor and an interface circuit
  • the interface circuit is used to receive code instructions and transmit them to the processor
  • the processor is configured to run the code instructions to perform the method proposed in the embodiment of one aspect.
  • a communication device provided by another embodiment of the present disclosure includes: a processor and an interface circuit
  • the interface circuit is used to receive code instructions and transmit them to the processor
  • the processor is configured to run the code instructions to perform the method proposed in another embodiment.
  • a computer-readable storage medium provided by an embodiment of another aspect of the present disclosure is used to store instructions. When the instructions are executed, the method proposed by the embodiment of the present disclosure is implemented.
  • a computer-readable storage medium provided by an embodiment of another aspect of the present disclosure is used to store instructions. When the instructions are executed, the method proposed by the embodiment of another aspect is implemented.
  • the UE will first report capability information to the network device.
  • This capability information can be used to indicate the AI and/or ML support capabilities of the UE.
  • the UE will then obtain the model information of the encoder model to be trained and/or the model information of the decoder model to be trained sent by the network device, generate the encoder model and the decoder model based on that model information, and finally send the model information of the decoder model to the network device.
  • the model information of the decoder model is used to deploy the decoder model, so that the network device can deploy the decoder model based on it. Therefore, embodiments of the present disclosure provide a method for generating and deploying models, which can be used to train and deploy AI/ML models.
  • Figure 1 is a schematic flowchart of a model training and deployment method provided by an embodiment of the present disclosure
  • Figure 2a is a schematic flowchart of a model training and deployment method provided by another embodiment of the present disclosure
  • Figure 2b is a schematic flowchart of a model training and deployment method provided by yet another embodiment of the present disclosure
  • Figure 2c is a schematic flowchart of a model training and deployment method provided by yet another embodiment of the present disclosure.
  • Figure 2d is a schematic flowchart of a model training and deployment method provided by yet another embodiment of the present disclosure
  • Figure 2e is a schematic flowchart of a model training and deployment method provided by yet another embodiment of the present disclosure
  • Figure 3a is a schematic flowchart of a model training and deployment method provided by yet another embodiment of the present disclosure.
  • Figure 3b is a schematic flowchart of a model training and deployment method provided by yet another embodiment of the present disclosure.
  • Figure 3c is a schematic flowchart of a model training and deployment method provided by yet another embodiment of the present disclosure.
  • Figure 3d is a schematic flowchart of a model training and deployment method provided by yet another embodiment of the present disclosure.
  • Figure 3e is a schematic flowchart of a model training and deployment method provided by yet another embodiment of the present disclosure.
  • Figure 4 is a schematic flowchart of a model training and deployment method provided by yet another embodiment of the present disclosure.
  • Figure 5a is a schematic flowchart of a model training and deployment method provided by yet another embodiment of the present disclosure.
  • Figure 5b is a schematic flowchart of a model training and deployment method provided by yet another embodiment of the present disclosure.
  • Figure 5c is a schematic flowchart of a model training and deployment method provided by yet another embodiment of the present disclosure.
  • Figure 6a is a schematic flowchart of a model training and deployment method provided by yet another embodiment of the present disclosure.
  • Figure 6b is a schematic flowchart of a model training and deployment method provided by yet another embodiment of the present disclosure.
  • Figure 6c is a schematic flowchart of a model training and deployment method provided by yet another embodiment of the present disclosure.
  • Figure 6d is a schematic flowchart of a model training and deployment method provided by yet another embodiment of the present disclosure.
  • Figure 6e is a schematic flowchart of a model training and deployment method provided by yet another embodiment of the present disclosure.
  • Figure 7 is a schematic structural diagram of a model training and deployment device provided by an embodiment of the present disclosure.
  • Figure 8 is a schematic structural diagram of a model training and deployment device provided by another embodiment of the present disclosure.
  • Figure 9 is a block diagram of a user equipment provided by an embodiment of the present disclosure.
  • Figure 10 is a block diagram of a network side device provided by an embodiment of the present disclosure.
  • first, second, third, etc. may be used to describe various information in the embodiments of the present disclosure, the information should not be limited to these terms. These terms are only used to distinguish information of the same type from each other.
  • first information may also be called second information, and similarly, the second information may also be called first information.
  • the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
  • Figure 1 is a schematic flowchart of a model training and deployment method provided by an embodiment of the present disclosure. The method is executed by a UE (User Equipment). As shown in Figure 1, the method may include the following steps:
  • Step 101 Report capability information to the network device.
  • a UE may be a device that provides voice and/or data connectivity to users.
  • Terminal devices can communicate with one or more core networks via RAN (Radio Access Network).
  • a UE may be an IoT (Internet of Things) terminal, such as a sensor device, a mobile phone (or "cellular" phone), or a computer with an IoT terminal, which may, for example, be a fixed, portable, pocket-sized, handheld, built-in, or vehicle-mounted device.
  • For example, a UE may also be referred to as a station (STA), a subscriber unit, a subscriber station, a mobile station, a remote station, a remote terminal, an access point, an access terminal, a user terminal, a user device, or a user agent.
  • the UE may also be a device of an unmanned aerial vehicle.
  • the UE may also be a vehicle-mounted device, for example, it may be a driving computer with a wireless communication function, or a wireless terminal connected to an external driving computer.
  • the UE may also be a roadside device, for example, it may be a streetlight, a signal light, or other roadside device with wireless communication functions.
  • the capability information may be used to indicate the UE's AI (Artificial Intelligence) and/or ML (Machine Learning) support capabilities.
  • the model may include at least one of the following:
  • the above capability information may include at least one of the following:
  • the types of AI and/or ML supported by the UE;
  • the maximum support capability information of the UE for the model includes the structural information of the most complex model supported by the UE.
  • the above-mentioned structural information may include, for example, the number of layers of the model.
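As an illustration of the capability reporting described above, the capability information could be pictured as a simple record. The field names below are hypothetical; the disclosure does not define a concrete encoding.

```python
# Hypothetical sketch of the UE capability information described above.
# Field names are illustrative assumptions, not defined by the disclosure.
capability_info = {
    "supports_ai": True,                                   # whether the UE supports AI
    "supports_ml": True,                                   # whether the UE supports ML
    "supported_model_types": ["CNN", "fully_connected_DNN"],
    # maximum support capability: structural information of the most
    # complex model supported by the UE, e.g. the number of layers
    "max_model_structure": {"max_layers": 16},
}

def supports_model(capability, model_type, num_layers):
    """Check whether a candidate model fits within the UE's reported capability."""
    return (model_type in capability["supported_model_types"]
            and num_layers <= capability["max_model_structure"]["max_layers"])
```

A network device could apply such a check before sending a model to the UE for training.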
  • Step 102 Obtain the model information of the encoder model to be trained and/or the model information of the decoder model to be trained sent by the network device.
  • multiple different encoder models to be trained and/or multiple different decoder models to be trained are stored in the network device, where there is a correspondence between the encoder models and the decoder models.
  • based on the capability information, the network device can select, from the multiple stored encoder models to be trained, an encoder model to be trained that matches the AI and/or ML support capabilities of the UE, and/or select, from the multiple stored decoder models to be trained, a decoder model to be trained that matches the AI and/or ML support capabilities of the network device itself.
  • the network device will then send the model information of the selected encoder model to be trained and/or the model information of the selected decoder model to be trained to the UE.
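The selection step described above can be sketched as a filter over the stored candidates. The model records, IDs, and matching rule below are illustrative assumptions only.

```python
# Illustrative sketch of the network device selecting, from its stored
# candidates, an encoder model to be trained that the UE supports.
# Record fields and IDs are assumptions, not defined by the disclosure.
stored_encoders = [
    {"model_id": "enc-A", "type": "CNN", "layers": 32},
    {"model_id": "enc-B", "type": "CNN", "layers": 8},
    {"model_id": "enc-C", "type": "fully_connected_DNN", "layers": 4},
]

def select_encoder(capability, candidates):
    """Return the first stored encoder model matching the UE's capability, or None."""
    for model in candidates:
        if (model["type"] in capability["supported_types"]
                and model["layers"] <= capability["max_layers"]):
            return model
    return None

ue_capability = {"supported_types": ["CNN"], "max_layers": 16}
chosen = select_encoder(ue_capability, stored_encoders)
```

Here "enc-A" is rejected as too complex for the UE, so the 8-layer "enc-B" is chosen.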
  • the relevant content about model information will be introduced in detail later.
  • Step 103 Generate an encoder model and a decoder model based on the model information of the encoder model to be trained and/or the model information of the decoder model to be trained.
  • the UE generates an encoder model and a decoder model by training the encoder model to be trained and/or the decoder model to be trained.
  • Step 104 Send the model information of the decoder model to the network device.
  • the model information of the decoder model can be used to deploy the decoder model.
  • the model information of the above-mentioned decoder model may include at least one of the following:
  • the types of models mentioned above may include:
  • CNN (Convolutional Neural Network);
  • for different types of models, the model parameters will also differ.
  • when the decoder model is a CNN model, the model parameters of the decoder model may include at least one of the following: the compression rate of the CNN model, the number of convolutional layers, the arrangement information between the convolutional layers, the weight information of each convolutional layer, the convolution kernel size of each convolutional layer, and the normalization layer and activation function type applied to each convolutional layer.
  • when the decoder model is a fully connected DNN model, the model parameters of the decoder model may include at least one of the following: the compression rate of the fully connected DNN model, the number of fully connected layers, the arrangement information between the fully connected layers, the weight information of each fully connected layer, the number of nodes in each fully connected layer, and the normalization layer and activation function type applied to each fully connected layer.
  • when the decoder model is a combined model of CNN and fully connected DNN, the model parameters of the decoder model may include at least one of the following: the compression rate of the combined model, the number of convolutional layers and fully connected layers, the matching mode between them, the weight information and convolution kernel size of each convolutional layer, the number of nodes and weight information of each fully connected layer, and the normalization layer and activation function type applied to each fully connected layer and convolutional layer.
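Pulling the parameter categories above together, the "model information" of a CNN decoder could be pictured as a nested record. All keys and values below are illustrative assumptions; the disclosure only enumerates the parameter categories.

```python
# Hypothetical example of the "model information" of a CNN decoder model,
# covering the parameter categories listed above. Values are illustrative only.
decoder_model_info = {
    "model_type": "CNN",
    "compression_rate": 0.25,            # ratio of compressed to original size
    "num_conv_layers": 3,
    "layer_order": ["conv1", "conv2", "conv3"],   # arrangement information
    "layers": {
        "conv1": {"kernel_size": 3, "weights_shape": (8, 2, 3, 3),
                  "norm": "batch_norm", "activation": "relu"},
        "conv2": {"kernel_size": 3, "weights_shape": (8, 8, 3, 3),
                  "norm": "batch_norm", "activation": "relu"},
        "conv3": {"kernel_size": 3, "weights_shape": (2, 8, 3, 3),
                  "norm": None, "activation": "sigmoid"},
    },
}
```

A network device receiving such a record would have enough structure and weight information to rebuild the decoder model.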
  • the UE will first report capability information to the network device.
  • This capability information can be used to indicate the UE's AI and/or ML support capabilities. The UE will then obtain the model information of the encoder model to be trained and/or of the decoder model to be trained sent by the network device, generate the encoder model and the decoder model, and send the model information of the decoder model to the network device.
  • the model information of the decoder model is used to deploy the decoder model, so that the network device can deploy the decoder model based on that model information. Therefore, embodiments of the present disclosure provide a method for generating and deploying models, which can be used to train and deploy AI/ML models.
  • Figure 2a is a schematic flowchart of a model training deployment method provided by an embodiment of the present disclosure. The method is executed by a UE. As shown in Figure 2a, the method may include the following steps:
  • Step 201a Report capability information to the network device.
  • For a detailed introduction to step 201a, reference may be made to the description of the above embodiments, which will not be repeated here.
  • Step 202a Obtain the model information of the encoder model to be trained and the model information of the decoder model to be trained sent by the network device.
  • based on the capability information, the network device can select, from the multiple different encoder models to be trained and the multiple different decoder models to be trained stored in it, an encoder model to be trained that is supported by the UE and a decoder model to be trained that is supported by the network device, and send the model information of the selected encoder model to be trained and the model information of the selected decoder model to be trained to the UE. The encoder model to be trained and the decoder model to be trained selected by the network device correspond to each other.
  • Step 203a Deploy the encoder model to be trained based on the model information of the encoder model to be trained.
  • Step 204a Deploy the decoder model to be trained based on the model information of the decoder model to be trained.
  • Step 205a Determine sample data based on the UE's measurement information and/or historical measurement information.
  • the above-mentioned measurement information may include at least one of the following:
  • the UE's measurement information for reference signals (such as the SSB (Synchronization Signal Block), the CSI-RS (Channel State Information-Reference Signal), etc.);
  • the UE's RRM (Radio Resource Management) measurement information;
  • the UE's RRC (Radio Resource Control) measurement information;
  • the UE's beam measurement information.
  • Step 206a Train the encoder model to be trained and the decoder model to be trained based on the sample data to generate an encoder model and a decoder model.
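The joint training in step 206a can be illustrated with a deliberately tiny, pure-Python sketch: a one-parameter "encoder" and one-parameter "decoder" trained together so that the decoder reconstructs the encoder's input. A real implementation would use an ML framework and the CNN/DNN structures described elsewhere in the disclosure; this is only a minimal stand-in for the joint training loop.

```python
# Minimal stand-in for joint encoder/decoder training (step 206a).
# The "models" are single scalar weights; real models would be the
# CNN / fully connected DNN structures described in the disclosure.
def train_autoencoder(samples, lr=0.02, epochs=500):
    w_enc, w_dec = 0.5, 0.5           # encoder and decoder weights to be trained
    for _ in range(epochs):
        for x in samples:
            code = w_enc * x          # encoder: compress the sample
            x_hat = w_dec * code      # decoder: reconstruct the sample
            err = x - x_hat           # reconstruction error
            # gradient descent on the squared error, updating both jointly
            w_dec += lr * 2 * err * code
            w_enc += lr * 2 * err * w_dec * x
    return w_enc, w_dec

# the sample data would come from the UE's measurement information (step 205a)
w_enc, w_dec = train_autoencoder([0.5, 1.0, 1.5])
# after training, decoding approximately inverts encoding: w_dec * w_enc is near 1
```

The point of the sketch is only that both models are trained against the same reconstruction objective, which is what makes the later encoder/decoder correspondence meaningful.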
  • Step 207a Send the model information of the decoder model to the network device, and the model information of the decoder model can be used to deploy the decoder model.
  • For a detailed introduction to step 207a, reference may be made to the description of the above embodiments, which will not be repeated here.
  • the UE will first report capability information to the network device.
  • This capability information can be used to indicate the UE's AI and/or ML support capabilities. The UE will then obtain the model information of the encoder model to be trained and/or of the decoder model to be trained sent by the network device, generate the encoder model and the decoder model, and send the model information of the decoder model to the network device.
  • the model information of the decoder model is used to deploy the decoder model, so that the network device can deploy the decoder model based on that model information. Therefore, embodiments of the present disclosure provide a method for generating and deploying models, which can be used to train and deploy AI/ML models.
  • Figure 2b is a schematic flowchart of a model training deployment method provided by an embodiment of the present disclosure. The method is executed by a UE. As shown in Figure 2b, the method may include the following steps:
  • Step 201b Report capability information to the network device.
  • For a detailed introduction to step 201b, reference may be made to the description of the above embodiments, which will not be repeated here.
  • Step 202b Obtain the model information of the encoder model to be trained sent by the network device.
  • based on the capability information, the network device can select, from the multiple different encoder models to be trained stored in it, an encoder model to be trained that is supported by the UE, and send the model information of the selected encoder model to be trained to the UE.
  • the above-mentioned encoder model to be trained should meet the following conditions:
  • the selected encoder model to be trained should be a model that matches the capability information of the UE (that is, the model supported by the UE);
  • the decoder model to be trained corresponding to the selected encoder model to be trained should be a model supported by the network device.
  • Step 203b Deploy the encoder model to be trained based on the model information of the encoder model to be trained.
  • Step 204b Determine a decoder model to be trained that matches the encoder model to be trained based on the encoder model to be trained.
  • the model information of the decoder model to be trained that matches the encoder model to be trained can be determined based on the model information of the encoder model to be trained, and the decoder model to be trained can then be deployed based on that model information.
  • Step 205b Determine sample data based on the UE's measurement information and/or historical measurement information.
  • Step 206b Train the encoder model to be trained and the decoder model to be trained based on the sample data to generate an encoder model and a decoder model.
  • Step 207b Send the model information of the decoder model to the network device.
  • the model information of the decoder model can be used to deploy the decoder model.
  • the UE will first report capability information to the network device.
  • This capability information can be used to indicate the UE's AI and/or ML support capabilities. The UE will then obtain the model information of the encoder model to be trained and/or of the decoder model to be trained sent by the network device, generate the encoder model and the decoder model, and send the model information of the decoder model to the network device.
  • the model information of the decoder model is used to deploy the decoder model, so that the network device can deploy the decoder model based on that model information. Therefore, embodiments of the present disclosure provide a method for generating and deploying models, which can be used to train and deploy AI/ML models.
  • Figure 2c is a schematic flowchart of a model training deployment method provided by an embodiment of the present disclosure. The method is executed by a UE. As shown in Figure 2c, the method may include the following steps:
  • Step 201c Report capability information to the network device.
  • For a detailed introduction to step 201c, reference may be made to the description of the above embodiments, which will not be repeated here.
  • Step 202c Obtain the model information of the encoder model to be trained sent by the network device.
  • Step 203c Determine sample data based on the UE's measurement information and/or historical measurement information.
  • Step 204c Train the encoder model to be trained based on the sample data to generate an encoder model.
  • For steps 201c to 204c, please refer to the description of the above embodiments, which will not be repeated here.
  • Step 205c The UE determines a decoder model that matches the encoder model based on the model information of the encoder model.
  • the model information of a decoder model that matches the encoder model can be determined based on the model information of the encoder model, and the decoder model can then be deployed based on that model information.
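Because the disclosure states that encoder and decoder models correspond to each other, the matching in step 205c can be pictured as a lookup over that correspondence. The table and model IDs below are hypothetical.

```python
# Hypothetical correspondence table between encoder models and their matching
# decoder models, as relied on in step 205c. IDs are illustrative only.
ENCODER_TO_DECODER = {
    "enc-A": "dec-A",
    "enc-B": "dec-B",
}

def matching_decoder(encoder_model_info):
    """Return the ID of the decoder model corresponding to the given encoder model."""
    decoder_id = ENCODER_TO_DECODER.get(encoder_model_info["model_id"])
    if decoder_id is None:
        raise LookupError("no decoder model corresponds to this encoder model")
    return decoder_id
```

The same table read in the other direction would cover step 204d, where the UE determines the matching encoder from a decoder.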
  • Step 206c Send the model information of the decoder model to the network device, and the model information of the decoder model can be used to deploy the decoder model.
  • For steps 205c to 206c, please refer to the description of the above embodiments, which will not be repeated here.
  • the UE will first report capability information to the network device.
  • This capability information can be used to indicate the UE's AI and/or ML support capabilities. The UE will then obtain the model information of the encoder model to be trained and/or of the decoder model to be trained sent by the network device, generate the encoder model and the decoder model, and send the model information of the decoder model to the network device.
  • the model information of the decoder model is used to deploy the decoder model, so that the network device can deploy the decoder model based on that model information. Therefore, embodiments of the present disclosure provide a method for generating and deploying models, which can be used to train and deploy AI/ML models.
  • Figure 2d is a schematic flowchart of a model training deployment method provided by an embodiment of the present disclosure. The method is executed by a UE. As shown in Figure 2d, the method may include the following steps:
  • Step 201d Report capability information to the network device.
  • For a detailed introduction to step 201d, reference may be made to the description of the above embodiments, which will not be repeated here.
  • Step 202d Obtain model information of the decoder model to be trained sent by the network device.
  • based on the capability information, the network device can select a decoder model to be trained from the multiple different decoder models to be trained stored in it.
  • here, "selecting a decoder model to be trained based on the capability information" specifically means that the selected decoder model to be trained must meet the following conditions:
  • the selected decoder model to be trained should be a model supported by the network device
  • the encoder model to be trained corresponding to the selected decoder model to be trained should be a model that matches the capability information of the UE (that is, a model supported by the UE).
  • Step 203d Deploy the decoder model to be trained based on the model information of the decoder model to be trained.
  • Step 204d Determine an encoder model to be trained that matches the decoder model to be trained based on the decoder model to be trained.
  • the model information of the encoder model to be trained that matches the decoder model to be trained can be determined based on the model information of the decoder model to be trained, and the encoder model to be trained can then be deployed based on that model information.
  • the determined encoder model to be trained is specifically a model supported by the UE.
  • Step 205d Determine sample data based on the UE's measurement information and/or historical measurement information.
  • Step 206d Train the decoder model to be trained and the encoder model to be trained based on the sample data to generate a decoder model and an encoder model.
  • Step 207d Send the model information of the decoder model to the network device.
  • the model information of the decoder model can be used to deploy the decoder model.
  • the UE will first report capability information to the network device.
  • This capability information can be used to indicate the UE's AI and/or ML support capabilities. The UE will then obtain the model information of the encoder model to be trained and/or of the decoder model to be trained sent by the network device, generate the encoder model and the decoder model, and send the model information of the decoder model to the network device.
  • the model information of the decoder model is used to deploy the decoder model, so that the network device can deploy the decoder model based on that model information. Therefore, embodiments of the present disclosure provide a method for generating and deploying models, which can be used to train and deploy AI/ML models.
  • Figure 2e is a schematic flowchart of a model training deployment method provided by an embodiment of the present disclosure. The method is executed by a UE. As shown in Figure 2e, the method may include the following steps:
  • Step 201e Report capability information to the network device.
  • Step 202e Obtain model information of the decoder model to be trained sent by the network device.
  • Step 203e Determine sample data based on the UE's measurement information and/or historical measurement information.
  • Step 204e Train the decoder model to be trained based on the sample data to generate a decoder model.
  • Step 205e The UE determines an encoder model that matches the decoder model based on the model information of the decoder model.
  • the model information of the encoder model matching the decoder model can be determined based on the model information of the decoder model, and the encoder model can then be generated based on that model information.
  • Step 206e Send the model information of the decoder model to the network device, and the model information of the decoder model can be used to deploy the decoder model.
  • For steps 201e to 206e, please refer to the description of the above embodiments, which will not be repeated here.
  • the UE will first report capability information to the network device.
  • This capability information can be used to indicate the UE's AI and/or ML support capabilities. The UE will then obtain the model information of the encoder model to be trained and/or of the decoder model to be trained sent by the network device, generate the encoder model and the decoder model, and send the model information of the decoder model to the network device.
  • the model information of the decoder model is used to deploy the decoder model, so that the network device can deploy the decoder model based on that model information. Therefore, embodiments of the present disclosure provide a method for generating and deploying models, which can be used to train and deploy AI/ML models.
  • Figure 3a is a schematic flowchart of a model training and deployment method provided by an embodiment of the present disclosure. The method is executed by a UE. As shown in Figure 3a, the method may include the following steps:
  • Step 301a Report capability information to the network device.
  • Step 302a Obtain model information of the encoder model to be trained and/or model information of the decoder model to be trained sent by the network device.
  • Step 303a Generate an encoder model and a decoder model based on the model information of the encoder model to be trained and/or the model information of the decoder model to be trained.
  • Step 304a Send the model information of the decoder model to the network device.
  • the model information of the decoder model can be used to deploy the decoder model.
  • For steps 301a to 304a, please refer to the description of the above embodiments, which will not be repeated here.
  • Step 305a Obtain the indication information sent by the network device.
  • the indication information may be used to indicate the type of information reported by the UE to the network device.
  • the information type may include at least one of the following:
  • the original reported information without encoder model encoding;
  • the reported information encoded by the encoder model.
  • the reported information may be information to be reported by the UE to the network device.
  • the reported information may include CSI (Channel State Information) information;
  • the CSI information may include at least one of the following:
  • PMI (Precoding Matrix Indicator);
  • CQI (Channel Quality Indicator);
  • RI (Rank Indicator);
  • RSRP (Reference Signal Received Power);
  • RSRQ (Reference Signal Received Quality);
  • SINR (Signal-to-Interference-plus-Noise Ratio).
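  • As a minimal illustrative sketch (not part of the disclosure; field names and value types are assumptions, not signaling-spec definitions), the CSI quantities listed above could be grouped into one report structure:

```python
from dataclasses import dataclass, asdict

# Hypothetical container for the CSI quantities listed above.
@dataclass
class CsiReport:
    pmi: int          # Precoding Matrix Indicator
    cqi: int          # Channel Quality Indicator
    ri: int           # Rank Indicator
    rsrp_dbm: float   # Reference Signal Received Power
    rsrq_db: float    # Reference Signal Received Quality
    sinr_db: float    # Signal-to-Interference-plus-Noise Ratio

report = CsiReport(pmi=3, cqi=11, ri=2, rsrp_dbm=-95.0, rsrq_db=-10.5, sinr_db=17.2)
print(asdict(report))
```

  • In practice each of these quantities has its own quantization and reporting rules; the dataclass only shows the set of fields the reported information may carry.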
  • the UE may obtain the indication information sent by the network device through signaling.
  • Step 306a Report to the network device based on the indication information.
  • When the indication information in the above step 305a indicates that the type of information reported by the UE to the network device is the original reported information without encoder model encoding, the UE can, when reporting to the network device in step 306a, directly send the original reported information to the network device without encoding it; and when the indication information in the above step 305a indicates that the type of information reported by the UE to the network device is the information after the original reported information has been encoded by the encoder model, the UE must, when reporting to the network device in step 306a, first use the encoder model to encode the reported information, and then report the encoded information to the network device.
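  • The UE-side branching described above can be sketched as follows (a minimal sketch: the two indication values, the function names, and the toy encoder are assumptions for illustration, not drawn from the disclosure):

```python
RAW = "raw"          # indication: report original information without encoding
ENCODED = "encoded"  # indication: report encoder-model-encoded information

def build_report(info, indication, encoder):
    """Return the payload the UE sends, per the network device's indication."""
    if indication == RAW:
        return info                 # first branch: send the original info as-is
    if indication == ENCODED:
        return encoder(info)        # second branch: encode first, then report
    raise ValueError(f"unknown indication: {indication}")

# Toy "encoder model": compresses a list of floats by keeping every other value.
toy_encoder = lambda x: x[::2]

print(build_report([1.0, 2.0, 3.0, 4.0], RAW, toy_encoder))      # [1.0, 2.0, 3.0, 4.0]
print(build_report([1.0, 2.0, 3.0, 4.0], ENCODED, toy_encoder))  # [1.0, 3.0]
```

  • The point of the indication is only the branch selection; the actual encoder in the disclosure is a trained AI/ML model rather than a fixed subsampling rule.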
  • In summary, in the method provided by the embodiments of the present disclosure, the UE will first report capability information to the network device. This capability information can be used to indicate the UE's AI and/or ML support capabilities. The UE will then obtain the model information of the encoder model to be trained and/or the model information of the decoder model to be trained sent by the network device, and generate an encoder model and a decoder model based on that model information. The model information of the decoder model is used to deploy the decoder model, so that the network device can deploy the decoder model based on the model information of the decoder model. Therefore, embodiments of the present disclosure provide a method for generating and deploying models, which can be used to train and deploy AI/ML models.
  • Figure 3b is a schematic flowchart of a model training and deployment method provided by an embodiment of the present disclosure. The method is executed by a UE. As shown in Figure 3b, the method may include the following steps:
  • Step 301b Report capability information to the network device.
  • Step 302b Obtain the model information of the encoder model to be trained and/or the model information of the decoder model to be trained sent by the network device.
  • Step 303b Generate an encoder model and a decoder model based on the model information of the encoder model to be trained and/or the model information of the decoder model to be trained.
  • Step 304b Send the model information of the decoder model to the network device.
  • the model information of the decoder model can be used to deploy the decoder model.
  • Step 305b Obtain the indication information sent by the network device.
  • the indication information is used to indicate that the type of information reported by the UE to the network device includes: information after the original reported information has been encoded by the encoder model.
  • For details of steps 301b to 305b, please refer to the description of the above embodiments; they are not repeated here.
  • Step 306b Use the encoder model to encode the reported information.
  • Step 307b Report the encoded information to the network device.
  • Since the reported information is encoded, the information reported by the UE and received by the network device is essentially information encoded by the encoder model. Based on this, the network device needs to use the decoder model (such as the decoder model generated in step 303b above) to decode the information reported by the UE to obtain the original reported information.
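  • The encode-at-the-UE, decode-at-the-network flow of steps 306b-307b can be mirrored with a toy encoder/decoder pair (the quantization scheme and names are illustrative assumptions; the real models in the disclosure are trained AI/ML models):

```python
# Toy stand-ins for the trained encoder/decoder models: the encoder
# quantizes each value, the decoder dequantizes it again.
STEP = 0.5

def encoder_model(info):
    return [round(v / STEP) for v in info]   # UE side: encode/compress (step 306b)

def decoder_model(payload):
    return [q * STEP for q in payload]       # network side: decode the report

original = [0.4, 1.1, -0.8]
reported = encoder_model(original)           # what the UE reports (step 307b)
recovered = decoder_model(reported)          # what the network device recovers
print(reported, recovered)
```

  • The recovered values differ slightly from the originals; that residual error is exactly the "distortion" that later embodiments monitor when deciding whether to retrain the models.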
  • In summary, in the method provided by the embodiments of the present disclosure, the UE will first report capability information to the network device. This capability information can be used to indicate the UE's AI and/or ML support capabilities. The UE will then obtain the model information of the encoder model to be trained and/or the model information of the decoder model to be trained sent by the network device, and generate an encoder model and a decoder model based on that model information. The model information of the decoder model is used to deploy the decoder model, so that the network device can deploy the decoder model based on the model information of the decoder model. Therefore, embodiments of the present disclosure provide a method for generating and deploying models, which can be used to train and deploy AI/ML models.
  • Figure 3c is a schematic flowchart of a model training and deployment method provided by an embodiment of the present disclosure. The method is executed by a UE. As shown in Figure 3c, the method may include the following steps:
  • Step 301c Report capability information to the network device.
  • Step 302c Obtain the model information of the encoder model to be trained and/or the model information of the decoder model to be trained sent by the network device.
  • Step 303c Generate an encoder model and a decoder model based on the model information of the encoder model to be trained and/or the model information of the decoder model to be trained.
  • Step 304c Send the model information of the encoder model to the network device, and the model information of the encoder model is used to deploy the encoder model.
  • Step 305c Update the encoder model and the decoder model to generate an updated encoder model and an updated decoder model.
  • the UE may update the encoder model and the decoder model based on instructions from the network device. Wherein, the UE performs model updates on the encoder model and decoder model based on instructions from the network device, which may include the following steps:
  • Step 1 Obtain the update instruction information sent by the network device.
  • the update instruction information may be used to instruct the UE to adjust model parameters.
  • In some embodiments, the update indication information may include model information of a new encoder model and/or model information of a new decoder model, where the types of the new encoder model and the new decoder model are different from the types of the original encoder model and the original decoder model.
  • the above-mentioned update instruction information may only include model information of the new encoder model.
  • the update indication information may only include model information of the new decoder model.
  • the update indication information may include model information of the new decoder model and model information of the new encoder model.
  • Step 2 Determine a new encoder model and a new decoder model based on the update instruction information.
  • When the update indication information is used to instruct the UE to adjust model parameters, the UE can obtain a new encoder model and a new decoder model by adjusting the model parameters of the original encoder model and the original decoder model.
  • When the update indication information includes model information of the new encoder model and/or model information of the new decoder model, the UE may generate a new encoder model and a new decoder model based on the model information of the new encoder model and/or the model information of the new decoder model, where the types of the new encoder model and the new decoder model are different from the types of the original encoder model and the original decoder model.
  • When the update indication information includes the model information of only one new model, the UE may determine the matching model information of the other new model based on the model information of the new model included in the update indication information. For example, when the update indication information only includes the model information of the new encoder model, the UE may correspondingly determine the model information of the new decoder model that matches the model information of the new encoder model; after that, the new decoder model can be determined based on the model information of the new decoder model, and the new encoder model can be determined based on the model information of the new encoder model.
  • Step 3 Retrain the new encoder model and the new decoder model to obtain the updated encoder model and updated decoder model.
  • the UE can autonomously perform model updates on the encoder model and the decoder model.
  • the UE independently performs model updates on the encoder model and decoder model, which may include the following steps:
  • Step a Monitor the distortion of the original encoder model and the original decoder model.
  • the distortion degree of the original encoder model and the original decoder model can be monitored in real time.
  • In some embodiments, the information reported by the UE that has not been encoded and compressed can be used as input information and sequentially input into the encoder model and the decoder model to perform the encoding and decoding operations in turn, so as to obtain the output information, and the matching degree between the output information and the input information can be calculated.
  • The distortion of the original encoder model and the original decoder model can then be determined based on this matching degree.
  • Step b When the distortion exceeds the first threshold, retrain the original encoder model and the original decoder model to obtain an updated encoder model and an updated decoder model, where the distortion of the updated encoder model and the updated decoder model is lower than a second threshold, and the second threshold is less than or equal to the first threshold.
  • When the distortion exceeds the first threshold, it means that the encoding and decoding accuracy of the original encoder model and the original decoder model is low, which will affect the accuracy of subsequent signal processing. Therefore, the original encoder model and the original decoder model need to be retrained to obtain an updated encoder model and an updated decoder model. Moreover, it should be ensured that the distortion of the updated encoder model and the updated decoder model is lower than the second threshold, so as to guarantee the encoding and decoding accuracy of the model.
  • the above-mentioned first threshold and second threshold may be set in advance.
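  • Steps a-b above can be sketched as follows. The disclosure does not fix a distortion metric, so normalized mean-squared error between the input information and the encoder-to-decoder round-trip output is an assumption here, as are both threshold values and all names:

```python
FIRST_THRESHOLD = 0.10   # retrain when distortion exceeds this (step b trigger)
SECOND_THRESHOLD = 0.05  # retrained pair must stay below this (<= first threshold)

def distortion(inputs, outputs):
    """Normalized mean-squared error between input and round-trip output."""
    num = sum((a - b) ** 2 for a, b in zip(inputs, outputs))
    den = sum(a ** 2 for a in inputs)
    return num / den

def needs_retraining(inputs, encoder, decoder):
    """Step a: feed input through encoder then decoder, compare with input."""
    out = decoder(encoder(inputs))
    return distortion(inputs, out) > FIRST_THRESHOLD

lossy = lambda x: [v * 0.5 for v in x]   # stand-in encoder with heavy loss
ident = lambda x: list(x)                # stand-in decoder
print(needs_retraining([1.0, 2.0, 3.0], lossy, ident))  # True: distortion is 0.25
```

  • A real monitor would run this continuously over reported samples; the example only shows the threshold comparison that triggers retraining.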
  • Step 306c Directly replace the original encoder model with the updated encoder model.
  • After replacing the original encoder model with the updated encoder model, the UE can use the updated encoder model to perform encoding. The encoding accuracy of the updated encoder model is higher, which can ensure the accuracy of subsequent signal processing.
  • In summary, in the method provided by the embodiments of the present disclosure, the UE will first report capability information to the network device. This capability information can be used to indicate the UE's AI and/or ML support capabilities. The UE will then obtain the model information of the encoder model to be trained and/or the model information of the decoder model to be trained sent by the network device, and generate an encoder model and a decoder model based on that model information. The model information of the decoder model is used to deploy the decoder model, so that the network device can deploy the decoder model based on the model information of the decoder model. Therefore, embodiments of the present disclosure provide a method for generating and deploying models, which can be used to train and deploy AI/ML models.
  • Figure 3d is a schematic flowchart of a model training and deployment method provided by an embodiment of the present disclosure. The method is executed by a UE. As shown in Figure 3d, the method may include the following steps:
  • Step 301d Report capability information to the network device.
  • Step 302d Obtain the model information of the encoder model to be trained and/or the model information of the decoder model to be trained sent by the network device.
  • Step 303d Generate an encoder model and a decoder model based on the model information of the encoder model to be trained and/or the model information of the decoder model to be trained.
  • Step 304d Send the model information of the encoder model to the network device, and the model information of the encoder model is used to deploy the encoder model.
  • Step 305d Update the encoder model and the decoder model to generate an updated encoder model and an updated decoder model.
  • Step 306d Determine the difference model information between the model information of the updated encoder model and the model information of the original encoder model.
  • Step 307d Optimize the original encoder model based on the difference model information.
  • The model information of the optimally adjusted encoder model can be consistent with the model information of the updated encoder model generated in the above step 305d, so that the network device can subsequently use the updated encoder model to perform encoding. The encoding accuracy of the updated encoder model is higher, which can ensure the accuracy of subsequent signal processing.
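  • Steps 306d-307d can be sketched as follows, assuming (purely for illustration) that "model information" is a dict of named parameter lists and that "difference model information" is the element-wise delta between updated and original parameters:

```python
def diff_model_info(updated, original):
    """Step 306d: difference between updated and original model information."""
    return {k: [u - o for u, o in zip(updated[k], original[k])] for k in updated}

def apply_diff(original, diff):
    """Step 307d: optimize the original model by adding the deltas back on."""
    return {k: [o + d for o, d in zip(original[k], diff[k])] for k in original}

original = {"layer1": [0.25, -0.5], "layer2": [1.0]}
updated  = {"layer1": [0.75,  0.0], "layer2": [0.5]}
diff = diff_model_info(updated, original)
print(apply_diff(original, diff))  # reproduces the updated parameters
```

  • Sending only the difference model information is attractive because the delta is typically much smaller to signal than the full parameter set, while still letting the receiver reconstruct the updated model exactly.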
  • In summary, in the method provided by the embodiments of the present disclosure, the UE will first report capability information to the network device. This capability information can be used to indicate the UE's AI and/or ML support capabilities. The UE will then obtain the model information of the encoder model to be trained and/or the model information of the decoder model to be trained sent by the network device, and generate an encoder model and a decoder model based on that model information. The model information of the decoder model is used to deploy the decoder model, so that the network device can deploy the decoder model based on the model information of the decoder model. Therefore, embodiments of the present disclosure provide a method for generating and deploying models, which can be used to train and deploy AI/ML models.
  • Figure 3e is a schematic flowchart of a model training and deployment method provided by an embodiment of the present disclosure. The method is executed by a UE. As shown in Figure 3e, the method may include the following steps:
  • Step 301e Report capability information to the network device.
  • Step 302e Obtain model information of the encoder model to be trained and/or model information of the decoder model to be trained sent by the network device.
  • Step 303e Generate an encoder model and a decoder model based on the model information of the encoder model to be trained and/or the model information of the decoder model to be trained.
  • Step 304e Send the model information of the encoder model to the network device, and the model information of the encoder model is used to deploy the encoder model.
  • Step 305e Update the encoder model and the decoder model to generate an updated encoder model and an updated decoder model.
  • Step 306e Send the updated model information of the decoder model to the network device.
  • model information of the updated decoder model may include:
  • full model information of the updated decoder model; or, difference model information between the model information of the updated decoder model and the model information of the original decoder model.
  • By sending the model information of the updated decoder model to the network device, the UE can cause the network device to use the updated decoder model to decode. The decoding accuracy of the updated decoder model is higher, which can ensure the accuracy of subsequent signal processing.
  • In summary, in the method provided by the embodiments of the present disclosure, the UE will first report capability information to the network device. This capability information can be used to indicate the UE's AI and/or ML support capabilities. The UE will then obtain the model information of the encoder model to be trained and/or the model information of the decoder model to be trained sent by the network device, and generate an encoder model and a decoder model based on that model information. The model information of the decoder model is used to deploy the decoder model, so that the network device can deploy the decoder model based on the model information of the decoder model. Therefore, embodiments of the present disclosure provide a method for generating and deploying models, which can be used to train and deploy AI/ML models.
  • Figure 4 is a schematic flowchart of a model training and deployment method provided by an embodiment of the present disclosure. The method is executed by a network device. As shown in Figure 4, the method may include the following steps:
  • Step 401 Obtain the capability information reported by the UE.
  • the capability information is used to indicate the AI and/or ML support capabilities of the UE.
  • the model may include at least one of the following:
  • the capability information may include at least one of the following:
  • Types of AI and/or ML supported by the UE;
  • the maximum support capability information of the UE for the model includes the structural information of the most complex model supported by the UE.
  • the above-mentioned structural information may include, for example, the number of layers of the model.
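  • The capability information described above can be illustrated as a simple structure (field names and the layer-count check are assumptions for illustration, not signaling-spec definitions):

```python
from dataclasses import dataclass

# Illustrative shape of the reported capability information: the supported
# AI/ML model types plus the structural limit of the most complex model the
# UE can run (here just a maximum layer count, as suggested above).
@dataclass
class CapabilityInfo:
    supported_model_types: tuple   # e.g. ("cnn", "mlp")
    max_model_layers: int          # structural info of the most complex model

def supports(cap, model_type, num_layers):
    """Would this UE support a candidate model of the given type and depth?"""
    return model_type in cap.supported_model_types and num_layers <= cap.max_model_layers

cap = CapabilityInfo(supported_model_types=("cnn", "mlp"), max_model_layers=8)
print(supports(cap, "cnn", 6))   # True
print(supports(cap, "cnn", 12))  # False: exceeds the supported depth
```

  • A check like `supports` is what the network device would conceptually apply in step 402 when choosing which encoder model to send to the UE for training.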
  • Step 402 Send model information of the encoder model to be trained and/or model information of the decoder model to be trained to the UE based on the capability information.
  • Step 403 Obtain the model information of the decoder model sent by the UE.
  • the model information of the decoder model is used to deploy the decoder model.
  • the model information of the decoder model may include at least one of the following:
  • Step 404 Generate a decoder model based on the model information of the decoder model.
  • In summary, in the method provided by the embodiments of the present disclosure, the UE will first report capability information to the network device. This capability information can be used to indicate the UE's AI and/or ML support capabilities. The UE will then obtain the model information of the encoder model to be trained and/or the model information of the decoder model to be trained sent by the network device, and generate an encoder model and a decoder model based on that model information. The model information of the decoder model is used to deploy the decoder model, so that the network device can deploy the decoder model based on the model information of the decoder model. Therefore, embodiments of the present disclosure provide a method for generating and deploying models, which can be used to train and deploy AI/ML models.
  • Figure 5a is a schematic flowchart of a model training and deployment method provided by an embodiment of the present disclosure. The method is executed by a network device. As shown in Figure 5a, the method may include the following steps:
  • Step 501a Obtain the capability information reported by the UE.
  • the capability information is used to indicate the AI and/or ML support capabilities of the UE.
  • Step 502a Select the encoder model to be trained and the decoder model to be trained based on the capability information.
  • the encoder model to be trained is a model supported by the UE
  • the decoder model to be trained is a model supported by the network device.
  • Step 503a Send the model information of the encoder model to be trained and the model information of the decoder model to be trained to the UE.
  • Step 504a Obtain the model information of the decoder model sent by the UE.
  • the model information of the decoder model is used to deploy the decoder model.
  • Step 505a Generate a decoder model based on the model information of the decoder model.
  • For details of steps 501a to 505a, please refer to the description of the above embodiments; they are not repeated here.
  • In summary, in the method provided by the embodiments of the present disclosure, the UE will first report capability information to the network device. This capability information can be used to indicate the UE's AI and/or ML support capabilities. The UE will then obtain the model information of the encoder model to be trained and/or the model information of the decoder model to be trained sent by the network device, and generate an encoder model and a decoder model based on that model information. The model information of the decoder model is used to deploy the decoder model, so that the network device can deploy the decoder model based on the model information of the decoder model. Therefore, embodiments of the present disclosure provide a method for generating and deploying models, which can be used to train and deploy AI/ML models.
  • Figure 5b is a schematic flowchart of a model training and deployment method provided by an embodiment of the present disclosure. The method is executed by a network device. As shown in Figure 5b, the method may include the following steps:
  • Step 501b Obtain the capability information reported by the UE.
  • the capability information is used to indicate the AI and/or ML support capabilities of the UE.
  • Step 502b Select the encoder model to be trained based on the capability information.
  • the encoder model to be trained is a model supported by the UE.
  • Step 503b Send the model information of the encoder model to be trained to the UE.
  • Step 504b Obtain the model information of the decoder model sent by the UE.
  • the model information of the decoder model is used to deploy the decoder model.
  • Step 505b Generate a decoder model based on the model information of the decoder model.
  • For details of steps 501b to 505b, please refer to the description of the above embodiments; they are not repeated here.
  • In summary, in the method provided by the embodiments of the present disclosure, the UE will first report capability information to the network device. This capability information can be used to indicate the UE's AI and/or ML support capabilities. The UE will then obtain the model information of the encoder model to be trained and/or the model information of the decoder model to be trained sent by the network device, and generate an encoder model and a decoder model based on that model information. The model information of the decoder model is used to deploy the decoder model, so that the network device can deploy the decoder model based on the model information of the decoder model. Therefore, embodiments of the present disclosure provide a method for generating and deploying models, which can be used to train and deploy AI/ML models.
  • Figure 5c is a schematic flowchart of a model training and deployment method provided by an embodiment of the present disclosure. The method is executed by a network device. As shown in Figure 5c, the method may include the following steps:
  • Step 501c Obtain the capability information reported by the UE.
  • the capability information is used to indicate the AI and/or ML support capabilities of the UE.
  • Step 502c Select the decoder model to be trained based on the capability information.
  • the decoder model to be trained is a model supported by the network device.
  • Step 503c Send the model information of the decoder model to be trained to the UE.
  • Step 504c Obtain the model information of the decoder model sent by the UE.
  • the model information of the decoder model is used to deploy the decoder model.
  • Step 505c Generate a decoder model based on the model information of the decoder model.
  • For details of steps 501c to 505c, please refer to the description of the above embodiments; they are not repeated here.
  • In summary, in the method provided by the embodiments of the present disclosure, the UE will first report capability information to the network device. This capability information can be used to indicate the UE's AI and/or ML support capabilities. The UE will then obtain the model information of the encoder model to be trained and/or the model information of the decoder model to be trained sent by the network device, and generate an encoder model and a decoder model based on that model information. The model information of the decoder model is used to deploy the decoder model, so that the network device can deploy the decoder model based on the model information of the decoder model. Therefore, embodiments of the present disclosure provide a method for generating and deploying models, which can be used to train and deploy AI/ML models.
  • Figure 6a is a schematic flowchart of a model training and deployment method provided by an embodiment of the present disclosure. The method is executed by a network device. As shown in Figure 6a, the method may include the following steps:
  • Step 601a Obtain the capability information reported by the UE.
  • the capability information is used to indicate the AI and/or ML support capabilities of the UE.
  • Step 602a Send model information of the encoder model to be trained and/or model information of the decoder model to be trained to the UE based on the capability information.
  • Step 603a Obtain the model information of the decoder model sent by the UE.
  • the model information of the decoder model is used to deploy the decoder model.
  • Step 604a Generate a decoder model based on the model information of the decoder model.
  • For details of steps 601a to 604a, please refer to the description of the above embodiments; they are not repeated here.
  • Step 605a Send indication information to the UE.
  • the indication information is used to indicate the type of information reported by the UE to the network device.
  • the information type may include at least one of original reported information without encoding by the encoder model and information after the original reported information has been encoded by the encoder model.
  • the reported information is information to be reported by the UE to the network device.
  • the reported information may include CSI information;
  • the CSI information may include at least one of the following:
  • In summary, in the method provided by the embodiments of the present disclosure, the UE will first report capability information to the network device. This capability information can be used to indicate the UE's AI and/or ML support capabilities. The UE will then obtain the model information of the encoder model to be trained and/or the model information of the decoder model to be trained sent by the network device, and generate an encoder model and a decoder model based on that model information. The model information of the decoder model is used to deploy the decoder model, so that the network device can deploy the decoder model based on the model information of the decoder model. Therefore, embodiments of the present disclosure provide a method for generating and deploying models, which can be used to train and deploy AI/ML models.
  • Figure 6b is a schematic flowchart of a model training and deployment method provided by an embodiment of the present disclosure. The method is executed by a network device. As shown in Figure 6b, the method may include the following steps:
  • Step 601b Obtain the capability information reported by the UE.
  • the capability information is used to indicate the AI and/or ML support capabilities of the UE.
  • Step 602b Send model information of the encoder model to be trained and/or model information of the decoder model to be trained to the UE based on the capability information.
  • Step 603b Obtain the model information of the decoder model sent by the UE.
  • the model information of the decoder model is used to deploy the decoder model.
  • Step 604b Generate a decoder model based on the model information of the decoder model.
  • Step 605b Send indication information to the UE.
  • the indication information is used to indicate that the type of information reported by the UE to the network device includes: information after the original reported information has been encoded by the encoder model.
  • For details of steps 601b to 605b, please refer to the description of the above embodiments; they are not repeated here.
  • Step 606b When receiving the information reported by the UE, use the decoder model to decode the information reported by the UE.
  • Since the reported information is encoded, the information reported by the UE and received by the network device is essentially information encoded by the encoder model. Based on this, the network device needs to use the decoder model (such as the decoder model generated in step 604b above) to decode the information reported by the UE to obtain the original reported information.
  • In summary, in the method provided by the embodiments of the present disclosure, the UE will first report capability information to the network device. This capability information can be used to indicate the UE's AI and/or ML support capabilities. The UE will then obtain the model information of the encoder model to be trained and/or the model information of the decoder model to be trained sent by the network device, and generate an encoder model and a decoder model based on that model information. The model information of the decoder model is used to deploy the decoder model, so that the network device can deploy the decoder model based on the model information of the decoder model. Therefore, embodiments of the present disclosure provide a method for generating and deploying models, which can be used to train and deploy AI/ML models.
  • Figure 6c is a schematic flowchart of a model training and deployment method provided by an embodiment of the present disclosure. The method is executed by a network device. As shown in Figure 6c, the method may include the following steps:
  • Step 601c Obtain the capability information reported by the UE.
  • Step 602c Send model information of the encoder model to be trained and/or model information of the decoder model to be trained to the UE based on the capability information.
  • Step 603c Obtain the model information of the decoder model sent by the UE.
  • the model information of the decoder model is used to deploy the decoder model.
  • Step 604c Generate a decoder model based on the model information of the decoder model.
  • For details of steps 601c to 604c, please refer to the description of the above embodiments; they are not repeated here.
  • Step 605c Receive the updated model information of the decoder model sent by the UE.
  • the model information of the updated decoder model includes:
  • full model information of the updated decoder model; or, difference model information between the model information of the updated decoder model and the model information of the original decoder model.
  • In some embodiments, when the UE updates the encoder model and the decoder model based on the instructions of the network device, then for the network device, after executing the above step 604c, the network device also needs to send an update indication message to the UE.
  • The update indication message is used to instruct the UE to adjust the model parameters; or, the update indication message includes model information of the new encoder model and/or model information of the new decoder model, and the types of the new encoder model and the new decoder model are different from the types of the original encoder model and the original decoder model.
  • the UE may update the model based on the update indication message, and determine the model information of the updated decoder model and send it to the network device.
  • Before sending the update indication message to the UE, the network device needs to reselect, based on the capability information, a new encoder model and/or a new decoder model that are different from the original encoder model and the original decoder model.
  • The new encoder model may be a model supported by the UE, and the new decoder model may be a model supported by the network device.
  • Then, based on the reselected new encoder model and/or new decoder model, the network device can send to the UE the update indication message containing the model information of the new encoder model and/or the model information of the new decoder model.
  • In other embodiments, when the UE independently updates the encoder model and the decoder model, the network device does not need to send the update indication message to the UE after executing the above step 604c, and can directly receive the updated model information of the decoder model sent by the UE.
  • Step 606c Update the model based on the updated model information of the decoder model.
  • The method of performing a model update based on the updated model information of the decoder model will be introduced in subsequent embodiments.
  • In summary, in the method provided by the embodiments of the present disclosure, the UE will first report capability information to the network device. This capability information can be used to indicate the UE's AI and/or ML support capabilities. The UE will then obtain the model information of the encoder model to be trained and/or the model information of the decoder model to be trained sent by the network device, and generate an encoder model and a decoder model based on that model information. The model information of the decoder model is used to deploy the decoder model, so that the network device can deploy the decoder model based on the model information of the decoder model. Therefore, embodiments of the present disclosure provide a method for generating and deploying models, which can be used to train and deploy AI/ML models.
  • Figure 6d is a schematic flowchart of a model training and deployment method provided by an embodiment of the present disclosure. The method is executed by a network device. As shown in Figure 6d, the method may include the following steps:
  • Step 601d Obtain the capability information reported by the UE.
  • Step 602d Send model information of the encoder model to be trained and/or model information of the decoder model to be trained to the UE based on the capability information.
  • Step 603d Obtain the model information of the decoder model sent by the UE.
  • the model information of the decoder model is used to deploy the decoder model.
  • Step 604d Generate a decoder model based on the model information of the decoder model.
  • For details of steps 601d to 604d, please refer to the description of the above embodiments; they are not repeated here.
  • Step 605d Receive the updated model information of the decoder model sent by the UE.
  • Among them, the model information of the updated decoder model includes: the full model information of the updated decoder model; or, the difference model information between the model information of the updated decoder model and the model information of the original decoder model.
  • Step 606d Generate an updated decoder model based on the model information of the updated decoder model.
  • When the model information of the updated decoder model is the full model information, the network device can generate the updated decoder model directly based on this full model information.
  • When the model information of the updated decoder model is the difference model information, the network device can first determine the model information of its own original decoder model, then determine the model information of the updated decoder model based on the model information of the original decoder model and the difference model information, and finally generate the updated decoder model based on the model information of the updated decoder model.
  • Step 607d Use the updated decoder model to replace the original decoder model to update the model.
  • After the model update, the network device can use the updated decoder model to perform decoding; the decoding accuracy of the updated decoder model is higher, which can ensure the accuracy of subsequent signal processing.
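The two update paths of steps 605d to 607d can be sketched as follows. All function and parameter names here are illustrative assumptions rather than anything fixed by the disclosure, and the flat name-value dicts merely stand in for real model information (structure, weights, quantization, etc.).

```python
def apply_decoder_update(original_weights, reported_info, is_difference):
    """Sketch of steps 605d-607d: build the updated decoder weights.

    original_weights / reported_info are flat {parameter: value} dicts
    standing in for real model information.
    """
    if not is_difference:
        # Full model information: generate the updated model from it directly.
        return dict(reported_info)
    # Difference model information: combine the network device's own original
    # decoder model information with the reported differences.
    return {name: value + reported_info.get(name, 0.0)
            for name, value in original_weights.items()}


original = {"w0": 0.5, "w1": -1.0}
# The UE reports only the parameter deltas instead of the whole model.
diff_info = {"w0": 0.25, "w1": 0.5}
updated = apply_decoder_update(original, diff_info, is_difference=True)
# The updated model then replaces the original decoder model (step 607d).
```

Reporting only the difference model information would typically cost less air-interface overhead than resending the full model, which may be why the disclosure allows both forms.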
  • In summary, the UE first reports capability information to the network device. This capability information can be used to indicate the UE's AI and/or ML support capabilities; the UE then obtains the model information of the encoder model to be trained and/or the decoder model to be trained sent by the network device, generates the encoder model and the decoder model accordingly, and finally sends the model information of the decoder model to the network device. The model information of the decoder model is used to deploy the decoder model, so that the network device can deploy the decoder model based on it. Therefore, embodiments of the present disclosure provide a method for generating and deploying models, which can be used to train and deploy AI/ML models.
  • Figure 6e is a schematic flowchart of a model training and deployment method provided by an embodiment of the present disclosure. The method is executed by a network device. As shown in Figure 6e, the method may include the following steps:
  • Step 601e Obtain the capability information reported by the UE.
  • Step 602e Send model information of the encoder model to be trained and/or model information of the decoder model to be trained to the UE based on the capability information.
  • Step 603e Obtain the model information of the decoder model sent by the UE.
  • the model information of the decoder model is used to deploy the decoder model.
  • Step 604e Generate a decoder model based on the model information of the decoder model.
  • steps 601e to 604e please refer to the description of the above embodiments, and the embodiments of the present disclosure will not be repeated here.
  • Step 605e Receive the updated model information of the decoder model sent by the UE.
  • Among them, the model information of the updated decoder model includes: the full model information of the updated decoder model; or, the difference model information between the model information of the updated decoder model and the model information of the original decoder model.
  • Step 606e Optimize the original decoder model based on the model information of the updated decoder model to update the model.
  • When the model information of the updated decoder model is the full model information, the network device can first determine the model difference information between the full model information and the model information of the original decoder model, and then optimize the original decoder model based on the model difference information to perform the model update.
  • When the model information of the updated decoder model is the difference model information, the network device can directly optimize the original decoder model based on the difference model information to perform the model update.
  • Among them, the model information of the optimized and adjusted decoder model is consistent with the model information of the updated decoder model, so that the network device can subsequently use the updated decoder model for decoding. The decoding accuracy based on the updated decoder model is higher, which can ensure the accuracy of subsequent signal processing.
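The optimization path of step 606e can be sketched as below: when full model information is received, the difference is determined first; when difference information is received, it is applied directly. The function name and the flat dict format are assumptions made for illustration only.

```python
def optimize_original_decoder(original, updated_info, is_full):
    """Sketch of step 606e: adjust the original decoder model in place.

    original / updated_info are flat {parameter: value} dicts standing in
    for real model information.
    """
    if is_full:
        # Determine the model difference information between the full model
        # information and the original decoder model's information.
        difference = {n: updated_info[n] - original[n] for n in original}
    else:
        # Difference model information can be applied directly.
        difference = updated_info
    for name, delta in difference.items():
        original[name] += delta   # optimize/adjust parameter by parameter
    return original


original = {"w0": 1.0, "w1": 2.0}
full_info = {"w0": 1.5, "w1": 1.75}
optimized = optimize_original_decoder(original, full_info, is_full=True)
# After adjustment, the optimized model's information matches the update.
```

Unlike the replacement path of Figure 6d, this path keeps the original model object and mutates it, which is why the disclosure stresses that the adjusted model's information ends up consistent with the updated decoder model's information.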
  • In summary, the UE first reports capability information to the network device. This capability information can be used to indicate the UE's AI and/or ML support capabilities; the UE then obtains the model information of the encoder model to be trained and/or the decoder model to be trained sent by the network device, generates the encoder model and the decoder model accordingly, and finally sends the model information of the decoder model to the network device. The model information of the decoder model is used to deploy the decoder model, so that the network device can deploy the decoder model based on it. Therefore, embodiments of the present disclosure provide a method for generating and deploying models, which can be used to train and deploy AI/ML models.
  • Figure 7 is a schematic structural diagram of a model training and deployment device provided by an embodiment of the present disclosure. As shown in Figure 7, the device may include:
  • a reporting module configured to report capability information to the network device, where the capability information is used to indicate the AI and/or ML support capabilities of the UE;
  • An acquisition module configured to acquire the model information of the encoder model to be trained and/or the model information of the decoder model to be trained sent by the network device;
  • a generation module configured to generate an encoder model and a decoder model based on the model information of the encoder model to be trained and/or the model information of the decoder model to be trained;
  • a sending module configured to send model information of the decoder model to the network device, where the model information of the decoder model is used to deploy the decoder model.
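Taken together, the four modules above implement the UE-side flow. A minimal sketch follows, in which the network stub, the message/method names, and the trivial "training" step are all hypothetical stand-ins, not anything defined by the disclosure:

```python
class NetworkStub:
    """Stand-in for the network device side of the air interface."""
    def __init__(self):
        self.capability = None
        self.deployed_decoder_info = None

    def report_capability(self, info):       # peer of the reporting module
        self.capability = info

    def models_to_train(self):               # peer of the acquisition module
        # Model information of the encoder/decoder models to be trained,
        # chosen based on the reported capability (format is illustrative).
        return {"encoder": {"layers": 2}, "decoder": {"layers": 2}}

    def deploy_decoder(self, decoder_info):  # peer of the sending module
        self.deployed_decoder_info = decoder_info


def ue_flow(network):
    network.report_capability({"ai_ml_supported": True})
    info = network.models_to_train()
    # Generation module: real training would fit the encoder/decoder pair
    # on sample data; here it is elided to a marker field.
    encoder = {"trained": True, **info["encoder"]}
    decoder = {"trained": True, **info["decoder"]}
    network.deploy_decoder(decoder)
    return encoder


net = NetworkStub()
encoder = ue_flow(net)
```

The UE keeps the encoder model for encoding its own reports, while only the decoder model's information crosses the air interface for deployment at the network device.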
  • In summary, the UE first reports capability information to the network device. This capability information can be used to indicate the UE's AI and/or ML support capabilities; the UE then obtains the model information of the encoder model to be trained and/or the decoder model to be trained sent by the network device, generates the encoder model and the decoder model accordingly, and finally sends the model information of the decoder model to the network device. The model information of the decoder model is used to deploy the decoder model, so that the network device can deploy the decoder model based on it. Therefore, embodiments of the present disclosure provide a method for generating and deploying models, which can be used to train and deploy AI/ML models.
  • the model includes at least one of the following:
  • the capability information includes at least one of the following:
  • the maximum support capability information of the UE for the model includes the structural information of the most complex model supported by the UE.
  • the generation module is also used to:
  • the encoder model to be trained and the decoder model to be trained are trained based on the sample data to generate an encoder model and a decoder model.
  • the model information of the decoder model includes at least one of the following:
  • the device is also used for:
  • the indication information is used to indicate the type of information when the UE reports to the network device; the information type includes at least one of the original reported information that has not been encoded by the encoder model, and the information obtained after the original reported information is encoded by the encoder model;
  • the reporting information is information to be reported by the UE to the network device; the reporting information includes channel state information CSI information;
  • the CSI information includes at least one of the following:
  • signal-to-interference-plus-noise ratio (SINR);
  • In a case where the information type indicated by the indication information includes the information obtained after the original reported information is encoded by the encoder model, the device is also used for:
  • the device is also used for:
  • the device is also used for:
  • obtain update instruction information sent by the network device, where the update instruction information is used to instruct the UE to adjust model parameters; or, the update instruction information includes model information of a new encoder model and/or model information of a new decoder model, where the types of the new encoder model and the new decoder model are different from the types of the original encoder model and the original decoder model;
  • the new encoder model and the new decoder model are retrained to obtain an updated encoder model and an updated decoder model.
  • the device is also used for:
  • the original encoder model and the original decoder model are retrained to obtain an updated encoder model and an updated decoder model, wherein the distortion degree of the updated encoder model and the updated decoder model is lower than a second threshold, and the second threshold is less than or equal to the first threshold.
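The two-threshold relationship described above can be sketched as a loop: retraining is triggered when the distortion degree reaches the first threshold, and stops once it falls below the (smaller or equal) second threshold. The distortion-halving retraining pass below is invented purely for illustration.

```python
def retrain_until_converged(distortion, first_threshold, second_threshold,
                            retrain_pass, max_rounds=100):
    """Retrain while the encoder/decoder pair's distortion degree is not
    yet below the second threshold (second_threshold <= first_threshold)."""
    assert second_threshold <= first_threshold
    rounds = 0
    while distortion >= second_threshold and rounds < max_rounds:
        distortion = retrain_pass(distortion)   # one retraining iteration
        rounds += 1
    return distortion, rounds


first_threshold = 0.10
second_threshold = 0.05
# Retraining is triggered because the distortion 0.12 exceeds the first
# threshold; each (hypothetical) pass halves the distortion.
final, rounds = retrain_until_converged(0.12, first_threshold,
                                        second_threshold, lambda d: d / 2)
```

Requiring the stopping threshold to be at most the triggering threshold gives the update some hysteresis, so a freshly updated model pair does not immediately re-trigger another retraining round.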
  • the device is also used for:
  • when the update instruction information is used to instruct the UE to adjust model parameters, the UE obtains the new encoder model and the new decoder model by adjusting the model parameters of the original encoder model and the original decoder model; or
  • when the update instruction information includes the model information of a new encoder model and/or the model information of a new decoder model, the UE generates the new encoder model and the new decoder model based on the model information of the new encoder model and/or the model information of the new decoder model.
  • Among them, the types of the new encoder model and the new decoder model are different from the types of the original encoder model and the original decoder model.
  • the device is also used for:
  • the original encoder model is directly replaced with the updated encoder model.
  • the device is also used for:
  • the original encoder model is optimized based on the difference model information.
  • the device is also used for:
  • the model information of the updated decoder model is sent to the network device.
  • the model information of the updated decoder model includes:
  • difference model information between the model information of the updated decoder model and the model information of the original decoder model.
  • Figure 8 is a schematic structural diagram of a model training and deployment device provided by an embodiment of the present disclosure. As shown in Figure 8, the device may include:
  • the first acquisition module is used to acquire the capability information reported by the UE, where the capability information is used to indicate the AI and/or ML support capabilities of the UE;
  • a sending module configured to send model information of the encoder model to be trained and/or model information of the decoder model to be trained to the UE based on the capability information;
  • the second acquisition module is used to acquire the model information of the decoder model sent by the UE, and the model information of the decoder model is used to deploy the decoder model;
  • a generating module configured to generate the decoder model based on model information of the decoder model.
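The four network-side modules listed above can be sketched as follows; the class, the method names, and the model-information format are illustrative assumptions, not part of the disclosure:

```python
class NetworkDevice:
    """Sketch of the network-side modules of Figure 8."""
    def __init__(self):
        self.decoder = None

    def on_capability(self, capability):
        # First acquisition module + sending module: based on the reported
        # UE capability, pick encoder/decoder models to be trained that both
        # sides can support (the min() policy here is invented).
        layers = min(4, capability.get("max_model_layers", 1))
        return {"encoder": {"layers": layers}, "decoder": {"layers": layers}}

    def on_decoder_info(self, decoder_info):
        # Second acquisition module + generating module: generate/deploy the
        # decoder model from the model information sent by the UE.
        self.decoder = dict(decoder_info)


device = NetworkDevice()
to_train = device.on_capability({"max_model_layers": 8})
device.on_decoder_info({"layers": to_train["decoder"]["layers"],
                        "weights": [0.1, 0.2]})
```

Clamping the model size to the UE's reported maximum support capability reflects the earlier point that the encoder model to be trained must be a model the UE supports.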
  • In summary, the UE first reports capability information to the network device. This capability information can be used to indicate the UE's AI and/or ML support capabilities; the UE then obtains the model information of the encoder model to be trained and/or the decoder model to be trained sent by the network device, generates the encoder model and the decoder model accordingly, and finally sends the model information of the decoder model to the network device. The model information of the decoder model is used to deploy the decoder model, so that the network device can deploy the decoder model based on it. Therefore, embodiments of the present disclosure provide a method for generating and deploying models, which can be used to train and deploy AI/ML models.
  • the model includes at least one of the following:
  • the capability information includes at least one of the following:
  • the maximum support capability information of the UE for the model includes the structural information of the most complex model supported by the UE.
  • the sending module is also used to:
  • determine an encoder model to be trained and/or a decoder model to be trained based on the capability information; wherein the encoder model to be trained is a model supported by the UE, and the decoder model to be trained is a model supported by the network device;
  • the model information of the encoder model to be trained and/or the model information of the decoder model to be trained are sent to the UE.
  • the model information of the decoder model includes at least one of the following:
  • the device is also used for:
  • the indication information is used to indicate the type of information when the UE reports to the network device;
  • the information type includes at least one of the following:
  • the original reported information that has not been encoded by the encoder model; and the information obtained after the original reported information is encoded by the encoder model.
  • the reported information is information to be reported by the UE to the network device; the reported information includes CSI information;
  • the CSI information includes at least one of the following:
  • In a case where the information type indicated by the indication information includes the information obtained after the original reported information is encoded by the encoder model, the device is also used for:
  • the decoder model is used to decode the information reported by the UE.
  • the device is also used for:
  • Model updating is performed based on the model information of the updated decoder model.
  • the device is also used for:
  • the update instruction information is used to instruct the UE to adjust model parameters
  • the update instruction information includes model information of a new encoder model and/or model information of a new decoder model.
  • the types of the new encoder model and the new decoder model are different from the types of the original encoder model and the original decoder model.
  • the device is also used for:
  • difference model information between the model information of the updated decoder model and the model information of the original decoder model.
  • the model update based on the model information of the updated decoder model includes:
  • the updated decoder model is used to replace the original decoder model to perform model updating.
  • the model update based on the model information of the updated decoder model includes:
  • the original decoder model is optimized to perform model update.
  • FIG. 9 is a block diagram of a user equipment UE900 provided by an embodiment of the present disclosure.
  • UE900 can be a mobile phone, computer, digital broadcast terminal device, messaging device, game console, tablet device, medical device, fitness device, personal digital assistant, etc.
  • UE 900 may include at least one of the following components: a processing component 902, a memory 904, a power supply component 906, a multimedia component 908, an audio component 910, an input/output (I/O) interface 912, a sensor component 913, and a communication component 916.
  • Processing component 902 generally controls the overall operations of UE 900, such as operations associated with display, phone calls, data communications, camera operations, and recording operations.
  • the processing component 902 may include at least one processor 920 to execute instructions to complete all or part of the steps of the above method. Additionally, processing component 902 may include at least one module that facilitates interaction between processing component 902 and other components. For example, processing component 902 may include a multimedia module to facilitate interaction between multimedia component 908 and processing component 902.
  • Memory 904 is configured to store various types of data to support operations at UE 900. Examples of this data include instructions for any application or method operating on the UE900, contact data, phonebook data, messages, pictures, videos, etc.
  • Memory 904 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
  • Power supply component 906 provides power to various components of UE 900.
  • Power component 906 may include a power management system, at least one power supply, and other components associated with generating, managing, and distributing power to UE 900.
  • Multimedia component 908 includes a screen that provides an output interface between the UE 900 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes at least one touch sensor to sense touches, slides, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide operation, but also detect the duration and pressure associated with the touch or slide operation.
  • multimedia component 908 includes a front-facing camera and/or a rear-facing camera. When UE900 is in operating mode, such as shooting mode or video mode, the front camera and/or rear camera can receive external multimedia data.
  • Each front-facing camera and rear-facing camera can be a fixed optical lens system or have focusing and optical zoom capabilities.
  • Audio component 910 is configured to output and/or input audio signals.
  • audio component 910 includes a microphone (MIC) configured to receive external audio signals when UE 900 is in operating modes, such as call mode, recording mode, and voice recognition mode. The received audio signals may be further stored in memory 904 or sent via communications component 916 .
  • audio component 910 also includes a speaker for outputting audio signals.
  • the I/O interface 912 provides an interface between the processing component 902 and a peripheral interface module, which may be a keyboard, a click wheel, a button, etc. These buttons may include, but are not limited to: Home button, Volume buttons, Start button, and Lock button.
  • the sensor component 913 includes at least one sensor for providing various aspects of status assessment for the UE 900 .
  • For example, the sensor component 913 can detect the open/closed state of the UE 900 and the relative positioning of components, such as the display and keypad of the UE 900; the sensor component 913 can also detect a position change of the UE 900 or a component of the UE 900, the presence or absence of user contact with the UE 900, the orientation or acceleration/deceleration of the UE 900, and temperature changes of the UE 900.
  • Sensor assembly 913 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
  • Sensor assembly 913 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor component 913 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • Communication component 916 is configured to facilitate wired or wireless communication between UE 900 and other devices.
  • UE900 can access wireless networks based on communication standards, such as WiFi, 2G or 3G, or a combination thereof.
  • the communication component 916 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communications component 916 also includes a near field communications (NFC) module to facilitate short-range communications.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
  • UE 900 may be implemented by at least one application specific integrated circuit (ASIC), digital signal processor (DSP), digital signal processing device (DSPD), programmable logic device (PLD), field programmable gate array (FPGA), controller, microcontroller, microprocessor, or other electronic components, for executing the above method.
  • FIG. 10 is a block diagram of a network side device 1000 provided by an embodiment of the present disclosure.
  • the network side device 1000 may be provided as a network side device.
  • the network side device 1000 includes a processing component 1011, which further includes at least one processor, and memory resources represented by a memory 1032 for storing instructions, such as application programs, that can be executed by the processing component 1011.
  • the application program stored in memory 1032 may include one or more modules, each corresponding to a set of instructions.
  • the processing component 1010 is configured to execute instructions to perform any of the foregoing methods applied to the network side device, for example, the method shown in FIG. 1 .
  • the network side device 1000 may also include a power supply component 1026 configured to perform power management of the network side device 1000, a wired or wireless network interface 1050 configured to connect the network side device 1000 to a network, and an input/output (I/O) interface 1058.
  • the network side device 1000 can operate based on an operating system stored in the memory 1032, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or similar.
  • the methods provided by the embodiments of the present disclosure are introduced from the perspectives of network side equipment and UE respectively.
  • the network side device and the UE may include a hardware structure and a software module to implement the above functions in the form of a hardware structure, a software module, or a hardware structure plus a software module.
  • a certain function among the above functions can be executed by a hardware structure, a software module, or a hardware structure plus a software module.
  • the communication device may include a transceiver module and a processing module.
  • the transceiver module may include a sending module and/or a receiving module.
  • the sending module is used to implement the sending function
  • the receiving module is used to implement the receiving function.
  • the transceiving module may implement the sending function and/or the receiving function.
  • the communication device may be a terminal device (such as the terminal device in the foregoing method embodiment), a device in the terminal device, or a device that can be used in conjunction with the terminal device.
  • the communication device may be a network device, a device in a network device, or a device that can be used in conjunction with the network device.
  • the communication device may be a network device, or may be a terminal device (such as the terminal device in the foregoing method embodiment), or may be a chip, chip system, or processor that supports the network device to implement the above method, or may be a terminal device that supports A chip, chip system, or processor that implements the above method.
  • the device can be used to implement the method described in the above method embodiment. For details, please refer to the description in the above method embodiment.
  • a communications device may include one or more processors.
  • the processor may be a general-purpose processor or a special-purpose processor, etc.
  • it can be a baseband processor or a central processing unit.
  • the baseband processor can be used to process communication protocols and communication data
  • the central processor can be used to control the communication device (such as a network side device, baseband chip, terminal device, terminal device chip, DU or CU, etc.), execute a computer program, and process data of the computer program.
  • the communication device may also include one or more memories, on which a computer program may be stored, and the processor executes the computer program, so that the communication device executes the method described in the above method embodiment.
  • data may also be stored in the memory.
  • the communication device and the memory can be provided separately or integrated together.
  • the communication device may also include a transceiver and an antenna.
  • the transceiver can be called a transceiver unit, a transceiver, or a transceiver circuit, etc., and is used to implement transceiver functions.
  • the transceiver can include a receiver and a transmitter.
  • the receiver can be called a receiver or a receiving circuit, etc., and is used to implement the receiving function;
  • the transmitter can be called a transmitter or a transmitting circuit, etc., and is used to implement the transmitting function.
  • the communication device may also include one or more interface circuits.
  • Interface circuitry is used to receive code instructions and transmit them to the processor.
  • the processor executes the code instructions to cause the communication device to perform the method described in the above method embodiment.
  • the communication device is a terminal device (such as the terminal device in the foregoing method embodiment): the processor is configured to execute the method shown in any one of Figures 1-4.
  • the communication device is a network device: a transceiver is used to perform the method shown in any one of Figures 5-7.
  • a transceiver for implementing receiving and transmitting functions may be included in the processor.
  • the transceiver may be a transceiver circuit, an interface, or an interface circuit.
  • the transceiver circuits, interfaces or interface circuits used to implement the receiving and transmitting functions can be separate or integrated together.
  • the above-mentioned transceiver circuit, interface or interface circuit can be used for reading and writing codes/data, or the above-mentioned transceiver circuit, interface or interface circuit can be used for signal transmission or transfer.
  • the processor may store a computer program, and the computer program runs on the processor, which can cause the communication device to perform the method described in the above method embodiment.
  • the computer program may be embedded in the processor, in which case the processor may be implemented in hardware.
  • the communication device may include a circuit, and the circuit may implement the functions of sending or receiving or communicating in the foregoing method embodiments.
  • the processors and transceivers described in this disclosure may be implemented on integrated circuits (ICs), analog ICs, radio frequency integrated circuits (RFICs), mixed signal ICs, application specific integrated circuits (ASICs), printed circuit boards (PCBs), electronic equipment, etc.
  • the processor and transceiver can also be manufactured using various IC process technologies, such as complementary metal oxide semiconductor (CMOS), N-type metal oxide semiconductor (NMOS), P-type metal oxide semiconductor (PMOS), bipolar junction transistor (BJT), bipolar CMOS (BiCMOS), silicon germanium (SiGe), gallium arsenide (GaAs), etc.
  • the communication device described in the above embodiments may be a network device or a terminal device (such as the terminal device in the foregoing method embodiment), but the scope of the communication device described in the present disclosure is not limited thereto, and the structure of the communication device is not limited by the above description.
  • the communication device may be a stand-alone device or may be part of a larger device.
  • the communication device may be:
  • a set of ICs, which may also include storage components for storing data and computer programs;
  • the communication device may be a chip or a system on a chip
  • the chip includes a processor and an interface.
  • the number of processors can be one or more, and the number of interfaces can be multiple.
  • the chip also includes a memory, which is used to store necessary computer programs and data.
  • Embodiments of the present disclosure also provide a communication system.
  • the system includes a communication device as a terminal device in the foregoing embodiment (such as the first terminal device in the foregoing method embodiment) and a communication device as a network device.
  • the present disclosure also provides a readable storage medium on which instructions are stored, and when the instructions are executed by a computer, the functions of any of the above method embodiments are implemented.
  • the present disclosure also provides a computer program product, which, when executed by a computer, implements the functions of any of the above method embodiments.
  • the above embodiments it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
  • software it may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer programs.
  • the computer program When the computer program is loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the present disclosure are generated in whole or in part.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable device.
  • the computer program may be stored in, or transmitted from one computer-readable storage medium to another; for example, the computer program may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center through wired (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (such as infrared, radio, or microwave) means.
  • The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media.
  • The usable media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., high-density digital video discs (DVDs)), or semiconductor media (e.g., solid state disks (SSDs)), etc.
  • In the present disclosure, "at least one" can also be described as one or more, and "a plurality" can be two, three, four, or more; the present disclosure is not limited thereto.
  • Technical features distinguished by "first", "second", "third", "A", "B", "C", "D", etc. are in no particular order of precedence.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

本公开提出一种方法/装置/设备/存储介质,属于通信技术领域。UE会先向网络设备上报能力信息,该能力信息可以用于指示UE的AI和/或ML的支持能力,之后UE会获取网络设备发送的待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,并基于待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,生成编码器模型和译码器模型,最后将译码器模型的模型信息发送至网络设备,该译码器模型的模型信息用于部署译码器模型。由此可知,本公开实施例提供了一种生成和部署模型的方法,可以用于训练和部署AI/ML模型。

Description

一种模型训练部署方法/装置/设备及存储介质 技术领域
本公开涉及通信技术领域,尤其涉及一种模型训练部署方法/装置/设备及存储介质。
背景技术
随着AI(Artificial Intelligent,人工智能)技术和ML(Machine Learning,机器学习)技术的不断发展,AI技术和ML技术的应用领域(如图像识别、语音处理、自然语言处理、游戏等)也越来越广泛。其中,在使用AI技术和ML技术时通常需要利用AI/ML模型对信息进行编解码处理。因此,亟需一种有关AI/ML的模型训练部署方法。
发明内容
本公开提出的模型训练部署方法/装置/设备及存储介质,用于训练和部署AI/ML模型。
本公开一方面实施例提出的模型训练部署方法,应用于UE,包括:
向网络设备上报能力信息,所述能力信息用于指示所述UE的AI和/或ML的支持能力;
获取所述网络设备发送的待训练编码器模型的模型信息和/或待训练译码器模型的模型信息;
基于所述待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,生成编码器模型和译码器模型;
将所述译码器模型的模型信息发送至所述网络设备,所述译码器模型的模型信息用于部署所述译码器模型。
本公开另一方面实施例提出的模型训练部署方法,应用于网络设备,包括:
获取UE上报的能力信息,所述能力信息用于指示所述UE的AI和/或ML的支持能力;
基于所述能力信息向所述UE发送待训练编码器模型的模型信息和/或待训练译码器模型的模型信息;
获取所述UE发送的译码器模型的模型信息,所述译码器模型的模型信息用于部署所述译码器模型;
基于所述译码器模型的模型信息部署所述译码器模型。
本公开又一方面实施例提出的一种模型训练部署装置,包括:
上报模块,用于向网络设备上报能力信息,所述能力信息用于指示所述UE的AI和/或ML的支持能力;
获取模块,用于获取所述网络设备发送的待训练编码器模型的模型信息和/或待训练译码器模型的模型信息;
生成模块,用于基于所述待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,生成编码器模型和译码器模型;
发送模块,用于将所述译码器模型的模型信息发送至所述网络设备,所述译码器模型的模型信息用于部署所述译码器模型。
本公开又一方面实施例提出的一种模型训练部署装置,包括:
第一获取模块,用于获取UE上报的能力信息,所述能力信息用于指示所述UE的AI和/或ML的支持能力;
发送模块,用于基于所述能力信息向所述UE发送待训练编码器模型的模型信息和/或待训练译码器模型的模型信息;
第二获取模块,用于获取所述UE发送的译码器模型的模型信息,所述译码器模型的模型信息用于部署所述译码器模型;
生成模块,用于基于所述译码器模型的模型信息生成所述译码器模型。
本公开又一方面实施例提出的一种通信装置,所述装置包括处理器和存储器,所述存储器中存储有计算机程序,所述处理器执行所述存储器中存储的计算机程序,以使所述装置执行如上一方面实施例提出的方法。
本公开又一方面实施例提出的一种通信装置,所述装置包括处理器和存储器,所述存储器中存储有计算机程序,所述处理器执行所述存储器中存储的计算机程序,以使所述装置执行如上另一方面实施例提出的方法。
本公开又一方面实施例提出的通信装置,包括:处理器和接口电路;
所述接口电路,用于接收代码指令并传输至所述处理器;
所述处理器,用于运行所述代码指令以执行如一方面实施例提出的方法。
本公开又一方面实施例提出的通信装置,包括:处理器和接口电路;
所述接口电路,用于接收代码指令并传输至所述处理器;
所述处理器,用于运行所述代码指令以执行如另一方面实施例提出的方法。
本公开又一方面实施例提出的计算机可读存储介质,用于存储有指令,当所述指令被执行时,使如一方面实施例提出的方法被实现。
本公开又一方面实施例提出的计算机可读存储介质,用于存储有指令,当所述指令被执行时,使如另一方面实施例提出的方法被实现。
综上所述,在本公开实施例提供的方法/装置/用户设备/基站及存储介质之中,UE会先向网络设备上报能力信息,该能力信息可以用于指示UE的AI和/或ML的支持能力,之后UE会获取网络设备发送的待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,并基于待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,生成编码器模型和译码器模型,最后将译码器模型的模型信息发送至网络设备,该译码器模型的模型信息用于部署译码器模型,以便网络设备可以基于该译码器模型的模型信息部署译码器模型。由此,本公开实施例提供了一种生成和部署模型的方法,可以用于训练和部署AI/ML模型。
附图说明
本公开上述的和/或附加的方面和优点从下面结合附图对实施例的描述中将变得明显和容易理解,其中:
图1为本公开一个实施例所提供的模型训练部署方法的流程示意图;
图2a为本公开另一个实施例所提供的模型训练部署方法的流程示意图;
图2b为本公开再一个实施例所提供的模型训练部署方法的流程示意图;
图2c为本公开又一个实施例所提供的模型训练部署方法的流程示意图;
图2d为本公开又一个实施例所提供的模型训练部署方法的流程示意图;
图2e为本公开又一个实施例所提供的模型训练部署方法的流程示意图;
图3a为本公开又一个实施例所提供的模型训练部署方法的流程示意图;
图3b为本公开又一个实施例所提供的模型训练部署方法的流程示意图;
图3c为本公开又一个实施例所提供的模型训练部署方法的流程示意图;
图3d为本公开又一个实施例所提供的模型训练部署方法的流程示意图;
图3e为本公开又一个实施例所提供的模型训练部署方法的流程示意图;
图4为本公开又一个实施例所提供的模型训练部署方法的流程示意图;
图5a为本公开又一个实施例所提供的模型训练部署方法的流程示意图;
图5b为本公开又一个实施例所提供的模型训练部署方法的流程示意图;
图5c为本公开又一个实施例所提供的模型训练部署方法的流程示意图;
图6a为本公开又一个实施例所提供的模型训练部署方法的流程示意图;
图6b为本公开又一个实施例所提供的模型训练部署方法的流程示意图;
图6c为本公开又一个实施例所提供的模型训练部署方法的流程示意图;
图6d为本公开又一个实施例所提供的模型训练部署方法的流程示意图;
图6e为本公开又一个实施例所提供的模型训练部署方法的流程示意图;
图7为本公开一个实施例所提供的模型训练部署装置的结构示意图;
图8为本公开另一个实施例所提供的模型训练部署装置的结构示意图;
图9是本公开一个实施例所提供的一种用户设备的框图;
图10为本公开一个实施例所提供的一种网络侧设备的框图。
具体实施方式
这里将详细地对示例性实施例进行说明,其示例表示在附图中。下面的描述涉及附图时,除非另有表示,不同附图中的相同数字表示相同或相似的要素。以下示例性实施例中所描述的实施方式并不代表与本公开实施例相一致的所有实施方式。相反,它们仅是与如所附权利要求书中所详述的、本公开实施例的一些方面相一致的装置和方法的例子。
在本公开实施例使用的术语是仅仅出于描述特定实施例的目的,而非旨在限制本公开实施例。在本公开实施例和所附权利要求书中所使用的单数形式的“一种”和“该”也旨在包括多数形式,除非上下文清楚地表示其他含义。还应当理解,本文中使用的术语“和/或”是指并包含一个或多个相关联的列出项目的任何或所有可能组合。
应当理解,尽管在本公开实施例可能采用术语第一、第二、第三等来描述各种信息,但这些信息不应限于这些术语。这些术语仅用来将同一类型的信息彼此区分开。例如,在不脱离本公开实施例范围的情况下,第一信息也可以被称为第二信息,类似地,第二信息也可以被称为第一信息。取决于语境,如在此所使用的词语“如果”及“若”可以被解释成为“在……时”或“当……时”或“响应于确定”。
下面参考附图对本公开实施例所提供的模型训练部署方法/装置/设备及存储介质进行详细描述。
图1为本公开实施例所提供的一种模型训练部署方法的流程示意图,该方法由UE(User Equipment,用户设备)执行,如图1所示,该方法可以包括以下步骤:
步骤101、向网络设备上报能力信息。
需要说明的是,在本公开的一个实施例之中,UE可以是指向用户提供语音和/或数据连通性的设备。UE可以经RAN(Radio Access Network,无线接入网)与一个或多个核心网进行通信,UE可以是物联网终端,如传感器设备、移动电话(或称为“蜂窝”电话)和具有物联网终端的计算机,例如,可以是固定式、便携式、袖珍式、手持式、计算机内置的或者车载的装置。例如,站(Station,STA)、订户单元(subscriber unit)、订户站(subscriber station)、移动站(mobile station)、移动台(mobile)、远程站(remote station)、接入点、远程终端(remote terminal)、接入终端(access terminal)、用户装置(user device)或用户代理(user agent)。或者,UE也可以是无人飞行器的设备。或者,UE也可以是车载设备,比如,可以是具有无线通信功能的行车电脑,或者是外接行车电脑的无线终端。或者,UE也可以是路边设备,比如,可以是具有无线通信功能的路灯、信号灯或者其它路边设备等。
其中,在本公开的一个实施例之中,该能力信息可以用于指示UE的AI(Artificial Intelligent,人工智能)和/或ML(Machine Learning,机器学习)的支持能力。
以及,在本公开的一个实施例之中,该模型可以包括以下至少一种:
AI模型;
ML模型。
进一步的,在本公开的一个实施例之中,上述的能力信息可以包括以下至少一种:
UE是否支持AI;
UE是否支持ML;
UE支持的AI和/或ML的种类;
UE对于模型的最大支持能力信息,该最大支持能力信息包括UE支持的最复杂的模型的结构信息。
其中,在本公开的一个实施例之中,上述的结构信息例如可以包括模型的层数等。
步骤102、获取网络设备发送的待训练编码器模型的模型信息和/或待训练译码器模型的模型信息。
其中,在本公开的一个实施例之中,网络设备中会存储有多个不同的待训练编码器模型和/或多个不同的待训练译码器模型,其中,编码器模型和译码器模型之间存在有对应关系。
以及,在本公开的一个实施例之中,UE向网络设备发送了能力信息之后,网络设备可以基于该能力信息从其存储的多个不同的待训练编码器模型中选择出匹配于UE的AI和/或ML的支持能力的待训练编码器模型,和/或,从其存储的多个不同的待训练译码器模型中选择出匹配于网络设备自身的AI和/或ML的支持能力的待训练译码器模型,之后会将所选择的待训练编码器模型的模型信息和/或待训练译码器模型的模型信息发送至UE。其中,关于模型信息的相关内容会在后续进行详细介绍。
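上述“基于能力信息从存储的多个候选模型中选择匹配模型”的过程可以用如下示意代码概括。其中候选模型与能力信息的字段名(framework、num_layers、supported、max_layers 等)均为本示意自行假设,并非本公开规定的信元格式,仅用于说明按能力筛选的思路:

```python
def select_model(candidates, ue_capability):
    """从存储的多个待训练模型中,选出第一个匹配UE能力信息的模型。

    candidates:    候选模型列表,每项含模型框架种类与层数(字段名为示意假设)
    ue_capability: UE上报的能力信息,含其支持的框架集合与最大层数(示意假设)
    """
    for model in candidates:
        if (model["framework"] in ue_capability["supported"]
                and model["num_layers"] <= ue_capability["max_layers"]):
            return model
    return None  # 不存在UE能支持的模型
```

例如,当UE仅支持ML且最大层数为8时,层数为16的候选模型会被跳过,返回满足约束的较小模型。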
步骤103、基于待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,生成编码器模型和译码器模型。
在本公开的一个实施例之中,UE通过对该待训练编码器模型和/或待训练译码器模型进行训练,以生成编码器模型和译码器模型。
以及,关于上述生成编码器模型和译码器模型的具体方法,会在后续实施例进行详细描述。
步骤104、将译码器模型的模型信息发送至网络设备,该译码器模型的模型信息可以用于部署译码器模型。
其中,在本公开的一个实施例之中,上述的译码器模型的模型信息可以包括以下至少一种:
模型的种类;
模型的模型参数。
以及,在本公开的一个实施例之中,上述的模型的种类可以包括:
CNN(Convolutional Neural Network,卷积神经网络)模型;
全连接DNN(Deep Neural Network,深度神经网络)模型;
CNN与全连接DNN结合的模型。
以及,需要说明的是,在本公开的一个实施例之中,当模型的种类不同时,该模型的模型参数也会有所不同。
具体而言,在本公开的一个实施例之中,当译码器模型的种类为:CNN模型时,译码器模型的模型参数可以包括CNN模型的压缩率、CNN模型卷积层的个数、各卷积层之间的排布信息、每个卷积层的权重信息,每个卷积层的卷积核大小,每个卷积层应用的归一化层和激活函数类型中的至少一种。
在本公开的另一个实施例之中,当译码器模型的种类为:全连接DNN模型时,译码器模型的模型参数可以包括全连接DNN模型的压缩率、全连接层的个数、各全连接层之间的排布信息、每个全连接层的权重信息、每个全连接层的节点数,每个全连接层应用的归一化层和激活函数类型中的至少一种。
在本公开的又一个实施例之中,当译码器模型的种类为:CNN与全连接DNN的结合模型时,译码器模型的模型参数可以包括CNN与全连接DNN的结合模型的压缩率、卷积层与全连接层的个数、搭配模式、卷积层的权重信息、卷积核大小、全连接层的节点数、全连接层的权重信息,每个全连接层和卷积层应用的归一化层和激活函数类型中的至少一种。
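上述三类译码器模型的模型信息(模型的种类+模型参数)可以用如下示意结构表达。字段名(kind、compression_ratio 等)为本示意自行假设,每类必填参数也仅取上文列举参数中的一小部分:

```python
# 各模型种类要求的最小参数集合(仅为示意,上文列举的参数远不止这些)
REQUIRED_PARAMS = {
    "CNN":     {"compression_ratio", "num_conv_layers"},                   # CNN模型
    "DNN":     {"compression_ratio", "num_fc_layers"},                     # 全连接DNN模型
    "CNN+DNN": {"compression_ratio", "num_conv_layers", "num_fc_layers"},  # 结合模型
}

def make_model_info(kind, **params):
    """构造“模型的种类 + 模型的模型参数”形式的模型信息,并校验必填参数。"""
    missing = REQUIRED_PARAMS[kind] - params.keys()
    if missing:
        raise ValueError("缺少参数: %s" % sorted(missing))
    return {"kind": kind, "params": params}
```

这样,UE向网络设备发送的译码器模型信息即可按“种类不同、参数随之不同”的约定进行校验与解析。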
综上所述,在本公开实施例提供的方法之中,UE会先向网络设备上报能力信息,该能力信息可以用于指示UE的AI和/或ML的支持能力,之后UE会获取网络设备发送的待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,并基于待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,生成编码器模型和译码器模型,最后将译码器模型的模型信息发送至网络设备,该译码器模型的模型信息用于部署译码器模型,以便网络设备可以基于该译码器模型的模型信息部署译码器模型。由此,本公开实施例提供了一种生成和部署模型的方法,可以用于训练和部署AI/ML模型。
图2a为本公开实施例所提供的一种模型训练部署方法的流程示意图,该方法由UE执行,如图2a所示,该方法可以包括以下步骤:
步骤201a、向网络设备上报能力信息。
其中,关于步骤201a的详细介绍可以参考上述实施例描述,本公开实施例在此不做赘述。
步骤202a、获取网络设备发送的待训练编码器模型的模型信息和待训练译码器模型的模型信息。
具体的,在本公开的一个实施例之中,UE向网络设备上报了能力信息之后,网络设备可以从其存储的多个不同的待训练编码器模型和多个不同的待训练译码器模型之中,基于能力信息选择出UE所支持的待训练编码器模型、以及选择出网络设备所支持的待训练译码器模型,并将所选的待训练编码器模型的模型信息和待训练译码器模型的模型信息发送至UE。其中,网络设备所选择的待训练编码器模型与待训练译码器模型是相互对应匹配的。
步骤203a、基于待训练编码器模型的模型信息部署待训练编码器模型。
步骤204a、基于待训练译码器模型的模型信息部署待训练译码器模型。
步骤205a、基于UE的测量信息和/或历史测量信息确定样本数据。
其中,在本公开的一个实施例之中,上述的测量信息可以至少包括以下的至少一种:
UE对于参考信号(例如SSB(Synchronization Signal Block,同步块)、CSI-RS(Channel State Information-Reference Signal,信道状态信息参考信号)等)的测量信息;
UE的RRM(Radio Resource management,无线资源管理)测量信息;
UE的RRC(Radio Resource Control,无线资源控制)测量信息;
UE的波束测量信息。
步骤206a、基于样本数据对待训练编码器模型和待训练译码器模型进行训练,以生成编码器模型和译码器模型。
步骤207a、将译码器模型的模型信息发送至网络设备,该译码器模型的模型信息可以用于部署译码器模型。
其中,关于步骤207a的详细介绍可以参考上述实施例描述,本公开实施例在此不做赘述。
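步骤203a-206a中“部署并基于样本数据联合训练编码器模型和译码器模型”的流程,可以用下面这个极简的线性自编码器示意(纯Python梯度下降,把二维样本压缩为一维再重建;模型结构、学习率、迭代次数均为示意假设,并非本公开规定的训练方式):

```python
import random

def mse(samples, we, wd):
    """失真度量: 样本经编码(压缩为标量)再译码(重建)后的均方误差。"""
    total = 0.0
    for x in samples:
        z = we[0] * x[0] + we[1] * x[1]                          # 编码器: 2维 -> 1维
        total += (wd[0] * z - x[0]) ** 2 + (wd[1] * z - x[1]) ** 2  # 译码重建误差
    return total / len(samples)

def train(samples, lr=0.05, epochs=200, seed=0):
    """基于样本数据联合训练编码器权重we与译码器权重wd(批量梯度下降)。"""
    rng = random.Random(seed)
    we = [rng.uniform(-0.5, 0.5) for _ in range(2)]   # 待训练编码器参数
    wd = [rng.uniform(-0.5, 0.5) for _ in range(2)]   # 待训练译码器参数
    n = len(samples)
    for _ in range(epochs):
        gwe, gwd = [0.0, 0.0], [0.0, 0.0]
        for x in samples:
            z = we[0] * x[0] + we[1] * x[1]
            e = [wd[0] * z - x[0], wd[1] * z - x[1]]  # 各维重建误差
            common = e[0] * wd[0] + e[1] * wd[1]
            for i in range(2):
                gwd[i] += 2.0 * e[i] * z              # dL/dwd_i = 2*e_i*z
                gwe[i] += 2.0 * common * x[i]         # dL/dwe_i = 2*(e·wd)*x_i
        for i in range(2):
            wd[i] -= lr * gwd[i] / n
            we[i] -= lr * gwe[i] / n
    return we, wd
```

其中样本可以由UE的测量信息和/或历史测量信息构造;训练完成后的wd即对应要发送给网络设备的译码器模型参数。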
综上所述,在本公开实施例提供的方法之中,UE会先向网络设备上报能力信息,该能力信息可以用于指示UE的AI和/或ML的支持能力,之后UE会获取网络设备发送的待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,并基于待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,生成编码器模型和译码器模型,最后将译码器模型的模型信息发送至网络设备,该译码器模型的模型信息用于部署译码器模型,以便网络设备可以基于该译码器模型的模型信息部署译码器模型。由此,本公开实施例提供了一种生成和部署模型的方法,可以用于训练和部署AI/ML模型。
图2b为本公开实施例所提供的一种模型训练部署方法的流程示意图,该方法由UE执行,如图2b所示,该方法可以包括以下步骤:
步骤201b、向网络设备上报能力信息。
其中,关于步骤201b的详细介绍可以参考上述实施例描述,本公开实施例在此不做赘述。
步骤202b、获取网络设备发送的待训练编码器模型的模型信息。
具体的,在本公开的一个实施例之中,UE向网络设备上报了能力信息之后,网络设备可以从其存储的多个不同的待训练编码器模型之中,基于能力信息选择出UE所支持的待训练编码器模型,并将所选的待训练编码器模型的模型信息发送至UE。
此外,需要说明的是,在本公开的一个实施例之中,上述的待训练编码器模型应当满足以下条件:
1、所选择出的待训练编码器模型应当是匹配于UE的能力信息的模型(即UE支持的模型);
2、所选择出的待训练编码器模型对应的待训练译码器模型应当是网络设备支持的模型。
步骤203b、基于待训练编码器模型的模型信息部署待训练编码器模型。
步骤204b、基于待训练编码器模型确定出匹配于该待训练编码器模型的待训练译码器模型。
具体的,在本公开的一个实施例之中,可以基于该待训练编码器模型的模型信息确定出匹配于该待训练编码器模型的待训练译码器模型的模型信息,之后,基于该待训练译码器模型的模型信息部署该待训练译码器模型。
步骤205b、基于UE的测量信息和/或历史测量信息确定样本数据。
步骤206b、基于样本数据对待训练编码器模型和待训练译码器模型进行训练,以生成编码器模型和译码器模型。
步骤207b、将译码器模型的模型信息发送至网络设备,该译码器模型的模型信息可以用于部署译码器模型。
其中,关于步骤205b-步骤207b的详细介绍可以参考上述实施例描述,本公开实施例在此不做赘述。
综上所述,在本公开实施例提供的方法之中,UE会先向网络设备上报能力信息,该能力信息可以用于指示UE的AI和/或ML的支持能力,之后UE会获取网络设备发送的待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,并基于待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,生成编码器模型和译码器模型,最后将译码器模型的模型信息发送至网络设备,该译码器模型的模型信息用于部署译码器模型,以便网络设备可以基于该译码器模型的模型信息部署译码器模型。由此,本公开实施例提供了一种生成和部署模型的方法,可以用于训练和部署AI/ML模型。
图2c为本公开实施例所提供的一种模型训练部署方法的流程示意图,该方法由UE执行,如图2c所示,该方法可以包括以下步骤:
步骤201c、向网络设备上报能力信息。
其中,关于步骤201c的详细介绍可以参考上述实施例描述,本公开实施例在此不做赘述。
步骤202c、获取网络设备发送的待训练编码器模型的模型信息。
步骤203c、基于UE的测量信息和/或历史测量信息确定样本数据。
步骤204c、基于样本数据对待训练编码器模型进行训练,以生成编码器模型。
其中,关于步骤201c-步骤204c的详细介绍可以参考上述实施例描述,本公开实施例在此不做赘述。
步骤205c、UE基于编码器模型的模型信息确定匹配于该编码器模型的译码器模型。
具体的,在本公开的一个实施例之中,可以基于该编码器模型的模型信息确定出匹配于该编码器模型的译码器模型的模型信息,之后,基于该译码器模型的模型信息部署该译码器模型。
步骤206c、将译码器模型的模型信息发送至网络设备,该译码器模型的模型信息可以用于部署译码器模型。
其中,关于步骤205c-步骤206c的详细介绍可以参考上述实施例描述,本公开实施例在此不做赘述。
综上所述,在本公开实施例提供的方法之中,UE会先向网络设备上报能力信息,该能力信息可以用于指示UE的AI和/或ML的支持能力,之后UE会获取网络设备发送的待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,并基于待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,生成编码器模型和译码器模型,最后将译码器模型的模型信息发送至网络设备,该译码器模型的模型信息用于部署译码器模型,以便网络设备可以基于该译码器模型的模型信息部署译码器模型。由此,本公开实施例提供了一种生成和部署模型的方法,可以用于训练和部署AI/ML模型。
图2d为本公开实施例所提供的一种模型训练部署方法的流程示意图,该方法由UE执行,如图2d所示,该方法可以包括以下步骤:
步骤201d、向网络设备上报能力信息。
其中,关于步骤201d的详细介绍可以参考上述实施例描述,本公开实施例在此不做赘述。
步骤202d、获取网络设备发送的待训练译码器模型的模型信息。
具体的,在本公开的一个实施例之中,UE向网络设备上报了能力信息之后,网络设备可以从其存储的多个不同的待训练译码器模型之中,基于能力信息选择出待训练译码器模型。
需要说明的是,在本公开的一个实施例之中,上述的“基于能力信息选择出待训练译码器模型”含义具体为:在基于能力信息选择待训练译码器模型时,所选择出的待训练译码器模型需满足以下条件:
1、所选择出的待训练译码器模型应当为网络设备支持的模型;
2、所选择出的待训练译码器模型对应的待训练编码器模型应当是匹配于UE的能力信息的模型(即UE支持的模型)。
步骤203d、基于待训练译码器模型的模型信息部署待训练译码器模型。
步骤204d、基于待训练译码器模型确定出匹配于该待训练译码器模型的待训练编码器模型。
具体的,在本公开的一个实施例之中,可以基于该待训练译码器模型的模型信息确定出匹配于该待训练译码器模型的待训练编码器模型的模型信息,之后,基于该待训练编码器模型的模型信息部署该待训练编码器模型。
以及,在本公开的一个实施例之中,所确定出的待训练编码器模型具体为该UE支持的模型。
步骤205d、基于UE的测量信息和/或历史测量信息确定样本数据。
步骤206d、基于样本数据对待训练译码器模型和待训练编码器模型进行训练,以生成译码器模型和编码器模型。
步骤207d、将译码器模型的模型信息发送至网络设备,该译码器模型的模型信息可以用于部署译码器模型。
其中,关于步骤205d-步骤207d的详细介绍可以参考上述实施例描述,本公开实施例在此不做赘述。
综上所述,在本公开实施例提供的方法之中,UE会先向网络设备上报能力信息,该能力信息可以用于指示UE的AI和/或ML的支持能力,之后UE会获取网络设备发送的待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,并基于待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,生成编码器模型和译码器模型,最后将译码器模型的模型信息发送至网络设备,该译码器模型的模型信息用于部署译码器模型,以便网络设备可以基于该译码器模型的模型信息部署译码器模型。由此,本公开实施例提供了一种生成和部署模型的方法,可以用于训练和部署AI/ML模型。
图2e为本公开实施例所提供的一种模型训练部署方法的流程示意图,该方法由UE执行,如图2e所示,该方法可以包括以下步骤:
步骤201e、向网络设备上报能力信息。
步骤202e、获取网络设备发送的待训练译码器模型的模型信息。
步骤203e、基于UE的测量信息和/或历史测量信息确定样本数据。
步骤204e、基于样本数据对待训练译码器模型进行训练,以生成译码器模型。
步骤205e、UE基于译码器模型的模型信息确定匹配于该译码器模型的编码器模型。
具体的,在本公开的一个实施例之中,可以基于该译码器模型的模型信息确定出匹配于该译码器模型的编码器模型的模型信息,之后,基于该编码器模型的模型信息生成该编码器模型。
步骤206e、将译码器模型的模型信息发送至网络设备,该译码器模型的模型信息可以用于部署译码器模型。
其中,关于步骤201e-步骤206e的详细介绍可以参考上述实施例描述,本公开实施例在此不做赘述。
综上所述,在本公开实施例提供的方法之中,UE会先向网络设备上报能力信息,该能力信息可以用于指示UE的AI和/或ML的支持能力,之后UE会获取网络设备发送的待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,并基于待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,生成编码器模型和译码器模型,最后将译码器模型的模型信息发送至网络设备,该译码器模型的模型信息用于部署译码器模型,以便网络设备可以基于该译码器模型的模型信息部署译码器模型。由此,本公开实施例提供了一种生成和部署模型的方法,可以用于训练和部署AI/ML模型。
图3a为本公开实施例所提供的一种模型训练部署方法的流程示意图,该方法由UE执行,如图3a所示,该方法可以包括以下步骤:
步骤301a、向网络设备上报能力信息。
步骤302a、获取网络设备发送的待训练编码器模型的模型信息和/或待训练译码器模型的模型信息。
步骤303a、基于待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,生成编码器模型和译码器模型。
步骤304a、将译码器模型的模型信息发送至网络设备,该译码器模型的模型信息可以用于部署译码器模型。
其中,关于步骤301a-304a的详细介绍可以参考上述实施例描述,本公开实施例在此不做赘述。
步骤305a、获取网络设备发送指示信息。
其中,本公开的一个实施例之中,该指示信息可以用于指示UE向网络设备上报时的信息类型。以及,本公开的一个实施例之中,该信息类型可以包括以下至少一种:
未经编码器模型编码的原始的上报信息;
原始的上报信息经过编码器模型编码之后的信息。
进一步的,在本公开的一个实施例之中,该上报信息可以为UE要向网络设备上报的信息。以及,该上报信息可以包括CSI(Channel State Information,信道状态信息)信息;
在本公开的一个实施例之中,该CSI信息可以包括以下至少一种:
信道信息;
信道的特征矩阵信息;
信道的特征向量信息;
PMI(Precoding Matrix Indicator,预编码矩阵标识);
CQI(Channel Quality Indicator,信道质量指示信息);
RI(Rank Indicator,信道秩指示信息);
RSRP(Reference Signal Received Power,参考信号接收功率);
RSRQ(Reference Signal Received Quality,参考信号接收质量);
SINR(Signal-to-Interference plus Noise Ratio,信干噪比);
参考信号资源指示。
此外,在本公开的一个实施例之中,UE可以获取网络设备通过信令发送的指示信息。
步骤306a、基于指示信息向网络设备进行上报。
其中,在本公开的一个实施例之中,当上述步骤305a中的指示信息用于指示UE向网络设备上报时的信息类型为:未经编码器模型编码的原始的上报信息时,则本步骤306a中UE要向网络设备进行上报时,可以无需编码而直接将原始的上报信息发送至网络设备;以及,当上述步骤305a中的指示信息用于指示UE向网络设备上报时的信息类型为:原始的上报信息经过编码器模型编码之后的信息时,则本步骤306a中UE要向网络设备进行上报时,需先利用编码器模型对上报信息进行编码,并将编码之后的信息上报至网络设备。
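步骤305a-306a的分支逻辑可以示意如下。其中指示取值RAW/ENCODED为本示意自行假设的名称,编码器以任意可调用对象代替:

```python
RAW = "raw"          # 未经编码器模型编码的原始的上报信息
ENCODED = "encoded"  # 原始的上报信息经过编码器模型编码之后的信息

def build_report(csi, indication, encoder):
    """按网络设备下发的指示信息,决定直接上报还是先经编码器模型编码。"""
    if indication == RAW:
        return csi               # 无需编码,直接上报原始信息
    if indication == ENCODED:
        return encoder(csi)      # 先利用编码器模型编码,再上报编码之后的信息
    raise ValueError("未知的指示信息: %r" % (indication,))
```

当指示为ENCODED时,网络设备侧相应地需用译码器模型对收到的信息译码,才能恢复原始的上报信息。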
综上所述,在本公开实施例提供的方法之中,UE会先向网络设备上报能力信息,该能力信息可以用于指示UE的AI和/或ML的支持能力,之后UE会获取网络设备发送的待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,并基于待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,生成编码器模型和译码器模型,最后将译码器模型的模型信息发送至网络设备,该译码器模型的模型信息用于部署译码器模型,以便网络设备可以基于该译码器模型的模型信息部署译码器模型。由此,本公开实施例提供了一种生成和部署模型的方法,可以用于训练和部署AI/ML模型。
图3b为本公开实施例所提供的一种模型训练部署方法的流程示意图,该方法由UE执行,如图3b所示,该方法可以包括以下步骤:
步骤301b、向网络设备上报能力信息。
步骤302b、获取网络设备发送的待训练编码器模型的模型信息和/或待训练译码器模型的模型信息。
步骤303b、基于待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,生成编码器模型和译码器模型。
步骤304b、将译码器模型的模型信息发送至网络设备,该译码器模型的模型信息可以用于部署译码器模型。
步骤305b、获取网络设备发送指示信息,该指示信息用于指示UE向网络设备上报时的信息类型包括:原始的上报信息经过编码器模型编码之后的信息。
其中,关于步骤301b-步骤305b的其他详细介绍可以参考上述实施例描述,本公开实施例在此不做赘述。
步骤306b、利用编码器模型对上报信息进行编码。
步骤307b、将编码之后的信息上报至网络设备。
其中,在本公开的一个实施例之中,由于上述步骤305b中指示UE上报时的信息类型为经过编码器模型编码之后的信息,因此,在本步骤307b中,网络设备所接收到的UE上报的信息实质为经过编码器模型编码之后的信息,基于此,网络设备需要利用译码器模型(如上述步骤303b中生成的译码器模型)对UE上报的信息进行译码以获取到原始的上报信息。
综上所述,在本公开实施例提供的方法之中,UE会先向网络设备上报能力信息,该能力信息可以用于指示UE的AI和/或ML的支持能力,之后UE会获取网络设备发送的待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,并基于待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,生成编码器模型和译码器模型,最后将译码器模型的模型信息发送至网络设备,该译码器模型的模型信息用于部署译码器模型,以便网络设备可以基于该译码器模型的模型信息部署译码器模型。由此,本公开实施例提供了一种生成和部署模型的方法,可以用于训练和部署AI/ML模型。
图3c为本公开实施例所提供的一种模型训练部署方法的流程示意图,该方法由UE执行,如图3c所示,该方法可以包括以下步骤:
步骤301c、向网络设备上报能力信息。
步骤302c、获取网络设备发送的待训练编码器模型的模型信息和/或待训练译码器模型的模型信息。
步骤303c、基于待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,生成编码器模型和译码器模型。
步骤304c、将编码器模型的模型信息发送至网络设备,该编码器模型的模型信息用于部署编码器模型。
其中,关于步骤301c-步骤304c的其他详细介绍可以参考上述实施例描述,本公开实施例在此不做赘述。
步骤305c、对编码器模型和译码器模型进行模型更新,生成更新后的编码器模型和更新后的译码器模型。
其中,在本公开的一个实施例之中,UE可以基于网络设备的指示来对编码器模型和译码器模型进行模型更新。其中,UE基于网络设备的指示对编码器模型和译码器模型进行模型更新具体可以包括以下步骤:
步骤1、获取网络设备发送的更新指示信息。
其中,在本公开的一个实施例之中,该更新指示信息可以用于指示UE调整模型参数。在本公开的另一个实施例之中,该更新指示信息可以包括新的编码器模型的模型信息和/或新的译码器模型的模型信息,该新的编码器模型和新的译码器模型的种类与原有的编码器模型和原有的译码器模型的种类不同。
需要说明的是,在本公开的一个实施例之中,上述更新指示信息中可以仅包括新的编码器模型的模型信息。在本公开的另一个实施例之中,更新指示信息中可以仅包括新的译码器模型的模型信息。在本公开的又一个实施例之中,更新指示信息中可以包括新的译码器模型的模型信息和新的编码器模型的模型信息。
步骤2、基于更新指示信息确定新的编码器模型和新的译码器模型。
具体的,在本公开的一个实施例之中,当更新指示信息用于指示UE调整模型参数时,UE可以通过调整原有的编码器模型和原有的译码器模型的模型参数得到新的编码器模型和新的译码器模型;
以及,在本公开的另一个实施例之中,当更新指示信息包括新的编码器模型的模型信息和/或新的译码器模型的模型信息时,UE可以基于新的编码器模型的模型信息和/或新的译码器模型的模型信息生成新的编码器模型和新的译码器模型,其中,该新的编码器模型和新的译码器模型的种类与原有的编码器模型和原有的译码器模型的种类不同。
需要说明的是,在本公开的一个实施例之中,若更新指示信息中仅包括某一个新的模型的模型信息时(如仅包括新的编码器模型的模型信息或新的译码器模型的模型信息),则UE可以基于该更新指示信息中所包括的该新的模型的模型信息来确定出对应匹配的另一个新的模型的模型信息。
示例的,在本公开的一个实施例之中,当更新指示信息中仅包括新的编码器模型的模型信息时,UE在接收到该新的编码器模型的模型信息之后,可以对应确定出与该新的编码器模型的模型信息匹配对应的新的译码器模型的模型信息,之后,可以基于该新的译码器模型的模型信息确定出新的译码器模型,以及基于基于该新的编码器模型的模型信息确定出新的编码器模型。
步骤3、重新训练新的编码器模型和新的译码器模型得到更新后的编码器模型和更新后的译码器模型。
其中,重新训练的方法可以参考上述实施例描述,本公开实施例在此不做赘述。
进一步地,在本公开的另一个实施例之中,UE可以自主对编码器模型和译码器模型进行模型更新。其中,UE自主对编码器模型和译码器模型进行模型更新具体可以包括以下步骤:
步骤a、监控原有的编码器模型和原有的译码器模型的失真度。
其中,在本公开的一个实施例之中,可以实时监控原有的编码器模型和原有的译码器模型的失真度。具体的,可以将UE上报的未经编码压缩的信息作为输入信息,依次输入至编码器模型和译码器模型中进行编解码操作得到输出信息,并通过计算该输出信息和输入信息的匹配度来确定原有的编码器模型和原有的译码器模型的失真度。
步骤b、当失真度超出第一阈值,重新训练原有的编码器模型和原有的译码器模型得到更新后的编码器模型和更新后的译码器模型,其中,更新后的编码器模型和更新后的译码器模型的失真度低于第二阈值,第二阈值小于等于第一阈值。
其中,在本公开的一个实施例之中,当失真度超出第一阈值时,说明该原有的编码器模型和原有的译码器模型的编解码精度较低,则会影响信号后续的处理精度。由此,需要重新训练原有的编码器模型和原有的译码器模型得到更新后的编码器模型和更新后的译码器模型。并且,应确保更新后的编码器模型和更新后的译码器模型的失真度较低,低于第二阈值,以此来保证模型的编解码精度。
以及,上述的训练过程具体可以参考上述实施例描述,本公开实施例在此不做赘述。
此外,在本公开的一个实施例之中,上述第一阈值和第二阈值可以是预先设置的。
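步骤a-b的失真度监控可以示意如下,其中以输入信息与编译码后输出信息之间的均方误差作为失真度(具体度量方式本公开未限定,此处为示意假设):

```python
def distortion(samples, encoder, decoder):
    """失真度: 未经编码压缩的输入信息,经编码器/译码器后与原输入的均方误差。"""
    total, count = 0.0, 0
    for x in samples:
        x_hat = decoder(encoder(x))                      # 依次编码、译码得到输出信息
        total += sum((a - b) ** 2 for a, b in zip(x, x_hat))
        count += len(x)
    return total / count

def need_retrain(samples, encoder, decoder, first_threshold):
    """当失真度超出第一阈值时,触发对原有编码器/译码器模型的重新训练。"""
    return distortion(samples, encoder, decoder) > first_threshold
```

重新训练后,只需校验新模型的失真度低于第二阈值(第二阈值小于等于第一阈值),即可保证编解码精度。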
步骤306c、直接利用更新后的编码器模型替换原有的编码器模型。
其中,在本公开的一个实施例之中,利用更新后的编码器模型替换原有的编码器模型后,UE即可利用该更新后的编码器模型来进行编码,其中,由于该更新后的编码器模型的编码精度较高,可以确保信号后续的处理精度。
综上所述,在本公开实施例提供的方法之中,UE会先向网络设备上报能力信息,该能力信息可以用于指示UE的AI和/或ML的支持能力,之后UE会获取网络设备发送的待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,并基于待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,生成编码器模型和译码器模型,最后将译码器模型的模型信息发送至网络设备,该译码器模型的模型信息用于部署译码器模型,以便网络设备可以基于该译码器模型的模型信息部署译码器模型。由此,本公开实施例提供了一种生成和部署模型的方法,可以用于训练和部署AI/ML模型。
图3d为本公开实施例所提供的一种模型训练部署方法的流程示意图,该方法由UE执行,如图3d所示,该方法可以包括以下步骤:
步骤301d、向网络设备上报能力信息。
步骤302d、获取网络设备发送的待训练编码器模型的模型信息和/或待训练译码器模型的模型信息。
步骤303d、基于待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,生成编码器模型和译码器模型。
步骤304d、将编码器模型的模型信息发送至网络设备,该编码器模型的模型信息用于部署编码器模型。
步骤305d、对编码器模型和译码器模型进行模型更新,生成更新后的编码器模型和更新后的译码器模型。
其中,关于步骤301d-步骤305d的其他详细介绍可以参考上述实施例描述,本公开实施例在此不做赘述。
步骤306d、确定更新后的编码器模型的模型信息与原有的编码器模型的模型信息之间的差异模型信息。
其中,关于模型信息的相关介绍可以参考上述实施例描述,本公开实施例在此不做赘述。
步骤307d、基于差异模型信息对原有的编码器模型进行优化。
具体的,在本公开的一个实施例之中,基于该差异模型信息优化调整原有的编码器模型之后,则可以使得优化调整之后的编码器模型的模型信息与上述步骤305d中生成的更新后的编码器模型的模型信息一致,从而UE后续即可利用该更新后的编码器模型来进行编码,其中,由于该更新后的编码器模型的编码精度较高,可以确保信号后续的处理精度。
综上所述,在本公开实施例提供的方法之中,UE会先向网络设备上报能力信息,该能力信息可以用于指示UE的AI和/或ML的支持能力,之后UE会获取网络设备发送的待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,并基于待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,生成编码器模型和译码器模型,最后将译码器模型的模型信息发送至网络设备,该译码器模型的模型信息用于部署译码器模型,以便网络设备可以基于该译码器模型的模型信息部署译码器模型。由此,本公开实施例提供了一种生成和部署模型的方法,可以用于训练和部署AI/ML模型。
图3e为本公开实施例所提供的一种模型训练部署方法的流程示意图,该方法由UE执行,如图3e所示,该方法可以包括以下步骤:
步骤301e、向网络设备上报能力信息。
步骤302e、获取网络设备发送的待训练编码器模型的模型信息和/或待训练译码器模型的模型信息。
步骤303e、基于待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,生成编码器模型和译码器模型。
步骤304e、将编码器模型的模型信息发送至网络设备,该编码器模型的模型信息用于部署编码器模型。
其中,关于步骤301e-步骤304e的其他详细介绍可以参考上述实施例描述,本公开实施例在此不做赘述。
步骤305e、对编码器模型和译码器模型进行模型更新,生成更新后的编码器模型和更新后的译码器模型。
步骤306e、将更新后的译码器模型的模型信息发送至网络设备。
其中,在本公开的一个实施例之中,该更新后的译码器模型的模型信息可以包括:
更新后的译码器模型的全部模型信息;或者
更新后的译码器模型的模型信息与原有的译码器模型的模型信息之间的差异模型信息。
以及,在本公开的一个实施例之中,UE通过将更新后的译码器模型的模型信息发送至网络设备后,可以使得网络设备利用更新后的译码器模型来进行译码,其中,由于该更新后的译码器模型的译码精度较高,可以确保信号后续的处理精度。
综上所述,在本公开实施例提供的方法之中,UE会先向网络设备上报能力信息,该能力信息可以用于指示UE的AI和/或ML的支持能力,之后UE会获取网络设备发送的待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,并基于待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,生成编码器模型和译码器模型,最后将译码器模型的模型信息发送至网络设备,该译码器模型的模型信息用于部署译码器模型,以便网络设备可以基于该译码器模型的模型信息部署译码器模型。由此,本公开实施例提供了一种生成和部署模型的方法,可以用于训练和部署AI/ML模型。
图4为本公开实施例所提供的一种模型训练部署方法的流程示意图,该方法由网络设备执行,如图4所示,该方法可以包括以下步骤:
步骤401、获取UE上报的能力信息。
其中,在本公开的一个实施例之中,该能力信息用于指示UE的AI和/或ML的支持能力。
以及,在本公开的一个实施例之中,该模型可以包括以下至少一种:
AI模型;
ML模型。
进一步的,在本公开的一个实施例之中,该能力信息可以包括以下至少一种:
UE是否支持AI;
UE是否支持ML;
UE支持的AI和/或ML的种类;
UE对于模型的最大支持能力信息,该最大支持能力信息包括UE支持的最复杂的模型的结构信息。
其中,在本公开的一个实施例之中,上述的结构信息例如可以包括模型的层数等。
步骤402、基于能力信息向UE发送待训练编码器模型的模型信息和/或待训练译码器模型的模型信息。
步骤403、获取UE发送的译码器模型的模型信息,译码器模型的模型信息用于部署译码器模型。
其中,在本公开的一个实施例之中,该译码器模型的模型信息用于部署译码器模型。
以及,在本公开的一个实施例之中,该译码器模型的模型信息可以包括以下至少一种:
译码器模型的种类;
译码器模型的模型参数。
步骤404、基于译码器模型的模型信息生成译码器模型。
其中,关于步骤401-步骤404的其他详细介绍可以参考上述实施例描述,本公开实施例在此不做赘述。
综上所述,在本公开实施例提供的方法之中,UE会先向网络设备上报能力信息,该能力信息可以用于指示UE的AI和/或ML的支持能力,之后UE会获取网络设备发送的待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,并基于待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,生成编码器模型和译码器模型,最后将译码器模型的模型信息发送至网络设备,该译码器模型的模型信息用于部署译码器模型,以便网络设备可以基于该译码器模型的模型信息部署译码器模型。由此,本公开实施例提供了一种生成和部署模型的方法,可以用于训练和部署AI/ML模型。
图5a为本公开实施例所提供的一种模型训练部署方法的流程示意图,该方法由网络设备执行,如图5a所示,该方法可以包括以下步骤:
步骤501a、获取UE上报的能力信息。
其中,在本公开的一个实施例之中,该能力信息用于指示UE的AI和/或ML的支持能力。
步骤502a、基于能力信息选择待训练编码器模型和待训练译码器模型。
其中,所述待训练编码器模型为所述UE所支持的模型,所述待训练译码器模型为所述网络设备所支持的模型。
步骤503a、将待训练编码器模型的模型信息和待训练译码器模型的模型信息发送至UE。
步骤504a、获取UE发送的译码器模型的模型信息,译码器模型的模型信息用于部署译码器模型。
步骤505a、基于译码器模型的模型信息生成译码器模型。
其中,关于步骤501a-步骤505a的其他详细介绍可以参考上述实施例描述,本公开实施例在此不做赘述。
综上所述,在本公开实施例提供的方法之中,UE会先向网络设备上报能力信息,该能力信息可以用于指示UE的AI和/或ML的支持能力,之后UE会获取网络设备发送的待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,并基于待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,生成编码器模型和译码器模型,最后将译码器模型的模型信息发送至网络设备,该译码器模型的模型信息用于部署译码器模型,以便网络设备可以基于该译码器模型的模型信息部署译码器模型。由此,本公开实施例提供了一种生成和部署模型的方法,可以用于训练和部署AI/ML模型。
图5b为本公开实施例所提供的一种模型训练部署方法的流程示意图,该方法由网络设备执行,如图5b所示,该方法可以包括以下步骤:
步骤501b、获取UE上报的能力信息。
其中,在本公开的一个实施例之中,该能力信息用于指示UE的AI和/或ML的支持能力。
步骤502b、基于能力信息选择待训练编码器模型。
其中,所述待训练编码器模型为所述UE所支持的模型。
步骤503b、将待训练编码器模型的模型信息发送至UE。
步骤504b、获取UE发送的译码器模型的模型信息,译码器模型的模型信息用于部署译码器模型。
步骤505b、基于译码器模型的模型信息生成译码器模型。
其中,关于步骤501b-步骤505b的其他详细介绍可以参考上述实施例描述,本公开实施例在此不做赘述。
综上所述,在本公开实施例提供的方法之中,UE会先向网络设备上报能力信息,该能力信息可以用于指示UE的AI和/或ML的支持能力,之后UE会获取网络设备发送的待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,并基于待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,生成编码器模型和译码器模型,最后将译码器模型的模型信息发送至网络设备,该译码器模型的模型信息用于部署译码器模型,以便网络设备可以基于该译码器模型的模型信息部署译码器模型。由此,本公开实施例提供了一种生成和部署模型的方法,可以用于训练和部署AI/ML模型。
图5c为本公开实施例所提供的一种模型训练部署方法的流程示意图,该方法由网络设备执行,如图5c所示,该方法可以包括以下步骤:
步骤501c、获取UE上报的能力信息。
其中,在本公开的一个实施例之中,该能力信息用于指示UE的AI和/或ML的支持能力。
步骤502c、基于能力信息选择待训练译码器模型。
其中,该待训练译码器模型为所述网络设备所支持的模型。
步骤503c、将待训练译码器模型的模型信息发送至UE。
步骤504c、获取UE发送的译码器模型的模型信息,译码器模型的模型信息用于部署译码器模型。
步骤505c、基于译码器模型的模型信息生成译码器模型。
其中,关于步骤501c-步骤505c的其他详细介绍可以参考上述实施例描述,本公开实施例在此不做赘述。
综上所述,在本公开实施例提供的方法之中,UE会先向网络设备上报能力信息,该能力信息可以用于指示UE的AI和/或ML的支持能力,之后UE会获取网络设备发送的待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,并基于待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,生成编码器模型和译码器模型,最后将译码器模型的模型信息发送至网络设备,该译码器模型的模型信息用于部署译码器模型,以便网络设备可以基于该译码器模型的模型信息部署译码器模型。由此,本公开实施例提供了一种生成和部署模型的方法,可以用于训练和部署AI/ML模型。
图6a为本公开实施例所提供的一种模型训练部署方法的流程示意图,该方法由网络设备执行,如图6a所示,该方法可以包括以下步骤:
步骤601a、获取UE上报的能力信息。
其中,在本公开的一个实施例之中,该能力信息用于指示UE的AI和/或ML的支持能力。
步骤602a、基于能力信息向UE发送待训练编码器模型的模型信息和/或待训练译码器模型的模型信息。
步骤603a、获取UE发送的译码器模型的模型信息,译码器模型的模型信息用于部署译码器模型。
步骤604a、基于译码器模型的模型信息生成译码器模型。
其中,关于步骤601a-步骤604a的其他详细介绍可以参考上述实施例描述,本公开实施例在此不做赘述。
步骤605a、向UE发送指示信息。
其中,本公开的一个实施例之中,该指示信息用于指示UE向网络设备上报时的信息类型。
以及,本公开的一个实施例之中,该信息类型可以包括未经编码器模型编码的原始的上报信息和原始的上报信息经过编码器模型编码之后的信息中的至少一种。
进一步的,在本公开的一个实施例之中,该上报信息为UE要向网络设备上报的信息。以及,该上报信息可以包括CSI信息;
以及,该CSI信息可以包括以下至少一种:
信道信息;
信道的特征矩阵信息;
信道的特征向量信息;
PMI;
CQI;
RI;
RSRP;
RSRQ;
SINR;
参考信号资源指示。
综上所述,在本公开实施例提供的方法之中,UE会先向网络设备上报能力信息,该能力信息可以用于指示UE的AI和/或ML的支持能力,之后UE会获取网络设备发送的待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,并基于待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,生成编码器模型和译码器模型,最后将译码器模型的模型信息发送至网络设备,该译码器模型的模型信息用于部署译码器模型,以便网络设备可以基于该译码器模型的模型信息部署译码器模型。由此,本公开实施例提供了一种生成和部署模型的方法,可以用于训练和部署AI/ML模型。
图6b为本公开实施例所提供的一种模型训练部署方法的流程示意图,该方法由网络设备执行,如图6b所示,该方法可以包括以下步骤:
步骤601b、获取UE上报的能力信息。
其中,在本公开的一个实施例之中,该能力信息用于指示UE的AI和/或ML的支持能力。
步骤602b、基于能力信息向UE发送待训练编码器模型的模型信息和/或待训练译码器模型的模型信息。
步骤603b、获取UE发送的译码器模型的模型信息,译码器模型的模型信息用于部署译码器模型。
步骤604b、基于译码器模型的模型信息生成译码器模型。
步骤605b、向UE发送指示信息,该指示信息用于指示UE向网络设备上报时的信息类型包括:原始的上报信息经过编码器模型编码之后的信息。
其中,关于步骤601b-步骤605b的其他详细介绍可以参考上述实施例描述,本公开实施例在此不做赘述。
步骤606b、当接收到UE上报的信息,利用译码器模型对UE上报的信息进行译码。
其中,在本公开的一个实施例之中,由于上述步骤605b中指示UE上报时的信息类型为经过编码器模型编码之后的信息,因此,在本步骤606b中,网络设备所接收到的UE上报的信息实质为经过编码器模型编码之后的信息,基于此,网络设备需要利用译码器模型(如上述步骤604b中生成的译码器模型)对UE上报的信息进行译码以获取到原始的上报信息。
综上所述,在本公开实施例提供的方法之中,UE会先向网络设备上报能力信息,该能力信息可以用于指示UE的AI和/或ML的支持能力,之后UE会获取网络设备发送的待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,并基于待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,生成编码器模型和译码器模型,最后将译码器模型的模型信息发送至网络设备,该译码器模型的模型信息用于部署译码器模型,以便网络设备可以基于该译码器模型的模型信息部署译码器模型。由此,本公开实施例提供了一种生成和部署模型的方法,可以用于训练和部署AI/ML模型。
图6c为本公开实施例所提供的一种模型训练部署方法的流程示意图,该方法由网络设备执行,如图6c所示,该方法可以包括以下步骤:
步骤601c、获取UE上报的能力信息。
步骤602c、基于能力信息向UE发送待训练编码器模型的模型信息和/或待训练译码器模型的模型信息。
步骤603c、获取UE发送的译码器模型的模型信息,译码器模型的模型信息用于部署译码器模型。
步骤604c、基于译码器模型的模型信息生成译码器模型。
其中,关于步骤601c-步骤604c的其他详细介绍可以参考上述实施例描述,本公开实施例在此不做赘述。
步骤605c、接收UE发送的更新后的译码器模型的模型信息。
其中,在本公开的一个实施例之中,该更新后的译码器模型的模型信息包括:
更新后的译码器模型的全部模型信息;或者
更新后的译码器模型的模型信息与原有的译码器模型的模型信息之间的差异模型信息。
其中,需要说明的是,在本公开的一个实施例之中,当UE是基于网络设备的指示对编码器模型和译码器模型进行模型更新时,则针对网络设备而言,其在执行完上述步骤604c之后,网络设备还需向UE发送一更新指示消息,该更新指示消息用于指示UE调整模型参数;或者,该更新指示信息包括新的编码器模型的模型信息和/或新的译码器模型的模型信息,该新的编码器模型和新的译码器模型的种类与原有的编码器模型和原有的译码器模型的种类不同。之后,UE端可以基于该更新指示消息进行模型更新,并确定出更新后的译码器模型的模型信息发送至网络设备。
还需要说明的是,在本公开的一个实施例之中,若上述更新指示消息中包括的是新的编码器模型的模型信息和/或新的译码器模型的模型信息,则网络设备在向UE发送更新指示消息之前,还需要基于能力信息重新选择出与原有的编码器模型和原有的译码器模型的种类不同的新的编码器模型和/或新的译码器模型,其中,该新的编码器模型可以为UE支持的模型,该新的译码器模型可以为网络设备支持的模型。之后,网络设备才能基于重新选择的新的编码器模型和/或新的译码器模型向UE发送包含新的编码器模型的模型信息和/或新的译码器模型的模型信息的更新指示消息。
以及,在本公开的另一个实施例之中,当UE是自主对编码器模型和译码器模型进行模型更新时,则针对网络设备而言,其在执行完上述步骤604c之后,无需向UE发送更新指示消息,即可直接接收到UE发送的更新后的译码器模型的模型信息。
步骤606c、基于更新后的译码器模型的模型信息进行模型更新。
其中,在本公开的一个实施例之中,关于“基于更新后的译码器模型的模型信息进行模型更新”的详细内容会在后续实施例进行介绍。
综上所述,在本公开实施例提供的方法之中,UE会先向网络设备上报能力信息,该能力信息可以用于指示UE的AI和/或ML的支持能力,之后UE会获取网络设备发送的待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,并基于待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,生成编码器模型和译码器模型,最后将译码器模型的模型信息发送至网络设备,该译码器模型的模型信息用于部署译码器模型,以便网络设备可以基于该译码器模型的模型信息部署译码器模型。由此,本公开实施例提供了一种生成和部署模型的方法,可以用于训练和部署AI/ML模型。
图6d为本公开实施例所提供的一种模型训练部署方法的流程示意图,该方法由网络设备执行,如图6d所示,该方法可以包括以下步骤:
步骤601d、获取UE上报的能力信息。
步骤602d、基于能力信息向UE发送待训练编码器模型的模型信息和/或待训练译码器模型的模型信息。
步骤603d、获取UE发送的译码器模型的模型信息,译码器模型的模型信息用于部署译码器模型。
步骤604d、基于译码器模型的模型信息生成译码器模型。
其中,关于步骤601d-步骤604d的其他详细介绍可以参考上述实施例描述,本公开实施例在此不做赘述。
步骤605d、接收UE发送的更新后的译码器模型的模型信息。
其中,在本公开的一个实施例之中,该更新后的译码器模型的模型信息包括:
更新后的译码器模型的全部模型信息;或者
更新后的译码器模型的模型信息与原有的译码器模型的模型信息之间的差异模型信息。
步骤606d、基于更新后的译码器模型的模型信息生成更新后的译码器模型。
其中,在本公开的一个实施例之中,当上述步骤605d中接收到的更新后的译码器模型的模型信息为“更新后的译码器模型的全部模型信息”时,则网络设备可以直接基于该全部模型信息来生成更新后的译码器模型。
以及,在本公开的另一个实施例之中,当上述步骤605d中接收到的更新后的译码器模型的模型信息为“更新后的译码器模型的模型信息与原有的译码器模型的模型信息之间的差异模型信息”时,则网络设备可以先确定出自身的原有的译码器模型的模型信息,再基于该原有的译码器模型的模型信息和差异模型信息确定出更新后的译码器模型的模型信息,之后再基于该更新后的译码器模型的模型信息来生成更新后的译码器模型。
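上述“差异模型信息”的构造与应用可以示意如下(以逐参数作差、求和为例;差异信息的实际编码格式本公开未限定,此处仅为示意假设):

```python
def diff_model_info(new_params, old_params):
    """差异模型信息: 更新后模型参数与原有模型参数的逐项之差。"""
    return {name: new_params[name] - old_params[name] for name in new_params}

def apply_diff(old_params, diff):
    """接收方基于原有模型参数与差异模型信息,恢复出更新后的模型参数。"""
    return {name: old_params[name] + diff[name] for name in old_params}
```

相比传输全部模型信息,仅传输差异模型信息可以减少空口开销;接收方叠加差异后即可得到与更新后模型一致的参数。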
步骤607d、利用更新后的译码器模型替换原有的译码器模型以进行模型更新。
其中,在本公开的一个实施例之中,利用更新后的译码器模型替换原有的译码器模型后,网络设备即可利用该更新后的译码器模型来进行译码,其中,基于该更新后的译码器模型的译码精度较高,则可以确保信号后续的处理精度。
综上所述,在本公开实施例提供的方法之中,UE会先向网络设备上报能力信息,该能力信息可以用于指示UE的AI和/或ML的支持能力,之后UE会获取网络设备发送的待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,并基于待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,生成编码器模型和译码器模型,最后将译码器模型的模型信息发送至网络设备,该译码器模型的模型信息用于部署译码器模型,以便网络设备可以基于该译码器模型的模型信息部署译码器模型。由此,本公开实施例提供了一种生成和部署模型的方法,可以用于训练和部署AI/ML模型。
图6e为本公开实施例所提供的一种模型训练部署方法的流程示意图,该方法由网络设备执行,如图6e所示,该方法可以包括以下步骤:
步骤601e、获取UE上报的能力信息。
步骤602e、基于能力信息向UE发送待训练编码器模型的模型信息和/或待训练译码器模型的模型信息。
步骤603e、获取UE发送的译码器模型的模型信息,译码器模型的模型信息用于部署译码器模型。
步骤604e、基于译码器模型的模型信息生成译码器模型。
其中,关于步骤601e-步骤604e的其他详细介绍可以参考上述实施例描述,本公开实施例在此不做赘述。
步骤605e、接收UE发送的更新后的译码器模型的模型信息。
其中,在本公开的一个实施例之中,该更新后的译码器模型的模型信息包括:
更新后的译码器模型的全部模型信息;或者
更新后的译码器模型的模型信息与原有的译码器模型的模型信息之间的差异模型信息。
步骤606e、基于更新后的译码器模型的模型信息对原有的译码器模型进行优化以进行模型更新。
其中,在本公开的一个实施例之中,当上述步骤605e中接收到的更新后的译码器模型的模型信息为“更新后的译码器模型的全部模型信息”时,则网络设备可以先确定出该全部模型信息和原有的译码器模型的模型信息之间的模型差异信息,之后,再基于模型差异信息来对原有的译码器模型进行优化以进行模型更新。
以及,在本公开的另一个实施例之中,当上述步骤605e中接收到的更新后的译码器模型的模型信息为“更新后的译码器模型的模型信息与原有的译码器模型的模型信息之间的差异模型信息”时,则网络设备可以直接基于模型差异信息来对原有的译码器模型进行优化以进行模型更新。
具体的,在本公开的一个实施例之中,基于该差异模型信息优化调整原有的译码器模型之后,则可以使得优化调整之后的译码器模型的模型信息与更新后的译码器模型的模型信息一致,从而网络设备后续即可利用该更新后的译码器模型来进行译码,其中,基于该更新后的译码器模型的译码精度较高,则可以确保信号后续的处理精度。
综上所述,在本公开实施例提供的方法之中,UE会先向网络设备上报能力信息,该能力信息可以用于指示UE的AI和/或ML的支持能力,之后UE会获取网络设备发送的待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,并基于待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,生成编码器模型和译码器模型,最后将译码器模型的模型信息发送至网络设备,该译码器模型的模型信息用于部署译码器模型,以便网络设备可以基于该译码器模型的模型信息部署译码器模型。由此,本公开实施例提供了一种生成和部署模型的方法,可以用于训练和部署AI/ML模型。
图7为本公开实施例所提供的一种模型训练部署装置的结构示意图,如图7所示,装置可以包括:
上报模块,用于向网络设备上报能力信息,所述能力信息用于指示所述UE的AI和/或ML的支持能力;
获取模块,用于获取所述网络设备发送的待训练编码器模型的模型信息和/或待训练译码器模型的模型信息;
生成模块,用于基于所述待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,生成编码器模型和译码器模型;
发送模块,用于将所述译码器模型的模型信息发送至所述网络设备,所述译码器模型的模型信息用于部署所述译码器模型。
综上所述,在本公开实施例提供的装置之中,UE会先向网络设备上报能力信息,该能力信息可以用于指示UE的AI和/或ML的支持能力,之后UE会获取网络设备发送的待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,并基于待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,生成编码器模型和译码器模型,最后将译码器模型的模型信息发送至网络设备,该译码器模型的模型信息用于部署译码器模型,以便网络设备可以基于该译码器模型的模型信息部署译码器模型。由此,本公开实施例提供了一种生成和部署模型的方法,可以用于训练和部署AI/ML模型。
可选地,在本公开的一个实施例之中,所述模型包括以下至少一种:
AI模型;
ML模型。
可选地,在本公开的一个实施例之中,所述能力信息包括以下至少一种:
所述UE是否支持AI;
所述UE是否支持ML;
所述UE支持的AI和/或ML的种类;
所述UE对于模型的最大支持能力信息,所述最大支持能力信息包括所述UE支持的最复杂的模型的结构信息。
可选地,在本公开的一个实施例之中,所述生成模块,还用于:
基于所述待训练编码器模型的模型信息部署所述待训练编码器模型,和/或,基于所述待训练译码器模型的模型信息部署所述待训练译码器模型;
基于所述UE的测量信息和/或历史测量信息确定样本数据;
基于所述样本数据对所述待训练编码器模型和待训练译码器模型进行训练,以生成编码器模型和译码器模型。
可选地,在本公开的一个实施例之中,所述译码器模型的模型信息包括以下至少一种:
模型的种类;
模型的模型参数。
可选地,在本公开的一个实施例之中,所述装置,还用于:
获取所述网络设备发送指示信息,所述指示信息用于指示所述UE向所述网络设备上报时的信息类型;所述信息类型包括未经编码器模型编码的原始的上报信息、原始的上报信息经过编码器模型编码之后的信息中的至少一种;
基于所述指示信息向所述网络设备进行上报。
可选地,在本公开的一个实施例之中,所述上报信息为所述UE要向所述网络设备上报的信息;所述上报信息包括信道状态信息CSI信息;
所述的CSI信息包括以下至少一种:
信道信息;
信道的特征矩阵信息;
信道的特征向量信息;
预编码矩阵指示信息PMI;
信道质量指示信息CQI;
信道秩指示信息RI;
参考信号接收功率RSRP;
参考信号接收质量RSRQ;
信干噪比SINR;
参考信号资源指示。
可选地,在本公开的一个实施例之中,所述指示信息指示的信息类型包括原始的上报信息经过编码器模型编码之后的信息;
所述装置,还用于:
利用所述编码器模型对所述上报信息进行编码;
将编码之后的信息上报至所述网络设备。
可选地,在本公开的一个实施例之中,所述装置,还用于:
对编码器模型和译码器模型进行模型更新,生成更新后的编码器模型和更新后的译码器模型。
可选地,在本公开的一个实施例之中,所述装置,还用于:
获取网络设备发送的更新指示信息;所述更新指示消息用于指示所述UE调整模型参数;或者,所述更新指示信息包括新的编码器模型的模型信息和/或新的译码器模型的模型信息,所述新的编码器模型和新的译码器模型的种类与原有的编码器模型和原有的译码器模型的种类不同;
基于所述更新指示信息确定新的编码器模型和新的译码器模型;
重新训练所述新的编码器模型和新的译码器模型得到更新后的编码器模型和更新后的译码器模型。
可选地,在本公开的一个实施例之中,所述装置,还用于:
监控原有的编码器模型和原有的译码器模型的失真度;
当所述失真度超出第一阈值,重新训练所述原有的编码器模型和原有的译码器模型得到更新后的编码器模型和更新后的译码器模型,其中,所述更新后的编码器模型和更新后的译码器模型的失真度低于第二阈值,所述第二阈值小于等于第一阈值。
可选地,在本公开的一个实施例之中,所述装置,还用于:
当所述更新指示信息用于指示所述UE调整模型参数,所述UE通过调整原有的编码器模型和原有的译码器模型的模型参数得到所述新的编码器模型和新的译码器模型;或者
当所述更新指示信息包括新的编码器模型的模型信息和/或新的译码器模型的模型信息,所述UE基于所述新的编码器模型的模型信息和/或新的译码器模型的模型信息生成所述新的编码器模型和新的译码器模型,所述新的编码器模型和新的译码器模型的种类与原有的编码器模型和原有的译码器模型的种类不同。
可选地,在本公开的一个实施例之中,所述装置,还用于:
直接利用所述更新后的编码器模型替换原有的编码器模型。
可选地,在本公开的一个实施例之中,所述装置,还用于:
确定所述更新后的编码器模型的模型信息与原有的编码器模型的模型信息之间的差异模型信息;
基于所述差异模型信息对原有的编码器模型进行优化。
可选地,在本公开的一个实施例之中,所述装置,还用于:
将所述更新后的译码器模型的模型信息发送至所述网络设备。
可选地,在本公开的一个实施例之中,所述更新后的译码器模型的模型信息包括:
所述更新后的译码器模型的全部模型信息;或者
所述更新后的译码器模型的模型信息与原有的译码器模型的模型信息之间的差异模型信息。
图8为本公开实施例所提供的一种模型训练部署装置的结构示意图,如图8所示,装置可以包括:
第一获取模块,用于获取UE上报的能力信息,所述能力信息用于指示所述UE的AI和/或ML的支持能力;
发送模块,用于基于所述能力信息向所述UE发送待训练编码器模型的模型信息和/或待训练译码器模型的模型信息;
生成模块,用于基于所述译码器模型的模型信息生成所述译码器模型。
综上所述,在本公开实施例提供的装置之中,UE会先向网络设备上报能力信息,该能力信息可以用于指示UE的AI和/或ML的支持能力,之后UE会获取网络设备发送的待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,并基于待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,生成编码器模型和译码器模型,最后将译码器模型的模型信息发送至网络设备,该译码器模型的模型信息用于部署译码器模型,以便网络设备可以基于该译码器模型的模型信息部署译码器模型。由此,本公开实施例提供了一种生成和部署模型的方法,可以用于训练和部署AI/ML模型。
可选地,在本公开的一个实施例之中,所述模型包括以下至少一种:
AI模型;
ML模型。
可选地,在本公开的一个实施例之中,所述能力信息包括以下至少一种:
所述UE是否支持AI;
所述UE是否支持ML;
所述UE支持的AI和/或ML的种类;
所述UE对于模型的最大支持能力信息,所述最大支持能力信息包括所述UE支持的最复杂的模型的结构信息。
可选地,在本公开的一个实施例之中,所述发送模块,还用于:
基于所述能力信息选择待训练编码器模型和/或待训练译码器模型;其中,所述待训练编码器模型为所述UE所支持的模型,所述待训练译码器模型为所述网络设备所支持的模型;
将所述待训练编码器模型的模型信息和/或待训练译码器模型的模型信息发送至所述UE。
可选地,在本公开的一个实施例之中,所述译码器模型的模型信息包括以下至少一种:
模型的种类;
模型的模型参数。
可选地,在本公开的一个实施例之中,所述装置,还用于:
向所述UE发送指示信息,所述指示信息用于指示所述UE向所述网络设备上报时的信息类型;
所述信息类型包括以下至少一种:
未经编码器模型编码的原始的上报信息;
原始的上报信息经过编码器模型编码之后的信息。
可选地,在本公开的一个实施例之中,所述上报信息为所述UE要向所述网络设备上报的信息;所述上报信息包括CSI信息;
所述的CSI信息包括以下至少一种:
信道信息;
信道的特征矩阵信息;
信道的特征向量信息;
PMI;
CQI;
RI;
RSRP;
RSRQ;
SINR;
参考信号资源指示。
可选地,在本公开的一个实施例之中,所述指示信息指示的信息类型包括原始的上报信息经过编码器模型编码之后的信息;
所述装置,还用于:
当接收到所述UE上报的信息,利用所述译码器模型对所述UE上报的信息进行译码。
可选地,在本公开的一个实施例之中,所述装置,还用于:
接收所述UE发送的更新后的译码器模型的模型信息;
基于所述更新后的译码器模型的模型信息进行模型更新。
可选地,在本公开的一个实施例之中,所述装置,还用于:
向所述UE发送更新指示信息;
其中,所述更新指示信息用于指示所述UE调整模型参数;或者
所述更新指示信息包括新的编码器模型的模型信息和/或新的译码器模型的模型信息,所述新的编码器模型和新的译码器模型的种类与原有的编码器模型和原有的译码器模型的种类不同。
可选地,在本公开的一个实施例之中,所述更新后的译码器模型的模型信息包括:
所述更新后的译码器模型的全部模型信息;或者
所述更新后的译码器模型的模型信息与原有的译码器模型的模型信息之间的差异模型信息。
可选地,在本公开的一个实施例之中,所述基于所述更新后的译码器模型的模型信息进行模型更新,包括:
基于所述更新后的译码器模型的模型信息生成更新后的译码器模型;
利用所述更新后的译码器模型替换原有的译码器模型以进行模型更新。
可选地,在本公开的一个实施例之中,所述基于所述更新后的译码器模型的模型信息进行模型更新,包括:
基于所述更新后的译码器模型的模型信息对原有的译码器模型进行优化以进行模型更新。
图9是本公开一个实施例所提供的一种用户设备UE900的框图。例如,UE900可以是移动电话,计算机,数字广播终端设备,消息收发设备,游戏控制台,平板设备,医疗设备,健身设备,个人数字助理等。
参照图9,UE900可以包括以下至少一个组件:处理组件902,存储器904,电源组件906,多媒体组件908,音频组件910,输入/输出(I/O)的接口912,传感器组件913,以及通信组件916。
处理组件902通常控制UE900的整体操作,诸如与显示,电话呼叫,数据通信,相机操作和记录操作相关联的操作。处理组件902可以包括至少一个处理器920来执行指令,以完成上述的方法的全部或部分步骤。此外,处理组件902可以包括至少一个模块,便于处理组件902和其他组件之间的交互。例如,处理组件902可以包括多媒体模块,以方便多媒体组件908和处理组件902之间的交互。
存储器904被配置为存储各种类型的数据以支持在UE900的操作。这些数据的示例包括用于在UE900上操作的任何应用程序或方法的指令,联系人数据,电话簿数据,消息,图片,视频等。存储器904可以由任何类型的易失性或非易失性存储设备或者它们的组合实现,如静态随机存取存储器(SRAM),电可擦除可编程只读存储器(EEPROM),可擦除可编程只读存储器(EPROM),可编程只读存储器(PROM),只读存储器(ROM),磁存储器,快闪存储器,磁盘或光盘。
电源组件906为UE900的各种组件提供电力。电源组件906可以包括电源管理系统,至少一个电源,及其他与为UE900生成、管理和分配电力相关联的组件。
多媒体组件908包括在所述UE900和用户之间的提供一个输出接口的屏幕。在一些实施例中,屏幕可以包括液晶显示器(LCD)和触摸面板(TP)。如果屏幕包括触摸面板,屏幕可以被实现为触摸屏,以接收来自用户的输入信号。触摸面板包括至少一个触摸传感器以感测触摸、滑动和触摸面板上的手势。所述触摸传感器可以不仅感测触摸或滑动动作的边界,而且还检测与所述触摸或滑动操作相关的唤醒时间和压力。在一些实施例中,多媒体组件908包括一个前置摄像头和/或后置摄像头。当UE900处于操作模式,如拍摄模式或视频模式时,前置摄像头和/或后置摄像头可以接收外部的多媒体数据。每个前置摄像头和后置摄像头可以是一个固定的光学透镜系统或具有焦距和光学变焦能力。
音频组件910被配置为输出和/或输入音频信号。例如,音频组件910包括一个麦克风(MIC),当 UE900处于操作模式,如呼叫模式、记录模式和语音识别模式时,麦克风被配置为接收外部音频信号。所接收的音频信号可以被进一步存储在存储器904或经由通信组件916发送。在一些实施例中,音频组件910还包括一个扬声器,用于输出音频信号。
I/O接口912为处理组件902和外围接口模块之间提供接口,上述外围接口模块可以是键盘,点击轮,按钮等。这些按钮可包括但不限于:主页按钮、音量按钮、启动按钮和锁定按钮。
传感器组件913包括至少一个传感器,用于为UE900提供各个方面的状态评估。例如,传感器组件913可以检测到设备900的打开/关闭状态,组件的相对定位,例如所述组件为UE900的显示器和小键盘,传感器组件913还可以检测UE900或UE900一个组件的位置改变,用户与UE900接触的存在或不存在,UE900方位或加速/减速和UE900的温度变化。传感器组件913可以包括接近传感器,被配置用来在没有任何的物理接触时检测附近物体的存在。传感器组件913还可以包括光传感器,如CMOS或CCD图像传感器,用于在成像应用中使用。在一些实施例中,该传感器组件913还可以包括加速度传感器,陀螺仪传感器,磁传感器,压力传感器或温度传感器。
通信组件916被配置为便于UE900和其他设备之间有线或无线方式的通信。UE900可以接入基于通信标准的无线网络,如WiFi,2G或3G,或它们的组合。在一个示例性实施例中,通信组件916经由广播信道接收来自外部广播管理系统的广播信号或广播相关信息。在一个示例性实施例中,所述通信组件916还包括近场通信(NFC)模块,以促进短程通信。例如,在NFC模块可基于射频识别(RFID)技术,红外数据协会(IrDA)技术,超宽带(UWB)技术,蓝牙(BT)技术和其他技术来实现。
在示例性实施例中,UE900可以被至少一个应用专用集成电路(ASIC)、数字信号处理器(DSP)、数字信号处理设备(DSPD)、可编程逻辑器件(PLD)、现场可编程门阵列(FPGA)、控制器、微控制器、微处理器或其他电子元件实现,用于执行上述方法。
图10是本公开实施例所提供的一种网络侧设备1000的框图。例如，网络侧设备1000可以被提供为一网络侧设备。参照图10，网络侧设备1000包括处理组件1022，其进一步包括至少一个处理器，以及由存储器1032所代表的存储器资源，用于存储可由处理组件1022执行的指令，例如应用程序。存储器1032中存储的应用程序可以包括一个或一个以上的每一个对应于一组指令的模块。此外，处理组件1022被配置为执行指令，以执行前述应用于所述网络侧设备的任一方法，例如，如图1所示方法。
网络侧设备1000还可以包括一个电源组件1026，被配置为执行网络侧设备1000的电源管理；一个有线或无线网络接口1050，被配置为将网络侧设备1000连接到网络；以及一个输入输出(I/O)接口1058。网络侧设备1000可以操作基于存储在存储器1032的操作系统，例如Windows ServerTM、Mac OS XTM、UnixTM、LinuxTM、FreeBSDTM或类似。
上述本公开提供的实施例中,分别从网络侧设备、UE的角度对本公开实施例提供的方法进行了介绍。为了实现上述本公开实施例提供的方法中的各功能,网络侧设备和UE可以包括硬件结构、软件模块,以硬件结构、软件模块、或硬件结构加软件模块的形式来实现上述各功能。上述各功能中的某个功能可以以硬件结构、软件模块、或者硬件结构加软件模块的方式来执行。
本公开实施例提供的一种通信装置。通信装置可包括收发模块和处理模块。收发模块可包括发送模块和/或接收模块,发送模块用于实现发送功能,接收模块用于实现接收功能,收发模块可以实现发送功能和/或接收功能。
通信装置可以是终端设备(如前述方法实施例中的终端设备),也可以是终端设备中的装置,还可以是能够与终端设备匹配使用的装置。或者,通信装置可以是网络设备,也可以是网络设备中的装置,还可以是能够与网络设备匹配使用的装置。
本公开实施例提供的另一种通信装置。通信装置可以是网络设备，也可以是终端设备(如前述方法实施例中的终端设备)，也可以是支持网络设备实现上述方法的芯片、芯片系统、或处理器等，还可以是支持终端设备实现上述方法的芯片、芯片系统、或处理器等。该装置可用于实现上述方法实施例中描述的方法，具体可以参见上述方法实施例中的说明。
通信装置可以包括一个或多个处理器。处理器可以是通用处理器或者专用处理器等。例如可以是基带处理器或中央处理器。基带处理器可以用于对通信协议以及通信数据进行处理,中央处理器可以用于对通信装置(如,网络侧设备、基带芯片,终端设备、终端设备芯片,DU或CU等)进行控制,执行计算机程序,处理计算机程序的数据。
可选的,通信装置中还可以包括一个或多个存储器,其上可以存有计算机程序,处理器执行所述计算机程序,以使得通信装置执行上述方法实施例中描述的方法。可选的,所述存储器中还可以存储有数据。通信装置和存储器可以单独设置,也可以集成在一起。
可选的,通信装置还可以包括收发器、天线。收发器可以称为收发单元、收发机、或收发电路等,用于实现收发功能。收发器可以包括接收器和发送器,接收器可以称为接收机或接收电路等,用于实现接收功能;发送器可以称为发送机或发送电路等,用于实现发送功能。
可选的,通信装置中还可以包括一个或多个接口电路。接口电路用于接收代码指令并传输至处理器。处理器运行所述代码指令以使通信装置执行上述方法实施例中描述的方法。
通信装置为终端设备(如前述方法实施例中的终端设备):处理器用于执行图1-图4任一所示的方法。
通信装置为网络设备:收发器用于执行图5-图7任一所示的方法。
在一种实现方式中,处理器中可以包括用于实现接收和发送功能的收发器。例如该收发器可以是收发电路,或者是接口,或者是接口电路。用于实现接收和发送功能的收发电路、接口或接口电路可以是分开的,也可以集成在一起。上述收发电路、接口或接口电路可以用于代码/数据的读写,或者,上述收发电路、接口或接口电路可以用于信号的传输或传递。
在一种实现方式中,处理器可以存有计算机程序,计算机程序在处理器上运行,可使得通信装置执行上述方法实施例中描述的方法。计算机程序可能固化在处理器中,该种情况下,处理器可能由硬件实现。
在一种实现方式中,通信装置可以包括电路,所述电路可以实现前述方法实施例中发送或接收或者通信的功能。本公开中描述的处理器和收发器可实现在集成电路(integrated circuit,IC)、模拟IC、射频集成电路RFIC、混合信号IC、专用集成电路(application specific integrated circuit,ASIC)、印刷电路板(printed circuit board,PCB)、电子设备等上。该处理器和收发器也可以用各种IC工艺技术来制造,例如互补金属氧化物半导体(complementary metal oxide semiconductor,CMOS)、N型金属氧化物半导体(nMetal-oxide-semiconductor,NMOS)、P型金属氧化物半导体(positive channel metal oxide semiconductor,PMOS)、双极结型晶体管(bipolar junction transistor,BJT)、双极CMOS(BiCMOS)、硅锗(SiGe)、砷化镓(GaAs)等。
以上实施例描述中的通信装置可以是网络设备或者终端设备(如前述方法实施例中的终端设备)，但本公开中描述的通信装置的范围并不限于此，而且通信装置的结构可以不受此限制。通信装置可以是独立的设备或者可以是较大设备的一部分。例如所述通信装置可以是：
(1)独立的集成电路IC,或芯片,或,芯片系统或子系统;
(2)具有一个或多个IC的集合,可选的,该IC集合也可以包括用于存储数据,计算机程序的存储部件;
(3)ASIC,例如调制解调器(Modem);
(4)可嵌入在其他设备内的模块;
(5)接收机、终端设备、智能终端设备、蜂窝电话、无线设备、手持机、移动单元、车载设备、网络设备、云设备、人工智能设备等等;
(6)其他等等。
对于通信装置可以是芯片或芯片系统的情况,芯片包括处理器和接口。其中,处理器的数量可以是 一个或多个,接口的数量可以是多个。
可选的,芯片还包括存储器,存储器用于存储必要的计算机程序和数据。
本领域技术人员还可以了解到本公开实施例列出的各种说明性逻辑块(illustrative logical block)和步骤(step)可以通过电子硬件、电脑软件,或两者的结合进行实现。这样的功能是通过硬件还是软件来实现取决于特定的应用和整个系统的设计要求。本领域技术人员可以对于每种特定的应用,可以使用各种方法实现所述的功能,但这种实现不应被理解为超出本公开实施例保护的范围。
本公开实施例还提供一种模型训练部署系统，该系统包括前述实施例中作为终端设备(如前述方法实施例中的UE)的通信装置和作为网络设备的通信装置。
本公开还提供一种可读存储介质,其上存储有指令,该指令被计算机执行时实现上述任一方法实施例的功能。
本公开还提供一种计算机程序产品,该计算机程序产品被计算机执行时实现上述任一方法实施例的功能。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机程序。在计算机上加载和执行所述计算机程序时,全部或部分地产生按照本公开实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机程序可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机程序可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(digital subscriber line,DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质(例如,软盘、硬盘、磁带)、光介质(例如,高密度数字视频光盘(digital video disc,DVD))、或者半导体介质(例如,固态硬盘(solid state disk,SSD))等。
本领域普通技术人员可以理解：本公开中涉及的第一、第二等各种数字编号仅为描述方便进行的区分，并不用来限制本公开实施例的范围，也不表示先后顺序。
本公开中的至少一个还可以描述为一个或多个,多个可以是两个、三个、四个或者更多个,本公开不做限制。在本公开实施例中,对于一种技术特征,通过“第一”、“第二”、“第三”、“A”、“B”、“C”和“D”等区分该种技术特征中的技术特征,该“第一”、“第二”、“第三”、“A”、“B”、“C”和“D”描述的技术特征间无先后顺序或者大小顺序。
本领域技术人员在考虑说明书及实践这里公开的发明后,将容易想到本发明的其它实施方案。本公开旨在涵盖本发明的任何变型、用途或者适应性变化,这些变型、用途或者适应性变化遵循本发明的一般性原理并包括本公开未公开的本技术领域中的公知常识或惯用技术手段。说明书和实施例仅被视为示例性的,本公开的真正范围和精神由下面的权利要求指出。
应当理解的是,本公开并不局限于上面已经描述并在附图中示出的精确结构,并且可以在不脱离其范围进行各种修改和改变。本公开的范围仅由所附的权利要求来限制。

Claims (37)

  1. 一种模型训练部署方法,其特征在于,被用户设备UE执行,包括:
    向网络设备上报能力信息,所述能力信息用于指示所述UE的人工智能AI和/或机器学习ML的支持能力;
    获取所述网络设备发送的待训练编码器模型的模型信息和/或待训练译码器模型的模型信息;
    基于所述待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,生成编码器模型和译码器模型;
    将所述译码器模型的模型信息发送至所述网络设备,所述译码器模型的模型信息用于部署所述译码器模型。
  2. 如权利要求1所述的方法,其特征在于,所述模型包括以下至少一种:
    AI模型;
    ML模型。
  3. 如权利要求1或2所述的方法,其特征在于,所述能力信息包括以下至少一种:
    所述UE是否支持AI;
    所述UE是否支持ML;
    所述UE支持的AI和/或ML的模型的种类;
    所述UE对于模型的最大支持能力信息,所述最大支持能力信息包括所述UE支持的最复杂的模型的结构信息。
  4. 如权利要求1或2所述的方法,其特征在于,所述基于所述待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,生成编码器模型和译码器模型,包括:
    基于所述待训练编码器模型的模型信息部署所述待训练编码器模型,和/或,基于所述待训练译码器模型的模型信息部署所述待训练译码器模型;
    基于所述UE的测量信息和/或历史测量信息确定样本数据;
    基于所述样本数据对所述待训练编码器模型和/或待训练译码器模型进行训练,以生成编码器模型和译码器模型。
  5. 如权利要求1所述的方法,其特征在于,所述译码器模型的模型信息包括以下至少一种:
    模型的种类;
    模型的模型参数。
  6. 如权利要求1所述的方法,其特征在于,所述方法还包括:
    获取所述网络设备发送的指示信息，所述指示信息用于指示所述UE向所述网络设备上报时的信息类型；所述信息类型包括未经编码器模型编码的原始的上报信息、原始的上报信息经过编码器模型编码之后的信息中的至少一种；
    基于所述指示信息向所述网络设备进行上报。
  7. 如权利要求6所述的方法,其特征在于,所述上报信息为所述UE要向所述网络设备上报的信息;所述上报信息包括信道状态信息CSI信息;
    所述的CSI信息包括以下至少一种:
    信道信息;
    信道的特征矩阵信息;
    信道的特征向量信息;
    预编码矩阵指示信息PMI;
    信道质量指示信息CQI;
    信道秩指示信息RI;
    参考信号接收功率RSRP;
    参考信号接收质量RSRQ;
    信干噪比SINR；
    参考信号资源指示。
  8. 如权利要求6所述的方法,其特征在于,所述指示信息指示的信息类型包括原始的上报信息经过编码器模型编码之后的信息;
    所述基于所述指示信息向所述网络设备进行上报,包括:
    利用所述编码器模型对所述上报信息进行编码;
    将编码之后的信息上报至所述网络设备。
  9. 如权利要求1所述的方法,其特征在于,所述方法还包括:
    对编码器模型和译码器模型进行模型更新,生成更新后的编码器模型和更新后的译码器模型。
  10. 如权利要求9所述的方法,其特征在于,所述对编码器模型和译码器模型进行模型更新,包括:
    获取网络设备发送的更新指示信息；所述更新指示信息用于指示所述UE调整模型参数；或者，所述更新指示信息包括新的编码器模型的模型信息和/或新的译码器模型的模型信息，所述新的编码器模型和新的译码器模型的种类与原有的编码器模型和原有的译码器模型的种类不同；
    基于所述更新指示信息确定新的编码器模型和新的译码器模型;
    重新训练所述新的编码器模型和新的译码器模型得到更新后的编码器模型和更新后的译码器模型。
  11. 如权利要求9所述的方法,其特征在于,所述对编码器模型和译码器模型进行模型更新,包括:
    监控原有的编码器模型和原有的译码器模型的失真度;
    当所述失真度超出第一阈值,重新训练所述原有的编码器模型和原有的译码器模型得到更新后的编码器模型和更新后的译码器模型,其中,所述更新后的编码器模型和更新后的译码器模型的失真度低于第二阈值,所述第二阈值小于等于第一阈值。
  12. 如权利要求10所述的方法,其特征在于,所述基于所述更新指示信息确定新的编码器模型和新的译码器模型,包括:
    当所述更新指示信息用于指示所述UE调整模型参数,所述UE通过调整原有的编码器模型和原有的译码器模型的模型参数得到所述新的编码器模型和新的译码器模型;或者
    当所述更新指示信息包括新的编码器模型的模型信息和/或新的译码器模型的模型信息,所述UE基于所述新的编码器模型的模型信息和/或新的译码器模型的模型信息生成所述新的编码器模型和新的译码器模型,所述新的编码器模型和新的译码器模型的种类与原有的编码器模型和原有的译码器模型的种类不同。
  13. 如权利要求9所述的方法,其特征在于,所述方法还包括:
    直接利用所述更新后的编码器模型替换原有的编码器模型。
  14. 如权利要求9所述的方法,其特征在于,所述方法还包括:
    确定所述更新后的编码器模型的模型信息与原有的编码器模型的模型信息之间的差异模型信息;
    基于所述差异模型信息对原有的编码器模型进行优化。
  15. 如权利要求9所述的方法,其特征在于,所述方法还包括:
    将所述更新后的译码器模型的模型信息发送至所述网络设备。
  16. 如权利要求15所述的方法,其特征在于,所述更新后的译码器模型的模型信息包括:
    所述更新后的译码器模型的全部模型信息;或者
    所述更新后的译码器模型的模型信息与原有的译码器模型的模型信息之间的差异模型信息。
  17. 一种模型部署方法,其特征在于,被网络设备执行,包括:
    获取UE上报的能力信息,所述能力信息用于指示所述UE的AI和/或ML的支持能力;
    基于所述能力信息向所述UE发送待训练编码器模型的模型信息和/或待训练译码器模型的模型信息;
    获取所述UE发送的译码器模型的模型信息,所述译码器模型的模型信息用于部署所述译码器模型;
    基于所述译码器模型的模型信息生成所述译码器模型。
  18. 如权利要求17所述的方法,其特征在于,所述模型包括以下至少一种:
    AI模型;
    ML模型。
  19. 如权利要求17或18所述的方法,其特征在于,所述能力信息包括以下至少一种:
    所述UE是否支持AI;
    所述UE是否支持ML;
    所述UE支持的AI和/或ML的种类;
    所述UE对于模型的最大支持能力信息,所述最大支持能力信息包括所述UE支持的最复杂的模型的结构信息。
  20. 如权利要求17所述的方法,其特征在于,所述基于所述能力信息向所述UE发送待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,包括:
    基于所述能力信息选择待训练编码器模型和/或待训练译码器模型;其中,所述待训练编码器模型为所述UE所支持的模型,所述待训练译码器模型为所述网络设备所支持的模型;
    将所述待训练编码器模型的模型信息和/或待训练译码器模型的模型信息发送至所述UE。
  21. 如权利要求17所述的方法,其特征在于,所述译码器模型的模型信息包括以下至少一种:
    模型的种类;
    模型的模型参数。
  22. 如权利要求17所述的方法,其特征在于,所述方法还包括:
    向所述UE发送指示信息,所述指示信息用于指示所述UE向所述网络设备上报时的信息类型;
    所述信息类型包括以下至少一种:
    未经编码器模型编码的原始的上报信息;
    原始的上报信息经过编码器模型编码之后的信息。
  23. 如权利要求22所述的方法,其特征在于,所述上报信息为所述UE要向所述网络设备上报的信息;所述上报信息包括CSI信息;
    所述的CSI信息包括以下至少一种:
    信道信息;
    信道的特征矩阵信息;
    信道的特征向量信息;
    PMI;
    CQI;
    RI;
    RSRP;
    RSRQ;
    SINR;
    参考信号资源指示。
  24. 如权利要求22所述的方法,其特征在于,所述指示信息指示的信息类型包括原始的上报信息经过编码器模型编码之后的信息;
    所述方法还包括:
    当接收到所述UE上报的信息,利用所述译码器模型对所述UE上报的信息进行译码。
  25. 如权利要求17所述的方法,其特征在于,所述方法还包括:
    接收所述UE发送的更新后的译码器模型的模型信息;
    基于所述更新后的译码器模型的模型信息进行模型更新。
  26. 如权利要求25所述的方法,其特征在于,所述方法还包括:
    向所述UE发送更新指示信息;
    其中,所述更新指示信息用于指示所述UE调整模型参数;或者
    所述更新指示信息包括新的编码器模型的模型信息和/或新的译码器模型的模型信息,所述新的编码器模型和新的译码器模型的种类与原有的编码器模型和原有的译码器模型的种类不同。
  27. 如权利要求25所述的方法,其特征在于,所述更新后的译码器模型的模型信息包括:
    所述更新后的译码器模型的全部模型信息;或者
    所述更新后的译码器模型的模型信息与原有的译码器模型的模型信息之间的差异模型信息。
  28. 如权利要求25或27所述的方法,其特征在于,所述基于所述更新后的译码器模型的模型信息进行模型更新,包括:
    基于所述更新后的译码器模型的模型信息生成更新后的译码器模型;
    利用所述更新后的译码器模型替换原有的译码器模型以进行模型更新。
  29. 如权利要求25或27所述的方法,其特征在于,所述基于所述更新后的译码器模型的模型信息进行模型更新,包括:
    基于所述更新后的译码器模型的模型信息对原有的译码器模型进行优化以进行模型更新。
  30. 一种模型训练部署装置,其特征在于,包括:
    上报模块,用于向网络设备上报能力信息,所述能力信息用于指示所述UE的AI和/或ML的支持能力;
    获取模块,用于获取所述网络设备发送的待训练编码器模型的模型信息和/或待训练译码器模型的模型信息;
    生成模块,用于基于所述待训练编码器模型的模型信息和/或待训练译码器模型的模型信息,生成编码器模型和译码器模型;
    发送模块,用于将所述译码器模型的模型信息发送至所述网络设备,所述译码器模型的模型信息用于部署所述译码器模型。
  31. 一种模型训练部署装置,其特征在于,包括:
    第一获取模块,用于获取UE上报的能力信息,所述能力信息用于指示所述UE的AI和/或ML的支持能力;
    发送模块,用于基于所述能力信息向所述UE发送待训练编码器模型的模型信息和/或待训练译码器模型的模型信息;
    第二获取模块,用于获取所述UE发送的译码器模型的模型信息,所述译码器模型的模型信息用于部署所述译码器模型;
    生成模块,用于基于所述译码器模型的模型信息生成所述译码器模型。
  32. 一种通信装置,其特征在于,所述装置包括处理器和存储器,其中,所述存储器中存储有计算机程序,所述处理器执行所述存储器中存储的计算机程序,以使所述装置执行如权利要求1至16中任一项所述的方法。
  33. 一种通信装置,其特征在于,所述装置包括处理器和存储器,其中,所述存储器中存储有计算机程序,所述处理器执行所述存储器中存储的计算机程序,以使所述装置执行如权利要求17至29中任一项所述的方法。
  34. 一种通信装置,其特征在于,包括:处理器和接口电路,其中
    所述接口电路,用于接收代码指令并传输至所述处理器;
    所述处理器,用于运行所述代码指令以执行如权利要求1至16中任一项所述的方法。
  35. 一种通信装置,其特征在于,包括:处理器和接口电路,其中
    所述接口电路,用于接收代码指令并传输至所述处理器;
    所述处理器,用于运行所述代码指令以执行如权利要求17至29中任一项所述的方法。
  36. 一种计算机可读存储介质,用于存储有指令,当所述指令被执行时,使如权利要求1至16中任一项所述的方法被实现。
  37. 一种计算机可读存储介质,用于存储有指令,当所述指令被执行时,使如权利要求17至29中任一项所述的方法被实现。
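上述权利要求所述的信令流程（UE上报能力信息、网络设备依据能力下发待训练编码器/译码器模型的模型信息、UE训练后仅回传译码器模型的模型信息以供网络侧部署）可用如下示意性代码串联；其中的类名、字段名均为说明性假设，并非本公开的具体实现：

```python
# 示意性代码：模型训练部署的端到端流程（类名与字段名均为假设）。

class Network:
    def select_models(self, capability):
        # 在UE声明支持的模型种类范围内选择待训练模型
        kind = capability["supported_kinds"][0]
        return {"encoder": {"kind": kind}, "decoder": {"kind": kind}}

    def deploy_decoder(self, decoder_info):
        # 基于UE回传的译码器模型信息在网络侧部署译码器
        self.decoder = decoder_info
        return self.decoder

class UE:
    capability = {"ai": True, "ml": True, "supported_kinds": ["autoencoder"]}

    def train(self, to_train, samples):
        # 以测量信息为样本对编码器与译码器进行联合训练的占位实现
        params = {"n_samples": len(samples)}
        encoder = {**to_train["encoder"], "params": params}
        decoder = {**to_train["decoder"], "params": params}
        return encoder, decoder

net, ue = Network(), UE()
to_train = net.select_models(ue.capability)               # 步骤1-2：能力上报与模型下发
encoder, decoder = ue.train(to_train, samples=[1, 2, 3])  # 步骤3：UE侧训练
deployed = net.deploy_decoder(decoder)                    # 步骤4：仅回传并部署译码器

assert deployed["kind"] == "autoencoder"
assert deployed["params"]["n_samples"] == 3
```

编码器留在UE侧对上报信息（如CSI）进行压缩编码，译码器部署在网络侧译码，二者须配对训练，这也是UE需将译码器模型信息发送至网络设备的原因。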
PCT/CN2022/080477 2022-03-11 2022-03-11 一种模型训练部署方法/装置/设备及存储介质 WO2023168717A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/080477 WO2023168717A1 (zh) 2022-03-11 2022-03-11 一种模型训练部署方法/装置/设备及存储介质


Publications (1)

Publication Number Publication Date
WO2023168717A1 true WO2023168717A1 (zh) 2023-09-14

Family

ID=87936990


Country Status (1)

Country Link
WO (1) WO2023168717A1 (zh)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112597778A (zh) * 2020-12-14 2021-04-02 华为技术有限公司 一种翻译模型的训练方法、翻译方法以及设备
WO2021089568A1 (en) * 2019-11-04 2021-05-14 Telefonaktiebolaget Lm Ericsson (Publ) Machine learning non-standalone air-interface
US20210160149A1 (en) * 2019-11-22 2021-05-27 Huawei Technologies Co., Ltd. Personalized tailored air interface
US20210209315A1 (en) * 2019-03-29 2021-07-08 Google Llc Direct Speech-to-Speech Translation via Machine Learning
WO2021256584A1 (ko) * 2020-06-18 2021-12-23 엘지전자 주식회사 무선 통신 시스템에서 데이터를 송수신하는 방법 및 이를 위한 장치
WO2022014728A1 (ko) * 2020-07-13 2022-01-20 엘지전자 주식회사 무선 통신 시스템에서 단말 및 기지국의 채널 코딩 수행 방법 및 장치
US20220038349A1 (en) * 2020-10-19 2022-02-03 Ziyi LI Federated learning across ue and ran



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22930355

Country of ref document: EP

Kind code of ref document: A1