WO2024026751A1 - Method, apparatus and storage medium for determining a compression model for compressing channel state information

Method, apparatus and storage medium for determining a compression model for compressing channel state information

Info

Publication number
WO2024026751A1
Authority
WO
WIPO (PCT)
Prior art keywords
model
terminal
information
compression
decompression
Prior art date
Application number
PCT/CN2022/110091
Other languages
English (en)
French (fr)
Inventor
刘敏
Original Assignee
北京小米移动软件有限公司
Priority date
Filing date
Publication date
Application filed by 北京小米移动软件有限公司
Priority to CN202280002552.9A (CN115443643A)
Priority to PCT/CN2022/110091
Publication of WO2024026751A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00: Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/04: Protocols for data compression, e.g. ROHC
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L5/00: Arrangements affording multiple use of the transmission path
    • H04L5/003: Arrangements for allocating sub-channels of the transmission path
    • H04L5/0053: Allocation of signaling, i.e. of overhead other than pilot signals
    • H04L5/0057: Physical resource allocation for CQI

Definitions

  • the present disclosure relates to the field of communications, and in particular, to a method, device and storage medium for determining a compression model for compressing channel state information.
  • channel state information (CSI) is a channel attribute of the communication link. It describes the fading experienced by the signal on each transmission path, that is, the value of each element in the channel gain matrix, covering effects such as signal scattering, environmental fading (multipath fading or shadow fading), and distance-dependent power attenuation.
  • With CSI, the communication system can adapt to the current channel conditions, which provides a guarantee for high-reliability and high-rate communication in multi-antenna systems.
  • the present disclosure provides a method, device and storage medium for determining a compression model for compressing channel state information.
  • the compression model is used for AI-based reporting of channel state information.
  • a method of determining a compression model for compressing channel state information is provided, applied to a terminal, and the method includes:
  • receiving model information sent by a network device, where the model information includes a first model parameter of a decompression model, and the decompression model is used by the network device to decompress the channel state information sent by the terminal;
  • determining, according to the model information, a compression model used by the terminal to compress channel state information.
  • a method of determining a compression model for compressing channel state information is provided, applied to a network device, and the method includes:
  • the model information includes the first model parameter of the decompression model
  • the decompression model is used by the network device to decompress the channel state information sent by the terminal
  • the model information is used by the terminal to determine the compression model for compressing the channel state information.
  • a device for determining a compression model for compressing channel state information is provided, applied to a terminal, and the device includes:
  • a receiving module configured to receive model information sent by a network device, where the model information includes a first model parameter of a decompression model, and the decompression model is used by the network device to decompress the channel state information sent by the terminal;
  • a determining module configured to determine a compression model used by the terminal to compress channel state information according to the model information.
  • an apparatus for determining a compression model for compressing channel state information is provided, applied to network equipment, and the apparatus includes:
  • a sending module configured to send model information, where the model information includes a first model parameter of a decompression model.
  • the decompression model is used by the network device to decompress the channel state information sent by the terminal.
  • the model information is used by the terminal to determine a compression model for compressing channel state information.
  • an apparatus for determining a compression model for compressing channel state information including:
  • a memory configured to store instructions executable by the processor;
  • the processor is configured to:
  • receive model information sent by a network device, where the model information includes a first model parameter of a decompression model, and the decompression model is used by the network device to decompress the channel state information sent by the terminal;
  • determine, according to the model information, a compression model used by the terminal to compress channel state information.
  • an apparatus for determining a compression model for compressing channel state information including:
  • a memory configured to store instructions executable by the processor;
  • the processor is configured to:
  • the decompression model is used by the network device to decompress the channel state information sent by the terminal.
  • the model information is used by the terminal to determine a compression model for compressing channel state information.
  • a computer-readable storage medium having computer program instructions stored thereon.
  • when the program instructions are executed by a processor, the steps of the method according to any one of the first aspect of the present disclosure, or the steps of the method according to any one of the second aspect of the present disclosure, are implemented.
  • the terminal receives the model information sent by the network device and obtains the compression model for compressing CSI based on the model information. Because the model information includes the first model parameter of the decompression model of the network device, the terminal can obtain, based on the first model parameters and the privatized data of the terminal manufacturer or chip manufacturer, a compression model to be used in conjunction with the decompression model of the network device. In this process the network device does not need to know the parameters of the compression model deployed by the terminal, thereby ensuring the privatization of the compression model.
  • FIG. 1A is a schematic diagram of a network system architecture in related art.
  • FIG. 1B is a schematic diagram of another network system architecture in the related art.
  • FIG. 2 is a flowchart illustrating a method of determining a compression model for compressing channel state information according to an exemplary embodiment.
  • FIG. 3 is a flowchart illustrating a method of determining a compression model for compressing channel state information according to an exemplary embodiment.
  • FIG. 4 is a flowchart of a method of determining a compression model for compressing channel state information according to an exemplary embodiment.
  • FIG. 5 is a flowchart illustrating a method of determining a compression model for compressing channel state information according to an exemplary embodiment.
  • FIG. 6 is a flowchart of a method of determining a compression model for compressing channel state information according to an exemplary embodiment.
  • FIG. 7 is a flowchart illustrating a method of determining a compression model for compressing channel state information according to an exemplary embodiment.
  • FIG. 8 is a flowchart illustrating a method of determining a compression model for compressing channel state information according to an exemplary embodiment.
  • FIG. 9 is a flowchart illustrating a method of determining a compression model for compressing channel state information according to an exemplary embodiment.
  • FIG. 10 is a flowchart illustrating a method of determining a compression model for compressing channel state information according to an exemplary embodiment.
  • FIG. 11 is a flowchart illustrating a method of determining a compression model for compressing channel state information according to an exemplary embodiment.
  • FIG. 12 is a flowchart illustrating a method of determining a compression model for compressing channel state information according to an exemplary embodiment.
  • FIG. 13 is a flowchart illustrating a method of determining a compression model for compressing channel state information according to an exemplary embodiment.
  • Figure 14 is a flowchart illustrating a method of determining a compression model for compressing channel state information according to an exemplary embodiment.
  • FIG. 15 is a flowchart illustrating a method of determining a compression model for compressing channel state information according to an exemplary embodiment.
  • Figure 16 is a flowchart illustrating a method of determining a compression model for compressing channel state information according to an exemplary embodiment.
  • Figure 17 is a flowchart illustrating a method of determining a compression model for compressing channel state information according to an exemplary embodiment.
  • FIG. 18 is a flowchart illustrating a method of determining a compression model for compressing channel state information according to an exemplary embodiment.
  • FIG. 19 is a flowchart illustrating a method of determining a compression model for compressing channel state information according to an exemplary embodiment.
  • FIG. 20 is a flowchart illustrating a method of determining a compression model for compressing channel state information according to an exemplary embodiment.
  • FIG. 21 is an interaction diagram illustrating a method of determining a compression model for compressing channel state information according to an exemplary embodiment.
  • FIG. 22 is a block diagram of an apparatus for determining a compression model for compressing channel state information according to an exemplary embodiment.
  • FIG. 23 is a block diagram of an apparatus for determining a compression model for compressing channel state information according to an exemplary embodiment.
  • Figure 24 is a block diagram of a terminal according to an exemplary embodiment.
  • Figure 25 is a block diagram of a network device according to an exemplary embodiment.
  • embodiments of the present disclosure provide a method, device and storage medium for determining a compression model for compressing channel state information.
  • Embodiments of the present disclosure may be applied to 4G (fourth generation mobile communication system) evolution systems such as long term evolution (LTE) systems, or to 5G (fifth generation mobile communication system) systems, for example access networks using new radio access technology (New RAT), cloud radio access network (CRAN), and other communication systems.
  • FIG. 1A exemplarily shows a schematic diagram of a system architecture applicable to embodiments of the present disclosure. It should be understood that the embodiments of the present disclosure are not limited to the system shown in FIG. 1A. In addition, the device in FIG. 1A may be hardware, functionally divided software, or a combination of the above two structures.
  • the system architecture provided by the embodiment of the present disclosure includes a terminal, a base station, a mobility management network element, a session management network element, a user plane network element, and a data network (DN). The terminal communicates with the DN through the base station and user plane network elements.
  • the network elements shown in Figure 1A can be network elements in either the 4G architecture or the 5G architecture.
  • the DN provides data transmission services to users, and may be, for example, a packet data network (PDN) or an IP multi-media service (IMS) network.
  • the mobility management network element may include an access and mobility management function (AMF) in 5G.
  • the mobility management network element is responsible for the access and mobility management of terminals in the mobile network.
  • AMF is responsible for terminal access and mobility management, NAS message routing, session management function (SMF) entity selection, etc.
  • AMF can be used as an intermediate network element to transmit session management messages between the terminal and SMF.
  • the session management network element is responsible for forwarding path management, such as delivering packet forwarding policies to user plane network elements and instructing user plane network elements to process and forward packets according to the packet forwarding policy.
  • the session management network element can be the SMF in 5G (as shown in Figure 1B), which is responsible for session management, such as session creation/modification/deletion, user plane network element selection, and allocation and management of user plane tunnel information.
  • the user plane network element can be a user plane function (UPF) in the 5G architecture, as shown in Figure 1B.
  • the system architecture provided by the embodiments of the present disclosure may also include a data management network element for processing terminal device identification, access authentication, registration, mobility management, etc.
  • the data management network element may be a unified data management (UDM) network element.
  • the system architecture provided by the embodiments of the present disclosure may also include a policy control function (PCF) entity or a policy and charging rules function (PCRF) entity.
  • PCF or PCRF is responsible for policy control decisions and flow-based charging control.
  • the system architecture provided by the embodiments of the present disclosure may also include network storage network elements for maintaining real-time information of all network function services in the network.
  • the network storage network element may be a network repository function (NRF) network element.
  • The network repository function can store information about many network elements, such as SMF information, UPF information, AMF information, etc.
  • Network elements such as AMF, SMF, and UPF in the network may be connected to the NRF.
  • they can register their own network element information to the NRF.
  • other network elements can obtain the information of already registered network elements from the NRF.
  • Other network elements (such as the AMF) can obtain candidate network elements by querying the NRF based on network element type, data network identifier, area information, etc.
  • If the domain name system (DNS) server is integrated into the NRF, the network element performing selection (such as the AMF) can query the NRF to obtain the network elements to be selected (such as the SMF).
  • the base station can also be called an access node; in the case of wireless access, it is part of the radio access network (RAN), as shown in Figure 1B, and provides wireless access services for terminals.
  • the access node can be a base station in a global system for mobile communication (GSM) system or a code division multiple access (CDMA) system, a NodeB in a wideband code division multiple access (WCDMA) system, an evolved NodeB (eNB or eNodeB) in an LTE system, or base station equipment or small base station equipment in a 5G network, a wireless access node (WiFi AP), a worldwide interoperability for microwave access base station (WiMAX BS), etc. The present disclosure is not limited in this regard.
  • A terminal may also be called an access terminal, user equipment (UE), user unit, user station, mobile station, remote station, remote terminal, mobile device, user terminal, wireless communication device, user agent, or user device, etc.
  • Figure 1B takes UE as an example for illustration.
  • the terminal can be a cellular phone, a cordless phone, a session initiation protocol (SIP) phone, a wireless local loop (WLL) station, a personal digital assistant (PDA), a handheld device with wireless communication capability, a computing device or other processing device connected to a wireless modem, a vehicle-mounted device, a wearable device, or an IoT end device such as a fire detection sensor, a smart water/electricity meter, or factory monitoring equipment, etc.
  • The above functions can be network elements in hardware devices, software functions running on dedicated hardware, or virtualized functions instantiated on a platform (e.g., a cloud platform).
  • In AI-based CSI reporting, models are deployed separately on the base station side and the terminal side.
  • CSI is compressed by the compression model, which runs on the terminal side.
  • CSI is decompressed by the decompression model, which runs on the base station side.
  • The decompression model and the compression model are used jointly, as sketched below.
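  • To make the split concrete, the following is a minimal sketch of such a jointly used model pair, assuming a simple fully connected autoencoder implemented in PyTorch; the class names, layer sizes, and flattened CSI dimension are illustrative assumptions, not part of the disclosure.

```python
# Illustrative sketch only: a jointly used CSI compression/decompression model pair,
# assumed here to be a simple fully connected autoencoder. Class names, layer sizes
# and the flattened CSI dimension are assumptions, not defined by the disclosure.
import torch
import torch.nn as nn

class CsiEncoder(nn.Module):          # compression model, deployed on the terminal side
    def __init__(self, csi_dim=256, code_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(csi_dim, 128), nn.ReLU(),
                                 nn.Linear(128, code_dim))

    def forward(self, h):
        return self.net(h)            # compressed CSI report sent over the air

class CsiDecoder(nn.Module):          # decompression model, deployed on the base station side
    def __init__(self, csi_dim=256, code_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(code_dim, 128), nn.ReLU(),
                                 nn.Linear(128, csi_dim))

    def forward(self, code):
        return self.net(code)         # reconstructed CSI

# Joint use: the terminal compresses, the network device decompresses.
encoder, decoder = CsiEncoder(), CsiDecoder()
h = torch.randn(1, 256)               # flattened channel matrix (illustrative)
h_hat = decoder(encoder(h))
```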
  • The model parameters involved in the embodiments of the present disclosure may include configuration variables inside the model and/or configuration variables outside the model (i.e., model hyperparameters).
  • Embodiments of the present disclosure do not limit the types of decompression models and compression models.
  • For a neural network, the variables configured inside the model may include, for example, the calculation parameter matrix of each neuron node, and the hyperparameters may include, for example, the learning step size used to train the neural network; for a support vector machine, the variables configured inside the model may be, for example, the support vectors, and the hyperparameters may be, for example, the sigma parameter of the support vector machine.
  • the model parameters in the embodiments of the present disclosure may specifically include at least one of the following: model type, learning step size, the calculation parameter matrix of each neuron node, the padding value used to fill the calculation parameter matrix, the bias (deviation) of each neuron node, and the activation function of each neuron node.
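  • For concreteness, such model parameters could be carried in a structure like the hedged sketch below; the field names and types are illustrative assumptions rather than a format defined by the disclosure.

```python
# Hypothetical container for the model parameters listed above; field names and types
# are illustrative assumptions only.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ModelParameters:
    model_type: str                                  # e.g. "fully_connected_nn"
    learning_step_size: Optional[float] = None       # hyperparameter used for training
    weight_matrices: List[list] = field(default_factory=list)      # calculation parameter matrix per neuron layer
    padding_value: Optional[float] = None            # value used to pad the calculation parameter matrices
    biases: List[list] = field(default_factory=list)               # bias (deviation) of each neuron node
    activation_functions: List[str] = field(default_factory=list)  # e.g. "relu", "sigmoid"
```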
  • Figure 2 is a flowchart of a method for determining a compression model for compressing channel state information according to an exemplary embodiment. It is applied to a terminal. As shown in Figure 2, the method includes:
  • the terminal receives model information sent by the network device.
  • the model information includes the first model parameter of the decompression model.
  • the decompression model is used by the network device to decompress the channel state information sent by the terminal.
  • the terminal determines a compression model used by the terminal to compress channel state information based on the model information.
  • the first model parameter is used to indicate the compression model of the terminal corresponding to the decompression model.
  • the network device may be an access network device (such as a base station) as shown in Figure 1A, or other network logical entities in the core network.
  • the communication network includes an access network (such as the base station shown in Figure 1A), the bearer network, and the core network (such as the mobility management network element and the session management network element shown in Figure 1A, and the NRF and AMF shown in Figure 1B).
  • the first model parameter of the above-mentioned decompression model may be obtained by the base station through training, and the present disclosure does not specifically limit the training method of the decompression model.
  • the first model parameter is used to indicate the terminal's compression model corresponding to the decompression model; based on at least the first model parameter of the decompression model, the terminal can obtain a compression model for compressing CSI that can be used in conjunction with the decompression model. For example, the terminal can train a compression model corresponding to the decompression model based on the first model parameters and the privatized data collected by the terminal.
  • the above model information may be sent by the base station through RRC signaling.
  • the network device may be the base station.
  • the above model information can also be sent by the AMF (the access and mobility management entity) in the core network through NAS (non-access stratum) signaling.
  • the network device can be AMF.
  • the model information may be sent when the decompression model deployed in the network device changes, may be sent in response to the terminal accessing the base station, or may be sent periodically by the network device.
  • when the compression model in the terminal changes, it does not affect the decompression model deployed in the network device. That is to say, the decompression model of the network device is not affected by the compression model of the terminal, which effectively ensures that the complexity of the decompression model deployed in the network device does not increase.
  • the compression model used by the terminal to compress channel state information satisfies at least one of the following model performance conditions: the mean square error or the normalized mean square error when the compression model and the decompression model are used jointly is less than a first preset threshold; the cosine similarity when the compression model and the decompression model are used jointly is greater than a second preset threshold; the square of the cosine similarity when the compression model and the decompression model are used jointly is greater than a third preset threshold; the signal-to-noise ratio when the compression model and the decompression model are used jointly is greater than a fourth preset threshold.
  • In some cases, the compression model used by the terminal to compress channel state information needs to meet several of the above conditions at the same time. If the compression model cannot meet any one of those conditions, the terminal can further adjust the parameters of the compression model, or have the network device re-send the model information, thereby ensuring that the compression model obtained based on the model information can reliably compress channel state information.
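  • As a hedged illustration of how some of these conditions could be evaluated (the threshold values, the averaging over samples, and the omission of the signal-to-noise-ratio check are assumptions made only for this sketch):

```python
# Illustrative evaluation of the model-performance conditions listed above,
# with assumed threshold values; not a procedure defined by the disclosure.
import numpy as np

def nmse(h, h_hat):
    """Normalized mean square error between true and reconstructed CSI."""
    return np.mean(np.sum(np.abs(h - h_hat) ** 2, axis=-1) /
                   np.sum(np.abs(h) ** 2, axis=-1))

def cosine_similarity(h, h_hat):
    """Average cosine similarity between true and reconstructed CSI vectors."""
    num = np.abs(np.sum(np.conj(h) * h_hat, axis=-1))
    den = np.linalg.norm(h, axis=-1) * np.linalg.norm(h_hat, axis=-1)
    return np.mean(num / den)

def meets_performance_conditions(h, h_hat, nmse_max=0.1, cos_min=0.9, cos_sq_min=0.8):
    rho = cosine_similarity(h, h_hat)
    return (nmse(h, h_hat) < nmse_max      # NMSE below the first preset threshold
            and rho > cos_min              # cosine similarity above the second preset threshold
            and rho ** 2 > cos_sq_min)     # squared cosine similarity above the third preset threshold
```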
  • the terminal receives the model information sent by the network device and obtains the compression model for compressing the channel state information according to the model information. Since the model information includes the first model parameter of the decompression model of the network device, the terminal can obtain, based on the first model parameters and the privatized data of the terminal manufacturer or chip manufacturer, the compression model to be used in conjunction with the decompression model of the network device. In this process, the network device does not need to know the parameters of the compression model deployed by the terminal, thereby ensuring the privatization of the compression model.
  • Figure 3 is a flow chart of a method of determining a compression model for compressing channel state information according to an exemplary embodiment. It is applied to a terminal. As shown in Figure 3, the method includes:
  • the terminal receives model information sent by the network device.
  • the model information includes the first model parameters of the decompression model and the second model parameters of the compression model corresponding to the decompression model.
  • the decompression model is used by the network device to decompress the channel state information sent by the terminal.
  • the terminal determines a compression model used by the terminal to compress the channel state information based on the model information.
  • based on its own capabilities or other parameters, the terminal can determine whether to obtain the compression model according to the first model parameters or according to the second model parameters.
  • the model information also includes configuration information of a network device (e.g., a base station), and the terminal can determine, based on the configuration information, whether to use the second model parameters or to obtain the compression model based on the first model parameters.
  • the configuration information may at least be used to indicate whether the terminal directly uses the compression model corresponding to the second model parameter, or whether to adjust the second model parameter.
  • In this way, the terminal can obtain the compression model based on the first model parameters, or based on the second model parameters, which ensures that terminals from different terminal manufacturers or terminal chip manufacturers can effectively obtain the compression model.
  • Figure 4 is a flowchart of a method for determining a compression model for compressing channel state information according to an exemplary embodiment. It is applied to a terminal. As shown in Figure 4, the method includes:
  • the terminal receives model information sent by the network device.
  • the model information includes the first model parameter of the decompression model.
  • the decompression model is used by the network device to decompress the channel state information sent by the terminal.
  • the terminal trains, based on the first model parameter, a compression model corresponding to the decompression model.
  • the terminal can obtain the compression model based at least on the privatized data of its terminal manufacturer or chip manufacturer and the first model parameters, for example as in the sketch below. This disclosure does not specifically limit the training method of the compression model.
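  • The following is a minimal sketch of one possible local training procedure, assuming the encoder/decoder modules from the earlier autoencoder sketch and PyTorch; the optimizer, loss function, and number of epochs are illustrative assumptions, not the training method of the disclosure.

```python
# Hedged sketch: train the terminal's compression model (encoder) against a frozen
# decompression model rebuilt from the first model parameters sent by the network device.
import torch

def train_compression_model(encoder, decoder, first_model_params, private_csi_batches, epochs=10):
    decoder.load_state_dict(first_model_params)   # rebuild the network device's decompression model
    decoder.requires_grad_(False)                  # the decompression model is used but never modified
    optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)
    loss_fn = torch.nn.MSELoss()
    for _ in range(epochs):
        for h in private_csi_batches:              # privatized data of the terminal/chip manufacturer
            optimizer.zero_grad()
            loss = loss_fn(decoder(encoder(h)), h) # joint use: compress, then decompress
            loss.backward()
            optimizer.step()
    return encoder                                 # compression model that stays private to the terminal
```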
  • the model information further includes second model parameters of the compression model corresponding to the decompression model.
  • the terminal may determine whether to execute step S402 based on its capability information; that is, based on its capability information, the terminal may determine whether it has the ability to train, from the first model parameters, a compression model corresponding to the decompression model.
  • the capability information of the terminal may include the hardware capability of the terminal, such as its computing power, and the capability information may also include the current status information of the terminal, such as load information.
  • the terminal may also determine whether to perform step S402 based on the configuration information of the base station; that is, based on the base station's configuration information, the terminal determines whether it needs to train, from the first model parameters, a compression model corresponding to the decompression model.
  • the configuration information may also indicate whether the terminal needs to directly use the second model parameters.
  • the terminal can decide on its own whether to directly use the second model parameters or to obtain the compression model based on the first model parameters based on its own capabilities or other information.
  • the terminal can retrain according to the first model parameters to obtain a compression model corresponding to the decompression model.
  • the terminal obtains the compression model for compressing channel state information by receiving the first model parameters in the model information sent by the network device and training based on them. Since the training process and/or the parameters of the compression model are transparent to the network device, the privatization of the compression model is effectively ensured.
  • Figure 5 is a flow chart of a method of determining a compression model for compressing channel state information according to an exemplary embodiment. It is applied to a terminal. As shown in Figure 5, the method includes:
  • the terminal receives model information sent by the network device.
  • the model information includes the first model parameter of the decompression model.
  • the decompression model is used by the network device to decompress the channel state information sent by the terminal.
  • the terminal sends an acquisition request to the server, and obtains the compression model corresponding to the decompression model sent by the server.
  • the training of the model may rely on a server corresponding to the terminal side, such as the server of a chip company. After the terminal obtains the first model parameters, it sends the first model parameters to the corresponding server on the terminal side, the server completes the training of the compression model, and finally the compression model is sent to the terminal.
  • the server may be a third-party server, a network-side server such as a server in the core network, or a terminal-side server such as a chip manufacturer's server or a terminal manufacturer's server. This disclosure is not limited in this respect.
  • the acquisition request includes at least the first model parameter of the decompression model, and the acquisition request may also include training data for training the compression model, or the training data for training the compression model is stored in the server.
  • the terminal can also train locally on the terminal to obtain a compression model corresponding to the decompression model.
  • Whether the terminal sends an acquisition request to the server so that the server trains the compression model, or trains locally on the terminal, may be determined based on the terminal's capability information. For example, if the current load of the terminal is greater than a preset threshold, the terminal can request the server to train the compression model by sending an acquisition request; if the current load of the terminal is less than the preset threshold, training can be performed locally on the terminal.
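  • A minimal sketch of this load-based decision is given below; the threshold value and the fields of the acquisition request are illustrative assumptions only.

```python
# Hedged sketch of the load-based choice between server-side and local training.
def plan_compression_model_acquisition(first_model_params, current_load, load_threshold=0.8):
    if current_load > load_threshold:
        # Offload training: build an acquisition request carrying at least the first model
        # parameters (training data may be included, or may already be stored at the server).
        return {"action": "request_server_training",
                "acquisition_request": {"first_model_parameters": first_model_params}}
    # Light load: train locally, e.g. with the train_compression_model() sketch above.
    return {"action": "train_locally"}
```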
  • the model information further includes second model parameters of the compression model corresponding to the decompression model.
  • the terminal can determine whether to execute step S502 based on its capability information; that is, based on its capability information, the terminal determines whether it has the ability to train, from the first model parameters, a compression model corresponding to the decompression model. If the terminal does not have that ability, the compression model can be obtained based on the second model parameters.
  • whether to perform S502 may also be determined based on the privatization requirements of the terminal. For example, if the terminal has no privatization requirements, the step S502 may not be performed, and the compression model may be obtained directly based on the second model parameters.
  • the terminal may determine whether to directly use the second model parameters or to obtain the compression model based on the first model parameters according to the configuration information of the base station (the model information may include the configuration information, for example). In the absence of base station configuration information, the terminal can decide on its own whether to directly use the second model parameters or to obtain the compression model based on the first model parameters based on its own capabilities or other information.
  • the terminal can send an acquisition request to the server again and obtain an updated compression model corresponding to the decompression model.
  • the terminal sends an acquisition request to the server so that the server trains the compression model, so that the terminal obtains the compression model for compressing channel state information.
  • In this way, even if the compression model is not trained locally on the terminal, or the terminal temporarily cannot support training the compression model, the compression model can still be trained. While ensuring the privatization of the compression model, this effectively ensures that terminals from different terminal manufacturers or chip manufacturers can obtain the compression model through training.
  • Figure 6 is a flowchart of a method for determining a compression model for compressing channel state information according to an exemplary embodiment. It is applied to a terminal. As shown in Figure 6, the method includes:
  • the terminal receives model information sent by the network device.
  • the model information at least includes the second model parameters of the compression model corresponding to the decompression model.
  • the decompression model is used by the network device to decompress the channel state information sent by the terminal.
  • the terminal adjusts the second model parameters to obtain the third model parameters.
  • the terminal establishes a compression model for compressing channel state information according to the third model parameters.
  • the second model parameters may be adjusted based on the privatized data of the terminal manufacturer or chip manufacturer in the terminal.
  • the present disclosure does not specifically limit the adjustment method of the second model parameters; one possible fine-tuning approach is sketched below.
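  • The sketch below illustrates one possible way to adjust the second model parameters into third model parameters by fine-tuning on private terminal data; it assumes the model information also carries the first model parameters (one of the implementations mentioned further below), reuses the earlier encoder/decoder sketches, and all training settings are assumptions.

```python
# Hedged sketch: fine-tune the second model parameters on private terminal data to
# obtain the third model parameters; settings and structure are illustrative only.
from itertools import cycle, islice
import torch

def adjust_second_model_parameters(encoder, decoder, first_model_params, second_model_params,
                                   private_csi_batches, steps=100):
    encoder.load_state_dict(second_model_params)   # start from the network-provided compression model
    decoder.load_state_dict(first_model_params)
    decoder.requires_grad_(False)
    optimizer = torch.optim.SGD(encoder.parameters(), lr=1e-4)
    loss_fn = torch.nn.MSELoss()
    for h in islice(cycle(private_csi_batches), steps):   # a few fine-tuning steps on private data
        optimizer.zero_grad()
        loss_fn(decoder(encoder(h)), h).backward()
        optimizer.step()
    return encoder.state_dict()                    # third model parameters, kept at the terminal
```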
  • Whether to perform steps S602 and S603 may be determined by the terminal according to its capability information. For example, when the terminal's capability information indicates that its computing power is not sufficient to train the compression model but is sufficient to adjust the second model parameters, steps S602 and S603 can be performed.
  • the terminal may further optimize the third model parameters, or re-adjust the second model parameters, to obtain a compression model corresponding to the decompression model.
  • the model information in step S601 may include the second model parameters of the compression model corresponding to the decompression model. In another implementation, the model information in step S601 may include first model parameters of the decompression model and second model parameters of the compression model corresponding to the decompression model.
  • the terminal receives the second model parameters of the compression model corresponding to the decompression model sent by the network device, and further adjusts the second model parameters to obtain the third model parameters and establishes a compression model based on the third model parameters.
  • the adjusted compression model can be obtained by adjusting the second model parameters sent by the network device, and the parameters of the compression model are transparent to the network device, which can effectively ensure the privatization of the compression model.
  • Figure 7 is a flow chart of a method for determining a compression model for compressing channel state information according to an exemplary embodiment. It is applied to a terminal. As shown in Figure 7, the method includes:
  • the terminal receives model information sent by the network device.
  • the model information at least includes the second model parameters of the compression model corresponding to the decompression model.
  • the decompression model is used by the network device to decompress the channel state information sent by the terminal.
  • the terminal establishes a compression model for compressing channel state information according to the second model parameters.
  • Whether to perform step S702 may be determined by the terminal according to its capability information. For example, when the terminal's capability information indicates that its computing power is neither sufficient to train the compression model nor sufficient to adjust the second model parameters, step S702 may be performed, so that the terminal directly applies the model parameters of the compression model sent by the network device.
  • the terminal can ask the network device to resend the model information, and establish a compression model according to the second model parameters in the newly sent model information.
  • the model information in step S701 may include the second model parameters of the compression model corresponding to the decompression model. In another implementation, the model information in step S701 may include first model parameters of the decompression model and second model parameters of the compression model corresponding to the decompression model.
  • the terminal receives the second model parameters of the compression model corresponding to the decompression model sent by the network device, and directly uses the second model parameters to establish a compression model for compressing channel state information. This makes it possible, when the terminal's capability is insufficient, to obtain the compression model simply by receiving and applying the second model parameters sent by the network device.
  • Steps S402, S502, S602 to S603, and S702 in the above embodiments may be selectively executed according to the terminal's capability information. For example, if the current load of the terminal is higher than a first preset threshold, the terminal may choose to perform step S702; if the current load of the terminal is lower than a second preset threshold, the terminal may choose to perform step S402; if the current load of the terminal is higher than the second preset threshold but lower than the first preset threshold, the terminal may choose to perform steps S602 to S603. Alternatively, when the current load of the terminal is lower than the second preset threshold and the computing power of the terminal is less than a preset computing power threshold, steps S502 to S503 may be performed. This disclosure does not specifically limit which method the terminal selects to obtain the compression model based on the model information sent by the network device.
  • Figure 8 is a flowchart of a method for determining a compression model for compressing channel state information according to an exemplary embodiment. It is applied to a terminal. As shown in Figure 8, the method includes:
  • the terminal receives model information sent by the network device.
  • the model information includes first model parameters of multiple decompression models; or, the model information includes first model parameters of multiple decompression models and second model parameters of compression models that correspond one-to-one to the multiple decompression models.
  • wherein the decompression model is used by the network device to decompress the channel state information sent by the terminal.
  • the terminal determines the model parameters of the target model from the model information at least based on the terminal's capability information and/or the frequency band information used by the terminal.
  • the target model is a decompression model or a compression model.
  • the terminal determines a compression model used by the terminal to compress channel state information according to the model parameters of the target model.
  • network equipment can deploy multiple decompression models to ensure that the compressed channel state information sent by terminals with different capabilities and frequency bands can be reliably decompressed.
  • Whether the model information includes the first model parameters of multiple decompression models only, or also includes the second model parameters of the compression models corresponding to the multiple decompression models, may be determined by the capability information and/or frequency band information reported by the terminal. For example, if the capability information reported by the terminal indicates that the terminal does not have model training capability, the network device can send model information including the first model parameters of multiple decompression models and the second model parameters of the compression models corresponding to the multiple decompression models.
  • the terminal can determine the decompression model and/or compression model corresponding to the terminal based on the frequency band information used by the terminal, and then determine the target model as the decompression model or compression model based on whether the terminal has the ability to train the compression model.
  • For example, if the model information only includes the first model parameters of a first decompression model corresponding to a first frequency band and of a second decompression model corresponding to a second frequency band, and the frequency band information used by the terminal indicates that the terminal uses the first frequency band, it can be determined that the target model is the first decompression model.
  • If the model information includes the first model parameters of a first decompression model corresponding to the first frequency band and the second model parameters of a first compression model corresponding to the first decompression model, as well as the first model parameters of a second decompression model corresponding to the second frequency band and the second model parameters of a second compression model corresponding to the second decompression model, and the frequency band information used by the terminal indicates that the terminal uses the first frequency band while the capability information of the terminal indicates that the terminal does not have compression model training capability, then the target model of the terminal can be determined to be the first compression model.
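  • A hedged sketch of this selection logic is given below; the layout of the model information as a list of per-frequency-band entries and the field names are assumptions made only for illustration.

```python
# Illustrative target-model selection based on the terminal's frequency band and
# capability information; data layout and field names are assumptions.
def select_target_model(model_info_entries, terminal_band, can_train_compression_model):
    """model_info_entries: list of dicts such as
       {"band": ..., "first_model_params": ..., "second_model_params": ... (optional)}."""
    for entry in model_info_entries:
        if entry["band"] != terminal_band:
            continue                      # the frequency band used by the terminal selects the entry
        if can_train_compression_model or "second_model_params" not in entry:
            # Target model is the decompression model; the terminal derives its own compression model.
            return ("decompression_model", entry["first_model_params"])
        # Target model is the corresponding compression model; use (or adjust) its parameters.
        return ("compression_model", entry["second_model_params"])
    return None                           # no entry matches the terminal's frequency band
```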
  • In step S803, the compression model may be obtained by the terminal through training according to the first model parameters of the corresponding decompression model, where the compression model may be obtained by local training at the terminal or by training through the server.
  • the terminal can determine, based on its capability information, whether to directly establish a compression model based on the second model parameters corresponding to the target model, or to optimize the second model parameters to obtain third model parameters and establish a compression model based on the third model parameters.
  • the network device sends the model parameters of multiple decompression models, or of multiple decompression models and of the compression models corresponding to the multiple decompression models, so that the terminal determines the target model based on its capability information and/or frequency band information and then determines its compression model based on the parameters of the target model. This ensures that terminals with different capabilities and frequency bands can effectively compress channel state information, and enables the network device to reliably decompress the compressed channel state information sent by these terminals.
  • Figure 9 is a flowchart of a method for determining a compression model for compressing channel state information according to an exemplary embodiment. It is applied to a terminal. As shown in Figure 9, the method includes:
  • the terminal receives model information sent by the network device.
  • the model information includes first model parameters of multiple decompression models; or, the model information includes first model parameters of multiple decompression models and second model parameters of compression models that correspond one-to-one to the multiple decompression models.
  • wherein the decompression model is used by the network device to decompress the channel state information sent by the terminal.
  • the terminal determines the model parameters of the target model from the model information at least based on the terminal's capability information and/or the frequency band information used by the terminal.
  • the target model is a decompression model or a compression model.
  • S903 The terminal determines a compression model used by the terminal to compress channel state information according to the model parameters of the target model.
  • the terminal reports the target model determined by the terminal to the network device.
  • step S904 may be executed after step S903, before step S903, or simultaneously with step S903 after step S902 has been executed. This disclosure is not limited in this respect.
  • step S903 is an exemplary method of determining the model parameters of the target model from the model information through the terminal's capability information and/or the frequency band information used by the terminal; of course, those skilled in the art can understand that the terminal can also determine the model parameters of the target model from the model information through other parameters, which will not be described again here.
  • After receiving the target model reported by the terminal, the network device can determine, based on the target model, the decompression model to be used for the channel state information sent by the terminal. This enables the network device to correctly select the decompression model and use it in conjunction with the compression model in the terminal, ensuring that the channel state information sent by the terminal can be effectively decompressed.
  • Figure 10 is a flow chart of a method of determining a compression model for compressing channel state information according to an exemplary embodiment. It is applied to a terminal. As shown in Figure 10, the method includes:
  • the terminal receives model information sent by the network device.
  • the model information includes first model parameters of multiple decompression models; or, the model information includes first model parameters of multiple decompression models and second model parameters of compression models that correspond one-to-one to the multiple decompression models.
  • wherein the decompression model is used by the network device to decompress the channel state information sent by the terminal.
  • the terminal determines the model parameters of the target model from the model information at least based on the terminal's capability information and/or the frequency band information used by the terminal.
  • the target model is a decompression model or a compression model.
  • the terminal reports the terminal's capability information and/or the frequency band information used by the terminal to the network device.
  • the terminal's capability information and/or the frequency band information used by the terminal is used by the network device to determine the target model determined by the terminal.
  • For example, if the network device determines, based on the frequency band information reported by the terminal, that the terminal uses the first frequency band, the network device can determine that the first decompression model selected by the terminal is the target model.
  • If the model information includes the first model parameters of a first decompression model corresponding to the first frequency band and the second model parameters of a first compression model corresponding to the first decompression model, as well as the parameters of a second decompression model and of its corresponding second compression model for the second frequency band, and the network device determines, based on the frequency band information reported by the terminal, that the terminal uses the first frequency band, and the capability information reported by the terminal indicates that the terminal does not have compression model training capability, then it can be determined that the target model selected by the terminal is the first compression model.
  • the network device may determine, according to the target model selected by the terminal, the decompression model used to decompress the channel state information sent by the terminal.
  • step S1002 is an exemplary method of determining the model parameters of the target model from the model information through the terminal's capability information and/or the frequency band information used by the terminal; of course, those skilled in the art can understand that the terminal can also determine the model parameters of the target model from the model information through other parameters, which will not be described again here.
  • a decompression model corresponding to the channel state information sent by the terminal can be determined based on the target model to ensure that the channel state information sent by the terminal can be effectively decompressed.
  • Figure 11 is a flow chart of a method for determining a compression model for compressing channel state information according to an exemplary embodiment. It is applied to a terminal. As shown in Figure 11, the method includes:
  • the terminal receives model information sent by the network device.
  • the model information includes first model parameters of multiple decompression models; or, the model information includes first model parameters of multiple decompression models and second model parameters of compression models that correspond one-to-one to the multiple decompression models.
  • wherein the decompression model is used by the network device to decompress the channel state information sent by the terminal.
  • the terminal determines the model parameters of the target model from the model information at least based on the terminal's capability information and/or the frequency band information used by the terminal.
  • the target model is a decompression model or a compression model.
  • the terminal reports the identification information of the target model to the network device.
  • the identification information is used to uniquely identify each decompression model in the model information.
  • each associated pair of decompression model and compression model may use the same identification information or different identification information.
  • the identification information may be represented by a model ID, for example.
  • For example, for the first decompression model and the first compression model corresponding to the first decompression model, "0001" can be used as the identification information of both; alternatively, "0010" can be used as the identification information of the first decompression model and "0011" as the identification information of the first compression model. This disclosure does not limit how the identification information is set.
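  • The two identification schemes mentioned above could, for example, be represented as follows (a hedged illustration; the mapping structure is an assumption):

```python
# Scheme A: one shared model ID for an associated decompression/compression model pair.
shared_id_scheme = {
    "0001": {"decompression_model": "first_decompression_model",
             "compression_model": "first_compression_model"},
}
# Scheme B: distinct model IDs for the decompression model and its corresponding compression model.
distinct_id_scheme = {
    "0010": "first_decompression_model",
    "0011": "first_compression_model",
}
```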
  • step S1102 is an exemplary method of determining the model parameters of the target model from the model information through the terminal's capability information and/or the frequency band information used by the terminal; of course, those skilled in the art can understand that the terminal can also determine the model parameters of the target model from the model information through other parameters, which will not be described again here.
  • the identification information of the target model is reported to the network device, so that the network device determines the target model determined by the terminal.
  • After the network device determines the target model reported by the terminal, it can determine, based on the target model, the decompression model corresponding to the channel state information sent by the terminal, to ensure that the channel state information sent by the terminal can be effectively decompressed.
  • Figure 12 is a flow chart of a method for determining a compression model for compressing channel state information according to an exemplary embodiment. It is applied to a terminal. As shown in Figure 12, the method includes:
  • the terminal receives model information sent by the network device.
  • the model information includes the first model parameter of the decompression model.
  • the decompression model is used by the network device to decompress the channel state information sent by the terminal.
  • the terminal determines a compression model used by the terminal to compress channel state information based on the model information.
  • the terminal determines the activation time of the compression model according to a preset duration specified in the communication protocol or a preset duration configured by the network device.
  • the preset duration is a duration relative to a specific moment.
  • the preset duration configured by the network device may be sent to the terminal through model information, or may be sent to the terminal through other signaling.
  • the communication protocol is a predetermined protocol for communication between the terminal and the network device.
  • the network device can also determine, according to the preset duration in the communication protocol or the preset duration configured by the network device, the activation time of the terminal's compression model and of its own decompression model.
  • In one example, the specific moment is the end time of the PDSCH in which the terminal receives the model information.
  • In another example, the specific moment is the moment of the last symbol of the uplink resource in which the terminal feeds back HARQ-ACK information for the PDSCH carrying the model information.
  • the preset duration may be determined in different ways corresponding to different compression models.
  • For example, the preset duration may be X unit durations; corresponding to the compression model determination method of steps S602 to S603 in the above embodiments, the preset duration may be Y unit durations; and corresponding to the compression model determination method of step S702 in the above embodiments, the preset duration may be Z unit durations, where the sizes of X, Y, and Z may be configured by the network device.
  • X can be greater than Y and Y can be greater than Z, or X, Y, and Z can also be equal, which is not limited in this disclosure.
  • the unit duration may be a duration corresponding to a time slot, a duration corresponding to a symbol, or a duration corresponding to a subframe, and is not specifically limited in this disclosure.
  • For example, if the specific moment falls in time slot n, the activation time of the compression model can be the time corresponding to slot n+X; that is, the compression model is enabled in the time slot numbered n+X.
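  • As a trivial hedged illustration of this timing rule (using slots as the unit duration, which is an assumption):

```python
# Activation slot: the compression model (and the paired decompression model) is enabled
# X unit durations after the specific moment in slot n. The choice of slots as the unit is illustrative.
def activation_slot(reference_slot_n: int, preset_duration_x: int) -> int:
    return reference_slot_n + preset_duration_x

# e.g. specific moment in slot 10 and preset duration X = 4 slots -> activation in slot 14
assert activation_slot(10, 4) == 14
```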
  • With the above solution, the terminal and the network device can synchronously enable the compression model and the decompression model, ensuring that the network device can reliably decompress the compressed channel state information sent by the terminal and guaranteeing the communication quality between the network device and the terminal.
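To make the timing rule concrete, the following is a minimal sketch of how a terminal implementation might compute the activation slot from the preset duration; the unit-duration choices (14 symbols per slot, 2 slots per subframe), the slot numbering, and the helper name are illustrative assumptions, not part of the disclosure.

```python
# Illustrative sketch (assumption): computing the activation time of the
# compression model from a preset duration relative to a specific moment.
# The numerology values below are example assumptions only.

SYMBOLS_PER_SLOT = 14          # assumption: normal cyclic prefix
SLOTS_PER_SUBFRAME = 2         # assumption: 30 kHz subcarrier spacing

def activation_slot(reference_slot: int, preset_duration: int, unit: str = "slot") -> int:
    """Return the slot in which the compression model is enabled.

    reference_slot:  slot of the specific moment, e.g. the end of the PDSCH
                     carrying the model information (slot n in the text).
    preset_duration: X / Y / Z unit durations from the protocol or the
                     network-device configuration.
    unit:            'slot', 'symbol' or 'subframe'.
    """
    if unit == "slot":
        offset_slots = preset_duration
    elif unit == "symbol":
        # round up so the model is never enabled before the duration has elapsed
        offset_slots = -(-preset_duration // SYMBOLS_PER_SLOT)
    elif unit == "subframe":
        offset_slots = preset_duration * SLOTS_PER_SUBFRAME
    else:
        raise ValueError(f"unknown unit duration: {unit}")
    return reference_slot + offset_slots

# Example: the PDSCH carrying the model information ends in slot n = 100 and the
# preset duration is X = 4 slots, so the compression model is enabled in slot 104.
print(activation_slot(100, 4, "slot"))      # -> 104
print(activation_slot(100, 28, "symbol"))   # -> 102
```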
  • Figure 13 is a flowchart of a method for determining a compression model for compressing channel state information according to an exemplary embodiment. It is applied to a network device. As shown in Figure 13, the method includes:
  • the network device sends model information.
  • the model information includes the first model parameters of the decompression model.
  • the decompression model is used by the network device to decompress the channel state information sent by the terminal.
  • the model information is used by the terminal to determine a compression model for compressing the channel state information.
  • In one example, the model information may also include the first model parameters of multiple decompression models; or, the model information may include the first model parameters of multiple decompression models and the second model parameters of the compression models corresponding one-to-one to the multiple decompression models.
  • the above model information may be sent by the base station through RRC signaling.
  • the network device may be the base station.
  • Alternatively, the above model information can be sent by the AMF in the core network, that is, the access and mobility management entity, through NAS (non-access stratum) signaling.
  • the network device can be AMF.
  • The compression model used by the terminal to compress channel state information satisfies at least one of the following model performance conditions: the mean square error or the normalized mean square error when the compression model and the decompression model are used jointly is less than a first preset threshold; the cosine similarity when the compression model and the decompression model are used jointly is greater than a second preset threshold; the square of the cosine similarity when the compression model and the decompression model are used jointly is greater than a third preset threshold; and the signal-to-noise ratio when the compression model and the decompression model are used jointly is greater than a fourth preset threshold.
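For illustration only, the sketch below shows one way such model performance conditions could be evaluated on held-out CSI samples before a compression model is put into use; the metric formulas, threshold names and default values are assumptions made for the example, and whether one or all conditions must hold depends on the implementation.

```python
# Illustrative sketch (assumption): evaluating the joint performance of a
# compression/decompression model pair on held-out CSI samples.
import numpy as np

def joint_use_metrics(csi: np.ndarray, reconstructed: np.ndarray) -> dict:
    """csi, reconstructed: arrays of shape (num_samples, dim); may be complex."""
    mse = float(np.mean(np.abs(reconstructed - csi) ** 2))
    nmse = mse / float(np.mean(np.abs(csi) ** 2))
    num = np.abs(np.sum(np.conj(csi) * reconstructed, axis=1))
    den = np.linalg.norm(csi, axis=1) * np.linalg.norm(reconstructed, axis=1)
    cos = float(np.mean(num / den))                     # average cosine similarity
    snr_db = 10.0 * np.log10(float(np.mean(np.abs(csi) ** 2)) / mse)
    return {"nmse": nmse, "cos": cos, "cos_sq": cos ** 2, "snr_db": snr_db}

def meets_conditions(metrics: dict, thresholds: dict, require_all: bool = False) -> bool:
    """Thresholds stand in for the first to fourth preset thresholds in the text;
    by default at least one condition must hold, some implementations require all."""
    checks = [
        metrics["nmse"] < thresholds["first"],      # (N)MSE below the first threshold
        metrics["cos"] > thresholds["second"],      # cosine similarity above the second
        metrics["cos_sq"] > thresholds["third"],    # squared cosine similarity above the third
        metrics["snr_db"] > thresholds["fourth"],   # SNR above the fourth threshold
    ]
    return all(checks) if require_all else any(checks)

# Example with arbitrary threshold values (assumptions):
m = joint_use_metrics(np.ones((4, 8)), 0.9 * np.ones((4, 8)))
print(meets_conditions(m, {"first": 0.05, "second": 0.95, "third": 0.9, "fourth": 15.0}))
```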
  • the model information may be sent when the decompression model deployed in the network device is changed, or may be sent in response to the terminal accessing the base station.
  • It is worth noting that having the network device send the model information, so that the terminal determines the compression model used to compress channel state information, ensures that the network device retains control over the terminal's compression model: when the decompression model deployed on the network device side is updated, the compression model deployed on the terminal side can be updated accordingly.
  • In one example, the network device may also be configured with a preset duration, or a preset duration may be specified in the communication protocol between the network device and the terminal, so that the terminal determines the activation time of the compression model based on the preset duration. Furthermore, the network device may determine, according to the preset duration, the activation time of the decompression model corresponding to the compression model used by the terminal.
  • the preset duration is a duration relative to a specific moment. This allows the terminal and network equipment to simultaneously activate the corresponding compression model and decompression model to achieve reliable compression and decompression of channel state information.
  • In the embodiments of the present disclosure, the network device sends model information to the terminal so that the terminal obtains, according to the model information, a compression model for compressing the channel state information. Since the model information includes the first model parameters of the network device's decompression model, the terminal can, based on these first model parameters and the private data of the terminal manufacturer or chip manufacturer, derive by itself a compression model to be used jointly with the network device's decompression model; in this process the network device does not need to know the parameters of the compression model deployed by the terminal, which keeps the compression model private.
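The following is a minimal numerical sketch of that idea: the first model parameters are taken to be the weights of a toy linear decompression model, and the terminal fits a linear compression model to its own private CSI samples against that fixed decoder. The linear structure, the loss and the training loop are assumptions for illustration; the disclosure does not restrict the model types, and a real decoder would itself have been trained on representative CSI data.

```python
# Illustrative sketch (assumption): deriving a compression model from the first
# model parameters of the decompression model plus the terminal's private data.
# Both models are reduced to single linear layers purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
csi_dim, code_dim = 64, 8

# First model parameters received in the model information: here, a fixed
# linear decoder mapping the compressed code back to the CSI vector.
# (A random decoder is used only to keep the sketch self-contained.)
W_dec = rng.standard_normal((csi_dim, code_dim)) / np.sqrt(code_dim)

# Private CSI samples collected by the terminal / chip vendor (never shared).
private_csi = rng.standard_normal((1000, csi_dim))

# Compression model to be learned locally: a linear encoder, code = W_enc @ csi.
W_enc = np.zeros((code_dim, csi_dim))
lr = 1e-2
for _ in range(300):
    codes = private_csi @ W_enc.T                  # (N, code_dim)
    recon = codes @ W_dec.T                        # (N, csi_dim)
    err = recon - private_csi                      # reconstruction error
    # gradient (up to a constant factor) of the mean squared reconstruction
    # error with respect to W_enc, with W_dec held fixed
    grad = (W_dec.T @ err.T @ private_csi) / len(private_csi)
    W_enc -= lr * grad

nmse = np.mean((private_csi @ W_enc.T @ W_dec.T - private_csi) ** 2) / np.mean(private_csi ** 2)
print(f"NMSE after local training: {nmse:.3f}")    # the encoder never leaves the terminal
```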
  • Figure 14 is a flowchart of a method for determining a compression model for compressing channel state information according to an exemplary embodiment. It is applied to a network device. As shown in Figure 14, the method includes:
  • the network device sends model information.
  • the model information includes the first model parameters of the decompression model and the second model parameters of the compression model corresponding to the decompression model.
  • the decompression model is used by the network device to decompress the channel state information sent by the terminal.
  • The model information is used by the terminal to determine a compression model for compressing channel state information.
  • whether the network device sends the second model parameter may be determined based on the capability information reported by the terminal or terminal group.
  • After receiving the model information, the terminal may determine, based on its own capabilities or other parameters, whether to use the first model parameters or the second model parameters to determine its compression model.
  • The terminal can also determine whether to use the second model parameters directly to determine the compression model, or to tune the second model parameters to obtain third model parameters and establish the compression model for compressing channel state information according to the third model parameters.
  • With this solution, by sending both the first model parameters and the second model parameters to the terminal, a terminal with model training capability can obtain the compression model according to the first model parameters, and a terminal without model training capability can obtain the compression model according to the second model parameters, which ensures that terminals from different terminal manufacturers or terminal chip manufacturers can all obtain the compression model effectively.
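Purely as a sketch of this decision logic, the example below chooses among training from the first model parameters, tuning the second model parameters, and applying the second model parameters directly, based on hypothetical capability fields; the field names, the load threshold and the returned labels are assumptions, not part of the disclosure.

```python
# Illustrative sketch (assumption): choosing how the terminal obtains its
# compression model from the received model information, based on capability.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModelInfo:
    first_params: Optional[dict] = None    # decompression-model parameters
    second_params: Optional[dict] = None   # corresponding compression-model parameters

@dataclass
class Capability:
    can_train: bool    # the terminal (or its server) can train a model
    can_tune: bool     # the terminal can tune received parameters
    load: float        # current load, 0.0 .. 1.0

def obtain_compression_model(info: ModelInfo, cap: Capability, load_limit: float = 0.8) -> str:
    """Return which path the terminal takes; the strings stand in for the
    actual model-construction routines of a real implementation."""
    if info.first_params is not None and cap.can_train and cap.load < load_limit:
        return "train compression model from first_params and private data"
    if info.second_params is not None and cap.can_tune:
        return "tune second_params into third_params, then build the compression model"
    if info.second_params is not None:
        return "build the compression model directly from second_params"
    return "send an acquisition request so a server trains the model from first_params"

print(obtain_compression_model(ModelInfo(first_params={}, second_params={}),
                               Capability(can_train=False, can_tune=True, load=0.3)))
```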
  • Figure 15 is a flowchart of a method for determining a compression model for compressing channel state information according to an exemplary embodiment. It is applied to a network device. As shown in Figure 15, the method includes:
  • the network device broadcasts model information.
  • the model information includes the first model parameter of the decompression model.
  • the decompression model is used by the network device to decompress the channel state information sent by the terminal.
  • the model information is used by the terminal to determine a compression model for compressing the channel state information.
  • the model information may also include second model parameters of the compression model corresponding to the decompression model.
  • In one example, the model information may also include the first model parameters of multiple decompression models; or, the model information may include the first model parameters of multiple decompression models and the second model parameters of the compression models corresponding one-to-one to the multiple decompression models.
  • the network device sends model information through broadcast, so that all terminals within its broadcast range can receive the model information, and then determine the compression model for compressing the channel state information based on the model information.
  • Figure 16 is a flowchart of a method for determining a compression model for compressing channel state information according to an exemplary embodiment. It is applied to a network device. As shown in Figure 16, the method includes:
  • the network device sends model information to the terminal in a unicast manner.
  • the model information includes the first model parameter of the decompression model.
  • the decompression model is used by the network device to decompress the channel state information sent by the terminal.
  • The model information is used by the terminal to determine the compression model for compressing channel state information.
  • model information sent through unicast may be different for different terminals.
  • For example, first model information may be sent to a first terminal in a unicast manner, and second model information may be sent to a second terminal in a unicast manner.
  • the model information may also include second model parameters of the compression model corresponding to the decompression model.
  • In one example, the model information may also include the first model parameters of multiple decompression models; or, the model information may include the first model parameters of multiple decompression models and the second model parameters of the compression models corresponding one-to-one to the multiple decompression models.
  • With this solution, model information can be sent to the terminal through unicast, which allows the model information to be sent in a more targeted manner; this avoids excessive use of network resources caused by redundant information in the transmission and effectively improves the utilization rate of network resources.
  • Figure 17 is a flowchart of a method for determining a compression model for compressing channel state information according to an exemplary embodiment. It is applied to a network device. As shown in Figure 17, the method includes:
  • the network device sends model information to the terminal group through multicast.
  • the model information includes the first model parameter of the decompression model.
  • the decompression model is used by the network device to decompress the channel state information sent by the terminal.
  • The model information is used by the terminal to determine the compression model for compressing channel state information.
  • model information sent through multicast may be different for different terminal groups.
  • For example, first model information can be sent to the terminals in a first terminal group through multicast, and second model information can be sent to the terminals in a second terminal group through multicast.
  • the model information may also include second model parameters of the compression model corresponding to the decompression model.
  • In one example, the model information may also include the first model parameters of multiple decompression models; or, the model information may include the first model parameters of multiple decompression models and the second model parameters of the compression models corresponding one-to-one to the multiple decompression models.
  • With this solution, model information can be sent to the terminal group through multicast, which allows the model information to be sent in a more targeted manner; this avoids excessive use of network resources caused by redundant information in the transmission and effectively improves the utilization rate of network resources.
  • Figure 18 is a flowchart of a method of determining a compression model for compressing channel state information according to an exemplary embodiment. It is applied to a network device. As shown in Figure 18, the method includes:
  • For terminals that have not accessed the network device, the network device sends model information through broadcast; for terminals or terminal groups that have completed access to the network device, the network device sends model information through unicast or multicast. The model information includes the first model parameters of a decompression model, the decompression model is used by the network device to decompress the channel state information sent by the terminal, and the model information is used by the terminal to determine a compression model for compressing the channel state information.
  • the broadcast model information may be different from the unicast or multicast model information.
  • the broadcast model information may include first model parameters corresponding to all decompression models deployed in the network device.
  • The unicast or multicast model information may include only the first model parameters corresponding to some of the decompression models deployed in the network device.
  • the model information may also include second model parameters of the compression model corresponding to the decompression model.
  • In one example, the model information may also include the first model parameters of multiple decompression models; or, the model information may include the first model parameters of multiple decompression models and the second model parameters of the compression models corresponding one-to-one to the multiple decompression models.
  • With this solution, model information can be sent in a more targeted manner through unicast or multicast, which avoids excessive use of network resources caused by redundant information in the transmission and effectively improves network resource utilization.
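A rough sketch of this delivery logic is given below. The rule that broadcast carries the first model parameters of all deployed models while unicast/multicast carries only a relevant subset follows the text above; the data structures, field names and the choice to add second model parameters for terminals without training capability are assumptions made for illustration.

```python
# Illustrative sketch (assumption): choosing the delivery mode and the content of
# the model information depending on whether the terminal has accessed the device.

DEPLOYED = {                       # hypothetical decompression models on the network device
    "model_a": {"first": "dec_params_a", "second": "cmp_params_a"},
    "model_b": {"first": "dec_params_b", "second": "cmp_params_b"},
}

def build_model_info(model_ids, include_second: bool) -> dict:
    info = {}
    for mid in model_ids:
        info[mid] = {"first": DEPLOYED[mid]["first"]}
        if include_second:
            info[mid]["second"] = DEPLOYED[mid]["second"]
    return info

def model_info_message(terminal: dict):
    """terminal: {"has_accessed": bool, "relevant_models": [...], "can_train": bool}"""
    if not terminal["has_accessed"]:
        # broadcast: first model parameters of all deployed decompression models
        return "broadcast", build_model_info(DEPLOYED, include_second=False)
    # unicast or multicast: only the subset relevant to this terminal or group,
    # adding second model parameters when the terminal cannot train locally
    return "unicast/multicast", build_model_info(terminal["relevant_models"],
                                                 include_second=not terminal["can_train"])

print(model_info_message({"has_accessed": False, "relevant_models": [], "can_train": True}))
print(model_info_message({"has_accessed": True, "relevant_models": ["model_b"], "can_train": False}))
```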
  • Figure 19 is a flowchart of a method for determining a compression model for compressing channel state information according to an exemplary embodiment. It is applied to a network device. As shown in Figure 19, the method includes:
  • For a terminal or terminal group that has completed access to the network device, the network device sends model information through unicast or multicast in response to the terminal capability information and/or frequency band information reported by the terminal or terminal group.
  • The model information includes the first model parameters of the decompression model; the decompression model is used by the network device to decompress the channel state information sent by the terminal, and the model information is used by the terminal to determine the compression model for compressing the channel state information.
  • Corresponding to different capability information and/or frequency band information, the model information sent by the network device may be different.
  • For example, if the frequency band information indicates that the terminals in the terminal group use the first frequency band, model information including the first model parameters of the decompression model corresponding to the first frequency band may be sent to the terminal group in a multicast manner; the model information may also include the second model parameters of the compression model corresponding to the decompression model of the first frequency band, so that the terminal can obtain the compression model by tuning the second model parameters.
  • If the frequency band information indicates that the terminals in the terminal group use the second frequency band, model information including the first model parameters of the decompression model corresponding to the second frequency band can be sent to the terminal group in a multicast manner.
  • If the frequency band information indicates that the terminals in the terminal group use both the first frequency band and the second frequency band, model information including the first model parameters of the decompression model corresponding to the first frequency band and the first model parameters of the decompression model corresponding to the second frequency band can be sent to the terminal group through multicast.
  • Similarly, for a single terminal, model information including the first model parameters of the decompression model corresponding to the frequency band reported by the terminal, and the second model parameters of the compression model corresponding to that decompression model, may be sent to the terminal in a unicast manner.
  • the model information may also include second model parameters of the compression model corresponding to the decompression model.
  • In one example, the model information may also include the first model parameters of multiple decompression models; or, the model information may include the first model parameters of multiple decompression models and the second model parameters of the compression models corresponding one-to-one to the multiple decompression models.
  • With this solution, the network device receives the capability information and/or frequency band information of the terminal or terminal group that has accessed it, and sends the model information through unicast or multicast; transmitting model information targeted at the terminal's or terminal group's own capabilities and the frequency bands it uses avoids wasting communication resources on redundant information and effectively improves communication resource utilization.
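As an illustrative sketch only (the band names, fields and the rule for adding second model parameters are assumptions), the function below assembles the model information for a terminal group from its reported frequency bands and capability, along the lines of the examples above.

```python
# Illustrative sketch (assumption): selecting which model parameters go into the
# model information for a terminal group, based on the reported frequency bands
# and capability information.

DEPLOYED = {   # hypothetical per-band decompression/compression parameters on the network device
    "band_1": {"first": "dec_params_band_1", "second": "cmp_params_band_1"},
    "band_2": {"first": "dec_params_band_2", "second": "cmp_params_band_2"},
}

def model_info_for_group(reported_bands, group_can_train: bool) -> dict:
    info = {}
    for band in reported_bands:                    # e.g. ["band_1"], or ["band_1", "band_2"]
        entry = {"first": DEPLOYED[band]["first"]}
        if not group_can_train:
            # terminals that cannot train locally also receive the second model
            # parameters, which they can tune or apply directly
            entry["second"] = DEPLOYED[band]["second"]
        info[band] = entry
    return info

# Group uses only the first frequency band and cannot train its own model:
print(model_info_for_group(["band_1"], group_can_train=False))
# Group uses both frequency bands and can train locally:
print(model_info_for_group(["band_1", "band_2"], group_can_train=True))
```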
  • Figure 20 is a flow chart illustrating a method of determining a compression model for compressing channel state information according to an exemplary embodiment. It is applied to a network device. As shown in Figure 20, the method includes:
  • The network device sends model information, where the model information includes first model parameters of multiple decompression models; or, the model information includes first model parameters of multiple decompression models, and second model parameters of the compression models that correspond one-to-one to the multiple decompression models.
  • the decompression model is used by the network device to decompress the channel state information sent by the terminal, and the model information is used by the terminal to determine the compression model used to compress the channel state information.
  • the network device receives the target model determined by the terminal reported by the terminal.
  • the target model is determined by the terminal from the model information.
  • the model parameters of the target model are used by the terminal to determine the compression model used by the terminal to compress channel state information.
  • The model information includes the first model parameters of multiple decompression models; or, the model information includes the first model parameters of multiple decompression models, and the second model parameters of the compression models that correspond one-to-one to the multiple decompression models.
  • Since the terminal will select only one of the decompression models, or the model parameters of the corresponding compression model, to obtain its compression model, the network device needs to determine the target model selected by the terminal if the corresponding compression model and decompression model are to be used jointly, so as to ensure that the network device can effectively decompress the channel state information.
  • the terminal may report the identification information of the target model, so that the network device determines the target model determined by the terminal based on the identification information.
  • the terminal may report the capability information of the terminal and/or the frequency band information used by the terminal, so that the network device determines the target model determined by the terminal based on the capability information and/or the frequency band information.
  • With this solution, when receiving the compressed channel state information sent by the terminal, the network device can determine the decompression model corresponding to the compression model used by the terminal and decompress the channel state information, thereby effectively ensuring the communication quality between the terminal and the network device.
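The sketch below illustrates, with hypothetical identifiers and mappings, how the network device might resolve which deployed decompression model to use from either the reported model identification or the reported capability/frequency band information.

```python
# Illustrative sketch (assumption): the network device resolving which deployed
# decompression model to use for a given terminal's reports.

DECOMPRESSION_BY_ID = {          # hypothetical mapping kept by the network device
    "0001": "decompression_model_band_1",
    "0002": "decompression_model_band_2",
}
DECOMPRESSION_BY_BAND = {
    "band_1": "decompression_model_band_1",
    "band_2": "decompression_model_band_2",
}

def resolve_decompression_model(report: dict) -> str:
    """report: what the terminal sent, e.g. {"target_model_id": "0001"} or
    {"bands": ["band_2"], "can_train": True}."""
    if "target_model_id" in report:
        # the terminal reported the identification information of its target model
        return DECOMPRESSION_BY_ID[report["target_model_id"]]
    # otherwise fall back to the capability/band report: pick the model for the band
    return DECOMPRESSION_BY_BAND[report["bands"][0]]

print(resolve_decompression_model({"target_model_id": "0002"}))
print(resolve_decompression_model({"bands": ["band_1"], "can_train": False}))
```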
  • Figure 21 is an interaction diagram illustrating a method of determining a compression model for compressing channel state information according to an exemplary embodiment. As shown in Figure 21, the method includes:
  • the network device sends model information to the terminal.
  • the model information may include first model parameters of multiple decompression models; or, the model information may include first model parameters of multiple decompression models, and second model parameters of the compression model that correspond one-to-one to the multiple decompression models.
  • the model information may be sent by multicast, and the terminal is a terminal in the terminal group, or it may be sent by broadcast.
  • the terminal determines the target model from the model information based on its capability information and/or frequency band information used by the terminal.
  • the terminal reports its capability information and/or used frequency band information to the network device.
  • the network device determines the target model determined by the terminal based on the capability information and/or frequency band information reported by the terminal.
  • the terminal may report the identification information of the target model determined by it, so that the network device determines the target model determined by the terminal based on the identification information.
  • the terminal determines a compression model used by the terminal to compress the channel state information according to the model parameters of the target model.
  • the target model can be a compression model or a decompression model.
  • If the target model is a decompression model, the terminal can obtain the compression model by training according to the first model parameters of the decompression model, where the step of training the compression model can also be performed by a server.
  • If the target model is a compression model, the terminal can directly apply the second model parameters corresponding to the compression model, or the terminal can first tune the second model parameters and then establish the compression model according to the third model parameters obtained by the tuning.
  • Whether the compression model is trained on the terminal itself or through the server can be determined based on the terminal's capability information; for example, if the current load of the terminal is too high for it to perform the training itself, the terminal can send an acquisition request to the server so that the compression model is trained on the server.
  • Similarly, whether to directly apply the second model parameters of the compression model corresponding to the target model, or to tune the second model parameters before constructing the compression model, may also be determined based on the terminal's capability information.
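Mirroring the earlier training sketch, the following is a minimal example of the tuning path: the received second model parameters are used as the starting point and adjusted with a short gradient run on the terminal's private data to obtain the third model parameters. The linear model shape, learning rate and data are assumptions for illustration only.

```python
# Illustrative sketch (assumption): tuning the received second model parameters
# on the terminal's private data to obtain the third model parameters.
import numpy as np

rng = np.random.default_rng(1)
csi_dim, code_dim = 32, 16

# Decoder described by the first model parameters (held fixed on the terminal side).
W_dec = rng.standard_normal((csi_dim, code_dim)) / np.sqrt(code_dim)
# Encoder described by the second model parameters (starting point for tuning).
W_second = rng.standard_normal((code_dim, csi_dim)) * 0.1

private_csi = rng.standard_normal((200, csi_dim))   # terminal-private CSI samples

def nmse(W_enc):
    recon = private_csi @ W_enc.T @ W_dec.T
    return np.mean((recon - private_csi) ** 2) / np.mean(private_csi ** 2)

W_third, lr = W_second.copy(), 0.05
for _ in range(200):                                  # a short tuning run, not full retraining
    err = private_csi @ W_third.T @ W_dec.T - private_csi
    # gradient (up to a constant factor) of the reconstruction error w.r.t. the encoder
    grad = (W_dec.T @ err.T @ private_csi) / len(private_csi)
    W_third -= lr * grad

print(f"NMSE with second model parameters: {nmse(W_second):.3f}")
print(f"NMSE with tuned third model parameters: {nmse(W_third):.3f}")
```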
  • S2106. Determine the activation time of the compression model according to the preset duration in the communication protocol or the preset duration configured by the network device.
  • the preset duration may be a duration relative to a specific time.
  • The specific moment includes the end time of the PDSCH in which the terminal receives the model information, and/or the moment of the last symbol of the uplink resource on which the terminal feeds back HARQ-ACK information for the PDSCH carrying the model information.
  • S2107. Enable the compression model to compress the channel state information at the activation time.
  • the network device can synchronously enable the decompression model corresponding to the compression model at the enabling time.
  • Alternatively, the decompression models remain enabled in the network device, and the network device determines the decompression model corresponding to the compressed channel state information received from the terminal in order to decompress it.
  • Figure 22 is a block diagram of a device 22 for determining a compression model for compressing channel state information according to an exemplary embodiment.
  • the device 22 is applied to a terminal.
  • the device 22 includes:
  • the receiving module 2201 is configured to receive model information sent by the network device.
  • the model information includes the first model parameter of the decompression model.
  • the decompression model is used by the network device to decompress the channel state information sent by the terminal;
  • the determination module 2202 is configured to determine a compression model used by the terminal to compress the channel state information according to the model information.
  • the model information also includes second model parameters of the compression model corresponding to the decompression model.
  • the determining module 2202 includes:
  • the training submodule is configured to train according to the first model parameters to obtain a compression model corresponding to the decompression model.
  • the determining module 2202 includes:
  • the local training module is configured to train locally on the terminal to obtain the compression model corresponding to the decompression model
  • the server training module is configured to send an acquisition request to the server and obtain the compression model corresponding to the decompression model sent by the server.
  • the determining module 2202 includes:
  • the tuning sub-module is configured to tune the second model parameters to obtain the third model parameters
  • the first establishing sub-module is configured to establish a compression model for compressing channel state information according to the third model parameter.
  • the determining module 2202 includes:
  • the second establishment submodule is configured to establish a compression model for compressing channel state information according to the second model parameters.
  • the compression model used by the terminal to compress channel state information satisfies at least one of the following model performance conditions:
  • the mean square error or normalized mean square error when the compression model and decompression model are used together is less than the first preset threshold
  • the cosine similarity when the compression model and the decompression model are used together is greater than the second preset threshold
  • the squared cosine similarity when the compression model and the decompression model are used together is greater than the third preset threshold
  • the signal-to-noise ratio when the compression model and the decompression model are used together is greater than the fourth preset threshold.
  • the model information includes first model parameters of multiple decompression models; alternatively, the model information includes first model parameters of multiple decompression models, and second model parameters of the compression model that correspond one-to-one to the multiple decompression models.
  • the determining module 2202 includes:
  • the first determination sub-module is configured to determine the model parameters of the target model from the model information based on at least the terminal's capability information and/or the frequency band information used by the terminal, and the target model is a decompression model or a compression model;
  • the second determination sub-module is configured to determine a compression model used by the terminal to compress channel state information according to the model parameters of the target model.
  • device 22 also includes:
  • the reporting module is configured to report the target model determined by the terminal to the network device.
  • the reporting module includes:
  • the first reporting submodule is configured to report the identification information of the target model to the network device
  • the second reporting submodule is configured to report the terminal's capability information and/or the frequency band information used by the terminal to the network device.
  • the terminal's capability information and/or the frequency band information used by the terminal are used by the network device to determine the target model determined by the terminal.
  • device 22 also includes:
  • the activation module is configured to determine the activation time of the compression model based on a preset duration in the communication protocol or a preset duration configured on the network device, and the preset duration is a duration relative to a specific moment.
  • The specific moment includes the end time of the PDSCH in which the terminal receives the model information, and/or the moment when the terminal feeds back the last symbol of the uplink resource of the HARQ-ACK information for the PDSCH including the model information.
  • Figure 23 is a block diagram of a device 23 for determining a compression model for compressing channel state information according to an exemplary embodiment.
  • the device 23 is applied to network equipment.
  • the device 23 includes:
  • the sending module 2301 is configured to send model information.
  • the model information includes the first model parameter of the decompression model.
  • the decompression model is used by the network device to decompress the channel state information sent by the terminal.
  • The model information is used by the terminal to determine the compression model for compressing the channel state information.
  • the model information also includes second model parameters of the compression model corresponding to the decompression model.
  • the sending module 2301 includes:
  • the first broadcast submodule is configured to broadcast model information; or,
  • the unicast submodule is configured to send model information to the terminal through unicast; or,
  • the multicast submodule is configured to send model information to the terminal group through multicast.
  • the sending module 2301 includes:
  • the first sending submodule is configured to send model information through broadcast to terminals that have not accessed the network device;
  • the second sending sub-module is configured to send model information through unicast or multicast to the terminal or terminal group that has completed access to the network device.
  • the second sending sub-module includes:
  • the third sending submodule is configured to, for a terminal or terminal group that has completed access to the network device, send model information through unicast or multicast in response to the terminal capability information and/or frequency band information sent by the terminal or terminal group.
  • Corresponding to different capability information and/or frequency band information, the model information sent by the network device may be different.
  • the model information includes first model parameters of multiple decompression models; alternatively, the model information includes first model parameters of multiple decompression models, and second model parameters of the compression model that correspond one-to-one to the multiple decompression models.
  • device 23 also includes:
  • the first receiving module is configured to receive the terminal-determined target model reported by the terminal.
  • the target model is determined by the terminal from the model information.
  • the model parameters of the target model are used by the terminal to determine the compression model used by the terminal to compress channel state information.
  • the network device is a base station
  • the sending module 2301 includes:
  • the RRC signaling sending module is configured to send model information through RRC signaling.
  • the network device is an AMF network element in the core network
  • the sending module 2301 includes:
  • the NAS signaling sending module is configured to send model information through NAS signaling.
  • The present disclosure also provides a computer-readable storage medium on which computer program instructions are stored; when the program instructions are executed by a processor, the steps of the method for determining a compression model for compressing channel state information provided by any of the foregoing method embodiments of the present disclosure are implemented.
  • Figure 24 is a block diagram of a terminal 2400 according to an exemplary embodiment.
  • the terminal 2400 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like.
  • the terminal 2400 may include one or more of the following components: a processing component 2402, a memory 2404, a power component 2406, a multimedia component 2408, an audio component 2410, an input/output (I/O) interface 2412, a sensor component 2414, and communications component 2416.
  • Processing component 2402 generally controls the overall operations of terminal 2400, such as operations associated with display, phone calls, data communications, camera operations, and recording operations.
  • the processing component 2402 may include one or more processors 2420 to execute instructions to complete all or part of the steps of the above method. Additionally, processing component 2402 may include one or more modules that facilitate interaction between processing component 2402 and other components. For example, processing component 2402 may include a multimedia module to facilitate interaction between multimedia component 2408 and processing component 2402.
  • Memory 2404 is configured to store various types of data to support operations at terminal 2400. Examples of such data include instructions for any application or method operating on terminal 2400, contact data, phonebook data, messages, pictures, videos, etc.
  • Memory 2404 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disk.
  • Power component 2406 provides power to various components of terminal 2400.
  • Power components 2406 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power to terminal 2400.
  • Multimedia component 2408 includes a screen that provides an output interface between the terminal 2400 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide action.
  • multimedia component 2408 includes a front-facing camera and/or a rear-facing camera.
  • the front camera and/or the rear camera may receive external multimedia data.
  • Each front-facing camera and rear-facing camera can be a fixed optical lens system or have focal length and optical zoom capability.
  • Audio component 2410 is configured to output and/or input audio signals.
  • audio component 2410 includes a microphone (MIC) configured to receive external audio signals when terminal 2400 is in operating modes, such as call mode, recording mode, and voice recognition mode. The received audio signals may be further stored in memory 2404 or sent via communications component 2416.
  • audio component 2410 also includes a speaker for outputting audio signals.
  • the I/O interface 2412 provides an interface between the processing component 2402 and a peripheral interface module, which may be a keyboard, a click wheel, a button, etc. These buttons may include, but are not limited to: Home button, Volume buttons, Start button, and Lock button.
  • Sensor component 2414 includes one or more sensors for providing various aspects of status assessment for terminal 2400.
  • For example, the sensor component 2414 can detect the open/closed state of the terminal 2400 and the relative positioning of components, such as the display and keypad of the terminal 2400.
  • The sensor component 2414 can also detect a position change of the terminal 2400 or of a component of the terminal 2400, the presence or absence of user contact with the terminal 2400, the orientation or acceleration/deceleration of the terminal 2400, and temperature changes of the terminal 2400.
  • Sensor assembly 2414 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
  • Sensor assembly 2414 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor component 2414 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • the communication component 2416 is configured to facilitate wired or wireless communication between the terminal 2400 and other devices.
  • the terminal 2400 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof.
  • the communication component 2416 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communications component 2416 also includes a near field communications (NFC) module to facilitate short-range communications.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
  • The terminal 2400 may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors or other electronic components, for executing the above method.
  • In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions, such as the memory 2404 including instructions executable by the processor 2420 of the terminal 2400 to complete the above method, is also provided.
  • the non-transitory computer-readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
  • In another exemplary embodiment, a computer program product is also provided, comprising a computer program executable by a programmable device, the computer program having code portions for performing the above method of determining a compression model for compressing channel state information when executed by the programmable device.
  • FIG. 25 is a block diagram of a network device according to an exemplary embodiment.
  • the network device 2500 may be provided as a base station or as other network logical entities in a core network.
  • network device 2500 includes a processing component 2522, which further includes one or more processors, and memory resources represented by memory 2532 for storing instructions, such as application programs, executable by processing component 2522.
  • the application program stored in memory 2532 may include one or more modules, each corresponding to a set of instructions.
  • the processing component 2522 is configured to execute instructions to perform steps of the method of determining a compression model for compressing channel state information provided by the above method embodiments.
  • Network device 2500 may also include a power supply component 2526 configured to perform power management of device 2500, a wired or wireless network interface 2550 configured to connect network device 2500 to a network, and an input-output (I/O) interface 2558.
  • Network device 2500 may operate based on an operating system stored in memory 2532, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™ or the like.
  • In another exemplary embodiment, a computer program product is also provided, comprising a computer program executable by a programmable device, the computer program having code portions for performing the above method of determining a compression model for compressing channel state information when executed by the programmable device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Security & Cryptography (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

A method, apparatus and storage medium for determining a compression model for compressing channel state information. The method includes: a terminal receives model information sent by a network device, the model information including first model parameters of a decompression model, the decompression model being used by the network device to decompress channel state information sent by the terminal; and the terminal determines, according to the model information, a compression model used by the terminal to compress channel state information. By receiving the model information sent by the network device and obtaining, according to the model information, the compression model for compressing channel state information, and because the model information includes the first model parameters of the network device's decompression model, the terminal can, based on the first model parameters and the private data of the terminal manufacturer or chip manufacturer, derive by itself a compression model to be used jointly with the network device's decompression model; in this process the network device does not need to know the parameters of the compression model deployed by the terminal, which keeps the compression model private.

Description

确定用于压缩信道状态信息的压缩模型的方法、装置及存储介质 技术领域
本公开涉及通信领域,尤其涉及一种确定用于压缩信道状态信息的压缩模型的方法、装置及存储介质。
背景技术
在无线通信领域,信道状态信息(CSI,Channel State Information)就是通信链路的信道属性。它描述了信号在每条传输路径上的衰弱因子,即信道增益矩阵中每个元素的值,如信号散射(Scattering),环境衰弱(fading,multipath fading or shadowing fading),距离衰减(power decay of distance)等信息。CSI可以使通信系统适应当前的信道条件,在多天线系统中为高可靠性高速率的通信提供了保障。
在相关技术中,基于AI(Artificial Intelligence,人工智能)模型对CSI增强已是行业趋势,然而,相关技术中的用于CSI增强的AI模型部署与传输方式无法满足不同厂商对于模型参数以及模型数据的私有化的需求。
发明内容
为克服相关技术中存在的问题,本公开提供一种确定用于压缩信道状态信息的压缩模型的方法、装置及存储介质。该压缩模型用于基于AI进行信道状态信息的上报。
根据本公开实施例的第一方面,提供一种确定用于压缩信道状态信息的压缩模型的方法,应用于终端,所述方法包括:
接收网络设备发送的模型信息,所述模型信息包括解压模型的第一模型参数,所述解压模型用于所述网络设备解压缩所述终端发送的信道状态信息;
根据所述模型信息确定所述终端用于压缩信道状态信息的压缩模型。
根据本公开实施例的第二方面,提供一种确定用于压缩信道状态信息的压缩模型的方法,应用于网络设备,所述方法包括:
发送模型信息,所述模型信息包括解压模型的第一模型参数,所述解压模型用于所述网络设备解压缩终端发送的信道状态信息,所述模型信息用于终端确定用于压缩信道状态信息的压缩模型。
根据本公开实施例的第三方面,提供一种确定用于压缩信道状态信息的压缩模型的装置,应用于终端,所述装置包括:
接收模块,被配置为接收网络设备发送的模型信息,所述模型信息包括解压模型的第一模型参数,所述解压模型用于所述网络设备解压缩所述终端发送的信道状态信息;
确定模块,被配置为根据所述模型信息确定所述终端用于压缩信道状态信息的压缩模型。
根据本公开实施例的第四方面,提供一种确定用于压缩信道状态信息的压缩模型的装置,应用于网络设备,所述装置包括:
发送模块,被配置为发送模型信息,所述模型信息包括解压模型的第一模型参数,所述解压模型用于所述网络设备解压缩所述终端发送的信道状态信息,所述模型信息用于终端确定用于压缩信道状态信息的压缩模型。
根据本公开实施例的第五方面,提供一种确定用于压缩信道状态信息的压缩模型的装置,包括:
处理器;
用于存储处理器可执行指令的存储器;
其中,所述处理器被配置为:
接收网络设备发送的模型信息,所述模型信息包括解压模型的第一模型参数,所述解压模型用于所述网络设备解压缩所述终端发送的信道状态信息;
根据所述模型信息确定所述终端用于压缩信道状态信息的压缩模型。
根据本公开实施例的第六方面,提供一种确定用于压缩信道状态信息的压缩模型的装置,包括:
处理器;
用于存储处理器可执行指令的存储器;
其中,所述处理器被配置为:
发送模型信息,所述模型信息包括解压模型的第一模型参数,所述解压模型用于所述网络设备解压缩所述终端发送的信道状态信息,所述模型信息用于终端确定用于压缩信道状态信息的压缩模型。
根据本公开实施例的第七方面,提供一种计算机可读存储介质,其上存储有计算机程序指令,该程序指令被处理器执行时实现本公开第一方面中任一项所述方法的步骤,或者,本公开第二方面中任一项所述方法的步骤。
本公开的实施例提供的技术方案中,终端通过接受网络设备发送的模型信息,并根据模型信息得到用于压缩CSI的压缩模型,由于该模型信息中包括网络设备的解压模型的第一模型参数,因此终端根据该第一模型参数以及,终端厂商或者芯片厂商的私有化数据就能够自行得到用于与网络设备的解压模型联合使用的压缩模型,而该过程中网络设备无需获知终端部署的压缩模型的参数,从而保证了压缩模型的私有化。
应当理解的是,以上的一般描述和后文的细节描述仅是示例性和解释性的,并不能限制本公开。
附图说明
本公开上述的和/或附加的方面和优点从下面结合附图对实施例的描述中将变得明显和容易理解,其中:
图1A是相关技术中的一种网络系统架构的示意图。
图1B是相关技术中的另一种网络系统架构的示意图。
图2是根据一示例性实施例示出的一种确定用于压缩信道状态信息的压缩模型的方法的流程图。
图3是根据一示例性实施例示出的一种确定用于压缩信道状态信息的压缩模型的方法的流程图。
图4是根据一示例性实施例示出的一种确定用于压缩信道状态信息的压缩模型的方法的流程图。
图5是根据一示例性实施例示出的一种确定用于压缩信道状态信息的压缩模型的方法的流程图。
图6是根据一示例性实施例示出的一种确定用于压缩信道状态信息的压缩模型的方法的流程图。
图7是根据一示例性实施例示出的一种确定用于压缩信道状态信息的压缩模型的方法的流程图。
图8是根据一示例性实施例示出的一种确定用于压缩信道状态信息的压缩模型的方法的流程图。
图9是根据一示例性实施例示出的一种确定用于压缩信道状态信息的压缩模型的方法的流程图。
图10是根据一示例性实施例示出的一种确定用于压缩信道状态信息的压缩模型的方法的流程图。
图11是根据一示例性实施例示出的一种确定用于压缩信道状态信息的压缩模型的方法的流程图。
图12是根据一示例性实施例示出的一种确定用于压缩信道状态信息的压缩模型的方法的流程图。
图13是根据一示例性实施例示出的一种确定用于压缩信道状态信息的压缩模型的方法的流程图。
图14是根据一示例性实施例示出的一种确定用于压缩信道状态信息的压缩模型的方法的流程图。
图15是根据一示例性实施例示出的一种确定用于压缩信道状态信息的压缩模型的方法的流程图。
图16是根据一示例性实施例示出的一种确定用于压缩信道状态信息的压缩模型的方法的流程图。
图17是根据一示例性实施例示出的一种确定用于压缩信道状态信息的压缩模型的方法的流程图。
图18是根据一示例性实施例示出的一种确定用于压缩信道状态信息的压缩模型的方法的流程图。
图19是根据一示例性实施例示出的一种确定用于压缩信道状态信息的压缩模型的方法的流程图。
图20是根据一示例性实施例示出的一种确定用于压缩信道状态信息的压缩模型的方法的流程图。
图21是根据一示例性实施例示出的一种确定用于压缩信道状态信息的压缩模型的方法的交互图。
图22是根据一示例性实施例示出的一种确定用于压缩信道状态信息的压缩模型的装置的框图。
图23是根据一示例性实施例示出的一种确定用于压缩信道状态信息的压缩模型的装置的框图。
图24是根据一示例性实施例示出的一种终端的框图。
图25是根据一示例性实施例示出的一种网络设备的框图。
具体实施方式
下面详细描述本公开的实施例,所述实施例的示例在附图中示出,其中自始至终相同或类似的标号表示相同或类似的元件或具有相同或类似功能的元件。下面通过参考附图描述的实施例是示例性的,旨在用于解释本公开,而不能理解为对本公开的限制。
在无线通信领域,信道状态信息(CSI,Channel State Information)就是通信链路的信道属性。它描述了信号在每条传输路径上的衰弱因子,即信道增益矩阵中每个元素的值,如信号散射(Scattering),环境衰弱(fading,multipath fading or shadowing fading),距离衰减(power decay of distance)等信息。CSI可以使通信系统适应当前的信道条件,在多天线系统中为高可靠性高速率的通信提供了保障。
在相关技术中,基于AI(Artificial Intelligence,人工智能)模型对CSI增强已是行业趋势,然而,相关技术中的用于CSI增强的AI模型部署与传输方式无法满足不同厂商对于模型参数以及模型数据的私有化的需求。
为了解决上述问题,本公开实施例提供一种确定用于压缩信道状态信息的压缩模型 的方法、装置及存储介质。
下面首先介绍本公开实施例的实施环境:
本公开实施例可以适用于于4G(第四代移动通信系统)演进系统,如长期演进(long term evolution,LTE)系统,或者还可以为5G(第五代移动通信系统)系统,如采用新型无线入技术(new radio access technology,New RAT)的接入网;云无线接入网(Cloud Radio Access Network,CRAN)等通信系统。
图1A示例性示出了本公开实施例适用的一种系统架构示意图。应理解,本公开实施例并不限于图1A所示的系统中,此外,图1A中的装置可以是硬件,也可以是从功能上划分的软件或者以上二者结合后的结构。如图1A所示,本公开实施例提供的系统架构包括终端、基站、移动性管理网元、会话管理网元、用户面网元以及数据网络(data network,DN)。终端通过基站以及用户面网元与DN通信。
其中图1A中所示的网元既可以是4G架构中的网元、还可以是5G架构中的网元。
数据网络(data network,DN),为用户提供数据传输服务,可以是协议数据单元(Protocol Data Unit,PDN)网络,如因特网(internet)、IP多媒体业务(IP Multi-media Service,IMS)等。
参见图1B所示的5G的系统架构示意图:移动性管理网元可以包括是5G中的接入与移动性管理实体(access and mobility management function,AMF)。移动性管理网元负责移动网络中终端的接入与移动性管理。其中,AMF负责终端接入与移动性管理,NAS消息路由,会话管理功能实体(session management function,SMF)选择等。AMF可以作为中间网元,用来传输终端和SMF之间的会话管理消息。
会话管理网元,负责转发路径管理,如向用户面网元下发报文转发策略,指示用户面网元根据报文转发策略进行报文处理和转发。会话管理网元可以是5G中的SMF(如图1B所示),负责会话管理,如会话创建/修改/删除,用户面网元选择以及用户面隧道信息的分配和管理等。
用户面网元可以是5G架构中的用户面功能实体(user plane function,UPF),如图1B所示。UPF负责报文处理与转发。
本公开实施例提供的系统架构中还可以包括数据管理网元,用于处理终端设备标识,接入鉴权,注册以及移动性管理等。在5G通信系统中,该数据管理网元可以是统一数据管理(unified data management,UDM)网元。
本公开实施例提供的系统架构中还可以包括策略控制功能实体(policy control function,PCF)或者为策略计费控制功能实体(policy and charging control function,PCRF)。其中,PCF或者PCRF负责策略控制决策和基于流计费控制。
本公开实施例提供的系统架构中还可以包括网络存储网元,用于维护网络中所有网络功能服务的实时信息。在5G通信系统中,该网络存储网元可以是网络存储库功能(network repository function,NRF)网元。网络存储库网元中可以存储了很多网元的信息,比如SMF的信息,UPF的信息,AMF的信息等。网络中AMF、SMF、UPF等网元都可能与NRF相连,一方面可以将自身的网元信息注册到NRF,另一方面其他网元可以从NRF中获得已经注册过的网元的信息。其他网元(比如AMF)可以根据网元类型、数据网络标识、未知区域信息等,通过向NRF请求获得可选的网元。如果域名系统(domain name system,DNS)服务器集成在NRF,那么相应的选择功能网元(比如AMF)可以向NRF请求获得要选择的其他网元(比如SMF)。
基站作为接入网络(access network,AN)的一个具体实现形式,还可以称为接入节点,如果是无线接入的形式,称为无线接入网(radio access network,RAN),如图1B所示,为终端提供无线接入服务。接入节点具体可以是全球移动通信(global system for mobile  communication,GSM)系统或码分多址(code division multiple access,CDMA)系统中的基站,也可以是宽带码分多址(wideband code division multiple access,WCDMA)系统中的基站(NodeB),还可以是LTE系统中的演进型基站(evolutional node B,eNB或eNodeB),或者是5G网络中的基站设备、小基站设备、无线访问节点(WiFiAP)、无线互通微波接入基站(worldwide interoperability for microwave access base station,WiMAX BS)等,本公开对此并不限定。
终端,也可称为接入终端、用户设备(user equipment,UE),用户单元、用户站、移动站、移动台、远方站、远程终端、移动设备、用户终端、无线通信设备、用户代理或用户装置等。图1B以UE为例进行说明。终端可以是蜂窝电话、无绳电话、会话启动协议(session initiation protocol,SIP)电话、无线本地环路(wireless local loop,WLL)站、个人数字处理(personal digital assistant,PDA)、具有无线通信功能的手持设备、计算设备或连接到无线调制解调器的其它处理设备、车载设备、可穿戴设备、物联网终端设备,比如火灾检测传感器、智能水表/电表、工厂监控设备等等。
上述功能既可以是硬件设备中的网络元件,也可以是在专用硬件上运行软件功能,或者是平台(例如,云平台)上实例化的虚拟化功能。
本公开实施例中,采用的是在基站和终端侧分别进行模型部署,CSI通过压缩模型进行压缩在终端侧执行,CSI通过解压模型进行解压在基站侧执行,解压模型与压缩模型联合使用的方案。本公开实施例中涉及到的模型参数(如后文描述的第一模型参数、第二模型参数以及第三模型参数)可以包括模型内部的配置变量和/或模型外部的配置变量(即模型超参数)。本公开实施例对于解压模型以及压缩模型的类型不做限定,例如对于神经网络模型而言,模型内部配置的变量例如可以包括各神经元节点的计算参数矩阵,超参数例如可以是训练神经网络的学习步长,而对于支持向量机而言,模型内部配置的变量例如可以是支持向量,超参数例如可以是支持向量机的sigma参数。以神经网络为例,在一种可能的实现方式中,本公开实施例中的模型参数具体可以包括以下参数中的至少一项:模型类型、学习步长、各神经元节点的计算参数矩阵、对该计算参数矩阵进行填充的填充值、各神经元节点的偏差、各神经元节点激活函数。
图2是根据一示例性实施例示出的一种确定用于压缩信道状态信息的压缩模型的方法的流程图,应用于终端,如图2所示,该方法包括:
S201、终端接收网络设备发送的模型信息,模型信息包括解压模型的第一模型参数,解压模型用于网络设备解压缩终端发送的信道状态信息。
S202、终端根据模型信息确定终端用于压缩信道状态信息的压缩模型。
在本公开实施例中,所述第一模型参数用于指示终端与所述解压模型对应的压缩模型。
其中,网络设备可以是如图1A中所示的接入网设备(例如基站),也可以是核心网中的其他网络逻辑实体,应理解的是,通信网络包括了接入网(如图1A中所示的基站)、承载网以及核心网(如图1A中所示的移动性管理网元、会话管理网元以及图1B中所示的NRF、AMF)。上述解压模型的第一模型参数可以是基站通过训练得到的,本公开对该解压模型的训练方式不作具体限定。
可以理解的是,所述第一模型参数用于指示终端与所述解压模型对应的压缩模型;终端可以至少基于解压模型的第一模型参数得到该用于能够与该解压模型联合使用的用于压缩CSI的压缩模型。例如,该终端可以基于该第一模型参数以及该终端采集到的私有化数据,训练得到与解压模型对应的压缩模型。
在一示例中,上述模型信息可以是基站通过RRC信令发送的,此时,网络设备即可以为基站。或者,在另一示例中,上述模型信息还可以核心网中的AMF,即接入与移动 性管理实体,通过NAS(Non-access stratum,非接入层)层信令发送的,此时,网络设备即可以为AMF。
在一示例中,该模型信息可以是网络设备中部署的解压模型发生变更的情况下发送的,也可以是响应于终端接入基站时发送的,还可以是网络设备以一定周期定期发送的。其中,在终端中的压缩模型发生变化时,并不会对网络设备中部署的解压模型造成影响,也就是说,网络设备的解压模型不会受到终端的压缩模型的影响,进而有效地保证了网络设备中部署的解压模型的复杂度不会上升。
在另一示例中,终端用于压缩信道状态信息的压缩模型满足以下模型性能条件中的至少一者:压缩模型与解压模型联合使用情况下的均方误差或者归一化均方误差小于第一预设阈值;压缩模型与解压模型联合使用情况下的余弦相似度大于第二预设阈值;压缩模型与解压模型联合使用情况下的余弦相似度平方大于第三预设阈值;压缩模型与解压模型联合使用情况下的信噪比大于第四预设阈值。
其中,在一些可能的实施方式中,终端用于压缩信道状态信息的压缩模型需要同时满足以上的多个条件,若该压缩模型无法满足多个条件其中的任意一个条件,则该终端可以进一步地对该压缩模型的参数进行调整,或者令网络设备重新对该压缩模型发送模型信息,进而保证基于终端发送的模型信息得到的压缩模型可以可靠地对信道状态信息。
在本公开实施例中,终端通过接受网络设备发送的模型信息,并根据模型信息得到用于压缩信道状态信息的压缩模型,由于该模型信息中包括网络设备的解压模型的第一模型参数,因此终端根据该第一模型参数,以及终端厂商或者芯片厂商的私有化数据就能够自行得到用于与网络设备的解压模型联合使用的压缩模型,而该过程中网络设备无需获知终端部署的压缩模型的参数,从而保证了压缩模型的私有化。
图3是根据一示例性实施例示出的一种确定用于压缩信道状态信息的压缩模型的方法的流程图,应用于终端,如图3所示,该方法包括:
S301、终端接收网络设备发送的模型信息,模型信息包括解压模型的第一模型参数以及与该解压模型对应的压缩模型的第二模型参数,解压模型用于网络设备解压缩终端发送的信道状态信息。
S302、终端根据模型信息确定终端用于压缩信道状态信息的压缩模型。
可以理解的是,由于不同终端厂商或者终端芯片厂商的性能差异化,终端自身的能力存在差异(终端自身的能力例如可以包括该终端的硬件能力,例如其算力等信息,该能力信息也可以包括该终端的当前状态信息,例如荷载信息等),在步骤S302中,终端可以根据其自身能力或其他参数确定根据第一模型参数得到该压缩模型,或者,根据第二模型参数得到该压缩模型。在另一示例中,模型信息还包括网络设备(例如基站)的配置信息,终端可以根据该配置信息确定是使用第二模型参数,还是根据第一模型参数得到压缩模型。其中,该配置信息至少可以用于指示终端是否直接使用第二模型参数对应的压缩模型,或者是否对第二模型参数进行调整。
采用本方案,通过接收包括解压模型的第一模型参数,以及与该解压模型对应的压缩模型的第二模型参数的模型信息,终端能够根据第一模型参数得到压缩模型,也可以根据第二模型参数得到该压缩模型,能够保证不同终端厂商或者终端芯片厂商的终端均能够有效地得到压缩模型。
图4是根据一示例性实施例示出的一种确定用于压缩信道状态信息的压缩模型的方法的流程图,应用于终端,如图4所示,该方法包括:
S401、终端接收网络设备发送的模型信息,模型信息包括解压模型的第一模型参数,解压模型用于网络设备解压缩终端发送的信道状态信息。
S402、终端根据第一模型参数训练得到与解压模型对应的压缩模型。
其中,在步骤S402中,终端至少可以基于其终端厂商的或者芯片厂商的私有化数据,以及该第一模型参数训练得到该压缩模型,本公开对该压缩模型的具体训练方式不作具体限定。
在一示例中,模型信息还包括与解压模型对应的压缩模型的第二模型参数。其中,在步骤S402执行之前,终端可以根据其能力信息,确定是否执行步骤S402,即终端根据其能力信息,确定该终端是否具备根据第一模型参数训练得到与解压模型对应的压缩模型的能力。其中,终端的能力信息可以包括该终端的硬件能力,例如其算力等信息,该能力信息也可以包括该终端的当前状态信息,例如荷载信息等。
在另一示例中,终端还可以根据基站的配置信息,确定是否执行步骤S402,即终端根据基站配置信息,确定该终端是否需要根据第一模型参数训练得到与解压模型对应的压缩模型。在模型信息还包括与解压模型对应的压缩模型的第二模型参数的情况下,该配置信息还可以指示终端是否需要直接使用第二模型参数。在没有基站配置信息的情况下,终端可以根据自身能力或其他信息自行决定直接使用第二模型参数还是根据第一模型参数得到压缩模型。
在另一示例中,若根据步骤S402得到的压缩模型,不满足上述实施例中所述的任意一个或者至少一个模型性能条件,则该终端可以根据该第一模型参数重新训练得到与该解压模型对应的压缩模型。
采用本方案,终端通过接收网络设备发送的模型信息中的第一模型参数,并根据该第一模型参数训练得到该用于压缩信道状态信息的压缩模型,由于压缩模型的训练过程和/或压缩模型参数对网络设备是透明的,有效地保证了压缩模型的私有化。
图5是根据一示例性实施例示出的一种确定用于压缩信道状态信息的压缩模型的方法的流程图,应用于终端,如图5所示,该方法包括:
S501、终端接收网络设备发送的模型信息,模型信息包括解压模型的第一模型参数,解压模型用于网络设备解压缩终端发送的信道状态信息。
S502、终端向服务器发送获取请求,并获取服务器发送的与解压模型对应的压缩模型。
具体的,模型的训练依托于终端侧对应的服务器,比如芯片公司的服务器,终端在获取到第一模型参数后,将第一模型参数发送给终端侧对应的服务器,服务器在完成压缩模型的训练后,再将压缩模型发送给终端。
其中,该服务器可以是第三方服务器,也可以是网络侧的服务器,例如核心网中的服务器,或者,也可以是终端侧的服务器,比如芯片厂商的服务器或者终端厂商的服务器,本公开对此不作限定。
可以理解的是,该获取请求至少包括解压模型的第一模型参数,该获取请求还可以包括用于训练该压缩模型的训练数据,或者,该服务器中存储有用于训练该压缩模型的训练数据。
在一示例中,终端也可以在终端本地训练得到与解压模型对应的压缩模型。其中,终端向服务器发送获取请求以请求服务器训练该压缩模型,或者,在终端本地进行训练可以是根据该终端的能力信息确定的。例如,若终端当前负载大于预设阈值,则可以通过向服务器发送获取请求以请求服务器训练该压缩模型,若终端当前负载小于该预设阈值,则可以在终端本地进行训练。
在另一示例中,模型信息还包括与解压模型对应的压缩模型的第二模型参数。其中,在步骤S502执行之前,终端可以根据其能力信息,确定是否执行步骤S502,即终端根据其能力信息,确定该终端是否具备根据第一模型参数训练得到与解压模型对应的压缩模型的能力,若该终端不具备根据第一模型参数训练得到与解压模型对应的压缩模型的 能力,则可以根据该第二模型参数得到压缩模型。另外,是否执行S502还可以根据终端的私有化需求确定,例如,若终端无私有化需求,则可以不执行该步骤S502,而直接根据该第二模型参数得到压缩模型。
在另一示例中,终端可以根据基站的配置信息(该模型信息例如可以包括该配置信息),确定直接使用第二模型参数还是根据第一模型参数得到压缩模型。在没有基站配置信息的情况下,终端可以根据自身能力或其他信息自行决定直接使用第二模型参数还是根据第一模型参数得到压缩模型。
在又一示例中,若根据步骤S501以及步骤S502得到的压缩模型,不满足上述实施例中所述的任意一个或者至少一个模型性能条件,则该终端可以再次向该服务器发送获取请求并获取与该解压模型对应的更新压缩模型。
采用上述方案,通过终端向服务器发送获取请求以使得服务器对该压缩模型训练,以使得该终端得到用于压缩信道状态信息的压缩模型,可以在终端本地无法训练压缩模型,或者终端暂时无法支持压缩模型的训练的情况下,能够训练得到该压缩模型,在保证压缩模型的私有化的同时,有效地保证了不同终端厂商或者芯片厂商的终端均能够通过训练得到该压缩模型。
图6是根据一示例性实施例示出的一种确定用于压缩信道状态信息的压缩模型的方法的流程图,应用于终端,如图6所示,该方法包括:
S601、终端接收网络设备发送的模型信息,模型信息至少包括解压模型对应的压缩模型的第二模型参数,解压模型用于网络设备解压缩终端发送的信道状态信息。
S602、终端对第二模型参数进行调优,得到第三模型参数。
S603、终端根据第三模型参数建立用于压缩信道状态信息的压缩模型。
其中,在步骤S602中,可以是基于终端内的终端厂商或者芯片厂商的私有化数据对该第二模型参数进行调整的,本公开第二模型参数的调整方式不作具体限定。
在一示例中,步骤S602以及步骤S603可以是由该终端根据其能力信息确定是否执行的,例如,在确定终端的能力信息表征该终端的算力不具备训练压缩模型,而具备对第二模型参数进行调整的能力的情况下,则可以执行该步骤S602以及步骤S603。
在另一示例中,若根据步骤S602得到的压缩模型,不满足上述实施例中所述的任意一个或者至少一个模型性能条件,则该终端可以对第三模型参数进步进行调优或者对第二模型参数重新进行调优,并得到与该解压模型对应的压缩模型。
在一种实现方式中,步骤S601中的模型信息可以包括解压模型对应的压缩模型的第二模型参数。在另一种实现方式中,步骤S601中的模型信息可以包括解压模型的第一模型参数以及与该解压模型对应的压缩模型的第二模型参数。
采用上述方案,终端接收网络设备发送的解压模型对应的压缩模型的第二模型参数,并该第二模型参数进行进一步地调整,得到第三模型参数并根据该第三模型参数建立压缩模型,能够使得在终端能力不足的情况下,通过对网络设备发送的第二模型参数进行调整得到调整后的压缩模型,且该压缩模型的参数对于网络设备来说是透明的,能够有效地保证该压缩模型的私有化。
图7是根据一示例性实施例示出的一种确定用于压缩信道状态信息的压缩模型的方法的流程图,应用于终端,如图7所示,该方法包括:
S701、终端接收网络设备发送的模型信息,模型信息至少包括解压模型对应的压缩模型的第二模型参数,解压模型用于网络设备解压缩终端发送的信道状态信息。
S702、终端根据第二模型参数建立用于压缩信道状态信息的压缩模型。
在一示例中,步骤S702可以是由该终端根据其能力信息确定是否执行的,例如,在确定终端的能力信息表征该终端的算力不具备训练压缩模型,也不具备对第二模型参数 进行调整的能力的情况下,则可以执行该步骤S702,以使得该终端直接应用网络设备发送的该压缩模型的模型参数。
在另一示例中,若根据步骤S702得到的压缩模型,不满足上述实施例中所述的任意一个或者至少一个模型性能条件,则该终端可以令网络设备重新发送该模型信息,并根据网络设备新发送的模型信息中的第二模型参数建立压缩模型。
在一种实现方式中,步骤S701中的模型信息可以包括解压模型对应的压缩模型的第二模型参数。在另一种实现方式中,步骤S701中的模型信息可以包括解压模型的第一模型参数以及与该解压模型对应的压缩模型的第二模型参数。
采用上述方案,终端接收网络设备发送的解压模型对应的压缩模型的第二模型参数,并直接利用该第二模型参数建立用于压缩信道状态信息的压缩模型,能够使得在终端能力不足的情况下,可以通过接收网络设备发送的第二模型参数进行应用得到该压缩模型。
值得说明的是,在模型信息包括解压模型的第一模型参数以及与该解压模型对应的压缩模型的第二模型参数的情况下,上述实施例中的步骤S402、步骤S502、步骤S602至步骤S603、以及步骤S702,可以是根据终端其能力信息选择性执行的。
示例地,若终端当前的负载高于第一预设阈值,则可以选择执行步骤S702,若终端当前的负载低于第二预设阈值,则可以选择执行步骤S402,若终端当前的负载高于第二预设阈值低于第一预设阈值,则可以选择执行步骤S602至步骤S603。或者,在该终端当前的负载低于第二预设阈值,而终端算力小于预设算力阈值的情况下,则可以执行步骤S502至步骤S503。本公开对终端具体选用哪种方式基于网络设备发送的模型信息得到压缩模型不作具体限定。
图8是根据一示例性实施例示出的一种确定用于压缩信道状态信息的压缩模型的方法的流程图,应用于终端,如图8所示,该方法包括:
S801、终端接收网络设备发送的模型信息,模型信息包括多个解压模型的第一模型参数;或者,模型信息包括多个解压模型的第一模型参数以及与多个解压模型一一对应的压缩模型的第二模型参数;其中,解压模型用于网络设备解压缩终端发送的信道状态信息。
S802、终端至少根据终端的能力信息和/或终端使用的频段信息,从模型信息中,确定目标模型的模型参数,目标模型为解压模型或者压缩模型。
S803、终端根据目标模型的模型参数确定终端用于压缩信道状态信息的压缩模型。
可以理解的是,由于接入该网络设备的终端对应的终端厂商或者芯片厂商可能是完全不同的,不同终端厂商或者芯片厂商对应的终端的能力与采用的频段也可能是完全不同的,因此,对应不同能力与频段的终端,网络设备可以部署多个解压模型,以保证对于不同能力与频段的终端发送的压缩后的信道状态信息均可以可靠地解压。
其中,模型信息中包括多个解压模型的第一模型参数,或者多个解压模型的第一模型参数以及与多个解压模型一一对应的压缩模型的第二模型参数,可以是根据终端上报的能力信息和/或频段信息确定的。例如,若终端上报的能力信息表征该终端不具备模型训练的能力,则网络设备则可以发送包括多个解压模型的第一模型参数以及与多个解压模型一一对应的压缩模型的第二模型参数的模型信息。
在步骤S802中,终端可以基于其使用的频段信息,确定该终端对应的解压模型和/或压缩模型,再根据该终端是否具备压缩模型训练的能力,再确定目标模型为解压模型或压缩模型。
示例地,在该模型信息仅包括对应第一频段的第一解压模型和对应第二频段的第二解压模型的第一模型参数的情况下,若该终端使用的频段信息表征该终端使用第一频段,则可以确定该目标模型为第一解压模型。
又例如,在该模型信息包括对应第一频段的第一解压模型的第一模型参数以及该第一解压模型对应的第一压缩模型的第二模型参数,和,对应第二频段的第二解压模型的第一模型参数以及该第二解压模型对应的第二压缩模型的第二模型参数的情况下,若该终端使用的频段信息表征该终端使用第一频段,并该终端的能力信息表征该终端不具备压缩模型训练的能力,则可以确定该终端的目标模型为第一压缩模型。
具体地,若目标模型为解压模型,步骤S803可以是终端根据该解压模型对应的第一模型参数训练得到的该压缩模型,其中,该压缩模型可以是终端本地训练得到的,也可以是通过服务器训练得到的。若该目标模型为压缩模型,终端则可以根据其能力信息,确定该终端是直接根据该目标模型对应的第二模型参数建立压缩模型,或者是对该第二模型参数进行调优得到第三模型参数并基于该第三模型参数建立压缩模型。
采用上述方案,网络设备通过发送多个解压模型的模型参数,或者多个解压模型以及与该多个解压模型对应的压缩模型的模型参数,以使得该终端根据其能力信息和/或频段信息确定目标模型,并基于目标模型的参数确定该终端的压缩模型,保证了对于不同能力与频段的终端均可以对信道状态信息进行有效地压缩,并使得网络终端能够对这些终端发送的压缩后的信道状态信息可靠地解压。
图9是根据一示例性实施例示出的一种确定用于压缩信道状态信息的压缩模型的方法的流程图,应用于终端,如图9所示,该方法包括:
S901、终端接收网络设备发送的模型信息,模型信息包括多个解压模型的第一模型参数;或者,模型信息包括多个解压模型的第一模型参数以及与多个解压模型一一对应的压缩模型的第二模型参数;其中,解压模型用于网络设备解压缩终端发送的信道状态信息。
S902、终端至少根据终端的能力信息和/或终端使用的频段信息,从模型信息中,确定目标模型的模型参数,目标模型为解压模型或者压缩模型。
S903、终端根据目标模型的模型参数确定终端用于压缩信道状态信息的压缩模型。
S904、终端向网络设备上报终端确定的目标模型。
值得说明的是,上述步骤S904可以是在步骤S903执行之后再执行的,也可以是在步骤S903执行之前执行的,也可以是在步骤S902执行完成之后与步骤S903同时执行的,本公开对此不作限定。
需要说明的是,步骤S903中是示例性的通过终端的能力信息和/或终端使用的频段信息,从模型信息中确定目标模型的模型参数;当然,本领域内技术人员可以理解,终端还可以通过其他参数来确定,从模型信息中确定目标模型的模型参数,在此不再赘述。
采用上述方案,通过在确定目标模型之后,向网络设备上报该终端确定的目标模型,令网络设备在接收到终端上报的目标模型之后,即可以根据该目标模型确定对应该终端发送的信道状态信息的解压模型,进而以使得网络设备可以正确地选择解压模型与终端中的压缩模型联合使用,以保证能够有效地对该终端发送的信道状态信息进行解压。
图10是根据一示例性实施例示出的一种确定用于压缩信道状态信息的压缩模型的方法的流程图,应用于终端,如图10所示,该方法包括:
S1001、终端接收网络设备发送的模型信息,模型信息包括多个解压模型的第一模型参数;或者,模型信息包括多个解压模型的第一模型参数以及与多个解压模型一一对应的压缩模型的第二模型参数;其中,解压模型用于网络设备解压缩终端发送的信道状态信息。
S1002、终端至少根据终端的能力信息和/或终端使用的频段信息,从模型信息中,确定目标模型的模型参数,目标模型为解压模型或者压缩模型。
S1003、终端向网络设备上报终端的能力信息和/或终端使用的频段信息,终端的能 力信息和/或终端使用的频段信息用于网络设备确定终端确定的目标模型。
示例地,在该模型信息仅包括对应第一频段的第一解压模型和对应第二频段的第二解压模型的第一模型参数的情况下,若网络设备根据接收到的终端上报的频段信息确定该频段信息表征该终端使用的是第一频段,则该网络设备可以确定该终端选取的第一解压模型为目标模型。
又例如,在该模型信息包括对应第一频段的第一解压模型的第一模型参数以及该第一解压模型对应的第一压缩模型的第二模型参数,和,对应第二频段的第二解压模型的第一模型参数以及该第二解压模型对应的第二压缩模型的第二模型参数的情况下,若网络设备根据接收到的终端上报的频段信息确定该频段信息表征该终端使用第一频段,并该终端上报的能力信息表征该终端不具备压缩模型训练的能力,则可以确定该终端选择的目标模型为第一压缩模型。
进一步，该网络设备则可以根据该终端选择的目标模型，确定用于解压该终端发送的信道状态信息的解压模型，并利用该解压模型对该信道状态信息进行解压。
需要说明的是，步骤S1002中示例性地通过终端的能力信息和/或终端使用的频段信息，从模型信息中确定目标模型的模型参数；当然，本领域内技术人员可以理解，终端还可以通过其他参数从模型信息中确定目标模型的模型参数，在此不再赘述。
采用上述方案，通过在确定目标模型之后，向网络设备上报该终端的能力信息和/或使用的频段信息，以使得该网络设备确定该终端确定的目标模型，令网络设备在确定该目标模型之后，即可以根据该目标模型确定对应该终端发送的信道状态信息的解压模型，以保证能够有效地对该终端发送的信道状态信息进行解压。
图11是根据一示例性实施例示出的一种确定用于压缩信道状态信息的压缩模型的方法的流程图,应用于终端,如图11所示,该方法包括:
S1101、终端接收网络设备发送的模型信息,模型信息包括多个解压模型的第一模型参数;或者,模型信息包括多个解压模型的第一模型参数以及与多个解压模型一一对应的压缩模型的第二模型参数;其中,解压模型用于网络设备解压缩终端发送的信道状态信息。
S1102、终端至少根据终端的能力信息和/或终端使用的频段信息,从模型信息中,确定目标模型的模型参数,目标模型为解压模型或者压缩模型。
S1103、终端向网络设备上报目标模型的标识信息。
可以理解的是,该标识信息用于对模型信息中的每一解压模型唯一标识。
其中,对应每一组关联使用的解压模型以及压缩模型,可以采用同一个标识信息,也可以采用不同的标识信息,该标识信息例如可以通过模型ID表示。例如,对于第一解压模型以及与该第一解压模型对应的第一压缩模型,可以同时采用“0001”作为其对应的标识信息,或者,对于该第一解压模型可以采用“0010”作为其对应的标识信息,对于该第一压缩模型,可以采用“0011”作为其对应的标识信息,本公开对于标识信息的设置方式不作限定。
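下面的Python片段仅示意上述两种标识信息的设置方式（同一组关联使用的解压模型与压缩模型共用一个标识信息，或分别使用不同的标识信息）；其中的ID取值沿用正文中的示例，数据结构为本文档之外的假设。

```python
# 方式一（假设）：关联使用的解压模型与压缩模型共用同一标识信息
shared_id_table = {
    "0001": {"decompression": "decomp_model_1", "compression": "comp_model_1"},
}

# 方式二（假设）：解压模型与压缩模型分别使用不同的标识信息
separate_id_table = {
    "0010": ("decompression", "decomp_model_1"),
    "0011": ("compression", "comp_model_1"),
}

# 终端上报目标模型的标识信息后，网络设备即可据此反查对应的模型
reported_id = "0011"
kind, model = separate_id_table[reported_id]
print(kind, model)  # compression comp_model_1
```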
需要说明的是，步骤S1102中示例性地通过终端的能力信息和/或终端使用的频段信息，从模型信息中确定目标模型的模型参数；当然，本领域内技术人员可以理解，终端还可以通过其他参数从模型信息中确定目标模型的模型参数，在此不再赘述。
采用上述方案,通过在确定目标模型之后,向网络设备上报该目标模型的标识信息,以使得该网络设备确定该终端确定的目标模型,令网络设备确定终端上报的目标模型之后,即可以根据该目标模型确定对应该终端发送的信道状态信息的解压模型,以保证能够有效地对该终端发送的信道状态信息进行解压。
图12是根据一示例性实施例示出的一种确定用于压缩信道状态信息的压缩模型的方法的流程图，应用于终端，如图12所示，该方法包括：
S1201、终端接收网络设备发送的模型信息,模型信息包括解压模型的第一模型参数,解压模型用于网络设备解压缩终端发送的信道状态信息。
S1202、终端根据模型信息确定终端用于压缩信道状态信息的压缩模型。
S1203、终端根据通信协议中的预设时长或者网络设备配置的预设时长,确定压缩模型的启用时间,预设时长为相对特定时刻的时长。
其中,网络设备配置的预设时长可以是通过模型信息发送至该终端的,也可以是通过其他信令发送至终端的。
可以理解的是,该通信协议是终端与网络设备之间通信预先规定的协议,网络设备同样可以根据通信协议中的预设时长或者网络设备配置的预设时长,确定终端的压缩模型以及其自身的解压模型的启用时间。
在一示例中,该特定时刻为终端接收模型信息所在的PDSCH的结束时刻。
在另一示例中,该特定时刻为终端反馈针对包括模型信息的PDSCH的HARQ-ACK信息的上行资源的最后一个符号的时刻。
其中,该预设时长对应不同的压缩模型的确定方式可以是不同的。
示例地，对应上述实施例中的步骤S402以及步骤S502的压缩模型确定方式，该预设时长可以是X个单位时长；对应上述实施例中步骤S602至步骤S603的压缩模型确定方式，该预设时长可以是Y个单位时长；对应上述实施例中步骤S702的压缩模型确定方式，该预设时长可以是Z个单位时长。X、Y、Z的大小可以在通信协议中规定，也可以由网络设备进行配置。其中，X可以大于Y且Y大于Z，或者，X、Y、Z也可以相等，本公开对此不作限定。
其中，该单位时长可以是一个时隙对应的时长，也可以是一个符号对应的时长或者一个子帧对应的时长，本公开也不做具体限定。
例如,若终端接收模型信息所在的PDSCH的结束时刻为slot n对应的时刻,单位时长为时隙,其中,slot表示时隙,n为时隙的编号,若该终端采用步骤S402至步骤S403的压缩模型确定方式,则该压缩模型的启用时间可以为slot n+X对应的时刻,即在编号为n+X的时隙对应的时刻启用该压缩模型。
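下面以Python草图示意上述"slot n+X"启用时间的计算方式；其中函数名与变量取值均为假设性示例，单位时长以时隙为例。

```python
# 示意性草图：根据特定时刻所在的时隙编号与预设时长确定压缩模型的启用时隙
def activation_slot(reference_slot: int, preset_duration_slots: int) -> int:
    """reference_slot: 特定时刻（如接收模型信息的PDSCH结束时刻）所在时隙编号 n；
    preset_duration_slots: 协议规定或网络设备配置的预设时长 X（以时隙为单位）。"""
    return reference_slot + preset_duration_slots

n, X = 100, 4                   # 假设：PDSCH结束于slot 100，预设时长为4个时隙
print(activation_slot(n, X))    # 104，即在slot n+X 对应的时刻启用压缩模型
```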
采用上述方案,通过在通信协议中规定或者在网络设备中配置该预设时长,并基于与模型信息对应的特定时刻,确定该终端启用压缩模型的启用时间,能够使得终端与网络设备能够同步地启用压缩模型以及解压模型,进而保证网络设备能够可靠地对终端发送的压缩的信道状态信息进行解压,以保证网络设备与终端之间的通信质量。
需要说明的是,前述的由终端执行的各个实施例可以单独被实施,也可以组合在一起被实施,本公开实施例并不对此做出限定。
图13是根据一示例性实施例示出的一种确定用于压缩信道状态信息的压缩模型的方法的流程图,应用于网络设备,如图13所示,该方法包括:
S1301、网络设备发送模型信息,模型信息包括解压模型的第一模型参数,解压模型用于网络设备解压缩终端发送的信道状态信息,模型信息用于终端确定用于压缩信道状态信息的压缩模型。
在一示例中,该模型信息还可以包括多个解压模型的第一模型参数;或者,模型信息包括多个解压模型的第一模型参数,以及与多个解压模型一一对应的压缩模型的第二模型参数。
在一示例中，上述模型信息可以是基站通过RRC信令发送的，此时，网络设备即可以为基站。或者，在另一示例中，上述模型信息还可以是由核心网中的AMF，即接入与移动性管理实体，通过NAS(Non-access stratum,非接入层)层信令发送的，此时，网络设备即可以为AMF。
在另一示例中,终端用于压缩信道状态信息的压缩模型满足以下模型性能条件中的至少一者:压缩模型与解压模型联合使用情况下的均方误差或者归一化均方误差小于第一预设阈值;压缩模型与解压模型联合使用情况下的余弦相似度大于第二预设阈值;压缩模型与解压模型联合使用情况下的余弦相似度平方大于第三预设阈值;压缩模型与解压模型联合使用情况下的信噪比大于第四预设阈值。
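下面给出一个用于示意上述模型性能条件校验的Python草图（使用numpy）；其中各阈值取值以及原始/重建信道矩阵均为假设性示例，并非协议规定的数值。

```python
import numpy as np

# 假设性阈值，分别对应第一至第四预设阈值
NMSE_MAX, COS_MIN, COS_SQ_MIN, SNR_MIN_DB = 0.1, 0.9, 0.81, 10.0

def check_performance(h_true: np.ndarray, h_rec: np.ndarray) -> bool:
    """h_true: 原始信道状态信息; h_rec: 压缩模型与解压模型联合使用后的重建结果。"""
    err = h_rec - h_true
    nmse = np.sum(np.abs(err) ** 2) / np.sum(np.abs(h_true) ** 2)      # 归一化均方误差
    cos = np.abs(np.vdot(h_true, h_rec)) / (
        np.linalg.norm(h_true) * np.linalg.norm(h_rec))                # 余弦相似度
    snr_db = 10 * np.log10(np.sum(np.abs(h_true) ** 2) / np.sum(np.abs(err) ** 2))  # 信噪比
    # 满足以下模型性能条件中的至少一者即认为满足要求（示意）
    return (nmse < NMSE_MAX) or (cos > COS_MIN) or (cos ** 2 > COS_SQ_MIN) or (snr_db > SNR_MIN_DB)

h = np.random.randn(32, 13) + 1j * np.random.randn(32, 13)             # 假设的信道矩阵
print(check_performance(h, h + 0.05 * np.random.randn(*h.shape)))
```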
其中，步骤S1301中的模型信息可以是在网络设备中部署的解压模型发生变更的情况下发送的，也可以是响应于终端接入基站时发送的。
值得说明的是，通过网络设备发送模型信息以使得终端确定用于压缩信道状态信息的压缩模型，能够保证网络设备掌握对终端的压缩模型的主导权，在网络设备侧部署的解压模型更新的情况下，能够有效地对终端侧部署的压缩模型进行更新。
在一示例中，在该网络设备中还可以配置有预设时长，或者在网络设备与终端的通信协议中规定有预设时长，以使得终端根据该预设时长确定终端启用压缩模型的启用时间。并且，该网络设备可以根据该预设时长，确定启用与终端使用的压缩模型对应的解压模型的启用时间。其中，该预设时长为相对特定时刻的时长，从而使得终端与网络设备可以同步启用对应的压缩模型与解压模型，以实现信道状态信息的可靠压缩及解压。
在本公开实施例中,通过网络设备向终端发送模型信息,以使得终端根据模型信息得到压缩信道状态信息的压缩模型,由于该模型信息中包括网络设备的解压模型的第一模型参数,因此终端根据该第一模型参数,以及终端厂商或者芯片厂商的私有化数据就能够自行得到用于与网络设备的解压模型联合使用的压缩模型,而该过程中网络设备无需获知终端部署的压缩模型的参数,从而保证了压缩模型的私有化。
图14是根据一示例性实施例示出的一种确定用于压缩信道状态信息的压缩模型的方法的流程图,应用于网络设备,如图14所示,该方法包括:
S1401、网络设备发送模型信息,模型信息包括解压模型的第一模型参数以及与解压模型对应的压缩模型的第二模型参数,解压模型用于网络设备解压缩终端发送的信道状态信息,模型信息用于终端确定用于压缩信道状态信息的压缩模型。
其中，网络设备是否发送第二模型参数可以是根据终端或者终端组上报的能力信息确定的。终端在接收到该模型信息后，可以根据其自身能力或其他参数，确定是利用第一模型参数训练得到压缩模型，还是利用第二模型参数确定压缩模型。此外，终端在接收到该模型信息后，还可以确定是直接利用第二模型参数确定压缩模型，还是对第二模型参数进行调优后得到第三模型参数，并根据所述第三模型参数建立用于压缩信道状态信息的压缩模型。
采用上述方案,通过向终端发送第一模型参数以及第二模型参数,不仅可以使得具备模型训练能力的终端能够根据第一模型参数得到压缩模型,还可以使得不具备模型训练能力的终端根据第二模型参数得到压缩模型,能够保证不同终端厂商或者终端芯片厂商的终端均能够有效地得到压缩模型。
图15是根据一示例性实施例示出的一种确定用于压缩信道状态信息的压缩模型的方法的流程图,应用于网络设备,如图15所示,该方法包括:
S1501、网络设备广播模型信息,模型信息包括解压模型的第一模型参数,解压模型用于网络设备解压缩终端发送的信道状态信息,模型信息用于终端确定用于压缩信道状态信息的压缩模型。
在一示例中,模型信息还可以包括解压模型对应的压缩模型的第二模型参数。
在另一示例中,该模型信息还可以包括多个解压模型的第一模型参数;或者,模型信息包括多个解压模型的第一模型参数,以及与多个解压模型一一对应的压缩模型的第 二模型参数。
采用上述方案,网络设备通过广播的方式发送模型信息,可以使得在其广播范围内的终端均能够接收到该模型信息,进而根据该模型信息确定用于压缩信道状态信息的压缩模型。
图16是根据一示例性实施例示出的一种确定用于压缩信道状态信息的压缩模型的方法的流程图,应用于网络设备,如图16所示,该方法包括:
S1601、网络设备通过单播的方式向终端发送模型信息,模型信息包括解压模型的第一模型参数,解压模型用于网络设备解压缩终端发送的信道状态信息,模型信息用于终端确定用于压缩信道状态信息的压缩模型。
可以理解的是,通过单播方式发送的模型信息,对应不同的终端可以是不同的。例如,针对第一终端,可以通过单播的方式向该第一终端发送第一模型信息,针对第二终端,可以通过单播的方式向该第二终端发送第二模型信息。
在一示例中,模型信息还可以包括解压模型对应的压缩模型的第二模型参数。
在另一示例中,该模型信息还可以包括多个解压模型的第一模型参数;或者,模型信息包括多个解压模型的第一模型参数,以及与多个解压模型一一对应的压缩模型的第二模型参数。
采用上述方案,通过单播的方式向终端发送模型信息,可以更具针对性地发送模型信息,可以避免发送的信息中冗余信息过多导致网络资源被大量占用,能够有效地提高网络资源利用率。
图17是根据一示例性实施例示出的一种确定用于压缩信道状态信息的压缩模型的方法的流程图,应用于网络设备,如图17所示,该方法包括:
S1701、网络设备通过组播的方式向终端组发送模型信息,模型信息包括解压模型的第一模型参数,解压模型用于网络设备解压缩终端发送的信道状态信息,模型信息用于终端确定用于压缩信道状态信息的压缩模型。
可以理解的是,通过组播方式发送的模型信息,对应不同的终端组可以是不同的。例如,针对第一终端组,可以通过组播的方式向该第一终端组中的终端发送第一模型信息,针对第二终端组,可以通过组播的方式向该第二终端组中的终端发送第二模型信息。
在一示例中,模型信息还可以包括解压模型对应的压缩模型的第二模型参数。
在另一示例中,该模型信息还可以包括多个解压模型的第一模型参数;或者,模型信息包括多个解压模型的第一模型参数,以及与多个解压模型一一对应的压缩模型的第二模型参数。
采用上述方案,通过组播的方式向终端组发送模型信息,可以更具针对性地发送模型信息,可以避免发送的信息中冗余信息过多导致网络资源被大量占用,能够有效地提高网络资源利用率。
图18是根据一示例性实施例示出的一种确定用于压缩信道状态信息的压缩模型的方法的流程图，应用于网络设备，如图18所示，该方法包括：
S1801、网络设备针对未接入网络设备的终端,通过广播的方式向终端发送模型信息;针对已完成接入网络设备的终端或终端组,通过单播或者组播的方式发送模型信息,模型信息包括解压模型的第一模型参数,解压模型用于网络设备解压缩终端发送的信道状态信息,模型信息用于终端确定用于压缩信道状态信息的压缩模型。
其中,广播的模型信息与单播或组播的模型信息可以是不同的。例如,广播的模型信息中可以包括网络设备中部署的全部解压模型对应的第一模型参数。单播或组播的模型信息中可以仅包括网络设备中部署的部分解压模型对应的第一模型参数。
在一示例中,模型信息还可以包括解压模型对应的压缩模型的第二模型参数。
在另一示例中,该模型信息还可以包括多个解压模型的第一模型参数;或者,模型信息包括多个解压模型的第一模型参数,以及与多个解压模型一一对应的压缩模型的第二模型参数。
采用上述方案,针对未接入网络设备的终端,通过广播的方式可以使得在其广播范围内的终端均能够接收到该模型信息,进而根据该模型信息确定用于压缩信道状态信息的压缩模型,可以保证其广播范围内的终端均可以根据该模型信息得到相应的压缩模型与该网络设备进行通信。针对已接入网络的终端,则可以通过单播或组播的方式更具针对性地发送模型信息,可以避免发送的信息中冗余信息过多导致网络资源被大量占用,能够有效地提高网络资源利用率。
图19是根据一示例性实施例示出的一种确定用于压缩信道状态信息的压缩模型的方法的流程图,应用于网络设备,如图19所示,该方法包括:
S1901、网络设备针对已完成接入网络设备的终端或终端组,响应于终端或终端组发送的终端能力信息和/或频段信息,通过单播或者组播的方式发送模型信息,模型信息包括解压模型的第一模型参数,解压模型用于网络设备解压缩终端发送的信道状态信息,模型信息用于终端确定用于压缩信道状态信息的压缩模型。
在一示例中,对于终端能力信息和/或使用的频段信息不同的终端或终端组,网络设备发送的模型信息不同。
示例地，若接收到终端组发送的终端频段信息表征终端组中的终端使用第一频段，且能力信息表征终端具备压缩模型训练的能力，则可以通过组播的方式向该终端组发送包括对应第一频段的解压模型的第一模型参数的模型信息。进一步，若该能力信息表征终端组中某终端当前负载过高无法进行模型训练，则组播的模型信息还可以包括对应第一频段的解压模型对应的压缩模型的第二模型参数，以使得该终端根据第二模型参数调优得到压缩模型。
另一示例中，若接收到终端组发送的频段信息表征该终端组中的终端均使用第二频段，则可以通过组播的方式，向该终端组发送模型信息，该模型信息包括对应第二频段的解压模型的第一模型参数。或者，若频段信息表征该终端组中的终端使用第一频段以及第二频段，则可以通过组播的方式，向该终端组发送模型信息，该模型信息包括对应第一频段的解压模型的第一模型参数，以及对应第二频段的解压模型的第一模型参数。
又一例中,若接收到终端发送的频段信息表征该终端使用第二频段,且能力信息表征该终端当前不具备模型训练的能力,则可以通过单播的方式向该终端发送包括对应第二频段的解压模型的第一模型参数,以及该解压模型对应的压缩模型的第二模型参数的模型信息。
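下面的Python草图仅示意网络设备根据终端或终端组上报的频段信息与能力信息组装待发送模型信息的过程；其中部署的模型名称、字段名与函数名均为本文档之外的假设性命名。

```python
# 假设网络设备按频段部署了多个解压模型及其对应的压缩模型参数
deployed = {
    "band_1": {"first_params": "decomp_band1", "second_params": "comp_band1"},
    "band_2": {"first_params": "decomp_band2", "second_params": "comp_band2"},
}

def build_model_info(bands, can_train: bool) -> dict:
    """bands: 终端或终端组上报使用的频段集合; can_train: 能力信息是否表征具备模型训练能力。"""
    info = {}
    for band in sorted(bands):
        entry = {"first_params": deployed[band]["first_params"]}
        if not can_train:
            # 不具备训练能力时，同时下发对应压缩模型的第二模型参数
            entry["second_params"] = deployed[band]["second_params"]
        info[band] = entry
    return info

# 终端组使用第一频段与第二频段且具备训练能力：组播仅包含两个解压模型的第一模型参数
print(build_model_info({"band_1", "band_2"}, can_train=True))
```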
在一示例中,模型信息还可以包括解压模型对应的压缩模型的第二模型参数。
在另一示例中,该模型信息还可以包括多个解压模型的第一模型参数;或者,模型信息包括多个解压模型的第一模型参数,以及与多个解压模型一一对应的压缩模型的第二模型参数。
采用上述方案，网络设备通过接收已接入该网络设备的终端或终端组的能力信息和/或频段信息，通过单播或组播的方式发送模型信息，可以根据终端或终端组的自身能力以及使用频段，更具针对性地发送模型信息，可以避免发送的信息中冗余信息过多导致通信资源被大量占用，能够有效地提高通信资源利用率。
图20是根据一示例性实施例示出的一种确定用于压缩信道状态信息的压缩模型的方法的流程图,应用于网络设备,如图20所示,该方法包括:
S2001、网络设备发送模型信息,模型信息包括多个解压模型的第一模型参数;或者,模型信息包括多个解压模型的第一模型参数,以及与多个解压模型一一对应的压缩模型 的第二模型参数,解压模型用于网络设备解压缩终端发送的信道状态信息,模型信息用于终端确定用于压缩信道状态信息的压缩模型。
S2002、网络设备接收终端上报的终端确定的目标模型,目标模型为终端从模型信息中确定的,目标模型的模型参数用于终端确定终端用于压缩信道状态信息的压缩模型。
其中，可以理解的是，在模型信息包括多个解压模型的第一模型参数，或者，模型信息包括多个解压模型的第一模型参数以及与多个解压模型一一对应的压缩模型的第二模型参数的情况下，终端只会选用其中的一个解压模型或者压缩模型对应的模型参数得到压缩模型。因此，若网络设备与该终端需要联合使用对应的压缩模型以及解压模型，则网络设备需要确定终端选择的目标模型，以保证网络设备能够对信道状态信息进行有效地解压。
在一示例中,终端可以是通过上报该目标模型的标识信息,以使得网络设备根据该标识信息确定终端确定的目标模型的。
在另一示例中,终端可以是通过上报该终端的能力信息和/或该终端使用的频段信息,以使得该网络设备根据该能力信息和/或频段信息确定终端确定的目标模型的。
采用上述方案，网络设备通过接收终端上报的其选择的目标模型，能够使得该网络设备在接收到该终端发送的压缩后的信道状态信息时，可以确定与该终端使用的压缩模型对应的解压模型对该信道状态信息进行解压，进而能够有效地保证终端与网络设备之间的通信质量。
图21是根据一示例性实施例示出的一种确定用于压缩信道状态信息的压缩模型的方法的交互图,如图21所示,该方法包括:
S2101、网络设备向终端发送模型信息。
其中,该模型信息可以包括多个解压模型的第一模型参数;或者,模型信息包括多个解压模型的第一模型参数,以及与多个解压模型一一对应的压缩模型的第二模型参数。
该模型信息可以是通过组播的方式发送的，此时该终端为终端组中的一个终端；也可以是通过广播的方式发送的。
S2102、终端根据其能力信息和/或终端使用的频段信息，从模型信息中，确定目标模型。
S2103、终端向网络设备上报其能力信息和/或使用的频段信息。
S2104、网络设备根据终端上报的能力信息和/或使用的频段信息,确定终端确定的目标模型。
在另一些可选地实施例中,步骤S2103中终端可以上报其确定的目标模型的标识信息,以使得网络设备根据标识信息确定终端确定的目标模型。
S2105、终端根据目标模型的模型参数确定终端用于压缩信道状态信息的压缩模型。
其中,该目标模型可以是压缩模型,也可以是解压模型。
在该目标模型为解压模型的情况下,终端则可以根据该解压模型的第一模型参数,训练得到该压缩模型,其中,训练该压缩模型的步骤可以是通过服务器执行的。
在该目标模型为压缩模型的情况下,终端则可以直接应用该压缩模型对应的第二模型参数,或者,终端还可以对该第二模型参数进行调优后进行应用,即根据调优得到的第三模型参数建立该压缩模型。
可以理解的是，由终端自身进行压缩模型的训练，还是通过服务器进行压缩模型的训练，可以是根据终端的能力信息确定的，例如终端当前负载过高无法自身进行训练，则可以向服务器发送模型获取请求，以获取服务器训练得到的该压缩模型。
同理，是直接应用目标模型对应的压缩模型的第二模型参数，还是对第二模型参数进行调优后构建压缩模型，也可以是根据终端的能力信息确定的。
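下面以Python草图示意"直接应用第二模型参数"与"对第二模型参数调优得到第三模型参数后再建立压缩模型"两种方式的区别；其中的梯度下降式调优仅为假设性示例，实际的模型结构与训练方法不受本示例限制。

```python
import numpy as np

def finetune(second_params, samples, lr=0.01, steps=100):
    """对第二模型参数进行调优得到第三模型参数（示意：以样本均值为目标做若干步梯度下降）。"""
    params = np.array(second_params, dtype=float)
    target = samples.mean(axis=0)
    for _ in range(steps):
        params -= lr * 2 * (params - target)   # 假设性的简化梯度
    return params

def build_compression_params(second_params, can_finetune, local_data=None):
    """根据终端能力决定直接应用第二模型参数，还是调优后根据第三模型参数建立压缩模型。"""
    if can_finetune and local_data is not None:
        return finetune(second_params, local_data)   # 第三模型参数
    return np.array(second_params, dtype=float)      # 直接应用第二模型参数

local_data = np.random.randn(64, 8)    # 终端厂商或芯片厂商的私有化数据（假设）
print(build_compression_params(np.zeros(8), can_finetune=True, local_data=local_data)[:3])
```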
S2106、终端根据通信协议中的预设时长或者网络设备配置的预设时长，确定压缩模型的启用时间。
其中,预设时长可以是相对特定时刻的时长,特定时刻包括终端接收模型信息所在的PDSCH的结束时刻,和/或,终端反馈针对包括模型信息的PDSCH的HARQ-ACK信息的上行资源的最后一个符号的时刻。
S2107、终端在启用时间启用压缩模型对信道状态信息进行压缩。
可以理解的是,网络设备可以同步地在该启用时间启用对应该压缩模型的解压模型。或者,该解压模型在网络设备保持启用状态,网络设备可以根据接收到的终端发送的压缩后的信道状态信息确定与该信息对应的解压模型,以对压缩后的信道状态信息进行解压。
需要说明的是,前述的由网络设备执行的各个实施例可以单独被实施,也可以组合在一起被实施,本公开实施例并不对此做出限定。
本领域内技术人员可以理解,前述的多个由终端执行的实施例和多个由网络设备执行的实施例是相互对应的,因此对于相当的步骤可能只在一侧进行了描述,而另一侧必然执行对应的操作。举例来说,网络设备是广播所述模型信息或通过单播的方式向终端发送所述模型信息或通过组播的方式向终端组发送所述模型信息;则终端必然会通过对应的方式接收该模型信息。
以下对本公开实施例的装置进行说明,需要说明的是,其中各个模块的功能已经在有关该方法的实施例中进行了详细描述,此处将不做详细阐述说明。
图22是根据一示例性实施例示出的一种确定用于压缩信道状态信息的压缩模型的装置22的框图,该装置22应用于终端,装置22包括:
接收模块2201,被配置为接收网络设备发送的模型信息,模型信息包括解压模型的第一模型参数,解压模型用于网络设备解压缩终端发送的信道状态信息;
确定模块2202,被配置为根据模型信息确定终端用于压缩信道状态信息的压缩模型。
可选地,模型信息还包括与解压模型对应的压缩模型的第二模型参数。
可选地,确定模块2202包括:
训练子模块,被配置为根据第一模型参数训练得到与解压模型对应的压缩模型。
可选地,确定模块2202包括:
本地训练模块,被配置为在终端本地训练得到与解压模型对应的压缩模型;
服务器训练模块,被配置为向服务器发送获取请求,并获取服务器发送的与解压模型对应的压缩模型。
可选地,确定模块2202包括:
调优子模块,被配置为对第二模型参数进行调优,得到第三模型参数;
第一建立子模块,被配置为根据第三模型参数建立用于压缩信道状态信息的压缩模型。
可选地,确定模块2202包括:
第二建立子模块,被配置为根据第二模型参数建立用于压缩信道状态信息的压缩模型。
可选地,终端用于压缩信道状态信息的压缩模型满足以下模型性能条件中的至少一者:
压缩模型与解压模型联合使用情况下的均方误差或者归一化均方误差小于第一预设阈值;
压缩模型与解压模型联合使用情况下的余弦相似度大于第二预设阈值;
压缩模型与解压模型联合使用情况下的余弦相似度平方大于第三预设阈值;
压缩模型与解压模型联合使用情况下的信噪比大于第四预设阈值。
可选地,模型信息包括多个解压模型的第一模型参数;或者,模型信息包括多个解压模型的第一模型参数,以及与多个解压模型一一对应的压缩模型的第二模型参数。
可选地,确定模块2202包括:
第一确定子模块,被配置为至少根据终端的能力信息和/或终端使用的频段信息,从模型信息中,确定目标模型的模型参数,目标模型为解压模型或者压缩模型;
第二确定子模块,被配置为根据目标模型的模型参数确定终端用于压缩信道状态信息的压缩模型。
可选地,装置22还包括:
上报模块,被配置为向网络设备上报终端确定的目标模型。
可选地,上报模块包括:
第一上报子模块,被配置为向网络设备上报目标模型的标识信息;
第二上报子模块,被配置为向网络设备上报终端的能力信息和/或终端使用的频段信息,终端的能力信息和/或终端使用的频段信息用于网络设备确定终端确定的目标模型。
可选地,装置22还包括:
启用模块,被配置为根据通信协议中的预设时长或者网络设备配置的预设时长,确定压缩模型的启用时间,预设时长为相对特定时刻的时长。
可选地,特定时刻包括终端接收模型信息所在的PDSCH的结束时刻,和/或,终端反馈针对包括模型信息的PDSCH的HARQ-ACK信息的上行资源的最后一个符号的时刻。
图23是根据一示例性实施例示出的一种确定用于压缩信道状态信息的压缩模型的装置23的框图,该装置23应用于网络设备,装置23包括:
发送模块2301,被配置为发送模型信息,模型信息包括解压模型的第一模型参数,解压模型用于网络设备解压缩终端发送的信道状态信息,模型信息用于终端确定用于压缩信道状态信息的压缩模型。
可选地,模型信息还包括与解压模型对应的压缩模型的第二模型参数。
可选地,发送模块2301包括:
第一广播子模块,被配置为广播模型信息;或者,
单播子模块，被配置为通过单播的方式向终端发送模型信息；或者，
组播子模块,被配置为通过组播的方式向终端组发送模型信息。
可选地,发送模块2301包括:
第一发送子模块，被配置为针对未接入网络设备的终端，通过广播的方式发送模型信息；
第二发送子模块,被配置为针对已完成接入网络设备的终端或终端组,通过单播或者组播的方式发送模型信息。
可选地,第二发送子模块,包括:
第三发送子模块,被配置为针对已完成接入网络设备的终端或终端组,响应于终端或终端组发送的终端能力信息和/或频段信息,通过单播或者组播的方式发送模型信息。
可选地,对于终端能力信息和/或使用的频段信息不同的终端或终端组,网络设备发送的模型信息不同。
可选地,模型信息包括多个解压模型的第一模型参数;或者,模型信息包括多个解压模型的第一模型参数,以及与多个解压模型一一对应的压缩模型的第二模型参数。
可选地,装置23还包括:
第一接收模块，被配置为接收终端上报的终端确定的目标模型，目标模型为终端从模型信息中确定的，目标模型的模型参数用于终端确定终端用于压缩信道状态信息的压缩模型。
可选地,网络设备为基站,发送模块2301包括:
RRC信令发送模块,被配置为通过RRC信令发送模型信息。
可选地,网络设备为核心网中的AMF网元,发送模块2301包括:
NAS信令发送模块,被配置为通过NAS信令发送模型信息。
关于上述实施例中的装置,其中各个模块执行操作的具体方式已经在有关该方法的实施例中进行了详细描述,此处将不做详细阐述说明。
本公开还提供一种计算机可读存储介质,其上存储有计算机程序指令,该程序指令被处理器执行时实现本公开提供的前述任一方法实施例提供的确定用于压缩信道状态信息的压缩模型的方法的步骤。
图24是根据一示例性实施例示出的一种终端2400的框图。例如,终端2400可以是移动电话,计算机,数字广播终端,消息收发设备,游戏控制台,平板设备,医疗设备,健身设备,个人数字助理等。
参照图24,终端2400可以包括以下一个或多个组件:处理组件2402,存储器2404,电力组件2406,多媒体组件2408,音频组件2410,输入/输出(I/O)的接口2412,传感器组件2414,以及通信组件2416。
处理组件2402通常控制终端2400的整体操作,诸如与显示,电话呼叫,数据通信,相机操作和记录操作相关联的操作。处理组件2402可以包括一个或多个处理器2420来执行指令,以完成上述的方法的全部或部分步骤。此外,处理组件2402可以包括一个或多个模块,便于处理组件2402和其他组件之间的交互。例如,处理组件2402可以包括多媒体模块,以方便多媒体组件2408和处理组件2402之间的交互。
存储器2404被配置为存储各种类型的数据以支持在终端2400的操作。这些数据的示例包括用于在终端2400上操作的任何应用程序或方法的指令,联系人数据,电话簿数据,消息,图片,视频等。存储器2404可以由任何类型的易失性或非易失性存储设备或者它们的组合实现,如静态随机存取存储器(SRAM),电可擦除可编程只读存储器(EEPROM),可擦除可编程只读存储器(EPROM),可编程只读存储器(PROM),只读存储器(ROM),磁存储器,快闪存储器,磁盘或光盘。
电力组件2406为终端2400的各种组件提供电力。电力组件2406可以包括电源管理系统,一个或多个电源,及其他与为终端2400生成、管理和分配电力相关联的组件。
多媒体组件2408包括在所述终端2400和用户之间提供一个输出接口的屏幕。在一些实施例中，屏幕可以包括液晶显示器(LCD)和触摸面板(TP)。如果屏幕包括触摸面板，屏幕可以被实现为触摸屏，以接收来自用户的输入信号。触摸面板包括一个或多个触摸传感器以感测触摸、滑动和触摸面板上的手势。所述触摸传感器可以不仅感测触摸或滑动动作的边界，而且还检测与所述触摸或滑动操作相关的持续时间和压力。在一些实施例中，多媒体组件2408包括一个前置摄像头和/或后置摄像头。当终端2400处于操作模式，如拍摄模式或视频模式时，前置摄像头和/或后置摄像头可以接收外部的多媒体数据。每个前置摄像头和后置摄像头可以是一个固定的光学透镜系统或具有焦距和光学变焦能力。
音频组件2410被配置为输出和/或输入音频信号。例如,音频组件2410包括一个麦克风(MIC),当终端2400处于操作模式,如呼叫模式、记录模式和语音识别模式时,麦克风被配置为接收外部音频信号。所接收的音频信号可以被进一步存储在存储器2404或经由通信组件2416发送。在一些实施例中,音频组件2410还包括一个扬声器,用于输出音频信号。
I/O接口2412为处理组件2402和外围接口模块之间提供接口，上述外围接口模块可以是键盘，点击轮，按钮等。这些按钮可包括但不限于：主页按钮、音量按钮、启动按钮和锁定按钮。
传感器组件2414包括一个或多个传感器,用于为终端2400提供各个方面的状态评估。例如,传感器组件2414可以检测到终端2400的打开/关闭状态,组件的相对定位,例如所述组件为终端2400的显示器和小键盘,传感器组件2414还可以检测终端2400或终端2400一个组件的位置改变,用户与终端2400接触的存在或不存在,终端2400方位或加速/减速和终端2400的温度变化。传感器组件2414可以包括接近传感器,被配置用来在没有任何的物理接触时检测附近物体的存在。传感器组件2414还可以包括光传感器,如CMOS或CCD图像传感器,用于在成像应用中使用。在一些实施例中,该传感器组件2414还可以包括加速度传感器,陀螺仪传感器,磁传感器,压力传感器或温度传感器。
通信组件2416被配置为便于终端2400和其他设备之间有线或无线方式的通信。终端2400可以接入基于通信标准的无线网络，如WiFi，2G或3G，或它们的组合。在一个示例性实施例中，通信组件2416经由广播信道接收来自外部广播管理系统的广播信号或广播相关信息。在一个示例性实施例中，所述通信组件2416还包括近场通信(NFC)模块，以促进短程通信。例如，NFC模块可基于射频识别(RFID)技术，红外数据协会(IrDA)技术，超宽带(UWB)技术，蓝牙(BT)技术和其他技术来实现。
在示例性实施例中,终端2400可以被一个或多个应用专用集成电路(ASIC)、数字信号处理器(DSP)、数字信号处理设备(DSPD)、可编程逻辑器件(PLD)、现场可编程门阵列(FPGA)、控制器、微控制器、微处理器或其他电子元件实现,用于执行上述方法。
在示例性实施例中,还提供了一种包括指令的非临时性计算机可读存储介质,例如包括指令的存储器2404,上述指令可由终端2400的处理器2420执行以完成上述方法。例如,所述非临时性计算机可读存储介质可以是ROM、随机存取存储器(RAM)、CD-ROM、磁带、软盘和光数据存储设备等。
在另一示例性实施例中,还提供一种计算机程序产品,该计算机程序产品包含能够由可编程的装置执行的计算机程序,该计算机程序具有当由该可编程的装置执行时用于执行上述的确定用于压缩信道状态信息的压缩模型的方法的代码部分。
图25是根据一示例性实施例示出的一种网络设备的框图。例如，网络设备2500可以被提供为一基站，也可以被提供为一核心网中的其他网络逻辑实体。参照图25，网络设备2500包括处理组件2522，其进一步包括一个或多个处理器，以及由存储器2532所代表的存储器资源，用于存储可由处理组件2522执行的指令，例如应用程序。存储器2532中存储的应用程序可以包括一个或一个以上的每一个对应于一组指令的模块。此外，处理组件2522被配置为执行指令，以执行上述方法实施例提供的确定用于压缩信道状态信息的压缩模型的方法的步骤。
网络设备2500还可以包括一个电源组件2526被配置为执行装置2500的电源管理,一个有线或无线网络接口2550被配置为将网络设备2500连接到网络,和一个输入输出(I/O)接口2558。网络设备2500可以操作基于存储在存储器2532的操作系统,例如Windows Server TM,Mac OS X TM,Unix TM,Linux TM,FreeBSD TM或类似。
在另一示例性实施例中,还提供一种计算机程序产品,该计算机程序产品包含能够由可编程的装置执行的计算机程序,该计算机程序具有当由该可编程的装置执行时用于执行上述的确定用于压缩信道状态信息的压缩模型的方法的代码部分。
本领域技术人员在考虑说明书及实践本公开后，将容易想到本公开的其它实施方案。本申请旨在涵盖本公开的任何变型、用途或者适应性变化，这些变型、用途或者适应性变化遵循本公开的一般性原理并包括本公开未公开的本技术领域中的公知常识或惯用技术手段。说明书和实施例仅被视为示例性的，本公开的真正范围和精神由下面的权利要求指出。
应当理解的是,本公开并不局限于上面已经描述并在附图中示出的精确结构,并且可以在不脱离其范围进行各种修改和改变。本公开的范围仅由所附的权利要求来限制。

Claims (28)

  1. 一种确定用于压缩信道状态信息的压缩模型的方法,其特征在于,应用于终端,所述方法包括:
    接收网络设备发送的模型信息,所述模型信息包括解压模型的第一模型参数,所述解压模型用于所述网络设备解压缩所述终端发送的信道状态信息;
    根据所述模型信息确定所述终端用于压缩信道状态信息的压缩模型。
  2. 根据权利要求1所述的方法,其特征在于,所述模型信息还包括与所述解压模型对应的压缩模型的第二模型参数。
  3. 根据权利要求1或2所述的方法,其特征在于,所述根据所述模型信息确定所述终端用于压缩信道状态信息的压缩模型,包括:
    根据所述第一模型参数训练得到与所述解压模型对应的压缩模型。
  4. 根据权利要求3所述的方法,其特征在于,所述根据所述第一模型参数训练得到与所述解压模型对应的压缩模型,包括:
    在所述终端本地训练得到与所述解压模型对应的压缩模型;或者,
    向服务器发送获取请求,并获取所述服务器发送的与所述解压模型对应的压缩模型。
  5. 根据权利要求2所述的方法,其特征在于,所述根据所述模型信息确定所述终端用于压缩信道状态信息的压缩模型,包括:
    对所述第二模型参数进行调优,得到第三模型参数;
    根据所述第三模型参数建立用于压缩信道状态信息的压缩模型。
  6. 根据权利要求2所述的方法,其特征在于,所述根据所述模型信息确定所述终端用于压缩信道状态信息的压缩模型,包括:
    根据所述第二模型参数建立用于压缩信道状态信息的压缩模型。
  7. 根据权利要求1所述的方法,其特征在于,所述终端用于压缩信道状态信息的压缩模型满足以下模型性能条件中的至少一者:
    所述压缩模型与所述解压模型联合使用情况下的均方误差或者归一化均方误差小于第一预设阈值;
    所述压缩模型与所述解压模型联合使用情况下的余弦相似度大于第二预设阈值;
    所述压缩模型与所述解压模型联合使用情况下的余弦相似度平方大于第三预设阈值;
    所述压缩模型与所述解压模型联合使用情况下的信噪比大于第四预设阈值。
  8. 根据权利要求1所述的方法,其特征在于,所述模型信息包括多个解压模型的第一模型参数;或者,所述模型信息包括多个解压模型的第一模型参数以及与所述多个解压模型一一对应的压缩模型的第二模型参数。
  9. 根据权利要求8所述的方法,其特征在于,所述根据所述模型信息确定所述终端用于压缩信道状态信息的压缩模型,包括:
    至少根据所述终端的能力信息和/或所述终端使用的频段信息,从所述模型信息中,确定目标模型的模型参数,所述目标模型为解压模型或者压缩模型;
    根据所述目标模型的模型参数确定所述终端用于压缩信道状态信息的压缩模型。
  10. 根据权利要求9所述的方法,其特征在于,所述方法还包括:
    向所述网络设备上报所述终端确定的所述目标模型。
  11. 根据权利要求10所述的方法,其特征在于,所述向所述网络设备上报所述终端确定的所述目标模型包括:
    向所述网络设备上报所述目标模型的标识信息;或者,
    向所述网络设备上报所述终端的能力信息和/或所述终端使用的频段信息，所述终端的能力信息和/或所述终端使用的频段信息用于所述网络设备确定所述终端确定的目标模型。
  12. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    根据通信协议中的预设时长或者所述网络设备配置的预设时长,确定所述压缩模型的启用时间,所述预设时长为相对特定时刻的时长。
  13. 根据权利要求12所述的方法，其特征在于，所述特定时刻包括所述终端接收所述模型信息所在的PDSCH的结束时刻，和/或，所述终端反馈针对包括所述模型信息的PDSCH的HARQ-ACK信息的上行资源的最后一个符号的时刻。
  14. 一种确定用于压缩信道状态信息的压缩模型的方法,其特征在于,应用于网络设备,所述方法包括:
    发送模型信息,所述模型信息包括解压模型的第一模型参数,所述解压模型用于所述网络设备解压缩终端发送的信道状态信息,所述模型信息用于终端确定用于压缩信道状态信息的压缩模型。
  15. 根据权利要求14所述的方法,其特征在于,所述模型信息还包括与所述解压模型对应的压缩模型的第二模型参数。
  16. 根据权利要求14所述的方法,其特征在于,所述发送模型信息包括:
    广播所述模型信息;或者,
    通过单播的方式向终端发送所述模型信息;或者
    通过组播的方式向终端组发送所述模型信息。
  17. 根据权利要求14所述的方法,其特征在于,所述发送模型信息包括:
    针对未接入所述网络设备的终端,通过广播的方式发送所述模型信息;
    针对已完成接入所述网络设备的终端或终端组,通过单播或者组播的方式发送所述模型信息。
  18. 根据权利要求17所述的方法，其特征在于，所述针对已完成接入所述网络设备的终端或终端组，通过单播或者组播的方式发送所述模型信息，包括：
    针对已完成接入所述网络设备的终端或终端组,响应于所述终端或所述终端组发送的终端能力信息和/或频段信息,通过单播或者组播的方式发送所述模型信息。
  19. 根据权利要求18所述的方法,其特征在于,对于终端能力信息和/或使用的频段信息不同的终端或终端组,所述网络设备发送的模型信息不同。
  20. 根据权利要求14所述的方法,其特征在于,所述模型信息包括多个解压模型的第一模型参数;或者,所述模型信息包括多个解压模型的第一模型参数,以及与所述多个解压模型一一对应的压缩模型的第二模型参数。
  21. 根据权利要求20所述的方法,其特征在于,所述方法还包括:
    接收终端上报的所述终端确定的目标模型,所述目标模型为所述终端从所述模型信息中确定的,所述目标模型的模型参数用于所述终端确定所述终端用于压缩信道状态信息的压缩模型。
  22. 根据权利要求14-21中任一项所述的方法,其特征在于,所述网络设备为基站,所述发送模型信息包括:
    通过RRC信令发送所述模型信息。
  23. 根据权利要求14-22中任一项所述的方法,其特征在于,所述网络设备为核心网中的AMF网元,所述发送模型信息包括:
    通过NAS信令发送所述模型信息。
  24. 一种确定用于压缩信道状态信息的压缩模型的装置,其特征在于,应用于终端,所述装置包括:
    接收模块,被配置为接收网络设备发送的模型信息,所述模型信息包括解压模型的第一模型参数,所述解压模型用于所述网络设备解压缩所述终端发送的信道状态信息;
    确定模块,被配置为根据所述模型信息确定所述终端用于压缩信道状态信息的压缩模型。
  25. 一种确定用于压缩信道状态信息的压缩模型的装置,其特征在于,应用于网络设备,所述装置包括:
    发送模块,被配置为发送模型信息,所述模型信息包括解压模型的第一模型参数,所述解压模型用于所述网络设备解压缩所述终端发送的信道状态信息,所述模型信息用于终端确定用于压缩信道状态信息的压缩模型。
  26. 一种确定用于压缩信道状态信息的压缩模型的装置,其特征在于,包括:
    处理器;
    用于存储处理器可执行指令的存储器;
    其中,所述处理器被配置为:
    接收网络设备发送的模型信息,所述模型信息包括解压模型的第一模型参数,所述解压模型用于所述网络设备解压缩所述终端发送的信道状态信息;
    根据所述模型信息确定所述终端用于压缩信道状态信息的压缩模型。
  27. 一种确定用于压缩信道状态信息的压缩模型的装置,其特征在于,包括:
    处理器;
    用于存储处理器可执行指令的存储器;
    其中,所述处理器被配置为:
    发送模型信息,所述模型信息包括解压模型的第一模型参数,所述解压模型用于所述网络设备解压缩所述终端发送的信道状态信息,所述模型信息用于终端确定用于压缩信道状态信息的压缩模型。
  28. 一种计算机可读存储介质,其上存储有计算机程序指令,其特征在于,该程序指令被处理器执行时实现权利要求1-13中任一项所述方法的步骤,或者,权利要求14-22中任一项所述方法的步骤。