WO2023185995A1 - Channel characteristic information transmission method, apparatus, terminal and network-side device - Google Patents

Channel characteristic information transmission method, apparatus, terminal and network-side device

Info

Publication number
WO2023185995A1
WO2023185995A1 (PCT/CN2023/085012; CN2023085012W)
Authority
WO
WIPO (PCT)
Prior art keywords
layer
channel
characteristic information
target
terminal
Prior art date
Application number
PCT/CN2023/085012
Other languages
English (en)
French (fr)
Inventor
任千尧 (Ren Qianyao)
谢天 (Xie Tian)
Original Assignee
维沃移动通信有限公司 (Vivo Mobile Communication Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 维沃移动通信有限公司 (Vivo Mobile Communication Co., Ltd.)
Publication of WO2023185995A1


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W24/00: Supervisory, monitoring or testing arrangements
    • H04W24/02: Arrangements for optimising operational condition
    • H04W24/10: Scheduling measurement reports; Arrangements for measurement reports
    • H04W28/00: Network traffic management; Network resource management
    • H04W28/02: Traffic management, e.g. flow control or congestion control
    • H04W28/06: Optimizing the usage of the radio link, e.g. header compression, information sizing, discarding information

Definitions

  • This application belongs to the field of communication technology, and specifically relates to a channel characteristic information transmission method, device, terminal and network side equipment.
  • Communication data can be transmitted between network-side devices and terminals based on an artificial intelligence (AI) network model.
  • In the channel information compression feedback scheme based on the AI network model, the terminal compresses and encodes the channel information, and the network side decodes the compressed content to restore the channel information.
  • The decoding network on the network side and the encoding network on the terminal side need to be jointly trained to achieve a reasonable match.
  • Channel information with different numbers of layers needs to use different AI network models for compression and encoding, which requires training multiple AI network models to process the channel information and correspondingly increases power consumption on both the terminal side and the network side.
  • Embodiments of the present application provide a channel characteristic information transmission method, device, terminal and network-side equipment, which can solve the problem in related technologies that different layers of channel information need to be compressed and encoded using different AI network models.
  • a channel characteristic information transmission method including:
  • the terminal inputs the channel information of each layer into the corresponding first artificial intelligence AI network model for processing, and obtains the channel feature information output by the first AI network model, where one layer corresponds to one first AI network model;
  • the terminal reports the channel characteristic information corresponding to each layer to the network side device.
  • a channel characteristic information transmission method including:
  • the network side device receives the channel characteristic information corresponding to each layer reported by the terminal;
  • one layer of the terminal corresponds to a first AI network model
  • the first AI network model is used to process the channel information of the layer input by the terminal and output the channel characteristic information.
  • a channel characteristic information transmission device including:
  • a processing module configured to input the channel information of each layer into the corresponding first AI network model for processing, and obtain the channel characteristic information output by the first AI network model, where one layer corresponds to one first AI network model;
  • a reporting module is used to report the channel characteristic information corresponding to each layer to the network side device.
  • a channel characteristic information transmission device including:
  • the receiving module is used to receive the channel characteristic information corresponding to each layer reported by the terminal;
  • one layer of the terminal corresponds to a first AI network model
  • the first AI network model is used to process the channel information of the layer input by the terminal and output the channel characteristic information.
  • In a fifth aspect, a terminal is provided, including a processor and a memory. The memory stores programs or instructions that can be run on the processor. When the programs or instructions are executed by the processor, the steps of the channel characteristic information transmission method described in the first aspect are implemented.
  • In a sixth aspect, a terminal is provided, including a processor and a communication interface, wherein the processor is configured to input the channel information of each layer into the corresponding first artificial intelligence (AI) network model for processing and obtain the channel characteristic information output by the first AI network model, and the communication interface is configured to report the channel characteristic information corresponding to each layer to the network-side device.
  • In a seventh aspect, a network-side device is provided, including a processor and a memory. The memory stores programs or instructions that can be run on the processor. When the programs or instructions are executed by the processor, the steps of the channel characteristic information transmission method described in the second aspect are implemented.
  • In an eighth aspect, a network-side device is provided, including a processor and a communication interface. The communication interface is used to receive the channel characteristic information corresponding to each layer reported by a terminal, wherein one layer of the terminal corresponds to one first AI network model, and the first AI network model is used to process the channel information of the layer input by the terminal and output the channel characteristic information.
  • In a ninth aspect, a communication system is provided, including a terminal and a network-side device. The terminal may be configured to perform the steps of the channel characteristic information transmission method described in the first aspect, and the network-side device may be configured to perform the steps of the channel characteristic information transmission method described in the second aspect.
  • In a tenth aspect, a readable storage medium is provided, on which programs or instructions are stored. When the programs or instructions are executed by a processor, the steps of the channel characteristic information transmission method described in the first aspect, or the steps of the channel characteristic information transmission method described in the second aspect, are implemented.
  • In an eleventh aspect, a chip is provided, including a processor and a communication interface. The communication interface is coupled to the processor, and the processor is used to run programs or instructions to implement the channel characteristic information transmission method described in the first aspect or the channel characteristic information transmission method described in the second aspect.
  • In a twelfth aspect, a computer program/program product is provided. The computer program/program product is stored in a storage medium and is executed by at least one processor to implement the steps of the channel characteristic information transmission method described in the first aspect or the steps of the channel characteristic information transmission method described in the second aspect.
  • the terminal can input the channel information corresponding to each layer into the corresponding first AI network model for processing, and report the channel characteristic information output by the first AI network model of each layer to the network side device.
  • In the related art, network-side devices need to train different AI network models for different numbers of layers, and terminals need to be configured with AI network models corresponding to different numbers of layers.
  • In the embodiments of this application, each layer on the terminal side corresponds to one first AI network model, so no matter how many layers there are on the terminal side, each layer only needs to process its channel information through the corresponding first AI network model. This eliminates the need to train different AI network models for different numbers of layers, reduces the transmission overhead of the AI network model between network-side devices and terminals, and also reduces the power consumption of terminals and network-side devices.
  • Figure 1 is a block diagram of a wireless communication system applicable to the embodiment of the present application.
  • Figure 2 is a flow chart of a channel characteristic information transmission method provided by an embodiment of the present application.
  • Figure 3 is a flow chart of another channel characteristic information transmission method provided by an embodiment of the present application.
  • Figure 4 is a structural diagram of a channel characteristic information transmission device provided by an embodiment of the present application.
  • Figure 5 is a structural diagram of another channel characteristic information transmission device provided by an embodiment of the present application.
  • Figure 6 is a structural diagram of a communication device provided by an embodiment of the present application.
  • Figure 7 is a structural diagram of a terminal provided by an embodiment of the present application.
  • Figure 8 is a structural diagram of a network side device provided by an embodiment of the present application.
  • The terms "first", "second", etc. in the description and claims of this application are used to distinguish similar objects, not to describe a specific order or sequence. It is to be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the present application can be practiced in sequences other than those illustrated or described herein. The objects distinguished by "first" and "second" are usually of one type, and the number of objects is not limited.
  • For example, the first object may be one object or multiple objects.
  • In addition, "and/or" in the description and claims indicates at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
  • The techniques described in the embodiments of this application can be used in various communication systems, such as Long Term Evolution (LTE), Long Term Evolution-Advanced (LTE-A), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Frequency Division Multiple Access (FDMA), Orthogonal Frequency Division Multiple Access (OFDMA), and Single-carrier Frequency Division Multiple Access (SC-FDMA) systems.
  • The terms "system" and "network" in the embodiments of this application are often used interchangeably, and the described technology can be used not only for the above-mentioned systems and radio technologies, but also for other systems and radio technologies.
  • The following description describes a New Radio (NR) system for example purposes and uses NR terminology in much of the description, but these techniques can also be applied to systems other than NR, such as a 6th Generation (6G) communication system.
  • FIG. 1 shows a block diagram of a wireless communication system to which embodiments of the present application are applicable.
  • the wireless communication system includes a terminal 11 and a network side device 12.
  • The terminal 11 can be a mobile phone, a tablet computer (Tablet Personal Computer), a laptop computer (also known as a notebook computer), a personal digital assistant (PDA), a handheld computer, a netbook, an ultra-mobile personal computer (UMPC), a mobile Internet device (MID), augmented reality (AR)/virtual reality (VR) equipment, a robot, a wearable device, vehicle user equipment (VUE), pedestrian user equipment (PUE), a smart home device (home equipment with wireless communication functions, such as a refrigerator, TV, washing machine or furniture), a game console, a personal computer (PC), a teller machine or a self-service machine, etc.
  • The network-side device 12 may include an access network device or a core network device, where the access network device may also be called a radio access network device, a radio access network (Radio Access Network, RAN), a radio access network function, or a radio access network unit.
  • Access network equipment may include base stations, Wireless Local Area Network (WLAN) access points or WiFi nodes, etc.
  • The base station may be called a Node B, an evolved Node B (eNB), an access point, a base transceiver station (BTS), a radio base station, a radio transceiver, a basic service set (BSS), an extended service set (ESS), a home Node B, a home evolved Node B, a transmitting receiving point (TRP), or some other appropriate term in the field; as long as the same technical effect is achieved, the base station is not limited to a specific technical term. It should be noted that in the embodiments of this application, only the base station in the NR system is introduced as an example, and the specific type of base station is not limited.
  • In the communication process, channel state information (CSI) can be obtained through channel estimation.
  • The transmitter can optimize signal transmission based on CSI to better match the channel state.
  • For example, the channel quality indicator (CQI) can be used to select a modulation and coding scheme (MCS), and the precoding matrix indicator (PMI) can be used for precoding.
  • CSI acquisition has been a research hotspot since multi-antenna technology (multiple-input multiple-output, MIMO) was proposed.
  • Typically, a network-side device (such as a base station) sends a channel state information reference signal (CSI-RS), and the terminal performs channel estimation based on the CSI-RS.
  • The base station reconstructs the channel information based on the codebook information fed back by the terminal, and uses it for data precoding and multi-user scheduling before the next CSI report.
  • The terminal can change from reporting a PMI per subband to reporting PMI based on delay. Since the channel is more concentrated in the delay domain, a PMI with fewer delays can approximately represent the PMIs of all subbands; that is, the delay-domain information is compressed before reporting.
  • The base station can also precode the CSI-RS in advance and send the precoded CSI-RS to the terminal, so that what the terminal sees is the channel corresponding to the precoded CSI-RS. The terminal then only needs to select several ports with greater strength among the indicated ports and report the coefficients corresponding to these ports.
  • the terminal uses the AI network model to compress and encode the channel information, and the base station decodes the compressed content through the AI network model to restore the channel information.
  • The AI network model used by the base station for decoding and the AI network model used by the terminal for encoding need to be jointly trained to achieve a reasonable degree of matching.
  • the terminal's AI network model for encoding and the base station's AI network model for decoding form a joint neural network model, which is jointly trained by the network side.
  • The base station sends the AI network model for encoding to the terminal.
  • The terminal estimates the CSI-RS, calculates the channel information, passes the calculated channel information or the originally estimated channel information through the AI network model to obtain an encoding result, and sends the encoding result to the base station.
  • The base station receives the encoding result, inputs it into the AI network model for decoding, and recovers the channel information.
  • the energy of the channel itself is not concentrated.
  • Through precoding, the channel can be divided into several independent, mutually non-interfering channels, which is very suitable for parallel data transmission to improve throughput.
  • Therefore, the terminal needs to feed back the channel information, or PMI information, of multiple layers.
  • Typically, the terminal decomposes the channel matrix by singular value decomposition (SVD) and takes the first few columns of the V matrix as the PMI information that needs to be reported.
  • The eigenvalues or singular values of layer1, layer2, etc. decrease in sequence, and the proportion of the entire channel represented by each layer's channel information also decreases in sequence.
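As a sketch of the decomposition described above, the per-layer precoders (PMI candidates) and their energy shares can be obtained from a channel matrix via SVD. The antenna dimensions (4 receive antennas, 32 CSI-RS ports) and the rank are illustrative assumptions, not values from the application:

```python
import numpy as np

# One subband's estimated channel matrix (illustrative dimensions).
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 32)) + 1j * rng.standard_normal((4, 32))

# SVD: columns of V are candidate precoders, singular values are sorted descending.
U, s, Vh = np.linalg.svd(H, full_matrices=False)
V = Vh.conj().T

rank = 2                                            # number of layers the terminal reports
layer_precoders = [V[:, i] for i in range(rank)]    # first columns of V = per-layer PMI info
energy_share = s[:rank] ** 2 / np.sum(s ** 2)       # each layer's share of channel energy

# layer1's share is the largest, layer2's smaller, and so on, matching the text.
```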
  • Figure 2 is a flow chart of a channel characteristic information transmission method provided by an embodiment of the present application. This method is applied to terminals. As shown in Figure 2, the method includes the following steps:
  • Step 201 The terminal inputs the channel information of each layer into the corresponding first AI network model for processing, and obtains the channel characteristic information output by the first AI network model, where one layer corresponds to one first AI network model.
  • Specifically, the terminal can detect the CSI reference signal (CSI-RS) or tracking reference signal (TRS) at a location specified by the network-side device and perform channel estimation to obtain the original channel information, that is, a channel matrix for each subband.
  • the terminal performs SVD decomposition on the original channel information to obtain a precoding matrix in each subband.
  • the precoding matrix includes N layers.
  • The terminal inputs the precoding matrix of each layer (that is, the channel information of that layer) into the corresponding first AI network model.
  • The precoding matrices of each subband of a layer are input into the first AI network model together, or are input into the first AI network model after preprocessing.
  • The input channel information (such as the channel matrix of each subband, or the precoding matrix of each subband) is processed through the first AI network model, for example by channel information encoding, to obtain the channel characteristic information output by the first AI network model.
  • the channel characteristic information may also be called bit information, bit sequence, etc.
  • channel information coding mentioned in the embodiments of this application is different from channel coding.
  • the channel information input to the first AI network model mentioned in the embodiment of this application is precoding information, such as precoding matrix, PMI information, processed precoding matrix, etc.
  • Step 202 The terminal reports the channel characteristic information corresponding to each layer to the network side device.
  • the terminal may report the channel characteristic information corresponding to each layer separately, or may report it in a combined manner.
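Steps 201 and 202 can be sketched as follows. Since the application does not specify the network architecture, the "first AI network model" of each layer is stubbed here as a random sign-quantizing linear map that emits a fixed-length bit sequence; the input dimension (64) and output length (100 bits) are illustrative assumptions:

```python
import numpy as np

def make_encoder(in_dim, n_bits, seed):
    """Stand-in for a trained first AI network model: maps a real vector to n_bits bits."""
    W = np.random.default_rng(seed).standard_normal((n_bits, in_dim))
    return lambda x: (W @ x > 0).astype(np.uint8)

# One first AI network model per layer (step 201: each layer has its own model).
encoders = {1: make_encoder(64, 100, 10),
            2: make_encoder(64, 100, 20)}

def report_channel_features(channel_info_per_layer):
    """Encode each layer with its own model and collect the per-layer report (step 202)."""
    return {layer: encoders[layer](x) for layer, x in channel_info_per_layer.items()}

rng = np.random.default_rng(1)
report = report_channel_features({1: rng.standard_normal(64),
                                  2: rng.standard_normal(64)})
```

The per-layer entries of `report` could then be sent separately or combined, as the text notes.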
  • the terminal can input the channel information corresponding to each layer into the corresponding first AI network model for processing, and report the channel characteristic information output by the first AI network model of each layer to the network side device.
  • In this way, each layer on the terminal side corresponds to one first AI network model. No matter how many layers there are on the terminal side, each layer only needs to process its channel information through the corresponding first AI network model, so there is no need to train different AI network models for different numbers of layers. This can reduce the transmission overhead of AI network models between network-side devices and terminals, reduce the power consumption of terminals and network-side devices, and increase the flexibility of reporting.
  • each layer corresponds to the same first AI network model.
  • In this case, the terminal only needs one first AI network model.
  • The channel information of each layer is input into the same first AI network model to obtain the channel characteristic information of the corresponding layer.
  • The terminal directly reports the channel characteristic information of each layer.
  • For example, if the rank on the terminal side is 2, the channel information of layer1 passes through the first AI network model 1 to obtain the output first channel characteristic information, and the channel information of layer2 passes through the first AI network model 1 to obtain the output second channel characteristic information. The terminal reports the first channel characteristic information and the second channel characteristic information to the network-side device.
  • In this way, the network-side device only needs to train one first AI network model and transmit it to the terminal, which effectively reduces the transmission overhead of the AI network model between the network-side device and the terminal and can also effectively reduce the power consumption of the terminal.
  • the first AI network model corresponding to each layer is different, and the length of the channel characteristic information output by each first AI network model gradually decreases in the order of the layers.
  • In this implementation, each layer on the terminal side corresponds to its own first AI network model. The network-side device trains the first AI network model of each layer separately and sends the trained models to the terminal, and the terminal processes the channel information of different layers using the first AI network model corresponding to each layer. The length of the channel characteristic information output by each first AI network model can decrease gradually in the order of the layers.
  • For example, the length of the channel characteristic information output by the first AI network model corresponding to layer1 is 200 bits, the length of the channel characteristic information output by the first AI network model corresponding to layer2 is 180 bits, the length of the channel characteristic information output by the first AI network model corresponding to layer3 is 160 bits, and so on. In this way, the length of the channel characteristic information output by the first AI network model corresponding to each layer is limited to reduce the transmission overhead of the terminal.
  • the method may also include:
  • the terminal determines the number of layers corresponding to the channel information based on the rank of the channel;
  • the terminal obtains the ratio of the target parameters of the first target layer relative to the sum of the target parameters of the second target layer, and determines the first AI network model corresponding to the first target layer based on the proportion range in which the ratio is located,
  • the first target layer is any layer among the layers corresponding to the channel information, and the second target layer is all layers corresponding to the terminal or all layers reported by the terminal;
  • The target parameters include any one of the following: eigenvalues, channel quality indicator (CQI), and channel capacity.
  • That is, the first AI network model of a certain layer of the terminal may be determined based on the ratio of the target parameter of that layer to the sum of the target parameters of all layers, or based on the ratio of the target parameter of that layer to the sum of the target parameters of all reported layers.
  • The terminal divides the proportion ranges corresponding to different first AI network models in advance. For example, the proportion range of 70% to 100% corresponds to the first AI network model 001, the proportion range of 40% to 70% corresponds to the first AI network model 002, and the proportion range below 40% corresponds to the first AI network model 003. If the terminal selects rank 2, the proportion of layer1's eigenvalue to the sum of the eigenvalues of all layers is 75%, and the proportion of layer2's eigenvalue to the sum of the eigenvalues of all layers is 20%, then it is determined that layer1 corresponds to the first AI network model 001 and layer2 corresponds to the first AI network model 003. Further, the terminal processes the input channel information based on the first AI network model determined for each layer.
  • the terminal determines the first AI network model corresponding to the layer based on the layer's characteristic value, CQI or channel capacity, which increases the flexibility of the terminal in processing channel information.
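The proportion-based selection above can be sketched directly. The thresholds (70%, 40%) and model identifiers follow the worked example in the text; the eigenvalues themselves are made up for illustration:

```python
def select_model(layer_eigenvalue, all_eigenvalues):
    """Map a layer's eigenvalue share to a first AI network model, per the example ranges."""
    ratio = layer_eigenvalue / sum(all_eigenvalues)
    if ratio >= 0.7:          # 70% to 100%
        return "model_001"
    elif ratio >= 0.4:        # 40% to 70%
        return "model_002"
    return "model_003"        # below 40%

# layer1 holds 75% of the total eigenvalue sum, layer2 20%, layer3 5%.
eigenvalues = [7.5, 2.0, 0.5]
assignments = [select_model(ev, eigenvalues) for ev in eigenvalues]
# layer1 -> model_001 and layer2 -> model_003, matching the worked example.
```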
  • Optionally, the first AI network model corresponding to each layer of the terminal is different, and the input of the target first AI network model includes the channel information of the third target layer, wherein the layers corresponding to the terminal are sorted based on the target parameters, the third target layer is any one of the layers corresponding to the terminal, the target first AI network model corresponds to the third target layer, and the target parameters include any one of the following: eigenvalues, CQI, and channel capacity.
  • For example, if the third target layer is layer2, the input of the first AI network model corresponding to layer2 includes the channel information of layer2; if the third target layer is layer3, the input of the first AI network model corresponding to layer3 includes the channel information of layer3.
  • the third target layer is any layer other than the first layer after sorting the layers corresponding to the terminal, and the input of the target first AI network model also includes any of the following:
  • the third target layer is layer3, and the input of the first AI network model corresponding to layer3 can include the following methods:
  • Method 1: the channel information of layer3 and the channel characteristic information output by the first AI network model corresponding to layer2;
  • Method 2: the channel information of layer3 and the channel characteristic information output by the first AI network model corresponding to layer1;
  • Method 3: the channel information of layer3, the channel characteristic information output by the first AI network model corresponding to layer1, and the channel characteristic information output by the first AI network model corresponding to layer2;
  • Method 4: the channel information of layer3 and the channel information of layer2;
  • Method 5: the channel information of layer3, the channel information of layer1, and the channel information of layer2.
  • In this embodiment, the terminal can determine the input of the first AI network model corresponding to a certain layer based on the above different methods, so that the terminal's input to the first AI network model of each layer can be different, which improves the terminal's flexibility in processing channel information.
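The five input-construction options for layer3 can be sketched as simple concatenations. The array sizes are illustrative assumptions; `feat1` and `feat2` stand for the channel characteristic information already output by the first AI network models of layer1 and layer2:

```python
import numpy as np

rng = np.random.default_rng(2)
# Per-layer channel information (illustrative 64-element vectors).
ch1, ch2, ch3 = (rng.standard_normal(64) for _ in range(3))
# Channel characteristic information already produced for layer1 and layer2 (100 bits each).
feat1, feat2 = (rng.integers(0, 2, 100) for _ in range(2))

inputs_for_layer3 = {
    "method_1": np.concatenate([ch3, feat2]),         # layer3 info + layer2's output
    "method_2": np.concatenate([ch3, feat1]),         # layer3 info + layer1's output
    "method_3": np.concatenate([ch3, feat1, feat2]),  # layer3 info + both outputs
    "method_4": np.concatenate([ch3, ch2]),           # layer3 info + layer2 info
    "method_5": np.concatenate([ch3, ch1, ch2]),      # layer3 info + layer1 and layer2 info
}
```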
  • the terminal inputs the channel information of each layer into the corresponding first AI network model for processing, including:
  • the terminal pre-processes the channel information of each layer and inputs it into the corresponding first AI network model for processing.
  • The terminal may first preprocess the channel information.
  • The preprocessing may be orthogonal basis projection, oversampling, etc. The following takes orthogonal basis projection as an example.
  • Taking the precoding matrix as an example, if the number of CSI-RS ports is 32, the precoding matrix of a layer can be a 32*1 matrix. 32 orthogonal DFT vectors are generated, each of length 32; the precoding matrix is projected onto the 32 orthogonal DFT vectors, several coefficients with larger amplitudes are selected, and the coefficients and/or the corresponding DFT vectors are used as the preprocessing result.
  • Oversampling can be applied during projection: for example, multiple groups of 32 DFT vectors each are generated, where the 32 DFT vectors within each group are orthogonal to each other but there is no orthogonality between groups; the terminal then selects, from for example 4 groups, the group closest to the precoding matrix and performs the projection as above.
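A minimal sketch of the orthogonal-basis-projection step: project a 32x1 precoding vector onto 32 orthonormal DFT vectors and keep the few strongest coefficients. Keeping 4 coefficients is an illustrative choice, and oversampling (multiple basis groups) is omitted:

```python
import numpy as np

N = 32
# 32 orthonormal DFT vectors (rows): the DFT matrix scaled to unit-norm rows.
dft_basis = np.fft.fft(np.eye(N)) / np.sqrt(N)

rng = np.random.default_rng(3)
w = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # one layer's precoding vector
w = w / np.linalg.norm(w)

coeffs = dft_basis.conj() @ w            # projection coefficient onto each DFT vector
keep = np.argsort(np.abs(coeffs))[-4:]   # indices of the 4 largest-amplitude coefficients
preprocessed = (keep, coeffs[keep])      # selected coefficients + their basis indices
```

Because the basis is orthonormal, the projection preserves the vector's energy, and the kept coefficients capture most of it.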
  • Optionally, the terminal inputting the channel information of each layer into the corresponding first AI network model after preprocessing includes any of the following:
  • the terminal preprocesses the channel information of each layer through the second AI network model and then inputs the results into the corresponding first AI network models respectively;
  • or, the terminal preprocesses the channel information of each layer through the corresponding target second AI network model, and the output of the target second AI network model is input into the first AI network model corresponding to the target layer, wherein the target layer is any layer corresponding to the terminal, and each layer corresponds to one target second AI network model.
  • the terminal may also preprocess the channel information through the second AI network model.
  • That is, the terminal preprocesses the channel information of each layer through the same second AI network model, and then inputs the output of the second AI network model for each layer into the first AI network model corresponding to that layer.
  • In this way, the network-side device only needs to train one second AI network model, which reduces the power consumption of the network-side device and the terminal.
  • Optionally, the network-side device can also train a second AI network model for each layer; each layer then preprocesses its channel information through the corresponding second AI network model, and the output of that second AI network model is used as the input to the first AI network model of the corresponding layer.
  • Preprocessing channel information through different second AI network models improves the flexibility of the terminal in preprocessing channel information for each layer.
  • the terminal reports the channel characteristic information corresponding to each layer to the network side device, including:
  • the terminal performs post-processing on the channel characteristic information corresponding to the target layer, and reports the post-processed channel characteristic information to the network side device; wherein the target layer is any layer corresponding to the terminal.
  • the terminal may post-process the channel characteristic information corresponding to each layer and then report it to the network side device, or it may only perform post-processing on the channel characteristic information corresponding to one or more specified layers. The post-processed channel characteristic information is then reported to the network side device.
  • The post-processing method may be entropy coding, or truncation of the channel characteristic information output by the first AI network model to a target length, etc.
  • the terminal performs post-processing on the channel characteristic information corresponding to the target layer, and reports the post-processed channel characteristic information to the network side device, including:
  • the terminal performs post-processing on the channel characteristic information corresponding to the target layer to obtain channel characteristic information of a target length, where the target length is smaller than the length of the channel characteristic information before post-processing;
  • the terminal reports the target length and the channel characteristic information of the target length to the network side device.
• the channel information of layer 1 is processed by the corresponding first AI network model, and channel characteristic information 1 with a length of 100 bits is obtained from the output of that model.
• the channel information of layer 2 is processed by the corresponding first AI network model, and channel characteristic information 2 with a length of 100 bits is obtained; the terminal may skip post-processing for channel characteristic information 1 of layer 1 and post-process channel characteristic information 2 of layer 2 down to 80 bits.
• the terminal can then report the following information to the network side device: the 100-bit channel characteristic information 1, the 80-bit channel characteristic information 2, and the length of channel characteristic information 2 (that is, 80 bits). In this way, the network side device can decode the reported channel characteristic information through the third AI network model that matches the first AI network model to obtain the restored channel information.
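A minimal sketch of this worked example; the bit strings, field names, and helper name are illustrative assumptions:

```python
# Layer 1's feature vector is reported at its full 100-bit length; layer 2's is
# truncated to an 80-bit target length, and the target length is reported too.
def postprocess_truncate(bits, target_len):
    """Keep only the first target_len bits of the encoder output."""
    assert target_len <= len(bits)
    return bits[:target_len]

feature1 = "01" * 50  # 100-bit output of layer 1's first AI network model
feature2 = "10" * 50  # 100-bit output of layer 2's first AI network model

target_len = 80
report = {
    "layer1": feature1,                                    # not post-processed
    "layer2": postprocess_truncate(feature2, target_len),  # truncated to 80 bits
    "layer2_target_length": target_len,                    # reported so the decoder knows
}
```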
  • the post-processing method may be instructed by the network side device, or may be selected by the terminal itself.
  • the target length is included in the first part of the CSI.
  • the terminal may report channel characteristic information through a CSI.
• the CSI includes a first part (CSI Part1) and a second part (CSI Part2), where the first part is the fixed-length part of the CSI and the second part is the variable-length part; the terminal can carry the channel characteristic information in CSI Part1 and also carry the target length of the channel characteristic information of the target layer in CSI Part1.
  • the network side device can directly obtain the channel characteristic information and its length of the target layer from CSI Part1 to decode the channel characteristic information.
  • the terminal reports the channel characteristic information corresponding to each layer to the network side device, including any one of the following:
• when the layers corresponding to the terminal are sorted based on target parameters, the terminal reports the channel characteristic information corresponding to the first layer to the network side device through the first part of the CSI, and reports
• the channel characteristic information corresponding to the layers other than the first layer to the network side device through the second part of the CSI, where the target parameters include any one of the following: CQI, channel capacity;
  • the terminal reports the channel characteristic information corresponding to each layer to the network side device through the second part of the CSI;
  • the terminal reports the channel characteristic information corresponding to each layer to the network side device through the corresponding block in the second part of the CSI, and one layer corresponds to one block.
• the terminal reports the channel characteristic information corresponding to the first layer through CSI Part1 and reports the channel characteristic information corresponding to the layers other than the first layer through CSI Part2; or the terminal reports the channel characteristic information of every layer through
• CSI Part2; or CSI Part2 can be divided into blocks, and the terminal reports the channel characteristic information of each layer through a corresponding block in CSI Part2. In this way, the terminal reports channel characteristic information more flexibly.
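The three reporting options above can be sketched as follows; the field names and packing format are assumptions made for illustration, not signalling defined by this application:

```python
# Illustrative packing of the three reporting options for per-layer channel
# characteristic information into CSI Part1 / Part2.
def pack_csi(features, option):
    """features: list of per-layer bit strings, ordered by layer."""
    if option == 1:  # first layer in Part1, remaining layers in Part2
        return {"part1": [features[0]], "part2": features[1:]}
    if option == 2:  # every layer in Part2
        return {"part1": [], "part2": features}
    if option == 3:  # one Part2 block per layer
        return {"part1": [], "part2_blocks": {i: f for i, f in enumerate(features)}}
    raise ValueError("unknown option")
```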
  • the terminal reports the channel characteristic information corresponding to each layer to the network side device, including:
  • the terminal reports the channel characteristic information corresponding to each layer to the network side device, and discards the channel characteristic information in reverse order of the layer order.
  • the terminal reports channel characteristic information corresponding to each layer to the network side device.
• the channel characteristic information can also be discarded; for example, if resources are insufficient, the terminal can discard the channel characteristic information from back to front in the order of the layers to ensure that the channel characteristic information of the earlier layers can be transmitted to the network side device.
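A minimal sketch of reverse-order discarding under an assumed payload budget; the sizes, budget, and helper name are illustrative:

```python
# When the uplink payload budget is too small, drop whole per-layer reports
# starting from the last layer so that earlier layers survive.
def fit_to_budget(features, budget_bits):
    """features: list of per-layer bit strings ordered layer 1..N."""
    kept = list(features)
    while kept and sum(len(f) for f in kept) > budget_bits:
        kept.pop()  # discard the highest-numbered remaining layer first
    return kept

# Layers 1..3 produce 100-, 100-, and 80-bit reports; a 220-bit budget keeps
# only the first two layers.
kept = fit_to_budget(["0" * 100, "0" * 100, "0" * 80], 220)
```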
  • the method also includes:
• the terminal determines the rank of the channel based on the CSI reference signal (CSI-RS) channel estimation result;
  • the terminal reports the channel characteristic information corresponding to each layer to the network side device, including:
  • the terminal reports a rank indicator (Rank Indicator, RI) and the channel characteristic information corresponding to each layer to the network side device.
  • the terminal determines the rank of the channel based on the CSI-RS channel estimation result, and thus can determine the number of layers corresponding to the terminal.
• after the terminal inputs the channel information of each layer into the corresponding first AI network model and obtains the channel characteristic information output by the first AI network model, the terminal reports the RI and the channel characteristic information corresponding to each layer to the network side device. The network side device can then restore the channel information based on the RI and the channel characteristic information.
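One hedged way to picture rank determination from a channel estimate is counting dominant singular values; the SVD threshold below is an illustrative assumption, since the application only states that the rank is derived from the CSI-RS channel estimation result:

```python
import numpy as np

# Hypothetical rank determination: count singular values of the estimated
# channel matrix that exceed a fraction of the largest singular value.
def estimate_rank(H, rel_threshold=0.1):
    s = np.linalg.svd(H, compute_uv=False)
    return int(np.sum(s > rel_threshold * s[0]))

# Rank-2 toy channel: two strong orthogonal directions in a 4x4 matrix.
u = np.eye(4)
H = 1.0 * np.outer(u[0], u[0]) + 0.8 * np.outer(u[1], u[1])
ri = estimate_rank(H)  # -> 2, so the terminal would report RI = 2
```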
  • Figure 3 is a flow chart of another channel characteristic information transmission method provided by an embodiment of the present application. This method is applied to network side equipment. As shown in Figure 3, the method includes the following steps:
• Step 301: The network side device receives the channel characteristic information corresponding to each layer reported by the terminal.
  • one layer of the terminal corresponds to a first AI network model
  • the first AI network model is used to process the channel information of the layer input by the terminal and output the channel characteristic information.
  • the network side device includes a third AI network model that matches the first AI network model.
  • the first AI network model and the third AI network model are jointly trained through the network side device.
• the network side device sends the trained first AI network model to the terminal.
  • the terminal encodes the input coefficients through the first AI network model and outputs channel characteristic information.
  • the terminal reports the channel characteristic information to the network side device, and the network side device inputs the channel characteristic information into the matching third AI network model.
• the third AI network model decodes the channel characteristic information to obtain the channel information output by the third AI network model; the network side device thus restores the channel information through the third AI network model. In this way, terminals and network side devices can encode and decode channel information through matching AI network models.
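A minimal stand-in for this matched pair: the terminal's "first AI network model" (encoder) and the network side's "third AI network model" (decoder) are represented below by an orthonormal projection and its transpose. This is an illustrative assumption replacing jointly trained networks; the dimensions are also assumed:

```python
import numpy as np

# Build a matched encoder/decoder pair from an orthogonal matrix: the encoder
# compresses 32-dim channel information into an 8-dim feature, and the
# transposed matrix reconstructs it at the network side.
rng = np.random.default_rng(1)
q, _ = np.linalg.qr(rng.standard_normal((32, 32)))
encoder = q[:8]       # 32-dim channel info -> 8-dim channel characteristic info
decoder = encoder.T   # matched reconstruction at the network side

z = rng.standard_normal(8)
x = encoder.T @ z            # channel info lying in the encoder's subspace
feature = encoder @ x        # terminal: encode and report
x_hat = decoder @ feature    # network side: restore the channel info
```

For channel information inside the encoder's subspace, the round trip is exact; a trained pair would instead minimize reconstruction error over realistic channels.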
  • the terminal inputs the channel information corresponding to each layer into the corresponding first AI network model for processing, and reports the channel characteristic information output by the first AI network model of each layer to Network side equipment.
• the network side device does not need to train different AI network models for different numbers of layers of the terminal.
• the network side device can train a first AI network model for each layer on the terminal side; then, no matter how many layers the terminal side has, each layer only needs to process its channel information through the corresponding first AI network model. This eliminates the need to train different AI network models for different layers, effectively saving the power consumption of the network side device and reducing the transmission overhead of the AI network model between the network side device and the terminal.
• each layer corresponds to the same first AI network model. That is to say, no matter how many layers the terminal has, the terminal needs only one first AI network model: the channel information of each layer is input into the same first AI network model, and the network side device only needs to train one first AI network model and transmit it to the terminal, effectively saving the power consumption and transmission overhead of the network side device.
  • the first AI network model corresponding to each layer is different, and the length of the channel characteristic information output by each first AI network model gradually decreases in the order of the layers.
• each layer on the terminal side corresponds to a first AI network model; the network side device conducts separate training for the first AI network model of each layer and sends the trained first AI network models to the terminal, and by limiting the length of the channel characteristic information output by the first AI network model corresponding to each layer, the transmission overhead of the terminal is reduced.
  • the network side device receives channel characteristic information corresponding to each layer reported by the terminal, including any one of the following:
• when the layers corresponding to the terminal are sorted based on target parameters, the network side device receives the channel characteristic information corresponding to the first layer reported by the terminal through the first part of the CSI, and receives through
• the second part of the CSI the channel characteristic information corresponding to the layers other than the first layer, where the target parameters include any one of the following: CQI, channel capacity;
  • the network side device receives the channel characteristic information corresponding to each layer reported by the terminal through the second part of the CSI;
  • the network side device receives the channel characteristic information corresponding to each layer reported by the terminal through the corresponding block in the second part of the CSI, and one layer corresponds to one block.
  • the terminal's reporting method of channel characteristic information is more flexible.
• the network side device receives the channel characteristic information corresponding to each layer reported by the terminal, including:
  • the network side device receives the RI reported by the terminal and the channel characteristic information corresponding to each layer.
  • the terminal reports the RI and the channel characteristic information corresponding to each layer to the network side device, so that the network side device can restore the channel information based on the RI and the channel characteristic information.
  • the channel characteristic information transmission method applied to the network side device corresponds to the above-mentioned method applied to the terminal side.
• the relevant concepts and specific implementation processes involved in this embodiment of the present application can be found in the description of the embodiment of Figure 2 above; to avoid repetition, they are not described again in this embodiment.
  • the execution subject may be a channel characteristic information transmission device.
  • the channel characteristic information transmission method performed by the channel characteristic information transmission device is used as an example to illustrate the channel characteristic information transmission device provided by the embodiment of the present application.
  • the channel characteristic information transmission device 400 includes:
• the processing module 401 is used to input the channel information of each layer into the corresponding first artificial intelligence (AI) network model for processing, and obtain the channel characteristic information output by the first AI network model, where one layer corresponds to one first AI network model;
  • the reporting module 402 is configured to report the channel characteristic information corresponding to each layer to the network side device.
  • each layer corresponds to the same first AI network model.
  • the first AI network model corresponding to each layer is different, and the length of the channel characteristic information output by each first AI network model gradually decreases in the order of the layers.
  • the device further includes a determining module for:
• the first target layer is any layer among the layers corresponding to the channel information, and the second target layer is all layers corresponding to the device or all layers reported by the device;
• the target parameters include any one of the following: CQI and channel capacity.
• the first AI network model corresponding to each layer is different, and the input of the target first AI network model includes the channel information of the third target layer;
  • the layers corresponding to the device are sorted based on target parameters
  • the third target layer is any one of the layers corresponding to the device
• the target first AI network model is the first AI network model corresponding to the third target layer.
• the target parameters include any one of the following: CQI and channel capacity.
  • the third target layer is any layer other than the first layer after sorting the layers corresponding to the device, and the input of the target first AI network model also includes any of the following:
  • processing module 401 is also used to:
  • the channel information of each layer is pre-processed and then input into the corresponding first AI network model for processing.
  • processing module 401 is also used to perform any of the following:
  • the channel information of each layer is preprocessed by the second AI network model and then input into the corresponding first AI network model;
• the output of the target second AI network model is input into the first AI network model corresponding to the target layer, wherein the target layer is any layer corresponding to the device, and each layer corresponds to one of the target second AI network models.
  • reporting module 402 is also used to:
  • the target layer is at least one layer corresponding to the device.
  • reporting module 402 is also used to:
  • the target length and the channel characteristic information of the target length are reported to the network side device.
  • the target length is included in the first part of the CSI.
  • the reporting module 402 is also configured to perform any of the following:
• the channel characteristic information corresponding to the first layer is reported to the network side device through the first part of the CSI, and
• the channel characteristic information corresponding to the layers other than the first layer is reported to the network side device through the second part of the CSI, where the target parameters include any one of the following: CQI, channel capacity;
  • the channel characteristic information corresponding to each layer is reported to the network side device through the corresponding block in the second part of the CSI, and one layer corresponds to one block.
  • reporting module 402 is also used to:
  • the channel characteristic information corresponding to each layer is reported to the network side device, and the channel characteristic information is discarded in reverse order of the layer order.
  • the device also includes:
  • a determination module configured to determine the rank of the channel based on the CSI reference signal channel estimation result
  • the reporting module 402 is also used to:
  • the channel information is precoding information.
  • the device can input the channel information corresponding to each layer into the corresponding first AI network model for processing, and report the channel characteristic information output by the first AI network model of each layer to the network side device.
• each layer of the device described in this application corresponds to a first AI network model, so there is no need to train different AI network models for different numbers of layers,
• which can reduce the transmission overhead of the AI network model between the network side equipment and the device, and can also reduce the power consumption of the device.
• the channel characteristic information transmission device 400 in the embodiment of the present application may be an electronic device, such as an electronic device with an operating system, or may be a component in the electronic device, such as an integrated circuit or chip.
  • the electronic device may be a terminal or other devices other than the terminal.
  • terminals may include but are not limited to the types of terminals 11 listed above, and other devices may be servers, network attached storage (Network Attached Storage, NAS), etc., which are not specifically limited in the embodiment of this application.
  • the channel characteristic information transmission device 400 provided by the embodiment of the present application can implement each process implemented by the terminal in the method embodiment of Figure 2 and achieve the same technical effect. To avoid duplication, the details will not be described here.
  • Figure 5 is a structural diagram of another channel characteristic information transmission device provided by an embodiment of the present application. As shown in Figure 5, the channel characteristic information transmission device 500 includes:
  • the receiving module 501 is used to receive the channel characteristic information corresponding to each layer reported by the terminal;
  • one layer of the terminal corresponds to a first AI network model
  • the first AI network model is used to process the channel information of the layer input by the terminal and output the channel characteristic information.
  • each layer corresponds to the same first AI network model.
  • the first AI network model corresponding to each layer is different, and the length of the channel characteristic information output by each first AI network model gradually decreases in the order of the layers.
  • the receiving module 501 is also configured to perform any of the following:
• when the layers corresponding to the terminal are sorted based on target parameters, receive the channel characteristic information corresponding to the first layer reported by the terminal through the first part of the CSI, and the channel characteristic information corresponding to the other layers reported through the second part of the CSI.
  • the receiving module 501 is also used to:
• the device can train a first AI network model for each layer on the terminal side, and no matter how many layers there are on the terminal side, each layer can process its channel information through the corresponding first AI network model; there is thus no need to train different AI network models for different layers, which can save the power consumption of the device and reduce the transmission overhead of the AI network model between the device and the terminal.
  • the channel characteristic information transmission device 500 provided by the embodiment of the present application can implement each process implemented by the network side device in the method embodiment of Figure 3, and achieve the same technical effect. To avoid duplication, the details will not be described here.
  • this embodiment of the present application also provides a communication device 600, which includes a processor 601 and a memory 602.
• the memory 602 stores programs or instructions that can be run on the processor 601.
• for example, if the communication device 600 is a terminal, when the program or instruction is executed by the processor 601, each step of the method embodiment described in Figure 2 is implemented, and the same technical effect can be achieved.
• if the communication device 600 is a network side device, when the program or instruction is executed by the processor 601, each step of the method embodiment described in Figure 3 is implemented, and the same technical effect can be achieved; to avoid duplication, details are not described again here.
  • Embodiments of the present application also provide a terminal, including a processor and a communication interface.
• the processor is configured to input the channel information of each layer into the corresponding first artificial intelligence (AI) network model for processing, and obtain the channel characteristic information output by the first AI network model, where one layer corresponds to a first AI network model; the communication interface is used to report the channel characteristic information corresponding to each layer to the network side device.
  • This terminal embodiment corresponds to the above-mentioned terminal-side method embodiment.
  • FIG. 7 is a schematic diagram of the hardware structure of a terminal that implements an embodiment of the present application.
• the terminal 700 includes, but is not limited to, at least some of the following components: a radio frequency unit 701, a network module 702, an audio output unit 703, an input unit 704, a sensor 705, a display unit 706, a user input unit 707, an interface unit 708, a memory 709, a processor 710, etc.
  • the terminal 700 may also include a power supply (such as a battery) that supplies power to various components.
• the power supply may be logically connected to the processor 710 through a power management system, thereby implementing functions such as charging management, discharging management, and power consumption management through the power management system.
  • the terminal structure shown in FIG. 7 does not constitute a limitation on the terminal.
  • the terminal may include more or fewer components than shown in the figure, or some components may be combined or arranged differently, which will not be described again here.
• the input unit 704 may include a graphics processing unit (Graphics Processing Unit, GPU) 7041 and a microphone 7042; the graphics processor 7041 processes image data of still pictures or videos obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode.
  • the display unit 706 may include a display panel 7061, which may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like.
• the user input unit 707 includes a touch panel 7071 and at least one of other input devices 7072. The touch panel 7071 is also called a touch screen.
  • the touch panel 7071 may include two parts: a touch detection device and a touch controller.
  • Other input devices 7072 may include but are not limited to physical keyboards, function keys (such as volume control keys, switch keys, etc.), trackballs, mice, and joysticks, which will not be described again here.
  • the radio frequency unit 701 after receiving downlink data from the network side device, can transmit it to the processor 710 for processing; in addition, the radio frequency unit 701 can send uplink data to the network side device.
  • the radio frequency unit 701 includes, but is not limited to, an antenna, amplifier, transceiver, coupler, low noise amplifier, duplexer, etc.
  • Memory 709 may be used to store software programs or instructions as well as various data.
• the memory 709 may mainly include a first storage area for storing programs or instructions and a second storage area for storing data, wherein the first storage area may store an operating system and the application programs or instructions required for at least one function (such as a sound playback function or an image playback function).
  • memory 709 may include volatile memory or non-volatile memory, or memory 709 may include both volatile and non-volatile memory.
• the non-volatile memory can be read-only memory (Read-Only Memory, ROM), programmable read-only memory (Programmable ROM, PROM), erasable programmable read-only memory (Erasable PROM, EPROM), electrically erasable programmable read-only memory (Electrically EPROM, EEPROM), or flash memory.
• the volatile memory can be random access memory (Random Access Memory, RAM), static random access memory (Static RAM, SRAM), dynamic random access memory (Dynamic RAM, DRAM), synchronous dynamic random access memory (Synchronous DRAM, SDRAM), double data rate synchronous dynamic random access memory (Double Data Rate SDRAM, DDR SDRAM), enhanced synchronous dynamic random access memory (Enhanced SDRAM, ESDRAM), synchronous link dynamic random access memory (Synch link DRAM, SLDRAM), or direct Rambus random access memory (Direct Rambus RAM, DRRAM).
• the processor 710 may include one or more processing units; optionally, the processor 710 integrates an application processor and a modem processor, where the application processor mainly handles operations involving the operating system, user interface, application programs, etc., and the modem processor, such as a baseband processor, mainly handles wireless communication signals. It can be understood that the modem processor may alternatively not be integrated into the processor 710.
• the processor 710 is used to input the channel information of each layer into the corresponding first artificial intelligence (AI) network model for processing, and obtain the channel characteristic information output by the first AI network model, where one layer corresponds to one first AI network model;
  • the radio frequency unit 701 is configured to report the channel characteristic information corresponding to each layer to the network side device.
  • each layer corresponds to the same first AI network model.
  • the first AI network model corresponding to each layer is different, and the length of the channel characteristic information output by each first AI network model gradually decreases in the order of the layers.
  • processor 710 is also used to:
• the first target layer is any layer among the layers corresponding to the channel information, and the second target layer is all layers corresponding to the terminal or all layers reported by the terminal;
• the target parameters include any one of the following: CQI and channel capacity.
  • the first AI network model corresponding to each layer is different, and the input of the target first AI network model includes channel information of the third target layer;
  • the layers corresponding to the terminal are sorted based on target parameters
  • the third target layer is any one of the layers corresponding to the terminal
• the target first AI network model is the first AI network model corresponding to the third target layer;
• the target parameters include any one of the following: CQI and channel capacity.
  • the third target layer is any layer other than the first layer after sorting the layers corresponding to the terminal, and the input of the target first AI network model also includes any of the following:
  • processor 710 is also used to:
  • the channel information of each layer is pre-processed and then input into the corresponding first AI network model for processing.
  • processor 710 is also used to perform any of the following:
  • the channel information of each layer is preprocessed by the second AI network model and then input into the corresponding first AI network model;
• the output of the target second AI network model is input into the first AI network model corresponding to the target layer, wherein the target layer is any layer corresponding to the terminal, and each layer corresponds to one of the target second AI network models.
  • the radio frequency unit 701 is also used for:
  • the target layer is at least one layer corresponding to the terminal.
  • the radio frequency unit 701 is also used for:
  • the target length and the channel characteristic information of the target length are reported to the network side device.
  • the target length is included in the first part of the CSI.
  • the radio frequency unit 701 is also configured to perform any of the following:
• the channel characteristic information corresponding to the first layer is reported to the network side device through the first part of the CSI, and the channel characteristic information corresponding to the layers other than the first layer is reported to the network side device through the second part of the CSI, where the target parameters include any one of the following: CQI, channel capacity;
  • the channel characteristic information corresponding to each layer is reported to the network side device through the corresponding block in the second part of the CSI, and one layer corresponds to one block.
  • the radio frequency unit 701 is also used for:
  • the channel characteristic information corresponding to each layer is reported to the network side device, and the channel characteristic information is discarded in reverse order of the layer order.
  • the processor 710 is also configured to: determine the rank of the channel according to the CSI reference signal channel estimation result;
  • the radio frequency unit 701 is also configured to report the RI and the channel characteristic information corresponding to each layer to the network side device.
  • the channel information is precoding information.
• each layer of the terminal corresponds to a first AI network model, and no matter how many layers there are on the terminal side, each layer only needs to process its channel information through the corresponding first AI network model; there is thus no need to train different AI network models for different numbers of layers, which can reduce the transmission overhead of the AI network model between network side devices and terminals, and can also reduce the power consumption of terminals and network side devices.
  • Embodiments of the present application also provide a network side device, including a processor and a communication interface.
• the communication interface is used to receive channel characteristic information corresponding to each layer reported by a terminal; wherein one layer of the terminal corresponds to a first AI network model, and the first AI network model is used to process the channel information of the layer input by the terminal and output the channel characteristic information.
  • This network-side device embodiment corresponds to the above-mentioned network-side device method embodiment.
  • Each implementation process and implementation manner of the above-mentioned method embodiment can be applied to this network-side device embodiment, and can achieve the same technical effect.
  • the embodiment of the present application also provides a network side device.
  • the network side device 800 includes: an antenna 81 , a radio frequency device 82 , a baseband device 83 , a processor 84 and a memory 85 .
  • the antenna 81 is connected to the radio frequency device 82 .
  • the radio frequency device 82 receives information through the antenna 81 and sends the received information to the baseband device 83 for processing.
  • the baseband device 83 processes the information to be sent and sends it to the radio frequency device 82.
  • the radio frequency device 82 processes the received information and then sends it out through the antenna 81.
  • the method performed by the network side device in the above embodiments can be implemented in the baseband device 83, which includes a baseband processor.
  • the baseband device 83 may include, for example, at least one baseband board on which multiple chips are disposed; as shown in FIG. 8, one of the chips is, for example, the baseband processor, which is connected to the memory 85 through a bus interface to call the program in the memory 85 and perform the network device operations shown in the above method embodiments.
  • the network side device may also include a network interface 86, which is, for example, a common public radio interface (CPRI).
  • the network side device 800 in this embodiment of the present application further includes: instructions or programs stored in the memory 85 and executable on the processor 84.
  • the processor 84 calls the instructions or programs in the memory 85 to execute the method executed by each module shown in Figure 5 and achieve the same technical effect; to avoid repetition, details are not described here again.
  • Embodiments of the present application also provide a readable storage medium.
  • Programs or instructions are stored on the readable storage medium.
  • when the program or instructions are executed by a processor, each process of the method embodiment described in Figure 2 above is implemented, or each process of the method embodiment described in Figure 3 above is implemented, and the same technical effect can be achieved. To avoid repetition, details are not described here again.
  • the processor is the processor in the terminal described in the above embodiment.
  • the readable storage medium includes computer readable storage media, such as computer read-only memory ROM, random access memory RAM, magnetic disk or optical disk, etc.
  • An embodiment of the present application further provides a chip.
  • the chip includes a processor and a communication interface.
  • the communication interface is coupled to the processor.
  • the processor is used to run programs or instructions to implement the method described in Figure 2 or the method described in Figure 3.
  • the chip mentioned in the embodiments of this application may also be called a system-on-chip, a system chip, a chip system, or a system-on-a-chip.
  • Embodiments of the present application further provide a computer program/program product.
  • the computer program/program product is stored in a storage medium.
  • the computer program/program product is executed by at least one processor to implement each process of the method embodiment described in Figure 2 above, or each process of the method embodiment described in Figure 3 above, and the same technical effect can be achieved. To avoid repetition, details are not described here again.
  • An embodiment of the present application also provides a communication system, including: a terminal and a network side device.
  • the terminal can be used to perform the steps of the channel characteristic information transmission method described in Figure 2 above.
  • the network side device can be used to perform the steps of the channel characteristic information transmission method described in Figure 3 above.
  • the methods of the above embodiments can be implemented by means of software plus the necessary general-purpose hardware platform; of course, they can also be implemented by hardware, but in many cases the former is the better implementation.
  • the technical solution of the present application, in essence or in the part contributing to the related art, can be embodied in the form of a computer software product.
  • the computer software product is stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions to cause a terminal (which can be a mobile phone, computer, server, air conditioner, or network device, etc.) to execute the methods described in the various embodiments of this application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

This application discloses a channel characteristic information transmission method and apparatus, a terminal, and a network-side device, belonging to the field of communication technology. The channel characteristic information transmission method of the embodiments of this application includes: a terminal inputs the channel information of each layer into a corresponding first artificial intelligence (AI) network model for processing, and obtains the channel characteristic information output by the first AI network model, where one layer corresponds to one first AI network model; and the terminal reports the channel characteristic information corresponding to each layer to a network-side device.

Description

Channel characteristic information transmission method and apparatus, terminal, and network-side device
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to Chinese Patent Application No. 202210349419.2, filed in China on April 1, 2022, the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
This application belongs to the field of communication technology, and specifically relates to a channel characteristic information transmission method and apparatus, a terminal, and a network-side device.
BACKGROUND
With the development of science and technology, applying artificial intelligence (AI) network models in communication systems has become a research topic; for example, a network-side device and a terminal can transmit communication data based on an AI network model. In current AI-based channel information compression feedback schemes, the terminal compresses and encodes the channel information, and the network side decodes the compressed content to recover the channel information; here the decoding network on the network side and the encoding network on the terminal side need to be trained jointly to reach a reasonable degree of matching. In the related art, channel information with different numbers of layers requires different AI network models for compression encoding, so multiple AI network models have to be trained to process the channel information, which correspondingly increases the power consumption on both the terminal side and the network side.
SUMMARY
Embodiments of this application provide a channel characteristic information transmission method and apparatus, a terminal, and a network-side device, which can solve the problem in the related art that channel information with different numbers of layers requires different AI network models for compression encoding.
According to a first aspect, a channel characteristic information transmission method is provided, including:
inputting, by a terminal, the channel information of each layer into a corresponding first artificial intelligence (AI) network model for processing, and obtaining the channel characteristic information output by the first AI network model, where one layer corresponds to one first AI network model;
reporting, by the terminal, the channel characteristic information corresponding to each layer to a network-side device.
According to a second aspect, a channel characteristic information transmission method is provided, including:
receiving, by a network-side device, the channel characteristic information corresponding to each layer reported by a terminal;
where one layer of the terminal corresponds to one first AI network model, and the first AI network model is used to process the channel information of the layer input by the terminal and output the channel characteristic information.
According to a third aspect, a channel characteristic information transmission apparatus is provided, including:
a processing module, configured to input the channel information of each layer into a corresponding first AI network model for processing, and obtain the channel characteristic information output by the first AI network model, where one layer corresponds to one first AI network model;
a reporting module, configured to report the channel characteristic information corresponding to each layer to a network-side device.
According to a fourth aspect, a channel characteristic information transmission apparatus is provided, including:
a receiving module, configured to receive the channel characteristic information corresponding to each layer reported by a terminal;
where one layer of the terminal corresponds to one first AI network model, and the first AI network model is used to process the channel information of the layer input by the terminal and output the channel characteristic information.
According to a fifth aspect, a terminal is provided, including a processor and a memory, where the memory stores a program or instructions executable on the processor, and when the program or instructions are executed by the processor, the steps of the channel characteristic information transmission method according to the first aspect are implemented.
According to a sixth aspect, a terminal is provided, including a processor and a communication interface, where the processor is configured to input the channel information of each layer into a corresponding first artificial intelligence (AI) network model for processing and obtain the channel characteristic information output by the first AI network model, where one layer corresponds to one first AI network model; and the communication interface is configured to report the channel characteristic information corresponding to each layer to a network-side device.
According to a seventh aspect, a network-side device is provided, including a processor and a memory, where the memory stores a program or instructions executable on the processor, and when the program or instructions are executed by the processor, the steps of the channel characteristic information transmission method according to the second aspect are implemented.
According to an eighth aspect, a network-side device is provided, including a processor and a communication interface, where the communication interface is configured to receive the channel characteristic information corresponding to each layer reported by a terminal; one layer of the terminal corresponds to one first AI network model, and the first AI network model is used to process the channel information of the layer input by the terminal and output the channel characteristic information.
According to a ninth aspect, a communication system is provided, including a terminal and a network-side device, where the terminal can be used to perform the steps of the channel characteristic information transmission method according to the first aspect, and the network-side device can be used to perform the steps of the channel characteristic information transmission method according to the second aspect.
According to a tenth aspect, a readable storage medium is provided, on which a program or instructions are stored, and when the program or instructions are executed by a processor, the steps of the channel characteristic information transmission method according to the first aspect are implemented, or the steps of the channel characteristic information transmission method according to the second aspect are implemented.
According to an eleventh aspect, a chip is provided, including a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to run a program or instructions to implement the channel characteristic information transmission method according to the first aspect, or to implement the channel characteristic information transmission method according to the second aspect.
According to a twelfth aspect, a computer program/program product is provided, which is stored in a storage medium and is executed by at least one processor to implement the steps of the channel characteristic information transmission method according to the first aspect, or to implement the steps of the channel characteristic information transmission method according to the second aspect.
In the embodiments of this application, the terminal can input the channel information corresponding to each layer into the corresponding first AI network model for processing, and report the channel characteristic information output by each layer's first AI network model to the network-side device. Compared with the related art, in which the network-side device needs to train different AI network models for different numbers of layers and the terminal needs to be configured with the AI network models corresponding to different numbers of layers, in this application each layer on the terminal side corresponds to one first AI network model, so no matter how many layers the terminal side has, each layer processes its channel information through the corresponding first AI network model. There is thus no need to train different AI network models for different numbers of layers, which can reduce the transmission overhead of the AI network model between the network-side device and the terminal, and can also reduce the power consumption of the terminal and the network-side device.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 is a block diagram of a wireless communication system to which an embodiment of this application is applicable;
Figure 2 is a flowchart of a channel characteristic information transmission method provided by an embodiment of this application;
Figure 3 is a flowchart of another channel characteristic information transmission method provided by an embodiment of this application;
Figure 4 is a structural diagram of a channel characteristic information transmission apparatus provided by an embodiment of this application;
Figure 5 is a structural diagram of another channel characteristic information transmission apparatus provided by an embodiment of this application;
Figure 6 is a structural diagram of a communication device provided by an embodiment of this application;
Figure 7 is a structural diagram of a terminal provided by an embodiment of this application;
Figure 8 is a structural diagram of a network-side device provided by an embodiment of this application.
DETAILED DESCRIPTION
The technical solutions in the embodiments of this application will be described clearly below with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application fall within the protection scope of this application.
The terms "first", "second", and the like in the specification and claims of this application are used to distinguish similar objects, not to describe a specific order or sequence. It should be understood that terms used in this way are interchangeable under appropriate circumstances, so that the embodiments of this application can be implemented in orders other than those illustrated or described here; moreover, the objects distinguished by "first" and "second" are usually of one class, and the number of objects is not limited; for example, there may be one or more first objects. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
It is worth pointing out that the techniques described in the embodiments of this application are not limited to Long Term Evolution (LTE)/LTE-Advanced (LTE-A) systems, and can also be used in other wireless communication systems, such as Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Frequency Division Multiple Access (FDMA), Orthogonal Frequency Division Multiple Access (OFDMA), Single-carrier Frequency Division Multiple Access (SC-FDMA), and other systems. The terms "system" and "network" in the embodiments of this application are often used interchangeably, and the described techniques can be used both for the systems and radio technologies mentioned above and for other systems and radio technologies. The following description describes a New Radio (NR) system for example purposes and uses NR terminology in most of the description, but these techniques can also be applied to applications other than NR system applications, such as 6th Generation (6G) communication systems.
Figure 1 shows a block diagram of a wireless communication system to which the embodiments of this application are applicable. The wireless communication system includes a terminal 11 and a network-side device 12. The terminal 11 may be a terminal-side device such as a mobile phone, tablet personal computer, laptop computer (also called a notebook computer), personal digital assistant (PDA), palmtop computer, netbook, ultra-mobile personal computer (UMPC), mobile Internet device (MID), augmented reality (AR)/virtual reality (VR) device, robot, wearable device, vehicle user equipment (VUE), pedestrian user equipment (PUE), smart home device (a home device with a wireless communication function, such as a refrigerator, television, washing machine, or furniture), game console, personal computer (PC), teller machine, or self-service machine; wearable devices include smart watches, smart bracelets, smart headphones, smart glasses, smart jewelry (smart bangles, smart wristbands, smart rings, smart necklaces, smart anklets, smart ankle chains, etc.), smart wristbands, smart clothing, and so on. It should be noted that the embodiments of this application do not limit the specific type of the terminal 11. The network-side device 12 may include an access network device or a core network device, where the access network device may also be called a radio access network device, a radio access network (RAN), a radio access network function, or a radio access network unit. The access network device may include a base station, a wireless local area network (WLAN) access point, a WiFi node, or the like; the base station may be called a Node B, an evolved Node B (eNB), an access point, a base transceiver station (BTS), a radio base station, a radio transceiver, a basic service set (BSS), an extended service set (ESS), a home Node B, a home evolved Node B, a transmitting receiving point (TRP), or some other suitable term in the field. As long as the same technical effect is achieved, the base station is not limited to a specific technical term. It should be noted that in the embodiments of this application, only the base station in an NR system is taken as an example for introduction, and the specific type of the base station is not limited.
To better understand the technical solutions of this application, related concepts that may be involved in the embodiments of this application are explained below.
As is known from information theory, accurate channel state information (CSI) is crucial to channel capacity. Especially for multi-antenna systems, the transmitting end can optimize signal transmission according to the CSI so that the signal better matches the channel state. For example, the channel quality indicator (CQI) can be used to select a suitable modulation and coding scheme (MCS) to achieve link adaptation, and the precoding matrix indicator (PMI) can be used to implement eigen beamforming so as to maximize the received signal strength, or to suppress interference (such as inter-cell interference or multi-user interference). Therefore, ever since multi-antenna technology (multi-input multi-output, MIMO) was proposed, CSI acquisition has been a research hotspot.
Typically, a network-side device (for example, a base station) sends a CSI reference signal (CSI-RS) on certain time-frequency resources of a certain slot; the terminal performs channel estimation according to the CSI-RS, computes the channel information on this slot, and feeds the PMI back to the base station through a codebook. The base station combines the codebook information fed back by the terminal into channel information, and before the next CSI report, the base station uses this for data precoding and multi-user scheduling.
To further reduce the CSI feedback overhead, the terminal can change reporting a PMI per subband into reporting a PMI per delay. Since the channel is more concentrated in the delay domain, the PMIs of fewer delays can approximately represent the PMIs of all subbands; that is, the delay-domain information is compressed before reporting.
Likewise, to reduce overhead, the base station can precode the CSI-RS in advance and send the encoded CSI-RS to the terminal; what the terminal sees is the channel corresponding to the encoded CSI-RS, and the terminal only needs to select several ports with larger strength among the ports indicated by the network side and report the coefficients corresponding to those ports.
Further, to better compress channel information, neural network or machine learning methods can be used. Specifically, the terminal compresses and encodes the channel information through an AI network model, and the base station decodes the compressed content through an AI network model to recover the channel information. In this case, the base station's AI network model for decoding and the terminal's AI network model for encoding need to be trained jointly to reach a reasonable degree of matching. The terminal's encoding AI network model and the base station's decoding AI network model form a joint neural network model, which is trained jointly by the network side; after training is completed, the base station sends the encoding AI network model to the terminal.
The terminal estimates the CSI-RS, computes the channel information, passes the computed channel information or the originally estimated channel information through the AI network model to obtain an encoding result, and sends the encoding result to the base station; the base station receives the encoded result, inputs it into the AI network model for decoding, and recovers the channel information.
For a high-rank channel, the channel energy itself is not concentrated; precoding can split the channel into several independent channels that do not interfere with each other, which is well suited to parallel data transmission and improves throughput. In this case the terminal needs to feed back the channel information, or PMI information, of multiple layers. Typically, the terminal performs singular value decomposition (SVD) on the channel matrix and takes the first few columns of the V matrix as the PMI information to be reported, arranged by eigenvalue (also called singular value) from large to small, selecting the first few columns with larger eigenvalues. The eigenvalues decrease in order for layer 1, layer 2, and so on, and the proportion of the whole channel represented by each layer's channel information decreases accordingly.
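The per-layer PMI extraction described above can be sketched in a few lines of NumPy; the array shapes (4 receive antennas, 32 CSI-RS ports) and the function name are illustrative assumptions, not part of the patent:

```python
import numpy as np

def per_layer_precoders(H, rank):
    # H: estimated channel matrix for one subband, shape (n_rx, n_tx)
    U, s, Vh = np.linalg.svd(H)          # singular values come sorted, largest first
    V = Vh.conj().T                      # columns of V are the right singular vectors
    # layer i uses column i of V; layer 1 carries the largest singular value,
    # so its share of the channel energy decreases with the layer index
    return [V[:, i] for i in range(rank)], s[:rank]

rng = np.random.default_rng(0)
H = rng.standard_normal((4, 32)) + 1j * rng.standard_normal((4, 32))
layers, sv = per_layer_precoders(H, rank=2)
assert len(layers) == 2 and layers[0].shape == (32,)
assert sv[0] >= sv[1]                    # singular values are non-increasing
```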
The channel characteristic information transmission method provided by the embodiments of this application is described in detail below through some embodiments and their application scenarios with reference to the accompanying drawings.
Referring to Figure 2, Figure 2 is a flowchart of a channel characteristic information transmission method provided by an embodiment of this application; the method is applied to a terminal. As shown in Figure 2, the method includes the following steps:
Step 201: The terminal inputs the channel information of each layer into the corresponding first AI network model for processing, and obtains the channel characteristic information output by the first AI network model, where one layer corresponds to one first AI network model.
Optionally, the terminal may detect a CSI reference signal (CSI-RS) or tracking reference signal (TRS) at a position specified by the network-side device and perform channel estimation to obtain the original channel information, i.e. one channel matrix per subband. The terminal performs SVD on the original channel information and obtains a precoding matrix on each subband; the precoding matrix includes N layers. The terminal inputs the precoding matrix (that is, the channel information) of each layer into the first AI network model: the precoding matrices of one layer on all subbands are input into the first AI network model together, or the precoding matrices are preprocessed and then input into the first AI network model. Further, the terminal processes the input channel information (such as the channel matrix of each subband, or the precoding matrix of each subband) through the first AI network model, for example by performing channel information encoding, and obtains the channel characteristic information output by the first AI network model. In some embodiments, the channel characteristic information may also be called bit information, a bit sequence, and so on.
It should be noted that the channel information encoding mentioned in the embodiments of this application is different from channel coding.
Optionally, the channel information input into the first AI network model mentioned in the embodiments of this application is precoding information, such as a precoding matrix, PMI information, or a processed precoding matrix.
Step 202: The terminal reports the channel characteristic information corresponding to each layer to the network-side device.
It can be understood that after the terminal obtains the channel characteristic information output by each layer's first AI network model, the terminal reports the channel characteristic information corresponding to each layer to the network-side device. Optionally, the terminal may report the channel characteristic information corresponding to each layer separately, or may combine them into one report.
In the embodiments of this application, the terminal can input the channel information corresponding to each layer into the corresponding first AI network model for processing, and report the channel characteristic information output by each layer's first AI network model to the network-side device. In this application, each layer on the terminal side corresponds to one first AI network model, so no matter how many layers the terminal side has, each layer processes its channel information through the corresponding first AI network model. There is thus no need to train different AI network models for different numbers of layers, which can reduce the transmission overhead of the AI network model between the network-side device and the terminal, reduce the power consumption of the terminal and the network-side device, and also increase reporting flexibility.
Optionally, every layer corresponds to the same first AI network model. That is, no matter how many layers the terminal has, the terminal may need only one first AI network model: the channel information of each layer is input into the same first AI network model to obtain the channel characteristic information of the corresponding layer, and the terminal directly reports the channel characteristic information of each layer.
For example, the rank on the terminal side is 2: the channel information of layer 1 passes through first AI network model 1 to obtain the output first channel characteristic information, the channel information of layer 2 passes through the same first AI network model 1 to obtain the output second channel characteristic information, and the terminal reports the first channel characteristic information and the second channel characteristic information to the network-side device.
In this way, no matter how many layers the terminal has, the network-side device only needs to train one first AI network model and deliver it to the terminal, which effectively reduces the transmission overhead of the AI network model between the network-side device and the terminal, and can also effectively reduce the power consumption of the terminal.
Optionally, the first AI network model corresponding to each layer is different, and the length of the channel characteristic information output by each first AI network model decreases progressively in layer order. In this case, each layer on the terminal side corresponds to its own first AI network model; the network-side device trains the first AI network model of each layer separately and sends the trained first AI network models to the terminal, and the terminal processes the channel information of different layers using each layer's own first AI network model. The length of the channel characteristic information output by the first AI network models may decrease progressively in layer order; for example, the length of the channel characteristic information output by the first AI network model corresponding to layer 1 is 200 bits, that corresponding to layer 2 is 180 bits, that corresponding to layer 3 is 160 bits, and so on. In this way, the length of the channel characteristic information output by each layer's first AI network model is constrained, so as to reduce the transmission overhead of the terminal.
Optionally, before the terminal inputs the channel information of each layer into the corresponding first AI network model for processing, the method may further include:
the terminal determines the number of layers corresponding to the channel information based on the rank of the channel;
the terminal obtains the ratio of the target parameter of a first target layer to the sum of the target parameters of a second target layer, and determines the first AI network model corresponding to the first target layer based on the ratio range in which the ratio falls, where the first target layer is any one of the layers corresponding to the channel information, and the second target layer is all layers corresponding to the terminal or all layers reported by the terminal;
here, different ratio ranges correspond to different first AI network models, and the target parameter includes any one of the following: eigenvalue, channel quality indicator (CQI), channel capacity.
Optionally, the terminal may determine the rank of the terminal's channel according to the CSI reference signal (CSI-RS) channel estimation result, and from the rank the number of layers corresponding to the terminal's channel information can be determined. For example, if rank = 2, the channel information of the terminal corresponds to 2 layers; if rank = 3, it corresponds to 3 layers.
In the embodiments of this application, the first AI network model of a given layer of the terminal may be determined based on the ratio of that layer's target parameter to the sum of the target parameters of all layers, or based on the ratio of that layer's target parameter to the sum of the target parameters of all reported layers.
Optionally, the terminal divides the first AI network models corresponding to different ratio ranges in advance. For example, the ratio range 70% to 100% corresponds to first AI network model 001, the range 40% to 70% corresponds to first AI network model 002, and the range below 40% corresponds to first AI network model 003. If the terminal selects rank 1 and, taking the eigenvalue as an example, computes that the eigenvalue proportion of layer 1 is 80%, then layer 1 is determined to correspond to first AI network model 001. If the terminal selects rank 2 and computes that the eigenvalue of layer 1 accounts for 75% of the sum of the eigenvalues of all layers and the eigenvalue of layer 2 accounts for 20%, then layer 1 corresponds to first AI network model 001 and layer 2 corresponds to first AI network model 003. Further, the terminal processes the input channel information based on the first AI network model determined for each layer.
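The ratio-range rule in the example above (70% to 100%, 40% to 70%, below 40%) can be sketched as a small selection function; the range boundaries and model names mirror the illustrative example and are not normative:

```python
def select_model(eigenvalues, layer_idx):
    # choose which first AI network model a layer uses from its share of the
    # eigenvalue sum (the target parameter could equally be CQI or capacity)
    ratio = eigenvalues[layer_idx] / sum(eigenvalues)
    if ratio >= 0.7:
        return "model_001"
    if ratio >= 0.4:
        return "model_002"
    return "model_003"

# rank 2 example: layer 1 holds 75% of the eigenvalue sum, layer 2 holds 25%
assert select_model([0.75, 0.25], 0) == "model_001"
assert select_model([0.75, 0.25], 1) == "model_003"
```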
In this way, the terminal determines the first AI network model corresponding to a layer based on the layer's eigenvalue, CQI, or channel capacity, which increases the terminal's flexibility in processing channel information.
Optionally, the first AI network model corresponding to each layer of the terminal is different, and the input of a target first AI network model includes the channel information of a third target layer; the layers corresponding to the terminal are ordered based on a target parameter, the third target layer is any one of the ordered layers corresponding to the terminal, the target first AI network model is the first AI network model corresponding to the third target layer, and the target parameter includes any one of the following: eigenvalue, CQI, channel capacity.
For example, if the third target layer is layer 2, the input of the first AI network model corresponding to layer 2 includes the channel information of layer 2; if the third target layer is layer 3, the input of the first AI network model corresponding to layer 3 includes the channel information of layer 3.
Optionally, the third target layer is any one of the ordered layers corresponding to the terminal other than the first layer, and the input of the target first AI network model further includes any one of the following:
the output of the first AI network model corresponding to the layer preceding the third target layer;
the output of the first AI network model corresponding to the first layer;
the outputs of the first AI network models corresponding to all layers before the third target layer;
the channel information corresponding to the layer preceding the third target layer;
the channel information corresponding to all layers before the third target layer.
For example, if the third target layer is layer 3, the input of the first AI network model corresponding to layer 3 may take any of the following forms:
Mode 1: the channel information of layer 3 plus the channel characteristic information output by the first AI network model corresponding to layer 2;
Mode 2: the channel information of layer 3 plus the channel characteristic information output by the first AI network model corresponding to layer 1;
Mode 3: the channel information of layer 3 plus the channel characteristic information output by the first AI network models corresponding to layer 1 and layer 2;
Mode 4: the channel information of layer 3 plus the channel information of layer 2;
Mode 5: the channel information of layer 3 plus the channel information of layer 1 and layer 2.
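As a minimal sketch of mode 1 above, the input of a later layer's first AI network model can be assembled by concatenating that layer's own channel information with the previous layer's encoder output; all sizes here are illustrative assumptions:

```python
import numpy as np

def model_input_mode1(layer_info, prev_layer_bits):
    # mode 1: the target layer's channel information plus the channel
    # characteristic information output for the preceding layer
    return np.concatenate([layer_info, prev_layer_bits])

layer3_info = np.zeros(64)     # assumed flattened channel information of layer 3
layer2_bits = np.ones(100)     # assumed encoder output of layer 2
x = model_input_mode1(layer3_info, layer2_bits)
assert x.shape == (164,)
```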
In the embodiments of this application, the terminal may determine the input of the first AI network model corresponding to a given layer based on the different modes above, so that the input of the first AI network model can differ from layer to layer, which improves the terminal's flexibility in processing channel information.
Optionally, the terminal inputting the channel information of each layer into the corresponding first AI network model for processing includes:
the terminal preprocesses the channel information of each layer and then inputs it into the corresponding first AI network model for processing.
That is to say, before inputting the channel information of each layer into the corresponding first AI network model, the terminal may first preprocess the channel information. For example, the preprocessing may be orthogonal basis projection, oversampling, and so on. It should be noted that oversampling is an extension of orthogonal basis projection. Taking the precoding matrix as an example, if the number of CSI-RS ports is 32, the precoding matrix of one layer may be a 32x1 matrix. Projection means generating 32 orthogonal DFT vectors, each of length 32, projecting this precoding matrix onto the 32 orthogonal DFT vectors, selecting several with larger coefficient amplitudes, and then using the coefficients and/or the corresponding DFT vectors as the preprocessing result. Oversampling happens at projection time: taking 4x oversampling as an example, 4 groups of 32 orthogonal DFT vectors are generated, where the 32 DFT vectors within a group are orthogonal but the groups are not orthogonal to one another; the group closest to the precoding matrix is then selected among the 4 groups, and projection proceeds as above.
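The orthogonal DFT basis projection described above can be sketched as follows for a 32-port precoding vector; the top-k selection size is an assumption, and oversampling (choosing the best of several rotated DFT groups before projecting) is noted in the comments but not implemented:

```python
import numpy as np

def dft_project(p, k):
    # project one layer's n x 1 precoding vector onto n orthonormal DFT vectors
    # and keep the k coefficients with the largest amplitude; with 4x
    # oversampling one would first pick the best of 4 rotated DFT groups
    n = p.shape[0]
    F = np.fft.fft(np.eye(n)) / np.sqrt(n)   # n orthonormal DFT vectors (columns)
    coeffs = F.conj().T @ p                  # projection coefficients
    idx = np.argsort(np.abs(coeffs))[::-1][:k]
    return idx, coeffs[idx]                  # selected beams and their coefficients

p = np.fft.fft(np.eye(32))[:, 3] / np.sqrt(32)   # a vector aligned with beam 3
idx, c = dft_project(p, k=4)
assert idx[0] == 3                               # the dominant beam is recovered
```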
Optionally, the terminal preprocessing the channel information of each layer and then inputting it into the corresponding first AI network model includes any one of the following:
the terminal preprocesses the channel information of each layer through a second AI network model and then inputs it into the corresponding first AI network model;
the terminal preprocesses the channel information of a target layer through a target second AI network model and then inputs the output of the target second AI network model into the first AI network model corresponding to the target layer, where the target layer is any layer corresponding to the terminal, and each layer corresponds to one target second AI network model.
In the embodiments of this application, the terminal may also preprocess the channel information through a second AI network model.
For example, the terminal preprocesses the channel information of every layer through the same second AI network model, and then inputs the second AI network model's output for each layer into that layer's corresponding first AI network model. In this way, the network-side device may train only one second AI network model, which reduces the power consumption of the network-side device and the terminal.
Alternatively, the network-side device may train one second AI network model for each layer; each layer then preprocesses its channel information through the corresponding second AI network model, and the output of the second AI network model serves as the input of the corresponding layer's first AI network model. In this way, the channel information can be preprocessed through different second AI network models, which improves the terminal's flexibility in preprocessing the channel information of each layer.
In the embodiments of this application, the terminal reporting the channel characteristic information corresponding to each layer to the network-side device includes:
the terminal post-processes the channel characteristic information corresponding to a target layer, and reports the post-processed channel characteristic information to the network-side device, where the target layer is any layer corresponding to the terminal.
It should be noted that the terminal may post-process the channel characteristic information corresponding to every layer before reporting it to the network-side device, or may post-process only the channel characteristic information corresponding to one or more specified layers and then report the post-processed channel characteristic information to the network-side device.
Optionally, the post-processing may be entropy coding, or truncating the channel characteristic information output by the first AI network model to a target length, and so on.
Optionally, the terminal post-processing the channel characteristic information corresponding to the target layer and reporting the post-processed channel characteristic information to the network-side device includes:
the terminal post-processes the channel characteristic information corresponding to the target layer to obtain channel characteristic information of a target length, where the target length is smaller than the length of the channel characteristic information before post-processing;
the terminal reports the target length and the channel characteristic information of the target length to the network-side device.
For example, with rank = 2, the channel information of layer 1 is processed by the corresponding first AI network model to obtain channel characteristic information 1 of length 100 bits, and the channel information of layer 2 is processed by the corresponding first AI network model to obtain channel characteristic information 2 of length 100 bits. The terminal may leave channel characteristic information 1 of layer 1 unprocessed and post-process channel characteristic information 2 of layer 2 to obtain 80 bits of channel characteristic information; the terminal may then report the following to the network-side device: the 100-bit channel characteristic information 1, the 80-bit channel characteristic information 2, and the length of channel characteristic information 2 (that is, 80 bits). This enables the network-side device, based on the reported information, to decode the channel characteristic information through a third AI network model matched with the first AI network model, so as to obtain the recovered channel information.
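The worked example above (rank 2, 100-bit encoder outputs, layer 2 truncated to 80 bits) can be sketched as a small post-processing helper; truncation stands in for the more general post-processing (for example entropy coding), and the function name is an assumption:

```python
def postprocess(bits, target_len=None):
    # truncate the encoder output to the target length and report that length
    # alongside the bits; with no target length, the bits pass through unchanged
    if target_len is None or target_len >= len(bits):
        return bits, len(bits)
    return bits[:target_len], target_len

layer1 = [1] * 100
layer2 = [0] * 100
report1, len1 = postprocess(layer1)           # no post-processing for layer 1
report2, len2 = postprocess(layer2, 80)       # truncate layer 2 to 80 bits
assert (len1, len2) == (100, 80)
assert len(report2) == 80
```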
Optionally, the post-processing manner may be indicated by the network-side device, or may be selected by the terminal itself.
In the embodiments of this application, when the channel characteristic information is reported through CSI, the target length is included in the first part of the CSI.
For example, the terminal may report the channel characteristic information through one CSI report. The CSI includes a first part (CSI Part 1) and a second part (CSI Part 2), where the first part is the fixed-length part of the CSI and the second part is the variable-length part of the CSI. The terminal may carry the channel characteristic information in CSI Part 1 and also carry the target length of the target layer's channel characteristic information in CSI Part 1, so that the network-side device can obtain the target layer's channel characteristic information and its length directly from CSI Part 1 and thereby decode the channel characteristic information.
Optionally, when the channel characteristic information is reported through CSI, the terminal reporting the channel characteristic information corresponding to each layer to the network-side device includes any one of the following:
when the layers corresponding to the terminal are ordered based on a target parameter, the terminal reports the channel characteristic information corresponding to the first layer to the network-side device through the first part of the CSI, and reports the channel characteristic information corresponding to the layers other than the first layer to the network-side device through the second part of the CSI, where the target parameter includes any one of the following: eigenvalue, CQI, channel capacity;
the terminal reports the channel characteristic information corresponding to each layer to the network-side device through the second part of the CSI;
the terminal reports the channel characteristic information corresponding to each layer to the network-side device through the corresponding block in the second part of the CSI, one layer corresponding to one block.
For example, the terminal reports the channel characteristic information corresponding to the first layer through CSI Part 1 and reports the channel characteristic information corresponding to the other layers through CSI Part 2; or the terminal reports the channel characteristic information of every layer through CSI Part 2; or CSI Part 2 may be divided into blocks, and the terminal reports the channel characteristic information of each layer through one corresponding block in CSI Part 2. This makes the terminal's reporting of channel characteristic information more flexible.
Optionally, the terminal reporting the channel characteristic information corresponding to each layer to the network-side device includes:
the terminal reports the channel characteristic information corresponding to each layer to the network-side device, and discards channel characteristic information in reverse layer order.
In the embodiments of this application, while reporting the channel characteristic information corresponding to each layer to the network-side device, the terminal may also discard channel characteristic information. For example, if resources are insufficient, the terminal may discard channel characteristic information from the last layer forward in layer order, so as to ensure that the channel characteristic information of the earlier layers can be transmitted to the network-side device.
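The reverse-order discarding described above can be sketched as follows; the bit budget and report sizes are illustrative assumptions:

```python
def fit_reports(layer_bits, budget):
    # if the uplink resources can carry only `budget` bits, drop whole layers'
    # reports from the last layer backwards so the earlier layers survive
    kept = list(layer_bits)
    while kept and sum(len(b) for b in kept) > budget:
        kept.pop()                      # discard the highest-indexed layer first
    return kept

reports = [[1] * 100, [1] * 90, [1] * 80]   # layers 1..3
kept = fit_reports(reports, budget=200)
assert len(kept) == 2                       # layer 3 dropped, layers 1 and 2 kept
```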
Optionally, the method further includes:
the terminal determines the rank of the channel according to the CSI reference signal channel estimation result;
the terminal reporting the channel characteristic information corresponding to each layer to the network-side device includes:
the terminal reports a rank indicator (RI) and the channel characteristic information corresponding to each layer to the network-side device.
In the embodiments of this application, the terminal determines the rank of the channel based on the CSI-RS channel estimation result and can thereby determine the number of layers corresponding to the terminal. After the terminal inputs the channel information of each layer into the corresponding first AI network model and obtains the channel characteristic information output by the first AI network model, the terminal reports the RI and the channel characteristic information corresponding to each layer to the network-side device, so that the network-side device can recover the channel information based on the RI and the channel characteristic information.
Referring to Figure 3, Figure 3 is a flowchart of another channel characteristic information transmission method provided by an embodiment of this application; the method is applied to a network-side device. As shown in Figure 3, the method includes the following steps:
Step 301: The network-side device receives the channel characteristic information corresponding to each layer reported by the terminal.
Here, one layer of the terminal corresponds to one first AI network model, and the first AI network model is used to process the channel information of the layer input by the terminal and output the channel characteristic information.
It should be noted that the network-side device includes a third AI network model matched with the first AI network model. The first AI network model and the third AI network model are trained jointly by the network-side device, and the network-side device sends the trained first AI network model to the terminal. The terminal encodes the input coefficients through the first AI network model and outputs the channel characteristic information; the terminal reports the channel characteristic information to the network-side device; the network-side device inputs the channel characteristic information into the matched third AI network model, which decodes the channel characteristic information to obtain the channel information output by the third AI network model, so that the network-side device recovers the channel information through the third AI network model. In this way, the terminal and the network-side device can encode and decode the channel information through matched AI network models.
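A minimal sketch of the matched encoder/decoder pair described above, with linear maps standing in for the trained first and third AI network models; the dimensions (a 64-entry channel vector compressed to 16 features) and the pseudo-inverse "joint training" are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
W_enc = rng.standard_normal((16, 64))     # terminal-side encoder (first model)
W_dec = np.linalg.pinv(W_enc)             # network-side decoder (third model);
                                          # the pseudo-inverse plays the role of
                                          # the jointly trained matching decoder

h = W_dec @ rng.standard_normal(16)       # channel info lying in the decodable subspace
features = W_enc @ h                      # terminal: encode and report
h_hat = W_dec @ features                  # network side: decode / recover
assert np.allclose(h, h_hat, atol=1e-8)
```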
In the embodiments of this application, the terminal inputs the channel information corresponding to each layer into the corresponding first AI network model for processing, and reports the channel characteristic information output by each layer's first AI network model to the network-side device. Compared with the related art, in which the network-side device needs to train different AI network models for different numbers of layers of the terminal, in this application the network-side device can train one first AI network model for each layer on the terminal side, so no matter how many layers the terminal side has, each layer processes its channel information through the corresponding first AI network model. There is thus no need to train different AI network models for different numbers of layers, which effectively saves the power consumption of the network-side device and can also reduce the transmission overhead of the AI network model between the network-side device and the terminal.
Optionally, every layer corresponds to the same first AI network model. That is, no matter how many layers the terminal has, the terminal may need only one first AI network model, and the channel information of each layer is input into the same first AI network model; the network-side device then only needs to train one first AI network model and transmit it to the terminal, which effectively saves the power consumption and transmission overhead of the network-side device.
Optionally, the first AI network model corresponding to each layer is different, and the length of the channel characteristic information output by each first AI network model decreases progressively in layer order. In this case, each layer on the terminal side corresponds to its own first AI network model; the network-side device trains the first AI network model of each layer separately, sends the trained first AI network models to the terminal, and, by constraining the output length of each layer's first AI network model, can reduce the transmission overhead of the terminal.
Optionally, when the channel characteristic information is reported through CSI, the network-side device receiving the channel characteristic information corresponding to each layer reported by the terminal includes any one of the following:
when the layers corresponding to the terminal are ordered based on a target parameter, the network-side device receives the channel characteristic information corresponding to the first layer reported by the terminal through the first part of the CSI, and the channel characteristic information corresponding to the layers other than the first layer reported through the second part of the CSI, where the target parameter includes any one of the following: eigenvalue, CQI, channel capacity;
the network-side device receives the channel characteristic information corresponding to each layer reported by the terminal through the second part of the CSI;
the network-side device receives the channel characteristic information corresponding to each layer reported by the terminal through the corresponding block in the second part of the CSI, one layer corresponding to one block.
This makes the terminal's reporting of channel characteristic information more flexible.
Optionally, the network-side device receiving the channel characteristic information corresponding to each layer reported by the terminal includes:
the network-side device receives the RI and the channel characteristic information corresponding to each layer reported by the terminal.
In the embodiments of this application, the terminal reports the RI and the channel characteristic information corresponding to each layer to the network-side device, so that the network-side device can recover the channel information based on the RI and the channel characteristic information.
It should be noted that the channel characteristic information transmission method applied to the network-side device provided by the embodiments of this application corresponds to the above method applied to the terminal side; for the related concepts and specific implementation processes involved, reference may be made to the description of the embodiment of Figure 2 above, and to avoid repetition, details are not repeated in this embodiment.
The channel characteristic information transmission method provided by the embodiments of this application may be executed by a channel characteristic information transmission apparatus. In the embodiments of this application, the channel characteristic information transmission apparatus executing the channel characteristic information transmission method is taken as an example to describe the channel characteristic information transmission apparatus provided by the embodiments of this application.
Referring to Figure 4, Figure 4 is a structural diagram of a channel characteristic information transmission apparatus provided by an embodiment of this application. As shown in Figure 4, the channel characteristic information transmission apparatus 400 includes:
a processing module 401, configured to input the channel information of each layer into the corresponding first artificial intelligence (AI) network model for processing, and obtain the channel characteristic information output by the first AI network model, where one layer corresponds to one first AI network model;
a reporting module 402, configured to report the channel characteristic information corresponding to each layer to the network-side device.
Optionally, every layer corresponds to the same first AI network model.
Optionally, the first AI network model corresponding to each layer is different, and the length of the channel characteristic information output by each first AI network model decreases progressively in layer order.
Optionally, the apparatus further includes a determining module, configured to:
determine the number of layers corresponding to the channel information based on the rank of the channel;
obtain the ratio of the target parameter of a first target layer to the sum of the target parameters of a second target layer, and determine the first AI network model corresponding to the first target layer based on the ratio range in which the ratio falls, where the first target layer is any one of the layers corresponding to the channel information, and the second target layer is all layers corresponding to the apparatus or all layers reported by the apparatus;
here, different ratio ranges correspond to different first AI network models, and the target parameter includes any one of the following: eigenvalue, CQI, channel capacity.
Optionally, the first AI network model corresponding to each layer is different, and the input of a target first AI network model includes the channel information of a third target layer;
the layers corresponding to the apparatus are ordered based on a target parameter, the third target layer is any one of the ordered layers corresponding to the apparatus, the target first AI network model is the first AI network model corresponding to the third target layer, and the target parameter includes any one of the following: eigenvalue, CQI, channel capacity.
Optionally, the third target layer is any one of the ordered layers corresponding to the apparatus other than the first layer, and the input of the target first AI network model further includes any one of the following:
the output of the first AI network model corresponding to the layer preceding the third target layer;
the output of the first AI network model corresponding to the first layer;
the outputs of the first AI network models corresponding to all layers before the third target layer;
the channel information corresponding to the layer preceding the third target layer;
the channel information corresponding to all layers before the third target layer.
Optionally, the processing module 401 is further configured to:
preprocess the channel information of each layer and then input it into the corresponding first AI network model for processing.
Optionally, the processing module 401 is further configured to perform any one of the following:
preprocess the channel information of each layer through a second AI network model and then input it into the corresponding first AI network model;
preprocess the channel information of a target layer through a target second AI network model, and then input the output of the target second AI network model into the first AI network model corresponding to the target layer, where the target layer is any layer corresponding to the apparatus, and each layer corresponds to one target second AI network model.
Optionally, the reporting module 402 is further configured to:
post-process the channel characteristic information corresponding to a target layer, and report the post-processed channel characteristic information to the network-side device;
where the target layer is at least one layer corresponding to the apparatus.
Optionally, the reporting module 402 is further configured to:
post-process the channel characteristic information corresponding to the target layer to obtain channel characteristic information of a target length, where the target length is smaller than the length of the channel characteristic information before post-processing;
report the target length and the channel characteristic information of the target length to the network-side device.
Optionally, when the channel characteristic information is reported through channel state information CSI, the target length is included in the first part of the CSI.
Optionally, when the channel characteristic information is reported through CSI, the reporting module 402 is further configured to perform any one of the following:
when the layers corresponding to the apparatus are ordered based on a target parameter, report the channel characteristic information corresponding to the first layer to the network-side device through the first part of the CSI, and report the channel characteristic information corresponding to the layers other than the first layer to the network-side device through the second part of the CSI, where the target parameter includes any one of the following: eigenvalue, CQI, channel capacity;
report the channel characteristic information corresponding to each layer to the network-side device through the second part of the CSI;
report the channel characteristic information corresponding to each layer to the network-side device through the corresponding block in the second part of the CSI, one layer corresponding to one block.
Optionally, the reporting module 402 is further configured to:
report the channel characteristic information corresponding to each layer to the network-side device, and discard the channel characteristic information in reverse layer order.
Optionally, the apparatus further includes:
a determining module, configured to determine the rank of the channel according to the CSI reference signal channel estimation result;
the reporting module 402 is further configured to:
report the rank indicator RI and the channel characteristic information corresponding to each layer to the network-side device.
Optionally, the channel information is precoding information.
In the embodiments of this application, the apparatus can input the channel information corresponding to each layer into the corresponding first AI network model for processing, and report the channel characteristic information output by each layer's first AI network model to the network-side device. Compared with the related art, in which the network-side device needs to train different AI network models for different numbers of layers, in this application each layer of the apparatus corresponds to one first AI network model, so there is no need to train different AI network models for different numbers of layers, which can reduce the transmission overhead of the AI network model between the network-side device and the apparatus and can also reduce the power consumption of the apparatus.
The channel characteristic information transmission apparatus 400 in the embodiments of this application may be an electronic device, for example an electronic device with an operating system, or may be a component in an electronic device, for example an integrated circuit or a chip. The electronic device may be a terminal, or may be another device other than a terminal. Exemplarily, the terminal may include but is not limited to the types of terminal 11 listed above, and the other device may be a server, network attached storage (NAS), or the like, which is not specifically limited in the embodiments of this application.
The channel characteristic information transmission apparatus 400 provided by the embodiments of this application can implement each process implemented by the terminal in the method embodiment of Figure 2 and achieve the same technical effect; to avoid repetition, details are not repeated here.
Referring to Figure 5, Figure 5 is a structural diagram of another channel characteristic information transmission apparatus provided by an embodiment of this application. As shown in Figure 5, the channel characteristic information transmission apparatus 500 includes:
a receiving module 501, configured to receive the channel characteristic information corresponding to each layer reported by the terminal;
where one layer of the terminal corresponds to one first AI network model, and the first AI network model is used to process the channel information of the layer input by the terminal and output the channel characteristic information.
Optionally, every layer corresponds to the same first AI network model.
Optionally, the first AI network model corresponding to each layer is different, and the length of the channel characteristic information output by each first AI network model decreases progressively in layer order.
Optionally, when the channel characteristic information is reported through CSI, the receiving module 501 is further configured to perform any one of the following:
when the layers corresponding to the terminal are ordered based on a target parameter, receive the channel characteristic information corresponding to the first layer reported by the terminal through the first part of the CSI, and the channel characteristic information corresponding to the layers other than the first layer reported through the second part of the CSI, where the target parameter includes any one of the following: eigenvalue, CQI, channel capacity;
receive the channel characteristic information corresponding to each layer reported by the terminal through the second part of the CSI;
receive the channel characteristic information corresponding to each layer reported by the terminal through the corresponding block in the second part of the CSI, one layer corresponding to one block.
Optionally, the receiving module 501 is further configured to:
receive the rank indicator RI and the channel characteristic information corresponding to each layer reported by the terminal.
In the embodiments of this application, the apparatus can train one first AI network model for each layer on the terminal side, so no matter how many layers the terminal side has, each layer processes its channel information through the corresponding first AI network model. There is thus no need to train different AI network models for different numbers of layers, which can save the power consumption of the apparatus and can also reduce the transmission overhead of the AI network model between the apparatus and the terminal.
The channel characteristic information transmission apparatus 500 provided by the embodiments of this application can implement each process implemented by the network-side device in the method embodiment of Figure 3 and achieve the same technical effect; to avoid repetition, details are not repeated here.
Optionally, as shown in Figure 6, an embodiment of this application further provides a communication device 600, including a processor 601 and a memory 602, where the memory 602 stores a program or instructions executable on the processor 601. For example, when the communication device 600 is a terminal, the program or instructions, when executed by the processor 601, implement each step of the method embodiment described in Figure 2 above and can achieve the same technical effect. When the communication device 600 is a network-side device, the program or instructions, when executed by the processor 601, implement each step of the method embodiment described in Figure 3 above and can achieve the same technical effect; to avoid repetition, details are not repeated here.
An embodiment of this application further provides a terminal, including a processor and a communication interface, where the processor is configured to input the channel information of each layer into the corresponding first artificial intelligence (AI) network model for processing and obtain the channel characteristic information output by the first AI network model, where one layer corresponds to one first AI network model; and the communication interface is configured to report the channel characteristic information corresponding to each layer to the network-side device. This terminal embodiment corresponds to the above terminal-side method embodiment; each implementation process and implementation manner of the above method embodiment can be applied to this terminal embodiment and can achieve the same technical effect. Specifically, Figure 7 is a schematic diagram of the hardware structure of a terminal implementing an embodiment of this application.
The terminal 700 includes but is not limited to at least some of the following components: a radio frequency unit 701, a network module 702, an audio output unit 703, an input unit 704, a sensor 705, a display unit 706, a user input unit 707, an interface unit 708, a memory 709, and a processor 710.
Those skilled in the art will understand that the terminal 700 may further include a power supply (such as a battery) that supplies power to each component; the power supply may be logically connected to the processor 710 through a power management system, thereby implementing functions such as managing charging, discharging, and power consumption through the power management system. The terminal structure shown in Figure 7 does not constitute a limitation on the terminal; the terminal may include more or fewer components than shown, or combine certain components, or arrange components differently, and details are not repeated here.
It should be understood that in the embodiments of this application, the input unit 704 may include a graphics processing unit (GPU) 7041 and a microphone 7042; the graphics processor 7041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in video capture mode or image capture mode. The display unit 706 may include a display panel 7061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 707 includes at least one of a touch panel 7071 and other input devices 7072. The touch panel 7071, also called a touch screen, may include two parts: a touch detection device and a touch controller. The other input devices 7072 may include but are not limited to a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not repeated here.
In the embodiments of this application, after receiving downlink data from the network-side device, the radio frequency unit 701 may transmit it to the processor 710 for processing; in addition, the radio frequency unit 701 may send uplink data to the network-side device. Generally, the radio frequency unit 701 includes but is not limited to an antenna, an amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like.
The memory 709 may be used to store software programs or instructions and various data. The memory 709 may mainly include a first storage area storing programs or instructions and a second storage area storing data, where the first storage area may store an operating system, an application program or instructions required by at least one function (such as a sound playback function and an image playback function), and so on. In addition, the memory 709 may include volatile memory or non-volatile memory, or both. The non-volatile memory may be read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically EPROM (EEPROM), or flash memory. The volatile memory may be random access memory (RAM), static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDRSDRAM), enhanced SDRAM (ESDRAM), synch link DRAM (SLDRAM), or direct Rambus RAM (DRRAM). The memory 709 in the embodiments of this application includes but is not limited to these and any other suitable types of memory.
The processor 710 may include one or more processing units; optionally, the processor 710 integrates an application processor and a modem processor, where the application processor mainly handles operations involving the operating system, user interface, application programs, and so on, and the modem processor mainly handles wireless communication signals, such as a baseband processor. It can be understood that the modem processor may also not be integrated into the processor 710.
The processor 710 is configured to input the channel information of each layer into the corresponding first artificial intelligence (AI) network model for processing, and obtain the channel characteristic information output by the first AI network model, where one layer corresponds to one first AI network model;
the radio frequency unit 701 is configured to report the channel characteristic information corresponding to each layer to the network-side device.
Optionally, every layer corresponds to the same first AI network model.
Optionally, the first AI network model corresponding to each layer is different, and the length of the channel characteristic information output by each first AI network model decreases progressively in layer order.
Optionally, the processor 710 is further configured to:
determine the number of layers corresponding to the channel information based on the rank of the channel;
obtain the ratio of the target parameter of a first target layer to the sum of the target parameters of a second target layer, and determine the first AI network model corresponding to the first target layer based on the ratio range in which the ratio falls, where the first target layer is any one of the layers corresponding to the channel information, and the second target layer is all layers corresponding to the terminal or all layers reported by the terminal;
here, different ratio ranges correspond to different first AI network models, and the target parameter includes any one of the following: eigenvalue, CQI, channel capacity.
Optionally, the first AI network model corresponding to each layer is different, and the input of a target first AI network model includes the channel information of a third target layer;
the layers corresponding to the terminal are ordered based on a target parameter, the third target layer is any one of the ordered layers corresponding to the terminal, the target first AI network model is the first AI network model corresponding to the third target layer, and the target parameter includes any one of the following: eigenvalue, CQI, channel capacity.
Optionally, the third target layer is any one of the ordered layers corresponding to the terminal other than the first layer, and the input of the target first AI network model further includes any one of the following:
the output of the first AI network model corresponding to the layer preceding the third target layer;
the output of the first AI network model corresponding to the first layer;
the outputs of the first AI network models corresponding to all layers before the third target layer;
the channel information corresponding to the layer preceding the third target layer;
the channel information corresponding to all layers before the third target layer.
Optionally, the processor 710 is further configured to:
preprocess the channel information of each layer and then input it into the corresponding first AI network model for processing.
Optionally, the processor 710 is further configured to perform any one of the following:
preprocess the channel information of each layer through a second AI network model and then input it into the corresponding first AI network model;
preprocess the channel information of a target layer through a target second AI network model, and then input the output of the target second AI network model into the first AI network model corresponding to the target layer, where the target layer is any layer corresponding to the terminal, and each layer corresponds to one target second AI network model.
Optionally, the radio frequency unit 701 is further configured to:
post-process the channel characteristic information corresponding to a target layer, and report the post-processed channel characteristic information to the network-side device;
where the target layer is at least one layer corresponding to the terminal.
Optionally, the radio frequency unit 701 is further configured to:
post-process the channel characteristic information corresponding to the target layer to obtain channel characteristic information of a target length, where the target length is smaller than the length of the channel characteristic information before post-processing;
report the target length and the channel characteristic information of the target length to the network-side device.
Optionally, when the channel characteristic information is reported through channel state information CSI, the target length is included in the first part of the CSI.
Optionally, when the channel characteristic information is reported through CSI, the radio frequency unit 701 is further configured to perform any one of the following:
when the layers corresponding to the terminal are ordered based on a target parameter, report the channel characteristic information corresponding to the first layer to the network-side device through the first part of the CSI, and report the channel characteristic information corresponding to the layers other than the first layer to the network-side device through the second part of the CSI, where the target parameter includes any one of the following: eigenvalue, CQI, channel capacity;
report the channel characteristic information corresponding to each layer to the network-side device through the second part of the CSI;
report the channel characteristic information corresponding to each layer to the network-side device through the corresponding block in the second part of the CSI, one layer corresponding to one block.
Optionally, the radio frequency unit 701 is further configured to:
report the channel characteristic information corresponding to each layer to the network-side device, and discard the channel characteristic information in reverse layer order.
Optionally, the processor 710 is further configured to: determine the rank of the channel according to the CSI reference signal channel estimation result;
the radio frequency unit 701 is further configured to: report the RI and the channel characteristic information corresponding to each layer to the network-side device.
Optionally, the channel information is precoding information.
In the embodiments of this application, each layer of the terminal corresponds to one first AI network model, so no matter how many layers the terminal side has, each layer processes its channel information through the corresponding first AI network model. There is thus no need to train different AI network models for different numbers of layers, which can reduce the transmission overhead of the AI network model between the network-side device and the terminal and can also reduce the power consumption of the terminal and the network-side device.
An embodiment of this application further provides a network-side device, including a processor and a communication interface, where the communication interface is configured to receive the channel characteristic information corresponding to each layer reported by the terminal; one layer of the terminal corresponds to one first AI network model, and the first AI network model is used to process the channel information of the layer input by the terminal and output the channel characteristic information. This network-side device embodiment corresponds to the above network-side device method embodiment; each implementation process and implementation manner of the above method embodiment can be applied to this network-side device embodiment and can achieve the same technical effect.
Specifically, an embodiment of this application further provides a network-side device. As shown in Figure 8, the network-side device 800 includes: an antenna 81, a radio frequency device 82, a baseband device 83, a processor 84, and a memory 85. The antenna 81 is connected to the radio frequency device 82. In the uplink direction, the radio frequency device 82 receives information through the antenna 81 and sends the received information to the baseband device 83 for processing. In the downlink direction, the baseband device 83 processes the information to be sent and sends it to the radio frequency device 82; the radio frequency device 82 processes the received information and then sends it out through the antenna 81.
The method performed by the network-side device in the above embodiments may be implemented in the baseband device 83, which includes a baseband processor.
The baseband device 83 may include, for example, at least one baseband board on which multiple chips are disposed; as shown in Figure 8, one of the chips is, for example, the baseband processor, which is connected to the memory 85 through a bus interface to call the program in the memory 85 and perform the network device operations shown in the above method embodiments.
The network-side device may further include a network interface 86, which is, for example, a common public radio interface (CPRI).
Specifically, the network-side device 800 of the embodiment of this application further includes instructions or programs stored in the memory 85 and executable on the processor 84; the processor 84 calls the instructions or programs in the memory 85 to execute the method executed by each module shown in Figure 5 and achieve the same technical effect, and to avoid repetition, details are not repeated here.
An embodiment of this application further provides a readable storage medium on which a program or instructions are stored. When the program or instructions are executed by a processor, each process of the method embodiment described in Figure 2 above is implemented, or each process of the method embodiment described in Figure 3 above is implemented, and the same technical effect can be achieved; to avoid repetition, details are not repeated here.
The processor is the processor in the terminal described in the above embodiments. The readable storage medium includes computer-readable storage media, such as computer read-only memory (ROM), random access memory (RAM), a magnetic disk, or an optical disc.
An embodiment of this application further provides a chip, including a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to run a program or instructions to implement each process of the method embodiment described in Figure 2 above, or each process of the method embodiment described in Figure 3 above, and the same technical effect can be achieved; to avoid repetition, details are not repeated here.
It should be understood that the chip mentioned in the embodiments of this application may also be called a system-on-chip, a system chip, a chip system, or a system-on-a-chip.
An embodiment of this application further provides a computer program/program product stored in a storage medium. The computer program/program product is executed by at least one processor to implement each process of the method embodiment described in Figure 2 above, or each process of the method embodiment described in Figure 3 above, and the same technical effect can be achieved; to avoid repetition, details are not repeated here.
An embodiment of this application further provides a communication system, including a terminal and a network-side device, where the terminal can be used to perform the steps of the channel characteristic information transmission method described in Figure 2, and the network-side device can be used to perform the steps of the channel characteristic information transmission method described in Figure 3 above.
It should be noted that in this document, the terms "comprise", "include", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or apparatus that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or apparatus that includes that element. In addition, it should be pointed out that the scope of the methods and apparatuses in the embodiments of this application is not limited to performing functions in the order shown or discussed; it may also include performing functions in a substantially simultaneous manner or in the reverse order according to the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus the necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of this application, in essence or in the part contributing to the related art, can be embodied in the form of a computer software product stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc), including several instructions to cause a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the embodiments of this application.
The embodiments of this application have been described above with reference to the accompanying drawings, but this application is not limited to the specific implementations described above; the specific implementations described above are merely illustrative rather than restrictive. Inspired by this application, those of ordinary skill in the art can make many other forms without departing from the purpose of this application and the scope protected by the claims, all of which fall within the protection of this application.

Claims (33)

  1. A channel characteristic information transmission method, comprising:
    inputting, by a terminal, the channel information of each layer into a corresponding first artificial intelligence (AI) network model for processing, and obtaining the channel characteristic information output by the first AI network model, wherein one layer corresponds to one first AI network model;
    reporting, by the terminal, the channel characteristic information corresponding to each layer to a network-side device.
  2. The method according to claim 1, wherein the first AI network model corresponding to each layer is different, and the length of the channel characteristic information output by each first AI network model decreases progressively in layer order.
  3. The method according to claim 1, wherein before the terminal inputs the channel information of each layer into the corresponding first AI network model for processing, the method further comprises:
    determining, by the terminal, the number of layers corresponding to the channel information based on the rank of the channel;
    obtaining, by the terminal, a ratio of the target parameter of a first target layer to the sum of the target parameters of a second target layer, and determining the first AI network model corresponding to the first target layer based on the ratio range in which the ratio falls, wherein the first target layer is any one of the layers reported by the terminal, and the second target layer is all layers corresponding to the terminal or all layers reported by the terminal;
    wherein different ratio ranges correspond to different first AI network models, and the target parameter comprises any one of the following: eigenvalue, channel quality indicator CQI, channel capacity.
  4. The method according to claim 1, wherein the first AI network model corresponding to each layer is different, and the input of a target first AI network model comprises the channel information of a third target layer;
    wherein the layers corresponding to the terminal are ordered based on a target parameter, the third target layer is any one of the ordered layers corresponding to the terminal, the target first AI network model is the first AI network model corresponding to the third target layer, and the target parameter comprises any one of the following: eigenvalue, CQI, channel capacity.
  5. The method according to claim 4, wherein the third target layer is any one of the ordered layers corresponding to the terminal other than the first layer, and the input of the target first AI network model further comprises any one of the following:
    the output of the first AI network model corresponding to the layer preceding the third target layer;
    the output of the first AI network model corresponding to the first layer;
    the outputs of the first AI network models corresponding to all layers before the third target layer;
    the channel information corresponding to the layer preceding the third target layer;
    the channel information corresponding to all layers before the third target layer.
  6. The method according to claim 1, wherein the terminal inputting the channel information of each layer into the corresponding first AI network model for processing comprises:
    preprocessing, by the terminal, the channel information of each layer and then inputting it into the corresponding first AI network model for processing.
  7. The method according to claim 6, wherein the terminal preprocessing the channel information of each layer and then inputting it into the corresponding first AI network model comprises any one of the following:
    preprocessing, by the terminal, the channel information of each layer through a second AI network model and then inputting it into the corresponding first AI network model;
    preprocessing, by the terminal, the channel information of a target layer through a target second AI network model, and then inputting the output of the target second AI network model into the first AI network model corresponding to the target layer, wherein the target layer is any layer corresponding to the terminal, and each layer corresponds to one target second AI network model.
  8. The method according to claim 1, wherein the terminal reporting the channel characteristic information corresponding to each layer to the network-side device comprises:
    post-processing, by the terminal, the channel characteristic information corresponding to a target layer, and reporting the post-processed channel characteristic information to the network-side device;
    wherein the target layer is at least one layer corresponding to the terminal.
  9. The method according to claim 8, wherein the terminal post-processing the channel characteristic information corresponding to the target layer and reporting the post-processed channel characteristic information to the network-side device comprises:
    post-processing, by the terminal, the channel characteristic information corresponding to the target layer to obtain channel characteristic information of a target length, wherein the target length is smaller than the length of the channel characteristic information before post-processing;
    reporting, by the terminal, the target length and the channel characteristic information of the target length to the network-side device.
  10. The method according to claim 8, wherein, when the channel characteristic information is reported through channel state information CSI, the target length is included in the first part of the CSI.
  11. The method according to any one of claims 1-10, wherein, when the channel characteristic information is reported through CSI, the terminal reporting the channel characteristic information corresponding to each layer to the network-side device comprises any one of the following:
    when the layers corresponding to the terminal are ordered based on a target parameter, reporting, by the terminal, the channel characteristic information corresponding to the first layer to the network-side device through the first part of the CSI, and reporting the channel characteristic information corresponding to the layers other than the first layer to the network-side device through the second part of the CSI, wherein the target parameter comprises any one of the following: eigenvalue, CQI, channel capacity;
    reporting, by the terminal, the channel characteristic information corresponding to each layer to the network-side device through the second part of the CSI;
    reporting, by the terminal, the channel characteristic information corresponding to each layer to the network-side device through the corresponding block in the second part of the CSI, one layer corresponding to one block.
  12. The method according to any one of claims 1-10, wherein the terminal reporting the channel characteristic information corresponding to each layer to the network-side device comprises:
    reporting, by the terminal, the channel characteristic information corresponding to each layer to the network-side device, and discarding the channel characteristic information in reverse layer order.
  13. The method according to any one of claims 1-10, wherein the method further comprises:
    determining, by the terminal, the rank of the channel according to the CSI reference signal channel estimation result;
    the terminal reporting the channel characteristic information corresponding to each layer to the network-side device comprises:
    reporting, by the terminal, a rank indicator RI and the channel characteristic information corresponding to each layer to the network-side device.
  14. The method according to any one of claims 1-10, wherein the channel information is precoding information.
  15. A channel characteristic information transmission method, comprising:
    receiving, by a network-side device, the channel characteristic information corresponding to each layer reported by a terminal;
    wherein one layer of the terminal corresponds to one first AI network model, and the first AI network model is used to process the channel information of the layer input by the terminal and output the channel characteristic information.
  16. The method according to claim 15, wherein the first AI network model corresponding to each layer is different, and the length of the channel characteristic information output by each first AI network model decreases progressively in layer order.
  17. The method according to claim 15, wherein, when the channel characteristic information is reported through CSI, the network-side device receiving the channel characteristic information corresponding to each layer reported by the terminal comprises any one of the following:
    when the layers corresponding to the terminal are ordered based on a target parameter, receiving, by the network-side device, the channel characteristic information corresponding to the first layer reported by the terminal through the first part of the CSI, and the channel characteristic information corresponding to the layers other than the first layer reported through the second part of the CSI, wherein the target parameter comprises any one of the following: eigenvalue, CQI, channel capacity;
    receiving, by the network-side device, the channel characteristic information corresponding to each layer reported by the terminal through the second part of the CSI;
    receiving, by the network-side device, the channel characteristic information corresponding to each layer reported by the terminal through the corresponding block in the second part of the CSI, one layer corresponding to one block.
  18. The method according to any one of claims 15-17, wherein the network-side device receiving the channel characteristic information corresponding to each layer reported by the terminal comprises:
    receiving, by the network-side device, the rank indicator RI and the channel characteristic information corresponding to each layer reported by the terminal.
  19. A channel characteristic information transmission apparatus, comprising:
    a processing module, configured to input the channel information of each layer into a corresponding first AI network model for processing, and obtain the channel characteristic information output by the first AI network model, wherein one layer corresponds to one first AI network model;
    a reporting module, configured to report the channel characteristic information corresponding to each layer to a network-side device.
  20. The apparatus according to claim 19, wherein the apparatus further comprises a determining module, configured to:
    determine the number of layers corresponding to the channel information based on the rank of the channel;
    obtain a ratio of the target parameter of a first target layer to the sum of the target parameters of a second target layer, and determine the first AI network model corresponding to the first target layer based on the ratio range in which the ratio falls, wherein the first target layer is any one of the layers corresponding to the channel information, and the second target layer is all layers corresponding to the apparatus or all layers reported by the apparatus;
    wherein different ratio ranges correspond to different first AI network models, and the target parameter comprises any one of the following: eigenvalue, CQI, channel capacity.
  21. The apparatus according to claim 19, wherein the first AI network model corresponding to each layer is different, and the input of a target first AI network model comprises the channel information of a third target layer;
    wherein the layers corresponding to the apparatus are ordered based on a target parameter, the third target layer is any one of the ordered layers corresponding to the apparatus, the target first AI network model is the first AI network model corresponding to the third target layer, and the target parameter comprises any one of the following: eigenvalue, CQI, channel capacity.
  22. The apparatus according to claim 21, wherein the third target layer is any one of the ordered layers corresponding to the apparatus other than the first layer, and the input of the target first AI network model further comprises any one of the following:
    the output of the first AI network model corresponding to the layer preceding the third target layer;
    the output of the first AI network model corresponding to the first layer;
    the outputs of the first AI network models corresponding to all layers before the third target layer;
    the channel information corresponding to the layer preceding the third target layer;
    the channel information corresponding to all layers before the third target layer.
  23. The apparatus according to claim 19, wherein the processing module is further configured to:
    preprocess the channel information of each layer and then input it into the corresponding first AI network model for processing.
  24. The apparatus according to claim 23, wherein the processing module is further configured to perform any one of the following:
    preprocess the channel information of each layer through a second AI network model and then input it into the corresponding first AI network model;
    preprocess the channel information of a target layer through a target second AI network model, and then input the output of the target second AI network model into the first AI network model corresponding to the target layer, wherein the target layer is any layer corresponding to the apparatus, and each layer corresponds to one target second AI network model.
  25. 根据权利要求19所述的装置,其中,所述上报模块还用于:
    对目标层对应的所述信道特征信息进行后处理,将后处理后的信道特征信息向网络侧设备上报;
    其中,所述目标层为所述装置对应的至少一个层。
  26. 根据权利要求25所述的装置,其中,所述上报模块还用于:
    对目标层对应的所述信道特征信息进行后处理,得到目标长度的信道特征信息,所述目标长度小于后处理前的所述信道特征信息的长度;
    将所述目标长度、所述目标长度的信道特征信息向网络侧设备上报。
  27. 根据权利要求19-26中任一项所述的装置,其中,在所述信道特征信息通过CSI进行上报的情况下,所述上报模块还用于执行如下任意一项:
    在所述装置对应的层基于目标参数进行排序的情况下,将第一个层对应的所述信道特征信息通过所述CSI的第一部分向网络侧设备上报,将除所述第一个层以外的其他层对应的所述信道特征信息通过所述CSI的第二部分向网络侧设备上报,其中,所述目标参数包括如下任意一项:目标参数、CQI、信道容量;
    将每层对应的所述信道特征信息通过所述CSI的第二部分向网络侧设备上报;
    将每层对应的所述信道特征信息通过所述CSI的第二部分中对应的分块向网络侧设备上报,一个层对应一个所述分块。
  28. 根据权利要求19-26中任一项所述的装置,其中,所述上报模块还用于:
    向网络侧设备上报每层对应的所述信道特征信息,并按照层的顺序的倒序对所述信道特征信息进行丢弃。
  29. 一种信道特征信息传输装置,包括:
    接收模块,用于接收终端上报的每层对应的信道特征信息;
    其中,所述终端的一个层对应一个第一AI网络模型,所述第一AI网络模型用于对终端输入的层的信道信息进行处理,并输出所述信道特征信息。
  30. 根据权利要求29所述的装置,其中,在所述信道特征信息通过CSI进行上报的情况下,所述接收模块还用于执行如下任意一项:
    在所述终端对应的层基于目标参数进行排序的情况下,接收终端通过所述CSI的第一部分上报的第一个层对应的所述信道特征信息,以及通过所述CSI的第二部分上报的除所述第一个层以外的其他层对应的所述信道特征信息,其中,所述目标参数包括如下任意一项:目标参数、CQI、信道容量;
    接收终端通过所述CSI的第二部分上报的每层对应的所述信道特征信息;
    接收终端通过所述CSI的第二部分中对应的分块上报的每层对应的所述信道特征信息,一个层对应一个所述分块。
  31. 一种终端,包括处理器和存储器,所述存储器存储可在所述处理器上运行的程序或指令,所述程序或指令被所述处理器执行时实现如权利要求1-14中任一项所述的信道特征信息传输方法的步骤。
  32. A network-side device, comprising a processor and a memory, the memory storing a program or instructions executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the channel characteristic information transmission method according to any one of claims 15-18.
  33. A readable storage medium, storing a program or instructions, wherein the program or instructions, when executed by a processor, implement the steps of the channel characteristic information transmission method according to any one of claims 1-14, or implement the steps of the channel characteristic information transmission method according to any one of claims 15-18.
PCT/CN2023/085012 2022-04-01 2023-03-30 Channel characteristic information transmission method and apparatus, terminal, and network-side device WO2023185995A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210349419.2 2022-04-01
CN202210349419.2A CN116939649A (zh) Channel characteristic information transmission method and apparatus, terminal, and network-side device

Publications (1)

Publication Number Publication Date
WO2023185995A1 true WO2023185995A1 (zh) 2023-10-05

Family

ID=88199369

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/085012 WO2023185995A1 (zh) Channel characteristic information transmission method and apparatus, terminal, and network-side device

Country Status (2)

Country Link
CN (1) CN116939649A (zh)
WO (1) WO2023185995A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108696932A (zh) * 2018-04-09 2018-10-23 Xi'an Jiaotong University Outdoor fingerprint positioning method using CSI multipath and machine learning
WO2020092340A1 (en) * 2018-11-01 2020-05-07 Intel Corporation Frequency domain channel state information compression
CN111614435A (zh) * 2019-05-13 2020-09-01 Vivo Mobile Communication Co., Ltd. Transmission method for channel state information (CSI) report, terminal, and network device
CN113922936A (zh) * 2021-08-31 2022-01-11 China Academy of Information and Communications Technology Channel state information feedback method and device based on AI technology
WO2022040046A1 (en) * 2020-08-18 2022-02-24 Qualcomm Incorporated Reporting configurations for neural network-based processing at a ue

Also Published As

Publication number Publication date
CN116939649A (zh) 2023-10-24

Similar Documents

Publication Publication Date Title
WO2023246618A1 (zh) Channel matrix processing method and apparatus, terminal, and network-side device
WO2023185978A1 (zh) Channel characteristic information reporting and recovery method, terminal, and network-side device
WO2023185995A1 (zh) Channel characteristic information transmission method and apparatus, terminal, and network-side device
KR20230138538A (ko) Information reporting method, apparatus, first device, and second device
WO2023185980A1 (zh) Channel characteristic information transmission method and apparatus, terminal, and network-side device
WO2023179460A1 (zh) Channel characteristic information transmission method and apparatus, terminal, and network-side device
WO2023179570A1 (zh) Channel characteristic information transmission method and apparatus, terminal, and network-side device
WO2024055974A1 (zh) CQI transmission method and apparatus, terminal, and network-side device
WO2024007949A1 (zh) AI model processing method and apparatus, terminal, and network-side device
WO2023179476A1 (zh) Channel characteristic information reporting and recovery method, terminal, and network-side device
WO2024093999A1 (zh) Channel information reporting and receiving method, terminal, and network-side device
WO2024088161A1 (zh) Information transmission method, information processing method, apparatus, and communication device
WO2023179473A1 (zh) Channel characteristic information reporting and recovery method, terminal, and network-side device
WO2024055993A1 (zh) CQI transmission method and apparatus, terminal, and network-side device
WO2023179474A1 (zh) Channel characteristic information auxiliary reporting and recovery method, terminal, and network-side device
WO2024051594A1 (zh) Information transmission method, AI network model training method, apparatus, and communication device
WO2024051564A1 (zh) Information transmission method, AI network model training method, apparatus, and communication device
WO2024088162A1 (zh) Information transmission method, information processing method, apparatus, and communication device
WO2023207920A1 (zh) Channel information feedback method, terminal, and network-side device
WO2023197953A1 (zh) Precoding matrix feedback method, terminal, and network-side device
CN118264290A (zh) Channel characteristic information feedback method, receiving method, terminal, and network-side device
WO2023151593A1 (zh) Precoding indication method, apparatus, communication device, system, and storage medium
CN117318773A (zh) Channel matrix processing method and apparatus, terminal, and network-side device
CN117411527A (zh) Channel characteristic information reporting and recovery method, terminal, and network-side device
CN117335849A (zh) Channel characteristic information reporting and recovery method, terminal, and network-side device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23778360

Country of ref document: EP

Kind code of ref document: A1