WO2024008004A1 - A communication method and apparatus - Google Patents

A communication method and apparatus

Info

Publication number
WO2024008004A1
Authority
WO
WIPO (PCT)
Prior art keywords
indication information
channel status
status indication
encoder
model
Prior art date
Application number
PCT/CN2023/105001
Other languages
English (en)
French (fr)
Inventor
柴晓萌
孙琰
杭海存
陈家璇
Original Assignee
Huawei Technologies Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2024008004A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04B TRANSMISSION
    • H04B7/00 Radio transmission systems, i.e. using radiation field
    • H04B7/02 Diversity systems; multi-antenna systems, i.e. transmission or reception using multiple antennas
    • H04B7/04 Diversity systems using two or more spaced independent antennas
    • H04B7/0413 MIMO systems
    • H04B7/06 Diversity systems using two or more spaced independent antennas at the transmitting station
    • H04B7/0613 Diversity systems using simultaneous transmission
    • H04B7/0615 Diversity systems using simultaneous transmission of weighted versions of same signal
    • H04B7/0619 Diversity systems using simultaneous transmission of weighted versions of same signal using feedback from receiving side
    • H04B7/0621 Feedback content
    • H04B7/0626 Channel coefficients, e.g. channel state information [CSI]
    • H04B7/0636 Feedback format

Definitions

  • the present disclosure relates to the field of communication technology, and in particular, to a communication method and device.
  • in wireless communication networks, such as mobile communication networks, the services supported by the network are becoming increasingly diverse, so the requirements the network must meet are becoming increasingly diverse as well. For example, the network needs to support ultra-high data rates, ultra-low latency, and/or ultra-large numbers of connections. These features make network planning, network configuration, and/or resource scheduling increasingly complex.
  • in addition, the functions of the network are becoming more and more powerful, for example supporting ever higher frequency spectrum, high-order multiple-input multiple-output (MIMO) technology, beamforming, and/or beam management; these new technologies have made network energy saving a hot research topic.
  • These new requirements, new scenarios, and new features have brought unprecedented challenges to network planning, O&M, and efficient operations.
  • artificial intelligence technology can be introduced into wireless communication networks to achieve network intelligence. Based on this, how to effectively implement artificial intelligence in the network is a question worth studying.
  • the present disclosure provides a communication method and device, with a view to reducing the overhead of channel state information (CSI) feedback through artificial intelligence.
  • in a first aspect, the present disclosure provides a communication method, including: determining N pieces of first channel status indication information using a first encoder and the channel information of N downlink transmission layers input to the first encoder, where N is a positive integer greater than 1; determining second channel status indication information using a second encoder and the N pieces of first channel status indication information input to the second encoder, where the second channel status indication information corresponds to the N downlink transmission layers and the sequence length corresponding to the second channel status indication information is less than the sum of the sequence lengths corresponding to the N pieces of first channel status indication information; and sending the second channel status indication information.
  • in this design, the downlink channel information is compressed twice through a two-stage encoder, which reduces the resources occupied by information that is common across the channel state information (CSI) of multiple downlink transmission layers and can therefore reduce CSI feedback overhead.
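  • the two-stage pipeline above can be sketched as follows. This is a hypothetical illustration only: the real first and second encoders are trained AI models, whereas the stand-ins here use simple mean-pooling so that only the data flow and the sequence-length relationship (the second indication is shorter than the concatenated first indications) are visible.

```python
# Hypothetical sketch of the two-stage CSI compression described above.
# Mean-pooling "compression" is a stand-in for the trained AI models.

def first_encoder(layer_channel_info, out_len=8):
    """Stage 1 (per layer): compress one transmission layer's channel
    information into a short 'first channel status indication' vector."""
    chunk = max(1, len(layer_channel_info) // out_len)
    return [sum(layer_channel_info[i:i + chunk]) / chunk
            for i in range(0, chunk * out_len, chunk)]

def second_encoder(first_indications, out_len=12):
    """Stage 2 (joint): compress the N stage-1 outputs, removing
    information common to the layers, into one shorter sequence."""
    flat = [x for ind in first_indications for x in ind]
    chunk = max(1, len(flat) // out_len)
    return [sum(flat[i:i + chunk]) / chunk
            for i in range(0, chunk * out_len, chunk)]

# N = 2 downlink transmission layers, each with 64 channel coefficients.
layers = [[float(i % 7) for i in range(64)], [float(i % 5) for i in range(64)]]
firsts = [first_encoder(h) for h in layers]       # N indications, length 8 each
second = second_encoder(firsts)                   # single indication, length 12
assert len(second) < sum(len(f) for f in firsts)  # 12 < 16: feedback reduced
```

The output lengths (8 and 12) are illustrative choices; the patent only requires that the second indication be shorter than the sum of the first indications.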
  • in a possible design, the method further includes: sending information indicating the rank, where the value of N is equal to the value of the rank.
  • when the value of the rank is greater than 1, it can implicitly indicate that the downlink channel information is compressed twice.
  • the information used to indicate the rank may be a rank indicator (rank indicator, RI).
  • in a possible design, one or more of the structure of the second encoder, the structure of the second decoder matching the second encoder, or the structure of the second autoencoder to which the second encoder belongs corresponds to the value of N.
  • one or more of the structure of the first encoder, the structure of the first decoder matching the first encoder, or the structure of the first autoencoder to which the first encoder belongs also corresponds to the value of N.
  • the aforementioned information indicating the rank may also be used by a device that receives the information indicating the rank to determine the structure of the second decoder and/or the first decoder.
  • the structure of the second encoder, the structure of the second decoder matching the second encoder, or the structure of the second autoencoder to which the second encoder belongs, and/or the value of N, corresponds to the sequence length of the second channel status indication information. That is, the sequence length of the second channel status indication information may indicate one or more of the following: the structure of the second encoder, the structure of the second decoder matching the second encoder, or the structure of the second autoencoder to which the second encoder belongs, and/or the value of N.
  • similarly, the structure of the first encoder, the structure of the first decoder matching the first encoder, or the structure of the first autoencoder to which the first encoder belongs, and/or the value of N, corresponds to the sequence length of the second channel status indication information. That is, the sequence length of the second channel status indication information may indicate one or more of the following: the structure of the first encoder, the structure of the first decoder matching the first encoder, or the structure of the first autoencoder to which the first encoder belongs, and/or the value of N.
  • in a possible design, the method further includes: sending second information, the second information being used to indicate one or more of the following: the second encoder, a second decoder matching the second encoder, or the second auto-encoder (AE) to which the second encoder belongs.
  • the information receiver can quickly determine the second decoder matching the second encoder, thereby improving decoding efficiency.
  • in a possible design, the method further includes: sending first information, the first information being used to indicate one or more of the following: the first encoder, a first decoder matching the first encoder, or the first autoencoder (AE) to which the first encoder belongs.
  • the information receiver can quickly determine the first decoder that matches the first encoder, thereby improving decoding efficiency.
  • the structure, input and/or output of the first encoder will be described in detail below.
  • the first encoder may have compression and quantization functions, or the first encoder may have compression but not quantization functions.
  • the first encoder includes N first artificial intelligence (artificial intelligence, AI) models, and the channel information of the N downlink transmission layers is input to the N first AI models respectively.
  • each of the N first AI models outputs one piece of first channel status indication information. It can be understood that each first AI model in the first encoder is used to process the channel information of only one downlink transmission layer.
  • the first AI models are independent of one another and are processed in parallel, which can improve processing efficiency.
  • the first encoder includes a first AI model, and the channel information of the N downlink transmission layers is input to the first AI model serially or in parallel.
  • the first AI model outputs N pieces of first channel status indication information serially or in parallel.
  • in another possible design, the first encoder includes M first AI models, where M is an integer greater than 1 and less than N, and the channel information of some of the N downlink transmission layers is input serially or in parallel to one first AI model among the M first AI models.
  • that first AI model correspondingly outputs, serially or in parallel, the first channel status indication information of those downlink transmission layers.
  • in this way, one first AI model in the first encoder can be used to process the channel information of multiple downlink transmission layers, which can reduce the number of AI models and the storage resources they occupy.
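  • as a hypothetical illustration of this sharing option, the sketch below assigns N transmission layers to M first AI models round-robin; the patent does not fix any particular mapping, so the rule and the function name are assumptions for illustration only.

```python
# Hypothetical layer-to-model assignment for M shared first AI models
# (1 < M < N). Round-robin is an assumed mapping, not taken from the patent.

def assign_layers(n_layers, m_models):
    """Return a round-robin map of layer index -> first AI model index."""
    return {layer: layer % m_models for layer in range(n_layers)}

mapping = assign_layers(n_layers=4, m_models=2)
# model 0 serves layers 0 and 2; model 1 serves layers 1 and 3,
# each model processing its layers serially (or the models run in parallel).
```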
  • the second encoder has compression and quantization functions.
  • the N pieces of first channel status indication information are input to the second encoder, and the output of the second encoder includes the second channel status indication information.
  • the structure of the second encoder can be understood with reference to the following possible examples.
  • the second encoder includes a second AI model
  • the input of the second AI model includes the N pieces of first channel status indication information.
  • the output of the second AI model includes the second channel status indication information.
  • one second AI model is used to process multiple first channel status indication information, which can reduce the storage resources of the model.
  • in another possible design, when N is greater than 2, the second encoder includes N-1 second AI models.
  • the input of the 1st second AI model among the N-1 second AI models includes the 1st first channel status indication information and the 2nd first channel status indication information among the N pieces of first channel status indication information, and the output of the 1st second AI model includes the 1st fourth channel status indication information.
  • the input of the i-th second AI model includes the (i-1)-th fourth channel status indication information and the (i+1)-th first channel status indication information, and the output of the i-th second AI model includes the i-th fourth channel status indication information, where 2 ≤ i ≤ N-1 and i is a positive integer; the (N-1)-th fourth channel status indication information output by the (N-1)-th second AI model is the second channel status indication information.
  • such a design implements hierarchical compression: multiple second AI models, each with two input ports and one output port, can be used to implement a second encoder in which N is any number greater than 2.
  • the multiple second AI models may be the same or different. Taking the case where they are the same as an example, the second encoder can be implemented by training one second AI model and calling it multiple times, thereby reducing CSI feedback overhead and reducing the number of second AI models that need to be trained.
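  • the hierarchical chain above can be sketched as follows. A single 2-input/1-output second AI model is reused N-1 times; the element-wise average used here is a stand-in chosen only to show the call pattern, not the learned compression.

```python
# Hypothetical sketch of the hierarchical second encoder: one trained
# 2-in/1-out second AI model invoked N-1 times for any N > 2.

def pairwise_model(a, b):
    """Stand-in for one 2-in/1-out second AI model: fuse two indications."""
    return [(x + y) / 2 for x, y in zip(a, b)]

def second_encoder(first_indications):
    """The 1st call fuses inputs 1 and 2 into the 1st fourth indication;
    the i-th call fuses the (i-1)-th fourth indication with the (i+1)-th
    first indication; the last output is the second channel status
    indication. N-1 invocations in total."""
    fourth = first_indications[0]
    for nxt in first_indications[1:]:
        fourth = pairwise_model(fourth, nxt)
    return fourth

indications = [[1.0, 3.0], [5.0, 7.0], [9.0, 11.0], [13.0, 15.0]]  # N = 4
out = second_encoder(indications)   # 3 calls of the same 2-in/1-out model
```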
  • in another possible design, when N is greater than 2, the second encoder includes K second AI models, where K is an integer greater than 1 and less than N, and some of the N pieces of first channel status indication information are input in parallel to one second AI model among the K second AI models. For example, K is 2.
  • such a design implements hierarchical compression: multiple second AI models, each with one output port and a number of input ports that is an integer from 2 to N-1, can be used to implement a second encoder in which N is any number greater than 2.
  • in this way, the second encoder can be implemented by training several second AI models with different numbers of input ports, thereby reducing CSI feedback overhead and reducing the number of second AI models that need to be trained; it can also reduce the number of second AI models needed to implement the second encoder, which increases encoding speed.
  • in another possible design, the second encoder has a compression function but no quantization function. The N pieces of first channel status indication information are input to the second encoder, and the output of the second encoder includes third channel status indication information. The third channel status indication information is then quantized to obtain the second channel status indication information.
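  • the separate quantization step in this variant can be sketched as follows. A uniform scalar quantizer is an assumed stand-in for whatever quantization the system actually applies; the bit width and value range are illustrative parameters.

```python
# Hypothetical sketch: the second encoder compresses but does not quantize,
# so its real-valued output (the third channel status indication) is
# quantized afterwards to form the second channel status indication.

def quantize(third_indication, bits=4, lo=-1.0, hi=1.0):
    """Map each real value to an integer codeword of `bits` bits."""
    levels = (1 << bits) - 1
    out = []
    for x in third_indication:
        x = min(max(x, lo), hi)                       # clip to the range
        out.append(round((x - lo) / (hi - lo) * levels))
    return out

def dequantize(codewords, bits=4, lo=-1.0, hi=1.0):
    """Approximate inverse applied at the receiver before decompression."""
    levels = (1 << bits) - 1
    return [lo + c / levels * (hi - lo) for c in codewords]

third = [0.31, -0.62, 0.05]                 # output of the second encoder
second = quantize(third)                    # integer second channel status indication
recovered = dequantize(second)              # receiver-side approximation of `third`
# uniform quantization bounds the error by half a step: (hi-lo)/levels/2
assert all(abs(a - b) <= 1.0 / 15 for a, b in zip(third, recovered))
```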
  • the structure of the second encoder can be understood with reference to the following possible examples.
  • the second encoder includes a second AI model
  • the input of the second AI model includes the N first channel status indication information
  • the output of the second AI model includes the third channel status indication information.
  • one second AI model is used to process multiple first channel status indication information, which can reduce the storage resources of the model.
  • in another possible design, when N is greater than 2, the second encoder includes N-1 second AI models.
  • the input of the 1st second AI model among the N-1 second AI models includes the 1st first channel status indication information and the 2nd first channel status indication information among the N pieces of first channel status indication information, and the output of the 1st second AI model includes the 1st fourth channel status indication information.
  • the input of the i-th second AI model includes the (i-1)-th fourth channel status indication information and the (i+1)-th first channel status indication information, and the output of the i-th second AI model includes the i-th fourth channel status indication information, where 2 ≤ i ≤ N-1 and i is a positive integer; the (N-1)-th fourth channel status indication information output by the (N-1)-th second AI model is the third channel status indication information.
  • Such a design implements hierarchical compression.
  • Multiple second AI models with an input port number of 2 and an output port number of 1 can be used to implement a second encoder with an input port number (N) of any number greater than 2.
  • the multiple second AI models can be the same or different. Taking the case where they are the same as an example, one second AI model can be trained and called multiple times to implement the second encoder, thereby reducing CSI feedback overhead and reducing the number of second AI models that need to be trained.
  • in another possible design, when N is greater than 2, the second encoder includes K second AI models, where K is an integer greater than 1 and less than N, and some of the N pieces of first channel status indication information are input in parallel to one second AI model among the K second AI models.
  • such a design can use multiple second AI models, each with one output port and a number of input ports that is an integer from 2 to N-1, to implement a second encoder in which N is any number greater than 2. In this way, the second encoder can be implemented by training several second AI models with different numbers of input ports, thereby reducing CSI feedback overhead and reducing the number of second AI models that need to be trained; it can also reduce the number of second AI models needed to implement the second encoder, which increases encoding speed.
  • for example, K is 2.
  • in a possible design, before determining the N pieces of first channel status indication information, the method further includes: acquiring a downlink reference signal; and determining, based on the downlink reference signal, the value of N and the channel information of the N downlink transmission layers.
  • in a second aspect, the present disclosure provides a communication method, applied to an access network device, including: obtaining second channel status indication information; determining N pieces of first channel status indication information using a second decoder and the second channel status indication information, where the sum of the sequence lengths corresponding to the N pieces of first channel status indication information is greater than the sequence length corresponding to the second channel status indication information and N is a positive integer greater than 1; and determining the channel information of N downlink transmission layers using a first decoder and the N pieces of first channel status indication information input to the first decoder.
  • this design matches the first aspect: the twice-compressed channel information is decompressed twice, thereby recovering the corresponding channel information.
  • the structure of the second encoder, the structure of the second decoder matching the second encoder, or the structure of the second autoencoder to which the second encoder belongs, and/or the value of N, corresponds to the sequence length of the second channel status indication information. That is, the sequence length of the second channel status indication information may indicate one or more of the following: the structure of the second encoder, the structure of the second decoder matching the second encoder, or the structure of the second autoencoder to which the second encoder belongs, and/or the value of N.
  • similarly, the structure of the first encoder, the structure of the first decoder matching the first encoder, or the structure of the first autoencoder to which the first encoder belongs, and/or the value of N, corresponds to the sequence length of the second channel status indication information. That is, the sequence length of the second channel status indication information may indicate one or more of the following: the structure of the first encoder, the structure of the first decoder matching the first encoder, or the structure of the first autoencoder to which the first encoder belongs, and/or the value of N.
  • the method further includes: obtaining information indicating the rank, and the value of N is equal to the value of the rank.
  • in a possible design, one or more of the structure of the second decoder, the structure of the second encoder matching the second decoder, or the structure of the second autoencoder to which the second encoder belongs corresponds to the value of N.
  • one or more of the structure of the first decoder, the structure of the first encoder matching the first decoder, or the structure of the first autoencoder to which the first encoder belongs also corresponds to the value of N.
  • in a possible design, the method further includes: obtaining second information, the second information being used to indicate one or more of the following: the second encoder, a second decoder matching the second encoder, or the second autoencoder (AE) to which the second encoder belongs.
  • the information receiver can quickly determine the second decoder matching the second encoder, thereby improving decoding efficiency.
  • in a possible design, the method further includes: obtaining first information, the first information being used to indicate one or more of the following: the first encoder, a first decoder matching the first encoder, or the first autoencoder (AE) to which the first encoder belongs.
  • the second decoder has decompression and dequantization functions.
  • the second channel status indication information is input to the second decoder, and the output of the second decoder includes N pieces of first channel status indication information.
  • the structure of the second decoder can be understood with reference to the following possible examples.
  • in a possible design, the second decoder includes a third AI model; the input of the third AI model includes the second channel status indication information, and the output of the third AI model includes the N pieces of first channel status indication information.
  • in another possible design, when N is greater than 2, the second decoder includes N-1 third AI models.
  • the input of the (N-1)-th third AI model among the N-1 third AI models includes the second channel status indication information, and its output includes the (N-2)-th fourth channel status indication information and the N-th first channel status indication information.
  • the input of the (N-i)-th third AI model includes the (N-i)-th fourth channel status indication information, and the output of the (N-i)-th third AI model includes the (N-i+1)-th first channel status indication information among the N pieces of first channel status indication information and the (N-i-1)-th fourth channel status indication information, where i takes a positive integer from 2 to N-2. It can be understood that neither N-i nor N-i-1 can be 0.
  • the input of the 1st third AI model includes the 1st fourth channel status indication information, and the output of the 1st third AI model includes the 1st first channel status indication information and the 2nd first channel status indication information among the N pieces of first channel status indication information.
  • the second channel status indication information is the (N-1)-th fourth channel status indication information.
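  • the mirrored decoder chain above can be sketched as follows. A single 1-input/2-output third AI model is reused N-1 times: each call splits the current fourth channel status indication into one first channel status indication and a smaller fourth indication, and the last call emits two first indications. The split used here (duplicating the input) is a placeholder showing only the call structure, not a trained model.

```python
# Hypothetical sketch of the hierarchical second decoder: one 1-in/2-out
# third AI model invoked N-1 times to recover N first indications.

def split_model(fourth):
    """Stand-in for one 1-in/2-out third AI model: recover one first
    indication and the next (smaller) fourth indication."""
    return list(fourth), list(fourth)          # (first_k, fourth_{k-1})

def second_decoder(second_indication, n):
    """Apply the same split model N-1 times, recovering the first
    indications in order N, N-1, ..., 2, 1."""
    firsts = [None] * n
    fourth = second_indication                 # the (N-1)-th fourth indication
    for k in range(n - 1, 1, -1):              # models N-1 down to 2
        firsts[k], fourth = split_model(fourth)
    firsts[1], firsts[0] = split_model(fourth)  # the 1st model emits two firsts
    return firsts

n = 4
second = [2.5, 4.5]
firsts = second_decoder(second, n)   # 3 calls of the same 1-in/2-out model
```

With the duplicate-input placeholder, every recovered indication equals the input; a trained model would instead invert the pairwise compression performed by the encoder chain.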
  • in another possible design, when N is greater than 2, the second decoder includes K third AI models, where K is a positive integer greater than 1 and less than N-1, and each third AI model is used to restore at least two of the N pieces of first channel status indication information.
  • in another possible design, the second decoder has a decompression function but no dequantization function. After obtaining the second channel status indication information, the second channel status indication information may first be dequantized to obtain the third channel status indication information. The third channel status indication information is then input to the second decoder, and the output of the second decoder includes the N pieces of first channel status indication information.
  • the structure of the second decoder can be understood with reference to the following possible examples.
  • in a possible design, the second decoder includes a third AI model; the input of the third AI model includes the third channel status indication information, and the output of the third AI model includes the N pieces of first channel status indication information.
  • in another possible design, when N is greater than 2, the second decoder includes N-1 third AI models.
  • the input of the (N-i)-th third AI model among the N-1 third AI models includes the (N-i)-th fourth channel status indication information, and the output of the (N-i)-th third AI model includes at least one of the following: the (N-i)-th first channel status indication information among the N pieces of first channel status indication information, the (N-i+1)-th first channel status indication information among the N pieces of first channel status indication information, and the (N-i-1)-th fourth channel status indication information; where i takes a positive integer from 1 to N-1, and the third channel status indication information is the (N-1)-th fourth channel status indication information. It can be understood that neither N-i nor N-i-1 can be 0.
  • the fourth channel status indication information may include the (N-i)-th first channel status indication information and the (N-i+1)-th first channel status indication information among the N pieces of first channel status indication information.
  • in another possible design, when N is greater than 2, the second decoder includes K third AI models, where K is a positive integer greater than 1 and less than N-1, and each of the K third AI models is used to restore part of the N pieces of first channel status indication information.
  • the structure, input and/or output of the first decoder will be described in detail below.
  • the function of the first decoder matches that of the first encoder. If the first encoder has compression and quantization functions, the first decoder can have decompression and dequantization functions; or, if the first encoder has a compression function but no quantization function, the first decoder can have a decompression function but no dequantization function.
  • the first decoder includes N fourth AI models, and the N first channel status indication information is input to the N fourth AI models respectively.
  • in another possible design, the first decoder includes one fourth AI model, and the N pieces of first channel status indication information are input to the fourth AI model serially or in parallel; or, the first decoder includes M fourth AI models, where M is an integer greater than 1 and less than N, and some of the N pieces of first channel status indication information are input serially or in parallel to one fourth AI model among the M fourth AI models.
  • in a third aspect, the present disclosure provides a communication device, which may be a terminal device; a device, module, or chip in the terminal device; or a device that can be used in conjunction with the terminal device.
  • the communication device may include modules that correspond one-to-one to the methods/operations/steps/actions described in the first aspect.
  • the module may be a hardware circuit, software, or a combination of hardware circuit and software.
  • the communication device may include a processing module and a communication module.
  • a processing module, configured to determine N pieces of first channel status indication information using the first encoder and the channel information of N downlink transmission layers input to the first encoder, where N is a positive integer greater than 1; and to determine the second channel status indication information using the second encoder and the N pieces of first channel status indication information input to the second encoder, where the second channel status indication information corresponds to the N downlink transmission layers and the sequence length corresponding to the second channel status indication information is less than the sum of the sequence lengths corresponding to the N pieces of first channel status indication information;
  • a communication module configured to send the second channel status indication information.
  • in a possible design, the communication module is also configured to send information indicating the rank, where the value of N is equal to the value of the rank.
  • when the value of the rank is greater than 1, it can implicitly indicate that the downlink channel information is compressed twice.
  • the information used to indicate the rank may be a rank indicator (rank indicator, RI).
  • in a possible design, one or more of the structure of the second encoder, the structure of the second decoder matching the second encoder, or the structure of the second autoencoder to which the second encoder belongs corresponds to the value of N.
  • one or more of the structure of the first encoder, the structure of the first decoder matching the first encoder, or the structure of the first autoencoder to which the first encoder belongs also corresponds to the value of N.
  • the aforementioned information indicating the rank may also be used by a device that receives the information indicating the rank to determine the structure of the second decoder and/or the first decoder.
  • in a possible design, the communication module is also configured to send second information, where the second information is used to indicate one or more of the following: the second encoder, the second decoder matching the second encoder, or the second autoencoder (AE) to which the second encoder belongs.
  • in a possible design, the communication module is also configured to send first information, where the first information is used to indicate one or more of the following: the first encoder, the first decoder matching the first encoder, or the first autoencoder (AE) to which the first encoder belongs.
  • the structure, input and/or output of the first encoder can be understood with reference to the content in the first aspect, which will not be described again in this disclosure.
  • the structure, input and/or output of the second encoder can be understood with reference to the content in the first aspect, which will not be described again in this disclosure.
  • in a possible design, the processing module is further configured to obtain a downlink reference signal through the communication module before determining the N pieces of first channel status indication information, and to determine, based on the downlink reference signal, the value of N and the channel information of the N downlink transmission layers.
  • in a fourth aspect, the present disclosure provides a communication device, which may be an access network device; a device, module, or chip in the access network device; or a device that can be used in conjunction with the access network device.
  • the communication device may include modules that correspond one-to-one to the methods/operations/steps/actions described in the second aspect.
  • the module may be a hardware circuit, software, or a combination of hardware circuit and software.
  • the communication device may include a processing module and a communication module.
  • a communication module used to obtain the second channel status indication information
  • a processing module configured to use the second decoder and the second channel status indication information to determine N pieces of first channel status indication information, where the sum of the sequence lengths corresponding to the N pieces of first channel status indication information is greater than the sequence length corresponding to the second channel status indication information, and N is a positive integer greater than 1; and to use the first decoder and the N pieces of first channel status indication information input to the first decoder to determine the channel information of the N downlink transmission layers.
  • the communication module is also configured to obtain information indicating the rank, and the value of N is equal to the value of the rank.
  • the structure of the second decoder, the structure of the second encoder matching the second decoder, or one or more of the structures of the second autoencoder to which the second encoder belongs corresponds to the value of N.
  • the structure of the first decoder, the structure of the first encoder matching the first decoder, or one or more of the structures of the first autoencoder to which the first encoder belongs also corresponds to the value of N.
  • the communication module is also used to obtain second information, where the second information is used to indicate one or more of the following: The second encoder, a decoder matching the second encoder, and a second autoencoder AE to which the second encoder belongs.
  • the communication module is also configured to obtain first information, where the first information is used to indicate one or more of the following: the first encoder, a decoder matching the first encoder, or the first autoencoder AE to which the first encoder belongs.
  • the structure, input and/or output of the second decoder can be understood with reference to the content in the first aspect, which will not be described again in this disclosure.
  • the structure, input and/or output of the first decoder can be understood with reference to the content in the first aspect, which will not be described again in this disclosure.
  • the present disclosure provides a communication device, which includes a processor for implementing the method described in the first aspect.
  • the processor is coupled to a memory, and the memory is used to store instructions and data.
  • the communication device may also include a memory; the communication device may also include a communication interface, which is used for the device to communicate with other devices.
  • the communication interface may be a transceiver, a circuit, a bus, a module, a pin, or another type of communication interface.
  • the communication device includes:
  • Memory used to store instructions
  • a processor configured to determine N pieces of first channel status indication information using the first encoder and the channel information of N downlink transmission layers input to the first encoder, where N is a positive integer greater than 1; and to determine second channel status indication information using the second encoder and the N pieces of first channel status indication information input to the second encoder, where the second channel status indication information corresponds to the N downlink transmission layers, and the sequence length corresponding to the second channel status indication information is less than the sum of the sequence lengths corresponding to the N pieces of first channel status indication information;
  • a communication interface used to send the second channel status indication information.
  • the first encoder and the second encoder are implemented in the form of software function modules.
  • the first encoder and the second encoder can be implemented in the form of hardware (such as chips, hardware circuits, etc.).
  • the first encoder and the second encoder may be integrated in the processor, or may be separately deployed in the communication device.
  • the communication device may further include the first encoder and the second encoder.
  • the present disclosure provides a communication device, which includes a processor for implementing the method described in the second aspect.
  • the processor is coupled to a memory, and the memory is used to store instructions and data.
  • the communication device may also include a memory; the communication device may also include a communication interface, which is used for the device to communicate with other devices.
  • the communication interface may be a transceiver, a circuit, a bus, a module, a pin, or another type of communication interface.
  • the communication device includes:
  • Memory used to store instructions
  • a communication interface used to obtain the second channel status indication information
  • a processor configured to use the second decoder and the second channel status indication information to determine N pieces of first channel status indication information, where the sum of the sequence lengths corresponding to the N pieces of first channel status indication information is greater than the sequence length corresponding to the second channel status indication information, and N is a positive integer greater than 1; and to use the first decoder and the N pieces of first channel status indication information input to the first decoder to determine the channel information of the N downlink transmission layers.
  • the second decoder and the first decoder are implemented in the form of software function modules.
  • the second decoder and the first decoder may be implemented in the form of hardware (such as chips, hardware circuits, etc.).
  • the second decoder and the first decoder may be integrated in the processor, or may be separately deployed in the communication device.
  • the communication device may further include the second decoder and the first decoder.
  • the present disclosure provides a communication system, which includes a terminal device and an access network device.
  • the interaction between terminal equipment and access network equipment can be understood with reference to the following content:
  • the terminal equipment uses the first encoder and the channel information of the N downlink transmission layers input to the first encoder to determine N pieces of first channel status indication information, where N is a positive integer greater than 1; the terminal equipment uses the second encoder and the N pieces of first channel status indication information input to the second encoder to determine second channel status indication information, where the second channel status indication information corresponds to the N downlink transmission layers, and the sequence length corresponding to the second channel status indication information is less than the sum of the sequence lengths corresponding to the N pieces of first channel status indication information;
  • the terminal device sends the second channel status indication information to the access network device, and the access network device receives the second channel status indication information from the terminal device;
  • the access network device uses the second decoder and the second channel status indication information to determine N pieces of first channel status indication information, where N is a positive integer greater than 1; the access network device uses the first decoder and the N pieces of first channel status indication information input to the first decoder to determine the channel information of the N downlink transmission layers.
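The two-stage encoding and decoding flow above can be sketched numerically. In the sketch below, the encoders and decoders are untrained linear stand-ins (in practice they would be trained neural networks), and all dimensions (`DIM_H`, `LEN1`, `LEN2`) are hypothetical; the point is only the sequence-length relationship: the second channel status indication is shorter than the N first indications combined.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 2          # number of downlink transmission layers (rank), N > 1
DIM_H = 64     # dimension of per-layer channel information (hypothetical)
LEN1 = 16      # sequence length of each first channel status indication
LEN2 = 24      # sequence length of the second indication (< N * LEN1)

# Untrained linear stand-ins for the neural encoders/decoders.
W_enc1 = rng.standard_normal((LEN1, DIM_H))        # first encoder
W_enc2 = rng.standard_normal((LEN2, N * LEN1))     # second encoder
W_dec2 = np.linalg.pinv(W_enc2)                    # second decoder
W_dec1 = np.linalg.pinv(W_enc1)                    # first decoder

# Terminal side: per-layer encoding, then joint compression.
h_layers = [rng.standard_normal(DIM_H) for _ in range(N)]
first_info = [W_enc1 @ h for h in h_layers]        # N first indications
second_info = W_enc2 @ np.concatenate(first_info)  # one second indication

# The fed-back sequence is shorter than the N first indications combined.
assert len(second_info) < sum(len(s) for s in first_info)

# Access network side: invert the two stages (lossy here, since the
# stand-in matrices are not trained).
first_rec = np.split(W_dec2 @ second_info, N)
h_rec = [W_dec1 @ s for s in first_rec]
print(len(second_info), [len(s) for s in first_rec])
```

The split into a per-layer (first) stage and a cross-layer (second) stage mirrors the document's design: the second stage exploits redundancy across the N transmission layers to shorten the feedback sequence.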
  • the solution implemented on the terminal device side can be understood with reference to the design described in the first aspect.
  • the solution implemented on the access network device side can also be understood with reference to the design described in the second aspect, which will not be described again in this disclosure.
  • the present disclosure provides a communication system, including a communication device as described in the third or fifth aspect; and a communication device as described in the fourth or sixth aspect.
  • the present disclosure also provides a computer program, which when the computer program is run on a computer, causes the computer to execute the method provided in any one of the above-mentioned first to second aspects.
  • the present disclosure also provides a computer program product, which includes instructions that, when run on a computer, cause the computer to execute the method provided in any one of the above-mentioned first to second aspects.
  • the present disclosure also provides a computer-readable storage medium.
  • the computer-readable storage medium stores a computer program or instructions.
  • when the computer program or instructions are run on a computer, the computer is caused to execute the method provided in any one of the above first to second aspects.
  • the present disclosure also provides a chip, which is used to read a computer program stored in a memory and execute the method provided in any one of the above first to second aspects; or the chip includes a circuit for performing the method provided in any one of the above first to second aspects.
  • the present disclosure also provides a chip system, which includes a processor for supporting a device to implement the method provided in any one of the above first to second aspects.
  • the chip system further includes a memory, and the memory is used to save necessary programs and data for the device.
  • the chip system can be composed of chips or include chips and other discrete devices.
  • Figure 1 is a schematic structural diagram of a communication system
  • Figure 2A is a schematic diagram of the structure of a neuron
  • Figure 2B is a schematic diagram of the layer relationship of the neural network
  • Figure 2C is a schematic diagram of an AI application framework provided by the present disclosure.
  • Figure 3 is a schematic structural diagram of another communication system
  • FIGS. 4A to 4D are schematic diagrams of several network architectures
  • FIG. 5 is one of the flow diagrams of the communication method provided by the present disclosure.
  • Figure 6A is one of the schematic flow diagrams of encoding and decoding provided by the present disclosure.
  • Figure 6B is one of the schematic flow diagrams of encoding and decoding provided by the present disclosure.
  • Figure 6C is one of the schematic flow diagrams of encoding and decoding provided by the present disclosure.
  • Figure 7 is one of the flow diagrams of the communication method provided by the present disclosure.
  • Figure 8A is one of the schematic flow diagrams of encoding and decoding provided by the present disclosure.
  • Figure 8B is one of the schematic flow diagrams of encoding and decoding provided by the present disclosure.
  • Figure 8C is one of the schematic flow diagrams of encoding and decoding provided by the present disclosure.
  • Figure 9 is one of the schematic flow diagrams of encoding and decoding provided by the present disclosure.
  • Figure 10 is one of the structural schematic diagrams of the communication device provided by the present disclosure.
  • Figure 11 is one of the structural schematic diagrams of the communication device provided by the present disclosure.
  • in the disclosure below, "at least one (item)" means one (item) or more (items), and "multiple (items)" means two (items) or more (items).
  • "And/or" describes the relationship between associated objects and indicates that three relationships are possible. For example, "A and/or B" can mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the associated objects are in an "or" relationship.
  • although the terms first, second, etc. may be used to describe various objects in this disclosure, these objects are not limited by these terms; the terms are only used to distinguish the objects from each other.
  • the communication system can be a third generation (3G) communication system (such as a universal mobile telecommunication system (UMTS)), a fourth generation (4G) communication system (such as a long term evolution (LTE) system), a fifth generation (5G) communication system, a worldwide interoperability for microwave access (WiMAX) or wireless local area network (WLAN) system, an integration of multiple systems, or a future communication system such as a 6G communication system.
  • the 5G communication system can also be called a new radio (NR) system.
  • One network element in a communication system can send signals to or receive signals from another network element.
  • the signal may include information, signaling or data, etc.
  • the network element can also be replaced by an entity, a network entity, a device, a communication device, a communication module, a node, a communication node, etc.
  • a network element is taken as an example for description.
  • the communication system may include at least one terminal device and at least one access network device.
  • the access network device can send downlink signals to the terminal device, and/or the terminal device can send uplink signals to the access network device.
  • if the communication system includes multiple terminal devices, the terminal devices can also communicate with each other; in that case, both the signal-sending network element and the signal-receiving network element can be terminal devices.
  • FIG. 1 is a simplified schematic diagram of a wireless communication system provided by an embodiment of the present application.
  • the wireless communication system includes a wireless access network 100.
  • the radio access network 100 may be a next-generation (eg, 6G or higher) radio access network, or a legacy (eg, 5G, 4G, 3G or 2G) radio access network.
  • the radio access network 100 includes one or more communication devices 120a to 120j (collectively, communication devices 120).
  • the wireless communication system may also include other equipment, such as core network equipment, wireless relay equipment and/or wireless backhaul equipment, etc., which are not shown in Figure 1 .
  • the wireless communication system may include multiple network devices (also called access network devices) at the same time, or may include multiple communication devices at the same time.
  • a network device can serve one or more communication devices at the same time.
  • a communication device can also access one or more network devices at the same time.
  • the embodiment of the present application does not limit the number of communication devices and network devices included in the wireless communication system.
  • the network device may be an entity on the network side that is used to transmit or receive signals.
  • the network device may be an access device through which the communication device wirelessly accesses the wireless communication system.
  • the network device may be a base station.
  • a base station can broadly cover, or be replaced by, various names such as: Node B (NodeB), evolved NodeB (eNB), next generation NodeB (gNB), open radio access network equipment, relay station, access point, transmission and receiving point (TRP), transmitting point (TP), master station (MeNB), secondary station (SeNB), multi-standard radio (MSR) node, home base station, network controller, access node, wireless node, access point (AP), transmission node, transceiver node, baseband unit (BBU), remote radio unit (RRU), active antenna unit (AAU), remote radio head (RRH), central unit (CU), distributed unit (DU), radio unit (RU), or centralized unit.
  • the base station may be a macro base station, a micro base station, a relay node, a donor node or the like, or a combination thereof.
  • Network equipment may also refer to communication modules, modems or chips provided in the aforementioned equipment or devices.
  • network equipment can also be a mobile switching center, or a device that undertakes the base station function in device-to-device (D2D), vehicle-to-everything (V2X), or machine-to-machine (M2M) communications.
  • Network devices can support networks with the same or different access technologies. The embodiments of this application do not limit the specific technology and specific equipment form used by the network equipment.
  • Network equipment can be fixed or mobile.
  • base stations 110a and 110b are stationary, and each is responsible for wireless transmission to and reception from the communication devices 120 in one or more cells.
  • the helicopter or drone 120i shown in Figure 1 may be configured to act as a mobile base station, and one or more cells may move based on the location of the mobile base station 120i.
  • a helicopter or drone (120i) may be configured to serve as a communications device that communicates with base station 110b.
  • the communication device used to implement the above access network function may be an access network device, a network device with part of the functions of the access network, or a device capable of supporting the implementation of the access network function, such as a chip system, a hardware circuit, a software module, or a hardware circuit plus a software module; this device can be installed in access network equipment or used in conjunction with access network equipment.
  • the communication device used to implement the function of the access network device is an access network device as an example for description.
  • a communication device may be an entity on the user side that is used to receive or transmit signals, such as a mobile phone.
  • Communication devices can be used to connect people, things and machines.
  • the communication device may communicate with one or more core networks through network devices.
  • Communication devices include handheld devices with wireless connectivity, other processing devices connected to wireless modems, or vehicle-mounted devices.
  • the communication device may be a portable, pocket-sized, handheld, computer-built-in, or vehicle-mounted mobile device.
  • the communication device 120 can be widely used in various scenarios, such as cellular communication, device-to-device (D2D), vehicle-to-everything (V2X), peer-to-peer (P2P), machine-to-machine (M2M), machine-type communication (MTC), Internet of things (IoT), virtual reality (VR), augmented reality (AR), industrial control, autonomous driving, telemedicine, smart grid, smart furniture, smart office, smart wearables, smart transportation, smart city, drones, robots, remote sensing, passive sensing, positioning, navigation and tracking, and autonomous delivery and mobility.
  • some examples of communication devices 120 are: 3GPP-standard user equipment (UE), fixed devices, mobile devices, handheld devices, wearable devices, cellular phones, smartphones, session initiation protocol (SIP) phones, laptops, personal computers, smart books, vehicles, satellites, global positioning system (GPS) equipment, target tracking equipment, drones, helicopters, aircraft, ships, remote control equipment, smart home equipment, industrial equipment, personal communication service (PCS) phones, wireless local loop (WLL) stations, personal digital assistants (PDA), wireless network cameras, tablet computers, handheld computers, mobile internet devices (MID), wearable devices such as smart watches, virtual reality (VR) equipment, augmented reality (AR) equipment, wireless terminals in industrial control, terminals in Internet-of-vehicles systems, wireless terminals in self-driving, wireless terminals in smart grid, wireless terminals in transportation safety, wireless terminals in smart city (such as smart fuel dispensers, and terminal equipment on high-speed rail), and wireless terminals in smart homes.
  • the communication device 120 may be a wireless device in the above various scenarios or a device provided in the wireless device, for example, a communication module, a modem or a chip in the above device.
  • Communication equipment can also be called terminal, terminal equipment, user equipment (UE), mobile station (MS), mobile terminal (mobile terminal, MT), etc.
  • the communication device may also be a communication device in a future wireless communication system.
  • Communication equipment can be used in dedicated network equipment or general equipment. The embodiments of this application do not limit the specific technology and specific equipment form used in the communication equipment.
  • the communication device can be used to act as a base station.
  • a UE may act as a scheduling entity that provides sidelink signals between UEs in V2X, D2D or P2P, etc.
  • cell phone 120a and car 120b communicate with each other using sidelink signals.
  • Cell phone 120a and smart home device 120e communicate without relaying communication signals through base station 110b.
  • the communication device used to implement the functions of the communication device may be a terminal device, a device having part of the functions of the above communication device, or a device that can support the realization of the functions of the above communication device, such as a chip system; the device can be installed in a terminal device or used in conjunction with the terminal device.
  • the chip system may be composed of chips, or may include chips and other discrete devices.
  • the communication device is a terminal device or a UE as an example for description.
  • a wireless communication system is usually composed of cells; each cell is managed by a base station, which provides communication services to multiple mobile stations (MS) in the cell.
  • the base station includes a baseband unit (BBU) and a remote radio unit (RRU).
  • the BBU and RRU can be placed in different places.
  • the RRU is remote and placed in a high traffic area, and the BBU is placed in the central computer room.
  • BBU and RRU can also be placed in the same computer room.
  • the BBU and RRU can also be different components under the same rack.
  • a cell may correspond to a carrier or component carrier.
  • the present disclosure can be applied between a network device and a communication device, between a network device and a network device, or between a communication device and a communication device, that is, between a primary device and a secondary device.
  • the primary device can be a network device or a communication device.
  • the secondary device can be another network device or communication device.
  • when the primary device is a communication device, the secondary device can be another communication device.
  • the following description takes the primary device as a network device (such as an access network device) and the secondary device as a communication device (such as a terminal device) as an example.
  • the communication direction corresponding to the downlink is from the primary device to the secondary device
  • the communication direction corresponding to the uplink is from the secondary device to the primary device.
  • Protocol layer structure between access network equipment and terminal equipment
  • the protocol layer structure may include a control plane protocol layer structure and a user plane protocol layer structure.
  • the control plane protocol layer structure may include a radio resource control (RRC) layer, a packet data convergence protocol (PDCP) layer, a radio link control (RLC) layer, a media access control (MAC) layer, and a physical layer.
  • the user plane protocol layer structure may include the functions of protocol layers such as PDCP layer, RLC layer, MAC layer and physical layer.
  • above the PDCP layer, the user plane protocol layer structure may also include a service data adaptation protocol (SDAP) layer.
  • the protocol layer structure between the access network device and the terminal may also include an artificial intelligence (AI) layer for transmitting data related to AI functions.
  • the SDAP layer, PDCP layer, RLC layer, MAC layer, and physical layer can also be collectively referred to as the access layer.
  • according to the transmission direction of data, transmission is divided into sending and receiving, and each layer mentioned above is correspondingly divided into a sending part and a receiving part.
  • after the PDCP layer obtains data from an upper layer, it transmits the data to the RLC layer and the MAC layer; the MAC layer generates a transport block, which is then wirelessly transmitted through the physical layer.
  • Data is encapsulated accordingly in each layer.
  • the data that a layer receives from the layer above it is regarded as the service data unit (SDU) of that layer; after the layer encapsulates the data, it becomes the protocol data unit (PDU) of that layer.
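As a toy illustration of the SDU/PDU relationship described above, the sketch below prepends a placeholder header at each layer on the send side; the layer names are taken from the document, but the header format and byte layout are purely hypothetical.

```python
# Send-side encapsulation order through the user plane protocol layers.
LAYERS = ["SDAP", "PDCP", "RLC", "MAC"]

def encapsulate(app_data: bytes) -> bytes:
    """Each layer treats the data from the layer above as its SDU,
    prepends its own (placeholder) header, and passes the resulting
    PDU to the layer below."""
    pdu = app_data
    for layer in LAYERS:
        header = f"[{layer}]".encode()   # hypothetical header
        pdu = header + pdu               # this layer's PDU
    return pdu

tb = encapsulate(b"ip-packet")
print(tb)   # b'[MAC][RLC][PDCP][SDAP]ip-packet'
```

The receiving side would strip the headers in the reverse order, each layer recovering its SDU and handing it upward.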
  • the terminal device may also have an application layer and a non-access layer.
  • the application layer can be used to provide services to applications installed in the terminal device.
  • the downlink data received by the terminal device can be transmitted upward from the physical layer to the application layer in sequence and then provided to the application program by the application layer; as another example, the application layer can obtain data generated by an application program, transmit the data downward to the physical layer in sequence, and send it to other communication devices.
  • the non-access layer can be used to forward user data, such as forwarding uplink data received from the application layer to the SDAP layer or forwarding downlink data received from the SDAP layer to the application layer.
  • access network equipment can include a centralized unit (central unit, CU) and distributed units (distributed unit, DU); multiple DUs can be centrally controlled by one CU.
  • the interface between the CU and the DU may be called the F1 interface.
  • the control plane (CP) interface can be F1-C
  • the user plane (UP) interface can be F1-U.
  • CU and DU can be divided according to the protocol layers of the wireless network: for example, the functions of the PDCP layer and the layers above it are set in the CU, and the functions of the protocol layers below the PDCP layer (such as the RLC layer and the MAC layer) are set in the DU; as another example, the functions of the protocol layers above the PDCP layer are set in the CU, and the functions of the PDCP layer and the layers below it are set in the DU.
  • the CU or DU may also be divided so as to have the functions of more protocol layers, or to have part of the processing functions of a protocol layer.
  • part of the functions of the RLC layer and the functions of the protocol layer above the RLC layer are set in the CU, and the remaining functions of the RLC layer and the functions of the protocol layer below the RLC layer are set in the DU.
  • the functions of the CU or DU can also be divided according to service types or other system requirements, for example according to delay: functions whose processing time needs to meet the delay requirement are set in the DU, and functions that do not need to meet the delay requirement are set in the CU.
  • the CU may also have one or more functions of the core network.
  • the CU can be set on the network side to facilitate centralized management.
  • the radio unit (RU) of the DU is set remotely, where the RU has the radio frequency function.
  • DU and RU can be divided at the physical layer (PHY).
  • DU can implement high-level functions in the PHY layer
  • RU can implement low-level functions in the PHY layer.
  • for sending, the functions of the PHY layer can include adding cyclic redundancy check (CRC) codes, channel coding, rate matching, scrambling, modulation, layer mapping, precoding, resource mapping, physical antenna mapping, and/or radio frequency transmitting functions.
  • for receiving, the functions of the PHY layer may include CRC checking, channel decoding, rate de-matching, descrambling, demodulation, layer demapping, channel detection, resource demapping, physical antenna demapping, and/or radio frequency receiving functions.
  • the higher-layer functions of the PHY layer may include the part of the PHY functions that is closer to the MAC layer, and the lower-layer functions of the PHY layer may include the other part, which is closer to the radio frequency function.
  • high-level functions in the PHY layer may include adding CRC codes, channel coding, rate matching, scrambling, modulation, and layer mapping
  • low-level functions in the PHY layer may include precoding, resource mapping, physical antenna mapping, and radio frequency transmission.
  • the high-level functions in the PHY layer may include adding CRC codes, channel coding, rate matching, scrambling, modulation, layer mapping, and precoding
  • the low-level functions in the PHY layer may include resource mapping, physical antenna mapping, and radio frequency transmission functions.
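A minimal sketch of such a high/low PHY split, with CRC attachment and scrambling standing in for the high-PHY steps processed in the DU; the CRC-32 from `zlib` is used purely for illustration (NR specifies its own CRC polynomials), and the pipeline structure and scrambling seed are hypothetical.

```python
import zlib

def add_crc(bits: bytes) -> bytes:
    """Append a 4-byte CRC so the receiver can detect corruption."""
    return bits + zlib.crc32(bits).to_bytes(4, "big")

def scramble(bits: bytes, seed: int = 0xC5) -> bytes:
    """Toy scrambler: XOR every byte with a fixed seed."""
    return bytes(b ^ seed for b in bits)

HIGH_PHY = [add_crc, scramble]   # e.g. processed in the DU
LOW_PHY = []                     # precoding / antenna mapping would go here (RU)

payload = b"transport block"
for step in HIGH_PHY + LOW_PHY:
    payload = step(payload)

# Receiver reverses the chain: descramble, then check the CRC.
rx = bytes(b ^ 0xC5 for b in payload)
data, crc = rx[:-4], rx[-4:]
assert zlib.crc32(data).to_bytes(4, "big") == crc
print(data)   # b'transport block'
```

The split point between `HIGH_PHY` and `LOW_PHY` can be moved (for instance, placing precoding on either side), which is exactly the flexibility the alternative divisions above describe.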
  • the functions of the CU may be implemented by one entity, or may be implemented by different entities.
  • the functions of the CU can be further divided, that is, the control plane and the user plane are separated and implemented through different entities, namely the control plane CU entity (i.e., CU-CP entity) and the user plane CU entity (i.e., CU-UP entity).
  • the CU-CP entity and the CU-UP entity can be coupled with the DU to jointly complete the functions of the access network equipment.
  • the signaling generated by the CU can be sent to the terminal device through the DU, or the signaling generated by the terminal device can be sent to the CU through the DU.
  • RRC or PDCP layer signaling will eventually be processed into physical layer signaling and sent to the terminal device, or will be converted from received physical layer signaling.
  • the RRC or PDCP layer signaling can be considered to be sent through DU, or sent through DU and RU.
  • any one of the above DU, CU, CU-CP, CU-UP and RU can be a software module, a hardware structure, or a software module + hardware structure, without limitation.
  • the existence forms of different entities can be different and are not limited.
  • DU, CU, CU-CP, and CU-UP are software modules
  • RU is a hardware structure.
  • the number and type of each device in the communication system shown in Figure 1 are only for illustration, and the present disclosure is not limited thereto.
  • the communication system may also include more terminal devices and more access network devices, and may also include other network elements, such as core network equipment and/or network elements used to implement artificial intelligence functions.
  • the method provided by the present disclosure can be used for communication between access network equipment and terminal equipment, and can also be used for communication between other communication equipment, such as communication between macro base stations and micro base stations in wireless backhaul links.
  • the method can also be used for communication between two terminal devices over a sidelink (SL), without limitation.
  • the access network equipment needs to obtain the channel state information (CSI) of the downlink channel and, based on the CSI, decide configurations such as the resources, the modulation and coding scheme (MCS), and the precoding of the downlink data channel of the terminal equipment.
  • CSI belongs to a kind of channel information, which is a kind of information that can reflect channel characteristics and channel quality.
  • the channel information may also be called channel response.
  • the CSI may be represented by a channel matrix.
  • the CSI may include a channel matrix, or the CSI may be composed of a channel feature vector.
  • in a frequency division duplex (FDD) communication scenario, since the uplink and downlink channels do not have reciprocity, or reciprocity cannot be guaranteed, the access network equipment usually sends downlink reference signals to the terminal equipment.
  • based on the received downlink reference signal, the terminal equipment performs channel measurement and interference measurement to estimate the downlink channel information.
  • the downlink channel information includes CSI, and then feeds back the CSI to the access network equipment.
  • the terminal device can generate a CSI report based on the estimated CSI and feed it back to the access network device in a predefined manner or in a manner configured by the access network device.
  • the downlink reference signal includes channel state information-reference signal (CSI-RS) or synchronizing signal block (synchronizing signal/physical broadcast channel block, SSB).
  • the CSI report includes feedback quantities such as rank indicator (rank indicator, RI), channel quality indicator (channel quality indicator, CQI), and precoding matrix indicator (precoding matrix indicator, PMI).
  • RI is used to indicate the number of downlink transmission layers recommended by the terminal equipment
  • CQI is used to indicate the modulation and coding scheme that the terminal equipment judges the current channel conditions can support
  • PMI is used to indicate the precoding recommended by the terminal equipment
  • the number of precoding layers indicated by the PMI corresponds to the RI. For example, if RI is 1, the PMI indicates one-layer precoding; if RI is N, the PMI indicates N-layer precoding, where N is a positive integer greater than 1. It can be understood that the N-layer precoding corresponds to the N downlink transmission layers indicated by RI. It can be seen that the larger the number of downlink transmission layers indicated by RI, the higher the feedback overhead of the PMI.
  • the PMI indicating layer 1 precoding is referred to as layer 1 PMI
  • the PMI indicating layer N precoding is referred to as layer N PMI.
  • the following uses i 1 and i 2 to introduce the common information and the per-layer independent information in layer 1 to layer 4 PMI.
  • the first row in i 1 and i 2 corresponds to layer 1 PMI
  • the second row corresponds to layer 2 PMI
  • the third row corresponds to layer 3 PMI
  • the fourth row corresponds to layer 4 PMI.
  • i 1,1 , i 1,2 , and i 1,5 are the common information shared among the 4 layers of precoding
  • i 1,1 is used to represent the DFT spatial base selection
  • i 1,2 is used to represent the spatial domain oversampling selection
  • i 1,5 is used to represent the DFT frequency domain basis subset selection.
  • i 1,6,l , i 1,7,l , i 1,8,l , i 2,3,l , i 2,4,l , and i 2,5,l are the independent information of the l-th layer among the 4 layers, where l is a positive integer from 1 to 4.
  • i 1,6,l is used to represent the l-th layer frequency domain base selection
  • i 1,7,l is used to represent the l-th layer non-zero sparse bitmap (bitmap)
  • i 1,8,l is used to indicate the strongest coefficient of the l-th layer
  • i 2,3,l , i 2,4,l , and i 2,5,l are used to represent the amplitude and phase coefficients of the l-th layer.
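The distinction above between common fields (i 1,1 , i 1,2 , i 1,5) and per-layer fields is what makes shared feedback attractive. The sketch below illustrates the overhead saving; the field bit widths are hypothetical, chosen only for illustration and not taken from any specification.

```python
# Sketch: overhead saving from sending common PMI fields once instead of per layer.
# Field names follow the i1/i2 notation above; bit widths are hypothetical.
common_fields = {"i1_1_spatial_basis": 4, "i1_2_oversampling": 2, "i1_5_fd_basis_subset": 5}
per_layer_fields = {"i1_6_fd_basis": 6, "i1_7_bitmap": 14, "i1_8_strongest": 4,
                    "i2_amp_phase": 30}

def overhead(num_layers, share_common):
    common = sum(common_fields.values())
    per_layer = sum(per_layer_fields.values())
    if share_common:
        return common + num_layers * per_layer   # common info sent once
    return num_layers * (common + per_layer)     # common info repeated per layer

saved = overhead(4, share_common=False) - overhead(4, share_common=True)
print(saved)  # bits saved = 3 extra copies of the common fields = 33
```

With 4 layers, independent feedback repeats the common fields three extra times; the saving grows with the RI.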
  • AI artificial intelligence
  • the terminal equipment uses the AI model to compress and feedback the CSI
  • the access network equipment uses the AI model to restore the compressed CSI.
  • What is transmitted in AI-based CSI feedback is a sequence (such as a bit sequence), whose overhead is lower than that of a traditional CSI report.
  • a terminal device when a terminal device performs CSI feedback of multiple downlink transmission layers, it usually treats the CSI of multiple downlink transmission layers as mutually independent information. For each of the multiple downlink transmission layers, the terminal device will independently use the AI model to perform compressed feedback on the CSI of that layer.
  • the access network device receives the compressed CSI corresponding to each of the multiple downlink transmission layers, and then the access network device recovers the compressed CSI corresponding to the multiple downlink transmission layers respectively.
  • This design of independent compression feedback for multi-layer CSI will lead to repeated transmission of some common information between multi-layer CSI (such as DFT spatial base selection, spatial oversampling selection, DFT frequency domain base subset selection, etc.), resulting in resource waste.
  • the present disclosure provides a communication method and device. After the multi-layer CSI is compressed separately, an AI model is then used to compress the common information among the once-compressed multi-layer CSI, thereby reducing the resource waste caused by repeated transmission of the common information and reducing the feedback overhead of CSI.
  • This communication method can be applied to the above communication system, such as FDD communication scenarios.
  • the communication method can also be used in time division duplex (TDD) communication scenarios, which is not limited by the present disclosure.
  • the AI model is the specific implementation of the AI technical function.
  • the AI model represents the mapping relationship between the input and output of the model.
  • the type of AI model can be a neural network, linear regression model, decision tree model, support vector machine (SVM), Bayesian network, Q learning model or other machine learning (ML) model.
  • This disclosure relates to an encoder for compressing CSI and a decoder for restoring compressed CSI.
  • the encoder and decoder need to be used in a matching manner; it can be understood that the encoder and the decoder are matched AI models.
  • an encoder may include one or more AI models, and the decoder matched with the encoder may also include one or more AI models. The matched encoder and decoder include the same number of AI models, with a one-to-one correspondence between them.
  • a matched set of encoder (encoder) and decoder (decoder) can be specifically two parts of the same auto-encoder (auto-encoders, AE).
  • the autoencoder is an unsupervised learning neural network. Its characteristic is that the input data is used as label data. Therefore, the autoencoder can also be understood as a self-supervised learning neural network.
  • Autoencoders can be used for data compression and recovery. For example, the encoder in the autoencoder can compress (encode) data A to obtain data B; the decoder in the autoencoder can decompress (decode) data B to recover data A. Or it can be understood that the decoder is the inverse operation of the encoder.
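A minimal sketch of the compress/recover relationship described above, using a hand-constructed linear encoder/decoder pair rather than a trained neural network (real autoencoders learn these mappings from data):

```python
import numpy as np

# Minimal linear "autoencoder" sketch: the encoder projects data A to a
# shorter code B, and the decoder (the inverse operation) recovers A.
rng = np.random.default_rng(0)
basis, _ = np.linalg.qr(rng.standard_normal((8, 3)))  # 8-dim data, 3-dim code

W_enc = basis.T          # encoder weights: 8 -> 3
W_dec = basis            # decoder weights: 3 -> 8

A = basis @ rng.standard_normal(3)   # data lying in the 3-dim subspace
B = W_enc @ A                        # compress (encode): data A -> data B
A_hat = W_dec @ B                    # decompress (decode): data B -> data A
print(np.allclose(A, A_hat))         # True: A is recovered
```

Here recovery is exact because the data lies in the encoder's subspace; a trained autoencoder instead learns a code that recovers typical inputs approximately.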
  • Neural network is a specific implementation form of AI or machine learning technology. According to the universal approximation theorem, neural networks can theoretically approximate any continuous function, which enables neural networks to have the ability to learn arbitrary mappings.
  • each neuron performs a weighted sum operation on its input values and outputs the operation result through an activation function.
  • a bias, for example b, is added to the weighted summation of the input values. Activation functions take many forms.
  • the activation functions of different neurons in a neural network can be the same or different.
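The weighted-sum-plus-bias-then-activation behavior of a single neuron described above can be sketched as follows (ReLU is used here as one of the many possible activation forms):

```python
import numpy as np

# A single neuron: weighted sum of inputs plus a bias b, passed through
# an activation function (here ReLU; many other forms exist).
def neuron(x, w, b, activation=lambda z: np.maximum(z, 0.0)):
    return activation(np.dot(w, x) + b)

x = np.array([1.0, -2.0, 0.5])   # input values
w = np.array([0.4, 0.3, -0.2])   # weights
b = 0.1                          # bias
print(neuron(x, w, b))  # relu(-0.3 + 0.1) = relu(-0.2) = 0.0
```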
  • Neural networks generally include multiple layers, and each layer may include one or more neurons. By increasing the depth and/or width of a neural network, the expressive ability of the neural network can be improved, providing more powerful information extraction and abstract modeling capabilities for complex systems.
  • the depth of the neural network may refer to the number of layers included in the neural network, and the number of neurons included in each layer may be called the width of the layer.
  • the neural network includes an input layer and an output layer. The input layer of the neural network processes the received input information through neurons, and passes the processing results to the output layer, which obtains the output results of the neural network.
  • the neural network includes an input layer, a hidden layer and an output layer, as shown in Figure 2B.
  • the input layer of the neural network processes the received input information through neurons and passes the processing results to the intermediate hidden layer.
  • the hidden layer calculates the received processing results and obtains the calculation results.
  • the hidden layer passes the calculation results to the output layer.
  • a neural network may include one hidden layer, or multiple hidden layers connected in sequence, without limitation.
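The input → hidden → output flow described above can be sketched as a forward pass with one hidden layer; the widths and weights below are arbitrary illustrations:

```python
import numpy as np

# Forward pass of an input -> hidden -> output network. Each weight
# matrix's row count is the width of the layer it feeds.
def forward(x, params):
    h = np.tanh(params["W1"] @ x + params["b1"])   # input layer -> hidden layer
    y = params["W2"] @ h + params["b2"]            # hidden layer -> output layer
    return y

rng = np.random.default_rng(1)
params = {"W1": rng.standard_normal((5, 3)), "b1": np.zeros(5),   # hidden width 5
          "W2": rng.standard_normal((2, 5)), "b2": np.zeros(2)}   # output width 2
y = forward(np.array([0.1, -0.2, 0.3]), params)
print(y.shape)  # (2,)
```

Stacking more hidden layers in `forward` increases the depth; enlarging `W1`'s row count increases the width, both of which raise expressive ability as noted above.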
  • the AI model involved in this disclosure can be a deep neural network (DNN).
  • DNN can include feedforward neural networks (FNN), convolutional neural networks (CNN) and recurrent neural networks (RNN).
  • the training data set is used for training the AI model.
  • the training data set can include the input of the AI model, or include the input and target output of the AI model.
  • the training data set includes one or more training data.
  • the training data may be training samples input to the AI model, or may be the target output of the AI model. Among them, the target output can also be called a label or a label sample.
  • the training data set is one of the important parts of machine learning. Model training essentially learns features from the training data so that the output of the AI model is as close as possible to the target output, for example, so that the difference between the output of the AI model and the target output is as small as possible.
  • the composition and selection of training data sets can, to a certain extent, determine the performance of the trained AI model.
  • a loss function can be defined during the training process of AI models (such as neural networks).
  • the loss function describes the gap or difference between the output value of the AI model and the target output value. This disclosure does not limit the specific form of the loss function.
  • the training process of the AI model is the process of adjusting the model parameters of the AI model so that the value of the loss function is less than the threshold, or the value of the loss function meets the target requirements.
  • the AI model is a neural network, and adjusting the model parameters of the neural network includes adjusting at least one of the following parameters: the number of layers of the neural network, the width, the weight of the neuron, or the parameters in the activation function of the neuron.
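The training loop described above (adjust parameters until the loss function falls below a threshold) can be sketched with a one-parameter linear model standing in for a neural network; the data, learning rate, and threshold are illustrative choices:

```python
import numpy as np

# Training sketch: adjust the model parameter by gradient descent until the
# loss (MSE between model output and target output) falls below a threshold.
rng = np.random.default_rng(2)
x = rng.standard_normal(100)
target = 3.0 * x              # target outputs (labels)

w = 0.0                       # model parameter to adjust
lr = 0.1                      # learning rate
for step in range(200):
    out = w * x
    loss = np.mean((out - target) ** 2)        # loss function value
    if loss < 1e-6:                            # stop when below threshold
        break
    grad = np.mean(2 * (out - target) * x)     # d(loss)/dw
    w -= lr * grad                             # parameter update
print(round(w, 2))  # -> 3.0
```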
  • Inference data can be used as input to the trained AI model for inference of the AI model.
  • the inference data is input into the AI model, and the corresponding output can be obtained, which is the inference result.
  • the design of AI models mainly includes a data collection link (such as collecting training data and/or inference data), a model training link, and a model inference link, and may further include a link for applying the inference results.
  • Figure 2C illustrates an AI application framework.
  • data sources are used to provide training data sets and inference data.
  • the AI model is obtained by analyzing or training the training data (training data) provided by the data source.
  • the AI model represents the mapping relationship between the input and output of the model. Learning an AI model through model training nodes is equivalent to using training data to learn the mapping relationship between the input and output of the model.
  • the AI model trained through the model training link is used to perform inference based on the inference data provided by the data source to obtain the inference results.
  • This link can also be understood as: input the inference data into the AI model, obtain the output through the AI model, and the output is the inference result.
  • the inference result may indicate: configuration parameters used (executed) by the execution object, and/or operations performed by the execution object.
  • the inference results are released in the inference result application link.
  • the inference results can be planned uniformly by the execution (actor) entity.
  • the execution entity can send the inference results to one or more execution objects (for example, core network equipment, access network equipment, or terminal equipment) for execution.
  • the execution entity can also feed back the performance of the model to the data source to facilitate the subsequent update and training of the model.
  • network elements with artificial intelligence functions may be included in the communication system.
  • the above steps related to AI model design can be executed by one or more network elements with artificial intelligence functions.
  • AI functions (such as AI modules or AI entities) can be configured in existing network elements in the communication system to implement AI-related operations, such as AI model training and/or inference.
  • the existing network element may be access network equipment (such as gNB), terminal equipment, core network equipment, or network management, etc.
  • the network management can divide network management work into three categories according to the actual needs of the operator's network operation: operation, administration, and maintenance.
  • Network management can also be called operation administration and maintenance (OAM) network element, referred to as OAM.
  • operation mainly completes the analysis, prediction, planning, and configuration of daily networks and services; maintenance mainly involves daily operational activities such as testing and fault management of the network and its services.
  • the network manager can detect the network operating status, optimize network connections and performance, improve network operation stability, and reduce network maintenance costs.
  • independent network elements can also be introduced into the communication system to perform AI-related operations, such as training AI models.
  • the independent network element may be called an AI network element or an AI node, and this disclosure does not limit this name.
  • the AI network element can be directly connected to the access network equipment in the communication system, or indirectly connected through a third-party network element and the access network equipment.
  • the third-party network element can be a core network element such as an authentication management function (AMF) network element or a user plane function (UPF) network element, an OAM, a cloud server, or another network element, without limitation.
  • a model can infer one parameter or multiple parameters.
  • the training processes of different models can be deployed in different devices or nodes, or they can be deployed in the same device or node.
  • the inference processes of different models can be deployed in different devices or nodes, or can be deployed in the same device or node.
  • the terminal device can train the supporting encoder and decoder and then send the model parameters of the decoder to the access network device.
  • the access network equipment can indicate the model parameters of the encoder to the terminal equipment after training the supporting encoder and decoder.
  • the AI network element can train the matched encoder and decoder, then send the model parameters of the encoder to the terminal device and the model parameters of the decoder to the access network device. Then, the model inference step corresponding to the encoder is performed in the terminal device, and the model inference step corresponding to the decoder is performed in the access network device.
  • the model parameters may include one or more of the following: structural parameters of the model (such as the number of layers and/or the weights of the model), input parameters of the model (such as input dimensions and the number of input ports), or output parameters of the model (such as output dimensions and the number of output ports).
  • the input dimension may refer to the size of an input data.
  • the input dimension corresponding to the sequence may indicate the length of the sequence.
  • the number of input ports may refer to the amount of input data.
  • the output dimension can refer to the size of an output data.
  • the output dimension corresponding to the sequence can indicate the length of the sequence.
  • the number of output ports can refer to the amount of output data.
  • the present disclosure also provides several network architectures, as shown in Figures 4A to 4D, which take model training and/or inference in access network equipment as an example and divide the functional modules that perform model training and/or inference in the access network equipment.
  • the access network equipment includes a near-real-time RAN intelligent controller (RIC) module for model learning and/or inference.
  • the near real-time RIC may obtain network side and/or terminal side information from at least one of CU, DU and RU, and the information may include training data or inference data.
  • near-real-time RIC can be used for model training, and the trained model can also be used for inference.
  • the near real-time RIC may submit the inference results to at least one of CU, DU and RU.
  • CU and DU can exchange inference results.
  • inference results can be exchanged between DU and RU.
  • near-real-time RIC submits inference results to DU, and DU submits them to RU.
  • a non-real-time RIC can be included outside the access network equipment.
  • the non-real-time RIC can be located in the OAM or in core network equipment.
  • This non-real-time RIC is used for model learning and inference.
  • the non-real-time RIC may obtain network-side and/or terminal-side information from at least one of CU, DU, and RU, and the information may include training data or inference data.
  • non-real-time RIC is used for model training, and the trained model can also be used for inference.
  • the non-real-time RIC may submit the inference result to at least one of CU, DU and RU.
  • CU and DU can exchange inference results.
  • inference results can be exchanged between DU and RU.
  • non-real-time RIC submits inference results to DU, and DU submits them to RU.
  • the access network equipment includes near-real-time RIC, and the access network equipment also includes non-real-time RIC.
  • the non-real-time RIC can be located in the OAM or core network equipment.
  • non-real-time RIC can be used for model training.
  • Near-real-time RIC can obtain the model parameters of the trained AI model from the non-real-time RIC, obtain network-side and/or terminal-side information from at least one of CU, DU, and RU, and use this information and the trained AI model to obtain inference results.
  • the near real-time RIC can also submit the inference results to at least one of CU, DU and RU.
  • CU and DU can exchange inference results.
  • DU and RU can exchange inference results; for example, the near-real-time RIC submits the inference results to the DU, and the DU submits them to the RU.
  • near-real-time RIC is used for model training and inference using the trained model, and non-real-time RIC does not participate in the training or inference of the model; or, non-real-time RIC is used for model training and inference using the trained model, and near-real-time RIC does not participate in the training or inference of the model.
  • near-real-time RIC is used to train the model and send the model parameters of the trained AI model to the non-real-time RIC, and the non-real-time RIC uses the trained model for inference.
  • FIG. 4B shows an example diagram of a network architecture to which the method provided by the present disclosure can be applied. Compared with (c) in Figure 4A, the CU is separated into CU-CP and CU-UP in Figure 4B.
  • Figure 4C shows an example diagram of a network architecture to which the method provided by the present disclosure can be applied.
  • the access network equipment includes one or more AI entities, and the function of the AI entity is similar to the above-mentioned near real-time RIC.
  • OAM includes one or more AI entities whose functions are similar to the above-mentioned non-real-time RIC.
  • the core network equipment includes one or more AI entities whose functions are similar to the above-mentioned non-real-time RIC.
  • differences in models may include at least one of the following differences: structural parameters of the model (such as the number of layers and/or weights of the model, etc.), input parameters of the model, or output parameters of the model.
  • Figure 4D shows an example diagram of a network architecture to which the method provided by the present disclosure can be applied.
  • the access network equipment in Figure 4D is separated into CU and DU.
  • the CU may include an AI entity whose functions are similar to the above-mentioned near real-time RIC.
  • the DU may include an AI entity whose functions are similar to the above-mentioned near real-time RIC.
  • both CU and DU include AI entities, the models trained by their respective AI entities are different, and/or the models used for inference are different.
  • the CU in Figure 4D can be further divided into CU-CP and CU-UP.
  • one or more AI models can be deployed in CU-CP.
  • one or more AI models may be deployed in CU-UP.
  • the OAM of the access network device and the OAM of the core network device can be deployed independently.
  • the present disclosure involves two stages of CSI compression.
  • the terminal device uses a first encoder to separately compress multi-layer CSI.
  • the terminal device uses a second encoder to compress the results of the first stage.
  • the access network equipment side also requires two stages of decompression. For example, the access network equipment first uses the second decoder matched with the second encoder to decompress, and then uses the first decoder matched with the first encoder to decompress the output of the second decoder.
  • the first encoder and the first decoder are used in a matching manner, and the two are trained together. It can be understood that the number of AI models included in the first encoder is the same as the number of AI models included in the first decoder, and each AI model in the first encoder has a corresponding AI model in the first decoder.
  • the training data set used to train the first encoder and the first decoder includes training samples and sample labels, where the training samples are the channel information of the downlink transmission layer determined by the terminal device, and the sample labels are the real channel information of the downlink transmission layer. For the case where the first encoder and the first decoder belong to the same autoencoder, the training data only includes training samples, or the training samples are sample labels, that is, the channel information of the downlink transmission layer.
  • the first encoder and the first decoder may be for a specific downlink transmission layer. For example, channel information of multiple first downlink transmission layers is used to train a pair of first encoders and first decoders. The pair of first encoders and the first decoder is dedicated to processing channel information of the first downlink transmission layer. Another pair of first encoders and first decoders is trained using channel information of a plurality of second downlink transmission layers, and the pair of first encoders and first decoders is dedicated to processing channel information of the second downlink transmission layer.
  • the first encoder and the first decoder may be common to multiple downlink transmission layers, for example, using channel information of multiple first downlink transmission layers and channel information of the second downlink transmission layer to train a pair of first encoders and a first decoder.
  • the pair of the first encoder and the first decoder can be used for processing channel information of the first downlink transmission layer and can also be used for processing channel information of the second downlink transmission layer.
  • the specific training process is as follows: the model training node uses the first encoder to process the channel information of a downlink transmission layer (that is, the training sample) to obtain the first channel status indication information corresponding to the downlink transmission layer, and uses the first decoder to process the first channel status indication information to obtain the restored channel information of the downlink transmission layer.
  • the sequence length of the first channel status indication information may be the first target length.
  • the difference between the restored channel information of the downlink transmission layer and the corresponding sample label, that is, the value of the loss function, is calculated, and the parameters of the first encoder and the first decoder are updated according to the value of the loss function, so that the difference between the restored channel information of the downlink transmission layer and the corresponding sample label is minimized, that is, the loss function is minimized.
  • the loss function may be mean square error (MSE) or cosine similarity.
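The two loss choices mentioned above can be sketched for a true channel vector h and its reconstruction h_hat; for cosine similarity the training objective is typically to maximize it (or minimize 1 minus it):

```python
import numpy as np

# MSE and cosine similarity between a channel vector h and its
# reconstruction h_hat (complex-valued, as channel coefficients are).
def mse(h, h_hat):
    return np.mean(np.abs(h - h_hat) ** 2)

def cosine_similarity(h, h_hat):
    num = np.abs(np.vdot(h, h_hat))  # |h^H h_hat|
    return num / (np.linalg.norm(h) * np.linalg.norm(h_hat))

h = np.array([1 + 1j, 0.5 - 0.2j, -0.3 + 0.7j])
print(mse(h, h))                           # 0.0 for a perfect reconstruction
print(round(cosine_similarity(h, h), 6))   # 1.0 for a perfect reconstruction
```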
  • the above-mentioned model training node can be a terminal device, an access network device, or another network element with AI functions in the communication system. Furthermore, if the model training node is a terminal device, the model parameters of the first decoder can be sent to the access network device; if the model training node is an access network device, the model parameters of the first encoder can be sent to the terminal device; or if the model training node is a network element other than the terminal device and the access network device, the model parameters of the first encoder can be sent to the terminal device, and the model parameters of the first decoder can be sent to the access network device.
  • the second encoder and the second decoder are used in a matching manner, and the two are trained together. It can be understood that the number of AI models included in the second encoder is the same as the number of AI models included in the second decoder, and each AI model in the second encoder has a corresponding AI model in the second decoder. Since the second encoder compresses the result of the first encoder, the second encoder and the second decoder cannot be used independently of the first encoder and the first decoder; that is, the second encoder and the second decoder need to be trained together with the first encoder and the first decoder. In one possible design, the second encoder and the second decoder belong to the same autoencoder.
  • the second encoder and the second decoder can be trained together with the trained first encoder and first decoder, or together with the untrained first encoder and first decoder. When training the second encoder and the second decoder, the first encoder and the first decoder can be fixed so that only the second encoder and the second decoder are trained, or the second encoder, the second decoder, the first encoder, and the first decoder can be trained jointly.
  • the specific training process is as follows: the model training node uses the first encoder to process the channel information of N downlink transmission layers to obtain N pieces of first channel status indication information, uses the second encoder to process the N pieces of first channel status indication information to obtain the second channel status indication information, uses the second decoder to process the second channel status indication information to obtain the restored N pieces of first channel status indication information, and uses the first decoder to process the restored N pieces of first channel status indication information to obtain the restored channel information of the N downlink transmission layers.
  • the sequence length of the second channel status indication information may be the second target length. Optionally, there may be one or more second target lengths.
  • the loss function may be mean square error (MSE) or cosine similarity.
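The two-stage compress/decompress flow above can be sketched end to end. Untrained random linear maps stand in for the trained AI models, and all dimensions (N=4 layers, per-layer CSI length 32, first-stage code length 8, second-stage code length 20) are hypothetical; the point is only the ordering of the stages and that the joint second-stage code (20) is shorter than the concatenated first-stage codes (4×8=32).

```python
import numpy as np

rng = np.random.default_rng(3)
N, csi_len, code1_len, code2_len = 4, 32, 8, 20

enc1 = rng.standard_normal((code1_len, csi_len))        # first encoder (stand-in)
enc2 = rng.standard_normal((code2_len, N * code1_len))  # second encoder (stand-in)
dec2 = np.linalg.pinv(enc2)                             # second decoder (stand-in)
dec1 = np.linalg.pinv(enc1)                             # first decoder (stand-in)

csi = [rng.standard_normal(csi_len) for _ in range(N)]  # channel info of N layers

# Terminal side: stage 1 compresses each layer, stage 2 compresses jointly.
first_indications = [enc1 @ h for h in csi]             # N first indications
second_indication = enc2 @ np.concatenate(first_indications)

# Network side: stage 2 then stage 1 decompression, in reverse order.
recovered_first = (dec2 @ second_indication).reshape(N, code1_len)
recovered_csi = [dec1 @ c for c in recovered_first]
print(len(recovered_csi), recovered_csi[0].shape)  # 4 (32,)
```

With trained models, the second stage would be lossy only in the redundant (common) part of the N first indications, which is what makes the shorter joint code viable.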
  • the model training node can be a terminal device, an access network device, or another network element with AI functions in the communication system. Furthermore, if the model training node is a terminal device, the model parameters of the second decoder can be sent to the access network device; if the model training node is an access network device, the model parameters of the second encoder can be sent to the terminal device; or if the model training node is a network element other than the terminal device and the access network device, the model parameters of the second encoder can be sent to the terminal device, and the model parameters of the second decoder can be sent to the access network device.
  • the method mainly includes the following processes.
  • S501 The terminal device determines channel information of N downlink transmission layers.
  • the terminal device can estimate the downlink channel information by measuring the downlink reference signal from the access network device, and determine the number N of downlink transmission layers recommended by the terminal device.
  • the downlink channel information may include channel state information CSI of the downlink channel, and the downlink channel information may also be called a downlink channel response.
  • the terminal device can divide the downlink channel information into channel information (or channel characteristics) of N downlink transmission layers, where N is a positive integer.
  • S502: The terminal device uses the first encoder and the channel information of the N downlink transmission layers input to the first encoder to determine N pieces of first channel status indication information.
  • the channel information of the i-th downlink transmission layer among the N downlink transmission layers corresponds to the i-th first channel status indication information among the N first channel status indication information.
  • the i-th first channel status indication information can be understood as a sequence, matrix or high-dimensional tensor, used to indicate the channel information of the i-th downlink transmission layer.
  • the first encoder has compression and quantization functions, and the sequence corresponding to the i-th first channel status indication information may specifically be a bit sequence.
  • the first encoder has a compression function but does not have a quantization function, and the sequence corresponding to the i-th first channel status indication information may specifically be a floating-point sequence.
  • the first encoder can be trained.
  • the structure of the first encoder can be understood with reference to any one of the following modes 11, 12, or 13.
  • the first encoder includes a first AI model.
  • the terminal device can use the first AI model to process channel information of N downlink transmission layers to obtain N first channel status indication information.
  • channel information of N downlink transmission layers is input to the first AI model, and correspondingly, the output of the first encoder includes N pieces of first channel status indication information.
  • the terminal device may input the channel information of N downlink transmission layers into the first AI model in parallel or serially, which is not limited by this disclosure.
  • Different first encoders (that is, different first AI models) can be set for different numbers of downlink transmission layers.
  • That is, the first AI model included in the first encoder is associated with the value of N, and different values of N are associated with different first AI models.
  • For example, when N is 2, the first encoder is recorded as encoder 21 or the encoder in AE model 21; when N is 3, the first encoder is recorded as encoder 31 or the encoder in AE model 31.
  • Such a design can enhance the flexibility of AI model application.
  • Alternatively, different numbers of downlink transmission layers can be configured to use the same first encoder or the encoder in the same AE model. Such a design can reduce the storage overhead of AI models.
  • Alternatively, a first AI model can be trained that takes as input the downlink channel information estimated by measuring the downlink reference signal and outputs N pieces of first channel status indication information.
  • In that case, after the terminal device estimates the downlink channel information by measuring the downlink reference signal, it can directly use the first AI model to process the downlink channel information to obtain N pieces of first channel status indication information.
  • The first encoder may include N first AI models, and the i-th first AI model in the first encoder is used to process the channel information of the i-th downlink transmission layer among the N downlink transmission layers to obtain the i-th first channel status indication information among the N pieces of first channel status indication information.
  • the channel information of the N downlink transmission layers is input into the N first AI models respectively.
  • The input of the i-th first AI model among the N first AI models includes the channel information of the i-th downlink transmission layer among the N downlink transmission layers, and the output of the i-th first AI model includes the i-th first channel status indication information corresponding to the i-th downlink transmission layer.
  • The N first AI models in the first encoder can be identical; or the model type of the N first AI models (such as the neural network used) can be the same while the model parameters differ; or the model types and model parameters of the N first AI models can all differ. This disclosure does not limit this.
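A toy sketch of Mode 12, in which the N first AI models share a model type but carry their own parameters; the halving "compression" and the `scale` parameter are hypothetical stand-ins for a trained network, not the actual models:

```python
def make_encoder(scale):
    # Stand-in for one first AI model: same "type" for every layer, but a
    # per-model parameter `scale` (mimicking identical architecture, different weights).
    def encode(layer_info):
        compressed = layer_info[: len(layer_info) // 2]   # toy compression
        return [scale * x for x in compressed]
    return encode

N = 4
encoders = [make_encoder(s) for s in (1.0, 0.5, 2.0, 1.5)]
layers = [[float(i + j) for j in range(8)] for i in range(N)]
# the i-th first AI model processes the i-th layer's channel information
first_infos = [encoders[i](layers[i]) for i in range(N)]
assert len(first_infos) == N and all(len(info) == 4 for info in first_infos)
```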
  • the first encoder includes M first AI models, where M is an integer greater than 1 and less than N.
  • One first AI model can be used to process the channel information of part (one or more) of the N downlink transmission layers; in other words, the channel information of part of the N downlink transmission layers may be processed by the same first AI model, with that part of the channel information input serially or in parallel to one of the M first AI models.
  • the structure of the first encoder can be pre-configured to be one of Mode 11, Mode 12 or Mode 13.
  • The preconfiguration can be implemented through protocol definition or other agreed methods, or through negotiation between the terminal device and the access network device.
  • The terminal device can also determine the structure of the first encoder by itself according to one of Mode 11, Mode 12, or Mode 13. Further optionally, the terminal device can send first indication information to the access network device.
  • the first indication information is used to indicate the structure of the first encoder.
  • the first indication information is specifically used to indicate that the structure of the first encoder is mode 11, mode 12 or mode 13.
  • The mapping relationship between different values of N and the structure of the first encoder can also be preconfigured; in other words, the structure of the first encoder corresponds to the value of N.
  • The model type and model parameters of each first AI model in the first encoder can also be preconfigured; or the terminal device can report to the access network device the identification, model type, or model parameters of the first AI model used by the terminal device in the first encoder. For example, when the first indication information indicates Mode 11, it may also specifically indicate the identification, model type, or model parameters of the first AI model in the first encoder in Mode 11.
  • If the aforementioned number of downlink transmission layers N is equal to 1, the terminal device can send the first channel status indication information to the access network device directly; or, if N is equal to 1 and the first encoder only has a compression function, the terminal device can quantize the first channel status indication information and then send it to the access network device. If the number of downlink transmission layers N is greater than 1, the terminal device further performs the following S503 and S504 after completing S502:
  • S503: The terminal device uses the second encoder and the N pieces of first channel status indication information input to the second encoder to determine the second channel status indication information, where the second channel status indication information corresponds to the N downlink transmission layers.
  • the terminal device may use the second encoder to process the N pieces of first channel status indication information to obtain the second channel status indication information.
  • the input of the second encoder includes N pieces of first channel status indication information
  • the output includes second channel status indication information.
  • the second channel status indication information obtained in S503 is a compression of the N pieces of first channel status indication information obtained in S502.
  • the second channel status indication information can also be understood as a sequence, and the sequence length corresponding to the second channel status indication information is less than the sum of the sequence lengths corresponding to N pieces of first channel status indication information.
  • In this Solution 1, the second encoder has compression and quantization functions, and its processing of the N pieces of first channel status indication information includes compression and quantization.
  • the sequence corresponding to the second channel status indication information can specifically be a bit sequence.
  • the second encoder can be trained.
  • the structure of the second encoder can refer to any one of the following modes 21, 22, or 23.
  • the second encoder includes a second AI model.
  • the terminal device can use the second AI model to process (such as compress and quantize) the N pieces of first channel status indication information to obtain the second channel status indication information. It can be understood that the input of the second AI model includes N pieces of first channel status indication information, and the output of the second AI model includes the second channel status indication information.
  • Figure 6A illustrates a schematic diagram of the encoding process.
  • Taking the case where the first encoder in S502 includes four first AI models (the first AI model 11, the first AI model 12, the first AI model 13, and the first AI model 14) and the second encoder includes one second AI model as an example, Figure 6A illustrates the process in which the terminal device uses the first encoder and the second encoder to twice compress and quantize the channel information of the four downlink transmission layers to obtain the second channel status indication information.
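The Figure 6A pipeline can be caricatured as follows; the per-layer halving and sign-based one-bit quantization are hypothetical stand-ins for the trained first and second AI models:

```python
def first_model(layer_info):
    # stand-in for a per-layer first AI model: keep the first half of the sequence
    return layer_info[: len(layer_info) // 2]

def second_model(first_infos):
    # stand-in for the single second AI model: joint compression followed by
    # 1-bit quantization, so the output is a bit sequence
    merged = [x for info in first_infos for x in info]
    compressed = merged[: len(merged) // 2]
    return [1 if x >= 0 else 0 for x in compressed]

layers = [[float(v - 4) for v in range(8)] for _ in range(4)]  # 4 downlink layers
second_info = second_model([first_model(layer) for layer in layers])
assert len(second_info) == 8 and set(second_info) <= {0, 1}
```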
  • different second encoders can be set for different downlink transmission layer numbers.
  • a second AI model included in the second encoder is associated with the value of N, and different values of N are associated with different second AI models.
  • For example, when N is 2, the second encoder is recorded as encoder 22 or the encoder in AE model 22; when N is 3, the second encoder is recorded as encoder 32 or the encoder in AE model 32.
  • Such a design can enhance the flexibility of AI model application.
  • Alternatively, different numbers of downlink transmission layers can be configured to use the same second encoder or the encoder in the same AE model. Such a design can reduce AI model storage overhead.
  • the second encoder may include multiple second AI models, and the number of second AI models in the second encoder is related to the number N of downlink transmission layers.
  • the second encoder includes N-1 second AI models.
  • The input of the first second AI model among the N-1 second AI models includes the first first channel status indication information and the second first channel status indication information among the N pieces, and the output of the first second AI model includes the first fourth channel status indication information.
  • The input of the i-th second AI model includes the (i-1)-th fourth channel status indication information output by the (i-1)-th second AI model and the (i+1)-th first channel status indication information among the N pieces, and the output of the i-th second AI model includes the i-th fourth channel status indication information, where i is a positive integer with 2 ≤ i ≤ N-1, or equivalently i ranges over the positive integers from 2 to N-1.
  • the N-1th fourth channel status indication information output by the N-1th second AI model is the second channel status indication information.
  • Figure 6B shows a schematic diagram of the encoding process.
  • Taking the case where the first encoder in S502 includes four first AI models (the first AI model 11, the first AI model 12, the first AI model 13, and the first AI model 14) and the second encoder in S503 includes three second AI models (the second AI model 21, the second AI model 22, and the second AI model 23) as an example, it shows the process in which the terminal device uses the first encoder and the second encoder to twice compress and quantize the channel information of the four downlink transmission layers to obtain the second channel status indication information.
  • The input of the first AI model 11 includes the channel information of downlink transmission layer 1, and the output of the first AI model 11 includes the first first channel status indication information; the input of the first AI model 12 includes the channel information of downlink transmission layer 2, and the output of the first AI model 12 includes the second first channel status indication information; the input of the first AI model 13 includes the channel information of downlink transmission layer 3, and the output of the first AI model 13 includes the third first channel status indication information; the input of the first AI model 14 includes the channel information of downlink transmission layer 4, and the output of the first AI model 14 includes the fourth first channel status indication information.
  • The input of the second AI model 21 includes the first first channel status indication information and the second first channel status indication information, and the output of the second AI model 21 includes the first fourth channel status indication information; the input of the second AI model 22 includes the first fourth channel status indication information and the third first channel status indication information, and the output of the second AI model 22 includes the second fourth channel status indication information; the input of the second AI model 23 includes the second fourth channel status indication information and the fourth first channel status indication information, and the output of the second AI model 23 includes the third fourth channel status indication information. The third fourth channel status indication information output by the second AI model 23 is the second channel status indication information.
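The Mode 22 chain in Figure 6B can be sketched as below; `pairwise_compress` (concatenate two sequences and keep the first half) is a hypothetical stand-in for each trained second AI model:

```python
def pairwise_compress(a, b):
    # stand-in for one second AI model: merge two sequences, halve the result
    merged = a + b
    return merged[: len(merged) // 2]

def second_encoder_chain(first_infos):
    # model 1 compresses the first two pieces of first channel status indication info
    fourth = pairwise_compress(first_infos[0], first_infos[1])
    # model i (2 <= i <= N-1) compresses the previous fourth info with the (i+1)-th first info
    for nxt in first_infos[2:]:
        fourth = pairwise_compress(fourth, nxt)
    # the (N-1)-th fourth info is the second channel status indication info
    return fourth

infos = [[1.0] * 8 for _ in range(4)]       # N = 4 layers of first info
assert len(second_encoder_chain(infos)) == 8
```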
  • the second encoder may include K second AI models.
  • K is a positive integer greater than 1 and less than N-1; for example, K is 2. It can be understood that part of the first channel status indication information among the N pieces is input in parallel to one of the K second artificial intelligence AI models. The input and output of the K second AI models are described in detail below.
  • For example, when N is 4, K may be 2, and the second encoder may include two second AI models, denoted as the second AI model 21 and the second AI model 22.
  • The input of the second AI model 21 includes the first N-1 pieces of first channel status indication information among the four pieces, and the output of the second AI model 21 includes the first fourth channel status indication information. The input of the second AI model 22 includes the N-th first channel status indication information among the N pieces and the first fourth channel status indication information, and the output of the second AI model 22 includes the second fourth channel status indication information. The second fourth channel status indication information output by the second AI model 22 is the second channel status indication information.
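A toy sketch of this K = 2 grouping; `group_compress` (concatenate and halve) is a hypothetical stand-in for each trained second AI model:

```python
def group_compress(infos):
    # stand-in for one second AI model acting on a group of sequences
    merged = [x for info in infos for x in info]
    return merged[: len(merged) // 2]

def second_encoder_k2(first_infos):
    # second AI model 21: the first N-1 pieces of first info -> 1st fourth info
    fourth_1 = group_compress(first_infos[:-1])
    # second AI model 22: 1st fourth info + N-th first info -> second channel status info
    return group_compress([fourth_1, first_infos[-1]])

out = second_encoder_k2([[1.0] * 8 for _ in range(4)])   # N = 4
assert len(out) == 10
```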
  • Figure 6C shows a schematic diagram of the encoding process.
  • Taking the case where the first encoder in S502 includes four first AI models (the first AI model 11, the first AI model 12, the first AI model 13, and the first AI model 14) and the second encoder in S503 includes two second AI models (the second AI model 21 and the second AI model 22) as an example, it shows the process in which the terminal device uses the first encoder and the second encoder to twice compress and quantize the channel information of the four downlink transmission layers to obtain the second channel status indication information.
  • The input of the first AI model 11 includes the channel information of downlink transmission layer 1, and the output of the first AI model 11 includes the first first channel status indication information; the input of the first AI model 12 includes the channel information of downlink transmission layer 2, and the output of the first AI model 12 includes the second first channel status indication information; the input of the first AI model 13 includes the channel information of downlink transmission layer 3, and the output of the first AI model 13 includes the third first channel status indication information; the input of the first AI model 14 includes the channel information of downlink transmission layer 4, and the output of the first AI model 14 includes the fourth first channel status indication information.
  • The input of the second AI model 21 includes the first first channel status indication information, the second first channel status indication information, and the third first channel status indication information, and the output of the second AI model 21 includes the first fourth channel status indication information; the input of the second AI model 22 includes the first fourth channel status indication information and the fourth first channel status indication information, and the output of the second AI model 22 includes the second fourth channel status indication information. The second fourth channel status indication information output by the second AI model 22 is the second channel status indication information.
  • Alternatively, the input of the second AI model 21 includes the first two pieces of first channel status indication information among the N pieces, that is, the first first channel status indication information and the second first channel status indication information, and the output of the second AI model 21 includes the first fourth channel status indication information. The input of the second AI model 22 includes the first fourth channel status indication information and the remaining first channel status indication information among the N pieces other than the first two; for example, when N is 4, the remaining first channel status indication information includes the third first channel status indication information and the fourth first channel status indication information. The output of the second AI model 22 includes the second fourth channel status indication information, and the second fourth channel status indication information output by the second AI model 22 is the second channel status indication information.
  • For another example, when N is 5, K may be 2 or 3; that is, the second encoder may include two or three second AI models.
  • the case where the second encoder includes two second AI models can be understood with reference to the example in which N is 4.
  • The case where the second encoder includes three second AI models can be understood as follows:
  • the three second AI models are denoted as second AI model 21, second AI model 22, and second AI model 23.
  • The input of the second AI model 21 includes the first to third first channel status indication information among the five pieces of first channel status indication information, and the output of the second AI model 21 includes the first fourth channel status indication information. The input of the second AI model 22 includes the fourth first channel status indication information among the five pieces and the first fourth channel status indication information, and the output of the second AI model 22 includes the second fourth channel status indication information. The input of the second AI model 23 includes the fifth first channel status indication information among the five pieces and the second fourth channel status indication information, and the output of the second AI model 23 includes the third fourth channel status indication information. The third fourth channel status indication information output by the second AI model 23 is the second channel status indication information.
  • Alternatively, the input of the second AI model 21 includes the first and second first channel status indication information among the five pieces, and the output of the second AI model 21 includes the first fourth channel status indication information. The input of the second AI model 22 includes the third and fourth first channel status indication information among the five pieces, and the output of the second AI model 22 includes the second fourth channel status indication information. The input of the second AI model 23 includes the fifth first channel status indication information among the five pieces, the first fourth channel status indication information, and the second fourth channel status indication information, and the output of the second AI model 23 includes the third fourth channel status indication information. The third fourth channel status indication information output by the second AI model 23 is the second channel status indication information.
  • Alternatively, the input of the second AI model 21 includes the first and second first channel status indication information among the five pieces, and the output of the second AI model 21 includes the first fourth channel status indication information. The input of the second AI model 22 includes the third to fifth first channel status indication information among the five pieces, and the output of the second AI model 22 includes the second fourth channel status indication information. The input of the second AI model 23 includes the first fourth channel status indication information and the second fourth channel status indication information, and the output of the second AI model 23 includes the third fourth channel status indication information. The third fourth channel status indication information output by the second AI model 23 is the second channel status indication information.
  • the structure of the second encoder can be pre-configured to be one of Mode 21, Mode 22 or Mode 23.
  • The preconfiguration can be implemented through protocol definition or other agreed methods; for another example, it can be achieved through negotiation between the terminal device and the access network device.
  • the terminal device can determine the structure of the second encoder by itself according to one of the methods 21, 22, or 23. Further optionally, the terminal device may send second indication information to the access network device, where the second indication information is used to indicate the structure of the second encoder.
  • the second indication information is specifically used to indicate that the structure of the second encoder is Mode 21, Mode 22 or Mode 23.
  • The mapping relationship between different values of N and the structure of the second encoder can also be preconfigured; in other words, the structure of the second encoder corresponds to the value of N.
  • One or more of the model type and model parameters of each second AI model in the second encoder can also be preconfigured; or the terminal device can report to the access network device one or more of the identification, model type, or model parameters of the second AI model used by the terminal device. For example, when the second indication information indicates Mode 21, it may also specifically indicate one or more of the identification, model type, or model parameters of the second AI model in the second encoder in Mode 21.
  • S504 The terminal device sends the second channel status indication information to the access network device.
  • It may be predefined that the terminal device feeds back twice-compressed CSI to the access network device, or the terminal device and the access network device may negotiate in advance to feed back twice-compressed CSI. Then, when the access network device receives the second channel status indication information, it can determine that the second channel status indication information needs to be decompressed twice, and further executes S505 to S506.
  • The terminal device may also send information indicating the rank, such as an RI, where the value of the aforementioned number of downlink transmission layers N is equal to the value of the rank; in other words, the RI is used to indicate the value of N.
  • If the RI indicates that the value of N is greater than 1, the access network device can also learn from the RI that the second channel status indication information was obtained by the terminal device through two compressions.
  • Such a design uses existing information to additionally define or indicate other content, which can improve information utilization.
  • the structure of the aforementioned second encoder corresponds to the value of N.
  • the structure of the second decoder matched by the second encoder also corresponds to the value of N.
  • the structure of the second autoencoder to which the second encoder belongs may correspond to the value of N.
  • the access network device may determine one or more of the structures of the second encoder, the second decoder, or the second autoencoder according to the value of N.
  • the structure of the first encoder may also correspond to the value of N.
  • the structure of the first decoder matched by the first encoder also corresponds to the value of N.
  • the structure of the first autoencoder to which the first encoder belongs may also correspond to the value of N.
  • the access network device may determine one or more of the structures of the first encoder, the first decoder, or the first autoencoder according to the value of N.
  • A separate piece of information can be defined, for example, denoted as the second information.
  • the second information indicates one or more of the following: a second encoder, a second autoencoder AE to which the second encoder belongs, and a second decoder matching the second encoder.
  • the terminal device may also send the second information.
  • the access network device may determine the second decoder matching the second encoder according to the second information.
  • the second information may include one or more of the model identification, model type, and model parameters of the second encoder.
  • the second information may include the second indication information described in S503.
  • the second indication information indicates that the structure of the second encoder is one of Mode 21, Mode 22, and Mode 23.
  • The model type and model parameters of the second AI model included in the second encoder in each mode are preconfigured.
  • the access network device may determine the structure of the second encoder according to the second indication information.
  • the second information may also include information indicating the value of N.
  • The first information can also be defined separately, and the first information indicates one or more of the following: the first encoder, the first autoencoder AE to which the first encoder belongs, and the first decoder matching the first encoder. Then the access network device can determine the decoder matching the first encoder according to the first information.
  • the first information may include one or more of the model identification, model type, and model parameters of the first encoder.
  • the first information may include the first indication information described in S502. The first indication information indicates that the structure of the first encoder is one of Mode 11, Mode 12, and Mode 13.
  • The model structure and model parameters of the first AI model included in the first encoder in each mode are preconfigured. Then the access network device can determine the structure of the first encoder according to the first indication information.
  • the first information may also include information indicating the value of N.
  • S505: The access network device uses the second decoder and the second channel status indication information input to the second decoder to determine the N pieces of first channel status indication information.
  • the second decoder is a decoder that matches the second encoder described in S503, and the implementation of the second decoder can be understood as the reverse process of the second encoder.
  • The access network device may determine the second decoder based on the information sent by the terminal device or the relevant configuration of the second encoder, and then input the received second channel status indication information to the second decoder to obtain the N pieces of first channel status indication information. It can be understood that, as an example, in Solution 1 the second decoder has dequantization and decompression functions, and the access network device's use of the second decoder to process the second channel status indication information includes dequantization and decompression.
  • the second decoder can be trained.
  • Corresponding to the three structures of the second encoder described in S503, the second decoder may also have three corresponding structures.
  • Three possible structures of the second decoder are illustrated below through Mode 31 to Mode 33, where Mode 31 matches the second encoder described in Mode 21, Mode 32 matches the second encoder described in Mode 22, and Mode 33 matches the second encoder described in Mode 23.
  • the second decoder includes a third AI model.
  • the input of the second decoder includes second channel status indication information obtained by the access network device, and the output of the second decoder includes N pieces of first channel status indication information.
  • The access network device uses a second decoder including a third AI model to dequantize and decompress the second channel status indication information to obtain the N pieces of first channel status indication information.
  • the second decoder may include multiple third AI models, and the number of third AI models in the second decoder is related to the number N of downlink transmission layers.
  • the second decoder may include N-1 third AI models.
  • The input of the (N-1)-th third AI model among the N-1 third artificial intelligence AI models includes the second channel status indication information, and its output includes the (N-2)-th fourth channel status indication information and the N-th first channel status indication information. The input of the (N-i)-th third AI model includes the (N-i)-th fourth channel status indication information, and its output includes the (N-i-1)-th fourth channel status indication information and the (N-i+1)-th first channel status indication information among the N pieces, where i takes a positive integer ranging from 2 to N-2. The input of the first third AI model includes the first fourth channel status indication information, and its output includes the first first channel status indication information and the second first channel status indication information among the N pieces. It can be understood that neither N-i nor N-i-1 can be 0, and the second channel status indication information is the (N-1)-th fourth channel status indication information.
  • For example, the second decoder includes a second third AI model and a first third AI model. The input of the second third AI model includes the second fourth channel status indication information, that is, the second channel status indication information, and the output of the second third AI model includes the third first channel status indication information and the first fourth channel status indication information. The input of the first third AI model includes the first fourth channel status indication information, and the output of the first third AI model includes the first first channel status indication information and the second first channel status indication information.
  • the input of the (N-1)-th third AI model among the N-1 third AI models includes the second channel status indication information obtained by the access network device
  • the output of the (N-1)-th third AI model includes the (N-2)-th fourth channel status indication information and the N-th first channel status indication information.
  • i represents the positive integers from 2 to N-2.
  • the input of the (N-i)-th third AI model includes the (N-i)-th fourth channel status indication information
  • the output of the (N-i)-th third AI model includes the (N-i-1)-th fourth channel status indication information and the (N-i+1)-th first channel status indication information among the N first channel status indication information
  • the input of the first third AI model includes the first fourth channel status indication information
  • the output of the first third AI model includes the first first channel status indication information and the second first channel status indication information.
  • with reference to FIG. 6B, it also illustrates a process in which the access network device uses a second decoder to dequantize and decompress the second channel status indication information to obtain four pieces of first channel status indication information.
  • the second decoder includes three third AI models. From the third third AI model to the first third AI model, they are sequentially referred to as the third AI model 23, the third AI model 22 and the third AI model 21. It can be understood that the third AI model 23 matches the second AI model 23, the third AI model 22 matches the second AI model 22, and the third AI model 21 matches the second AI model 21.
  • the input of the third AI model 23 includes the second channel status indication information, and the output of the third AI model 23 includes the second fourth channel status indication information and the fourth first channel status indication information;
  • the input of the third AI model 22 includes the second fourth channel status indication information, and the output of the third AI model 22 includes the first fourth channel status indication information and the third first channel status indication information;
  • the input of the third AI model 21 includes the first fourth channel status indication information, and the output of the third AI model 21 includes the first first channel status indication information and the second first channel status indication information.
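The chained decoding just described (FIG. 6B style: N = 4, three third AI models run in series) can be sketched in Python. The "models" below are hypothetical linear stubs standing in for trained third AI models; all names and dimensions are illustrative assumptions, not part of this application.

```python
import numpy as np

def make_third_ai_model(rng, dim):
    """Toy stand-in for a trained third AI model: a linear map with two
    output heads. Head 1 is a recovered first channel status indication;
    head 2 is the fourth channel status indication passed down the chain
    (for the last model in the chain, head 2 is simply another first
    indication). Purely illustrative."""
    w1 = rng.standard_normal((dim, dim))
    w2 = rng.standard_normal((dim, dim))
    return lambda x: (w1 @ x, w2 @ x)

def chained_second_decoder(second_csi, models):
    """Run the N-1 third AI models in series (N = len(models) + 1).

    models[-1] consumes the second channel status indication information;
    every earlier model consumes the fourth indication produced by the
    model above it; models[0] emits the first two first indications."""
    n = len(models) + 1
    firsts = [None] * n
    x = second_csi
    for idx in range(len(models) - 1, 0, -1):   # third AI model N-1 .. 2
        firsts[idx + 1], x = models[idx](x)      # recover one first CSI, pass fourth CSI on
    firsts[0], firsts[1] = models[0](x)          # the first model emits two first CSIs
    return firsts

# Demonstration with N = 4 downlink transmission layers (three chained models).
rng = np.random.default_rng(0)
models = [make_third_ai_model(rng, 8) for _ in range(3)]
firsts = chained_second_decoder(np.ones(8), models)
```

Each pass through the loop peels off one first channel status indication and forwards the remaining (fourth) indication, mirroring the model 23 → model 22 → model 21 order in FIG. 6B.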
  • the second decoder may include K third AI models.
  • K is a positive integer greater than 1 and less than N-1, for example, K is 2. It can be understood that each of the K third AI models is used to restore part of the N first channel state indication information. The input and output of the K third AI models are described in detail below.
  • K may be 2
  • the second decoder may include two third AI models, denoted as the third AI model 21 and the third AI model 22.
  • the input of the third AI model 22 includes the second channel status indication information
  • the output of the third AI model 22 includes the first fourth channel status indication information and the N-th first channel status indication information.
  • the input of the third AI model 21 includes the first fourth channel status indication information
  • the output of the third AI model 21 includes the first N-1 first channel status indication information among the N first channel status indication information.
  • with reference to FIG. 6C, it also illustrates a process in which the access network device uses the second decoder to dequantize and decompress the second channel status indication information to obtain four pieces of first channel status indication information.
  • the second decoder includes two third AI models; from the second third AI model to the first third AI model, they are sequentially denoted as the third AI model 22 and the third AI model 21. It can be understood that the third AI model 22 matches the second AI model 22, and the third AI model 21 matches the second AI model 21.
  • the input of the third AI model 22 includes the second channel status indication information, and the output of the third AI model 22 includes the first fourth channel status indication information and the fourth first channel status indication information;
  • the input of the third AI model 21 includes the first fourth channel status indication information, and the output of the third AI model 21 includes the first first channel status indication information, the second first channel status indication information, and the third first channel status indication information.
  • the input of the third AI model 22 includes the second channel status indication information
  • the output of the third AI model 22 includes the first fourth channel status indication information and the third to N-th first channel status indication information among the N first channel status indication information.
  • the input of the third AI model 21 includes the first fourth channel status indication information.
  • the output of the third AI model 21 includes the first two first channel status indication information among the N pieces of first channel status indication information, that is, the first first channel status indication information and the second first channel status indication information.
  • when N is 5, K may be 2 or 3, that is, the second decoder may include 2 third AI models or 3 third AI models.
  • the case where the second decoder includes two third AI models can be understood with reference to the example in which N is 4.
  • the case where the second decoder includes three third AI models can be understood as follows:
  • the three third AI models are denoted as third AI model 23, third AI model 22, and third AI model 21.
  • the input of the third AI model 23 includes the second channel status indication information
  • the output of the third AI model 23 includes the fifth first channel status indication information among the five first channel status indication information and the second fourth channel status indication information.
  • the input of the third AI model 22 includes the second fourth channel status indication information
  • the output of the third AI model 22 includes the fourth first channel status indication information among the five first channel status indication information and the first fourth channel status indication information.
  • the input of the third AI model 21 includes the first fourth channel status indication information, and the output of the third AI model 21 includes the first to third first channel status indication information among the five first channel status indication information.
  • the input of the third AI model 23 includes second channel status indication information
  • the output of the third AI model 23 includes the fifth first channel status indication information among the five first channel status indication information, the second fourth channel status indication information, and the first fourth channel status indication information.
  • the input of the third AI model 22 includes the second fourth channel status indication information, and the output of the third AI model 22 includes the third first channel status indication information and the fourth first channel status indication information among the five first channel status indication information.
  • the input of the third AI model 21 includes the first fourth channel status indication information
  • the output of the third AI model 21 includes the first first channel status indication information and the second first channel status indication information among the five first channel status indication information.
  • the input of the third AI model 23 includes the second channel status indication information
  • the output of the third AI model 23 includes the first fourth channel status indication information and the second fourth channel status indication information.
  • the input of the third AI model 22 includes the second fourth channel status indication information, and the output of the third AI model 22 includes the third to fifth first channel status indication information among the five first channel status indication information.
  • the input of the third AI model 21 includes the first fourth channel status indication information, and the output of the third AI model 21 includes the first first channel status indication information and the second first channel status indication information among the five first channel status indication information.
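As a minimal sketch of the grouped alternative (K third AI models, each restoring a subset of the first channel status indications), the K = 2, N = 4 split of FIG. 6C might look like the following. The split positions and stub "models" are illustrative assumptions, not the trained models of the application.

```python
import numpy as np

# Hypothetical stub for the trained third AI model 22 (K = 2, N = 4):
# it emits the 1st fourth channel status indication (handed to model 21)
# and the 4th first channel status indication.
def third_ai_model_22(second_csi):
    return second_csi[:8], second_csi[8:]

# Hypothetical stub for the trained third AI model 21: it turns the
# intermediate fourth indication into the 1st-3rd first indications.
def third_ai_model_21(fourth_1):
    a, b, c = np.array_split(fourth_1, 3)
    return a, b, c

def grouped_second_decoder(second_csi):
    """Restore all four first channel status indications with K = 2 models."""
    fourth_1, first_4 = third_ai_model_22(second_csi)
    first_1, first_2, first_3 = third_ai_model_21(fourth_1)
    return [first_1, first_2, first_3, first_4]

out = grouped_second_decoder(np.arange(16.0))
```

The same pattern extends to N = 5 with K = 2 or 3 by changing which subset of first indications each model restores, as the alternatives above enumerate.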
  • the second decoder is also preconfigured with the corresponding structure. For example, if the structure of the second encoder is preconfigured as mode 21, then the structure of the second decoder is also preconfigured as mode 31; if the structure of the second encoder is preconfigured as mode 22, then the structure of the second decoder is also preconfigured as mode 32; and if the structure of the second encoder is preconfigured as mode 23, then the structure of the second decoder is also preconfigured as mode 33. Further optionally, in this design, the second decoder corresponds to the number of downlink transmission layers.
  • if the number of downlink transmission layers is predefined, the terminal device does not need to indicate the value of N, and the access network device can still know the value of N; the access network device can then obtain the second decoder by combining the preconfigured mode and the number of downlink transmission layers N. Alternatively, corresponding to the description in S504, if the terminal device sends information indicating the rank to the access network device, the access network device can determine the number of downlink transmission layers N according to the value of the rank, and then obtain the second decoder by combining the preconfigured mode and the number of downlink transmission layers N.
  • the access network device may determine, based on the value of N, the number of input ports of the second encoder and the number of output ports of the second decoder.
  • the second encoder includes a second AI model. Different downlink transmission layer numbers correspond to different second AI models.
  • if the model structure and model parameters of each second AI model are preconfigured, the access network device can determine the corresponding second AI model according to the value of N.
  • the second decoder includes a third AI model. Different downlink transmission layer numbers correspond to different third AI models.
  • the access network device can determine the corresponding third AI model according to the value of N.
  • the access network device may determine, based on the value of N, the number of second AI models included in the second encoder and the number of third AI models included in the second decoder. If the model type and model parameters of each second AI model are preconfigured, the access network device can determine the second encoder and the second decoder according to the value of N.
  • the access network device can determine the number K of second AI models included in the second encoder based on the value of N. If the model types and model parameters of the K second AI models are preconfigured, the access network device can determine the structure of the second encoder, and accordingly use the value of K to determine the second decoder that matches the second encoder.
  • the access network device may determine the second decoder according to the second information. For example, when the second information includes the second indication information, the access network device may determine that the structure of the second decoder is mode 31, mode 32 or mode 33 according to the second indication information. Similarly, if the number of downlink transmission layers is predefined, the terminal device does not need to indicate the value of N; the access network device can still know the value of N, and can then obtain the second decoder by combining the preconfigured mode and the number of downlink transmission layers N.
  • the access network device can determine the number of downlink transmission layers N based on the information, and then obtain the second decoder by combining the preconfigured mode and the number of downlink transmission layers N.
  • the access network device uses the first decoder and the N first channel status indication information input to the first decoder to determine the channel information of the N downlink transmission layers.
  • the first decoder is a decoder that matches the first encoder described in S502, and the implementation of the first decoder can be understood as the reverse process of the first encoder.
  • the access network device may determine the first decoder according to the information sent by the terminal device or the relevant configuration of the first encoder, and then input the N first channel status indication information determined in S505 to the first decoder to obtain the channel information of the N downlink transmission layers. It can be understood that, as an example, in this solution one, the first decoder has dequantization and decompression functions, and the processing of the N first channel status indication information by the access network device using the first decoder includes dequantization and decompression.
  • the first decoder can be trained.
  • Three structures of the first encoder are described corresponding to S502, and the first decoder also has corresponding three structures.
  • three possible structures of the first decoder are illustrated below through mode 41 to mode 43, where mode 41 matches the first encoder described in mode 11, mode 42 matches the first encoder described in mode 12, and mode 43 matches the first encoder described in mode 13.
  • the first decoder includes a fourth AI model.
  • the access network device can use the fourth AI model to process the N first channel status indication information and recover the channel information of the N downlink transmission layers.
  • the access network device inputs N pieces of first channel status indication information to the fourth AI model, and correspondingly, the output of the first decoder includes restored channel information of the N downlink transmission layers.
  • the access network device may input the N pieces of first channel status indication information into the fourth AI model in parallel or serially, which is not limited by this disclosure.
  • different values of N may be associated with different first decoders, that is, different fourth AI models
  • the fourth AI model included in the first decoder is associated with the value of N, and different values of N are associated with different fourth AI models.
  • for one value of N, the first decoder is recorded as the decoder 21, or the decoder in the AE model 21
  • for another value of N, the first decoder is recorded as the decoder 31, or the decoder in the AE model 31.
  • Such a design can enhance the flexibility of AI model application.
  • alternatively, different downlink transmission layer numbers may be set to correspond to the same first decoder, or to decoders in the same AE model. Such a design can reduce the storage overhead of AI models.
  • the first decoder may include N fourth AI models, and the i-th fourth AI model in the first decoder is used to process the i-th first channel status indication information among the N first channel status indication information to obtain the channel information of the i-th downlink transmission layer among the N downlink transmission layers. It can be understood that the N first channel status indication information are input to the N fourth AI models respectively; the input of the i-th fourth AI model among the N fourth AI models includes the i-th first channel status indication information among the N first channel status indication information, and the output of the i-th fourth AI model includes the channel information of the i-th downlink transmission layer.
  • the N fourth AI models in the first decoder can be the same; or the N fourth AI models have the same model type (such as the neural network used) but different model parameters; or the model types and model parameters of the N fourth AI models are both different. This disclosure does not limit this.
  • the first decoder includes M fourth AI models, where M is an integer greater than 1 and less than N.
  • a fourth AI model can be used to process part (one or more) of the N first channel status indication information; in other words, part of the first channel status indication information among the N first channel status indication information may be processed by the same fourth AI model, and that part of the first channel status indication information is input serially or in parallel to one fourth AI model among the M fourth AI models.
  • the first encoder includes four first AI models.
  • Figures 6A, 6B and 6C further illustrate that the first decoder also includes four fourth AI models, that is, the fourth AI model 11 associated with the first AI model 11, the fourth AI model 12 associated with the first AI model 12, the fourth AI model 13 associated with the first AI model 13, and the fourth AI model 14 associated with the first AI model 14.
  • the input of the fourth AI model 11 includes the first first channel status indication information
  • the output of the fourth AI model 11 includes the channel information of the downlink transmission layer 1
  • the input of the fourth AI model 12 includes the second first channel status indication information
  • the output of the fourth AI model 12 includes the channel information of the downlink transmission layer 2
  • the input of the fourth AI model 13 includes the third first channel status indication information
  • the output of the fourth AI model 13 includes the channel information of the downlink transmission layer 3
  • the input of the fourth AI model 14 includes the fourth first channel status indication information
  • the output of the fourth AI model 14 includes the channel information of the downlink transmission layer 4.
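The per-layer first decoder just enumerated (mode-42 style: one fourth AI model per downlink transmission layer) can be sketched as follows. The per-layer models are hypothetical linear stubs and all dimensions are assumptions for illustration only.

```python
import numpy as np

def first_decoder(first_csis, fourth_models):
    """Mode-42-style sketch: the i-th fourth AI model recovers the channel
    information of the i-th downlink transmission layer from the i-th
    first channel status indication information. The models may be applied
    in parallel or serially; a simple loop is used here."""
    return [model(csi) for model, csi in zip(fourth_models, first_csis)]

rng = np.random.default_rng(1)
# Four hypothetical per-layer stubs (N = 4): each maps a length-16 first
# indication to a length-32 vector of recovered channel information.
# Each lambda captures its own random weight matrix via a default argument.
fourth_models = [
    (lambda csi, w=rng.standard_normal((32, 16)): w @ csi) for _ in range(4)
]

layers = first_decoder([np.ones(16)] * 4, fourth_models)
```

Because each indication maps to exactly one model, the four calls are independent and could equally be dispatched in parallel, consistent with the parallel-or-serial input option noted above.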
  • the first decoder is also preconfigured with the corresponding structure. For example, if the structure of the first encoder is preconfigured as mode 11, then the structure of the first decoder is also preconfigured as mode 41; if the structure of the first encoder is preconfigured as mode 12, then the structure of the first decoder is also preconfigured as mode 42; and if the structure of the first encoder is preconfigured as mode 13, then the structure of the first decoder is also preconfigured as mode 43. Further optionally, in this design, the first decoder corresponds to the number of downlink transmission layers.
  • if the number of downlink transmission layers is predefined, the terminal device does not need to indicate the value of N, and the access network device can still know the value of N; the access network device can then obtain the first decoder by combining the preconfigured mode and the number of downlink transmission layers N. Alternatively, corresponding to the description in S504, if the terminal device sends information indicating the rank to the access network device, the access network device can determine the number of downlink transmission layers N according to the value of the rank, and then obtain the first decoder by combining the preconfigured mode and the number of downlink transmission layers N.
  • the access network device may determine, according to the value of N, the quantity of input data of the first encoder and the quantity of output data of the first decoder.
  • the first encoder includes a first AI model, and different downlink transmission layer numbers correspond to different first AI models.
  • if the model structure and model parameters of each first AI model are preconfigured, the access network device can determine the corresponding first AI model according to the value of N.
  • the first decoder includes a fourth AI model. Different downlink transmission layer numbers correspond to different fourth AI models.
  • the access network device can determine the corresponding fourth AI model according to the value of N.
  • the access network device may determine the number of first AI models included in the first encoder based on the value of N; if the model type and model parameters of each first AI model are preconfigured, the access network device can obtain the first encoder according to the value of N. The access network device can also determine, according to the value of N, the number of fourth AI models contained in the first decoder, so that the access network device can obtain the first decoder according to the value of N.
  • for example, if the structure of the first encoder is preconfigured as mode 13 and M is N-2, the access network device can determine, based on the value of N, the number M of first AI models contained in the first encoder and the number M of fourth AI models contained in the first decoder. If the model types and model parameters of the M first AI models/fourth AI models are preconfigured, the access network device can obtain the first encoder and the first decoder according to the value of M.
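The counting rules in the preceding bullets can be condensed into a small helper. The mode names follow the text; the M = N - 2 branch is taken from the example above and is an illustrative assumption, not a normative mapping.

```python
def first_decoder_model_count(n, mode):
    """How many fourth AI models the first decoder contains, given the
    number of downlink transmission layers N and the preconfigured
    decoder structure (sketch; mapping assumed from the examples)."""
    if mode == "mode 41":   # one fourth AI model handles all N layers
        return 1
    if mode == "mode 42":   # one fourth AI model per downlink transmission layer
        return n
    if mode == "mode 43":   # M models with 1 < M < N; example from the text: M = N - 2
        return n - 2
    raise ValueError(f"unknown mode: {mode}")
```

With such a rule preconfigured on both sides, signaling the value of N (or the rank it is derived from) suffices for the access network device to assemble the matching decoder.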
  • the access network device may integrate the recovered channel information of the N downlink transmission layers to recover the aforementioned downlink channel information estimated based on downlink reference signal measurement.
  • a multi-level AI model is used to compress downlink channel information twice, reducing the resources occupied by common information between multi-layer CSI, and reducing the feedback overhead of CSI.
  • the method mainly includes the following processes.
  • the terminal device determines the channel information of N downlink transmission layers, where N is a positive integer.
  • the terminal device uses the first encoder and the channel information of N downlink transmission layers input to the first encoder to determine N pieces of first channel status indication information.
  • the terminal device can send the N first channel status indication information to the access network device; or, if the aforementioned number of downlink transmission layers N is equal to 1 and the first encoder only has a compression function, the terminal device can quantize the N first channel status indication information and then send them to the access network device. Or, if the number of downlink transmission layers N is greater than 1, after completing S702, the terminal device further performs the following S703 and S704:
  • the terminal device uses the second encoder and the N first channel status indication information input to the second encoder to determine the third channel status indication information; wherein the third channel status indication information corresponds to the N downlink transmission layers.
  • the second encoder has a compression (encoding) function but does not have a quantization function.
  • the terminal device may use the second encoder to compress the N pieces of first channel status indication information to obtain third channel status indication information.
  • the input of the second encoder includes N pieces of first channel status indication information
  • the output includes third channel status indication information.
  • the third channel status indication information obtained in S703 is a compression of the N pieces of first channel status indication information obtained in S702.
  • the third channel status indication information can also be understood as a sequence, and the length of the sequence corresponding to the third channel status indication information is less than the sum of the sequence lengths corresponding to the N pieces of first channel status indication information.
  • the second encoder in this solution has a compression function but does not have a quantization function; the processing of the N first channel status indication information includes compression, and the sequence corresponding to the third channel status indication information may specifically be a floating point number sequence.
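A minimal sketch of this compression-only second encoder: the N first indications (floating point sequences) are concatenated and projected to a shorter floating point sequence, the third channel status indication information. The linear projection below is a hypothetical stand-in for the trained second AI model, and the dimensions are assumptions.

```python
import numpy as np

def second_encoder(first_csis, w):
    """Compress N first channel status indication sequences into one
    shorter third channel status indication sequence. No quantization
    is performed here: the output stays a floating point sequence."""
    return w @ np.concatenate(first_csis)

rng = np.random.default_rng(0)
firsts = [rng.standard_normal(16) for _ in range(4)]  # N = 4, 64 floats in total
w = rng.standard_normal((24, 64))                     # project 64 floats -> 24 floats
third = second_encoder(firsts, w)
```

The output sequence is shorter than the combined inputs, matching the length requirement stated above; quantization to bits is deferred to the separate quantizer of S704.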
  • the second encoder can be trained.
  • the structure of the second encoder may refer to any one of the following mode 21-1, mode 22-1, or mode 23-1.
  • the second encoder includes a second AI model.
  • the terminal device can use the second AI model to compress the N pieces of first channel status indication information to obtain third channel status indication information. It can be understood that the input of the second AI model includes N pieces of first channel status indication information, and the output of the second AI model includes third channel status indication information.
  • Figure 8A illustrates a schematic diagram of the encoding process.
  • the description of Figure 8A may refer to the description of Figure 6A, which is different from Figure 6A only in that the second encoder does not have a quantization function and its output is third channel status indication information.
  • the third channel status indication information has not been quantized, and the second channel status indication information is information after quantization.
  • the second encoder may include multiple second AI models, and the number of second AI models in the second encoder is related to the number N of downlink transmission layers.
  • Figure 8B illustrates a schematic diagram of the encoding process. The description of Figure 8B may refer to the description of Figure 6B. It is different from Figure 6B only in that the second encoder does not have a quantization function and its output is third channel status indication information. The third channel status indication information has not been quantized. The second channel status indication information is information after quantization processing.
  • the second encoder may include K second AI models.
  • K is a positive integer greater than 1 and less than N-1, for example, K is 2. It can be understood that part of the first channel status indication information among the N first channel status indication information is input in parallel to one second artificial intelligence AI model among the K second artificial intelligence AI models.
  • Figure 8C shows a schematic diagram of the encoding process.
  • the description of Figure 8C may refer to the description of Figure 6C. It is different from Figure 6C only in that the second encoder does not have a quantization function and its output is third channel status indication information.
  • the third channel status indication information has not been quantized.
  • the second channel status indication information is information after quantization processing.
  • S704 The terminal device performs quantization processing on the third channel status indication information to obtain the second channel status indication information.
  • the terminal device may use a quantizer to perform quantization processing on the third channel status indication information to obtain the second channel status indication information.
  • the quantizer can be one of the following: a non-uniform quantizer, a scalar quantizer, a vector quantizer, a quantizer designed based on experience, or a quantizer obtained through AI training.
  • the quantizer can be trained together with the encoder (such as the first encoder or the second encoder), the related decoder, or the autoencoder (AE).
  • Quantization processing can also be called a quantization operation. Quantization processing can be understood as finding the item closest to the quantity to be quantized in the quantization dictionary and outputting the index of the item.
  • the quantity to be quantized is the third channel state indication information
  • the index output by querying the quantization dictionary contains the quantized information of the third channel state indication information, that is, the second channel state indication information.
  • the third channel status indication information is a floating point number sequence
  • the second channel status indication information obtained after quantization may be a bit sequence.
  • Figure 8A, Figure 8B and Figure 8C further illustrate a process in which the terminal device performs quantization processing on the third channel status indication information based on the quantizer to obtain the second channel status indication information, where the input of the quantizer The third channel status indication information is included, and the output of the quantizer includes the second channel status indication information.
  • the terminal device may also introduce quantization of the N first channel status indication information in S702, and then the N first channel status indication information used by the terminal device in S703 are quantized information.
  • N is greater than 1
  • the terminal device can use N quantizers to respectively quantize the N pieces of first channel status indication information in S702, and the N quantizers correspond to the N pieces of first channel status indication information one-to-one.
  • the N quantizers can be the same or different.
  • the quantizer used by the terminal device can be pre-configured, for example, the pre-configuration is implemented in a protocol-defined manner, or the terminal device and the access network device pre-negotiate a quantizer.
  • the terminal device can decide the quantizer to be used by itself and send information indicating the quantizer to be used to the access network device.
  • S705 The terminal device sends the second channel status indication information to the access network device.
  • S706 The access network device dequantizes the second channel status indication information to obtain the third channel status indication information.
  • FIG. 8A, FIG. 8B, and FIG. 8C also illustrate a process in which the access network device uses a dequantizer to dequantize the second channel status indication information to obtain the third channel status indication information.
  • the dequantizer is used in conjunction with the quantizer.
  • the dequantizer can be understood as the inverse operation of the quantizer.
  • the input of the dequantizer includes the second channel status indication information, and the output of the dequantizer includes the third channel status indication information.
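Matching the inverse-operation description here, a dequantizer sketch simply parses the received bit sequence back into an index and looks up the dictionary entry. The codebook and bit layout mirror the hypothetical quantizer assumptions and are illustrative only.

```python
import numpy as np

def dequantize(bit_sequence, codebook):
    """Inverse of the quantizer: map the received bit sequence (second
    channel status indication information) back to the codebook entry,
    recovering an approximation of the third channel status indication
    information."""
    return codebook[int(bit_sequence, 2)]

# Hypothetical 2-bit codebook shared by quantizer and dequantizer.
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
```

Since quantization is lossy, the recovered value is the nearest dictionary entry rather than the original floating point sequence, which is why the quantizer and dequantizer must use the same preconfigured dictionary.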
  • the dequantizer is also preconfigured.
  • the access network device can determine the matching dequantizer based on the information indicating the quantizer.
  • the access network device uses the second decoder and the third channel status indication information input to the second decoder to determine N pieces of first channel status indication information.
  • the second decoder is a decoder that matches the second encoder described in S703, and the implementation of the second decoder can be understood as the reverse process of the second encoder.
  • the access network device may determine the second decoder based on the information sent by the terminal device or the relevant configuration of the second encoder, and then input the received third channel status indication information to the second decoder to obtain N pieces of first channel status indication information. It can be understood that, as an example, in this solution the second decoder only has a decompression function, and the processing of the third channel status indication information by the access network device using the second decoder includes decompression.
  • the second decoder can be trained.
  • Three structures of the second encoder are described corresponding to S503, and the second decoder may also have three corresponding structures.
  • three possible structures of the second decoder are illustrated below through modes 31-1 to 33-1, where mode 31-1 matches the second encoder described in mode 21-1, mode 32-1 matches the second encoder described in mode 22-1, and mode 33-1 matches the second encoder described in mode 23-1.
  • the second decoder includes a third AI model.
  • the input of the second decoder includes the third channel status indication information determined by the access network device in S706, and the output of the second decoder includes N pieces of first channel status indication information.
  • FIG. 8A it also illustrates a process in which the access network device uses the second decoder to decompress the third channel status indication information to obtain N pieces of first channel status indication information.
  • the second decoder may include multiple third AI models, and the number of third AI models in the second decoder is related to the number N of downlink transmission layers.
  • FIG. 8B it also illustrates a process in which the access network device uses the second decoder to decompress the third channel status indication information to obtain four pieces of first channel status indication information.
  • the access network device uses the second decoder to decompress the third channel status indication information to obtain four pieces of first channel status indication information.
  • this differs from Figure 6B only in that Figure 6B shows compression to obtain the second channel status indication information, whereas decompression is shown here.
  • the second decoder may include K third AI models.
  • K is a positive integer greater than 1 and less than N-1, for example, K is 2.
  • FIG 8C it also illustrates a process in which the access network device uses the second decoder to decompress the third channel status indication information to obtain four pieces of first channel status indication information.
  • the access network device uses the second decoder to decompress the third channel status indication information to obtain four pieces of first channel status indication information.
  • the access network device uses the first decoder and the N first channel status indication information input to the first decoder to determine the channel information of the N downlink transmission layers.
  • the first encoder includes four first AI models.
  • Figures 8A, 8B, and 8C further illustrate that the first decoder also includes four fourth AI models, that is, the fourth AI model 11 associated with the first AI model 11, the fourth AI model 12 associated with the first AI model 12, the fourth AI model 13 associated with the first AI model 13, and the fourth AI model 14 associated with the first AI model 14.
  • the input of the fourth AI model 11 includes the first first channel status indication information
  • the output of the fourth AI model 11 includes the channel information of the downlink transmission layer 1
  • the input of the fourth AI model 12 includes the second first channel status indication information
  • the output of the fourth AI model 12 includes the channel information of the downlink transmission layer 2
  • the input of the fourth AI model 13 includes the third first channel status indication information
  • the output of the fourth AI model 13 includes the channel information of the downlink transmission layer 3
  • the input of the fourth AI model 14 includes the fourth first channel status indication information
  • the output of the fourth AI model 14 includes the channel information of the downlink transmission layer 4.
  • the access network device may integrate the recovered channel information of N downlink transmission layers to recover the foregoing downlink channel information estimated based on downlink reference signal measurements.
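Putting S706 to S708 together, the receive-side processing can be sketched as a three-stage pipeline. All function bodies below are hypothetical placeholders (the real stages are trained AI models and a configured dequantizer); only the data flow mirrors the description above.

```python
# Sketch of receive-side processing in S706-S708. Every name and body here
# is a stand-in, not the disclosed implementation.
import numpy as np

def dequantizer(bits):
    # S706: second -> third channel status indication information
    return bits.astype(float)            # placeholder for the real inverse map

def second_decoder(z, n_layers):
    # S707: third CSI indication -> N first channel status indications
    return np.split(z, n_layers)         # placeholder "decompression"

def first_decoder_per_layer(c):
    # S708: one fourth AI model recovers one layer's channel information
    return c * 2.0                       # placeholder recovery

def recover_channel_info(second_csi_bits, n_layers):
    z = dequantizer(second_csi_bits)
    first_csi = second_decoder(z, n_layers)
    return [first_decoder_per_layer(c) for c in first_csi]

layers = recover_channel_info(np.arange(8), n_layers=4)
```

The access network device would then integrate the recovered per-layer outputs into the downlink channel information, as stated above.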
  • the multi-level AI model is used to compress the downlink channel information twice to reduce the resources occupied by the information common to the multi-layer CSI, and the compressed channel status indication information is quantized, which can reduce the CSI feedback overhead.
  • the encoder and the decoder may be used in matching with each other, without limitation.
  • the encoder has the quantization function and the decoder has the dequantization function; or the encoder does not have the quantization function and the decoder does not have the dequantization function; or the encoder has the quantization function and the decoder does not have the dequantization function; or the encoder does not have the quantization function and the decoder has the dequantization function. This is not restricted.
  • the above solutions 1 and 2 can be implemented together.
  • when the second encoder and the second decoder are matched, the second encoder has the quantization function and the second decoder does not have the dequantization function. In this case, S501 to S504 in solution one may be performed first, and then S706 to S708 in solution two.
  • as another example, when the second encoder and the second decoder are used in matching, the second encoder does not have the quantization function and the second decoder has the dequantization function. In this case, S701 to S705 in solution two may be performed first, and then S505 to S506 in solution one.
  • the functions of the first encoder and the second encoder in Figure 5 or Figure 7 are implemented by the same AI model W1, and/or the functions of the first decoder and the second decoder are implemented by the same AI model W2 corresponding to the AI model W1.
  • AI model W1 and AI model W2 are used in conjunction, and they are trained together.
  • the specific training process is similar to the described joint training of the first encoder, the second encoder, the first decoder, and the second decoder, and will not be described again here.
  • the input of AI model W1 is the channel information of N downlink transmission layers
  • the output is the second channel status indication information
  • the input of AI model W2 is the second channel status indication information
  • the output is the channel information of N downlink transmission layers.
  • AI model W1 and AI model W2 can be trained for a specific N.
  • the value of N has a corresponding relationship with AI model W1 and AI model W2.
  • the value N1 of N corresponds to AI model W11 and AI model W21
  • the value N2 of N corresponds to AI model W12 and AI model W22.
  • the number of values of N corresponds to the number of sets of AI model W1 and AI model W2.
  • the terminal device may send information indicating the rank to the access network device, where the value of the rank is the same as the value of N; or the terminal device may send indication information to the access network device to indicate the AI model W1 and/or the AI model W2; or the sequence length range of the second channel status indication information corresponds to the AI model W1 and/or the AI model W2, and/or corresponds to the value of N, that is, the sequence length range of the second channel status indication information may indicate the AI model W1 and/or the AI model W2, and/or the value of N. In this way, when the value of N is variable, the access network device can determine to use the AI model W2 that matches the AI model W1 used by the terminal device to decode the received second channel status indication information.
  • AI model W1 and AI model W2 can be trained for any N, that is, they are not dedicated to a specific N.
  • the value of N may be included in the channel information of the aforementioned N downlink transmission layers.
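When W1 and W2 are trained per value of N, the selection logic amounts to a lookup keyed by the reported rank. A minimal sketch, assuming a registry of model identifiers (the names W11/W21 and W12/W22 follow the examples above; the registry itself and the error handling are hypothetical):

```python
# Hypothetical registry mapping the rank N to a matched (W1, W2) pair.
# The disclosure only states that each value of N corresponds to one
# trained encoder/decoder pair; the concrete mapping is an assumption.
MODEL_REGISTRY = {
    1: ("W11", "W21"),   # N = 1 -> AI model W11 at the terminal, W21 at the gNB
    2: ("W12", "W22"),   # N = 2 -> AI model W12 at the terminal, W22 at the gNB
}

def select_models(rank_n):
    """Return the (encoder, decoder) identifiers matched to rank N."""
    try:
        return MODEL_REGISTRY[rank_n]
    except KeyError:
        raise ValueError(f"no trained model pair for N={rank_n}") from None

encoder_id, decoder_id = select_models(2)
```

With the alternative design above (models trained for any N), no lookup is needed and the value of N travels inside the channel information itself.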
  • the present disclosure provides a communication device 1000 , which includes a processing module 1001 and a communication module 1002 .
  • the communication device 1000 may be a terminal device, or a communication device applied to a terminal device or used in matching with a terminal device, capable of implementing a communication method executed on the terminal device side; or the communication device 1000 may be an access network device, or a communication device applied to an access network device or used in matching with an access network device, capable of implementing a communication method executed on the access network device side.
  • the communication module may also be called a transceiver module, a transceiver, or a transceiver device, etc.
  • the processing module may also be called a processor, a processing board, a processing unit, or a processing device.
  • the communication module is used to perform sending operations and receiving operations on the terminal device side or the access network device side in the above method.
  • the device used to implement the receiving function in the communication module can be regarded as a receiving unit, and the device used to implement the sending function can be regarded as a sending unit; that is, the communication module includes a receiving unit and a sending unit.
  • the processing module 1001 can be used to implement the processing functions of the terminal device in the examples described in Figures 5 to 9, and the communication module 1002 can be used to implement the transceiver functions of the terminal device in the examples described in Figures 5 to 9.
  • the communication device can also be understood with reference to the third aspect of the invention and possible designs in the third aspect.
  • the processing module 1001 can be used to implement the processing functions of the access network device in the examples described in Figures 5 to 9, and the communication module 1002 can be used to implement the transceiver functions of the access network device in the examples described in Figures 5 to 9. The communication device can also be understood with reference to the fourth aspect of the invention and possible designs in the fourth aspect.
  • the aforementioned communication module and/or processing module can be implemented through a virtual module.
  • the processing module can be implemented through a software functional unit or a virtual device, and the communication module can be implemented through a software functional unit or a virtual device.
  • the processing module or the communication module can also be implemented by a physical device.
  • the communication module can be an input/output circuit and/or a communication interface to perform input operations (corresponding to the aforementioned receiving operations) and output operations (corresponding to the aforementioned sending operations); the processing module is an integrated processor, a microprocessor, or an integrated circuit.
  • each functional module in each example of this disclosure may be integrated into one processor, may exist physically alone, or two or more modules may be integrated into one module.
  • the above integrated modules can be implemented in the form of hardware or software function modules.
  • the present disclosure also provides a communication device 1100.
  • the communication device 1100 may be a chip or a chip system.
  • the chip system may be composed of chips, or may include chips and other discrete devices.
  • the communication device 1100 can be used to implement the functions of any network element in the communication system described in the foregoing examples.
  • Communication device 1100 may include at least one processor 1110.
  • the processor 1110 is coupled to a memory, and the memory may be located within the device, or the memory may be integrated with the processor, or the memory may be located outside the device.
  • the communication device 1100 may further include at least one memory 1120.
  • the memory 1120 stores the necessary computer programs or instructions and/or data to implement any of the above examples; the processor 1110 may execute the computer program stored in the memory 1120 to complete the method in any of the above examples.
  • the communication device 1100 may also include a communication interface 1130, and the communication device 1100 may interact with other devices through the communication interface 1130.
  • the communication interface 1130 may be a transceiver, a circuit, a bus, a module, a pin, or other types of communication interfaces.
  • the communication interface 1130 in the device 1100 can also be an input-output circuit, which can input information (or receive information) and output information (or send information).
  • the processor is an integrated processor, a microprocessor, an integrated circuit, or a logic circuit, and the processor can determine output information based on input information.
  • Coupling in this disclosure is an indirect coupling or communication connection between devices, units or modules, which may be in electrical, mechanical or other forms, and is used for information interaction between devices, units or modules.
  • the processor 1110 may cooperate with the memory 1120 and the communication interface 1130.
  • the present disclosure does not limit the specific connection medium between the above-mentioned processor 1110, memory 1120 and communication interface 1130.
  • the processor 1110 , the memory 1120 and the communication interface 1130 are connected to each other through a bus 1140 .
  • the bus 1140 may be a peripheral component interconnect (PCI) bus or an extended industry standard architecture (EISA) bus, etc.
  • the bus can be divided into address bus, data bus, control bus, etc. For ease of presentation, only one thick line is used in Figure 11, but it does not mean that there is only one bus or one type of bus.
  • a processor may be a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, which can implement or execute the methods, steps, and logical block diagrams disclosed in the present disclosure.
  • a general-purpose processor may be a microprocessor or any conventional processor, etc.
  • the steps of the method disclosed in conjunction with the present disclosure can be directly embodied as executed by a hardware processor, or executed by a combination of hardware and software modules in the processor.
  • the memory may be a non-volatile memory, such as a hard disk drive (HDD) or a solid-state drive (SSD), or may be a volatile memory, such as a random-access memory (RAM). The memory is any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
  • the memory in the present disclosure can also be a circuit or any other device capable of realizing a storage function, used to store program instructions and/or data.
  • the communication device 1100 can be applied to access network equipment.
  • the communication device 1100 can be an access network device, or a device that can support an access network device in implementing the functions of the access network device in any of the above examples.
  • the memory 1120 stores computer programs (or instructions) and/or data that implement the functions of the access network equipment in any of the above examples.
  • the processor 1110 can execute the computer program stored in the memory 1120 to complete the method executed by the access network device in any of the above examples.
  • the communication interface in the communication device 1100 can be used to interact with the terminal equipment, send information to the terminal equipment or receive information from the terminal equipment.
  • the communication device 1100 can be applied to a terminal device.
  • the communication device 1100 can be a terminal device, or a device that can support a terminal device in implementing the functions of the terminal device in any of the above examples.
  • the memory 1120 stores computer programs (or instructions) and/or data that implement the functions of the terminal device in any of the above examples.
  • the processor 1110 can execute the computer program stored in the memory 1120 to complete the method executed by the terminal device in any of the above examples.
  • the communication interface in the communication device 1100 can be used to interact with the access network equipment, send information to the access network equipment or receive information from the access network equipment.
  • the communication device 1100 provided in this example can be applied to an access network device to complete the method executed by the access network device, or applied to a terminal device to complete the method executed by the terminal device. Therefore, the technical effects that can be obtained can be referred to the above method examples and will not be described again here.
  • the present disclosure provides a communication system, including access network equipment and terminal equipment, wherein the access network equipment and terminal equipment can implement the communication methods provided in the examples shown in Figures 5 to 9 .
  • the technical solutions provided by this disclosure can be implemented in whole or in part through software, hardware, firmware, or any combination thereof.
  • when implemented using software, the technical solutions may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions.
  • when the computer program instructions are loaded and executed on a computer, the processes or functions described in the present disclosure are produced in whole or in part.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, a terminal device, an access network device, or another programmable device.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center through wired (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (such as infrared, radio, or microwave) means.
  • the computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that contains one or more available media integrated.
  • the available media may be magnetic media (eg, floppy disk, hard disk, tape), optical media (eg, digital video disc (DVD)), or semiconductor media, etc.
  • examples may refer to each other.
  • methods and/or terms between method embodiments may refer to each other.
  • functions and/or terms between device embodiments may refer to each other.
  • for example, functions and/or terms between apparatus examples and method examples may cross-reference each other.


Abstract

The present disclosure provides a communication method and apparatus, applied to the field of communication technology. The communication method includes: determining N pieces of first channel status indication information by using a first encoder and the channel information of N downlink transmission layers input to the first encoder, where N is a positive integer greater than 1; determining second channel status indication information by using a second encoder and the N pieces of first channel status indication information input to the second encoder, where the second channel status indication information corresponds to the N downlink transmission layers, and the sequence length corresponding to the second channel status indication information is less than the sum of the sequence lengths corresponding to the N pieces of first channel status indication information; and sending the second channel status indication information. The present disclosure uses AI models to reduce the feedback overhead of channel state information.

Description

A communication method and apparatus
Cross-reference to related applications
This application claims priority to the Chinese patent application filed with the Intellectual Property Office of the People's Republic of China on July 6, 2022, with application number 202210800531.3 and entitled "A communication method and apparatus", the entire contents of which are incorporated herein by reference.
Technical field
The present disclosure relates to the field of communication technology, and in particular, to a communication method and apparatus.
Background
In wireless communication networks such as mobile communication networks, the services supported by the network are increasingly diverse, and therefore the requirements to be met are increasingly diverse. For example, the network needs to support ultra-high rates, ultra-low latency, and/or ultra-large numbers of connections. These characteristics make network planning, network configuration, and/or resource scheduling increasingly complex. In addition, as network capabilities grow, for example through support for ever-higher spectrum, high-order multiple-input multiple-output (MIMO), beamforming, and/or new technologies such as beam management, network energy saving has become a popular research topic. These new requirements, scenarios, and characteristics pose unprecedented challenges to network planning, operation and maintenance, and efficient operation. To meet these challenges, artificial intelligence technology can be introduced into wireless communication networks to achieve network intelligence. On this basis, how to effectively implement artificial intelligence in a network is a problem worth studying.
Summary
The present disclosure provides a communication method and apparatus, with a view to reducing the overhead of channel state information (CSI) feedback through artificial intelligence.
In a first aspect, the present disclosure provides a communication method, including: determining N pieces of first channel status indication information by using a first encoder and the channel information of N downlink transmission layers input to the first encoder, where N is a positive integer greater than 1; determining second channel status indication information by using a second encoder and the N pieces of first channel status indication information input to the second encoder, where the second channel status indication information corresponds to the N downlink transmission layers, and the sequence length corresponding to the second channel status indication information is less than the sum of the sequence lengths corresponding to the N pieces of first channel status indication information; and sending the second channel status indication information.
In the above design, the downlink channel information is compressed twice by a two-level encoder, reducing the resources occupied by the information common to the channel information (CSI) of the multiple downlink transmission layers, which can reduce the CSI feedback overhead.
在一种可能的设计中,还包括:发送用于指示秩的信息,N的取值等于所述秩的值。在秩的值大于1时,可以隐含地表示下行信道信息经由两次压缩。示例性的,用于指示秩的信息可以为秩指示(rank indicator,RI)。这样的设计可以复用已有RI隐含地指示其他内容,能够提高信息利用率。
在一种可能的设计中,所述第二编码器的结构,与所述第二编码器匹配的第二解码器的结构,或,所述第二编码器所属的第二自编码器的结构中的一项或多项对应于所述N的取值。所述第一编码器的结构,与所述第一编码器匹配的第一解码器的结构,或,所述第一编码器所属的第一自编码器的结构中的一项或多项也对应于所述N的取值。前述用于指示秩的信息还可以用于接收该指示秩的信息的设备确定第二解码器和/或第一解码器的结构。
在一种可能的设计中,第二信道状态指示信息的序列长度的范围和以下中的一项或多项:所述第二编码器的结构,与所述第二编码器匹配的第二解码器的结构,或,所述第二编码器所属的第二自编码器的结构,相对应,和/或,和N的取值相对应,也即,第二信道状态指示信息的序列长度的范围可以指示以下中的一项或多项:所述第二编码器的结构,与所述第二编码器匹配的第二解码器的结构,或,所述第二编码器所属的第二自编码器的结构,和/或,N的取值。
在一种可能的设计中,第二信道状态指示信息的序列长度的范围和以下中的一项或多项:所述第一编码器的结构,与所述第一编码器匹配的第一解码器的结构,或,所述第一编码器所属的第一自编码器的结构,相对应,和/或,和N的取值相对应,也即,第二信道状态指示信息的序列长度的范围可以指示以下中的一项或多项:所述第一编码器的结构,与所述第一编码器匹配的第一解码器的结构,或,所述第一编码器所属的第一自编码器的结构,和/或,N的取值。
在一种可能的设计中,还包括:发送第二信息,所述第二信息用于指示如下的一个或多个:所述第 二编码器、与所述第二编码器匹配的第二解码器、所述第二编码器所属的第二自编码器(auto-encoders,AE)。通过这样的设计,使得信息接收方能够快速确定第二编码器匹配的第二解码器,提升解码效率。
在一种可能的设计中,还包括:发送第一信息,所述第一信息用于指示如下的一个或多个:所述第一编码器、与所述第一编码器匹配的第一解码器、所述第一编码器所属的第一自编码器AE。通过这样的设计,使得信息接收方能够快速确定第一编码器匹配的第一解码器,提升解码效率。
下面对第一编码器的结构、输入和/或输出进行详细说明。
在一种可能的设计中,第一编码器可以具备压缩以及量化功能,或者第一编码器可以具备压缩,但不具备量化功能。
在一种可能的设计中,第一编码器包括N个第一人工智能(artificial Intelligence,AI)模型,所述N个下行传输层的信道信息分别输入所述N个第一AI模型,N个第一AI模型中的各个第一AI模型输出1个第一信道状态指示信息。可以理解第一编码器中的1个第一AI模型只用于处理一个下行传输层的信道信息,各个第一AI模型之间相互独立,并行处理,能够提升处理效率。
在另一种可能的设计中,所述第一编码器包括1个第一AI模型,所述N个下行传输层的信道信息串行或并行输入所述1个第一AI模型,该1个第一AI模型对应串行或并行地输出N个第一信道状态指示信息。或者,所述第一编码器包括M个第一AI模型,M为大于1且小于N的整数,所述N个下行传输层的信道信息中的部分下行传输层的信道信息串行或并行输入所述M个第一AI模型中的1个第一AI模型,该1个第一AI模型对应串行或并行地输出部分下行传输层对应的第一信道状态指示信息。该设计中,第一编码器中的1个第一AI模型可以用于处理多个下行传输层的信道信息,可以减少AI模型数量,降低模型的存储资源。
下面对第二编码器的结构、输入和/或输出进行详细说明。
在一种可能的设计中,第二编码器具备压缩以及量化功能。将N个第一信道状态指示信息输入至第二编码器,第二编码器的输出包括第二信道状态指示信息。在该设计中,第二编码器的结构可参照下面几种可能的示例理解。
第一种可能的示例中,第二编码器包括1个第二AI模型,所述1个第二AI模型的输入包括所述N个第一信道状态指示信息,所述1个第二AI模型的输出包括所述第二信道状态指示信息。该设计中,利用1个第二AI模型处理多个第一信道状态指示信息,可以降低模型的存储资源。
第二种可能的示例中,N大于2,所述第二编码器包括N-1个第二AI模型;其中,所述N-1个第二AI模型中的第1个第二AI模型的输入包括所述N个第一信道状态指示信息中的第1个第一信道状态指示信息和第2个第一信道状态指示信息,所述第1个第二AI模型的输出包括第1个第四信道状态指示信息;所述N-1个第二AI模型中的第i个第二AI模型的输入包括第i-1个第二AI模型输出的第i-1个第四信道状态指示信息和所述N个第一信道状态指示信息中第i+1个第一信道状态指示信息,所述第i个第二AI模型的输出包括第i个第四信道状态指示信息,2≤i≤N-1,i为正整数;其中,所述第N-1个第二AI模型输出的第N-1个第四信道状态指示信息为所述第二信道状态指示信息。这样的设计实现分级压缩,可以采用输入端口数为2输出端口数为1的多个第二AI模型来实现N为大于2的任意数的第二编码器。其中,多个第二AI模型可以相同或者不同。以多个第二AI模型相同为例,可以通过训练一个第二AI模型并对其调用多次来实现该第二编码器,从而降低CSI的反馈开销的同时减少所需训练的第二AI模型的数量。
第三种可能的示例中,N大于2,所述第二编码器包括K个第二AI模型,K为大于1且小于N的整数,所述N个第一信道状态指示信息中的部分第一信道状态指示信息并行输入所述K个第二AI模型中的1个第一AI模型。可选的,K为2。这样的设计实现分级压缩,可以采用输入端口数为2至N-1中一个整数输出端口数为1的多个第二AI模型来实现N为大于2的任意数的第二编码器,这样,训练若干个输入端口数不同的第二AI模型即可实现第二编码器,从而降低CSI的反馈开销的同时减少所需训练的第二AI模型的数量,并可以减少实现第二编码器的第二AI模型的数量,提高编码速度。
在另一种可能的设计中,第二编码器具备压缩,但不具备量化功能。将N个第一信道状态指示信息输入至第二编码器,第二编码器的输出包括第三信道状态指示信息。进而再对第三信道状态指示信息进行量化处理,即可得到第二信道状态指示信息。在该设计中,第二编码器的结构可参照下面几种可能的示例理解。
第一种可能的示例中,第二编码器包括1个第二AI模型,所述1个第二AI模型的输入包括所述N 个第一信道状态指示信息,所述1个第二AI模型的输出包括所述第三信道状态指示信息。该设计中,利用1个第二AI模型处理多个第一信道状态指示信息,可以降低模型的存储资源。
第二种可能的示例中,N大于2,所述第二编码器包括N-1个第二AI模型;其中,所述N-1个第二AI模型中的第1个第二AI模型的输入包括所述N个第一信道状态指示信息中的第1个第一信道状态指示信息和第2个第一信道状态指示信息,所述第1个第二AI模型的输出包括第1个第四信道状态指示信息;所述N-1个第二AI模型中的第i个第二AI模型的输入包括第i-1个第二AI模型输出的第i-1个第四信道状态指示信息和所述N个第一信道状态指示信息中第i+1个第一信道状态指示信息,所述第i个第二AI模型的输出包括第i个第四信道状态指示信息,2≤i≤N-1,i为正整数;其中,所述第N-1个第二AI模型输出的第N-1个第四信道状态指示信息为所述第三信道状态指示信息。
这样的设计实现分级压缩,可以采用输入端口数为2输出端口数为1的多个第二AI模型来实现输入端口数(N)为大于2的任意数的第二编码器,其中,多个第二AI模型可以相同或者不同。以多个第二AI模型相同为例,可以训练一个第二AI模型调用多次来实现第二编码器,从而降低CSI的反馈开销的同时减少所需训练的第二AI模型的数量。
第三种可能的示例中,N大于2时,所述第二编码器包括K个第二AI模型,K为大于1且小于N的整数,所述N个第一信道状态指示信息中的部分第一信道状态指示信息并行输入所述K个第二AI模型中的1个第二AI模型。这样的设计可以采用输入端口数为2至N-1中一个整数输出端口数为1的多个第二AI模型来实现N为大于2的任意数的第二编码器,这样,训练若干个输入端口数不同的第二AI模型即可实现第二编码器,从而降低CSI的反馈开销的同时减少所需训练的第二AI模型的数量,并可以减少实现第二编码器的第二AI模型的数量,提高编码速度。可选的,K为2。
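The cascaded second encoder of the second example above (N-1 two-input, one-output models, where the i-th model consumes the (i-1)-th intermediate output together with the (i+1)-th first channel status indication) reduces to a left fold over the N inputs. The sketch below uses a hypothetical `pair_compress` stand-in for one trained second AI model; the averaging it performs is not the disclosed operation and merely keeps the example runnable.

```python
# Sketch of the cascaded second encoder: N-1 identical two-input models
# applied pairwise. `pair_compress` is a hypothetical placeholder.
import numpy as np

def pair_compress(a, b):
    # placeholder: a real second AI model would jointly compress both inputs
    return 0.5 * (a + b)

def second_encoder(first_csi_list):
    """Fold N first CSI indications into one output via N-1 cascade steps."""
    assert len(first_csi_list) > 2, "the cascade example assumes N > 2"
    acc = pair_compress(first_csi_list[0], first_csi_list[1])
    for c in first_csi_list[2:]:
        # the i-th model takes the (i-1)-th output and the (i+1)-th input
        acc = pair_compress(acc, c)
    return acc

out = second_encoder([np.array([4.0]), np.array([0.0]),
                      np.array([2.0]), np.array([6.0])])
```

Because every step reuses the same two-input model, a single trained second AI model invoked N-1 times realizes the second encoder for any N greater than 2, which is the training-cost saving the design claims.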
进一步,在一种可能的设计中,在确定N个第一信道状态指示信息之前,还包括:获取下行参考信号;根据所述下行参考信号,确定所述N的取值以及所述N个下行传输层的信道信息。
In a second aspect, the present disclosure provides a communication method, applied to an access network device, including: obtaining second channel status indication information; determining N pieces of first channel status indication information by using a second decoder and the second channel status indication information, where the sum of the sequence lengths corresponding to the N pieces of first channel status indication information is greater than the sequence length corresponding to the second channel status indication information, and N is a positive integer greater than 1; and determining the channel information of the N downlink transmission layers by using a first decoder and the N pieces of first channel status indication information input to the first decoder.
The above design is used in matching with the first aspect, and can decompress twice the channel information that has been compressed twice, thereby recovering the corresponding channel information.
在一种可能的设计中,第二信道状态指示信息的序列长度的范围和以下中的一项或多项:所述第二编码器的结构,与所述第二编码器匹配的第二解码器的结构,或,所述第二编码器所属的第二自编码器的结构,相对应,和/或,和N的取值相对应,也即,第二信道状态指示信息的序列长度的范围可以指示以下中的一项或多项:所述第二编码器的结构,与所述第二编码器匹配的第二解码器的结构,或,所述第二编码器所属的第二自编码器的结构,和/或,N的取值。这样,通过比较第二信道状态指示信息的序列长度和相应的阈值进行比较确定第二信道状态指示信息的序列长度所在的范围,即可确定以下中的一项或多项:所述第二编码器的结构,与所述第二编码器匹配的第二解码器的结构,或,所述第二编码器所属的第二自编码器的结构,和/或,N的取值。
在一种可能的设计中,第二信道状态指示信息的序列长度的范围和以下中的一项或多项:所述第一编码器的结构,与所述第一编码器匹配的第一解码器的结构,或,所述第一编码器所属的第一自编码器的结构,相对应,和/或,和N的取值相对应,也即,第二信道状态指示信息的序列长度的范围可以指示以下中的一项或多项:所述第一编码器的结构,与所述第一编码器匹配的第一解码器的结构,或,所述第一编码器所属的第一自编码器的结构,和/或,N的取值。这样,通过比较第二信道状态指示信息的序列长度和相应的阈值进行比较确定第二信道状态指示信息的序列长度所在的范围,即可确定以下中的一项或多项:所述第一编码器的结构,与所述第一编码器匹配的第一解码器的结构,或,所述第一编码器所属的第一自编码器的结构,和/或,N的取值。
在一种可能的设计中,还包括:获取用于指示秩的信息,所述N的取值等于所述秩的取值。所述第二解码器的结构,与所述第二解码器匹配的第二编码器的结构,或,所述第二编码器所属的第二自编码器的结构中的一项或多项对应于所述N的取值。所述第一解码器的结构,与所述第一解码器匹配的第一编码器的结构,或,所述第一编码器所属的第一自编码器的结构中的一项或多项也对应于所述N 的取值。
在一种可能的设计中,还包括:获取第二信息,所述第二信息用于指示如下的一个或多个:所述第二编码器、与所述第二编码器匹配的解码器、所述第二编码器所属的第二自编码器AE。通过这样的设计,使得信息接收方能够快速确定第二编码器匹配的第二解码器,提升解码效率。
在一种可能的设计中,还包括:获取第一信息,所述第一信息用于指示如下的一个或多个:所述第一编码器、与所述第一编码器匹配的解码器、所述第一编码器所属的第一自编码器AE。
下面对第二解码器的结构、输入和/或输出进行详细说明。
在一种可能的设计中,第二解码器具备解压缩以及解量化功能。将第二信道状态指示信息输入至第二解码器,第二解码器的输出包括N个第一信道状态指示信息。在该设计中,第二解码器的结构可参照下面几种可能的示例理解。
第一种可能的示例中,第二解码器包括1个第三AI模型,所述1个第三AI模型的输入包括所述第二信道状态指示信息,所述1个第三AI模型的输出包括所述N个第一信道状态指示信息。
第二种可能的示例中,N大于2,第二解码器包括N-1个第三AI模型,所述N-1个第三AI模型中的第N-1个第三AI模型的输入包括第二信道状态指示信息,输出包括第N-2个第四信道状态指示信息和第N个第一信道状态指示信息,第N-i个第三AI模型的输入包括第N-i个第四信道状态指示信息,所述第N-i个第三AI模型的输出包括所述N个第一信道状态指示信息中的第N-i+1个第一信道状态指示信息和第N-i-1个第四信道状态指示信息,第1个第三AI模型的输入包括第1个第四信道状态指示信息,所述第1个第三AI模型的输出包括所述N个第一信道状态指示信息中的第1个第一信道状态指示信息和第2个第一信道状态指示信息;其中,i取遍2至N-2的正整数。可以理解,N-i、N-i-1均不可以取0。所述第二信道状态指示信息即为第N-1个第四信道状态信息。
第三种可能的示例中,N大于2,第二解码器包括K个第三AI模型,K为大于1且小于N-1的正整数,所述K个第三AI模型中至少1个第三AI模型用于恢复所述N个第一信道状态指示信息中的至少2个第一信道状态指示信息。
在另一种可能的设计中,第二解码器具备压缩,但不具备解量化功能。获取到第二信道状态指示信息后,可先对第二信道状态指示信息进行解量化,得到第三信道状态指示信息。进而将第三信道状态指示信息输入至第二解码器,第二解码器的输出包括N个第一信道状态指示信息。在该设计中,第二解码器的结构可参照下面几种可能的示例理解。
第一种可能的示例中,第二解码器包括1个第三AI模型,所述1个第三AI模型的输入包括所述第三信道状态指示信息,所述1个第三AI模型的输出包括所述N个第一信道状态指示信息。
第二种可能的示例中,N大于2,第二解码器包括N-1个第三AI模型,所述N-1个第三AI模型中的第N-i个第三AI模型的输入包括第N-i个第四信道状态指示信息,所述第N-i个第三AI模型的输出包括如下信息中的至少一个:所述N个第一信道状态指示信息中的第N-i个第一信道状态指示信息、所述N个第一信道状态指示信息中的第N-i+1个第一信道状态指示信息、第N-i-1个第四信道状态指示信息;其中,i取遍1至N-1的正整数,所述第三信道状态指示信息为第N-1个第四信道状态信息。可以理解,N-i、N-i-1均不可以取0,例如i为N-1时,计算得到N-i-1为0,此情况下第N-i个第三AI模型的输出不包括第N-i-1个第四信道状态指示信息,而可以包括所述N个第一信道状态指示信息中的第N-i个第一信道状态指示信息以及第N-i+1个第一信道状态指示信息。
第三种可能的示例中,N大于2,第二解码器包括K个第三AI模型,K为大于1且小于N-1的正整数,所述K个第三AI模型中每个第三AI模型用于恢复所述N个第一信道状态指示信息中的部分第一信道状态指示信息。
下面对第一解码器的结构、输入和/或输出进行详细说明。
在一种可能的设计中,第一解码器的功能匹配于第一编码器,如果第一编码器具备压缩以及量化功能,第一解码器可以具备解压缩以及解量化功能;或者如果第一编码器可以具备压缩,但不具备量化功能,则第一解码器可以具备解压缩,但不具备解量化功能。
在一种可能的设计中,第一解码器包括N个第四AI模型,所述N个第一信道状态指示信息分别输入所述N个第四AI模型。
在另一种可能的设计中,所述第一解码器包括1个第四AI模型,所述N个第一信道状态指示信息串行或并行输入所述1个第四AI模型;或者,所述第一解码器包括M个第四AI模型,M为大于1且 小于N的整数,所述N个第一信道状态指示信息的部分第一信道状态指示信息串行或并行输入所述M个第四AI模型中的1个第一AI模型。
第三方面,本公开提供一种通信装置,该通信装置可以是终端设备,也可以是终端设备中的装置、模块或芯片等,或者是能够和终端设备匹配使用的装置。一种设计中,该通信装置可以包括执行第一方面中所描述的方法/操作/步骤/动作所一一对应的模块,该模块可以是硬件电路,也可是软件,也可以是硬件电路结合软件实现。一种设计中,该通信装置可以包括处理模块和通信模块。
一种示例:
处理模块,用于利用第一编码器和输入所述第一编码器的N个下行传输层的信道信息,确定N个第一信道状态指示信息;其中,N为大于1的正整数;以及利用第二编码器和输入所述第二编码器的所述N个第一信道状态指示信息,确定第二信道状态指示信息;其中,所述第二信道状态指示信息对应所述N个下行传输层,所述第二信道状态指示信息对应的序列长度小于所述N个第一信道状态指示信息对应的序列长度之和;
通信模块,用于发送所述第二信道状态指示信息。
在一种可能的设计中,通信模块,还用于发送用于指示秩的信息,N的取值等于所述秩的值。在秩的值大于1时,可以隐含地表示下行信道信息经由两次压缩。示例性的,用于指示秩的信息可以为秩指示(rank indicator,RI)。
在一种可能的设计中,所述第二编码器的结构,与所述第二编码器匹配的第二解码器的结构,或,所述第二编码器所属的第二自编码器的结构中的一项或多项对应于所述N的取值。所述第一编码器的结构,与所述第一编码器匹配的第一解码器的结构,或,所述第一编码器所属的第一自编码器的结构中的一项或多项也对应于所述N的取值。前述用于指示秩的信息还可以用于接收该指示秩的信息的设备确定第二解码器和/或第一解码器的结构。
在一种可能的设计中,通信模块,还用于发送第二信息,所述第二信息用于指示如下的一个或多个:所述第二编码器、与所述第二编码器匹配的解码器、所述第二编码器所属的第二自编码器AE。
在一种可能的设计中,通信模块,还用于发送第一信息,所述第一信息用于指示如下的一个或多个:所述第一编码器、与所述第一编码器匹配的解码器、所述第一编码器所属的第一自编码器AE。
有关第一编码器的结构、输入和/或输出,可以参照第一方面中的内容理解,本公开对此不再进行赘述。
有关第二编码器的结构、输入和/或输出,可以参照第一方面中的内容理解,本公开对此不再进行赘述。
进一步,在一种可能的设计中,所述处理模块,还用于在确定N个第一信道状态指示信息之前,通过通信模块获取下行参考信号,并根据所述下行参考信号,确定所述N的取值以及所述N个下行传输层的信道信息。
第四方面,本公开提供一种通信装置,该通信装置可以是接入网设备,也可以是接入网设备中的装置、模块或芯片等,或者是能够和接入网设备匹配使用的装置。一种设计中,该通信装置可以包括执行第二方面中所描述的方法/操作/步骤/动作所一一对应的模块,该模块可以是硬件电路,也可是软件,也可以是硬件电路结合软件实现。一种设计中,该通信装置可以包括处理模块和通信模块。
一种示例:
通信模块,用于获取第二信道状态指示信息;
处理模块,用于利用第二解码器和所述第二信道状态指示信息,确定N个第一信道状态指示信息,所述N个第一信道状态指示信息对应的序列长度之和大于所述第二信道状态指示信息对应的序列长度,N为大于1的正整数;以及利用第一解码器和输入所述第一解码器的所述N个第一信道状态指示信息,确定所述N个下行传输层的信道信息。
在一种可能的设计中,通信模块,还用于获取用于指示秩的信息,所述N的取值等于所述秩的取值。所述第二解码器的结构,与所述第二解码器匹配的第二编码器的结构,或,所述第二编码器所属的第二自编码器的结构中的一项或多项对应于所述N的取值。所述第一解码器的结构,与所述第一解码器匹配的第一编码器的结构,或,所述第一编码器所属的第一自编码器的结构中的一项或多项也对应于所述N的取值。
在一种可能的设计中,通信模块,还用于获取第二信息,所述第二信息用于指示如下的一个或多个: 所述第二编码器、与所述第二编码器匹配的解码器、所述第二编码器所属的第二自编码器AE。
在一种可能的设计中,通信模块,还用于获取第一信息,所述第一信息用于指示如下的一个或多个:所述第一编码器、与所述第一编码器匹配的解码器、所述第一编码器所属的第一自编码器AE。
有关第二解码器的结构、输入和/或输出,可以参照第一方面中的内容理解,本公开对此不再进行赘述。
有关第一解码器的结构、输入和/或输出,可以参照第一方面中的内容理解,本公开对此不再进行赘述。
第五方面,本公开提供一种通信装置,所述通信装置包括处理器,用于实现上述第一方面所描述的方法。处理器与存储器耦合,存储器用于存储指令和数据,所述处理器执行所述存储器中存储的指令时,可以实现上述第一方面描述的方法。可选的,所述通信装置还可以包括存储器;所述通信装置还可以包括通信接口,所述通信接口用于该装置与其它设备进行通信,示例性的,通信接口可以是收发器、电路、总线、模块、管脚或其它类型的通信接口。
在一种可能的设备中,该通信装置包括:
存储器,用于存储指令;
处理器,用于利用第一编码器和输入所述第一编码器的N个下行传输层的信道信息,确定N个第一信道状态指示信息;其中,N为大于1的正整数;以及利用第二编码器和输入所述第二编码器的所述N个第一信道状态指示信息,确定第二信道状态指示信息;其中,所述第二信道状态指示信息对应所述N个下行传输层,所述第二信道状态指示信息对应的序列长度小于所述N个第一信道状态指示信息对应的序列长度之和;
通信接口,用于发送所述第二信道状态指示信息。一种可能的设计中,第一编码器和第二编码器采用软件功能模块的形式实现。另一种可能的设计中,第一编码器和第二编码器可以采用硬件(如芯片、硬件电路等)的形式实现。第一编码器和第二编码器可以集成在处理器中,也可以单独地部署在该通信装置中,换言之,该通信装置还可以包括第一编码器和第二编码器。
第六方面,本公开提供一种通信装置,所述通信装置包括处理器,用于实现上述第二方面所描述的方法。处理器与存储器耦合,存储器用于存储指令和数据,所述处理器执行所述存储器中存储的指令时,可以实现上述第二方面描述的方法。可选的,所述通信装置还可以包括存储器;所述通信装置还可以包括通信接口,所述通信接口用于该装置与其它设备进行通信,示例性的,通信接口可以是收发器、电路、总线、模块、管脚或其它类型的通信接口。
在一种可能的设备中,该通信装置包括:
存储器,用于存储指令;
通信接口,用于获取第二信道状态指示信息;
处理器,用于利用第二解码器和所述第二信道状态指示信息,确定N个第一信道状态指示信息,所述N个第一信道状态指示信息对应的序列长度之和大于所述第二信道状态指示信息对应的序列长度,N为大于1的正整数;以及利用第一解码器和输入所述第一解码器的所述N个第一信道状态指示信息,确定所述N个下行传输层的信道信息。
在一种可能的设计中,第二解码器和第一解码器采用软件功能模块的形式实现。在另一种可能的设计中,第二解码器和第一解码器可以采用硬件(如芯片、硬件电路等)的形式实现。第二解码器和第一解码器可以集成在处理器中,也可以单独地部署在该通信装置中,换言之,该通信装置还可以包括第二解码器和第一解码器。
第七方面,本公开提供一种通信系统,该通信系统包括终端设备和接入网设备。具体地,终端设备和接入网设备之间的交互可参照如下内容理解:
终端设备利用第一编码器和输入第一编码器的N个下行传输层的信道信息,确定N个第一信道状态指示信息;其中,N为大于1的正整数;终端设备利用第二编码器和输入第二编码器的所述N个第一信道状态指示信息,确定第二信道状态指示信息;其中,所述第二信道状态指示信息对应所述N个下行传输层,所述第二信道状态指示信息对应的序列长度小于所述N个第一信道状态指示信息对应的序列长度之和;
终端设备向接入网设备发送所述第二信道状态指示信息,接入网设备接收来自终端设备的第二信道状态指示信息;
接入网设备利用第二解码器和所述第二信道状态指示信息,确定N个第一信道状态指示信息,N为大于1的正整数;接入网设备利用第一解码器和输入第一解码器的所述N个第一信道状态指示信息,确定所述N个下行传输层的信道信息。
其中,终端设备侧实施的方案可以参照第一方面描述的设计理解。类似地,接入网设备侧实施的方案也可以参照第二方面描述的设计理解,本公开对此均不再进行赘述。
第八方面,本公开提供了一种通信系统,包括如第三方面或第五方面中所描述的通信装置;以及如第四方面或第六方面所描述的通信装置。
第九方面,本公开还提供了一种计算机程序,当所述计算机程序在计算机上运行时,使得所述计算机执行上述第一方面至第二方面中任一方面提供的方法。
第十方面,本公开还提供了一种计算机程序产品,包括指令,当所述指令在计算机上运行时,使得计算机执行上述第一方面至第二方面中任一方面提供的方法。
第十一方面,本公开还提供了一种计算机可读存储介质,所述计算机可读存储介质中存储有计算机程序或指令,当所述计算机程序或者指令在计算机上运行时,使得所述计算机执行上述第一方面至第二方面中任一方面提供的方法。
第十二方面,本公开还提供了一种芯片,所述芯片用于读取存储器中存储的计算机程序,执行上述第一方面至第二方面中任一方面提供的方法,或者,所述芯片包括用于执行上述第一方面至第二方面中任一方面提供的方法的电路。
第十三方面,本公开还提供了一种芯片系统,该芯片系统包括处理器,用于支持装置实现上述第一方面至第二方面中任一方面提供的方法。在一种可能的设计中,所述芯片系统还包括存储器,所述存储器用于保存该装置必要的程序和数据。该芯片系统可以由芯片构成,也可以包含芯片和其他分立器件。
如上第二方面至第十三方面的任一方面所提供的方案的效果,可参考第一方面中的相应描述。
Brief description of the drawings
FIG. 1 is a schematic structural diagram of a communication system;
FIG. 2A is a schematic diagram of a neuron structure;
FIG. 2B is a schematic diagram of the layer relationships of a neural network;
FIG. 2C is a schematic diagram of an AI application framework provided by the present disclosure;
FIG. 3 is a schematic structural diagram of another communication system;
FIG. 4A to FIG. 4D are schematic diagrams of several network architectures;
FIG. 5 is one of the schematic flowcharts of the communication method provided by the present disclosure;
FIG. 6A is one of the schematic flowcharts of encoding and decoding provided by the present disclosure;
FIG. 6B is one of the schematic flowcharts of encoding and decoding provided by the present disclosure;
FIG. 6C is one of the schematic flowcharts of encoding and decoding provided by the present disclosure;
FIG. 7 is one of the schematic flowcharts of the communication method provided by the present disclosure;
FIG. 8A is one of the schematic flowcharts of encoding and decoding provided by the present disclosure;
FIG. 8B is one of the schematic flowcharts of encoding and decoding provided by the present disclosure;
FIG. 8C is one of the schematic flowcharts of encoding and decoding provided by the present disclosure;
FIG. 9 is one of the schematic flowcharts of encoding and decoding provided by the present disclosure;
FIG. 10 is one of the schematic structural diagrams of the communication apparatus provided by the present disclosure;
FIG. 11 is one of the schematic structural diagrams of the communication apparatus provided by the present disclosure.
具体实施方式
为了使本公开的目的、技术方案和优点更加清楚,下面将结合附图对本公开作进一步地详细描述。
本公开如下涉及的至少一个(项),指示一个(项)或多个(项)。多个(项),是指两个(项)或两个(项)以上。“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。字符“/”一般表示前后关联对象是一种“或”的关系。另外,应当理解,尽管在本公开中可能采用术语第一、第二等来描述各对象,但这些对象不应限于这些术语。这些术语仅用来将各对象彼此区分开。
本公开如下描述中所提到的术语“包括”和“具有”以及它们的任何变形,意图在于覆盖不排他的包含。例如包含了一系列步骤或单元的过程、方法、系统、产品或设备没有限定于已列出的步骤或单元,而是可选地还包括其他没有列出的步骤或单元,或可选地还包括对于这些过程、方法、产品或设备固有的其它步骤或单元。需要说明的是,本公开中,“示例性的”或者“例如”等词用于表示作例子、例证或说明。本公开中被描述为“示例性的”或者“例如”的任何方法或设计方案不应被解释为比其它方法或设计方案更优选或更具优势。确切而言,使用“示例性的”或者“例如”等词旨在以具体方式呈现相关概念。
本公开提供的技术可以应用于各种通信系统,例如,该通信系统可以是第三代(3rd generation,3G)通信系统(例如通用移动通信系统(universal mobile telecommunication system,UMTS))、第四代(4th generation,4G)通信系统(例如长期演进(long term evolution,LTE)系统)、第五代(5th generation,5G)通信系统、全球互联微波接入(worldwide interoperability for microwave access,WiMAX)或者无线局域网(wireless local area network,WLAN)系统、或者多种系统的融合系统,或者是未来的通信系统,例如6G通信系统等。其中,5G通信系统还可以称为新无线(new radio,NR)系统。
通信系统中的一个网元可以向另一个网元发送信号或从另一个网元接收信号。其中信号可以包括信息、信令或者数据等。其中,网元也可以被替换为实体、网络实体、设备、通信设备、通信模块、节点、通信节点等等,本公开中以网元为例进行描述。例如,通信系统可以包括至少一个终端设备和至少一个接入网设备。接入网设备可以向终端设备发送下行信号,和/或终端设备可以向接入网设备发送上行信号。此外,可以理解的是,若通信系统中包括多个终端设备,多个终端设备之间也可以互发信号,即信号的发送网元和信号的接收网元均可以是终端设备。
本申请实施例提供的信息生成方法可以应用于5G、6G、卫星通信等无线通信系统中。参见图1,图1是本申请实施例提供的无线通信系统的一简化示意图。如图1所示,该无线通信系统包括无线接入网100。无线接入网100可以是下一代(例如6G或更高版本)无线接入网,或传统(例如5G、4G、3G或2G)无线接入网。一个或多个通信设备(120a-120j,统称为120)可以相互连接或连接到无线接入网100中的一个或多个网络设备(110a、110b,统称为110)。可选的,图1只是示意图,该无线通信系统中还可以包括其它设备,如还可以包括核心网设备、无线中继设备和/或无线回传设备等,在图1中未画出。
可选的,在实际应用中,该无线通信系统可以同时包括多个网络设备(也称为接入网设备),也可以同时包括多个通信设备。一个网络设备可以同时服务于一个或多个通信设备。一个通信设备也可以同时接入一个或多个网络设备。本申请实施例对该无线通信系统中包括的通信设备和网络设备的数量不做限定。
其中,网络设备可以是网络侧的一种用于发射或接收信号的实体。网络设备可以为通信设备通过无线方式接入到该无线通信系统中的接入设备,如网络设备可以是基站。基站可以广义的覆盖如下中的各种名称,或与如下名称进行替换,比如:节点B(NodeB)、演进型基站(evolved NodeB,eNB)、下一代基站(next generation NodeB,gNB)、开放无线接入网(open radio access network,O-RAN)中的接入网设备、中继站、接入点、传输点(transmitting and receiving point,TRP)、发射点(transmitting point,TP)、主站MeNB、辅站SeNB、多制式无线(MSR)节点、家庭基站、网络控制器、接入节点、无线节点、接入点(AP)、传输节点、收发节点、基带单元(BBU)、射频拉远单元(RRU)、有源天线单元(AAU)、射频头(RRH)、中心单元(CU)、分布单元(DU)、无线单元(radio unit,RU)、集中单元控制面(CU control plane,CU-CP)节点、集中单元用户面(CU user plane,CU-UP)节点、定位节点等。基站可以是宏基站、微基站、中继节点、施主节点或类似物,或其组合。网络设备还可以指用于设置于前述设备或装置内的通信模块、调制解调器或芯片。网络设备还可以是移动交换中心以及设备到设备(Device-to-Device,D2D)、车辆外联(vehicle-to-everything,V2X)、机器到机器(machine-to-machine,M2M)通信中承担基站功能的设备、6G网络中的网络侧设备、未来的通信系统中承担基站功能的设备等。网络设备可以支持相同或不同接入技术的网络。本申请的实施例对网络设备所采用的具体技术和具体设备形态不做限定。
网络设备可以是固定的,也可以是移动的。例如,基站110a、110b是静止的,并负责来自通信设备120的一个或多个小区中的无线传输和接收。图1中示出的直升机或无人机120i可以被配置成充当移动基站,并且一个或多个小区可以根据移动基站120i的位置移动。在其他示例中,直升机或无人机(120i)可以被配置成用作与基站110b通信的通信设备。
本公开中,用于实现如上接入网络功能的通信装置可以是接入网设备,也可以是具有接入网络的部分功能的网络设备,也可以是能够支持实现接入网络功能的装置,例如芯片系统,硬件电路、软件模块、或硬件电路加软件模块,该装置可以被安装在接入网设备中或者和接入网设备匹配使用。本公开的方法中,以用于实现接入网设备功能的通信装置是接入网设备为例进行描述。
通信设备可以是用户侧的一种用于接收或发射信号的实体,如手机。通信设备可以用于连接人、物和机器。通信设备可通过网络设备与一个或多个核心网进行通信。通信设备包括具有无线连接功能的手持式设备、连接到无线调制解调器的其他处理设备或车载设备等。通信设备可以是便携式、袖珍式、手持式、计算机内置的或者车载的移动装置。通信设备120可以广泛应用于各种场景,例如蜂窝通信、设备到设备D2D、车到所有V2X、端到端P2P、机器到机器M2M、机器类型通信MTC、物联网IOT、虚拟现实VR、增强现实AR、工业控制、自动驾驶、远程医疗、智能电网、智能家具、智能办公、智能穿戴、智能交通、智慧城市、无人机、机器人、遥感、被动传感、定位、导航与跟踪、自主交付与移动等。通信设备120的一些举例为:3GPP标准的用户设备(UE)、固定设备、移动设备、手持设备、可穿戴设备、蜂窝电话、智能电话、会话发起协议(SIP)电话、笔记本电脑、个人计算机、智能书、车辆、卫星、全球定位系统(GPS)设备、目标跟踪设备、无人机、直升机、飞行器、船只、遥控设备、智能家居设备、工业设备、个人通信业务(personal communication service,PCS)电话、无线本地环路(wireless local loop,WLL)站、个人数字助理(personal digital assistant,PDA)、无线网络摄像头、平板电脑、掌上电脑、移动互联网设备(mobile internet device,MID)、可穿戴设备如智能手表、虚拟现实(virtual reality,VR)设备、增强现实(augmented reality,AR)设备、工业控制(industrial control)中的无线终端、车联网系统中的终端、无人驾驶(self driving)中的无线终端、智能电网(smart grid)中的无线终端、运输安全(transportation safety)中的无线终端、智慧城市(smart city)中的无线终端如智能加油器,高铁上的终端设备以及智慧家庭(smart home)中的无线终端,如智能音响、智能咖啡机、智能打印机等。通信设备120可以为以上各种场景中的无线设备或用于设置于无线设备的装置,例如,上述设备中的通信模块、调制解调器或芯片等。通信设备也可以称为终端、终端设备、用户设备(user equipment,UE)、移动台(mobile station,MS)、移动终端(mobile terminal,MT)等。通信设备还可以是未来的无线通信系统中的通信设备。通信设备可以用于专用网设备或者通用设备中。本申请的实施例对通信设备所采用的具体技术和具体设备形态不做限定。
可选的,通信设备可以用于充当基站。例如,UE可以充当调度实体,其在V2X、D2D或P2P等中的UE之间提供侧行链路信号。如图1所示,蜂窝电话120a和汽车120b利用侧行链路信号彼此通信。蜂窝电话120a和智能家居设备120e之间通信,而无需通过基站110b中继通信信号。
本公开中,用于实现通信设备功能的通信装置可以是终端设备,也可以是具有以上通信设备的部分功能的终端设备,也可以是能够支持实现以上通信设备的功能的装置,例如芯片系统,该装置可以被安装在终端设备中或者和终端设备匹配使用。本公开中,芯片系统可以由芯片构成,也可以包括芯片和其他分立器件。本公开提供的技术方案中,以通信装置是终端设备或UE为例进行描述。
可选的,无线通信系统通常由小区组成,基站提供小区的管理,基站向小区中多个移动台(mobile station,MS)提供通信服务。其中基站包含基带单元(baseband unit,BBU)和远端射频单元(remote radio unit,RRU)。BBU和RRU可以放置在不同的地方,例如:RRU拉远,放置于高话务量的区域,BBU放置于中心机房。BBU和RRU也可以放置在同一机房。BBU和RRU也可以为一个机架下的不同部件。可选的,一个小区可以对应于一个载波或成员载波。
可以理解的是,本公开可以应用在网络设备和通信设备之间,网络设备和网络设备之间,或,通信设备和通信设备之间,也即,主设备和次设备之间,主设备可以为网络设备或通信设备,主设备为网络设备时,次设备可以为另一网络设备或通信设备,主设备为通信设备时,次设备可以为另一通信设备。
以下以主设备为网络设备,如,接入网设备,次设备为通信设备,如终端设备,为例进行方案的描述。其中,下行对应的通信方向为主设备向次设备的发送,上行对应的通信方向为次设备向主设备的发送。
接入网设备和终端设备之间的协议层结构
接入网设备和终端设备之间的通信遵循一定的协议层结构。该协议层结构可以包括控制面协议层结构和用户面协议层结构。例如,控制面协议层结构可以包括无线资源控制(radio resource control,RRC)层、分组数据汇聚层协议(packet data convergence protocol,PDCP)层、无线链路控制(radio link control, RLC)层、媒体接入控制(medium access control,MAC)层和物理层等协议层的功能。例如,用户面协议层结构可以包括PDCP层、RLC层、MAC层和物理层等协议层的功能,在一种可能的实现中,PDCP层之上还可以包括业务数据适配协议(service data adaptation protocol,SDAP)层。
可选的,接入网设备和终端之间的协议层结构还可以包括人工智能(artificial intelligence,AI)层,用于传输AI功能相关的数据。
以接入网设备和终端设备之间的数据传输为例,数据传输需要经过用户面协议层,比如经过SDAP层、PDCP层、RLC层、MAC层、物理层。其中,SDAP层、PDCP层、RLC层、MAC层和物理层也可以统称为接入层。根据数据的传输方向分为发送或接收,上述每层又分为发送部分和接收部分。以下行数据传输为例,PDCP层自上层取得数据后,将数据传送到RLC层与MAC层,再由MAC层生成传输块,然后通过物理层进行无线传输。数据在各个层中进行相对应的封装。例如,某一层从该层的上层收到的数据视为该层的服务数据单元(service data unit,SDU),经过该层封装后成为协议数据单元(protocol data unit,PDU),再传递给下一个层。
示例性的,终端设备还可以具有应用层和非接入层。其中,应用层可以用于向终端设备中所安装的应用程序提供服务,比如,终端设备接收到的下行数据可以由物理层依次传输到应用层,进而由应用层提供给应用程序;又比如,应用层可以获取应用程序产生的数据,并将数据依次传输到物理层,发送给其它通信装置。非接入层可以用于转发用户数据,比如将从应用层接收到的上行数据转发给SDAP层或者将从SDAP层接收到的下行数据转发给应用层。
接入网设备的结构
接入网设备可以包括集中式单元(central unit,CU)和分布式单元(distributed unit,DU)。多个DU可以由一个CU集中控制。作为示例,CU和DU之间的接口可以称为F1接口。其中,控制面(control plane,CP)接口可以为F1-C,用户面(user plane,UP)接口可以为F1-U。CU和DU可以根据无线网络的协议层划分:比如,PDCP层及以上协议层的功能设置在CU,PDCP层以下协议层(例如RLC层和MAC层等)的功能设置在DU;又比如,PDCP层以上协议层的功能设置在CU,PDCP层及以下协议层的功能设置在DU。
可以理解的是,上述对CU和DU的处理功能按照协议层的划分仅仅是一种举例,也可以按照其他的方式进行划分,例如可以将CU或者DU划分为具有更多协议层的功能,又例如将CU或DU还可以划分为具有协议层的部分处理功能。在一种设计中,将RLC层的部分功能和RLC层以上的协议层的功能设置在CU,将RLC层的剩余功能和RLC层以下的协议层的功能设置在DU。在另一种设计中,还可以按照业务类型或者其他系统需求对CU或者DU的功能进行划分,例如按时延划分,将处理时间需要满足时延要求的功能设置在DU,不需要满足该时延要求的功能设置在CU。在另一种设计中,CU也可以具有核心网的一个或多个功能。示例性的,CU可以设置在网络侧方便集中管理。在另一种设计中,将DU的RU拉远设置。其中,RU具有射频功能。
可选的,DU和RU可以在物理层(physical layer,PHY)进行划分。例如,DU可以实现PHY层中的高层功能,RU可以实现PHY层中的低层功能。其中,用于发送时,PHY层的功能可以包括添加循环冗余校验(cyclic redundancy check,CRC)码、信道编码、速率匹配、加扰、调制、层映射、预编码、资源映射、物理天线映射、和/或射频发送功能。用于接收时,PHY层的功能可以包括CRC、信道解码、解速率匹配、解扰、解调、解层映射、信道检测、资源解映射、物理天线解映射、和/或射频接收功能。其中,PHY层中的高层功能可以包括PHY层的一部分功能,例如该部分功能更加靠近MAC层,PHY层中的低层功能可以包括PHY层的另一部分功能,例如该部分功能更加靠近射频功能。例如,PHY层中的高层功能可以包括添加CRC码、信道编码、速率匹配、加扰、调制、和层映射,PHY层中的低层功能可以包括预编码、资源映射、物理天线映射、和射频发送功能;或者,PHY层中的高层功能可以包括添加CRC码、信道编码、速率匹配、加扰、调制、层映射和预编码,PHY层中的低层功能可以包括资源映射、物理天线映射、和射频发送功能。
示例性的,CU的功能可以由一个实体来实现,或者也可以由不同的实体来实现。例如,可以对CU的功能进行进一步划分,即将控制面和用户面分离并通过不同实体来实现,分别为控制面CU实体(即CU-CP实体)和用户面CU实体(即CU-UP实体)。该CU-CP实体和CU-UP实体可以与DU相耦合,共同完成接入网设备的功能。
上述架构中,CU产生的信令可以通过DU发送给终端设备,或者终端设备产生的信令可以通过DU发送给CU。例如,RRC或PDCP层的信令最终会处理为物理层的信令发送给终端设备,或者,由接收到的物理层的信令转变而来。在这种架构下,该RRC或PDCP层的信令,即可以认为是通过DU发送的,或者,通过DU和RU发送的。
可选的,上述DU、CU、CU-CP、CU-UP和RU中的任一个可以是软件模块、硬件结构、或者软件模块+硬件结构,不予限制。其中,不同实体的存在形式可以是不同的,不予限制。例如DU、CU、CU-CP、CU-UP是软件模块,RU是硬件结构。这些模块及其执行的方法也在本公开的保护范围内。
应理解,图1所示的通信系统中各个设备的数量、类型仅作为示意,本公开并不限于此,实际应用中在通信系统中还可以包括更多的终端设备、更多的接入网设备,还可以包括其它网元,例如可以包括核心网设备,和/或用于实现人工智能功能的网元。
本公开提供的方法可以用于接入网设备和终端设备之间的通信,也可以用于其他通信设备之间的通信,例如无线回传链路中宏基站和微基站之间的通信,又如边链路(sidelink,SL)中两个终端设备之间的通信等,不予限制。
下面对本公开涉及的CSI反馈技术进行介绍。在上述通信系统如LTE或者NR系统中,接入网设备需要获取下行信道的信道状态信息CSI,基于CSI决定调度终端设备的下行数据信道的资源、调制编码方案(modulation and coding scheme,MCS)以及预编码等配置。可以理解,CSI属于一种信道信息,是一种能够反映信道特征、信道质量的信息。其中,信道信息也可以称为信道响应。示例性地,CSI可采用信道矩阵表现,例如CSI包括信道矩阵,或者,CSI可以由信道的特征向量组成。
在频分双工(frequency division duplex,FDD)通信场景中,由于上下行信道不具备互易性或者说无法保证上下行信道的互易性,接入网设备通常会向终端设备发送下行参考信号,终端设备根据接收到的下行参考信号进行信道测量、干扰测量,估计下行信道信息,该下行信道信息包括CSI,进而反馈CSI给接入网设备。
一种传统的CSI反馈方式中,终端设备可以按照预定义或者接入网设备配置的方式,根据估计的CSI生成CSI报告并反馈给接入网设备。其中,下行参考信号包括信道状态信息参考信号(channel state information-reference signal,CSI-RS)或者同步信号块(synchronizing signal/physical broadcast channel block,SSB)。CSI报告包括秩指示(rank indicator,RI),信道质量指示(channel quality indicator,CQI)和预编码矩阵指示(precoding matrix indicator,PMI)等反馈量。RI用于指示终端设备建议的下行传输层数,CQI用于指示终端设备判断的当前信道条件所能支持的调制编码方式,PMI用于指示终端设备建议的预编码,PMI所指示的预编码层数与RI对应,例如,如果RI为1,则PMI指示一层预编码;如果RI为N,则PMI指示N层预编码,N为大于1的正整数,可以理解,N层预编码分别对应于RI指示的N个下行传输层。由此可见,RI指示的下行传输层数越大,PMI的反馈开销越高。
一般地,多层预编码之间存在公共信息,或描述为共用信息,相同信息等。于是在反馈CSI报告的PMI引入对公共信息的考虑,可以减少一些PMI的反馈开销。作为示例,将指示1层预编码的PMI简称为1层PMI,指示N层预编码的PMI简称为N层PMI,下面通过i1和i2介绍1至4层PMI中的公共信息以及各层的独立信息。
其中,i1和i2中的第一行对应于1层PMI,第二行对应于2层PMI,第三行对应于3层PMI,第四行对应于4层PMI。以4层PMI为例,可以理解i1,1、i1,2、i1,5为4层预编码之间的公共信息,i1,1用于表示DFT空域基底选择,i1,2用于表示空域过采样选择,i1,5用于表示DFT频域基底子集选择。i1,6,l、i1,7,l、i1,8,l、i2,3,l、i2,4,l、i2,5,l为4层中第l层的独立信息,l取遍1至4的正整数。其中,i1,6,l用于表示第l层频域基底选择,i1,7,l用于表示第l层非零系数的比特位图(bitmap),i1,8,l用于表示第l层的最强系数,i2,3,l、i2,4,l、i2,5,l用于表示第l层的幅度和相位系数。
将人工智能(artificial intelligence,AI)引入无线通信网络中,产生了一种基于AI的CSI反馈方式。终端设备利用AI模型对CSI进行压缩反馈,接入网设备利用AI模型对压缩的CSI进行恢复。在基于AI的CSI反馈中传输的是一个序列(如比特序列),开销相较于传统反馈CSI报告的开销低。
目前,终端设备在进行多个下行传输层的CSI反馈时,通常会将多个下行传输层的CSI看作相互独立的信息。对于多个下行传输层中每一个下行传输层来说,终端设备会独立地利用AI模型对该层的CSI进行压缩反馈。相应地,接入网设备会收到多个下行传输层中每个下行传输层所对应的压缩的CSI,进而接入网设备对多个下行传输层对应的压缩的CSI分别进行恢复。这样将多层CSI独立的压缩反馈的设计,会导致多层CSI之间存在的一些公共信息(如DFT空域基底选择、空域过采样选择、DFT频域基底子集选择等)重复传输,造成资源浪费。
基于此,本公开提供一种通信方法及装置,在对多层CSI分别进行压缩之后,再利用AI模型对一次压缩的多层CSI之间的公共信息进行压缩,减少重复传输公共信息的资源浪费,从而降低CSI的反馈开销。该通信方法可以应用于上述通信系统中,例如FDD通信场景。此外可选地,该通信方法还可以用于时分双工(time division duplex,TDD)通信场景,本公开对此不予限制。
为便于理解,下面先对本公开涉及的AI技术进行介绍。可以理解的是,该介绍并不作为对本公开的限定。
(1)AI模型
AI模型是AI技术功能的具体实现,AI模型表征了模型的输入和输出之间的映射关系。AI模型的类型可以是神经网络、线性回归模型、决策树模型、支持向量机(support vector machine,SVM)、贝叶斯网络、Q学习模型或者其他机器学习(machine learning,ML)模型。
本公开涉及用于压缩CSI的编码器以及用于恢复压缩CSI的解码器,编码器与解码器需要匹配使用,可以理解编码器和解码器为配套的AI模型。在本公开中,一个编码器可以包括一个或多个AI模型,该编码器匹配的解码器中也包括一个或多个AI模型,匹配使用的编码器和解码器中包括的AI模型数量相同且一一对应。
一种可能的设计中,一套匹配使用的编码器(encoder)和解码器(decoder)可以具体为同一个自编码器(auto-encoder,AE)中的两个部分。其中,自编码器是一种无监督学习的神经网络,它的特点是将输入数据作为标签数据,因此自编码器也可以理解为自监督学习的神经网络。自编码器可以用于数据的压缩和恢复。示例性地,自编码器中的编码器可以对数据A进行压缩(编码)处理,得到数据B;自编码器中的解码器可以对数据B进行解压缩(解码)处理,恢复出数据A。或者可以理解为,解码器是编码器的逆操作。
(2)神经网络
神经网络是AI或机器学习技术的一种具体实现形式。根据通用近似定理,神经网络在理论上可以逼近任意连续函数,从而使得神经网络具备学习任意映射的能力。
神经网络的思想来源于大脑组织的神经元结构。例如,每个神经元都对其输入值进行加权求和运算,通过一个激活函数输出运算结果。如图2A所示,为神经元结构的一种示意图。假设神经元的输入为x=[x0,x1,…,xn],与各个输入对应的权值分别为w=[w0,w1,…,wn],其中,wi作为xi的权值,用于对xi进行加权。根据权值对输入值进行加权求和的偏置例如为b。激活函数的形式可以有多种,假设一个神经元的激活函数为:y=f(z)=max(0,z),则该神经元的输出为:y=max(0,w0x0+w1x1+…+wnxn+b)。再例如,一个神经元的激活函数为:y=f(z)=z,则该神经元的输出为:y=w0x0+w1x1+…+wnxn+b。其中,b、wi、xi可以是小数、整数(例如0、正整数或负整数)、或复数等各种可能的取值。神经网络中不同神经元的激活函数可以相同或不同。
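上述神经元“加权求和+激活”的计算过程,可以用如下Python草图直观理解(函数名与数值均为假设,仅作示意,并非本公开的实现):

```python
def neuron_output(x, w, b, activation=lambda z: max(0.0, z)):
    """单个神经元: 先对输入加权求和并加偏置, 再经激活函数输出。"""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return activation(z)

# 以激活函数 y=max(0,z) 为例: z = 0.5*1.0 + (-0.25)*2.0 + 0.1 = 0.1
y = neuron_output([1.0, 2.0], [0.5, -0.25], 0.1)
```

若把激活函数换成恒等函数f(z)=z,则输出即为加权和本身,对应文中的第二个例子。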
神经网络一般包括多个层,每层可包括一个或多个神经元。通过增加神经网络的深度和/或宽度,能够提高该神经网络的表达能力,为复杂系统提供更强大的信息提取和抽象建模能力。其中,神经网络的深度可以是指神经网络包括的层数,其中每层包括的神经元个数可以称为该层的宽度。在一种实现方式中,神经网络包括输入层和输出层。神经网络的输入层将接收到的输入信息经过神经元处理,将处理结果传递给输出层,由输出层得到神经网络的输出结果。在另一种实现方式中,神经网络包括输入层、隐藏层和输出层,可参考图2B。神经网络的输入层将接收到的输入信息经过神经元处理,将处理结果传递给中间的隐藏层,隐藏层对接收的处理结果进行计算,得到计算结果,隐藏层将计算结果传递给输出层或者相邻的隐藏层,最终由输出层得到神经网络的输出结果。其中,一个神经网络可以包括一个隐藏层,或者包括多个依次连接的隐藏层,不予限制。
以AI模型的类型为神经网络为例,本公开涉及的AI模型可以为深度神经网络(deep neural network,DNN)。根据网络的构建方式,DNN可以包括前馈神经网络(feedforward neural network,FNN)、卷积神经网络(convolutional neural networks,CNN)和递归神经网络(recurrent neural network,RNN)。
(3)训练数据集和推理数据
训练数据集用于AI模型的训练,训练数据集可以包括AI模型的输入,或者包括AI模型的输入和目标输出。其中,训练数据集包括一个或多个训练数据,训练数据可以是输入至AI模型的训练样本,也可以是AI模型的目标输出。其中,目标输出也可以被称为标签或者标签样本。训练数据集是机器学习重要的部分之一,模型训练本质上就是从训练数据中学习它的某些特征,使得AI模型的输出尽可能接近目标输出,如AI模型的输出与目标输出之间的差异尽可能地小。训练数据集的构成与选取,在一定程度上可以决定训练出来的AI模型的性能。
另外,在AI模型(如神经网络)的训练过程中,可以定义损失函数。损失函数描述了AI模型的输出值与目标输出值之间的差距或差异。本公开并不限制损失函数的具体形式。AI模型的训练过程就是通过调整AI模型的模型参数,使得损失函数的取值小于门限,或者使得损失函数的取值满足目标需求的过程。例如,AI模型为神经网络,调整神经网络的模型参数包括调整如下参数中的至少一种:神经网络的层数、宽度、神经元的权值、或神经元的激活函数中的参数。
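上述“通过调整模型参数使损失函数取值减小”的训练思想,可以用一个一维线性模型的梯度下降草图示意(训练数据、模型形式与学习率均为假设,仅说明原理,并非本公开的AI模型):

```python
def mse(preds, targets):
    """均方误差损失: 模型输出与目标输出之间差距的一种度量。"""
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)

# 假设目标映射为 y = 2x, 模型为 y = w*x, 通过梯度下降调整参数 w
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]
w, lr = 0.0, 0.05
for _ in range(200):
    # dL/dw = 2/N * sum((w*x - y) * x)
    grad = 2 * sum((w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad

final_loss = mse([w * x for x in xs], ys)   # 训练后损失接近 0, w 接近 2
```

真实的编码器/解码器训练同理,只是模型参数从单个权值变为神经网络的层数、权值、激活函数参数等。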
推理数据可以作为已训练好的AI模型的输入,用于AI模型的推理。在模型推理过程中,将推理数据输入AI模型,可以得到对应的输出即为推理结果。
(4)AI模型的设计
AI模型的设计主要包括数据收集环节(例如收集训练数据和/或推理数据)、模型训练环节以及模型推理环节。进一步地还可以包括推理结果应用环节。参见图2C示意一种AI应用框架。在前述数据收集环节中,数据源(data source)用于提供训练数据集和推理数据。在模型训练环节中,通过对数据源提供的训练数据(training data)进行分析或训练,得到AI模型。其中,AI模型表征了模型的输入和输出之间的映射关系。通过模型训练节点学习得到AI模型,相当于利用训练数据学习得到模型的输入和输出之间的映射关系。在模型推理环节中,使用经由模型训练环节训练后的AI模型,基于数据源提供的推理数据进行推理,得到推理结果。该环节还可以理解为:将推理数据输入到AI模型,通过AI模型得到输出,该输出即为推理结果。该推理结果可以指示:由执行对象使用(执行)的配置参数、和/或由执行对象执行的操作。在推理结果应用环节中进行推理结果的发布,例如推理结果可以由执行(actor)实体统一规划,例如执行实体可以发送推理结果给一个或多个执行对象(例如,核心网设备、接入网设备、或终端设备等)去执行。又如执行实体还可以反馈模型的性能给数据源,便于后续实施模型的更新训练。
可以理解的是,在通信系统中可以包括具备人工智能功能的网元。上述AI模型设计相关的环节可以由一个或多个具备人工智能功能的网元执行。一种可能的设计中,可以在通信系统中已有网元内配置AI功能(如AI模块或者AI实体)来实现AI相关的操作,例如AI模型的训练和/或推理。例如该已有网元可以是接入网设备(如gNB)、终端设备、核心网设备、或网管等。其中,网管可以根据运营商网络运营的实际需要,将网络的管理工作划分为3类:操作(Operation)、管理(Administration)、维护(Maintenance)。网管又可以称为操作维护管理(operation administration and maintenance,OAM)网元,简称OAM。操作主要完成日常网络和业务进行的分析、预测、规划和配置工作;维护主要是对网络及其业务的测试和故障管理等进行的日常操作活动,网管可以检测网络运行状态、优化网络连接和性能,提升网络运行稳定性,降低网络维护成本。或者另一种可能的设计中,也可以在通信系统中引入独立的网元来执行AI相关的操作,如训练AI模型。该独立的网元可以称为AI网元或者AI节点等,本公开对此名称不进行限制。该AI网元可以和通信系统中的接入网设备之间直接连接,也可以通过第三方网元和接入网设备实现间接连接。其中,第三方网元可以是接入和移动性管理功能(access and mobility management function,AMF)网元、用户面功能(user plane function,UPF)网元等核心网网元、OAM、云服务器或者其他网元,不予限制。示例性的,参见图3,在上述图1所示的通信系统中引入了AI网元140。
本公开中,一个模型可以推理得到一个参数,或者推理得到多个参数。不同模型的训练过程可以部署在不同的设备或节点中,也可以部署在相同的设备或节点中。不同模型的推理过程可以部署在不同的设备或节点中,也可以部署在相同的设备或节点中。以终端设备完成模型训练环节为例,终端设备可以在训练配套的编码器和解码器之后,将其中解码器的模型参数发送给接入网设备。以接入网设备完成模型训练环节为例,接入网设备在训练配套的编码器和解码器之后,可以将其中编码器的模型参数指示给终端设备。以独立的AI网元完成模型训练环节为例,AI网元可以在训练配套的编码器和解码器之后,将其中编码器的模型参数发送给终端设备,将解码器的模型参数发送给接入网设备。进而在终端设备中进行编码器对应的模型推理环节,以及在接入网设备中进行解码器对应的模型推理环节。
其中,模型参数可以包括如下的一种或多种:模型的结构参数(例如模型的层数、和/或权值等)、模型的输入参数(如输入维度、输入端口数)、或模型的输出参数(如输出维度、输出端口数)。可以理解,输入维度可以指的是一个输入数据的大小,例如输入数据为一个序列时,该序列对应的输入维度可以指示该序列的长度。输入端口数可以指的是输入数据的数量。类似地,输出维度可以指的是一个输出数据的大小,例如输出数据为一个序列时,该序列对应的输出维度可以指示该序列的长度。输出端口数可以指的是输出数据的数量。
进一步地,本公开还提供如图4A~图4D示意的几种网络架构,以在接入网设备中进行模型训练和/或推理为例,对接入网设备中进行模型训练和/或推理的功能模块进行了划分。
如图4A中的(a)所示,第一种可能的实现中,接入网设备中包括近实时接入网智能控制(RAN intelligent controller,RIC)模块,用于进行模型学习和/或推理。例如,近实时RIC可以从CU、DU和RU中的至少一个获得网络侧和/或终端侧的信息,该信息可以包括训练数据或者推理数据。例如,近实时RIC可以用于进行模型的训练,还可以利用训练好的模型进行推理。此外可选的,近实时RIC可以将推理结果递交至CU、DU和RU中的至少一个。可选的,CU和DU之间可以交互推理结果。可选的,DU和RU之间可以交互推理结果,例如近实时RIC将推理结果递交至DU,由DU递交给RU。
如图4A中的(b)所示,第二种可能的实现中,在通信系统中,接入网设备之外可以包括非实时RIC,可选的,该非实时RIC可以位于OAM中或核心网设备中。该非实时RIC用于进行模型学习和推理。例如,非实时RIC可以从CU、DU和RU中的至少一个获得网络侧和/或终端侧的信息,该信息可以包括训练数据或者推理数据。例如,非实时RIC用于进行模型的训练,还可以利用训练好的模型进行推理。此外可选的,非实时RIC可以将推理结果递交至CU、DU和RU中的至少一个。可选的,CU和DU之间可以交互推理结果。可选的,DU和RU之间可以交互推理结果,例如非实时RIC将推理结果递交至DU,由DU递交给RU。
如图4A中的(c)所示,第三种可能的实现中,接入网设备中包括近实时RIC,接入网设备之外还包括非实时RIC。可选的,非实时RIC可以位于OAM中或者核心网设备中。一种可能的设计中,非实时RIC可以用于进行模型训练,近实时RIC可以从非实时RIC获得训练好的AI模型的模型参数,并从CU、DU和RU中的至少一个获得网络侧和/或终端侧的信息,利用该信息和该训练好的AI模型得到推理结果。进而近实时RIC还可以将推理结果递交至CU、DU和RU中的至少一个,可选的,CU和DU之间可以交互推理结果,可选的,DU和RU之间可以交互推理结果,例如近实时RIC将推理结果递交至DU,由DU递交给RU。或者一种可能的设计中,近实时RIC用于进行模型的训练以及利用训练好的模型进行推理,非实时RIC不参与该模型的训练或推理;或者,非实时RIC用于进行模型的训练以及利用训练好的模型进行推理,近实时RIC不参与该模型的训练或推理。或者一种可能的设计中,近实时RIC用于进行模型的训练,以及将训练好的AI模型的模型参数发送给非实时RIC,非实时RIC利用训练好的模型进行推理。
图4B所示为本公开提供的方法能够应用的一种网络架构的示例图。相对图4A中的(c),图4B中将CU分离为了CU-CP和CU-UP。
图4C所示为本公开提供的方法能够应用的一种网络架构的示例图。如图4C所示,可选的,接入网设备中包括一个或多个AI实体,该AI实体的功能类似上述近实时RIC。可选的,OAM中包括一个或多个AI实体,该AI实体的功能类似上述非实时RIC。可选的,核心网设备中包括一个或多个AI实体,该AI实体的功能类似上述非实时RIC。当OAM和核心网设备中都包括AI实体时,他们各自的AI实体所训练得到的模型不同,和/或用于进行推理的模型不同。本公开中,模型不同可以包括以下至少一项不同:模型的结构参数(例如模型的层数、和/或权值等)、模型的输入参数、或模型的输出参数。
图4D所示为本公开提供的方法能够应用的一种网络架构的示例图。相对图4C,图4D中的接入网设备分离为CU和DU。可选的,CU中可以包括AI实体,该AI实体的功能类似上述近实时RIC。可选的,DU中可以包括AI实体,该AI实体的功能类似上述近实时RIC。当CU和DU中都包括AI实体时,他们各自的AI实体所训练得到的模型不同,和/或用于进行推理的模型不同。可选的,还可以进一步将图4D中的CU拆分为CU-CP和CU-UP。可选的,CU-CP中可以部署有一个或多个AI模型。和/或,CU-UP中可以部署有一个或多个AI模型。可选的,图4C或图4D中,接入网设备的OAM和核心网设备的OAM可以分开独立部署。
下面进一步对本公开中AI模型的训练过程以及推理过程进行详细说明。
本公开涉及两个阶段的CSI压缩,第一阶段终端设备利用第一编码器对多层CSI分别进行压缩,第二阶段终端设备利用第二编码器对第一阶段的结果进行压缩。相应地,接入网设备侧也需要两个阶段的解压缩,如接入网设备先利用第二编码器匹配的第二解码器进行解压缩,然后利用第一编码器匹配的第一解码器对第二解码器的输出进行解压缩。
其中,第一编码器和第一解码器匹配使用,二者配套训练。可以理解第一编码器包括的AI模型数量和第一解码器包括的AI模型数量相同,第一编码器中每个AI模型在第一解码器中均有对应配套的AI模型。用于训练第一编码器和第一解码器的训练数据集包括训练样本和样本标签,其中训练样本为终端设备确定的下行传输层的信道信息,样本标签为真实的下行传输层的信道信息。对于第一编码器和第一解码器属于同一自编码器的情况,训练数据仅包括训练样本,或者说训练样本就是样本标签,也就是下行传输层的信道信息。
第一编码器和第一解码器可以是针对特定下行传输层的,例如使用多个第一下行传输层的信道信息训练一对第一编码器和第一解码器,该对第一编码器和第一解码器专用于第一下行传输层的信道信息的处理。使用多个第二下行传输层的信道信息训练另一对第一编码器和第一解码器,该对第一编码器和第一解码器专用于第二下行传输层的信道信息的处理。或者,第一编码器和第一解码器可以是多个下行传输层通用的,例如使用多个第一下行传输层的信道信息和第二下行传输层的信道信息训练一对第一编码器和第一解码器,该对第一编码器和第一解码器既可用于第一下行传输层的信道信息的处理,也可用于第二下行传输层的信道信息的处理。
具体训练过程如下:模型训练节点使用第一编码器处理一个下行传输层的信道信息(即训练样本),得到该下行传输层对应的第一信道状态指示信息,并使用第一解码器处理第一信道状态指示信息,得到恢复的下行传输层的信道信息。其中,第一信道状态指示信息的序列长度可以为第一目标长度。可选的,该第一目标长度可以有一个或多个。进而根据恢复的下行传输层的信道信息与对应的样本标签计算他们之间的差异,即损失函数的取值,根据损失函数的取值更新第一编码器和第一解码器的参数,使得恢复的下行传输层的信道信息与对应的样本标签之间的差异最小化,即最小化损失函数。示例性地,损失函数可以是最小均方误差(mean square error,MSE)或者余弦相似度。针对多个第一目标长度重复上述操作,即可得到满足目标需求的训练好(即训练完成)的第一编码器和第一解码器,其中,目标需求包括目标长度和/或恢复准确度等中的一项或多项。上述模型训练节点可以是终端设备、接入网设备或者通信系统中其他具备AI功能的网元。进而,如果模型训练节点为终端设备时,可以将第一解码器的模型参数发送给接入网设备;或者如果模型训练节点为接入网设备时,可以将第一编码器的模型参数发送给终端设备;或者如果模型训练节点为除终端设备、接入网设备之外的其他网元时,可以将第一编码器的模型参数发送给终端设备,并将第一解码器的模型参数发送给接入网设备。
类似地,第二编码器和第二解码器匹配使用,二者配套训练。可以理解第二编码器包括的AI模型数量和第二解码器包括的AI模型数量相同,第二编码器中每个AI模型在第二解码器中均有对应配套的AI模型。由于第二编码器是对第一编码器的结果进行压缩,因此第二编码器与第二解码器不能独立于第一编码器和第一解码器使用,即第二编码器和第二解码器需要和第一编码器和第一解码器配套训练。一种可能的设计中,第二编码器和第二解码器属于同一个自编码器。
第二编码器和第二解码器可以和训练好的第一编码器和第一解码器配套训练,也可以和未训练好的第一编码器和第一解码器配套训练。训练第二编码器和第二解码器的过程中,可以固定第一编码器和第一解码器,仅训练第二编码器和第二解码器,也可以联合训练第二编码器和第二解码器,以及第一编码器和第一解码器。
以第二编码器和第二解码器均包括一个AI模型为例,具体训练过程如下:模型训练节点使用第一编码器处理N个下行传输层的信道信息,得到N个第一信道状态指示信息,使用第二编码器处理N个第一信道状态指示信息,得到第二信道状态指示信息,使用第二解码器处理第二信道状态指示信息,得到恢复的N个第一信道状态指示信息,使用第一解码器处理恢复的N个第一信道状态指示信息,得到恢复的N个下行传输层的信道信息。其中,第二信道状态指示信息的序列长度可以为第二目标长度。可选的,该第二目标长度可以有一个或多个。根据恢复的N个下行传输层的信道信息与对应的样本标签计算他们之间的差异,即损失函数的取值,根据损失函数的取值更新第二编码器和第二解码器的参数,或者,更新第二编码器和第二解码器的参数以及第一编码器和第一解码器的参数,使得恢复的N个下行传输层的信道信息与对应的样本标签之间的差异最小化,即最小化损失函数。示例性地,损失函数可以是最小均方误差(MSE)或者余弦相似度。针对多个第二目标长度重复上述操作,即可得到满足目标需求的训练好(即训练完成)的第二编码器和第二解码器,其中,目标需求包括目标长度和/或恢复准确度等中的一项或多项。上述模型训练节点可以是终端设备、接入网设备或者通信系统中其他具备AI功能的网元。进而,如果模型训练节点为终端设备时,可以将第二解码器的模型参数发送给接入网设备;或者如果模型训练节点为接入网设备时,可以将第二编码器的模型参数发送给终端设备;或者如果模型训练节点为除终端设备、接入网设备之外的其他网元时,可以将第二编码器的模型参数发送给终端设备,并将第二解码器的模型参数发送给接入网设备。
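上述两级压缩与两级恢复的前向数据流及各级序列长度关系,可用如下numpy草图示意(各权重为随机初始化、未经训练,维度N、d、m、p均为假设值,仅说明信息流向与维度变化,不代表真实AI模型):

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, m, p = 4, 32, 8, 16              # 层数、每层信道信息维度、一次压缩长度、二次压缩长度

W1  = rng.standard_normal((d, m))       # 第一编码器(此处各层共享权重示意)
W2  = rng.standard_normal((N * m, p))   # 第二编码器
W2d = rng.standard_normal((p, N * m))   # 第二解码器
W1d = rng.standard_normal((m, d))       # 第一解码器

H  = rng.standard_normal((N, d))        # N 个下行传输层的信道信息
s1 = H @ W1                             # N 个第一信道状态指示信息, 形状 (N, m)
s2 = s1.reshape(-1) @ W2                # 第二信道状态指示信息, 长度 p < N*m
r1 = (s2 @ W2d).reshape(N, m)           # 恢复的 N 个第一信道状态指示信息
Hr = r1 @ W1d                           # 恢复的 N 个下行传输层的信道信息
```

可见第二信道状态指示信息对应的序列长度p小于N个第一信道状态指示信息对应的序列长度之和N*m,体现了二次压缩对反馈开销的降低。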
下面结合方案一和方案二,对本公开中AI模型的推理过程进行详细说明。
方案一
参见图5示意一种通信方法,该方法主要包括如下流程。
S501,终端设备确定N个下行传输层的信道信息。
具体地,终端设备可以通过测量来自接入网设备的下行参考信号,估计下行信道信息,且确定终端设备建议的下行传输层数N。其中,该下行信道信息可以包括下行信道的信道状态信息CSI,下行信道信息也可以称为下行信道响应。进而终端设备可以将下行信道信息分割成N个下行传输层的信道信息(或称信道特征),N为正整数。
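将下行信道信息分割为各下行传输层的信道特征,一种常见的做法(此处仅作示意,并非本公开限定的分割方式,天线数与层数均为假设值)是对信道矩阵做奇异值分解,取前N个右奇异向量作为各层的特征向量:

```python
import numpy as np

rng = np.random.default_rng(1)
n_rx, n_tx, N = 4, 8, 2                  # 假设的接收/发射天线数与建议的下行传输层数N

# 随机复信道矩阵, 示意由下行参考信号测量估计得到的下行信道信息
H = rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx))

U, s, Vh = np.linalg.svd(H)              # 奇异值分解
layers = [Vh[i] for i in range(N)]       # 前N个右奇异向量, 作为N个下行传输层的信道特征
```

各层特征向量相互正交且模为1,这与前文“CSI可以由信道的特征向量组成”的描述相对应。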
S502,终端设备利用第一编码器和输入所述第一编码器的N个下行传输层的信道信息,确定N个第一信道状态指示信息。
其中,N个下行传输层中第i个下行传输层的信道信息对应N个第一信道状态指示信息中的第i个第一信道状态指示信息。该第i个第一信道状态指示信息可以理解为一个序列,矩阵或者高维张量,用于指示第i个下行传输层的信道信息。一种可能的设计中,第一编码器具备压缩以及量化功能,第i个第一信道状态指示信息对应的序列具体可以是比特序列。另一种可能的设计中,第一编码器具备压缩功能但不具备量化功能,第i个第一信道状态指示信息对应的序列具体可以是浮点数序列。
具体地,通过前述训练过程,可以训练得到第一编码器。该第一编码器的结构可参照如下方式11、方式12或方式13中的任意一个方式理解。
方式11,第一编码器包括一个第一AI模型。终端设备可以利用该1个第一AI模型对N个下行传输层的信道信息进行处理,得到N个第一信道状态指示信息。
可以理解,N个下行传输层的信道信息输入至该1个第一AI模型,对应的,第一编码器的输出包括N个第一信道状态指示信息。可选的,终端设备可以将N个下行传输层的信道信息并行或者串行的输入该1个第一AI模型,本公开对此不予限制。
可选的,针对不同下行传输层数,可以设定不同的第一编码器,即不同的第一AI模型。或描述为第一编码器包括的1个第一AI模型与N的取值关联,不同取值的N所关联的第一AI模型不同。例如,N为2时,第一编码器记作编码器21或AE模型21中的编码器;N为3时,第一编码器记作编码器31或AE模型31中的编码器。这样的设计可以增强AI模型应用的灵活性。或者可选的,设定不同下行传输层数对应使用同一个第一编码器或同一AE模型中的编码器。这样的设计可以降低AI模型的存储开销。
此外可选的,具体实现时,也可以训练1个第一AI模型,以测量下行参考信号估计得到的下行信道信息作为输入数据,输出N个第一信道状态指示信息。此情况下,终端设备在测量下行参考信号估计出下行信道信息时,可以直接利用该1个第一AI模型对下行信道信息进行处理,得到N个第一信道状态指示信息。
方式12,第一编码器可以包括N个第一AI模型,第一编码器中的第i个第一AI模型用于对N个下行传输层中第i个下行传输层的信道信息进行处理,得到N个第一信道状态指示信息中的第i个第一信道状态指示信息。可以理解的是,所述N个下行传输层的信道信息分别输入所述N个第一AI模型,N个第一AI模型中的第i个第一AI模型的输入包括N个下行传输层中第i个下行传输层的信道信息,第i个第一AI模型的输出包括第i个下行传输层对应的第i个第一信道状态指示信息。可选的,第一编码器中N个第一AI模型可以相同,或者N个第一AI模型的模型类型(如使用的神经网络)相同,但模型参数不同;或者,N个第一AI模型的模型类型和模型参数均不相同,本公开对此不进行限制。
方式13,第一编码器包括M个第一AI模型,M为大于1且小于N的整数。其中,一个第一AI模型可以用于对N个下行传输层中的部分(一个或多个)下行传输层的信道信息进行处理,或者可以描述为N个下行传输层中的部分下行传输层的信道信息可以由同一个第一AI模型进行处理,所述N个下行传输层的信道信息中的部分下行传输层的信道信息串行或并行输入所述M个第一AI模型中的1个第一AI模型。
一种可能的设计中,可以预先配置第一编码器的结构为方式11、方式12或方式13中的一个方式,如采用协议定义或者其他定义的方式实现预配置,又如可以是终端设备和接入网设备之间协商实现预配置。另一种可能的设计中,终端设备可以自行按照方式11、方式12或方式13中的一个方式确定第一编码器的结构,进一步可选的,终端设备可以向接入网设备发送第一指示信息,该第一指示信息用于指示第一编码器的结构。例如第一指示信息具体用于指示第一编码器的结构为方式11、方式12或者方式13。可选的,在方式11、方式12或方式13的一个方式中,还可以预配置N的不同取值与第一编码器的结构之间的映射关系,或者可以描述为第一编码器的结构对应于N的取值。
当然类似地,还可以预先配置第一编码器中各个第一AI模型的模型类型和模型参数;或者终端设备可以向接入网设备上报终端设备使用第一编码器内第一AI模型的标识、模型类型或模型参数等,例如当第一指示信息指示方式11时,还可以具体指示方式11中第一编码器内第一AI模型的标识、模型类型或模型参数等。
进一步地,如果前述下行传输层数N等于1,且第一编码器具备压缩以及量化功能,终端设备可以将N个第一信道状态指示信息发送给接入网设备;或者,如果前述下行传输层数N等于1,且第一编码器只具备压缩功能,终端设备可以将N个第一信道状态指示信息进行量化处理后发送给接入网设备。或者,如果下行传输层数N大于1,则终端设备在执行完S502之后进一步执行如下S503和S504:
S503,终端设备利用第二编码器和输入第二编码器的所述N个第一信道状态指示信息,确定第二信道状态指示信息;其中,所述第二信道状态指示信息对应所述N个下行传输层。
具体地,终端设备可以利用第二编码器对N个第一信道状态指示信息进行处理,得到第二信道状态指示信息。其中,第二编码器的输入包括N个第一信道状态指示信息,输出包括第二信道状态指示信息。可以理解,S503得到的第二信道状态指示信息为对S502得到的N个第一信道状态指示信息的压缩。
具体地,第二信道状态指示信息也可以理解为一个序列,第二信道状态指示信息对应的序列长度小于N个第一信道状态指示信息对应的序列长度之和。本方案一中的第二编码器具备压缩以及量化功能,对N个第一信道状态指示信息的处理包括压缩以及量化,第二信道状态指示信息对应的序列具体可以为比特序列。
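第二编码器的量化功能可以用最简单的均匀标量量化来示意:把每个浮点数映射为B比特,得到比特序列;解量化为其逆过程(量化区间[lo,hi]、比特数B均为假设值,并非本公开规定的量化方式):

```python
def quantize(seq, B=2, lo=-1.0, hi=1.0):
    """把浮点序列均匀量化为比特序列(每个浮点值量化为 B 比特)。"""
    levels = (1 << B) - 1
    bits = []
    for v in seq:
        v = min(max(v, lo), hi)                      # 截断到量化区间
        idx = round((v - lo) / (hi - lo) * levels)   # 量化电平索引
        bits.extend(int(b) for b in format(idx, f"0{B}b"))
    return bits

def dequantize(bits, B=2, lo=-1.0, hi=1.0):
    """解量化: 由比特序列恢复出浮点序列(存在量化误差)。"""
    levels = (1 << B) - 1
    vals = []
    for i in range(0, len(bits), B):
        idx = int("".join(map(str, bits[i:i + B])), 2)
        vals.append(lo + idx / levels * (hi - lo))
    return vals
```

量化后每个浮点值只占B比特,代价是引入不超过半个量化步长的恢复误差。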
具体地,通过前述训练过程,可以训练得到第二编码器。该第二编码器的结构可参照如下方式21、方式22或方式23中的任意一个方式。
方式21,第二编码器包括1个第二AI模型。终端设备可以利用该1个第二AI模型对N个第一信道状态指示信息进行处理(如压缩以及量化),得到第二信道状态指示信息。可以理解,该1个第二AI模型的输入包括N个第一信道状态指示信息,该1个第二AI模型的输出包括第二信道状态指示信息。
示例性的,如图6A示意出一种编码流程示意图。以N为4,S502中第一编码器包括4个第一AI模型,即第一AI模型11、第一AI模型12以及第一AI模型13、第一AI模型14,第二编码器包括1个第二AI模型为例,示意出终端设备利用第一编码器和第二编码器,实现对4个下行传输层的信道信息进行两次压缩以及量化,得到第二信道状态指示信息的流程。
此外可选的,针对不同下行传输层数,可以设定不同的第二编码器,即不同的第二AI模型。或描述为第二编码器包括的1个第二AI模型与N的取值关联,不同取值的N所关联的第二AI模型不同。例如,N为2时,第二编码器记作编码器22或AE模型22中的编码器;N为3时,第二编码器记作编码器32或AE模型32中的编码器。这样的设计可以增强AI模型应用的灵活性。或者可选的,设定不同下行传输层数对应使用同一个第二编码器或同一AE模型中的编码器。这样的设计可以降低AI模型的存储开销。
方式22,当下行传输层数N大于2时,第二编码器可以包括多个第二AI模型,第二编码器中第二AI模型的数量与下行传输层数N有关。示例性的,第二编码器包括N-1个第二AI模型。其中,N-1个第二AI模型中的第1个第二AI模型的输入包括N个第一信道状态指示信息中的第1个第一信道状态指示信息和第2个第一信道状态指示信息,第1个第二AI模型的输出包括第1个第四信道状态指示信息。从N-1个第二AI模型中的第2个第二AI模型开始,第i个第二AI模型的输入包括第i-1个第二AI模型输出的第i-1个第四信道状态指示信息和所述N个第一信道状态指示信息中第i+1个第一信道状态指示信息,所述第i个第二AI模型的输出包括第i个第四信道状态指示信息,2≤i≤N-1,i为正整数,或者描述为i取遍2至N-1的正整数。以此类推,所述第N-1个第二AI模型输出的第N-1个第四信道状态指示信息即为所述第二信道状态指示信息。
示例性的,如图6B示意出一种编码流程示意图。以N为4,S502中第一编码器包括4个第一AI模型,即第一AI模型11、第一AI模型12、第一AI模型13以及第一AI模型14,S503中第二编码器包括3个第二AI模型,即第二AI模型21、第二AI模型22以及第二AI模型23为例,示意出终端设备利用第一编码器和第二编码器,实现对4个下行传输层的信道信息进行二次压缩以及量化,得到第二信道状态指示信息的流程。其中,第一AI模型11的输入包括下行传输层1的信道信息,第一AI模型11的输出包括第1个第一信道状态指示信息;第一AI模型12的输入包括下行传输层2的信道信息,第一AI模型12的输出包括第2个第一信道状态指示信息;第一AI模型13输入包括下行传输层3的信道信息,第一AI模型13输出包括第3个第一信道状态指示信息;第一AI模型14输入包括下行传输层4的信道信息,第一AI模型14输出包括第4个第一信道状态指示信息。第二AI模型21的输入包括第1个第一信道状态指示信息和第2个第一信道状态指示信息,第二AI模型21的输出包括第1个第四信道状态指示信息;第二AI模型22的输入包括第1个第四信道状态指示信息和第3个第一信道状态指示信息,第二AI模型22的输出包括第2个第四信道状态指示信息;第二AI模型23的输入包括第2个第四信道状态指示信息和第4个第一信道状态指示信息,第二AI模型23的输出包括第3个第四信道状态指示信息,该第二AI模型23输出的第3个第四信道状态指示信息即为第二信道状态指示信息。
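方式22中“先压缩前两层、再逐级与下一层一起压缩”的信息流,可以用如下纯Python草图理解(compress为假设的占位函数,此处用截断拼接代替真实的第二AI模型,仅示意数据流向与级联结构):

```python
def compress(a, b, out_len):
    """占位的"第二AI模型": 把两段序列压成一段定长序列(截断拼接仅作示意)。"""
    merged = list(a) + list(b)
    return merged[:out_len]

def second_encoder_chain(s1_list, out_len):
    """方式22: 第1个模型压缩前两个第一信道状态指示信息,
    之后第i个模型压缩上一级输出的第四信道状态指示信息与第i+1个第一信道状态指示信息。"""
    s4 = compress(s1_list[0], s1_list[1], out_len)
    for s1 in s1_list[2:]:
        s4 = compress(s4, s1, out_len)
    return s4  # 最后一级输出即为第二信道状态指示信息
```

N个第一信道状态指示信息经过N-1级两两压缩后,只输出一段定长序列,对应图6B中由第二AI模型21至23逐级得到第二信道状态指示信息的过程。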
方式23,当下行传输层数N大于2时,第二编码器可以包括K个第二AI模型。其中,K为大于1且小于N-1的正整数,例如K为2。可以理解,所述N个第一信道状态指示信息中的部分第一信道状态指示信息并行输入所述K个第二AI模型中的1个第二AI模型。下面对K个第二AI模型的输入输出进行详细说明。
示例性的,N为4时,K可以为2,即第二编码器可以包括2个第二AI模型,记作第二AI模型21和第二AI模型22。
一种可选的实施方式中,第二AI模型21的输入包括N个第一信道状态指示信息中前N-1个第一信道状态指示信息,第二AI模型21的输出包括第1个第四信道状态指示信息。第二AI模型22的输入包括N个第一信道状态指示信息中第N个第一信道状态指示信息以及第1个第四信道状态指示信息,第二AI模型22的输出包括第2个第四信道状态指示信息。所述第二AI模型22输出的第2个第四信道状态指示信息即为所述第二信道状态指示信息。
如图6C示意出一种编码流程示意图。以N为4,S502中第一编码器包括4个第一AI模型,即第一AI模型11、第一AI模型12、第一AI模型13以及第一AI模型14,S503中第二编码器包括2个第二AI模型,即第二AI模型21以及第二AI模型22为例,示意出终端设备利用第一编码器和第二编码器,实现对4个下行传输层的信道信息进行两次压缩以及量化,得到第二信道状态指示信息的流程。其中,第一AI模型11的输入包括下行传输层1的信道信息,第一AI模型11的输出包括第1个第一信道状态指示信息;第一AI模型12的输入包括下行传输层2的信道信息,第一AI模型12的输出包括第2个第一信道状态指示信息;第一AI模型13的输入包括下行传输层3的信道信息,第一AI模型13的输出包括第3个第一信道状态指示信息;第一AI模型14的输入包括下行传输层4的信道信息,第一AI模型14的输出包括第4个第一信道状态指示信息。第二AI模型21的输入包括第1个第一信道状态指示信息、第2个第一信道状态指示信息以及第3个第一信道状态指示信息,第二AI模型21的输出包括第1个第四信道状态指示信息;第二AI模型22的输入包括第1个第四信道状态指示信息和第4个第一信道状态指示信息,第二AI模型22的输出包括第2个第四信道状态指示信息。该第二AI模型22输出的第2个第四信道状态指示信息即为第二信道状态指示信息。
另一种可选的实施方式中,第二AI模型21的输入包括N个第一信道状态指示信息中前两个第一信道状态指示信息,即第1个第一信道状态指示信息和第2个第一信道状态指示信息,第二AI模型21的输出包括第1个第四信道状态指示信息。第二AI模型22的输入包括第1个第四信道状态指示信息以及N个第一信道状态指示信息中除前两个之外的其余第一信道状态指示信息,如N为4,其余第一信道状态指示信息包括第3个第一信道状态指示信息和第4个第一信道状态指示信息,第二AI模型22的输出包括第2个第四信道状态指示信息。所述第二AI模型22输出的第2个第四信道状态指示信息即为所述第二信道状态指示信息。
示例性的,N为5时,K可以为2或3,即第二编码器可以包括2个第二AI模型或者包括3个第二AI模型。其中,对于第二编码器包括2个第二AI模型的情况可参照N为4的示例理解。而对于第二编码器包括3个第二AI模型的情况,可按照如下内容理解:
将3个第二AI模型记作第二AI模型21、第二AI模型22、第二AI模型23。
一种可选的实施方式中,第二AI模型21的输入包括5个第一信道状态指示信息中第1至3个第一信道状态指示信息,第二AI模型21的输出包括第1个第四信道状态指示信息。第二AI模型22的输入包括5个第一信道状态指示信息中第4个第一信道状态指示信息以及第1个第四信道状态指示信息,第二AI模型22的输出包括第2个第四信道状态指示信息。第二AI模型23的输入包括5个第一信道状态指示信息中第5个第一信道状态指示信息以及第2个第四信道状态指示信息,第二AI模型23的输出包括第3个第四信道状态指示信息。所述第二AI模型23输出的第3个第四信道状态指示信息即为所述第二信道状态指示信息。
另一种可选的实施方式中,第二AI模型21的输入包括5个第一信道状态指示信息中第1个第一信道状态指示信息和第2个第一信道状态指示信息,第二AI模型21的输出包括第1个第四信道状态指示信息。第二AI模型22的输入包括5个第一信道状态指示信息中第3个第一信道状态指示信息和第4个第一信道状态指示信息,第二AI模型22的输出包括第2个第四信道状态指示信息。第二AI模型23的输入包括5个第一信道状态指示信息中第5个第一信道状态指示信息、第1个第四信道状态指示信息以及第2个第四信道状态指示信息,第二AI模型23的输出包括第3个第四信道状态指示信息。所述第二AI模型23输出的第3个第四信道状态指示信息即为所述第二信道状态指示信息。
此外还有一种可选的实施方式,第二AI模型21的输入包括5个第一信道状态指示信息中第1个第一信道状态指示信息和第2个第一信道状态指示信息,第二AI模型21的输出包括第1个第四信道状态指示信息。第二AI模型22的输入包括5个第一信道状态指示信息中第3个第一信道状态指示信息至第5个第一信道状态指示信息,第二AI模型22的输出包括第2个第四信道状态指示信息。第二AI模型23的输入包括第1个第四信道状态指示信息以及第2个第四信道状态指示信息,第二AI模型23的输出包括第3个第四信道状态指示信息。所述第二AI模型23输出的第3个第四信道状态指示信息即为所述第二信道状态指示信息。
具体实现时,一种可能的设计中,可以预先配置第二编码器的结构为方式21、方式22或方式23中的一个方式,如采用协议定义或者其他定义的方式实现预配置,又如可以是终端设备和接入网设备之间协商实现预配置。另一种可能的设计中,终端设备可以自行按照方式21、方式22或方式23中的一个方式确定第二编码器的结构。进一步可选的,终端设备可以向接入网设备发送第二指示信息,该第二指示信息用于指示第二编码器的结构,例如第二指示信息具体用于指示第二编码器的结构为方式21、方式22或者方式23。可选的,在方式21、方式22或方式23的一个方式中,还可以预配置N的不同取值与第二编码器的结构之间的映射关系,或者可以描述为第二编码器的结构对应于N的取值。
当然类似地,还可以预先配置第二编码器中各个第二AI模型的模型类型和模型参数中的一项或多项;或者终端设备可以向接入网设备上报终端设备使用第二编码器内第二AI模型的标识、模型类型或模型参数等中的一项或多项。例如当第二指示信息指示方式21时,还可以具体指示方式21中第二编码器内第二AI模型的标识、模型类型或模型参数等中的一项或多项。
S504,终端设备向接入网设备发送所述第二信道状态指示信息。
一种可选的实施方式中,可以预定义终端设备向接入网设备反馈的是经过两次压缩的CSI,或者终端设备和接入网设备可以预先协商反馈两次压缩后的CSI。那么接入网设备在收到第二信道状态指示信息时,可以确定需要对第二信道状态指示信息进行两次解压缩,进一步执行S505~S506。
另一种可选的实施方式中,终端设备在发送第二信道状态指示信息的同时,还可以发送用于指示秩的信息如RI,前述下行传输层数N的取值等于该秩的值。或者可以理解,RI用于指示N的取值。在终端设备发送第二信道状态指示信息的情况下,RI指示N的取值大于1,接入网设备根据RI也可以得知第二信道状态指示信息是终端设备通过两次压缩得到的。这样的设计利用已有信息额外定义或指示其他内容,可以提高信息利用率。
其中,前述第二编码器的结构对应于所述N的取值。类似地,第二编码器匹配的第二解码器的结构也对应于所述N的取值。可选的,如果第二编码器和第二解码器为同一自编码器中的两部分,第二编码器所属的第二自编码器的结构可以对应于N的取值。进而,接入网设备可以根据N的取值,确定第二编码器、第二解码器、或者第二自编码器的结构中的一项或多项。可选的,前述第一编码器的结构也可以对应于所述N的取值。类似地,第一编码器匹配的第一解码器的结构也对应于所述N的取值。可选的,如果第一编码器和第一解码器为同一自编码器中的两部分,第一编码器所属的第一自编码器的结构也可以对应于N的取值。进而,接入网设备可以根据N的取值,确定第一编码器、第一解码器、或者第一自编码器的结构中的一项或多项。
此外可选的,可以定义一个独立的信息,如记作第二信息。由第二信息指示如下的一种或多种:第二编码器、第二编码器所属的第二自编码器AE、与第二编码器匹配的第二解码器。终端设备在发送第二信道状态指示信息的同时,还可以发送该第二信息。则接入网设备可以根据第二信息确定与第二编码器匹配的第二解码器。例如第二信息指示第二编码器时,第二信息可以包括第二编码器的模型标识、模型类型以及模型参数中的一项或多项。又如该第二信息可以包括S503中描述的第二指示信息,第二指示信息指示第二编码器的结构是方式21、方式22、方式23中的一种,各个方式描述的第二编码器中包括的第二AI模型的模型类型和模型参数为预配置。则接入网设备可以根据第二指示信息确定第二编码器的结构。可选的,第二信息还可以包括用于指示N的取值的信息。
类似地,也可以单独定义第一信息,由第一信息指示如下的一种或多种:第一编码器、第一编码器所属的第一自编码器AE、与第一编码器匹配的第一解码器。则接入网设备可以根据第一信息确定与第一编码器匹配的解码器。例如第一信息指示第一编码器时,第一信息可以包括第一编码器的模型标识、模型类型以及模型参数中的一项或多项。又如该第一信息可以包括S502中描述的第一指示信息,第一指示信息指示第一编码器的结构是方式11、方式12、方式13中的一种,各个方式描述的第一编码器中包括的第一AI模型的模型结构和模型参数为预配置。则接入网设备可以根据第一指示信息确定第一编码器的结构。可选的,第一信息还可以包括用于指示N的取值的信息。
S505,接入网设备利用第二解码器和输入所述第二解码器的所述第二信道状态指示信息,确定N个第一信道状态指示信息。
其中,第二解码器为与S503中描述的第二编码器匹配使用的解码器,第二解码器的实现可以理解为第二编码器的逆过程。具体地,接入网设备可以根据终端设备发送的信息或者第二编码器的相关配置确定第二解码器,然后接入网设备将收到的第二信道状态指示信息输入至第二解码器,获得N个第一信道状态指示信息。可以理解的是,作为示例,在本方案一中第二解码器具备解量化以及解压缩功能,接入网设备利用第二解码器对第二信道状态指示信息的处理包括解量化以及解压缩。
具体地,通过前述训练过程,可以训练得到第二解码器。对应S503描述第二编码器的三种结构,第二解码器也可以有对应的三种结构。为便于实施,下面通过方式31~方式33示意出第二解码器可能的三种结构。其中,方式31与方式21描述的第二编码器匹配,方式32与方式22描述的第二编码器匹配,方式33与方式23描述的第二编码器匹配。
方式31,第二解码器包括1个第三AI模型。该第二解码器的输入包括接入网设备获取的第二信道状态指示信息,该1个第三AI模型的输出包括N个第一信道状态指示信息。
示例性的,参见图6A,还示意出接入网设备利用包含1个第三AI模型的第二解码器对第二信道状态指示信息进行解量化以及解压缩,得到N个第一信道状态指示信息的流程。
方式32,第二解码器可以包括多个第三AI模型,第二解码器中第三AI模型的数量与下行传输层数N有关。对应S503中的描述,N大于2时,该第二解码器中可以包括N-1个第三AI模型。其中,所述N-1个第三AI模型中第N-1个第三AI模型的输入包括第二信道状态指示信息,输出包括第N-2个第四信道状态指示信息和第N个第一信道状态指示信息;第N-i个第三AI模型的输入包括第N-i个第四信道状态指示信息,所述第N-i个第三AI模型的输出包括所述N个第一信道状态指示信息中的第N-i+1个第一信道状态指示信息和第N-i-1个第四信道状态指示信息;第1个第三AI模型的输入包括第1个第四信道状态指示信息,所述第1个第三AI模型的输出包括所述N个第一信道状态指示信息中的第1个第一信道状态指示信息和第2个第一信道状态指示信息。其中,i取遍2至N-2的正整数。可以理解,N-i、N-i-1均不可以取0,所述第二信道状态指示信息为第N-1个第四信道状态指示信息。
具体地,如果N为3,第二解码器包括第2个第三AI模型以及第1个第三AI模型。第2个第三AI模型的输入包括第2个第四信道状态指示信息即第二信道状态指示信息,第2个第三AI模型的输出包括第3个第一信道状态指示信息和第1个第四信道状态指示信息。第1个第三AI模型的输入包括第1个第四信道状态指示信息,第1个第三AI模型的输出包括第1个第一信道状态指示信息和第2个第一信道状态指示信息。
或者,如果N大于3,N-1个第三AI模型中的第N-1个第三AI模型的输入包括接入网设备获取的第二信道状态指示信息,第N-1个第三AI模型的输出包括第N-2个第四信道状态指示信息以及第N个第一信道状态指示信息。从第N-2个第三AI模型开始到第2个第三AI模型,其中i取遍2至N-2的正整数,第N-i个第三AI模型的输入包括第N-i个第四信道状态指示信息,第N-i个第三AI模型的输出包括第N-i-1个第四信道状态指示信息和所述N个第一信道状态指示信息中第N-i+1个第一信道状态指示信息。第1个第三AI模型的输入包括第1个第四信道状态指示信息,第1个第三AI模型的输出包括第1个第一信道状态指示信息和第2个第一信道状态指示信息。
示例性的,参见图6B,还示意出接入网设备利用第二解码器对第二信道状态指示信息进行解量化以及解压缩,得到4个第一信道状态指示信息的流程。该第二解码器包括3个第三AI模型,从第3个第三AI模型至第1个AI模型依次记作第三AI模型23、第三AI模型22以及第三AI模型21。可以理解,第三AI模型23与第二AI模型23匹配,第三AI模型22与第二AI模型22匹配,第三AI模型21与第二AI模型21匹配。具体地,第三AI模型23的输入包括第二信道状态指示信息,第三AI模型23的输出包括第2个第四信道状态指示信息以及第4个第一信道状态指示信息;第三AI模型22的输入包括第2个第四信道状态指示信息,第三AI模型22的输出包括第1个第四信道状态指示信息以及第3个第一信道状态指示信息;第三AI模型21的输入包括第1个第四信道状态指示信息,第三AI模型21的输出包括第1个第一信道状态指示信息以及第2个第一信道状态指示信息。
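方式32的逐级恢复过程与方式22的逐级压缩互为逆过程,可用如下纯Python草图示意(decompress为假设的占位函数,用补零切分代替真实的第三AI模型,仅示意逐级回推的数据流):

```python
def decompress(s4, a_len, b_len):
    """占位的"第三AI模型": 把一段序列恢复成两段(补零切分仅作示意)。"""
    merged = list(s4) + [0] * max(0, a_len + b_len - len(s4))
    return merged[:a_len], merged[a_len:a_len + b_len]

def second_decoder_chain(s2, n, s1_len, s4_len):
    """方式32: 第N-1个模型从第二信道状态指示信息恢复出第N个第一信道状态
    指示信息与上一级第四信道状态指示信息, 逐级回推, 最后一级输出前两层。"""
    s1_list = []
    s4 = s2
    for _ in range(n - 2):                   # 第N-1个到第2个第三AI模型
        s4, s1 = decompress(s4, s4_len, s1_len)
        s1_list.append(s1)                   # 依次得到第N、N-1、…、3个
    a, b = decompress(s4, s1_len, s1_len)    # 第1个第三AI模型输出第1、2个
    s1_list.extend([b, a])
    s1_list.reverse()                        # 调整为第1至第N个的顺序
    return s1_list
```

N-1级回推后恢复出N个第一信道状态指示信息,与图6B所示解码流程的级联结构一致。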
方式33,当下行传输层数N大于2时,第二解码器可以包括K个第三AI模型。其中,K为大于1且小于N-1的正整数,例如K为2。可以理解,K个第三AI模型中每个第三AI模型用于恢复N个第一信道状态指示信息中的部分第一信道状态指示信息。下面对K个第三AI模型的输入输出进行详细说明。
示例性的,N为4时,K可以为2,即第二解码器可以包括2个第三AI模型,记作第三AI模型21和第三AI模型22。
一种可选的实施方式中,第三AI模型22的输入包括第二信道状态指示信息,第三AI模型22的输出包括第1个第四信道状态指示信息和第N个第一信道状态指示信息。第三AI模型21的输入包括第1个第四信道状态指示信息,第三AI模型21的输出包括N个第一信道状态指示信息中前N-1个第一信道状态指示信息。
参见图6C,还示意出接入网设备利用第二解码器对第二信道状态指示信息进行解量化以及解压缩,得到4个第一信道状态指示信息的流程。该第二解码器包括2个第三AI模型,从第2个第三AI模型至第1个AI模型依次记作第三AI模型22以及第三AI模型21。可以理解,第三AI模型22与第二AI模型22匹配,第三AI模型21与第二AI模型21匹配。具体地,第三AI模型22的输入包括第二信道状态指示信息,第三AI模型22的输出包括第1个第四信道状态指示信息以及第4个第一信道状态指示信息;第三AI模型21的输入包括第1个第四信道状态指示信息,第三AI模型21的输出包括第1个第一信道状态指示信息、第2个第一信道状态指示信息以及第3个第一信道状态指示信息。
另一种可选的实施方式中,第三AI模型22的输入包括第二信道状态指示信息,第三AI模型22的输出包括第1个第四信道状态指示信息以及N个第一信道状态指示信息中除前两个之外的其余第一信道状态指示信息,如N为4,其余第一信道状态指示信息包括第3个第一信道状态指示信息和第4个第一信道状态指示信息;第三AI模型21的输入包括第1个第四信道状态指示信息,第三AI模型21的输出包括N个第一信道状态指示信息中前两个第一信道状态指示信息,即第1个第一信道状态指示信息和第2个第一信道状态指示信息。
示例性的,N为5时,K可以为2或3,即第二解码器可以包括2个第三AI模型或者包括3个第三AI模型。其中,对于第二解码器包括2个第三AI模型的情况可参照N为4的示例理解。而对于第二解码器包括3个第三AI模型的情况,可按照如下内容理解:
将3个第三AI模型记作第三AI模型23、第三AI模型22、第三AI模型21。对应方式23中描述的示例,一种可选的实施方式中,第三AI模型23的输入包括第二信道状态指示信息,第三AI模型23的输出包括5个第一信道状态指示信息中第5个第一信道状态指示信息以及第2个第四信道状态指示信息。第三AI模型22的输入包括第2个第四信道状态指示信息,第三AI模型22的输出包括5个第一信道状态指示信息中第4个第一信道状态指示信息以及第1个第四信道状态指示信息。第三AI模型21的输入包括第1个第四信道状态指示信息,第三AI模型21的输出包括5个第一信道状态指示信息中第1至3个第一信道状态指示信息。
另一种可选的实施方式中,第三AI模型23的输入包括第二信道状态指示信息,第三AI模型23的输出包括5个第一信道状态指示信息中第5个第一信道状态指示信息、第2个第四信道状态指示信息以及第1个第四信道状态指示信息。第三AI模型22的输入包括第2个第四信道状态指示信息,第三AI模型22的输出包括5个第一信道状态指示信息中第3个第一信道状态指示信息以及第4个第一信道状态指示信息。第三AI模型21的输入包括第1个第四信道状态指示信息,第三AI模型21的输出包括5个第一信道状态指示信息中第1个第一信道状态指示信息以及第2个第一信道状态指示信息。
此外还有一种可选的实施方式,所述第三AI模型23的输入包括所述第二信道状态指示信息,第三AI模型23的输出包括第1个第四信道状态指示信息以及第2个第四信道状态指示信息。第三AI模型22的输入包括第2个第四信道状态指示信息,第三AI模型22的输出包括5个第一信道状态指示信息中第3个第一信道状态指示信息至第5个第一信道状态指示信息。第三AI模型21的输入包括第1个第四信道状态指示信息,第三AI模型21的输出包括5个第一信道状态指示信息中第1个第一信道状态指示信息和第2个第一信道状态指示信息。
在一种可选的实施方式中,如果第二编码器的结构预配置为方式21、方式22或方式23中的一种,那么第二解码器也相应地预配置为对应的结构。例如,如果第二编码器的结构预配置为方式21,那么第二解码器的结构也预配置为方式31,如果第二编码器的结构预配置为方式22,那么第二解码器的结构也预配置为方式32,如果第二编码器的结构预配置为方式23,那么第二解码器的结构也预配置为方式33。进一步可选的,在该设计中,第二解码器对应于下行传输层数。如果预先定义了下行传输层数,终端设备无需指示N的取值,接入网设备也可以得知N的取值,进而接入网设备可结合预配置的方式以及下行传输层数N获得第二解码器。或者,对应S504中的描述如果终端设备向接入网设备发送了用于指示秩的信息,接入网设备可以根据该秩的取值确定下行传输层数N,进而接入网设备可以结合预配置的方式以及下行传输层数N获得第二解码器。
示例性的,在预先配置第二编码器的结构为方式21、第二解码器的结构为方式31的情况下,接入网设备可以根据N的取值,确定第二编码器的输入端口数,以及第二解码器的输出端口数。或者,在方式21中第二编码器包括1个第二AI模型,不同下行传输层数对应不同的第二AI模型,第二AI模型的模型结构和模型参数是预配置的情况下,接入网设备可以根据N的取值确定对应的第二AI模型。类似地,在方式31中第二解码器包括1个第三AI模型,不同下行传输层数对应不同的第三AI模型,第三AI模型的模型类型和模型参数是预配置的情况下,接入网设备可以根据N的取值确定对应的第三AI模型。在预先配置第二编码器的结构为方式22,第二解码器的结构为方式32的情况下,接入网设备可以根据N的取值,确定第二编码器中包含第二AI模型的数量以及第二解码器中包含第三AI模型的数量。如果预先配置了各个第二AI模型的模型类型和模型参数,接入网设备根据N的取值可以确定第二编码器以及第二解码器。在预先配置第二编码器的结构为方式23,第二解码器的结构为方式33的情况下,如果预先设定K与N之间的取值关系,如K均为N-2,那么接入网设备可以根据N的取值确定第二编码器中包含第二AI模型的数量K,如果预先配置了K个第二AI模型的模型类型和模型参数,接入网设备根据K的取值可以确定第二编码器的结构,并相应地利用K的取值确定与第二编码器匹配的第二解码器。
在另一种可选的实施方式中,如S504中的描述,如果终端设备向接入网设备发送了第二信息,接入网设备可以根据第二信息确定第二解码器。例如第二信息包括第二指示信息时,接入网设备可以根据第二指示信息确定第二解码器的结构为方式31、方式32或者方式33。类似地,如果预先定义了下行传输层数,终端设备无需指示N的取值,接入网设备也可以得知N的取值,进而接入网设备可结合预配置的方式以及下行传输层数N获得第二解码器。或者,对应S504中的描述,如果终端设备向接入网设备发送了用于指示N的取值的信息,接入网设备可以根据该信息确定下行传输层数N,进而接入网设备可以结合预配置的方式以及下行传输层数N获得第二解码器。
S506,接入网设备利用第一解码器和输入所述第一解码器的所述N个第一信道状态指示信息,确定所述N个下行传输层的信道信息。
其中,第一解码器为与S502中描述的第一编码器匹配使用的解码器,第一解码器的实现可以理解为第一编码器的逆过程。具体地,接入网设备可以根据终端设备发送的信息或者第一编码器的相关配置确定第一解码器,然后接入网设备将S505确定的N个第一信道状态指示信息输入至第一解码器,获得N个下行传输层的信道信息。可以理解的是,作为示例,在本方案一中第一解码器具备解量化以及解压缩功能,接入网设备利用第一解码器对N个第一信道状态指示信息的处理包括解量化以及解压缩。
具体地,通过前述训练过程,可以训练得到第一解码器。对应S502描述第一编码器的三种结构,第一解码器也有对应的三种结构。为便于实施,下面通过方式41~方式43示意出第一解码器可能的三种结构。其中,方式41与方式11描述的第一编码器匹配,方式42与方式12描述的第一编码器匹配,方式43与方式13描述的第一编码器匹配。
方式41,第一解码器包括1个第四AI模型。接入网设备可以利用该1个第四AI模型对N个第一信道状态指示信息进行处理,恢复出N个下行传输层的信道信息。可以理解,接入网设备将N个第一信道状态指示信息输入至该1个第四AI模型,对应的,第一解码器的输出包括恢复的N个下行传输层的信道信息。可选的,接入网设备可以将N个第一信道状态指示信息并行或者串行的输入该1个第四AI模型,本公开对此不予限制。
可选的,针对不同下行传输层数,可以设定不同的第一解码器,即不同的第四AI模型。或描述为第一解码器包括的1个第四AI模型与N的取值关联,不同取值的N所关联的第四AI模型不同。例如,N为2时,第一解码器记作解码器21或AE模型21中的解码器;N为3时,第一解码器记作解码器31或AE模型31中的解码器。这样的设计可以增强AI模型应用的灵活性。或者可选的,设定不同下行传输层数对应使用同一个第一解码器或同一AE模型中的解码器。这样的设计可以降低AI模型的存储开销。
方式42，第一解码器可以包括N个第四AI模型，第一解码器中的第i个第四AI模型用于对N个第一信道状态指示信息中第i个第一信道状态指示信息进行处理，得到N个下行传输层中第i个下行传输层的信道信息。可以理解的是，所述N个第一信道状态指示信息分别输入所述N个第四AI模型，N个第四AI模型中的第i个第四AI模型的输入包括N个第一信道状态指示信息中第i个第一信道状态指示信息，第i个第四AI模型的输出包括第i个下行传输层的信道信息。可选的，第一解码器中N个第四AI模型可以相同，或者N个第四AI模型的模型类型(如使用的神经网络)相同，但模型参数不同；或者，N个第四AI模型的模型类型和模型参数均不相同，本公开对此不进行限制。
方式43,第一解码器包括M个第四AI模型,M为大于1且小于N的整数。其中,一个第四AI模型可以用于对N个第一信道状态指示信息中的部分(一个或多个)第一信道状态指示信息进行处理,或者可以描述为N个第一信道状态指示信息中的部分第一信道状态指示信息可以由同一个第四AI模型进行处理,所述N个第一信道状态指示信息中的部分第一信道状态指示信息串行或并行输入所述M个第四AI模型中的1个第四AI模型。
作为示例,对应图6A、图6B以及图6C中第一编码器包括4个第一AI模型,在图6A、图6B以及图6C中进一步示意出第一解码器也包括4个第四AI模型,即与第一AI模型11关联的第四AI模型11、与第一AI模型12关联的第四AI模型12、与第一AI模型13关联的第四AI模型13、以及与第一AI模型14关联的第四AI模型14。其中,第四AI模型11的输入包括第1个第一信道状态指示信息,第四AI模型11的输出包括下行传输层1的信道信息;第四AI模型12的输入包括第2个第一信道状态指示信息,第四AI模型12的输出包括下行传输层2的信道信息;第四AI模型13的输入包括第3个第一信道状态指示信息,第四AI模型13的输出包括下行传输层3的信道信息;第四AI模型14的输入包括第4个第一信道状态指示信息,第四AI模型14的输出包括下行传输层4的信道信息。
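作为便于理解的示意性草图(并非本公开方案的实际实现)，以下Python代码以方式12/方式42的逐层结构为例，用随机线性映射代替真实的第一AI模型与第四AI模型，示意N个下行传输层的信道信息逐层编码、再由对应的第四AI模型逐层恢复的流程；其中N、各维度取值以及用伪逆近似匹配解码器的做法均为假设。

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4            # 下行传输层数(假设)
DIM_CH = 64      # 单个下行传输层的信道信息维度(假设)
DIM_CSI = 16     # 单个第一信道状态指示信息的维度(假设)

# 每层一个"第一AI模型"(编码)与一个"第四AI模型"(解码)，此处以线性映射代替
enc_weights = [rng.standard_normal((DIM_CSI, DIM_CH)) for _ in range(N)]
dec_weights = [np.linalg.pinv(W) for W in enc_weights]  # 用伪逆近似匹配的解码器

def first_encoder(layers):
    """N个下行传输层的信道信息 -> N个第一信道状态指示信息"""
    return [W @ h for W, h in zip(enc_weights, layers)]

def first_decoder(csi_list):
    """N个第一信道状态指示信息 -> 恢复的N个下行传输层的信道信息"""
    return [W @ c for W, c in zip(dec_weights, csi_list)]

layers = [rng.standard_normal(DIM_CH) for _ in range(N)]
csi = first_encoder(layers)          # 逐层压缩
recovered = first_decoder(csi)       # 逐层恢复
assert len(csi) == N and csi[0].shape == (DIM_CSI,)
assert len(recovered) == N and recovered[0].shape == (DIM_CH,)
```

该草图仅用于说明"第i个模型处理第i层"的一一对应关系，压缩为有损过程，恢复结果并不等于原始信道信息。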
在一种可选的实施方式中,如果第一编码器的结构预配置为方式11、方式12或方式13中的一种,那么第一解码器也相应地预配置为对应的结构。例如,如果第一编码器的结构预配置为方式11,那么第一解码器的结构也预配置为方式41,如果第一编码器的结构预配置为方式12,那么第一解码器的结构也预配置为方式42,如果第一编码器的结构预配置为方式13,那么第一解码器的结构也预配置为方式43。进一步可选的,在该设计中,第一解码器对应于下行传输层数。如果预先定义了下行传输层数,终端设备无需指示N的取值,接入网设备也可以得知N的取值,进而接入网设备可结合预配置的方式以及下行传输层数N获得第一解码器。或者,对应S504中的描述如果终端设备向接入网设备发送了用于指示秩的信息,接入网设备可以根据该秩的取值确定下行传输层数N,进而接入网设备可以结合预配置的方式以及下行传输层数N获得第一解码器。
示例性的，在预先配置第一编码器的结构为方式11、第一解码器的结构为方式41的情况下，接入网设备可以根据N的取值，确定第一编码器的输入数据的数量，以及第一解码器的输出数据的数量。或者，在方式11中第一编码器包括1个第一AI模型，不同下行传输层数对应不同的第一AI模型，第一AI模型的模型类型和模型参数是预配置的情况下，接入网设备可以根据N的取值确定对应的第一AI模型。类似地，在方式41中第一解码器包括1个第四AI模型，不同下行传输层数对应不同的第四AI模型，第四AI模型的模型类型和模型参数是预配置的情况下，接入网设备可以根据N的取值确定对应的第四AI模型。在预先配置第一编码器的结构为方式12、第一解码器的结构为方式42的情况下，接入网设备可以根据N的取值，确定第一编码器中包含第一AI模型的数量，如果预先配置了各个第一AI模型的模型类型和模型参数，接入网设备根据N的取值可以获得第一编码器；接入网设备也可以根据N的取值，确定第一解码器包含第四AI模型的数量，如果预先配置了各个第四AI模型的模型类型和模型参数，接入网设备根据N的取值可以获得第一解码器。在预先配置第一编码器的结构为方式13的情况下，如果预先设定M与N之间的取值关系，如M为N-2，那么接入网设备可以根据N的取值确定第一编码器中包含第一AI模型的数量M，以及第一解码器中包含第四AI模型的数量M。如果预先配置了M个第一AI模型/第四AI模型的模型类型和模型参数，接入网设备根据M的取值可以获得第一编码器以及第一解码器。
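作为示意性草图(并非本公开方案的实际实现)，以下Python代码示意在预配置方式下由下行传输层数N推导编码器(或解码器)中AI模型数量的逻辑；其中的方式名、"M=N-2"的取值关系均沿用正文中的示例假设。

```python
# 由下行传输层数N推导第一编码器/第一解码器中AI模型的数量(示意)
def num_models(mode: str, n: int) -> int:
    """mode为预配置的结构方式，n为下行传输层数N。"""
    if mode == "mode11":        # 方式11/方式41：1个AI模型
        return 1
    if mode == "mode12":        # 方式12/方式42：N个AI模型
        return n
    if mode == "mode13":        # 方式13/方式43：M个AI模型，示例中预设M = N - 2
        assert n > 3, "方式13要求1 < M < N"
        return n - 2
    raise ValueError(mode)

assert num_models("mode11", 4) == 1
assert num_models("mode12", 4) == 4
assert num_models("mode13", 5) == 3
```

可见在预配置方式与取值关系已知的前提下，接收侧仅需获知N(例如通过秩指示)即可确定模型数量，无需额外信令。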
进一步可选地，接入网设备可以将恢复出的N个下行传输层的信道信息进行整合，恢复出前述基于下行参考信号测量估计的下行信道信息。
上述方案一中,利用多级AI模型对下行信道信息进行两次压缩,减少多层CSI之间的公共信息所占的资源,能够降低CSI的反馈开销。
方案二
参见图7示意一种通信方法,该方法主要包括如下流程。
S701,终端设备确定N个下行传输层的信道信息,N为正整数。
具体地,可参照S501理解,本公开对此不再进行赘述。
S702,终端设备利用第一编码器和输入所述第一编码器的N个下行传输层的信道信息,确定N个第一信道状态指示信息。
具体地,可参照S502理解,本公开对此不再进行赘述。
进一步地，如果前述下行传输层数N等于1，且第一编码器具备压缩以及量化功能，终端设备可以将N个第一信道状态指示信息发送给接入网设备；或者，如果前述下行传输层数N等于1，且第一编码器只具备压缩功能，终端设备可以将N个第一信道状态指示信息进行量化处理后发送给接入网设备。或者，如果下行传输层数N大于1，则终端设备在执行完S702之后，进一步执行如下S703和S704：
S703,终端设备利用第二编码器和输入所述第二编码器的所述N个第一信道状态指示信息,确定第三信道状态指示信息;其中,所述第三信道状态指示信息对应所述N个下行传输层。
可以理解的是,在本方案二中第二编码器具备压缩(编码)功能,但不具备量化功能。终端设备可以利用第二编码器对N个第一信道状态指示信息进行压缩处理,得到第三信道状态指示信息。其中,第二编码器的输入包括N个第一信道状态指示信息,输出包括第三信道状态指示信息。可以理解,S703得到的第三信道状态指示信息为对S502得到的N个第一信道状态指示信息的压缩。
具体地，第三信道状态指示信息也可以理解为一个序列，第三信道状态指示信息对应的序列长度小于N个第一信道状态指示信息对应的序列长度之和。本方案二中的第二编码器具备压缩功能但不具备量化功能，对N个第一信道状态指示信息的处理包括压缩，第三信道状态指示信息对应的序列具体可以为浮点数序列。
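作为示意性草图(并非本公开方案的实际实现)，以下Python代码用一个随机线性映射代替第二编码器中的AI模型，示意"第三信道状态指示信息对应的序列长度小于N个第一信道状态指示信息对应的序列长度之和"这一压缩关系；其中各维度取值均为假设。

```python
import numpy as np

rng = np.random.default_rng(1)
N, DIM_CSI, DIM_OUT = 4, 16, 24   # 假设：4层，每层CSI为16维，二次压缩到24维

csi_list = [rng.standard_normal(DIM_CSI) for _ in range(N)]
W2 = rng.standard_normal((DIM_OUT, N * DIM_CSI))   # 代替第二编码器的AI模型

def second_encoder(csi_list):
    x = np.concatenate(csi_list)   # 合并N个第一信道状态指示信息
    return W2 @ x                  # 输出为浮点数序列，即第三信道状态指示信息

third_csi = second_encoder(csi_list)
# 输出序列长度(24)小于输入序列长度之和(4*16=64)
assert third_csi.shape[0] < sum(c.shape[0] for c in csi_list)
```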
具体地,通过前述训练过程,可以训练得到第二编码器。该第二编码器的结构可参照如下方式21-1、方式22-1或方式23-1中的任意一个方式。
方式21-1,第二编码器包括1个第二AI模型。终端设备可以利用该1个第二AI模型对N个第一信道状态指示信息进行压缩处理,得到第三信道状态指示信息。可以理解,该1个第二AI模型的输入包括N个第一信道状态指示信息,该1个第二AI模型的输出包括第三信道状态指示信息。
示例性的,如图8A示意出一种编码流程示意图。图8A的描述可以参考图6A的描述,其与图6A不同,仅在于第二编码器不具备量化功能,其输出为第三信道状态指示信息。该第三信道状态指示信息未经过量化处理,而第二信道状态指示信息为经过量化处理后的信息。
方式22-1,当下行传输层数N大于2时,第二编码器可以包括多个第二AI模型,第二编码器中第二AI模型的数量与下行传输层数N有关。示例性的,如图8B示意出一种编码流程示意图。图8B的描述可以参考图6B的描述,其与图6B不同,仅在于第二编码器不具备量化功能,其输出为第三信道状态指示信息,该第三信道状态指示信息未经过量化处理,而第二信道状态指示信息为经过量化处理后的信息。
方式23-1，当下行传输层数N大于2时，第二编码器可以包括K个第二AI模型。其中，K为大于1且小于N-1的正整数，例如K为2。可以理解，所述N个第一信道状态指示信息中的部分第一信道状态指示信息并行输入所述K个第二人工智能AI模型中的1个第二人工智能AI模型。
如图8C示意出一种编码流程示意图。图8C的描述可以参考图6C的描述,其与图6C不同,仅在于第二编码器不具备量化功能,其输出为第三信道状态指示信息,该第三信道状态指示信息未经过量化处理,而第二信道状态指示信息为经过量化处理后的信息。
S704,终端设备对第三信道状态指示信息进行量化处理,得到第二信道状态指示信息。
具体地，终端设备可以采用量化器对第三信道状态指示信息进行量化处理，得到第二信道状态指示信息。量化器可以是如下中的一种：非均匀量化器、标量量化器、矢量量化器、根据经验设计的量化器、通过AI训练得到的量化器，该量化器可以与编码器(如第一编码器、第二编码器、或者相关的解码器、自编码器AE)一起进行训练。量化处理也可以称作量化操作，量化处理可以理解为在量化字典中寻找与待量化的量最接近的项，并输出该项的索引。本公开中待量化的量为第三信道状态指示信息，经由查询量化字典输出的索引包含于该第三信道状态指示信息的量化信息，即第二信道状态指示信息。可以理解，第三信道状态指示信息为一个浮点数序列，量化后得到的第二信道状态指示信息具体可能是一个比特序列。
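作为示意性草图(并非本公开方案的实际实现)，以下Python代码示意"在量化字典中寻找与待量化的量最接近的项并输出其索引"的量化处理，以及按索引查回字典项的解量化处理；其中演示的是逐元素标量量化，量化字典的取值为假设。

```python
import numpy as np

codebook = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])   # 假设的量化字典

def quantize(x):
    """浮点数序列 -> 索引序列(索引可进一步表示为比特序列)"""
    return np.argmin(np.abs(x[:, None] - codebook[None, :]), axis=1)

def dequantize(idx):
    """索引序列 -> 恢复的浮点数序列(量化器的逆操作)"""
    return codebook[idx]

x = np.array([0.9, -0.6, 0.1, 0.4])   # 待量化的第三信道状态指示信息(假设)
idx = quantize(x)                     # 第二信道状态指示信息中携带的索引
x_hat = dequantize(idx)               # 接收侧解量化得到的近似值
assert idx.tolist() == [4, 1, 2, 3]
assert np.all(np.abs(x - x_hat) <= 0.25)   # 量化误差受字典间隔限制
```

矢量量化的思路与此类似，只是字典中的每一项为一个向量，按向量距离寻找最近项。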
作为示例,在图8A、图8B以及图8C中,进一步还示意出了终端设备基于量化器对第三信道状态指示信息进行量化处理,得到第二信道状态指示信息的流程,其中量化器的输入包括第三信道状态指示信息,量化器的输出包括第二信道状态指示信息。
可选的，终端设备在执行完S702之后、执行S703之前，也可以引入对S702中的N个第一信道状态指示信息的量化，进而终端设备在S703中所使用的N个第一信道状态指示信息是经由量化后的信息。N大于1时，终端设备可以采用N个量化器分别对S702中的N个第一信道状态指示信息进行量化处理，N个量化器与N个第一信道状态指示信息一一对应。N个量化器可以相同或者不同。
可选的,一种可能的设计中,可以预配置终端设备使用的量化器,如采用协议定义的方式实现预配置,或者终端设备与接入网设备预先协商一个量化器。另一种可能的设计中,终端设备可以自行决定使用的量化器,并向接入网设备发送用于指示其使用的量化器的信息。
S705,终端设备向接入网设备发送所述第二信道状态指示信息。
具体地,可参照S504实施,本公开对此不再进行赘述。
S706,接入网设备对第二信道状态指示信息进行解量化处理,得到第三信道状态指示信息。
示例性的，图8A、图8B以及图8C还示意出接入网设备利用解量化器对第二信道状态指示信息进行解量化处理，得到第三信道状态指示信息的流程。其中，解量化器与量化器匹配使用，解量化器可以理解为量化器的逆操作，解量化器的输入包括第二信道状态指示信息，解量化器的输出包括第三信道状态指示信息。具体地，对应量化器为预配置的情况，可以理解的是解量化器也是预配置的。或者如果终端设备发送了用于指示量化器的信息，接入网设备可以根据该用于指示量化器的信息，确定匹配的解量化器。
S707,接入网设备利用第二解码器和输入所述第二解码器的所述第三信道状态指示信息,确定N个第一信道状态指示信息。
其中，第二解码器为与S703中描述的第二编码器匹配使用的解码器，第二解码器的实现可以理解为第二编码器的逆过程。具体地，接入网设备可以根据终端设备发送的信息或者第二编码器的相关配置确定第二解码器，然后接入网设备将收到的第三信道状态指示信息输入至第二解码器，获得N个第一信道状态指示信息。可以理解的是，作为示例，在本方案二中第二解码器只具备解压缩功能，接入网设备利用第二解码器对第三信道状态指示信息的处理包括解压缩。
具体地,通过前述训练过程,可以训练得到第二解码器。对应S503描述第二编码器的三种结构,第二解码器也可以有对应的三种结构。为便于实施,下面通过方式31-1~方式33-1示意出第二解码器可能的三种结构。其中,方式31-1与方式21-1描述的第二编码器匹配,方式32-1与方式22-1描述的第二编码器匹配,方式33-1与方式23-1描述的第二编码器匹配。
方式31-1，第二解码器包括1个第三AI模型。该第二解码器的输入包括接入网设备在S706确定的第三信道状态指示信息，该1个第三AI模型的输出包括N个第一信道状态指示信息。
示例性的,参见图8A,还示意出接入网设备利用第二解码器对第三信道状态指示信息进行解压缩,得到N个第一信道状态指示信息的流程。
方式32-1,第二解码器可以包括多个第三AI模型,第二解码器中第三AI模型的数量与下行传输层数N有关。
示例性的,参见图8B,还示意出接入网设备利用第二解码器对第三信道状态指示信息进行解压缩,得到4个第一信道状态指示信息的流程。具体描述可以参考图6B中的描述,其与图6B的区别仅在于图6B中是对第二信道状态指示信息进行解压缩。
方式33-1,当下行传输层数N大于2时,第二解码器可以包括K个第三AI模型。其中,K为大于1且小于N-1的正整数,例如K为2。
参见图8C,还示意出接入网设备利用第二解码器对第三信道状态指示信息进行解压缩,得到4个第一信道状态指示信息的流程。具体描述可以参考图6C中的描述,其与图6C的区别仅在于图6C中是对第二信道状态指示信息进行解压缩。
S708,接入网设备利用第一解码器和输入所述第一解码器的所述N个第一信道状态指示信息,确定所述N个下行传输层的信道信息。
具体地,可参照S506理解,本公开对此不再进行赘述。
作为示例,对应图8A、图8B以及图8C中第一编码器包括4个第一AI模型,在图8A、图8B以及图8C中进一步示意出第一解码器也包括4个第四AI模型,即与第一AI模型11关联的第四AI模型11、与第一AI模型12关联的第四AI模型12、与第一AI模型13关联的第四AI模型13、以及与第一AI模型14关联的第四AI模型14。其中,第四AI模型11的输入包括第1个第一信道状态指示信息,第四AI模型11的输出包括下行传输层1的信道信息;第四AI模型12的输入包括第2个第一信道状态指示信息,第四AI模型12的输出包括下行传输层2的信道信息;第四AI模型13的输入包括第3个第一信道状态指示信息,第四AI模型13的输出包括下行传输层3的信道信息;第四AI模型14的输入包括第4个第一信道状态指示信息,第四AI模型14的输出包括下行传输层4的信道信息。
进一步可选地，接入网设备可以将恢复出的N个下行传输层的信道信息进行整合，恢复出前述基于下行参考信号测量估计的下行信道信息。
上述方案二中,利用多级AI模型对下行信道信息进行两次压缩,减少多层CSI之间的公共信息所占的资源,且对压缩后的信道状态指示信息进行量化处理,能够降低CSI的反馈开销。
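作为便于理解的示意性草图(并非本公开方案的实际实现)，以下Python代码串联示意方案二的端到端流程：终端设备侧依次执行第一编码器逐层压缩(S702)、第二编码器二次压缩(S703)与量化(S704)，接入网设备侧依次执行解量化(S706)、第二解码器(S707)与第一解码器(S708)；其中所有AI模型均以随机线性映射及其伪逆代替，维度与量化字典均为假设。

```python
import numpy as np

rng = np.random.default_rng(2)
N, DIM_CH, DIM_CSI, DIM_OUT = 4, 64, 16, 24   # 假设的层数与各级维度

E1 = [rng.standard_normal((DIM_CSI, DIM_CH)) for _ in range(N)]   # 第一编码器
E2 = rng.standard_normal((DIM_OUT, N * DIM_CSI))                  # 第二编码器
D2 = np.linalg.pinv(E2)                      # 第二解码器(伪逆近似)
D1 = [np.linalg.pinv(W) for W in E1]         # 第一解码器(伪逆近似)
codebook = np.linspace(-3, 3, 64)            # 假设的量化字典

layers = [rng.standard_normal(DIM_CH) for _ in range(N)]
# 终端设备侧：S702 -> S703 -> S704
first_csi = [W @ h for W, h in zip(E1, layers)]
third_csi = E2 @ np.concatenate(first_csi)
second_csi = np.argmin(np.abs(third_csi[:, None] - codebook[None, :]), axis=1)
# 接入网设备侧：S706 -> S707 -> S708
third_hat = codebook[second_csi]
first_hat = np.split(D2 @ third_hat, N)
layers_hat = [W @ c for W, c in zip(D1, first_hat)]

assert second_csi.shape == (DIM_OUT,)        # 反馈的序列长度小于N*DIM_CSI
assert len(layers_hat) == N and layers_hat[0].shape == (DIM_CH,)
```

该草图仅示意各级输入输出的衔接关系，两次压缩与量化均为有损处理，恢复结果为原信道信息的近似。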
本公开的上述实施例中，编码器和解码器可以匹配使用，或者可以使用解码器而不使用编码器，不予限制。匹配使用的编码器和解码器中，编码器具备量化功能，解码器具备解量化功能，或者编码器不具备量化功能，解码器不具备解量化功能；或者，编码器具备量化功能，解码器不具备解量化功能；或者，编码器不具备量化功能，解码器具备解量化功能，不予限制。
上述方案一和方案二可以结合在一起实施，例如匹配使用的第二编码器和第二解码器中，第二编码器具备量化功能，第二解码器不具备解量化功能，此情况下可以先执行方案一中的S501~S504，然后执行方案二中的S706~S708。又如匹配使用的第二编码器和第二解码器中，第二编码器不具备量化功能，第二解码器具备解量化功能，此情况下可以先执行方案二中的S701~S705，然后执行方案一中的S505~S506。
作为与如图5或图7并列的一个可选的实施例，如图9所示，如图5或图7中的第一编码器和第二编码器的功能由同一个AI模型W1来实现，和/或，第一解码器和第二解码器的功能由与该AI模型W1相对应的同一个AI模型W2来实现。此时，AI模型W1和AI模型W2匹配使用，二者配套训练。具体训练过程与前述第一编码器、第二编码器、第一解码器以及第二解码器联合训练的过程类似，在此不予赘述。其中，AI模型W1的输入为N个下行传输层的信道信息，输出为第二信道状态指示信息，AI模型W2的输入为第二信道状态指示信息，输出为N个下行传输层的信道信息。
一种可能的实施方式中，AI模型W1和AI模型W2可以针对特定的N进行配套训练，这种情况下，N的取值与AI模型W1和AI模型W2具有对应关系，比如，N的取值N1对应AI模型W11和AI模型W21，N的取值N2对应AI模型W12和AI模型W22，其中，N的取值个数对应多套AI模型W1和AI模型W2的个数。终端设备可以向接入网设备发送用于指示秩的信息，所述秩的取值和所述N的取值相同，或者，终端设备可以向接入网设备发送指示信息，以指示AI模型W1和/或AI模型W2，又或者，第二信道状态指示信息的序列长度的范围和AI模型W1和/或AI模型W2相对应，和/或，和N的取值相对应，也即，第二信道状态指示信息的序列长度的范围可以指示AI模型W1和/或AI模型W2，和/或，N的取值。这样，可以使得在N的取值可变的情况下，接入网设备可以确定采用与终端设备所采用的AI模型W1所匹配的AI模型W2对所接收的第二信道状态指示信息进行解码处理。
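作为示意性草图(并非本公开方案的实际实现)，以下Python代码示意N的取值与配套的AI模型W1/W2之间的对应关系，以及接入网设备依据第二信道状态指示信息的序列长度范围反推N的逻辑；其中的模型名与长度区间均为假设。

```python
# N的取值 -> 配套训练的(AI模型W1, AI模型W2)(模型名为假设)
MODEL_PAIRS = {1: ("W11", "W21"), 2: ("W12", "W22"), 4: ("W14", "W24")}
# 第二信道状态指示信息的序列长度区间 -> N的取值(区间为假设)
LEN_RANGE_TO_N = {(1, 32): 1, (33, 64): 2, (65, 128): 4}

def infer_n(seq_len: int) -> int:
    """由接收到的序列长度所属的范围确定N的取值。"""
    for (lo, hi), n in LEN_RANGE_TO_N.items():
        if lo <= seq_len <= hi:
            return n
    raise ValueError("未知的序列长度范围")

n = infer_n(50)            # 例如收到长度为50的第二信道状态指示信息
w1, w2 = MODEL_PAIRS[n]    # 据此选出匹配的一套模型
assert (n, w1, w2) == (2, "W12", "W22")
```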
另一种可能的实施方式中,AI模型W1和AI模型W2可以针对任意N进行配套训练,即,并不专用于特定N。这种情况下,可选的,所述N的取值可以包括在前述N个下行传输层的信道信息中。
采用本方式,也可以实现多个下行传输层的信道信息的压缩传输。
基于同一构思,参见图10,本公开提供了一种通信装置1000,该通信装置1000包括处理模块1001和通信模块1002。该通信装置1000可以是终端设备,也可以是应用于终端设备或者和终端设备匹配使用,能够实现终端设备侧执行的通信方法的通信装置;或者,该通信装置1000可以是接入网设备,也可以是应用于接入网设备或者和接入网设备匹配使用,能够实现接入网设备侧执行的通信方法的通信装置。
其中，通信模块也可以称为收发模块、收发器、收发机、或收发装置等。处理模块也可以称为处理器、处理单板、处理单元或处理装置等。可选的，通信模块用于执行上述方法中终端设备侧或接入网设备侧的发送操作和接收操作，可以将通信模块中用于实现接收功能的器件视为接收单元，将通信模块中用于实现发送功能的器件视为发送单元，即通信模块包括接收单元和发送单元。
该通信装置1000应用于终端设备时,处理模块1001可用于实现图5~图9所述示例中所述终端设备的处理功能,通信模块1002可用于实现图5~图9所述示例中所述终端设备的收发功能。或者也可以参照发明内容中第三方面以及第三方面中可能的设计理解该通信装置。
该通信装置1000应用于接入网设备时,处理模块1001可用于实现图5~图9所述示例中所述接入网设备的处理功能,通信模块1002可用于实现图5~图9所述示例中所述接入网设备的收发功能。或者也可以参照发明内容中第四方面以及第四方面中可能的设计理解该通信装置。
此外需要说明的是,前述通信模块和/或处理模块可通过虚拟模块实现,例如处理模块可通过软件功能单元或虚拟装置实现,通信模块可以通过软件功能或虚拟装置实现。或者,处理模块或通信模块也可以通过实体装置实现,例如若该装置采用芯片/芯片电路实现,所述通信模块可以是输入输出电路和/或通信接口,执行输入操作(对应前述接收操作)、输出操作(对应前述发送操作);处理模块为集成的处理器或者微处理器或者集成电路。
本公开中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,另外,在本公开各个示例中的各功能模块可以集成在一个处理器中,也可以是单独物理存在,也可以两个或两个以上模块集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。
基于相同的技术构思,本公开还提供了一种通信装置1100。例如,该通信装置1100可以是芯片或者芯片系统。可选的,在本公开中芯片系统可以由芯片构成,也可以包含芯片和其他分立器件。
通信装置1100可用于实现前述示例描述的通信系统中任一网元的功能。通信装置1100可以包括至少一个处理器1110。可选的，该处理器1110与存储器耦合，存储器可以位于该装置之内，或，存储器可以和处理器集成在一起，或，存储器也可以位于该装置之外。例如，通信装置1100还可以包括至少一个存储器1120。存储器1120保存实施上述任一示例中的方法所必需的计算机程序或指令和/或数据；处理器1110可执行存储器1120中存储的计算机程序，完成上述任一示例中的方法。
通信装置1100中还可以包括通信接口1130，通信装置1100可以通过通信接口1130和其它设备进行信息交互。示例性的，所述通信接口1130可以是收发器、电路、总线、模块、管脚或其它类型的通信接口。当该通信装置1100为芯片类的装置或者电路时，该装置1100中的通信接口1130也可以是输入输出电路，可以输入信息(或称，接收信息)和输出信息(或称，发送信息)，处理器为集成的处理器或者微处理器或者集成电路或者逻辑电路，处理器可以根据输入信息确定输出信息。
本公开中的耦合是装置、单元或模块之间的间接耦合或通信连接，可以是电性，机械或其它的形式，用于装置、单元或模块之间的信息交互。处理器1110可以和存储器1120、通信接口1130协同操作。本公开中不限定上述处理器1110、存储器1120以及通信接口1130之间的具体连接介质。
可选的,参见图11,所述处理器1110、所述存储器1120以及所述通信接口1130之间通过总线1140相互连接。所述总线1140可以是外设部件互连标准(peripheral component interconnect,PCI)总线或扩展工业标准结构(extended industry standard architecture,EISA)总线等。所述总线可以分为地址总线、数据总线、控制总线等。为便于表示,图11中仅用一条粗线表示,但并不表示仅有一根总线或一种类型的总线。
在本公开中,处理器可以是通用处理器、数字信号处理器、专用集成电路、现场可编程门阵列或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件,可以实现或者执行本公开中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者任何常规的处理器等。结合本公开所公开的方法的步骤可以直接体现为硬件处理器执行完成,或者用处理器中的硬件及软件模块组合执行完成。
在本公开中,存储器可以是非易失性存储器,比如硬盘(hard disk drive,HDD)或固态硬盘(solid-state drive,SSD)等,还可以是易失性存储器(volatile memory),例如随机存取存储器(random-access memory,RAM)。存储器是能够用于携带或存储具有指令或数据结构形式的期望的程序代码并能够由计算机存取的任何其他介质,但不限于此。本公开中的存储器还可以是电路或者其它任意能够实现存储功能的装置,用于存储程序指令和/或数据。
在一种可能的实施方式中,该通信装置1100可以应用于接入网设备,具体通信装置1100可以是接入网设备,也可以是能够支持接入网设备,实现上述涉及的任一示例中接入网设备的功能的装置。存储器1120保存实现上述任一示例中的接入网设备的功能的计算机程序(或指令)和/或数据。处理器1110可执行存储器1120存储的计算机程序,完成上述任一示例中接入网设备执行的方法。应用于接入网设备,该通信装置1100中的通信接口可用于与终端设备进行交互,向终端设备发送信息或者接收来自终端设备的信息。
在另一种可能的实施方式中,该通信装置1100可以应用于终端设备,具体通信装置1100可以是终端设备,也可以是能够支持终端设备,实现上述涉及的任一示例中终端设备的功能的装置。存储器1120保存实现上述任一示例中的终端设备的功能的计算机程序(或指令)和/或数据。处理器1110可执行存储器1120存储的计算机程序,完成上述任一示例中终端设备执行的方法。应用于终端设备,该通信装置1100中的通信接口可用于与接入网设备进行交互,向接入网设备发送信息或者接收来自接入网设备的信息。
由于本示例提供的通信装置1100可应用于接入网设备,完成上述接入网设备执行的方法,或者应用于终端设备,完成终端设备执行的方法。因此其所能获得的技术效果可参考上述方法示例,在此不再赘述。
基于以上示例,本公开提供了一种通信系统,包括接入网设备和终端设备,其中,所述接入网设备和终端设备可以实现图5~图9所示的示例中所提供的通信方法。
本公开提供的技术方案可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时，可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时，全部或部分地产生按照本公开所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、终端设备、接入网设备或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中，或者从一个计算机可读存储介质向另一个计算机可读存储介质传输，例如，所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(digital subscriber line，DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机可以存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质(例如，软盘、硬盘、磁带)、光介质(例如，数字视频光盘(digital video disc，DVD))、或者半导体介质等。
在本公开中,在无逻辑矛盾的前提下,各示例之间可以相互引用,例如方法实施例之间的方法和/或术语可以相互引用,例如装置实施例之间的功能和/或术语可以相互引用,例如装置示例和方法示例之间的功能和/或术语可以相互引用。
显然,本领域的技术人员可以对本公开进行各种改动和变型而不脱离本公开的范围。这样,倘若本公开的这些修改和变型属于本公开权利要求及其等同技术的范围之内,则本公开也意图包含这些改动和变型在内。

Claims (51)

  1. 一种通信方法,其特征在于,包括:
    利用第一编码器和输入所述第一编码器的N个下行传输层的信道信息,确定N个第一信道状态指示信息;其中,N为大于1的正整数;
    利用第二编码器和输入所述第二编码器的所述N个第一信道状态指示信息,确定第二信道状态指示信息;其中,所述第二信道状态指示信息对应所述N个下行传输层,所述第二信道状态指示信息对应的序列长度小于所述N个第一信道状态指示信息对应的序列长度之和;
    发送所述第二信道状态指示信息。
  2. 如权利要求1所述的方法,其特征在于,还包括:
    发送用于指示秩的信息,所述N的取值等于所述秩的取值,所述第二编码器的结构,与所述第二编码器匹配的第二解码器的结构,或,所述第二编码器所属的第二自编码器的结构中的一项或多项对应于所述N的取值。
  3. 如权利要求2所述的方法,其特征在于,所述第一编码器的结构,与所述第一编码器匹配的第一解码器的结构,或,所述第一编码器所属的第一自编码器的结构中的一项或多项也对应于所述N的取值。
  4. 如权利要求1所述的方法,其特征在于,还包括:
    发送第一信息,所述第一信息用于指示如下的一个或多个:所述第一编码器、与所述第一编码器匹配的第一解码器、所述第一编码器所属的第一自编码器AE。
  5. 如权利要求1或4所述的方法,其特征在于,还包括:
    发送第二信息,所述第二信息用于指示如下的一个或多个:所述第二编码器、与所述第二编码器匹配的第二解码器、所述第二编码器所属的第二自编码器AE。
  6. 如权利要求1-5任一项所述的方法,其特征在于,所述第一编码器包括N个第一人工智能AI模型,所述N个下行传输层的信道信息分别输入所述N个第一人工智能AI模型;或者,
    所述第一编码器包括1个第一人工智能AI模型,所述N个下行传输层的信道信息串行或并行输入所述1个第一人工智能AI模型;或者,
    所述第一编码器包括M个第一人工智能AI模型,M为大于1且小于N的整数,所述N个下行传输层的信道信息中的部分下行传输层的信道信息串行或并行输入所述M个第一人工智能AI模型中的1个第一人工智能AI模型。
  7. 如权利要求1-6任一项所述的方法,其特征在于,所述第二编码器包括1个第二人工智能AI模型,所述1个第二人工智能AI模型的输入包括所述N个第一信道状态指示信息,所述1个第二人工智能AI模型的输出包括所述第二信道状态指示信息。
  8. 如权利要求1-6任一项所述的方法,其特征在于,N大于2,所述第二编码器包括N-1个第二人工智能AI模型;
    其中,所述N-1个第二人工智能AI模型中的第1个第二人工智能AI模型的输入包括所述N个第一信道状态指示信息中的第1个第一信道状态指示信息和第2个第一信道状态指示信息,所述第1个第二人工智能AI模型的输出包括第1个第四信道状态指示信息;
    所述N-1个第二人工智能AI模型中的第i个第二人工智能AI模型的输入包括第i-1个第二人工智能AI模型输出的第i-1个第四信道状态指示信息和所述N个第一信道状态指示信息中第i+1个第一信道状态指示信息,所述第i个第二人工智能AI模型的输出包括第i个第四信道状态指示信息,2≤i≤N-1,i为正整数;其中,所述第N-1个第二人工智能AI模型输出的第N-1个第四信道状态指示信息为所述第二信道状态指示信息。
  9. 如权利要求1-6任一项所述的方法，其特征在于，N大于2，所述第二编码器包括K个第二人工智能AI模型，K为大于1且小于N的整数，所述N个第一信道状态指示信息中的部分第一信道状态指示信息并行输入所述K个第二人工智能AI模型中的1个第二人工智能AI模型。
  10. 如权利要求1-6任一项所述的方法,其特征在于,所述利用第二编码器和输入所述第二编码器的所述N个第一信道状态指示信息,确定第二信道状态指示信息,包括:
    根据所述第二编码器对所述N个第一信道状态指示信息进行处理,得到第三信道状态指示信息;
    对所述第三信道状态指示信息进行量化处理,得到所述第二信道状态指示信息。
  11. 如权利要求1-10任一项所述的方法,其特征在于,还包括:
    获取下行参考信号;
    根据所述下行参考信号,确定所述N的取值以及所述N个下行传输层的信道信息。
  12. 如权利要求1-11任一项所述的方法，其特征在于，所述第二信道状态指示信息的序列长度的范围对应于以下中的一项或多项：所述第二编码器的结构、与所述第二编码器匹配的第二解码器的结构、所述第二编码器所属的第二自编码器的结构、所述N的取值。
  13. 一种通信方法,其特征在于,包括:
    获取第二信道状态指示信息;
    利用第二解码器和所述第二信道状态指示信息,确定N个第一信道状态指示信息,所述N个第一信道状态指示信息对应的序列长度之和大于所述第二信道状态指示信息对应的序列长度,N为大于1的正整数;
    利用第一解码器和输入所述第一解码器的所述N个第一信道状态指示信息,确定N个下行传输层的信道信息。
  14. 如权利要求13所述的方法,其特征在于,还包括:
    获取用于指示秩的信息,所述N的取值等于所述秩的取值,所述第二解码器的结构,与所述第二解码器匹配的第二编码器的结构,或,所述第二编码器所属的第二自编码器的结构中的一项或多项对应于所述N的取值。
  15. 如权利要求14所述的方法,其特征在于,所述第一解码器的结构,与所述第一解码器匹配的第一编码器的结构,或,所述第一编码器所属的第一自编码器的结构中的一项或多项也对应于所述N的取值。
  16. 如权利要求13所述的方法,其特征在于,还包括:
    获取第一信息,所述第一信息用于指示如下的一个或多个:第一编码器、与所述第一编码器匹配的所述第一解码器、所述第一编码器所属的第一自编码器AE。
  17. 如权利要求13或14所述的方法,其特征在于,还包括:获取第二信息,所述第二信息用于指示如下的一个或多个:第二编码器、与所述第二编码器匹配的所述第二解码器、所述第二编码器所属的第二自编码器AE。
  18. 如权利要求13-17任一项所述的方法,其特征在于,所述第二解码器包括1个第三人工智能AI模型,所述1个第三人工智能AI模型的输入包括所述第二信道状态指示信息,所述1个第三人工智能AI模型的输出包括所述N个第一信道状态指示信息。
  19. 如权利要求13-17任一项所述的方法，其特征在于，N大于2，所述第二解码器包括N-1个第三人工智能AI模型，所述N-1个第三人工智能AI模型中的第N-1个第三人工智能AI模型的输入包括第二信道状态指示信息，输出包括第N-2个第四信道状态指示信息和第N个第一信道状态指示信息，第N-i个第三人工智能AI模型的输入包括第N-i个第四信道状态指示信息，所述第N-i个第三人工智能AI模型的输出包括所述N个第一信道状态指示信息中的第N-i+1个第一信道状态指示信息和第N-i-1个第四信道状态指示信息，第1个第三人工智能AI模型的输入包括第1个第四信道状态指示信息，所述第1个第三人工智能AI模型的输出包括所述N个第一信道状态指示信息中的第1个第一信道状态指示信息和第2个第一信道状态指示信息；其中，i取遍2至N-2的正整数，所述第二信道状态指示信息为第N-1个第四信道状态指示信息。
  20. 如权利要求13-17任一项所述的方法,其特征在于,N大于2,第二解码器包括K个第三人工智能AI模型,K为大于1且小于N-1的正整数,所述K个第三AI模型中部分第三AI模型用于恢复所述N个第一信道状态指示信息中的至少2个第一信道状态指示信息。
  21. 如权利要求13-17任一项所述的方法,其特征在于,所述利用第二解码器和所述第二信道状态指示信息,确定N个第一信道状态指示信息,包括:
    对所述第二信道状态指示信息进行解量化处理,得到第三信道状态指示信息;
    利用所述第二解码器和输入所述第二解码器的所述第三信道状态指示信息,确定所述N个第一信道状态指示信息。
  22. 如权利要求13-21任一项所述的方法,其特征在于,
    第一解码器包括N个第四人工智能AI模型,所述N个第一信道状态指示信息分别输入所述N个第四人工智能AI模型;或者,
    所述第一解码器包括1个第四人工智能AI模型,所述N个第一信道状态指示信息串行或并行输入所述1个第四人工智能AI模型;或者,
    所述第一解码器包括M个第四人工智能AI模型，M为大于1且小于N的整数，所述N个第一信道状态指示信息的部分第一信道状态指示信息串行或并行输入所述M个第四人工智能AI模型中的1个第四人工智能AI模型。
  23. 如权利要求13-22任一项所述的方法，其特征在于，所述第二信道状态指示信息的序列长度的范围对应于以下中的一项或多项：所述第二编码器的结构、与所述第二编码器匹配的第二解码器的结构、所述第二编码器所属的第二自编码器的结构、所述N的取值。
  24. 一种通信装置,其特征在于,包括:
    处理模块,用于利用第一编码器和输入所述第一编码器的N个下行传输层的信道信息,确定N个第一信道状态指示信息;其中,N为大于1的正整数;
    所述处理模块,还用于利用第二编码器和输入所述第二编码器的所述N个第一信道状态指示信息,确定第二信道状态指示信息;其中,所述第二信道状态指示信息对应所述N个下行传输层,所述第二信道状态指示信息对应的序列长度小于所述N个第一信道状态指示信息对应的序列长度之和;
    通信模块,用于发送所述第二信道状态指示信息。
  25. 如权利要求24所述的装置,其特征在于,所述通信模块,还用于:
    发送用于指示秩的信息,所述N的取值等于所述秩的取值,所述第二编码器的结构,与所述第二编码器匹配的第二解码器的结构,或,所述第二编码器所属的第二自编码器的结构中的一项或多项对应于所述N的取值。
  26. 如权利要求25所述的装置,其特征在于,所述第一编码器的结构,与所述第一编码器匹配的第一解码器的结构,或,所述第一编码器所属的第一自编码器的结构中的一项或多项也对应于所述N的取值。
  27. 如权利要求24所述的装置,其特征在于,所述通信模块,还用于:
    发送第一信息,所述第一信息用于指示如下的一个或多个:所述第一编码器、与所述第一编码器匹配的第一解码器、所述第一编码器所属的第一自编码器AE。
  28. 如权利要求24或27所述的装置,其特征在于,所述通信模块,还用于:
    发送第二信息,所述第二信息用于指示如下的一个或多个:所述第二编码器、与所述第二编码器匹配的第二解码器、所述第二编码器所属的第二自编码器AE。
  29. 如权利要求24-28任一项所述的装置,其特征在于,所述第一编码器包括N个第一人工智能AI模型,所述N个下行传输层的信道信息分别输入所述N个第一人工智能AI模型;或者,
    所述第一编码器包括1个第一人工智能AI模型,所述N个下行传输层的信道信息串行或并行输入所述1个第一人工智能AI模型;或者,
    所述第一编码器包括M个第一人工智能AI模型,M为大于1且小于N的整数,所述N个下行传输层的信道信息中的部分下行传输层的信道信息串行或并行输入所述M个第一人工智能AI模型中的1个第一人工智能AI模型。
  30. 如权利要求24-29任一项所述的装置,其特征在于,所述第二编码器包括1个第二人工智能AI模型,所述1个第二人工智能AI模型的输入包括所述N个第一信道状态指示信息,所述1个第二人工智能AI模型的输出包括所述第二信道状态指示信息。
  31. 如权利要求24-30任一项所述的装置,其特征在于,N大于2,所述第二编码器包括N-1个第二人工智能AI模型;
    其中,所述N-1个第二人工智能AI模型中的第1个第二人工智能AI模型的输入包括所述N个第一信道状态指示信息中的第1个第一信道状态指示信息和第2个第一信道状态指示信息,所述第1个第二人工智能AI模型的输出包括第1个第四信道状态指示信息;
    所述N-1个第二人工智能AI模型中的第i个第二人工智能AI模型的输入包括第i-1个第二人工智能AI模型输出的第i-1个第四信道状态指示信息和所述N个第一信道状态指示信息中第i+1个第一信道状态指示信息，所述第i个第二人工智能AI模型的输出包括第i个第四信道状态指示信息，2≤i≤N-1，i为正整数；其中，所述第N-1个第二人工智能AI模型输出的第N-1个第四信道状态指示信息为所述第二信道状态指示信息。
  32. 如权利要求24-29任一项所述的装置，其特征在于，N大于2，所述第二编码器包括K个第二人工智能AI模型，K为大于1且小于N的整数，所述N个第一信道状态指示信息中的部分第一信道状态指示信息并行输入所述K个第二人工智能AI模型中的1个第二人工智能AI模型。
  33. 如权利要求24-29任一项所述的装置,其特征在于,所述利用第二编码器和输入所述第二编码器的所述N个第一信道状态指示信息,确定第二信道状态指示信息,包括:
    根据所述第二编码器对所述N个第一信道状态指示信息进行处理,得到第三信道状态指示信息;
    对所述第三信道状态指示信息进行量化处理,得到所述第二信道状态指示信息。
  34. 如权利要求24-33任一项所述的装置,其特征在于,
    所述通信模块,还用于获取下行参考信号;
    所述处理模块,还用于根据所述下行参考信号,确定所述N的取值以及所述N个下行传输层的信道信息。
  35. 如权利要求24-34任一项所述的装置，其特征在于，所述第二信道状态指示信息的序列长度的范围对应于以下中的一项或多项：所述第二编码器的结构、与所述第二编码器匹配的第二解码器的结构、所述第二编码器所属的第二自编码器的结构、所述N的取值。
  36. 一种通信装置,其特征在于,包括:
    通信模块,用于获取第二信道状态指示信息;
    处理模块,用于利用第二解码器和所述第二信道状态指示信息,确定N个第一信道状态指示信息,所述N个第一信道状态指示信息对应的序列长度之和大于所述第二信道状态指示信息对应的序列长度,N为大于1的正整数;
    所述处理模块,还用于利用第一解码器和输入所述第一解码器的所述N个第一信道状态指示信息,确定N个下行传输层的信道信息。
  37. 如权利要求36所述的装置,其特征在于,所述通信模块,还用于:
    获取用于指示秩的信息,所述N的取值等于所述秩的取值,所述第二解码器的结构,与所述第二解码器匹配的第二编码器的结构,或,所述第二编码器所属的第二自编码器的结构中的一项或多项对应于所述N的取值。
  38. 如权利要求37所述的装置,其特征在于,所述第一解码器的结构,与所述第一解码器匹配的第一编码器的结构,或,所述第一编码器所属的第一自编码器的结构中的一项或多项也对应于所述N的取值。
  39. 如权利要求36所述的装置,其特征在于,所述通信模块,还用于:
    获取第一信息,所述第一信息用于指示如下的一个或多个:第一编码器、与所述第一编码器匹配的所述第一解码器、所述第一编码器所属的第一自编码器AE。
  40. 如权利要求36或37所述的装置，其特征在于，所述通信模块还用于：获取第二信息，所述第二信息用于指示如下的一个或多个：第二编码器、与所述第二编码器匹配的所述第二解码器、所述第二编码器所属的第二自编码器AE。
  41. 如权利要求36-40任一项所述的装置,其特征在于,所述第二解码器包括1个第三人工智能AI模型,所述1个第三人工智能AI模型的输入包括所述第二信道状态指示信息,所述1个第三人工智能AI模型的输出包括所述N个第一信道状态指示信息。
  42. 如权利要求36-40任一项所述的装置，其特征在于，N大于2，所述第二解码器包括N-1个第三人工智能AI模型，所述N-1个第三人工智能AI模型中的第N-1个第三人工智能AI模型的输入包括第二信道状态指示信息，输出包括第N-2个第四信道状态指示信息和第N个第一信道状态指示信息，第N-i个第三人工智能AI模型的输入包括第N-i个第四信道状态指示信息，所述第N-i个第三人工智能AI模型的输出包括所述N个第一信道状态指示信息中的第N-i+1个第一信道状态指示信息和第N-i-1个第四信道状态指示信息，第1个第三人工智能AI模型的输入包括第1个第四信道状态指示信息，所述第1个第三人工智能AI模型的输出包括所述N个第一信道状态指示信息中的第1个第一信道状态指示信息和第2个第一信道状态指示信息；其中，i取遍2至N-2的正整数，所述第二信道状态指示信息为第N-1个第四信道状态指示信息。
  43. 如权利要求36-40任一项所述的装置,其特征在于,N大于2,第二解码器包括K个第三人工智能AI模型,K为大于1且小于N-1的正整数,所述K个第三AI模型中部分第三AI模型用于恢复所述N个第一信道状态指示信息中的至少2个第一信道状态指示信息。
  44. 如权利要求36-40任一项所述的装置,其特征在于,所述处理模块,具体用于:
    对所述第二信道状态指示信息进行解量化处理,得到第三信道状态指示信息;
    利用所述第二解码器和输入所述第二解码器的所述第三信道状态指示信息,确定所述N个第一信道状态指示信息。
  45. 如权利要求36-44任一项所述的装置,其特征在于,
    第一解码器包括N个第四人工智能AI模型,所述N个第一信道状态指示信息分别输入所述N个第四人工智能AI模型;或者,
    所述第一解码器包括1个第四人工智能AI模型,所述N个第一信道状态指示信息串行或并行输入所述1个第四人工智能AI模型;或者,
    所述第一解码器包括M个第四人工智能AI模型，M为大于1且小于N的整数，所述N个第一信道状态指示信息的部分第一信道状态指示信息串行或并行输入所述M个第四人工智能AI模型中的1个第四人工智能AI模型。
  46. 如权利要求36-45任一项所述的装置，其特征在于，所述第二信道状态指示信息的序列长度的范围对应于以下中的一项或多项：所述第二编码器的结构、与所述第二编码器匹配的第二解码器的结构、所述第二编码器所属的第二自编码器的结构、所述N的取值。
  47. 一种通信装置,其特征在于,包括:
    处理器,所述处理器和存储器耦合,所述处理器用于调用所述存储器存储的计算机程序指令,以执行如权利要求1-12任一项所述的方法。
  48. 一种通信装置,其特征在于,包括:
    处理器,所述处理器和存储器耦合,所述处理器用于调用所述存储器存储的计算机程序指令,以执行如权利要求13-23任一项所述的方法。
  49. 一种通信系统,其特征在于,包括权利要求24-35中任一项所述的通信装置,以及权利要求36-46中任一项所述的通信装置。
  50. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质上存储有指令,当所述指令在计算机上运行时,使得计算机执行如权利要求1-12任一项所述的方法或者如权利要求13-23任一项所述的方法。
  51. 一种计算机程序产品,其特征在于,包括指令,当所述指令在计算机上运行时,使得计算机执行如权利要求1-12任一项所述的方法或者如权利要求13-23任一项所述的方法。
PCT/CN2023/105001 2022-07-06 2023-06-30 一种通信方法及装置 WO2024008004A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210800531.3 2022-07-06
CN202210800531.3A CN117411526A (zh) 2022-07-06 2022-07-06 一种通信方法及装置

Publications (1)

Publication Number Publication Date
WO2024008004A1 true WO2024008004A1 (zh) 2024-01-11

Family

ID=89454790

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/105001 WO2024008004A1 (zh) 2022-07-06 2023-06-30 一种通信方法及装置

Country Status (2)

Country Link
CN (1) CN117411526A (zh)
WO (1) WO2024008004A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110350958A (zh) * 2019-06-13 2019-10-18 东南大学 一种基于神经网络的大规模mimo的csi多倍率压缩反馈方法
CN113796020A (zh) * 2019-03-08 2021-12-14 弗劳恩霍夫应用研究促进协会 用于无线通信系统中用于基于多普勒延迟码本的预编码的csi报告和码本结构
CN116193504A (zh) * 2023-04-18 2023-05-30 南京云程半导体有限公司 一种信道状态信息的上报方法、电子设备及存储介质
CN116260494A (zh) * 2021-12-10 2023-06-13 中兴通讯股份有限公司 反馈、获取及训练方法、终端、基站、电子设备和介质


Also Published As

Publication number Publication date
CN117411526A (zh) 2024-01-16


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23834771

Country of ref document: EP

Kind code of ref document: A1