WO2024008004A1 - Communication method and apparatus - Google Patents

Communication method and apparatus

Info

Publication number
WO2024008004A1
Authority
WO
WIPO (PCT)
Prior art keywords
indication information
channel status
status indication
encoder
model
Prior art date
Application number
PCT/CN2023/105001
Other languages
English (en)
Chinese (zh)
Inventor
柴晓萌
孙琰
杭海存
陈家璇
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Application filed by 华为技术有限公司
Publication of WO2024008004A1

Classifications

    • H (Electricity) > H04 (Electric communication technique) > H04B (Transmission) > H04B7/00 (Radio transmission systems, i.e. using radiation field) > H04B7/02 (Diversity systems; multi-antenna systems):
    • H04B7/0626 Feedback content: channel coefficients, e.g. channel state information [CSI]
    • H04B7/0413 MIMO systems
    • H04B7/06 Using two or more spaced independent antennas at the transmitting station
    • H04B7/0636 Feedback format

Definitions

  • the present disclosure relates to the field of communication technology, and in particular, to a communication method and device.
  • wireless communication networks such as mobile communication networks
  • the services supported by the network are becoming more and more diverse, and therefore the requirements that need to be met are becoming more and more diverse.
  • the network needs to be able to support ultra-high speeds, ultra-low latency, and/or ultra-large connections.
  • This feature makes network planning, network configuration, and/or resource scheduling increasingly complex.
  • the functions of the network are becoming more and more powerful, for example supporting higher frequency spectrum, high-order multiple-input multiple-output (MIMO) technology, beamforming, and/or beam management. These new technologies have made network energy saving a hot research topic.
  • These new requirements, new scenarios, and new features have brought unprecedented challenges to network planning, O&M, and efficient operations.
  • artificial intelligence technology can be introduced into wireless communication networks to achieve network intelligence. Based on this, how to effectively implement artificial intelligence in the network is a question worth studying.
  • the present disclosure provides a communication method and device, with a view to reducing the overhead of channel state information (CSI) feedback through artificial intelligence.
  • the present disclosure provides a communication method, including: determining N pieces of first channel status indication information by using a first encoder and channel information of N downlink transmission layers input to the first encoder, where N is a positive integer greater than 1; determining second channel status indication information by using a second encoder and the N pieces of first channel status indication information input to the second encoder, where the second channel status indication information corresponds to the N downlink transmission layers and the sequence length corresponding to the second channel status indication information is less than the sum of the sequence lengths corresponding to the N pieces of first channel status indication information; and sending the second channel status indication information.
  • In this design, the downlink channel information is compressed twice through a two-level encoder, which reduces the resources occupied by information that is common to the channel state information (CSI) of the multiple downlink transmission layers and can therefore reduce the feedback overhead of CSI.
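  • As a concrete illustration of this two-level compression on the terminal side, the following is a minimal sketch that uses fixed random linear projections as stand-ins for the trained first and second encoders; the function names, dimensions, and projection-based models are illustrative assumptions and are not part of the disclosure.

```python
import numpy as np

rng = np.random.default_rng(0)

def first_encoder(layer_channel: np.ndarray, out_len: int = 64) -> np.ndarray:
    """Stand-in for the first encoder: compress one layer's channel information
    into one piece of first channel status indication information."""
    w = rng.standard_normal((out_len, layer_channel.size))
    return w @ layer_channel.ravel()

def second_encoder(first_csi_list: list, out_len: int = 96) -> np.ndarray:
    """Stand-in for the second encoder: jointly compress the N pieces of first
    channel status indication information into one shorter sequence."""
    stacked = np.concatenate(first_csi_list)
    w = rng.standard_normal((out_len, stacked.size))
    return w @ stacked

# Channel information of N = 2 downlink transmission layers (e.g. 32 ports x 13 subbands each).
N = 2
layers = [rng.standard_normal((32, 13)) for _ in range(N)]

first_csi = [first_encoder(h) for h in layers]            # N pieces of first indication
second_csi = second_encoder(first_csi)                    # single second indication

# The second indication is shorter than the sum of the N first indications.
assert second_csi.size < sum(x.size for x in first_csi)
print(len(first_csi), second_csi.shape)
```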
  • the method further includes: sending information indicating the rank, and the value of N is equal to the value of the rank.
  • If the value of the rank is greater than 1, it can implicitly indicate that the downlink channel information is compressed twice.
  • the information used to indicate the rank may be a rank indicator (rank indicator, RI).
  • the structure of the second encoder, the structure of the second decoder matching the second encoder, or the structure of the second autoencoder to which the second encoder belongs corresponds to the value of N.
  • One or more of the structure of the first encoder, the structure of the first decoder matching the first encoder, or the structure of the first autoencoder to which the first encoder belongs also corresponds to the value of N.
  • the aforementioned information indicating the rank may also be used by a device that receives the information indicating the rank to determine the structure of the second decoder and/or the first decoder.
  • the structure of the second encoder, the structure of the second decoder matching the second encoder, or the structure of the second autoencoder to which the second encoder belongs, and/or the value of N, corresponds to the sequence length of the second channel status indication information. That is, the sequence length of the second channel status indication information may indicate one or more of the following: the structure of the second encoder, the structure of the second decoder matching the second encoder, or the structure of the second autoencoder to which the second encoder belongs, and/or the value of N.
  • Similarly, the sequence length of the second channel status indication information may indicate one or more of the following: the structure of the first encoder, the structure of the first decoder matching the first encoder, or the structure of the first autoencoder to which the first encoder belongs, and/or the value of N.
  • the method further includes: sending second information, where the second information is used to indicate one or more of the following: the second encoder, a second decoder matching the second encoder, or a second autoencoder (AE) to which the second encoder belongs.
  • the information receiver can quickly determine the second decoder matching the second encoder, thereby improving decoding efficiency.
  • the method further includes: sending first information, the first information being used to indicate one or more of the following: the first encoder, the first decoder matching the first encoder, or the first autoencoder AE to which the first encoder belongs.
  • the information receiver can quickly determine the first decoder that matches the first encoder, thereby improving decoding efficiency.
  • the structure, input and/or output of the first encoder will be described in detail below.
  • the first encoder may have compression and quantization functions, or the first encoder may have compression but not quantization functions.
  • the first encoder includes N first artificial intelligence (artificial intelligence, AI) models, and the channel information of the N downlink transmission layers is input to the N first AI models respectively.
  • Each of the N first AI models outputs one piece of first channel status indication information. It can be understood that one first AI model in the first encoder is used to process channel information of only one downlink transmission layer.
  • Each first AI model is independent of each other and processed in parallel, which can improve processing efficiency.
  • the first encoder includes a first AI model, and the channel information of the N downlink transmission layers is input to the first AI model serially or in parallel.
  • the first AI model outputs N pieces of first channel status indication information serially or in parallel.
  • the first encoder includes M first AI models, where M is an integer greater than 1 and less than N, and part of the channel information of the N downlink transmission layers is input, serially or in parallel, into one first AI model among the M first AI models.
  • That first AI model among the M first AI models correspondingly outputs, serially or in parallel, the first channel status indication information corresponding to the downlink transmission layers whose channel information it receives.
  • one first AI model in the first encoder can be used to process channel information of multiple downlink transmission layers, which can reduce the number of AI models and reduce the storage resources of the models.
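  • The three structural options above (N per-layer first AI models, one shared first AI model, or M shared first AI models) can be pictured with the dispatching sketch below; the models are hypothetical random-projection placeholders, and the layer-to-model mapping shown is just one possible assignment.

```python
import numpy as np
from typing import Callable, List

Model = Callable[[np.ndarray], np.ndarray]

def run_first_encoder(layer_infos: List[np.ndarray], models: List[Model]) -> List[np.ndarray]:
    """Map the channel information of N downlink transmission layers onto the available
    first AI models: 1 shared model, M (1 < M < N) models, or N per-layer models."""
    m = len(models)
    # Layer j is handled by model j % m: with m == 1 all layers share one model,
    # with m == N each layer has its own model, and with 1 < m < N some models
    # process the channel information of several layers.
    return [models[j % m](h) for j, h in enumerate(layer_infos)]

def make_model(out_len: int = 64, seed: int = 0) -> Model:
    """Hypothetical first AI model: a fixed random projection."""
    w = np.random.default_rng(seed).standard_normal((out_len, 32 * 13))
    return lambda h: w @ h.ravel()

layers = [np.random.default_rng(j).standard_normal((32, 13)) for j in range(4)]  # N = 4
for num_models in (1, 2, 4):  # one shared model, M = 2 models, N per-layer models
    models = [make_model(seed=k) for k in range(num_models)]
    outputs = run_first_encoder(layers, models)
    print(num_models, [o.shape for o in outputs])
```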
  • the second encoder has compression and quantization functions.
  • the N pieces of first channel status indication information are input to the second encoder, and the output of the second encoder includes the second channel status indication information.
  • the structure of the second encoder can be understood with reference to the following possible examples.
  • the second encoder includes a second AI model
  • the input of the second AI model includes the N pieces of first channel status indication information.
  • the output of the second AI model includes the second channel status indication information.
  • one second AI model is used to process multiple first channel status indication information, which can reduce the storage resources of the model.
  • N is greater than 2
  • the second encoder includes N-1 second AI models. The input of the 1st second AI model among the N-1 second AI models includes the 1st first channel status indication information and the 2nd first channel status indication information among the N pieces of first channel status indication information, and its output includes the 1st fourth channel status indication information.
  • the input of the i-th second AI model includes the (i-1)-th fourth channel status indication information and the (i+1)-th first channel status indication information, and the output of the i-th second AI model includes the i-th fourth channel status indication information, where 2 ≤ i ≤ N-1 and i is a positive integer; the (N-1)-th fourth channel status indication information output by the (N-1)-th second AI model is the second channel status indication information.
  • Such a design implements hierarchical compression, and multiple second AI models, each with 2 input ports and 1 output port, can be used to implement a second encoder for any N greater than 2.
  • the multiple second AI models may be the same or different. Taking multiple identical second AI models as an example, the second encoder can be implemented by training one second AI model and calling it multiple times, thereby reducing the feedback overhead of CSI and reducing the number of second AI models that need to be trained.
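  • The hierarchical (pairwise) second encoder described above can be sketched as a chain of N-1 calls to one two-input, one-output model; pairwise_model below is an assumed placeholder for such a trained second AI model, reused at every step.

```python
import numpy as np

rng = np.random.default_rng(1)

def pairwise_model(a: np.ndarray, b: np.ndarray, out_len: int = 64) -> np.ndarray:
    """Hypothetical two-input second AI model: compress two sequences into one."""
    x = np.concatenate([a, b])
    w = rng.standard_normal((out_len, x.size))
    return w @ x

def second_encoder_chain(first_csi: list) -> np.ndarray:
    """The 1st call combines the 1st and 2nd first indications; the i-th call combines
    the (i-1)-th intermediate ('fourth') indication with the (i+1)-th first indication."""
    fourth = pairwise_model(first_csi[0], first_csi[1])   # 1st fourth indication
    for i in range(2, len(first_csi)):                    # i = 2 .. N-1
        fourth = pairwise_model(fourth, first_csi[i])     # i-th fourth indication
    return fourth                                         # (N-1)-th fourth = second indication

first_csi = [rng.standard_normal(64) for _ in range(4)]   # N = 4 first indications
second_csi = second_encoder_chain(first_csi)
print(second_csi.shape)                                   # (64,), shorter than 4 x 64
```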
  • N is greater than 2
  • the second encoder includes K second AI models
  • K is an integer greater than 1 and less than N
  • part of the N pieces of first channel status indication information is input in parallel into one second AI model among the K second AI models.
  • K is 2.
  • Such a design implements hierarchical compression. Multiple second AI models, each with 1 output port and with a number of input ports that is an integer from 2 to N-1, can be used to implement a second encoder for any N greater than 2.
  • In this way, the second encoder can be implemented by training several second AI models with different numbers of input ports, thereby reducing the feedback overhead of CSI and reducing the number of second AI models that need to be trained; it can also reduce the number of second AI models used to implement the second encoder and increase the encoding speed.
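  • The grouped variant with K multi-input second AI models could be wired in several ways; the sketch below shows one assumed K = 2 arrangement for N = 5 (the exact grouping and the multi-input model are illustrative placeholders, not mandated by the disclosure).

```python
import numpy as np

rng = np.random.default_rng(2)

def multi_input_model(inputs: list, out_len: int = 64) -> np.ndarray:
    """Hypothetical second AI model with several input ports and one output port."""
    x = np.concatenate(inputs)
    w = rng.standard_normal((out_len, x.size))
    return w @ x

# Assumed K = 2 arrangement for N = 5: the first model compresses first indications 1-3
# into an intermediate sequence, and the second model combines that intermediate with
# first indications 4-5 to produce the second channel status indication information.
first_csi = [rng.standard_normal(64) for _ in range(5)]         # N = 5
intermediate = multi_input_model(first_csi[0:3])                # 3 input ports
second_csi = multi_input_model([intermediate] + first_csi[3:5]) # 3 input ports
print(second_csi.shape)                                         # (64,), shorter than 5 x 64
```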
  • In another design, the second encoder has a compression function but not a quantization function. The N pieces of first channel status indication information are input to the second encoder, and the output of the second encoder includes third channel status indication information. The third channel status indication information is then quantized to obtain the second channel status indication information.
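  • When the second encoder compresses without quantizing, the quantization of the third indication into the second indication (and the matching dequantization at the receiver) could be a simple uniform scalar quantizer, as sketched below; the bit width and clipping range are illustrative assumptions.

```python
import numpy as np

def quantize(x: np.ndarray, bits: int = 4, lo: float = -4.0, hi: float = 4.0) -> np.ndarray:
    """Uniform scalar quantization: third indication -> second indication (integer levels)."""
    levels = 2 ** bits - 1
    x = np.clip(x, lo, hi)
    return np.round((x - lo) / (hi - lo) * levels).astype(np.int64)

def dequantize(q: np.ndarray, bits: int = 4, lo: float = -4.0, hi: float = 4.0) -> np.ndarray:
    """Inverse mapping used at the receiver before (or inside) the second decoder."""
    levels = 2 ** bits - 1
    return q.astype(np.float64) / levels * (hi - lo) + lo

third_csi = np.random.default_rng(3).standard_normal(96)
second_csi = quantize(third_csi)                     # what is actually fed back
recovered = dequantize(second_csi)                   # approximation of the third indication
print(float(np.max(np.abs(recovered - third_csi))))  # small quantization error
```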
  • the structure of the second encoder can be understood with reference to the following possible examples.
  • the second encoder includes a second AI model
  • the input of the second AI model includes the N first channel status indication information
  • the output of the second AI model includes the third channel status indication information.
  • one second AI model is used to process multiple first channel status indication information, which can reduce the storage resources of the model.
  • N is greater than 2
  • the second encoder includes N-1 second AI models. The input of the 1st second AI model among the N-1 second AI models includes the 1st first channel status indication information and the 2nd first channel status indication information among the N pieces of first channel status indication information, and its output includes the 1st fourth channel status indication information.
  • the input of the i-th second AI model includes the (i-1)-th fourth channel status indication information and the (i+1)-th first channel status indication information, and the output of the i-th second AI model includes the i-th fourth channel status indication information, where 2 ≤ i ≤ N-1 and i is a positive integer; the (N-1)-th fourth channel status indication information output by the (N-1)-th second AI model is the third channel status indication information.
  • Such a design implements hierarchical compression.
  • Multiple second AI models, each with 2 input ports and 1 output port, can be used to implement a second encoder whose number of input ports (N) is any number greater than 2.
  • The multiple second AI models can be the same or different. Taking multiple identical second AI models as an example, one second AI model can be trained and called multiple times to implement the second encoder, thereby reducing the feedback overhead of CSI and reducing the number of second AI models that need to be trained.
  • In a possible design, when N is greater than 2, the second encoder includes K second AI models, where K is an integer greater than 1 and less than N, and part of the N pieces of first channel status indication information is input in parallel into one second AI model among the K second AI models.
  • Such a design can use multiple second AI models, each with 1 output port and with a number of input ports that is an integer from 2 to N-1, to implement a second encoder for any N greater than 2. In this way, the second encoder can be implemented by training several second AI models with different numbers of input ports, thereby reducing the feedback overhead of CSI and reducing the number of second AI models that need to be trained; it can also reduce the number of second AI models used to implement the second encoder and increase the encoding speed.
  • K is 2.
  • In a possible design, before determining the N pieces of first channel status indication information, the method further includes: acquiring a downlink reference signal; and determining the value of N and the channel information of the N downlink transmission layers based on the downlink reference signal.
  • the present disclosure further provides a communication method, applied to an access network device, including: obtaining second channel status indication information; determining N pieces of first channel status indication information by using a second decoder and the second channel status indication information, where the sum of the sequence lengths corresponding to the N pieces of first channel status indication information is greater than the sequence length corresponding to the second channel status indication information, and N is a positive integer greater than 1; and determining channel information of N downlink transmission layers by using a first decoder and the N pieces of first channel status indication information input to the first decoder.
  • The above design matches the method of the first aspect: the twice-compressed channel information is decompressed twice, thereby recovering the corresponding channel information.
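  • A matching receiver-side sketch is shown below, again with fixed random linear projections standing in for the trained second decoder and first decoder; all names and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def second_decoder(second_csi: np.ndarray, n: int, first_len: int = 64) -> list:
    """Stand-in for the second decoder: expand the second indication back into
    N pieces of first channel status indication information."""
    w = rng.standard_normal((n * first_len, second_csi.size))
    out = w @ second_csi
    return [out[i * first_len:(i + 1) * first_len] for i in range(n)]

def first_decoder(first_csi: np.ndarray, shape=(32, 13)) -> np.ndarray:
    """Stand-in for the first decoder: recover one layer's channel information."""
    w = rng.standard_normal((shape[0] * shape[1], first_csi.size))
    return (w @ first_csi).reshape(shape)

N = 2
second_csi = rng.standard_normal(96)              # received second indication
first_csi = second_decoder(second_csi, N)         # N first indications (total length 128 > 96)
layers = [first_decoder(x) for x in first_csi]    # channel information of the N layers
print([h.shape for h in layers])
```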
  • the structure of the second encoder, the structure of the second decoder matching the second encoder, or the structure of the second autoencoder to which the second encoder belongs, and/or the value of N, corresponds to the sequence length of the second channel status indication information. That is, the sequence length of the second channel status indication information may indicate one or more of the following: the structure of the second encoder, the structure of the second decoder matching the second encoder, or the structure of the second autoencoder to which the second encoder belongs, and/or the value of N.
  • Similarly, the sequence length of the second channel status indication information may indicate one or more of the following: the structure of the first encoder, the structure of the first decoder matching the first encoder, or the structure of the first autoencoder to which the first encoder belongs, and/or the value of N.
  • the method further includes: obtaining information indicating the rank, and the value of N is equal to the value of the rank.
  • One or more of the structure of the second decoder, the structure of the second encoder matching the second decoder, or the structure of the second autoencoder to which the second encoder belongs corresponds to the value of N.
  • One or more of the structure of the first decoder, the structure of the first encoder matching the first decoder, or the structure of the first autoencoder to which the first encoder belongs also corresponds to the value of N.
  • the method further includes: obtaining second information, the second information being used to indicate one or more of the following: the second encoder, a decoder matching the second encoder, The second autoencoder AE to which the second encoder belongs.
  • the information receiver can quickly determine the second decoder matching the second encoder, thereby improving decoding efficiency.
  • the method further includes: obtaining first information, the first information being used to indicate one or more of the following: the first encoder, a decoder matching the first encoder, The first autoencoder AE to which the first encoder belongs.
  • the second decoder has decompression and dequantization functions.
  • the second channel status indication information is input to the second decoder, and the output of the second decoder includes N pieces of first channel status indication information.
  • the structure of the second decoder can be understood with reference to the following possible examples.
  • the second decoder includes a third AI model
  • the input of the third AI model includes the second channel status indication information
  • the output of the third AI model includes the N pieces of first channel status indication information.
  • N is greater than 2
  • the second decoder includes N-1 third AI models
  • the input of the (N-1)-th third AI model among the N-1 third AI models includes the second channel status indication information, and its output includes the (N-2)-th fourth channel status indication information and the N-th first channel status indication information.
  • the input of the (N-i)-th third AI model includes the (N-i)-th fourth channel status indication information, and the output of the (N-i)-th third AI model includes the (N-i+1)-th first channel status indication information among the N pieces of first channel status indication information and the (N-i-1)-th fourth channel status indication information, where i takes a positive integer from 2 to N-2. It can be understood that neither N-i nor N-i-1 can be 0.
  • the input of the 1st third AI model includes the 1st fourth channel status indication information, and the output of the 1st third AI model includes the 1st first channel status indication information and the 2nd first channel status indication information among the N pieces of first channel status indication information.
  • the second channel status indication information is the (N-1)-th fourth channel status indication information.
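  • The hierarchical second decoder can be sketched as running a two-output splitter model N-1 times; splitter_model below is an assumed placeholder for the trained third AI model, and reusing one model at every step is just one possible choice.

```python
import numpy as np

rng = np.random.default_rng(5)

def splitter_model(fourth: np.ndarray, first_len: int = 64):
    """Hypothetical third AI model: split one 'fourth' indication into the
    next-lower 'fourth' indication and one recovered first indication."""
    w = rng.standard_normal((fourth.size + first_len, fourth.size))
    y = w @ fourth
    return y[:fourth.size], y[fourth.size:]        # (lower fourth, recovered first)

def second_decoder_chain(second_csi: np.ndarray, n: int) -> list:
    """Run the (N-1)-th down to the 1st third AI model: each step peels off one
    recovered first indication; the final step yields the 1st and 2nd ones."""
    first_csi = [None] * n
    fourth = second_csi                            # equals the (N-1)-th fourth indication
    for k in range(n - 1, 1, -1):                  # third AI models N-1 .. 2
        fourth, first_csi[k] = splitter_model(fourth)
    first_csi[0], first_csi[1] = splitter_model(fourth)   # 1st third AI model
    return first_csi

second_csi = rng.standard_normal(64)
recovered = second_decoder_chain(second_csi, n=4)  # N = 4
print([x.shape for x in recovered])
```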
  • N is greater than 2
  • the second decoder includes K third AI models
  • K is a positive integer greater than 1 and less than N-1
  • each third AI model among the K third AI models is used to restore at least 2 pieces of first channel status indication information among the N pieces of first channel status indication information.
  • In another design, the second decoder has a decompression function but not a dequantization function. After the second channel status indication information is obtained, it may first be dequantized to obtain the third channel status indication information. The third channel status indication information is then input to the second decoder, and the output of the second decoder includes the N pieces of first channel status indication information.
  • the structure of the second decoder can be understood with reference to the following possible examples.
  • the second decoder includes a third AI model
  • the input of the third AI model includes the third channel status indication information
  • the output of the third AI model includes the N pieces of first channel status indication information.
  • N is greater than 2
  • the second decoder includes N-1 third AI models
  • the input of the (N-i)-th third AI model among the N-1 third AI models includes the (N-i)-th fourth channel status indication information, and the output of the (N-i)-th third AI model includes at least one of the following: the (N-i)-th first channel status indication information among the N pieces of first channel status indication information, the (N-i+1)-th first channel status indication information among the N pieces of first channel status indication information, or the (N-i-1)-th fourth channel status indication information; where i takes a positive integer from 1 to N-1, and the third channel status indication information is the (N-1)-th fourth channel status indication information. It can be understood that neither N-i nor N-i-1 can be 0.
  • the fourth channel status indication information may include the (N-i)-th first channel status indication information and the (N-i+1)-th first channel status indication information among the N pieces of first channel status indication information.
  • N is greater than 2
  • the second decoder includes K third AI models
  • K is a positive integer greater than 1 and less than N-1
  • each third AI model among the K third AI models is used to restore part of the first channel status indication information among the N pieces of first channel status indication information.
  • the structure, input and/or output of the first decoder will be described in detail below.
  • the function of the first decoder matches that of the first encoder. If the first encoder has compression and quantization functions, the first decoder may have decompression and dequantization functions; or, if the first encoder has a compression function but not a quantization function, the first decoder may have a decompression function but not a dequantization function.
  • the first decoder includes N fourth AI models, and the N first channel status indication information is input to the N fourth AI models respectively.
  • the first decoder includes one fourth AI model, and the N pieces of first channel status indication information are input to the fourth AI model serially or in parallel; or, the first decoder includes M fourth AI models, where M is an integer greater than 1 and less than N, and part of the N pieces of first channel status indication information is input, serially or in parallel, to one fourth AI model among the M fourth AI models.
  • the present disclosure provides a communication device, which may be a terminal device, a device, a module or a chip in the terminal device, or a device that can be used in conjunction with the terminal device.
  • the communication device may include a module that performs one-to-one correspondence with the method/operation/step/action described in the first aspect.
  • the module may be a hardware circuit, software, or a combination of hardware circuit and software.
  • the communication device may include a processing module and a communication module.
  • a processing module, configured to determine N pieces of first channel status indication information by using the first encoder and the channel information of N downlink transmission layers input to the first encoder, where N is a positive integer greater than 1, and to determine the second channel status indication information by using the second encoder and the N pieces of first channel status indication information input to the second encoder, where the second channel status indication information corresponds to the N downlink transmission layers and the sequence length corresponding to the second channel status indication information is less than the sum of the sequence lengths corresponding to the N pieces of first channel status indication information;
  • a communication module configured to send the second channel status indication information.
  • the communication module is also configured to send information indicating the rank, and the value of N is equal to the value of the rank.
  • If the value of the rank is greater than 1, it can implicitly indicate that the downlink channel information is compressed twice.
  • the information used to indicate the rank may be a rank indicator (rank indicator, RI).
  • the structure of the second encoder, the structure of the second decoder matching the second encoder, or the structure of the second autoencoder to which the second encoder belongs corresponds to the value of N.
  • One or more of the structure of the first encoder, the structure of the first decoder matching the first encoder, or the structure of the first autoencoder to which the first encoder belongs also corresponds to the value of N.
  • the aforementioned information indicating the rank may also be used by a device that receives the information indicating the rank to determine the structure of the second decoder and/or the first decoder.
  • the communication module is also configured to send second information, where the second information is used to indicate one or more of the following: the second encoder, the second decoder matching the second encoder, or the second autoencoder AE to which the second encoder belongs.
  • the communication module is also configured to send first information, where the first information is used to indicate one or more of the following: the first encoder, the first decoder matching the first encoder, or the first autoencoder AE to which the first encoder belongs.
  • the structure, input and/or output of the first encoder can be understood with reference to the content in the first aspect, which will not be described again in this disclosure.
  • the structure, input and/or output of the second encoder can be understood with reference to the content in the first aspect, which will not be described again in this disclosure.
  • the processing module is further configured to obtain a downlink reference signal through the communication module before determining the N pieces of first channel status indication information, and to determine the value of N and the channel information of the N downlink transmission layers based on the downlink reference signal.
  • the present disclosure provides a communication device.
  • the communication device may be an access network device, a device, a module or a chip in the access network device, or a device that can be used in conjunction with the access network device.
  • the communication device may include a module that performs one-to-one correspondence with the method/operation/step/action described in the second aspect.
  • the module may be a hardware circuit, software, or a combination of hardware circuit and software.
  • the communication device may include a processing module and a communication module.
  • a communication module used to obtain the second channel status indication information
  • a processing module, configured to determine N pieces of first channel status indication information by using the second decoder and the second channel status indication information, where the sum of the sequence lengths corresponding to the N pieces of first channel status indication information is greater than the sequence length corresponding to the second channel status indication information, and N is a positive integer greater than 1, and to determine the channel information of the N downlink transmission layers by using the first decoder and the N pieces of first channel status indication information input to the first decoder.
  • the communication module is also configured to obtain information indicating the rank, and the value of N is equal to the value of the rank.
  • One or more of the structure of the second decoder, the structure of the second encoder matching the second decoder, or the structure of the second autoencoder to which the second encoder belongs corresponds to the value of N.
  • One or more of the structure of the first decoder, the structure of the first encoder matching the first decoder, or the structure of the first autoencoder to which the first encoder belongs also corresponds to the value of N.
  • the communication module is also used to obtain second information, where the second information is used to indicate one or more of the following: The second encoder, a decoder matching the second encoder, and a second autoencoder AE to which the second encoder belongs.
  • the communication module is also configured to obtain first information, where the first information is used to indicate one or more of the following: the first encoder, a first decoder matching the first encoder, or the first autoencoder AE to which the first encoder belongs.
  • the structure, input and/or output of the second decoder can be understood with reference to the content in the first aspect, which will not be described again in this disclosure.
  • the structure, input and/or output of the first decoder can be understood with reference to the content in the first aspect, which will not be described again in this disclosure.
  • the present disclosure provides a communication device, which includes a processor for implementing the method described in the first aspect.
  • the processor is coupled to a memory, and the memory is used to store instructions and data.
  • the communication device may also include a memory; the communication device may also include a communication interface, which is used for the device to communicate with other devices.
  • the communication interface may be a transceiver, a circuit, a bus, a module, a pin, or another type of communication interface.
  • the communication device includes:
  • Memory used to store instructions
  • a processor, configured to determine N pieces of first channel status indication information by using the first encoder and the channel information of N downlink transmission layers input to the first encoder, where N is a positive integer greater than 1, and to determine the second channel status indication information by using the second encoder and the N pieces of first channel status indication information input to the second encoder, where the second channel status indication information corresponds to the N downlink transmission layers and the sequence length corresponding to the second channel status indication information is less than the sum of the sequence lengths corresponding to the N pieces of first channel status indication information;
  • a communication interface, used to send the second channel status indication information.
  • In a possible design, the first encoder and the second encoder are implemented in the form of software function modules.
  • the first encoder and the second encoder can be implemented in the form of hardware (such as chips, hardware circuits, etc.).
  • the first encoder and the second encoder may be integrated in the processor, or may be separately deployed in the communication device.
  • the communication device may further include the first encoder and the second encoder.
  • the present disclosure provides a communication device, which includes a processor for implementing the method described in the second aspect.
  • the processor is coupled to a memory, and the memory is used to store instructions and data.
  • the communication device may also include a memory; the communication device may also include a communication interface, which is used for the device to communicate with other devices.
  • the communication interface may be a transceiver, a circuit, a bus, a module, a pin, or another type of communication interface.
  • the communication device includes:
  • Memory used to store instructions
  • a communication interface used to obtain the second channel status indication information
  • a processor, configured to determine N pieces of first channel status indication information by using a second decoder and the second channel status indication information, where the sum of the sequence lengths corresponding to the N pieces of first channel status indication information is greater than the sequence length corresponding to the second channel status indication information, and N is a positive integer greater than 1, and to determine the channel information of the N downlink transmission layers by using the first decoder and the N pieces of first channel status indication information input to the first decoder.
  • the second decoder and the first decoder are implemented in the form of software function modules.
  • the second decoder and the first decoder may be implemented in the form of hardware (such as chips, hardware circuits, etc.).
  • the second decoder and the first decoder may be integrated in the processor, or may be separately deployed in the communication device.
  • the communication device may further include the second decoder and the first decoder.
  • the present disclosure provides a communication system, which includes a terminal device and an access network device.
  • the interaction between terminal equipment and access network equipment can be understood with reference to the following content:
  • the terminal device uses the first encoder and the channel information of the N downlink transmission layers input to the first encoder to determine N pieces of first channel status indication information, where N is a positive integer greater than 1; the terminal device uses the second encoder and the N pieces of first channel status indication information input to the second encoder to determine the second channel status indication information, where the second channel status indication information corresponds to the N downlink transmission layers and the sequence length corresponding to the second channel status indication information is less than the sum of the sequence lengths corresponding to the N pieces of first channel status indication information;
  • the terminal device sends the second channel status indication information to the access network device, and the access network device receives the second channel status indication information from the terminal device;
  • the access network device uses the second decoder and the second channel status indication information to determine N pieces of first channel status indication information, where N is a positive integer greater than 1; the access network device uses the first decoder and the N pieces of first channel status indication information input to the first decoder to determine the channel information of the N downlink transmission layers.
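  • The end-to-end interaction can be summarized with the compact round-trip sketch below, where fixed random projections stand in for the four trained models (first/second encoder at the terminal, second/first decoder at the access network device); it only demonstrates the data flow and the length relationship, not actual recovery accuracy.

```python
import numpy as np

rng = np.random.default_rng(6)
N, L1, L2, H = 2, 64, 96, 32 * 13   # layers, first/second indication lengths, channel size

# Fixed random projections standing in for the trained (de)coders (illustrative only).
W_enc1 = rng.standard_normal((L1, H))
W_dec1 = rng.standard_normal((H, L1))
W_enc2 = rng.standard_normal((L2, N * L1))
W_dec2 = rng.standard_normal((N * L1, L2))

# Terminal side: channel info of N layers -> N first indications -> one second indication.
layers = [rng.standard_normal(H) for _ in range(N)]
first_csi = [W_enc1 @ h for h in layers]
second_csi = W_enc2 @ np.concatenate(first_csi)            # fed back over the air interface

# Access network side: second indication -> N first indications -> N layers' channel info.
first_hat = np.split(W_dec2 @ second_csi, N)
layers_hat = [W_dec1 @ x for x in first_hat]
print(second_csi.size, sum(x.size for x in first_csi))     # 96 < 128
```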
  • the solution implemented on the terminal device side can be understood with reference to the design described in the first aspect.
  • the solution implemented on the access network device side can also be understood with reference to the design described in the second aspect, which will not be described again in this disclosure.
  • the present disclosure provides a communication system, including a communication device as described in the third or fifth aspect; and a communication device as described in the fourth or sixth aspect.
  • the present disclosure also provides a computer program, which when the computer program is run on a computer, causes the computer to execute the method provided in any one of the above-mentioned first to second aspects.
  • the present disclosure also provides a computer program product, which includes instructions that, when run on a computer, cause the computer to execute the method provided in any one of the above-mentioned first to second aspects.
  • the present disclosure also provides a computer-readable storage medium.
  • the computer-readable storage medium stores a computer program or instructions.
  • When the computer program or instructions are run on a computer, the computer is caused to execute the method provided in any one of the above first to second aspects.
  • the present disclosure also provides a chip, which is used to read a computer program stored in a memory and execute the method provided in any one of the above first to second aspects; or, the chip includes a circuit for performing the method provided in any one of the above first to second aspects.
  • the present disclosure also provides a chip system, which includes a processor for supporting a device to implement the method provided in any one of the above first to second aspects.
  • the chip system further includes a memory, and the memory is used to save necessary programs and data for the device.
  • the chip system can be composed of chips or include chips and other discrete devices.
  • Figure 1 is a schematic structural diagram of a communication system
  • Figure 2A is a schematic diagram of the structure of a neuron
  • Figure 2B is a schematic diagram of the layer relationship of the neural network
  • Figure 2C is a schematic diagram of an AI application framework provided by the present disclosure.
  • Figure 3 is a schematic structural diagram of another communication system
  • FIGS. 4A to 4D are schematic diagrams of several network architectures
  • FIG. 5 is one of the flow diagrams of the communication method provided by the present disclosure.
  • Figure 6A is one of the schematic flow diagrams of encoding and decoding provided by the present disclosure.
  • Figure 6B is one of the schematic flow diagrams of encoding and decoding provided by the present disclosure.
  • Figure 6C is one of the schematic flow diagrams of encoding and decoding provided by the present disclosure.
  • Figure 7 is one of the flow diagrams of the communication method provided by the present disclosure.
  • Figure 8A is one of the schematic flow diagrams of encoding and decoding provided by the present disclosure.
  • Figure 8B is one of the schematic flow diagrams of encoding and decoding provided by the present disclosure.
  • Figure 8C is one of the schematic flow diagrams of encoding and decoding provided by the present disclosure.
  • Figure 9 is one of the schematic flow diagrams of encoding and decoding provided by the present disclosure.
  • Figure 10 is one of the structural schematic diagrams of the communication device provided by the present disclosure.
  • Figure 11 is one of the structural schematic diagrams of the communication device provided by the present disclosure.
  • In the present disclosure, "at least one (item)" means one (item) or more (items), and "multiple (items)" means two (items) or more than two (items).
  • “And/or” describes the relationship between associated objects, indicating that there can be three relationships. For example, A and/or B can mean: A exists alone, A and B exist simultaneously, and B exists alone. The character “/” generally indicates that the related objects are in an "or” relationship.
  • Although first, second, etc. may be used to describe various objects in this disclosure, these objects should not be limited to these terms. These terms are only used to distinguish the objects from each other.
  • the communication system can be a third generation (3G) communication system (such as a universal mobile telecommunication system (UMTS)), a fourth generation (4G) communication system (such as a long term evolution (LTE) system), a fifth generation (5G) communication system, a worldwide interoperability for microwave access (WiMAX) or wireless local area network (WLAN) system, an integration of multiple systems, or a future communication system, such as a 6G communication system.
  • the 5G communication system can also be called a new radio (NR) system.
  • One network element in a communication system can send signals to or receive signals from another network element.
  • the signal may include information, signaling or data, etc.
  • the network element can also be replaced by an entity, a network entity, a device, a communication device, a communication module, a node, a communication node, etc.
  • a network element is taken as an example for description.
  • the communication system may include at least one terminal device and at least one access network device.
  • the access network device can send downlink signals to the terminal device, and/or the terminal device can send uplink signals to the access network device.
  • the communication system includes multiple terminal devices, multiple terminal devices can also communicate with each other.
  • That is, both the network element sending the signal and the network element receiving the signal can be terminal devices.
  • FIG. 1 is a simplified schematic diagram of a wireless communication system provided by an embodiment of the present application.
  • the wireless communication system includes a wireless access network 100.
  • the radio access network 100 may be a next-generation (eg, 6G or higher) radio access network, or a legacy (eg, 5G, 4G, 3G or 2G) radio access network.
  • The wireless communication system also includes one or more communication devices 120a to 120j (collectively referred to as communication devices 120).
  • the wireless communication system may also include other equipment, such as core network equipment, wireless relay equipment and/or wireless backhaul equipment, etc., which are not shown in Figure 1 .
  • the wireless communication system may include multiple network devices (also called access network devices) at the same time, or may include multiple communication devices at the same time.
  • a network device can serve one or more communication devices at the same time.
  • a communication device can also access one or more network devices at the same time.
  • the embodiment of the present application does not limit the number of communication devices and network devices included in the wireless communication system.
  • the network device may be an entity on the network side that is used to transmit or receive signals.
  • the network device may be an access device through which the communication device wirelessly accesses the wireless communication system.
  • the network device may be a base station.
  • Base stations can broadly cover, or be replaced by, various names such as: NodeB, evolved NodeB (eNB), next generation NodeB (gNB), open radio access network device, relay station, access point, transmission and reception point (TRP), transmitting point (TP), master station (MeNB), secondary station (SeNB), multi-standard radio (MSR) node, home base station, network controller, access node, wireless node, access point (AP), transmission node, transceiver node, baseband unit (BBU), remote radio unit (RRU), active antenna unit (AAU), remote radio head (RRH), central unit (CU), distributed unit (DU), radio unit (RU), or centralized unit.
  • the base station may be a macro base station, a micro base station, a relay node, a donor node or the like, or a combination thereof.
  • Network equipment may also refer to communication modules, modems or chips provided in the aforementioned equipment or devices.
  • Network equipment can also be a mobile switching center, or a device responsible for device-to-device (D2D), vehicle-to-everything (V2X), or machine-to-machine (M2M) communication.
  • Network devices can support networks with the same or different access technologies. The embodiments of this application do not limit the specific technology and specific equipment form used by the network equipment.
  • Network equipment can be fixed or mobile.
  • base stations 110a and 110b are stationary and are responsible for wireless transmission to and reception from communication devices 120 in one or more cells.
  • the helicopter or drone 120i shown in Figure 1 may be configured to act as a mobile base station, and one or more cells may move based on the location of the mobile base station 120i.
  • a helicopter or drone (120i) may be configured to serve as a communications device that communicates with base station 110b.
  • the communication device used to implement the above access network function may be an access network device, or may be a network device with partial functions of the access network, or may be a device capable of supporting the implementation of the access network function, such as Chip system, hardware circuit, software module, or hardware circuit plus software module. This device can be installed in access network equipment or used in conjunction with access network equipment.
  • the communication device used to implement the function of the access network device is an access network device as an example for description.
  • a communication device may be an entity on the user side that is used to receive or transmit signals, such as a mobile phone.
  • Communication devices can be used to connect people, things and machines.
  • the communication device may communicate with one or more core networks through network devices.
  • Communication devices include handheld devices with wireless connectivity, other processing devices connected to wireless modems, or vehicle-mounted devices.
  • the communication device may be a portable, pocket-sized, handheld, computer-built-in, or vehicle-mounted mobile device.
  • the communication device 120 can be widely used in various scenarios, such as cellular communication, device-to-device (D2D), vehicle-to-everything (V2X), peer-to-peer (P2P), machine-to-machine (M2M), machine-type communication (MTC), Internet of things (IoT), virtual reality (VR), augmented reality (AR), industrial control, autonomous driving, telemedicine, smart grid, smart furniture, smart office, smart wearables, smart transportation, smart city, drones, robots, remote sensing, passive sensing, positioning, navigation and tracking, autonomous delivery, and so on.
  • Some examples of communication devices 120 are: user equipment (UE) of the 3GPP standards, fixed devices, mobile devices, handheld devices, wearable devices, cellular phones, smartphones, session initiation protocol (SIP) phones, laptop computers, personal computers, smartbooks, vehicles, satellites, global positioning system (GPS) devices, target tracking devices, drones, helicopters, aircraft, ships, remote control devices, smart home devices, industrial devices, personal communication service (PCS) phones, wireless local loop (WLL) stations, personal digital assistants (PDA), wireless network cameras, tablet computers, handheld computers, mobile internet devices (MID), wearable devices such as smart watches, virtual reality (VR) devices, augmented reality (AR) devices, wireless terminals in industrial control, terminals in internet of vehicles systems, wireless terminals in self-driving, wireless terminals in smart grid, wireless terminals in transportation safety, wireless terminals in smart city (such as smart fuel dispensers), terminal devices on high-speed rail, and wireless terminals in smart home.
  • the communication device 120 may be a wireless device in the above various scenarios or a device provided in the wireless device, for example, a communication module, a modem or a chip in the above device.
  • Communication equipment can also be called terminal, terminal equipment, user equipment (UE), mobile station (MS), mobile terminal (mobile terminal, MT), etc.
  • the communication device may also be a communication device in a future wireless communication system.
  • Communication equipment can be used in dedicated network equipment or general equipment. The embodiments of this application do not limit the specific technology and specific equipment form used in the communication equipment.
  • the communication device can be used to act as a base station.
  • a UE may act as a scheduling entity that provides sidelink signals between UEs in V2X, D2D or P2P, etc.
  • cell phone 120a and car 120b communicate with each other using sidelink signals.
  • Cell phone 120a and smart home device 120e communicate without relaying communication signals through base station 110b.
  • the communication device used to implement the functions of the communication device may be a terminal device, or a terminal device having part of the functions of the above communication device, or a device that can support the realization of the functions of the above communication device, such as a chip system, a hardware circuit, a software module, or a hardware circuit plus a software module.
  • the device can be installed in a terminal device or used in conjunction with the terminal device.
  • the chip system may be composed of chips, or may include chips and other discrete devices.
  • the communication device is a terminal device or a UE as an example for description.
  • a wireless communication system is usually composed of cells.
  • a base station provides management of the cell.
  • the base station provides communication services to multiple mobile stations (MS) in the cell.
  • the base station includes a baseband unit (BBU) and a remote radio unit (RRU).
  • the BBU and RRU can be placed in different places.
  • the RRU is remote and placed in a high traffic area, and the BBU is placed in the central computer room.
  • BBU and RRU can also be placed in the same computer room.
  • the BBU and RRU can also be different components under the same rack.
  • a cell may correspond to a carrier or component carrier.
  • the present disclosure can be applied between a network device and a communication device, between a network device and a network device, or between a communication device and a communication device, that is, between a primary device and a secondary device.
  • the primary device can be a network device or a communication device, and the secondary device can be another network device or communication device.
  • for example, when the primary device is a communication device, the secondary device can be another communication device.
  • The following description takes the primary device being a network device, such as an access network device, and the secondary device being a communication device, such as a terminal device, as an example.
  • the communication direction corresponding to the downlink is from the primary device to the secondary device
  • the communication direction corresponding to the uplink is from the secondary device to the primary device.
  • Protocol layer structure between access network equipment and terminal equipment
  • the protocol layer structure may include a control plane protocol layer structure and a user plane protocol layer structure.
  • the control plane protocol layer structure may include the functions of protocol layers such as the radio resource control (RRC) layer, the packet data convergence protocol (PDCP) layer, the radio link control (RLC) layer, the media access control (MAC) layer, and the physical layer.
  • the user plane protocol layer structure may include the functions of protocol layers such as PDCP layer, RLC layer, MAC layer and physical layer.
• Optionally, above the PDCP layer, the user plane protocol layer structure may also include a service data adaptation protocol (SDAP) layer.
  • the protocol layer structure between the access network device and the terminal may also include an artificial intelligence (artificial intelligence, AI) layer for transmitting data related to the AI function.
• Protocol layers such as the SDAP layer, PDCP layer, RLC layer, MAC layer, and physical layer can also be collectively referred to as the access layer.
• According to the transmission direction of the data, it is divided into sending and receiving, and each of the above layers is divided into a sending part and a receiving part.
• Taking downlink transmission as an example, after the PDCP layer obtains data from the upper layer, it transmits the data to the RLC layer and the MAC layer; the MAC layer then generates a transport block, which is transmitted wirelessly through the physical layer.
  • Data is encapsulated accordingly in each layer.
• The data received by a layer from its upper layer is regarded as the service data unit (SDU) of that layer; after being encapsulated by that layer, it becomes the protocol data unit (PDU) of that layer and is then passed to the next layer.
  • the terminal device may also have an application layer and a non-access layer.
  • the application layer can be used to provide services to applications installed in the terminal device.
• For example, the downlink data received by the terminal device can be transmitted upward layer by layer from the physical layer to the application layer, and then provided by the application layer to the application program; for another example, the application layer can obtain data generated by an application program and transmit it downward layer by layer to the physical layer, which sends it to other communication devices.
  • the non-access layer can be used to forward user data, such as forwarding uplink data received from the application layer to the SDAP layer or forwarding downlink data received from the SDAP layer to the application layer.
  • Access network equipment can include centralized units (central unit, CU) and distributed units (distributed unit, DU). Multiple DUs can be centrally controlled by one CU.
  • the interface between the CU and the DU may be called the F1 interface.
• the control plane (CP) interface can be F1-C, and the user plane (UP) interface can be F1-U.
• The CU and DU can be divided according to the protocol layers of the wireless network: for example, the functions of the PDCP layer and above are set in the CU, and the functions of the protocol layers below the PDCP layer (such as the RLC layer and MAC layer) are set in the DU; for another example, the functions of the protocol layers above the PDCP layer are set in the CU, and the functions of the PDCP layer and the protocol layers below it are set in the DU.
• The above division according to protocol layers is only an example; the CU or DU may also be divided to include the functions of more protocol layers, or to include part of the processing functions of a protocol layer.
  • part of the functions of the RLC layer and the functions of the protocol layer above the RLC layer are set in the CU, and the remaining functions of the RLC layer and the functions of the protocol layer below the RLC layer are set in the DU.
• The functions of the CU or DU can also be divided according to service types or other system requirements, for example, according to delay: functions whose processing time needs to meet the delay requirement are set in the DU, and functions whose processing time does not need to meet the delay requirement are set in the CU.
  • the CU may also have one or more functions of the core network.
  • the CU can be set on the network side to facilitate centralized management.
• Optionally, the radio unit (RU) of the DU may be set remotely, where the RU has the radio frequency function.
  • DU and RU can be divided at the physical layer (PHY).
  • PHY physical layer
  • DU can implement high-level functions in the PHY layer
  • RU can implement low-level functions in the PHY layer.
• the functions of the PHY layer can include adding cyclic redundancy check (CRC) codes, channel coding, rate matching, scrambling, modulation, layer mapping, precoding, resource mapping, physical antenna mapping, and/or radio frequency transmitting functions.
  • the functions of the PHY layer may include CRC, channel decoding, derate matching, descrambling, demodulation, delayer mapping, channel detection, resource demapping, physical antenna demapping, and/or radio frequency reception functions.
  • the high-level functions in the PHY layer may include part of the functions of the PHY layer, for example, this part of the function is closer to the MAC layer, and the lower-layer functions of the PHY layer may include another part of the function of the PHY layer, for example, this part of the function is closer to the radio frequency function.
  • high-level functions in the PHY layer may include adding CRC codes, channel coding, rate matching, scrambling, modulation, and layer mapping
  • low-level functions in the PHY layer may include precoding, resource mapping, physical antenna mapping, and radio frequency transmission.
• Alternatively, the high-level functions in the PHY layer may include adding CRC codes, channel coding, rate matching, scrambling, modulation, layer mapping, and precoding, and the low-level functions in the PHY layer may include resource mapping, physical antenna mapping, and radio frequency transmission functions.
  • the functions of the CU may be implemented by one entity, or may be implemented by different entities.
  • the functions of the CU can be further divided, that is, the control plane and the user plane are separated and implemented through different entities, namely the control plane CU entity (i.e., CU-CP entity) and the user plane CU entity (i.e., CU-UP entity).
  • the CU-CP entity and the CU-UP entity can be coupled with the DU to jointly complete the functions of the access network equipment.
• For example, the signaling generated by the CU can be sent to the terminal device through the DU, or the signaling generated by the terminal device can be sent to the CU through the DU.
  • RRC or PDCP layer signaling will eventually be processed into physical layer signaling and sent to the terminal device, or it will be converted from the received physical layer signaling.
  • the RRC or PDCP layer signaling can be considered to be sent through DU, or sent through DU and RU.
  • any one of the above DU, CU, CU-CP, CU-UP and RU can be a software module, a hardware structure, or a software module + hardware structure, without limitation.
  • the existence forms of different entities can be different and are not limited.
  • DU, CU, CU-CP, and CU-UP are software modules
  • RU is a hardware structure.
  • the number and type of each device in the communication system shown in Figure 1 are only for illustration, and the present disclosure is not limited thereto.
• The communication system may also include more terminal devices and more access network equipment, and may also include other network elements, such as core network equipment and/or network elements used to implement artificial intelligence functions.
  • the method provided by the present disclosure can be used for communication between access network equipment and terminal equipment, and can also be used for communication between other communication equipment, such as communication between macro base stations and micro base stations in wireless backhaul links.
• It can also be used for communication between two terminal devices over a sidelink (SL); this is not restricted.
• The access network equipment needs to obtain the channel state information (CSI) of the downlink channel, and decides, based on the CSI, configurations such as the resources, the modulation and coding scheme (MCS), and the precoding of the downlink data channel of the terminal equipment.
  • CSI belongs to a kind of channel information, which is a kind of information that can reflect channel characteristics and channel quality.
  • the channel information may also be called channel response.
  • the CSI may be represented by a channel matrix.
  • the CSI may include a channel matrix, or the CSI may be composed of a channel feature vector.
• In a frequency division duplex (FDD) communication scenario, since the uplink and downlink channels do not have reciprocity, or reciprocity between the uplink and downlink channels cannot be guaranteed, the access network equipment usually sends downlink reference signals to the terminal equipment.
• Based on the received downlink reference signal, the terminal equipment performs channel measurement and interference measurement to estimate the downlink channel information, where the downlink channel information includes the CSI, and then feeds back the CSI to the access network equipment.
  • the terminal device can generate a CSI report based on the estimated CSI and feed it back to the access network device in a predefined or configured manner by the access network device.
  • the downlink reference signal includes channel state information-reference signal (CSI-RS) or synchronizing signal block (synchronizing signal/physical broadcast channel block, SSB).
• the CSI report includes feedback quantities such as a rank indicator (RI), a channel quality indicator (CQI), and a precoding matrix indicator (PMI).
  • RI is used to indicate the number of downlink transmission layers recommended by the terminal equipment
  • CQI is used to indicate the modulation and coding method that the current channel conditions judged by the terminal equipment can support
  • PMI is used to indicate the precoding recommended by the terminal equipment
• The number of precoding layers indicated by the PMI corresponds to the RI. For example, if RI is 1, the PMI indicates one layer of precoding; if RI is N, the PMI indicates N layers of precoding, where N is a positive integer greater than 1. It can be understood that the N layers of precoding correspond to the N downlink transmission layers indicated by the RI. It can be seen that the larger the number of downlink transmission layers indicated by the RI, the higher the feedback overhead of the PMI.
  • the PMI indicating layer 1 precoding is referred to as layer 1 PMI
  • the PMI indicating layer N precoding is referred to as layer N PMI.
• The following uses i1 and i2 to introduce the common information in the layer 1 to layer 4 PMIs and the information independent to each layer.
• the first row in i1 and i2 corresponds to the layer 1 PMI
• the second row corresponds to the layer 2 PMI
• the third row corresponds to the layer 3 PMI
• the fourth row corresponds to the layer 4 PMI.
• i1,1, i1,2, and i1,5 are the information common to the 4 layers of precoding
• i1,1 is used to represent the DFT spatial base selection
• i1,2 is used to represent the spatial domain oversampling selection
• i1,5 is used to represent the DFT frequency domain basis subset selection.
• i1,6,l, i1,7,l, i1,8,l, i2,3,l, i2,4,l, and i2,5,l are the information independent to the l-th layer among the 4 layers, where l is a positive integer from 1 to 4.
• i1,6,l is used to represent the frequency domain basis selection of the l-th layer
• i1,7,l is used to represent the non-zero sparse bitmap (bitmap) of the l-th layer
• i1,8,l is used to indicate the strongest coefficient of the l-th layer
• i2,3,l, i2,4,l, and i2,5,l are used to represent the amplitude and phase coefficients of the l-th layer.
  • the terminal equipment uses the AI model to compress and feedback the CSI
  • the access network equipment uses the AI model to restore the compressed CSI.
  • What is transmitted in AI-based CSI feedback is a sequence (such as a bit sequence), and the overhead is lower than the overhead of traditional feedback CSI reporting.
  • a terminal device when a terminal device performs CSI feedback of multiple downlink transmission layers, it usually treats the CSI of multiple downlink transmission layers as mutually independent information. For each of the multiple downlink transmission layers, the terminal device will independently use the AI model to perform compressed feedback on the CSI of that layer.
  • the access network device receives the compressed CSI corresponding to each of the multiple downlink transmission layers, and then the access network device recovers the compressed CSI corresponding to the multiple downlink transmission layers respectively.
  • This design of independent compression feedback for multi-layer CSI will lead to repeated transmission of some common information between multi-layer CSI (such as DFT spatial base selection, spatial oversampling selection, DFT frequency domain base subset selection, etc.), resulting in resource waste.
• In view of this, the present disclosure provides a communication method and device: after the multi-layer CSI is compressed separately, an AI model is further used to compress the common information among the once-compressed multi-layer CSI, thereby reducing the resource waste caused by repeated transmission of the common information and thus reducing the feedback overhead of the CSI.
  • This communication method can be applied to the above communication system, such as FDD communication scenarios.
  • the communication method can also be used in time division duplex (TDD) communication scenarios, which is not limited by the present disclosure.
  • the AI model is the specific implementation of the AI technical function.
  • the AI model represents the mapping relationship between the input and output of the model.
  • the type of AI model can be a neural network, linear regression model, decision tree model, support vector machine (SVM), Bayesian network, Q learning model or other machine learning (ML) model.
  • This disclosure relates to an encoder for compressing CSI and a decoder for restoring compressed CSI.
  • the encoder and decoder need to be used in a matching manner. It can be understood that the encoder and decoder are matching AI models.
• An encoder may include one or more AI models, and the decoder matched with the encoder may also include one or more AI models; the matched encoder and decoder include the same number of AI models, and the AI models correspond to each other one to one.
• A matched pair of encoder and decoder can specifically be the two parts of the same autoencoder (AE).
  • the autoencoder is an unsupervised learning neural network. Its characteristic is that the input data is used as label data. Therefore, the autoencoder can also be understood as a self-supervised learning neural network.
  • Autoencoders can be used for data compression and recovery. For example, the encoder in the autoencoder can compress (encode) data A to obtain data B; the decoder in the autoencoder can decompress (decode) data B to recover data A. Or it can be understood that the decoder is the inverse operation of the encoder.
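• To make the compress/recover relationship above concrete, the following is a minimal autoencoder sketch in PyTorch; the layer sizes (a length-64 input compressed to a length-8 code) are illustrative assumptions and not values from this disclosure.

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    """Minimal autoencoder: the encoder compresses data A into data B,
    and the decoder recovers an approximation of A from B."""
    def __init__(self, in_dim: int = 64, code_dim: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(),
                                     nn.Linear(32, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim, 32), nn.ReLU(),
                                     nn.Linear(32, in_dim))

    def forward(self, a: torch.Tensor) -> torch.Tensor:
        b = self.encoder(a)      # compressed representation (data B)
        return self.decoder(b)   # recovered data A'

a = torch.randn(1, 64)           # the input data also serves as the label data
a_hat = AutoEncoder()(a)
```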
  • Neural network is a specific implementation form of AI or machine learning technology. According to the universal approximation theorem, neural networks can theoretically approximate any continuous function, which enables neural networks to have the ability to learn arbitrary mappings.
• Each neuron performs a weighted sum operation on its input values according to the weights, adds a bias (for example, b), and outputs the operation result through an activation function. There are many forms of activation functions.
  • the activation functions of different neurons in a neural network can be the same or different.
  • Neural networks generally include multiple layers, and each layer may include one or more neurons. By increasing the depth and/or width of a neural network, the expressive ability of the neural network can be improved, providing more powerful information extraction and abstract modeling capabilities for complex systems.
  • the depth of the neural network may refer to the number of layers included in the neural network, and the number of neurons included in each layer may be called the width of the layer.
  • the neural network includes an input layer and an output layer. The input layer of the neural network processes the received input information through neurons, and passes the processing results to the output layer, which obtains the output results of the neural network.
  • the neural network includes an input layer, a hidden layer and an output layer, as shown in Figure 2B.
  • the input layer of the neural network processes the received input information through neurons and passes the processing results to the intermediate hidden layer.
  • the hidden layer calculates the received processing results and obtains the calculation results.
• the hidden layer passes the calculation results to the output layer.
  • a neural network may include one hidden layer, or multiple hidden layers connected in sequence, without limitation.
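• The neuron operation and layer structure described above can be written out directly; the following numpy sketch uses an assumed 4-8-2 layout and a sigmoid activation purely for illustration.

```python
import numpy as np

def sigmoid(x):
    # one common activation function; others (ReLU, tanh, ...) work the same way
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
x = rng.standard_normal(4)                                     # input layer (width 4)
W1, b1 = rng.standard_normal((8, 4)), rng.standard_normal(8)   # hidden layer (width 8)
W2, b2 = rng.standard_normal((2, 8)), rng.standard_normal(2)   # output layer (width 2)

# Each neuron: weighted sum of its inputs plus a bias, passed through an activation.
h = sigmoid(W1 @ x + b1)   # hidden layer processes the input layer's result
y = sigmoid(W2 @ h + b2)   # output layer produces the network output
```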
  • the AI model involved in this disclosure can be a deep neural network (DNN).
  • DNN can include feedforward neural networks (FNN), convolutional neural networks (CNN) and recurrent neural networks (RNN).
  • the training data set is used for training the AI model.
  • the training data set can include the input of the AI model, or include the input and target output of the AI model.
  • the training data set includes one or more training data.
  • the training data may be training samples input to the AI model, or may be the target output of the AI model. Among them, the target output can also be called a label or a label sample.
• The training data set is one of the important parts of machine learning. Model training is essentially learning certain features from the training data so that the output of the AI model is as close as possible to the target output, for example, so that the difference between the output of the AI model and the target output is as small as possible.
  • the composition and selection of training data sets can, to a certain extent, determine the performance of the trained AI model.
  • a loss function can be defined during the training process of AI models (such as neural networks).
  • the loss function describes the gap or difference between the output value of the AI model and the target output value. This disclosure does not limit the specific form of the loss function.
  • the training process of the AI model is the process of adjusting the model parameters of the AI model so that the value of the loss function is less than the threshold, or the value of the loss function meets the target requirements.
• For example, if the AI model is a neural network, adjusting the model parameters of the neural network includes adjusting at least one of the following: the number of layers of the neural network, the width of the layers, the weights of the neurons, or the parameters in the activation functions of the neurons.
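• A training loop of the kind described above might look as follows; the network shape, optimizer, loss threshold, and random data are placeholders chosen only for illustration.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 16))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()            # measures the gap between model output and target output

x = torch.randn(128, 16)          # training samples
target = x                        # target output (labels); self-supervised here

threshold = 1e-3
for step in range(10_000):
    loss = loss_fn(model(x), target)
    if loss.item() < threshold:   # stop once the loss meets the target requirement
        break
    optimizer.zero_grad()
    loss.backward()               # compute gradients
    optimizer.step()              # adjust the model parameters to reduce the loss
```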
  • Inference data can be used as input to the trained AI model for inference of the AI model.
  • the inference data is input into the AI model, and the corresponding output can be obtained, which is the inference result.
• The design of an AI model mainly includes a data collection stage (such as collecting training data and/or inference data), a model training stage, and a model inference stage, and may further include a stage in which the inference results are applied.
• FIG. 2C illustrates an AI application framework.
  • data sources are used to provide training data sets and inference data.
  • the AI model is obtained by analyzing or training the training data (training data) provided by the data source.
  • the AI model represents the mapping relationship between the input and output of the model. Learning an AI model through model training nodes is equivalent to using training data to learn the mapping relationship between the input and output of the model.
  • the AI model trained through the model training link is used to perform inference based on the inference data provided by the data source to obtain the inference results.
  • This link can also be understood as: input the inference data into the AI model, obtain the output through the AI model, and the output is the inference result.
  • the inference result may indicate: configuration parameters used (executed) by the execution object, and/or operations performed by the execution object.
  • the inference results are released in the inference result application link.
  • the inference results can be planned uniformly by the execution (actor) entity.
• The execution entity can send the inference results to one or more execution objects (for example, core network equipment, access network equipment, or terminal equipment) for execution.
  • the execution entity can also feed back the performance of the model to the data source to facilitate the subsequent update and training of the model.
  • network elements with artificial intelligence functions may be included in the communication system.
  • the above steps related to AI model design can be executed by one or more network elements with artificial intelligence functions.
  • AI functions (such as AI modules or AI entities) can be configured in existing network elements in the communication system to implement AI-related operations, such as AI model training and/or inference.
  • the existing network element may be access network equipment (such as gNB), terminal equipment, core network equipment, or network management, etc.
  • the network management can divide the network management work into three categories according to the actual needs of the operator's network operation: Operation, Management, and Maintenance.
  • Network management can also be called operation administration and maintenance (OAM) network element, referred to as OAM.
  • Operations mainly complete the analysis, prediction, planning and configuration of daily networks and services; maintenance mainly involves daily operational activities such as testing and fault management of the network and its services.
• Through network management, the network operating status can be detected, network connections and performance can be optimized, network operation stability can be improved, and network maintenance costs can be reduced.
  • independent network elements can also be introduced into the communication system to perform AI-related operations, such as training AI models.
  • the independent network element may be called an AI network element or an AI node, and this disclosure does not limit this name.
  • the AI network element can be directly connected to the access network equipment in the communication system, or indirectly connected through a third-party network element and the access network equipment.
• The third-party network element can be a core network element such as an access and mobility management function (AMF) network element or a user plane function (UPF) network element, an OAM, a cloud server, or another network element; this is not limited.
  • a model can infer one parameter or multiple parameters.
  • the training processes of different models can be deployed in different devices or nodes, or they can be deployed in the same device or node.
  • the inference processes of different models can be deployed in different device or node, or can be deployed in the same device or node.
  • the terminal device can train the supporting encoder and decoder and then send the model parameters of the decoder to the access network device.
  • the access network equipment can indicate the model parameters of the encoder to the terminal equipment after training the supporting encoder and decoder.
• Alternatively, the AI network element can train the supporting encoder and decoder, and then send the model parameters of the encoder to the terminal device and the model parameters of the decoder to the access network equipment. Then, the model inference step corresponding to the encoder is performed in the terminal equipment, and the model inference step corresponding to the decoder is performed in the access network equipment.
• The model parameters may include one or more of the following: structural parameters of the model (such as the number of layers and/or weights of the model), input parameters of the model (such as the input dimension and the number of input ports), or output parameters of the model (such as the output dimension and the number of output ports).
  • the input dimension may refer to the size of an input data.
  • the input dimension corresponding to the sequence may indicate the length of the sequence.
  • the number of input ports may refer to the amount of input data.
  • the output dimension can refer to the size of an output data.
  • the output dimension corresponding to the sequence can indicate the length of the sequence.
  • the number of output ports can refer to the amount of output data.
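• The model parameters exchanged between nodes, as described above, can be pictured as a structured description; the following dataclass is a hypothetical illustration of such a description and not a format defined by this disclosure.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ModelParameters:
    """Hypothetical container for the parameters a training node might send."""
    num_layers: int                                      # structural parameter: number of layers
    weights: List[float] = field(default_factory=list)   # structural parameter: model weights
    input_dim: int = 0                                   # input parameter: size of one input (e.g. sequence length)
    num_input_ports: int = 1                             # input parameter: amount of input data
    output_dim: int = 0                                  # output parameter: size of one output
    num_output_ports: int = 1                            # output parameter: amount of output data

# Example: a decoder description that could be sent to the access network device.
decoder_params = ModelParameters(num_layers=3, input_dim=32, output_dim=256)
```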
• The present disclosure also provides several network architectures, as shown in Figures 4A to 4D, which take model training and/or inference in the access network equipment as an example and divide the functional modules of the access network equipment.
  • the access network equipment includes a near-real-time access network intelligent control (RAN intelligent controller, RIC) module for model learning and/or inference.
  • the near real-time RIC may obtain network side and/or terminal side information from at least one of CU, DU and RU, and the information may include training data or inference data.
  • near-real-time RIC can be used for model training, and the trained model can also be used for inference.
  • the near real-time RIC may submit the inference results to at least one of CU, DU and RU.
• Inference results can be exchanged between the CU and the DU.
• Inference results can be exchanged between the DU and the RU.
• For example, the near-real-time RIC submits inference results to the DU, and the DU submits them to the RU.
• A non-real-time RIC can be included outside the access network equipment; the non-real-time RIC can be located in the OAM or in core network equipment.
  • This non-real-time RIC is used for model learning and inference.
  • the non-real-time RIC may obtain network-side and/or terminal-side information from at least one of CU, DU, and RU, and the information may include training data or inference data.
  • non-real-time RIC is used for model training, and the trained model can also be used for inference.
  • the non-real-time RIC may submit the inference result to at least one of CU, DU and RU.
• Inference results can be exchanged between the CU and the DU.
• Inference results can be exchanged between the DU and the RU.
• For example, the non-real-time RIC submits inference results to the DU, and the DU submits them to the RU.
  • the access network equipment includes near-real-time RIC, and the access network equipment also includes non-real-time RIC.
  • the non-real-time RIC can be located in the OAM or core network equipment.
  • non-real-time RIC can be used for model training.
• The near-real-time RIC can obtain the model parameters of the trained AI model from the non-real-time RIC, obtain network-side and/or terminal-side information from at least one of the CU, DU, and RU, and use this information and the trained AI model to obtain inference results.
  • the near real-time RIC can also submit the inference results to at least one of CU, DU and RU.
  • CU and DU can exchange inference results.
• The DU and RU can exchange inference results; for example, the near-real-time RIC submits the inference results to the DU, and the DU submits them to the RU.
• In one possible implementation, the near-real-time RIC is used for model training and for inference using the trained model, and the non-real-time RIC does not participate in the training or inference of the model; or, the non-real-time RIC is used for model training and for inference using the trained model, and the near-real-time RIC does not participate in the training or inference of the model.
  • near-real-time RIC is used to train the model and send the model parameters of the trained AI model to the non-real-time RIC, and the non-real-time RIC uses the trained model for inference.
  • FIG. 4B shows an example diagram of a network architecture to which the method provided by the present disclosure can be applied. Compared with (c) in Figure 4A, the CU is separated into CU-CP and CU-UP in Figure 4B.
  • Figure 4C shows an example diagram of a network architecture to which the method provided by the present disclosure can be applied.
  • the access network equipment includes one or more AI entities, and the function of the AI entity is similar to the above-mentioned near real-time RIC.
  • OAM includes one or more AI entities whose functions are similar to the above-mentioned non-real-time RIC.
  • the core network equipment includes one or more AI entities whose functions are similar to the above-mentioned non-real-time RIC.
  • differences in models may include at least one of the following differences: structural parameters of the model (such as the number of layers and/or weights of the model, etc.), input parameters of the model, or output parameters of the model.
  • Figure 4D shows an example diagram of a network architecture to which the method provided by the present disclosure can be applied.
  • the access network equipment in Figure 4D is separated into CU and DU.
  • the CU may include an AI entity whose functions are similar to the above-mentioned near real-time RIC.
  • the DU may include an AI entity whose functions are similar to the above-mentioned near real-time RIC.
  • both CU and DU include AI entities, the models trained by their respective AI entities are different, and/or the models used for inference are different.
  • the CU in Figure 4D can be further divided into CU-CP and CU-UP.
  • one or more AI models can be deployed in CU-CP.
  • one or more AI models may be deployed in CU-UP.
  • the OAM of the access network device and the OAM of the core network device can be deployed independently.
  • the present disclosure involves two stages of CSI compression.
  • the terminal device uses a first encoder to separately compress multi-layer CSI.
  • the terminal device uses a second encoder to compress the results of the first stage.
• Correspondingly, the access network equipment side also requires two stages of decompression: for example, the access network equipment first uses the second decoder matched with the second encoder to decompress, and then uses the first decoder matched with the first encoder to decompress the output of the second decoder.
  • the first encoder and the first decoder are used in matching manner, and the two are trained together. It can be understood that the number of AI models included in the first encoder is the same as the number of AI models included in the first decoder. Each AI model in the first encoder has a corresponding AI model in the first decoder.
  • the training data set used to train the first encoder and the first decoder includes training samples and sample labels, where the training samples are the channel information of the downlink transmission layer determined by the terminal device, and the sample labels are the real channel information of the downlink transmission layer. For the case where the first encoder and the first decoder belong to the same autoencoder, the training data only includes training samples, or the training samples are sample labels, that is, the channel information of the downlink transmission layer.
  • the first encoder and the first decoder may be for a specific downlink transmission layer. For example, channel information of multiple first downlink transmission layers is used to train a pair of first encoders and first decoders. The pair of first encoders and the first decoder is dedicated to processing channel information of the first downlink transmission layer. Another pair of first encoders and first decoders is trained using channel information of a plurality of second downlink transmission layers, and the pair of first encoders and first decoders is dedicated to processing channel information of the second downlink transmission layer.
  • the first encoder and the first decoder may be common to multiple downlink transmission layers, for example, using channel information of multiple first downlink transmission layers and channel information of the second downlink transmission layer to train a pair of first encoders and a first decoder.
  • the pair of the first encoder and the first decoder can be used for processing channel information of the first downlink transmission layer and can also be used for processing channel information of the second downlink transmission layer.
  • the specific training process is as follows: the model training node uses the first encoder to process the channel information of a downlink transmission layer, that is, the training sample, obtains the first channel status indication information corresponding to the downlink transmission layer, and uses the first decoder to process the first channel Status indication information to obtain the restored channel information of the downlink transmission layer.
  • the sequence length of the first channel status indication information may be the first target length.
• The difference between the restored channel information of the downlink transmission layer and the corresponding sample label, that is, the value of the loss function, is calculated, and the parameters of the first encoder and the first decoder are updated according to the value of the loss function, so that the difference between the restored channel information of the downlink transmission layer and the corresponding sample label is minimized, that is, the loss function is minimized.
  • the loss function may be minimum mean square error (MSE) or cosine similarity.
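• As a sketch of the first-stage training just described (the first encoder compresses one layer's channel information into a sequence of the first target length, the first decoder restores it, and both are updated from the loss): the dimensions below and the use of real-valued vectors in place of complex channel coefficients are simplifying assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

CH_DIM, FIRST_TARGET_LEN = 256, 32   # assumed sizes, not values from this disclosure

first_encoder = nn.Sequential(nn.Linear(CH_DIM, 128), nn.ReLU(),
                              nn.Linear(128, FIRST_TARGET_LEN))
first_decoder = nn.Sequential(nn.Linear(FIRST_TARGET_LEN, 128), nn.ReLU(),
                              nn.Linear(128, CH_DIM))
optimizer = torch.optim.Adam([*first_encoder.parameters(),
                              *first_decoder.parameters()], lr=1e-3)

layer_csi = torch.randn(64, CH_DIM)   # training samples: channel info of a downlink transmission layer (also the labels)
for step in range(1000):
    code = first_encoder(layer_csi)          # first channel status indication information
    restored = first_decoder(code)           # restored channel information of the layer
    loss = F.mse_loss(restored, layer_csi)   # MSE loss; a cosine-similarity-based loss such as
                                             # 1 - F.cosine_similarity(restored, layer_csi, dim=-1).mean() could be used instead
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```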
• The above-mentioned model training node can be a terminal device, an access network device, or another network element with AI functions in the communication system. Furthermore, if the model training node is a terminal device, the model parameters of the first decoder can be sent to the access network device; or, if the model training node is an access network device, the model parameters of the first encoder can be sent to the terminal device; or, if the model training node is a network element other than the terminal device and the access network device, the model parameters of the first encoder can be sent to the terminal device, and the model parameters of the first decoder can be sent to the access network device.
• The second encoder and the second decoder are used in a matching manner, and the two are trained together. It can be understood that the number of AI models included in the second encoder is the same as the number of AI models included in the second decoder, and each AI model in the second encoder has a corresponding AI model in the second decoder. Since the second encoder compresses the result of the first encoder, the second encoder and the second decoder cannot be used independently of the first encoder and the first decoder; that is, the second encoder and the second decoder need to be trained together with the first encoder and the first decoder. In one possible design, the second encoder and the second decoder belong to the same autoencoder.
• The second encoder and the second decoder can be trained together with a trained first encoder and first decoder, or together with an untrained first encoder and first decoder. When training the second encoder and the second decoder, the first encoder and the first decoder can be fixed and only the second encoder and the second decoder are trained; alternatively, the second encoder, the second decoder, the first encoder, and the first decoder can be trained jointly.
  • the specific training process is as follows: the model training node uses the first encoder to process the channel information of N downlink transmission layers to obtain N first channel status indication information , use the second encoder to process N pieces of first channel status indication information to obtain the second channel status indication information, use the second decoder to process the second channel status indication information, and obtain The restored N pieces of first channel status indication information are obtained, and the first decoder is used to process the restored N pieces of first channel status indication information, and the restored N pieces of downlink transmission layer channel information are obtained.
  • the sequence length of the second channel status indication information may be the second target length. Optionally, there may be one or more second target lengths.
  • the loss function may be minimum mean square error (MSE) or cosine similarity.
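• The two-stage forward chain described above can be sketched as follows; N, the sequence lengths, and the single-linear-layer models are illustrative assumptions, and quantization is omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

N, CH_DIM, L1, L2 = 4, 256, 32, 64   # assumed: 4 layers, first/second target lengths 32 and 64

first_encoder  = nn.Linear(CH_DIM, L1)    # stands in for the first encoder (applied per layer)
second_encoder = nn.Linear(N * L1, L2)    # compresses the N first indications jointly
second_decoder = nn.Linear(L2, N * L1)    # restores the N first indications
first_decoder  = nn.Linear(L1, CH_DIM)    # stands in for the first decoder (applied per layer)

csi = torch.randn(N, CH_DIM)              # channel information of the N downlink transmission layers

first_info  = first_encoder(csi)                           # N pieces of first channel status indication info
second_info = second_encoder(first_info.reshape(1, -1))    # second channel status indication info
restored_first = second_decoder(second_info).reshape(N, L1)
restored_csi   = first_decoder(restored_first)             # restored channel info of the N layers

loss = F.mse_loss(restored_csi, csi)      # loss used to update the second encoder/decoder (and optionally the first)
```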
• The model training node can be a terminal device, an access network device, or another network element with AI functions in the communication system. Furthermore, if the model training node is a terminal device, the model parameters of the second decoder can be sent to the access network device; or, if the model training node is an access network device, the model parameters of the second encoder can be sent to the terminal device; or, if the model training node is another network element, the model parameters of the second encoder can be sent to the terminal device, and the model parameters of the second decoder can be sent to the access network device.
  • the method mainly includes the following processes.
  • S501 The terminal device determines channel information of N downlink transmission layers.
  • the terminal device can estimate the downlink channel information by measuring the downlink reference signal from the access network device, and determine the number N of downlink transmission layers recommended by the terminal device.
  • the downlink channel information may include channel state information CSI of the downlink channel, and the downlink channel information may also be called a downlink channel response.
  • the terminal device can divide the downlink channel information into channel information (or channel characteristics) of N downlink transmission layers, where N is a positive integer.
  • the terminal device uses the first encoder and the channel information of N downlink transmission layers input to the first encoder to determine N pieces of first channel status indication information.
  • the channel information of the i-th downlink transmission layer among the N downlink transmission layers corresponds to the i-th first channel status indication information among the N first channel status indication information.
  • the i-th first channel status indication information can be understood as a sequence, matrix or high-dimensional tensor, used to indicate the channel information of the i-th downlink transmission layer.
  • the first encoder has compression and quantization functions, and the sequence corresponding to the i-th first channel status indication information may specifically be a bit sequence.
  • the first encoder has a compression function but does not have a quantization function, and the sequence corresponding to the i-th first channel status indication information may specifically be a floating-point sequence.
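• Where the first encoder only compresses and outputs a floating-point sequence, a separate quantization step would turn it into a bit sequence before feedback; the uniform scalar quantizer below is only one possible sketch of such a step, not a method defined by this disclosure.

```python
import numpy as np

def quantize_to_bits(x: np.ndarray, bits_per_value: int = 2) -> np.ndarray:
    """Uniformly quantize a floating-point sequence into a bit sequence."""
    levels = 2 ** bits_per_value
    lo, hi = x.min(), x.max()
    idx = np.clip(np.round((x - lo) / (hi - lo + 1e-12) * (levels - 1)), 0, levels - 1)
    bits = (idx.astype(np.uint8)[:, None] >> np.arange(bits_per_value)[::-1]) & 1
    return bits.reshape(-1)

float_seq = np.random.randn(32)         # first channel status indication info as a floating-point sequence
bit_seq = quantize_to_bits(float_seq)   # corresponding bit sequence (here 32 * 2 = 64 bits)
```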
  • the first encoder can be trained.
  • the structure of the first encoder can be understood with reference to any one of the following modes 11, 12, or 13.
  • the first encoder includes a first AI model.
  • the terminal device can use the first AI model to process channel information of N downlink transmission layers to obtain N first channel status indication information.
  • channel information of N downlink transmission layers is input to the first AI model, and correspondingly, the output of the first encoder includes N pieces of first channel status indication information.
  • the terminal device may input the channel information of N downlink transmission layers into the first AI model in parallel or serially, which is not limited by this disclosure.
• Optionally, different first encoders, that is, different first AI models, can be set for different numbers of downlink transmission layers.
• In other words, the first AI model included in the first encoder is associated with the value of N, and different values of N are associated with different first AI models.
• For example, when N is 2, the first encoder is recorded as encoder 21 or the encoder in AE model 21; when N is 3, the first encoder is recorded as encoder 31 or the encoder in AE model 31.
  • Such a design can enhance the flexibility of AI model application.
• Alternatively, different numbers of downlink transmission layers can be set to correspond to the same first encoder or to encoders in the same AE model. Such a design can reduce the storage overhead of AI models.
  • a first AI model can also be trained, using the downlink channel information estimated by measuring the downlink reference signal as input data, and outputting N first channel status indication information.
  • the terminal equipment estimates the downlink channel information by measuring the downlink reference signal, it can directly use the first AI model to process the downlink channel information to obtain N pieces of first channel status indication information.
  • the first encoder may include N first AI models, and the i-th first AI model in the first encoder is used to process channel information of the i-th downlink transmission layer among the N downlink transmission layers, The i-th first channel status indication information among the N pieces of first channel status indication information is obtained.
  • the channel information of the N downlink transmission layers is input into the N first AI models respectively.
  • the input of the i-th first AI model among the N first AI models includes the channel information of the i-th downlink transmission layer among the N downlink transmission layers, and the output of the i-th first AI model includes the i-th downlink transmission layer. Corresponding i-th first channel status indication information.
  • the N first AI models in the first encoder can be the same, or the model type of the N first AI models (such as the neural network used) is the same, but the model parameters are different; or, the N first AI models The model types and model parameters are different, and this disclosure does not limit this.
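• The difference between mode 11 (a single shared first AI model) and mode 12 (one first AI model per downlink transmission layer) can be illustrated as follows; the model shapes are assumptions and the linear layers merely stand in for the first AI models.

```python
import torch
import torch.nn as nn

N, CH_DIM, L1 = 4, 256, 32
csi = torch.randn(N, CH_DIM)   # channel information of the N downlink transmission layers

# Mode 11: one first AI model processes the channel info of all N layers
# (here the N layers are fed in parallel as a batch).
shared_model = nn.Linear(CH_DIM, L1)
first_info_mode11 = shared_model(csi)

# Mode 12: N first AI models, the i-th one dedicated to the i-th layer.
per_layer_models = nn.ModuleList([nn.Linear(CH_DIM, L1) for _ in range(N)])
first_info_mode12 = torch.stack([per_layer_models[i](csi[i]) for i in range(N)])
```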
  • the first encoder includes M first AI models, where M is an integer greater than 1 and less than N.
• A first AI model can be used to process the channel information of part (one or more) of the N downlink transmission layers; in other words, the channel information of part of the N downlink transmission layers may be processed by the same first AI model, with that part of the channel information input serially or in parallel to one of the M first AI models.
  • the structure of the first encoder can be pre-configured to be one of Mode 11, Mode 12 or Mode 13.
• The preconfiguration can be implemented through protocol definition or other predefined methods, or through negotiation between the terminal device and the access network device.
• Alternatively, the terminal device can determine the structure of the first encoder by itself according to one of mode 11, mode 12, or mode 13. Further optionally, the terminal device can send first indication information to the access network device.
  • the first indication information is used to indicate the structure of the first encoder.
  • the first indication information is specifically used to indicate that the structure of the first encoder is mode 11, mode 12 or mode 13.
• The mapping relationship between different values of N and the structure of the first encoder can also be preconfigured; in other words, the structure of the first encoder corresponds to the value of N.
• The model type and model parameters of each first AI model in the first encoder can also be preconfigured; or, the terminal device can report to the access network device the identification, model type, or model parameters of the first AI model used by the terminal device in the first encoder. For example, when the first indication information indicates mode 11, it may also specifically indicate the identification, model type, or model parameters of the first AI model in the first encoder of mode 11.
• In one possible case, if the aforementioned number N of downlink transmission layers is equal to 1, the terminal device can send the N pieces of first channel status indication information to the access network device; or, if N is equal to 1 and the first encoder only has a compression function, the terminal device can quantize the N pieces of first channel status indication information and then send them to the access network device. Or, if the number N of downlink transmission layers is greater than 1, the terminal device further performs the following S503 and S504 after completing S502:
  • the terminal device uses the second encoder and the N pieces of first channel status indication information input to the second encoder to determine the second channel status indication information; wherein the second channel status indication information corresponds to the N pieces of Downstream transport layer.
  • the terminal device may use the second encoder to process the N pieces of first channel status indication information to obtain the second channel status indication information.
  • the input of the second encoder includes N pieces of first channel status indication information
  • the output includes second channel status indication information.
  • the second channel status indication information obtained in S503 is a compression of the N pieces of first channel status indication information obtained in S502.
  • the second channel status indication information can also be understood as a sequence, and the sequence length corresponding to the second channel status indication information is less than the sum of the sequence lengths corresponding to N pieces of first channel status indication information.
  • the second encoder in this solution 1 has compression and quantization functions, and the processing of the N first channel status indication information includes compression and quantization.
  • the sequence corresponding to the second channel status indication information can specifically be a bit sequence.
  • the second encoder can be trained.
  • the structure of the second encoder can refer to any one of the following modes 21, 22, or 23.
  • the second encoder includes a second AI model.
  • the terminal device can use the second AI model to process (such as compress and quantize) the N pieces of first channel status indication information to obtain the second channel status indication information. It can be understood that the input of the second AI model includes N pieces of first channel status indication information, and the output of the second AI model includes the second channel status indication information.
  • Figure 6A illustrates a schematic diagram of the encoding process.
• Taking the case where the first encoder in S502 includes 4 first AI models, namely the first AI model 11, the first AI model 12, the first AI model 13, and the first AI model 14, and the second encoder includes 1 second AI model as an example, Figure 6A illustrates the process in which the terminal device uses the first encoder and the second encoder to compress and quantize the channel information of the four downlink transmission layers twice to obtain the second channel status indication information.
  • different second encoders can be set for different downlink transmission layer numbers.
  • a second AI model included in the second encoder is associated with the value of N, and different values of N are associated with different second AI models.
• For example, when N is 2, the second encoder is recorded as encoder 22 or the encoder in AE model 22; when N is 3, the second encoder is recorded as encoder 32 or the encoder in AE model 32.
  • Such a design can enhance the flexibility of AI model application.
• Alternatively, different numbers of downlink transmission layers can be set to use the same second encoder or encoders in the same AE model. Such a design can reduce the storage overhead of AI models.
  • the second encoder may include multiple second AI models, and the number of second AI models in the second encoder is related to the number N of downlink transmission layers.
  • the second encoder includes N-1 second AI models.
• The input of the 1st second AI model among the N-1 second AI models includes the 1st first channel status indication information and the 2nd first channel status indication information among the N pieces of first channel status indication information, and the output of the 1st second AI model includes the 1st fourth channel status indication information.
• The input of the i-th second AI model includes the (i-1)-th fourth channel status indication information output by the (i-1)-th second AI model and the (i+1)-th first channel status indication information among the N pieces of first channel status indication information, and the output of the i-th second AI model includes the i-th fourth channel status indication information, where 2 ≤ i ≤ N-1 and i is a positive integer; in other words, i ranges from 2 to N-1.
  • the N-1th fourth channel status indication information output by the N-1th second AI model is the second channel status indication information.
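• A sketch of the chained structure of mode 22 (N-1 second AI models, where each model combines the previous fourth channel status indication information with the next first channel status indication information) is given below; the lengths and linear layers are assumptions.

```python
import torch
import torch.nn as nn

N, L1, L4 = 4, 32, 32   # assumed lengths of the first / fourth channel status indication info

# N-1 second AI models: the 1st takes two first indications, each later one takes
# the previous fourth indication plus the next first indication.
chain = nn.ModuleList([nn.Linear(2 * L1, L4)] +
                      [nn.Linear(L4 + L1, L4) for _ in range(N - 2)])

first_info = [torch.randn(L1) for _ in range(N)]   # N pieces of first channel status indication info

fourth = chain[0](torch.cat([first_info[0], first_info[1]]))    # 1st fourth indication
for i in range(2, N):                                           # i-th second AI model, i = 2 .. N-1
    fourth = chain[i - 1](torch.cat([fourth, first_info[i]]))   # i-th fourth indication

second_info = fourth   # the (N-1)-th fourth indication is the second channel status indication info
```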
  • Figure 6B shows a schematic diagram of the encoding process.
• Taking the case where the first encoder in S502 includes 4 first AI models, namely the first AI model 11, the first AI model 12, the first AI model 13, and the first AI model 14, and the second encoder in S503 includes three second AI models, namely the second AI model 21, the second AI model 22, and the second AI model 23, as an example, Figure 6B shows the process in which the terminal device uses the first encoder and the second encoder to perform two rounds of compression and quantization on the channel information of the four downlink transmission layers to obtain the second channel status indication information.
• The input of the first AI model 11 includes the channel information of downlink transmission layer 1, and its output includes the 1st first channel status indication information; the input of the first AI model 12 includes the channel information of downlink transmission layer 2, and its output includes the 2nd first channel status indication information; the input of the first AI model 13 includes the channel information of downlink transmission layer 3, and its output includes the 3rd first channel status indication information; the input of the first AI model 14 includes the channel information of downlink transmission layer 4, and its output includes the 4th first channel status indication information.
• The input of the second AI model 21 includes the 1st first channel status indication information and the 2nd first channel status indication information, and its output includes the 1st fourth channel status indication information; the input of the second AI model 22 includes the 1st fourth channel status indication information and the 3rd first channel status indication information, and its output includes the 2nd fourth channel status indication information; the input of the second AI model 23 includes the 2nd fourth channel status indication information and the 4th first channel status indication information, and its output includes the 3rd fourth channel status indication information. The 3rd fourth channel status indication information output by the second AI model 23 is the second channel status indication information.
  • the second encoder may include K second AI models.
• K is a positive integer greater than 1 and less than N-1; for example, K is 2. It can be understood that part of the first channel status indication information among the N pieces of first channel status indication information is input in parallel to one second AI model among the K second AI models. The input and output of the K second AI models are described in detail below.
• For example, when N is 4, K may be 2, and the second encoder may include two second AI models, denoted as the second AI model 21 and the second AI model 22.
• In one possible design, the input of the second AI model 21 includes the first N-1 (that is, the first 3) pieces of first channel status indication information among the 4 pieces of first channel status indication information, and its output includes the 1st fourth channel status indication information; the input of the second AI model 22 includes the N-th first channel status indication information among the N pieces of first channel status indication information and the 1st fourth channel status indication information, and its output includes the 2nd fourth channel status indication information. The 2nd fourth channel status indication information output by the second AI model 22 is the second channel status indication information.
  • Figure 6C shows a schematic diagram of the encoding process.
• Taking the case where the first encoder in S502 includes 4 first AI models, namely the first AI model 11, the first AI model 12, the first AI model 13, and the first AI model 14, and the second encoder in S503 includes two second AI models, namely the second AI model 21 and the second AI model 22, as an example, Figure 6C shows the process in which the terminal device uses the first encoder and the second encoder to perform two rounds of compression and quantization on the channel information of the four downlink transmission layers to obtain the second channel status indication information.
• The input of the first AI model 11 includes the channel information of downlink transmission layer 1, and its output includes the 1st first channel status indication information; the input of the first AI model 12 includes the channel information of downlink transmission layer 2, and its output includes the 2nd first channel status indication information; the input of the first AI model 13 includes the channel information of downlink transmission layer 3, and its output includes the 3rd first channel status indication information; the input of the first AI model 14 includes the channel information of downlink transmission layer 4, and its output includes the 4th first channel status indication information.
• The input of the second AI model 21 includes the 1st first channel status indication information, the 2nd first channel status indication information, and the 3rd first channel status indication information, and its output includes the 1st fourth channel status indication information; the input of the second AI model 22 includes the 1st fourth channel status indication information and the 4th first channel status indication information, and its output includes the 2nd fourth channel status indication information. The 2nd fourth channel status indication information output by the second AI model 22 is the second channel status indication information.
• In another possible design, the input of the second AI model 21 includes the first two pieces of first channel status indication information among the N pieces of first channel status indication information, that is, the 1st first channel status indication information and the 2nd first channel status indication information, and the output of the second AI model 21 includes the 1st fourth channel status indication information; the input of the second AI model 22 includes the 1st fourth channel status indication information and the remaining first channel status indication information among the N pieces except the first two (for example, when N is 4, the remaining first channel status indication information includes the 3rd first channel status indication information and the 4th first channel status indication information), and the output of the second AI model 22 includes the 2nd fourth channel status indication information.
  • the second fourth channel status indication information output by the second AI model 22 is the second channel status indication information.
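  • As a concrete illustration of the Figure 6C arrangement above (N = 4, K = 2), the following sketch uses random linear maps as stand-ins for the trained first and second AI models and a 1-bit sign quantizer as a stand-in for the quantization step; the dimensions and the code itself are hypothetical and only show the direction of the data flow.
```python
# Illustrative sketch only (hypothetical dimensions, untrained stand-in models).
import numpy as np

rng = np.random.default_rng(0)
LAYER_DIM, CSI_DIM, OUT_DIM = 64, 16, 24   # hypothetical sizes

def linear_model(in_dim, out_dim):
    """Stand-in for a trained AI model: a fixed random linear map."""
    W = rng.standard_normal((out_dim, in_dim))
    return lambda x: W @ x

# First encoder: one first AI model per downlink transmission layer (models 11-14).
first_models = [linear_model(LAYER_DIM, CSI_DIM) for _ in range(4)]

# Second encoder: K = 2 second AI models (models 21 and 22).
second_model_21 = linear_model(3 * CSI_DIM, CSI_DIM)   # takes indications 1-3
second_model_22 = linear_model(2 * CSI_DIM, OUT_DIM)   # takes intermediate + indication 4

def encode(layer_channels):
    # Step 1: per-layer compression -> 4 first channel status indications.
    first_csi = [m(h) for m, h in zip(first_models, layer_channels)]
    # Step 2: model 21 jointly compresses the first three indications
    #         into the 1st "fourth" indication.
    fourth_1 = second_model_21(np.concatenate(first_csi[:3]))
    # Step 3: model 22 combines that intermediate result with the 4th indication.
    fourth_2 = second_model_22(np.concatenate([fourth_1, first_csi[3]]))
    # A 1-bit sign quantizer stands in for the quantization performed by the
    # second encoder in solution one.
    return (fourth_2 > 0).astype(np.uint8)   # second channel status indication

bits = encode([rng.standard_normal(LAYER_DIM) for _ in range(4)])
print(bits.shape)   # (24,): far fewer values than 4 * CSI_DIM floats
```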
  • When N is 5, K may be 2 or 3, that is, the second encoder may include 2 second AI models or 3 second AI models.
  • The case where the second encoder includes two second AI models can be understood with reference to the example in which N is 4.
  • The case where the second encoder includes three second AI models can be understood as follows:
  • The three second AI models are denoted as second AI model 21, second AI model 22, and second AI model 23.
  • The input of the second AI model 21 includes the 1st to 3rd first channel status indication information among the 5 first channel status indication information, and the output of the second AI model 21 includes the 1st fourth channel status indication information. The input of the second AI model 22 includes the 4th first channel status indication information among the 5 first channel status indication information and the 1st fourth channel status indication information, and the output of the second AI model 22 includes the 2nd fourth channel status indication information.
  • The input of the second AI model 23 includes the 5th first channel status indication information among the 5 first channel status indication information and the 2nd fourth channel status indication information, and the output of the second AI model 23 includes the 3rd fourth channel status indication information.
  • the third fourth channel status indication information output by the second AI model 23 is the second channel status indication information.
  • Or, the input of the second AI model 21 includes the 1st first channel status indication information and the 2nd first channel status indication information among the 5 first channel status indication information, and the output of the second AI model 21 includes the 1st fourth channel status indication information.
  • The input of the second AI model 22 includes the 3rd first channel status indication information and the 4th first channel status indication information among the 5 first channel status indication information, and the output of the second AI model 22 includes the 2nd fourth channel status indication information.
  • The input of the second AI model 23 includes the 5th first channel status indication information among the 5 first channel status indication information, the 1st fourth channel status indication information and the 2nd fourth channel status indication information, and the output of the second AI model 23 includes the 3rd fourth channel status indication information.
  • the third fourth channel status indication information output by the second AI model 23 is the second channel status indication information.
  • Or, the input of the second AI model 21 includes the 1st first channel status indication information and the 2nd first channel status indication information among the 5 first channel status indication information, and the output of the second AI model 21 includes the 1st fourth channel status indication information.
  • The input of the second AI model 22 includes the 3rd to 5th first channel status indication information among the 5 first channel status indication information, and the output of the second AI model 22 includes the 2nd fourth channel status indication information.
  • The input of the second AI model 23 includes the 1st fourth channel status indication information and the 2nd fourth channel status indication information, and the output of the second AI model 23 includes the 3rd fourth channel status indication information.
  • the third fourth channel status indication information output by the second AI model 23 is the second channel status indication information.
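  • The grouping of the N first channel status indication information among the K second AI models can be expressed compactly; the helper below is illustrative only and reproduces just one of the partitions described above (the first model takes N-K+1 indications, and each later model takes one further indication together with the previous intermediate output).
```python
# Illustrative only: one of the partitions described above, expressed as index groups.
def cascade_groups(n, k):
    """First second-AI-model gets the first n - k + 1 indications; each later
    model gets one further indication plus the previous intermediate output."""
    head = list(range(n - k + 1))
    tail = [[i] for i in range(n - k + 1, n)]
    return [head] + tail

print(cascade_groups(4, 2))   # [[0, 1, 2], [3]]
print(cascade_groups(5, 3))   # [[0, 1, 2], [3], [4]]
```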
  • the structure of the second encoder can be pre-configured to be one of Mode 21, Mode 22 or Mode 23.
  • The preconfiguration can be implemented using a protocol definition or another defined method. As another example, the preconfiguration can be achieved through negotiation between the terminal device and the access network device.
  • the terminal device can determine the structure of the second encoder by itself according to one of the methods 21, 22, or 23. Further optionally, the terminal device may send second indication information to the access network device, where the second indication information is used to indicate the structure of the second encoder.
  • the second indication information is specifically used to indicate that the structure of the second encoder is Mode 21, Mode 22 or Mode 23.
  • The mapping relationship between different values of N and the structure of the second encoder can also be preconfigured, or it can be described as: the structure of the second encoder corresponds to the value of N.
  • One or more of the model type and model parameters of each second AI model in the second encoder can also be pre-configured; or the terminal device can report to the access network device one or more of the identification, model type or model parameters of the second AI model used by the terminal device.
  • For example, when the second indication information indicates mode 21, it may also specifically indicate one or more of the identification, model type or model parameters of the second AI model in the second encoder in mode 21.
  • S504 The terminal device sends the second channel status indication information to the access network device.
  • It may be predefined that the terminal device feeds back twice-compressed CSI to the access network device, or the terminal device and the access network device may negotiate in advance to feed back twice-compressed CSI. Then, when the access network device receives the second channel status indication information, it may determine that the second channel status indication information needs to be decompressed twice, and further executes S505 to S506.
  • the terminal device may also send information indicating the rank, such as RI, and the value of the foregoing downlink transmission layer number N is equal to the value of the rank. Or it can be understood that RI is used to indicate the value of N.
  • If the RI indicates that the value of N is greater than 1, the access network device can also learn, based on the RI, that the second channel status indication information is obtained by the terminal device through two compressions.
  • Such a design uses existing information to additionally define or indicate other content, which can improve information utilization.
  • the structure of the aforementioned second encoder corresponds to the value of N.
  • the structure of the second decoder matched by the second encoder also corresponds to the value of N.
  • the structure of the second autoencoder to which the second encoder belongs may correspond to the value of N.
  • the access network device may determine one or more of the structures of the second encoder, the second decoder, or the second autoencoder according to the value of N.
  • the structure of the first encoder may also correspond to the value of N.
  • the structure of the first decoder matched by the first encoder also corresponds to the value of N.
  • the structure of the first autoencoder to which the first encoder belongs may also correspond to the value of N.
  • the access network device may determine one or more of the structures of the first encoder, the first decoder, or the first autoencoder according to the value of N.
  • Optionally, separate information can be defined, for example denoted as the second information.
  • the second information indicates one or more of the following: a second encoder, a second autoencoder AE to which the second encoder belongs, and a second decoder matching the second encoder.
  • the terminal device may also send the second information.
  • the access network device may determine the second decoder matching the second encoder according to the second information.
  • the second information may include one or more of the model identification, model type, and model parameters of the second encoder.
  • the second information may include the second indication information described in S503.
  • the second indication information indicates that the structure of the second encoder is one of Mode 21, Mode 22, and Mode 23.
  • The model type and model parameters of the second AI model included in the second encoder described in each mode are preconfigured.
  • the access network device may determine the structure of the second encoder according to the second indication information.
  • the second information may also include information indicating the value of N.
  • Similarly, the first information can also be defined separately, and the first information indicates one or more of the following: the first encoder, the first autoencoder AE to which the first encoder belongs, and the first decoder that matches the first encoder. Then the access network device can determine a decoder matching the first encoder according to the first information.
  • the first information may include one or more of the model identification, model type, and model parameters of the first encoder.
  • the first information may include the first indication information described in S502. The first indication information indicates that the structure of the first encoder is one of Mode 11, Mode 12, and Mode 13.
  • The model structure and model parameters of the first AI model included in the first encoder described in each mode are preconfigured. Then the access network device can determine the structure of the first encoder according to the first indication information.
  • the first information may also include information indicating the value of N.
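  • For illustration, the content of such first information or second information could be carried in a structure like the following; the field names are hypothetical and not taken from the specification.
```python
# Hypothetical container for the reported information; not a defined format.
from dataclasses import dataclass
from typing import Optional

@dataclass
class EncoderInfo:
    model_id: Optional[str] = None        # model identification
    model_type: Optional[str] = None      # e.g. the type of neural network used
    model_params: Optional[bytes] = None  # serialized model parameters
    mode: Optional[str] = None            # e.g. "21", "22" or "23" (the indication information)
    num_layers: Optional[int] = None      # the value of N, if it is included

# Example: second information indicating mode 22 and N = 4.
second_information = EncoderInfo(mode="22", num_layers=4)
print(second_information)
```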
  • the access network device uses the second decoder and the second channel status indication information input to the second decoder to determine N pieces of first channel status indication information.
  • the second decoder is a decoder that matches the second encoder described in S503, and the implementation of the second decoder can be understood as the reverse process of the second encoder.
  • The access network device may determine the second decoder based on the information sent by the terminal device or the relevant configuration of the second encoder, and then the access network device inputs the received second channel status indication information to the second decoder to obtain N pieces of first channel status indication information. It can be understood that, as an example, in solution one the second decoder has dequantization and decompression functions, and the processing performed by the access network device on the second channel status indication information using the second decoder includes dequantization and decompression.
  • the second decoder can be trained.
  • Three structures of the second encoder are described corresponding to S503, and the second decoder may also have three corresponding structures.
  • three possible structures of the second decoder are illustrated below through Mode 31 to Mode 33. Among them, mode 31 matches the second encoder described in mode 21, mode 32 matches the second encoder described in mode 22, and mode 33 matches the second encoder described in mode 23.
  • the second decoder includes a third AI model.
  • the input of the second decoder includes second channel status indication information obtained by the access network device, and the output of the second decoder includes N pieces of first channel status indication information.
  • The access network device uses a second decoder including a third AI model to dequantize and decompress the second channel status indication information to obtain N pieces of first channel status indication information.
  • the second decoder may include multiple third AI models, and the number of third AI models in the second decoder is related to the number N of downlink transmission layers.
  • the second decoder may include N-1 third AI models.
  • The input of the (N-1)th third AI model among the N-1 third artificial intelligence AI models includes the second channel status indication information, and its output includes the (N-2)th fourth channel status indication information and the Nth first channel status indication information among the N first channel status indication information.
  • The input of the (N-i)th third artificial intelligence AI model includes the (N-i)th fourth channel status indication information, and its output includes the (N-i-1)th fourth channel status indication information and the (N-i+1)th first channel status indication information among the N first channel status indication information, where i takes a positive integer ranging from 2 to N-2.
  • The input of the 1st third AI model includes the 1st fourth channel status indication information, and its output includes the 1st first channel status indication information and the 2nd first channel status indication information among the N first channel status indication information.
  • It can be understood that neither N-i nor N-i-1 can be 0, and the second channel status indication information is the (N-1)th fourth channel status indication information.
  • For example, when N is 3, the second decoder includes the 2nd third AI model and the 1st third AI model.
  • The input of the 2nd third AI model includes the 2nd fourth channel status indication information, that is, the second channel status indication information, and the output of the 2nd third AI model includes the 3rd first channel status indication information and the 1st fourth channel status indication information.
  • The input of the 1st third AI model includes the 1st fourth channel status indication information, and the output of the 1st third AI model includes the 1st first channel status indication information and the 2nd first channel status indication information.
  • Or, it can be described as follows: the input of the (N-1)th third AI model among the N-1 third AI models includes the second channel status indication information obtained by the access network device, and the output of the (N-1)th third AI model includes the (N-2)th fourth channel status indication information and the Nth first channel status indication information.
  • Assuming that i represents the positive integers from 2 to N-2, the input of the (N-i)th third AI model includes the (N-i)th fourth channel status indication information, and the output of the (N-i)th third AI model includes the (N-i-1)th fourth channel status indication information and the (N-i+1)th first channel status indication information among the N first channel status indication information.
  • The input of the 1st third AI model includes the 1st fourth channel status indication information, and the output of the 1st third AI model includes the 1st first channel status indication information and the 2nd first channel status indication information.
  • In conjunction with FIG. 6B, a process is also illustrated in which the access network device uses a second decoder to dequantize and decompress the second channel status indication information to obtain four pieces of first channel status indication information.
  • The second decoder includes three third AI models which, from the 3rd to the 1st, are sequentially referred to as the third AI model 23, the third AI model 22 and the third AI model 21. It can be understood that the third AI model 23 matches the second AI model 23, the third AI model 22 matches the second AI model 22, and the third AI model 21 matches the second AI model 21.
  • The input of the third AI model 23 includes the second channel status indication information, and the output of the third AI model 23 includes the 2nd fourth channel status indication information and the 4th first channel status indication information; the input of the third AI model 22 includes the 2nd fourth channel status indication information, and the output of the third AI model 22 includes the 1st fourth channel status indication information and the 3rd first channel status indication information; the input of the third AI model 21 includes the 1st fourth channel status indication information, and the output of the third AI model 21 includes the 1st first channel status indication information and the 2nd first channel status indication information.
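  • The chained decoding of Figure 6B can be sketched as follows; random linear maps stand in for the trained third AI models, the output split is done by simple slicing, and dequantization is omitted, so this only illustrates the direction of the data flow under assumed dimensions.
```python
# Illustrative sketch of the decoding chain (hypothetical dimensions, untrained maps).
import numpy as np

rng = np.random.default_rng(1)
CSI_DIM, FEEDBACK_DIM = 16, 24

def linear_model(in_dim, out_dim):
    W = rng.standard_normal((out_dim, in_dim))
    return lambda x: W @ x

# Third AI models 23, 22, 21: each recovers one or two first indications plus
# the next intermediate "fourth" indication.
third_23 = linear_model(FEEDBACK_DIM, 2 * CSI_DIM)   # -> 4th first CSI, 2nd fourth CSI
third_22 = linear_model(CSI_DIM, 2 * CSI_DIM)        # -> 3rd first CSI, 1st fourth CSI
third_21 = linear_model(CSI_DIM, 2 * CSI_DIM)        # -> 1st and 2nd first CSI

def second_decode(second_csi):
    out = third_23(second_csi)
    first_4, fourth_2 = out[:CSI_DIM], out[CSI_DIM:]
    out = third_22(fourth_2)
    first_3, fourth_1 = out[:CSI_DIM], out[CSI_DIM:]
    out = third_21(fourth_1)
    first_1, first_2 = out[:CSI_DIM], out[CSI_DIM:]
    return [first_1, first_2, first_3, first_4]

recovered = second_decode(rng.standard_normal(FEEDBACK_DIM))
print(len(recovered))   # 4 first channel status indications
```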
  • the second decoder may include K third AI models.
  • K is a positive integer greater than 1 and less than N-1, for example, K is 2. It can be understood that each of the K third AI models is used to restore part of the N first channel state indication information. The input and output of the K third AI models are described in detail below.
  • For example, when N is 4, K may be 2.
  • the second decoder may include two third AI models, denoted as the third AI model 21 and the third AI model 22.
  • the input of the third AI model 22 includes second channel status indication information
  • the output of the third AI model 22 includes the 1st fourth channel status indication information and the Nth first channel status indication information.
  • the input of the third AI model 21 includes the first fourth channel status indication information
  • the output of the third AI model 21 includes the first N-1 first channel status indication information among the N first channel status indication information.
  • In conjunction with FIG. 6C, a process is also illustrated in which the access network device uses the second decoder to dequantize and decompress the second channel status indication information to obtain four pieces of first channel status indication information.
  • The second decoder includes two third AI models which, from the 2nd to the 1st, are sequentially denoted as the third AI model 22 and the third AI model 21. It can be understood that the third AI model 22 matches the second AI model 22, and the third AI model 21 matches the second AI model 21.
  • The input of the third AI model 22 includes the second channel status indication information, and the output of the third AI model 22 includes the 1st fourth channel status indication information and the 4th first channel status indication information; the input of the third AI model 21 includes the 1st fourth channel status indication information, and the output of the third AI model 21 includes the 1st first channel status indication information, the 2nd first channel status indication information, and the 3rd first channel status indication information.
  • Or, the input of the third AI model 22 includes the second channel status indication information, and the output of the third AI model 22 includes the 1st fourth channel status indication information and the 3rd to Nth first channel status indication information among the N first channel status indication information.
  • The input of the third AI model 21 includes the 1st fourth channel status indication information, and the output of the third AI model 21 includes the first two first channel status indication information among the N pieces of first channel status indication information, that is, the 1st first channel status indication information and the 2nd first channel status indication information.
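  • The K = 2 mirror of Figure 6C can be sketched in the same spirit; again the linear maps, dimensions and slicing are illustrative stand-ins, and dequantization is omitted.
```python
# Illustrative sketch of the K = 2 case (hypothetical dimensions, untrained maps).
import numpy as np

rng = np.random.default_rng(2)
CSI_DIM, FEEDBACK_DIM = 16, 24

def linear_model(in_dim, out_dim):
    W = rng.standard_normal((out_dim, in_dim))
    return lambda x: W @ x

third_22 = linear_model(FEEDBACK_DIM, 2 * CSI_DIM)   # -> 1st fourth CSI, 4th first CSI
third_21 = linear_model(CSI_DIM, 3 * CSI_DIM)        # -> first three first CSIs

def second_decode_k2(second_csi):
    out = third_22(second_csi)
    fourth_1, first_4 = out[:CSI_DIM], out[CSI_DIM:]
    head = third_21(fourth_1)
    return [head[:CSI_DIM], head[CSI_DIM:2 * CSI_DIM], head[2 * CSI_DIM:], first_4]

print(len(second_decode_k2(rng.standard_normal(FEEDBACK_DIM))))   # 4
```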
  • When N is 5, K may be 2 or 3, that is, the second decoder may include 2 third AI models or 3 third AI models.
  • The case where the second decoder includes two third AI models can be understood with reference to the example in which N is 4.
  • The case where the second decoder includes three third AI models can be understood as follows:
  • the three third AI models are denoted as third AI model 23, third AI model 22, and third AI model 21.
  • the input of the third AI model 23 includes the second channel status indication information
  • the output of the third AI model 23 includes the 5th first channel status indication information among the 5 first channel status indication information and the 2nd fourth channel status indication information.
  • The input of the third AI model 22 includes the 2nd fourth channel status indication information, and the output of the third AI model 22 includes the 4th first channel status indication information among the 5 first channel status indication information and the 1st fourth channel status indication information.
  • The input of the third AI model 21 includes the 1st fourth channel status indication information, and the output of the third AI model 21 includes the 1st to 3rd first channel status indication information among the 5 first channel status indication information.
  • the input of the third AI model 23 includes second channel status indication information
  • the output of the third AI model 23 includes the 5th first channel status indication information among the 5 first channel status indication information, the 2nd fourth channel status indication information and the 1st fourth channel status indication information.
  • the input of the third AI model 22 includes the second fourth channel status indication information, and the output of the third AI model 22 includes the third first channel status indication information and the fourth first channel status indication information among the five first channel status indication information.
  • the input of the third AI model 21 includes the first fourth channel status indication information
  • the output of the third AI model 21 includes the 1st first channel status indication information and the 2nd first channel status indication information among the 5 first channel status indication information.
  • the input of the third AI model 23 includes the second channel status indication information
  • the output of the third AI model 23 includes the 1st fourth channel status indication information and the 2nd fourth channel status indication information.
  • the input of the third AI model 22 includes the second fourth channel status indication information, and the output of the third AI model 22 includes the third to fifth first channel status indication information among the five first channel status indication information.
  • the input of the third AI model 21 includes the 1st fourth channel status indication information, and the output of the third AI model 21 includes the 1st first channel status indication information and the 2nd first channel status indication information among the 5 first channel status indication information.
  • The second decoder is also preconfigured to the corresponding structure. For example, if the structure of the second encoder is preconfigured as mode 21, then the structure of the second decoder is also preconfigured as mode 31; if the structure of the second encoder is preconfigured as mode 22, then the structure of the second decoder is also preconfigured as mode 32; and if the structure of the second encoder is preconfigured as mode 23, then the structure of the second decoder is also preconfigured as mode 33. Further optionally, in this design, the second decoder corresponds to the number of downlink transmission layers.
  • For example, if the number of downlink transmission layers is predefined, the terminal device does not need to indicate the value of N, and the access network device can still know the value of N; then the access network device can obtain the second decoder by combining the preconfigured mode and the number N of downlink transmission layers. Or, corresponding to the description in S504, if the terminal device sends information indicating the rank to the access network device, the access network device can determine the number N of downlink transmission layers according to the value of the rank, and then the access network device can obtain the second decoder by combining the preconfigured mode and the number N of downlink transmission layers.
  • Optionally, the access network device may determine the number of input ports of the second encoder and the number of output ports of the second decoder based on the value of N.
  • the second encoder includes a second AI model. Different downlink transmission layer numbers correspond to different second AI models.
  • If the model structure and model parameters of the second AI model are preconfigured, the access network device can determine the corresponding second AI model according to the value of N.
  • the second decoder includes a third AI model. Different downlink transmission layer numbers correspond to different third AI models.
  • The access network device can determine the corresponding third AI model according to the value of N.
  • Optionally, the access network device may determine, based on the value of N, the number of second AI models included in the second encoder and the number of third AI models included in the second decoder. If the model type and model parameters of each second AI model are pre-configured, the access network device can determine the second encoder and the second decoder according to the value of N.
  • For example, the access network device can determine the number K of second AI models included in the second encoder based on the value of N. If the model types and model parameters of the K second AI models are pre-configured, the structure of the second encoder can be determined, and accordingly the value of K is used to determine the second decoder that matches the second encoder.
  • The access network device may determine the second decoder according to the second information. For example, when the second information includes the second indication information, the access network device may determine that the structure of the second decoder is mode 31, mode 32 or mode 33 according to the second indication information. Similarly, if the number of downlink transmission layers is predefined, the terminal device does not need to indicate the value of N, and the access network device can still know the value of N; then the access network device can obtain the second decoder by combining the preconfigured mode and the number N of downlink transmission layers.
  • Or, if the terminal device sends information indicating the rank to the access network device, the access network device can determine the number N of downlink transmission layers based on that information, and then the access network device can obtain the second decoder by combining the preconfigured mode and the number N of downlink transmission layers.
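  • One possible (purely illustrative) way for the access network device to map the reported rank to a preconfigured second-decoder structure is shown below; the table contents are assumptions, since the actual mapping is a matter of preconfiguration or negotiation.
```python
# Assumed, illustrative preconfiguration; the real mapping is configured or negotiated.
PRECONFIGURED_SECOND_DECODER = {
    2: ("mode 31", 1),   # one third AI model
    3: ("mode 32", 2),   # N - 1 third AI models
    4: ("mode 33", 2),   # K = 2 third AI models
    5: ("mode 33", 3),   # K = 3 third AI models
}

def second_decoder_structure(rank_indicator: int):
    """RI indicates the value of N; N = 1 needs no second decoder."""
    if rank_indicator <= 1:
        return None
    return PRECONFIGURED_SECOND_DECODER[rank_indicator]

print(second_decoder_structure(4))   # ('mode 33', 2)
```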
  • the access network device uses the first decoder and the N first channel status indication information input to the first decoder to determine the channel information of the N downlink transmission layers.
  • The first decoder is a decoder that matches the first encoder described in S502, and the implementation of the first decoder can be understood as the reverse process of the first encoder.
  • The access network device may determine the first decoder according to the information sent by the terminal device or the relevant configuration of the first encoder, and then the access network device inputs the N first channel status indication information determined in S505 to the first decoder to obtain the channel information of N downlink transmission layers. It can be understood that, as an example, in solution one the first decoder has dequantization and decompression functions, and the processing performed by the access network device on the N first channel status indication information using the first decoder includes dequantization and decompression.
  • the first decoder can be trained.
  • Three structures of the first encoder are described corresponding to S502, and the first decoder also has corresponding three structures.
  • Three possible structures of the first decoder are illustrated below through mode 41 to mode 43. Among them, mode 41 matches the first encoder described in mode 11, mode 42 matches the first encoder described in mode 12, and mode 43 matches the first encoder described in mode 13.
  • the first decoder includes a fourth AI model.
  • The access network device can use the fourth AI model to process the N first channel status indication information and recover the channel information of the N downlink transmission layers.
  • the access network device inputs N pieces of first channel status indication information to the fourth AI model, and correspondingly, the output of the first decoder includes restored channel information of the N downlink transmission layers.
  • the access network device may input the N pieces of first channel status indication information into the fourth AI model in parallel or serially, which is not limited by this disclosure.
  • Optionally, different values of N correspond to different first decoders, that is, different fourth AI models.
  • In other words, the fourth AI model included in the first decoder is associated with the value of N, and different values of N are associated with different fourth AI models.
  • For example, for one value of N, the first decoder is recorded as the decoder 21 or the decoder in the AE model 21; for another value of N, the first decoder is recorded as the decoder 31 or the decoder in the AE model 31.
  • Such a design can enhance the flexibility of AI model application.
  • Alternatively, different numbers of downlink transmission layers can be set to correspond to the same first decoder or to decoders in the same AE model. Such a design can reduce the storage overhead of AI models.
  • The first decoder may include N fourth AI models, and the i-th fourth AI model in the first decoder is used to process the i-th first channel status indication information among the N first channel status indication information to obtain the channel information of the i-th downlink transmission layer among the N downlink transmission layers. It can be understood that the N first channel status indication information is input to the N fourth AI models respectively, the input of the i-th fourth AI model among the N fourth AI models includes the i-th first channel status indication information among the N first channel status indication information, and the output of the i-th fourth AI model includes the channel information of the i-th downlink transmission layer.
  • The N fourth AI models in the first decoder can be the same; or the N fourth AI models have the same model type (such as the neural network used) but different model parameters; or the N fourth AI models have different model types and model parameters. This disclosure does not limit this.
  • the first decoder includes M fourth AI models, where M is an integer greater than 1 and less than N.
  • One fourth AI model can be used to process part (one or more) of the N first channel status indication information; or it can be described as follows: part of the first channel status indication information among the N first channel status indication information may be processed by the same fourth AI model, and that part of the first channel status indication information is input, serially or in parallel, to one fourth AI model among the M fourth AI models.
  • Corresponding to the example in which the first encoder includes four first AI models, Figures 6A, 6B and 6C further illustrate that the first decoder also includes four fourth AI models, that is, the fourth AI model 11 associated with the first AI model 11, the fourth AI model 12 associated with the first AI model 12, the fourth AI model 13 associated with the first AI model 13, and the fourth AI model 14 associated with the first AI model 14.
  • The input of the fourth AI model 11 includes the 1st first channel status indication information, and the output of the fourth AI model 11 includes the channel information of the downlink transmission layer 1.
  • The input of the fourth AI model 12 includes the 2nd first channel status indication information, and the output of the fourth AI model 12 includes the channel information of the downlink transmission layer 2.
  • The input of the fourth AI model 13 includes the 3rd first channel status indication information, and the output of the fourth AI model 13 includes the channel information of the downlink transmission layer 3.
  • The input of the fourth AI model 14 includes the 4th first channel status indication information, and the output of the fourth AI model 14 includes the channel information of the downlink transmission layer 4.
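  • The per-layer operation of the first decoder can be sketched as follows, with random linear maps standing in for the trained fourth AI models and hypothetical dimensions.
```python
# Illustrative sketch only (hypothetical dimensions, untrained stand-in models).
import numpy as np

rng = np.random.default_rng(3)
CSI_DIM, LAYER_DIM, N = 16, 64, 4

def linear_model(in_dim, out_dim):
    W = rng.standard_normal((out_dim, in_dim))
    return lambda x: W @ x

# One fourth AI model per downlink transmission layer (models 11-14).
fourth_models = [linear_model(CSI_DIM, LAYER_DIM) for _ in range(N)]

def first_decode(first_csi_list):
    # The i-th fourth AI model processes only the i-th first indication.
    return [m(csi) for m, csi in zip(fourth_models, first_csi_list)]

layers = first_decode([rng.standard_normal(CSI_DIM) for _ in range(N)])
print(len(layers), layers[0].shape)   # 4 layers, each with LAYER_DIM entries
```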
  • The first decoder is also preconfigured to the corresponding structure. For example, if the structure of the first encoder is preconfigured as mode 11, then the structure of the first decoder is also preconfigured as mode 41; if the structure of the first encoder is preconfigured as mode 12, then the structure of the first decoder is also preconfigured as mode 42; and if the structure of the first encoder is preconfigured as mode 13, then the structure of the first decoder is also preconfigured as mode 43. Further optionally, in this design, the first decoder corresponds to the number of downlink transmission layers.
  • For example, if the number of downlink transmission layers is predefined, the terminal device does not need to indicate the value of N, and the access network device can still know the value of N; then the access network device can obtain the first decoder by combining the preconfigured mode and the number N of downlink transmission layers. Or, corresponding to the description in S504, if the terminal device sends information indicating the rank to the access network device, the access network device can determine the number N of downlink transmission layers according to the value of the rank, and then the access network device can obtain the first decoder by combining the preconfigured mode and the number N of downlink transmission layers.
  • Optionally, the access network device may determine, according to the value of N, the quantity of input data of the first encoder and the quantity of output data of the first decoder.
  • the first encoder includes a first AI model, and different downlink transmission layer numbers correspond to different first AI models.
  • If the model structure and model parameters of the first AI model are preconfigured, the access network device can determine the corresponding first AI model according to the value of N.
  • the first decoder includes a fourth AI model. Different downlink transmission layer numbers correspond to different fourth AI models.
  • The access network device can determine the corresponding fourth AI model according to the value of N.
  • Optionally, the access network device may determine, based on the value of N, the number of first AI models included in the first encoder; if the model type and model parameters of each first AI model are pre-configured, the access network device can obtain the first encoder according to the value of N. The access network device can also determine, according to the value of N, the number of fourth AI models included in the first decoder; if the model type and model parameters of each fourth AI model are pre-configured, the access network device can obtain the first decoder according to the value of N.
  • For example, if the structure of the first encoder is pre-configured as mode 13 and M is N-2, the access network device can determine, based on the value of N, the number M of first AI models contained in the first encoder and the number M of fourth AI models contained in the first decoder. If the model types and model parameters of the M first AI models and the M fourth AI models are preconfigured, the access network device can obtain the first encoder and the first decoder according to the value of M.
  • The access network device may integrate the recovered channel information of the N downlink transmission layers to recover the foregoing downlink channel information estimated based on downlink reference signal measurement.
  • a multi-level AI model is used to compress downlink channel information twice, reducing the resources occupied by common information between multi-layer CSI, and reducing the feedback overhead of CSI.
  • the method mainly includes the following processes.
  • the terminal device determines the channel information of N downlink transmission layers, where N is a positive integer.
  • the terminal device uses the first encoder and the channel information of N downlink transmission layers input to the first encoder to determine N pieces of first channel status indication information.
  • Optionally, if the number N of the aforementioned downlink transmission layers is equal to 1, the terminal device can send the N first channel status indication information to the access network device; or, if N is equal to 1 and the first encoder only has a compression function, the terminal device can quantize the N first channel status indication information and then send it to the access network device. Or, if the number N of downlink transmission layers is greater than 1, after completing S702, the terminal device further performs the following S703 and S704:
  • the terminal device uses the second encoder and the N first channel status indication information input to the second encoder to determine the third channel status indication information; wherein the third channel status indication information corresponds to the N downlink transmission layers.
  • the second encoder has a compression (encoding) function but does not have a quantization function.
  • the terminal device may use the second encoder to compress the N pieces of first channel status indication information to obtain third channel status indication information.
  • the input of the second encoder includes N pieces of first channel status indication information
  • the output includes third channel status indication information.
  • The third channel status indication information obtained in S703 is a compression of the N pieces of first channel status indication information obtained in S702.
  • the third channel status indication information can also be understood as a sequence, and the length of the sequence corresponding to the third channel status indication information is less than the sum of the sequence lengths corresponding to the N pieces of first channel status indication information.
  • Since the second encoder in this solution has a compression function but does not have a quantization function, the processing of the N first channel status indication information includes compression, and the sequence corresponding to the third channel status indication information may specifically be a floating point number sequence.
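  • The split between compression and quantization in this solution can be sketched as follows; a random linear map stands in for the trained second AI model, and the dimensions are hypothetical.
```python
# Illustrative sketch only: compression without quantization (hypothetical sizes).
import numpy as np

rng = np.random.default_rng(4)
CSI_DIM, N, THIRD_DIM = 16, 4, 24          # THIRD_DIM < N * CSI_DIM

W = rng.standard_normal((THIRD_DIM, N * CSI_DIM))   # stand-in second AI model

def second_encode(first_csi_list):
    x = np.concatenate(first_csi_list)     # N first indications, concatenated
    return W @ x                           # third channel status indication (floats)

third_csi = second_encode([rng.standard_normal(CSI_DIM) for _ in range(N)])
print(third_csi.dtype, third_csi.size)     # float64 24 (shorter than 64 floats in)
```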
  • the second encoder can be trained.
  • the structure of the second encoder may refer to any one of the following methods 21-1, 22-1, or 23-1.
  • the second encoder includes a second AI model.
  • the terminal device can use the second AI model to compress the N pieces of first channel status indication information to obtain third channel status indication information. It can be understood that the input of the second AI model includes N pieces of first channel status indication information, and the output of the second AI model includes third channel status indication information.
  • Figure 8A illustrates a schematic diagram of the encoding process.
  • the description of Figure 8A may refer to the description of Figure 6A, which is different from Figure 6A only in that the second encoder does not have a quantization function and its output is third channel status indication information.
  • the third channel status indication information has not been quantized, and the second channel status indication information is information after quantization.
  • the second encoder may include multiple second AI models, and the number of second AI models in the second encoder is related to the number N of downlink transmission layers.
  • Figure 8B illustrates a schematic diagram of the encoding process. The description of Figure 8B may refer to the description of Figure 6B. It is different from Figure 6B only in that the second encoder does not have a quantization function and its output is third channel status indication information. The third channel status indication information has not been quantized. The second channel status indication information is information after quantization processing.
  • the second encoder may include K second AI models.
  • K is a positive integer greater than 1 and less than N-1, for example, K is 2. It can be understood that part of the first channel state indication information among the N first channel state indication information is input in parallel to one first artificial intelligence AI model among the K second artificial intelligence AI models.
  • Figure 8C shows a schematic diagram of the encoding process.
  • the description of Figure 8C may refer to the description of Figure 6C. It is different from Figure 6C only in that the second encoder does not have a quantization function and its output is third channel status indication information.
  • the third channel status indication information has not been quantized.
  • the second channel status indication information is information after quantization processing.
  • S704 The terminal device performs quantization processing on the third channel status indication information to obtain the second channel status indication information.
  • the terminal device may use a quantizer to perform quantization processing on the third channel status indication information to obtain the second channel status indication information.
  • the quantizer can be one of the following: a non-uniform quantizer, a scalar quantizer, a vector quantizer, a quantizer designed based on experience, or a quantizer obtained through AI training.
  • The quantizer can be trained together with the encoder (such as the first encoder or the second encoder) or with the related decoder or autoencoder AE.
  • Quantization processing can also be called quantization operation. Quantization processing can be understood as finding the item closest to the quantity to be quantified in the quantization dictionary and outputting the index of the item.
  • the quantity to be quantized is the third channel state indication information
  • the index output by querying the quantization dictionary contains the quantized information of the third channel state indication information, that is, the second channel state indication information.
  • the third channel status indication information is a floating point number sequence
  • the second channel status indication information obtained after quantization may be a bit sequence.
  • Figure 8A, Figure 8B and Figure 8C further illustrate a process in which the terminal device performs quantization processing on the third channel status indication information based on the quantizer to obtain the second channel status indication information, where the input of the quantizer includes the third channel status indication information, and the output of the quantizer includes the second channel status indication information.
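  • A toy quantizer in the spirit described above is sketched below: each sub-vector of the floating point sequence is looked up in a quantization dictionary (codebook) and the index of the nearest entry is output as bits; the codebook size, the sub-vector length and the use of a vector quantizer are illustrative choices only. The matching dequantizer used later in S706 performs the inverse lookup.
```python
# Toy vector quantizer; codebook size and sub-vector length are illustrative only.
import numpy as np

rng = np.random.default_rng(5)
SUB_DIM, CODEBOOK_SIZE = 4, 16             # 16 entries -> 4 bits per sub-vector
codebook = rng.standard_normal((CODEBOOK_SIZE, SUB_DIM))   # the quantization dictionary

def quantize(third_csi):
    """Float sequence -> bit sequence (the second channel status indication)."""
    bits = []
    for sub in third_csi.reshape(-1, SUB_DIM):
        idx = int(np.argmin(np.linalg.norm(codebook - sub, axis=1)))  # nearest entry
        bits.extend(int(b) for b in format(idx, "04b"))               # output its index
    return np.array(bits, dtype=np.uint8)

def dequantize(bits):
    """Inverse lookup, as used by the access network device in S706."""
    idxs = [int("".join(str(b) for b in bits[i:i + 4]), 2) for i in range(0, len(bits), 4)]
    return codebook[idxs].reshape(-1)

third_csi = rng.standard_normal(24)
second_csi = quantize(third_csi)           # 24 floats -> 24 bits in this toy setup
recovered = dequantize(second_csi)         # approximation of third_csi
print(second_csi.size, recovered.shape)    # 24 (24,)
```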
  • Optionally, the terminal device may also introduce quantization of the N first channel status indication information in S702, and then the N first channel status indication information used by the terminal device in S703 is the quantized information.
  • For example, when N is greater than 1, the terminal device can use N quantizers to respectively quantize the N pieces of first channel status indication information in S702, and the N quantizers correspond to the N pieces of first channel status indication information one-to-one.
  • the N quantizers can be the same or different.
  • the quantizer used by the terminal device can be pre-configured, for example, the pre-configuration is implemented in a protocol-defined manner, or the terminal device and the access network device pre-negotiate a quantizer.
  • the terminal device can decide the quantizer to be used by itself and send information indicating the quantizer to be used to the access network device.
  • S705 The terminal device sends the second channel status indication information to the access network device.
  • S706 The access network device dequantizes the second channel status indication information to obtain the third channel status indication information.
  • FIG. 8A, FIG. 8B, and FIG. 8C also illustrate a process in which the access network device uses a dequantizer to dequantize the second channel status indication information to obtain the third channel status indication information.
  • the dequantizer is used in conjunction with the quantizer.
  • the dequantizer can be understood as the inverse operation of the quantizer.
  • the input of the dequantizer includes the second channel status indication information, and the output of the dequantizer includes the third channel status indication information.
  • the dequantizer is also preconfigured.
  • The access network device can determine the matching dequantizer based on the information indicating the quantizer.
  • the access network device uses the second decoder and the third channel status indication information input to the second decoder to determine N pieces of first channel status indication information.
  • the second decoder is a decoder that matches the second encoder described in S703, and the implementation of the second decoder can be understood as the reverse process of the second encoder.
  • The access network device may determine the second decoder based on the information sent by the terminal device or the relevant configuration of the second encoder, and then the access network device inputs the received third channel status indication information to the second decoder to obtain N pieces of first channel status indication information. It can be understood that, as an example, in solution two the second decoder only has a decompression function, and the processing performed by the access network device on the third channel status indication information using the second decoder includes decompression.
  • the second decoder can be trained.
  • Three structures of the second encoder are described corresponding to S703, and the second decoder may also have three corresponding structures.
  • three possible structures of the second decoder are illustrated below through methods 31-1 to 33-1. Among them, mode 31-1 matches the second encoder described in mode 21-1, mode 32-1 matches the second encoder described in mode 22-1, and mode 33-1 matches the second encoder described in mode 23-1. device matches.
  • the second decoder includes a third AI model.
  • the input of the second decoder includes the third channel status indication information determined by the access network device in S706, and the output of the second decoder includes N pieces of first channel status indication information.
  • In conjunction with FIG. 8A, a process is also illustrated in which the access network device uses the second decoder to decompress the third channel status indication information to obtain N pieces of first channel status indication information.
  • the second decoder may include multiple third AI models, and the number of third AI models in the second decoder is related to the number N of downlink transmission layers.
  • In conjunction with FIG. 8B, a process is also illustrated in which the access network device uses the second decoder to decompress the third channel status indication information to obtain four pieces of first channel status indication information. The description may refer to Figure 6B; it differs from Figure 6B only in that, in Figure 6B, the second channel status indication information is dequantized and decompressed, whereas here only the third channel status indication information is decompressed.
  • the second decoder may include K third AI models.
  • K is a positive integer greater than 1 and less than N-1, for example, K is 2.
  • In conjunction with FIG. 8C, a process is also illustrated in which the access network device uses the second decoder to decompress the third channel status indication information to obtain four pieces of first channel status indication information; the description may refer to Figure 6C.
  • the access network device uses the first decoder and the N first channel status indication information input to the first decoder to determine the channel information of the N downlink transmission layers.
  • Corresponding to the example in which the first encoder includes four first AI models, Figures 8A, 8B and 8C further illustrate that the first decoder also includes four fourth AI models, that is, the fourth AI model 11 associated with the first AI model 11, the fourth AI model 12 associated with the first AI model 12, the fourth AI model 13 associated with the first AI model 13, and the fourth AI model 14 associated with the first AI model 14.
  • The input of the fourth AI model 11 includes the 1st first channel status indication information, and the output of the fourth AI model 11 includes the channel information of the downlink transmission layer 1.
  • The input of the fourth AI model 12 includes the 2nd first channel status indication information, and the output of the fourth AI model 12 includes the channel information of the downlink transmission layer 2.
  • The input of the fourth AI model 13 includes the 3rd first channel status indication information, and the output of the fourth AI model 13 includes the channel information of the downlink transmission layer 3.
  • The input of the fourth AI model 14 includes the 4th first channel status indication information, and the output of the fourth AI model 14 includes the channel information of the downlink transmission layer 4.
  • the access network device may integrate the recovered channel information of N downlink transmission layers to recover the foregoing downlink channel information estimated based on downlink reference signal measurements.
  • the multi-level AI model is used to compress the downlink channel information twice to reduce the resources occupied by the common information between multi-layer CSI, and the compressed channel status indication information is quantized, which can reduce the CSI Feedback overhead.
  • For the encoder and the decoder that are matched and used, there is no limitation: the encoder may have the quantization function and the decoder may have the dequantization function; or the encoder may not have the quantization function and the decoder may not have the dequantization function; or the encoder may have the quantization function and the decoder may not have the dequantization function; or the encoder may not have the quantization function and the decoder may have the dequantization function.
  • the above solutions 1 and 2 can be implemented together.
  • For example, in the second encoder and the second decoder that are matched and used, the second encoder has the quantization function and the second decoder does not have the dequantization function. In this case, S501 to S504 in solution one can be executed first, and then S706 to S708 in solution two are executed.
  • Another example is that, in the second encoder and the second decoder that are matched and used, the second encoder does not have the quantization function and the second decoder has the dequantization function. In this case, S701 to S705 in solution two can be executed first, and then S505 to S506 in solution one are executed.
  • the functions of the first encoder and the second encoder in Figure 5 or Figure 7 are implemented by the same AI model W1 , and/or, the functions of the first decoder and the second decoder are implemented by the same AI model W2 corresponding to the AI model W1.
  • AI model W1 and AI model W2 are used in conjunction, and they are trained together.
  • the specific training process is similar to the described joint training of the first encoder, the second encoder, the first decoder, and the second decoder, and will not be described again here.
  • The input of AI model W1 is the channel information of N downlink transmission layers, and the output is the second channel status indication information.
  • The input of AI model W2 is the second channel status indication information, and the output is the channel information of N downlink transmission layers.
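  • As a sketch of this single-model alternative, AI model W1 below maps the channel information of all N layers directly to the second channel status indication information and AI model W2 maps it back; plain linear maps are used purely for illustration, whereas in the above description W1 and W2 are trained jointly.
```python
# Illustrative sketch: a single model pair in place of the two-stage encoders/decoders.
import numpy as np

rng = np.random.default_rng(6)
LAYER_DIM, N, FEEDBACK_DIM = 64, 4, 24

W1 = rng.standard_normal((FEEDBACK_DIM, N * LAYER_DIM))   # stand-in for AI model W1
W2 = rng.standard_normal((N * LAYER_DIM, FEEDBACK_DIM))   # stand-in for AI model W2

def model_w1(layer_channels):
    # Channel information of N layers -> second channel status indication.
    return W1 @ np.concatenate(layer_channels)

def model_w2(second_csi):
    # Second channel status indication -> channel information of N layers.
    return (W2 @ second_csi).reshape(N, LAYER_DIM)

second_csi = model_w1([rng.standard_normal(LAYER_DIM) for _ in range(N)])
print(model_w2(second_csi).shape)          # (4, 64)
```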
  • AI model W1 and AI model W2 can be trained for a specific N.
  • the value of N has a corresponding relationship with AI model W1 and AI model W2.
  • For example, the value N1 of N corresponds to AI model W11 and AI model W21, and the value N2 of N corresponds to AI model W12 and AI model W22.
  • the number of values of N corresponds to the number of multiple sets of AI models W1 and AI models W2.
  • Optionally, the terminal device may send information indicating the rank to the access network device, where the value of the rank is the same as the value of N; or the terminal device may send indication information to the access network device to indicate the AI model W1 and/or the AI model W2; or the sequence length range of the second channel status indication information corresponds to the AI model W1 and/or the AI model W2, and/or corresponds to the value of N, that is, the sequence length range of the second channel status indication information may indicate the AI model W1 and/or the AI model W2, and/or the value of N. In this way, when the value of N is variable, the access network device can determine to use the AI model W2 that matches the AI model W1 used by the terminal device to decode the received second channel status indication information.
  • AI model W1 and AI model W2 can be trained for any N, that is, they are not dedicated to a specific N.
  • the value of N may be included in the channel information of the aforementioned N downlink transmission layers.
  • the present disclosure provides a communication device 1000 , which includes a processing module 1001 and a communication module 1002 .
  • The communication device 1000 may be a terminal device, or a communication device applied to or matched with a terminal device, capable of implementing a communication method executed on the terminal device side; or the communication device 1000 may be an access network device, or a communication device applied to or used in conjunction with access network equipment, capable of implementing a communication method executed on the access network equipment side.
  • the communication module may also be called a transceiver module, a transceiver, a transceiver, or a transceiver device, etc.
  • the processing module may also be called a processor, a processing board, a processing unit, or a processing device.
  • the communication module is used to perform sending operations and receiving operations on the terminal device side or the access network device side in the above method.
  • The device used to implement the receiving function in the communication module can be regarded as a receiving unit, and the device used to implement the sending function in the communication module can be regarded as a sending unit, that is, the communication module includes a receiving unit and a sending unit.
  • The processing module 1001 can be used to implement the processing functions of the terminal device in the examples described in Figures 5 to 9, and the communication module 1002 can be used to implement the transceiver functions of the terminal device in the examples described in Figures 5 to 9.
  • the communication device can also be understood with reference to the third aspect and possible designs in the third aspect of the invention.
  • The processing module 1001 can be used to implement the processing functions of the access network equipment in the examples described in Figures 5 to 9, and the communication module 1002 can be used to implement the transceiver functions of the access network equipment in the examples described in Figures 5 to 9.
  • The communication device can also be understood with reference to the fourth aspect of the invention and possible designs in the fourth aspect.
  • the aforementioned communication module and/or processing module can be implemented through a virtual module.
  • the processing module can be implemented through a software functional unit or a virtual device, and the communication module can be implemented through a software functional unit or a virtual device.
  • the processing module or the communication module can also be implemented by a physical device.
  • the communication module can be an input/output circuit and/or a communication interface to perform input operations (corresponding to the aforementioned receiving operations) and output operations (corresponding to the aforementioned sending operations); the processing module is an integrated processor, microprocessor, or integrated circuit.
  • each functional module in each example of this disclosure may be integrated into one processor, may exist physically alone, or two or more modules may be integrated into one module.
  • the above integrated modules can be implemented in the form of hardware or software function modules.
  • the present disclosure also provides a communication device 1100.
  • the communication device 1100 may be a chip or a chip system.
  • the chip system may be composed of chips, or may include chips and other discrete devices.
  • the communication device 1100 can be used to implement the functions of any network element in the communication system described in the foregoing examples.
  • Communication device 1100 may include at least one processor 1110.
  • the processor 1110 is coupled to a memory, and the memory may be located within the device, or the memory may be integrated with the processor, or the memory may be located outside the device.
  • the communication device 1100 may further include at least one memory 1120.
  • the memory 1120 stores the computer programs or instructions and/or data necessary to implement any of the above examples; the processor 1110 may execute the computer program stored in the memory 1120 to complete the method in any of the above examples.
  • the communication device 1100 may also include a communication interface 1130, and the communication device 1100 may interact with other devices through the communication interface 1130.
  • the communication interface 1130 may be a transceiver, a circuit, a bus, a module, a pin, or other types of communication interfaces.
  • the communication interface 1130 in the device 1100 can also be an input/output circuit, which can input information (that is, receive information) and output information (that is, send information).
  • the processor is an integrated processor, a microprocessor, an integrated circuit, or a logic circuit, and the processor can determine output information based on input information.
  • Coupling in this disclosure is an indirect coupling or communication connection between devices, units or modules, which may be in electrical, mechanical or other forms, and is used for information interaction between devices, units or modules.
  • the processor 1110 may cooperate with the memory 1120 and the communication interface 1130.
  • the present disclosure does not limit the specific connection medium between the above-mentioned processor 1110, memory 1120 and communication interface 1130.
  • the processor 1110, the memory 1120 and the communication interface 1130 are connected to each other through a bus 1140.
  • the bus 1140 may be a peripheral component interconnect (PCI) bus or an extended industry standard architecture (EISA) bus, etc.
  • the bus can be divided into an address bus, a data bus, a control bus, etc.; for ease of presentation, only one thick line is used in Figure 11, but this does not mean that there is only one bus or only one type of bus.
  • a processor may be a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component that may implement or execute the methods, steps and logical block diagrams disclosed in the present disclosure.
  • a general-purpose processor may be a microprocessor or any conventional processor, etc.
  • the steps of the method disclosed in conjunction with the present disclosure can be directly embodied as executed by a hardware processor, or executed by a combination of hardware and software modules in the processor.
  • the memory may be a non-volatile memory, such as a hard disk drive (HDD) or a solid-state drive (SSD), or it may be a volatile memory, such as a random-access memory (RAM); the memory may also be, but is not limited to, any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
  • the memory in the present disclosure can also be a circuit or any other device capable of realizing a storage function, used to store program instructions and/or data.
  • the communication device 1100 can be applied to access network equipment.
  • the communication device 1100 can be an access network device, or can be a device that supports the access network equipment in implementing the functions of the access network equipment in any of the above examples.
  • the memory 1120 stores computer programs (or instructions) and/or data that implement the functions of the access network equipment in any of the above examples.
  • the processor 1110 can execute the computer program stored in the memory 1120 to complete the method executed by the access network device in any of the above examples.
  • the communication interface in the communication device 1100 can be used to interact with the terminal equipment, send information to the terminal equipment or receive information from the terminal equipment.
  • the communication device 1100 can be applied to a terminal device.
  • the communication device 1100 can be a terminal device, or can be a device that supports the terminal device in implementing the functions of the terminal device in any of the above examples.
  • the memory 1120 stores computer programs (or instructions) and/or data that implement the functions of the terminal device in any of the above examples.
  • the processor 1110 can execute the computer program stored in the memory 1120 to complete the method executed by the terminal device in any of the above examples.
  • the communication interface in the communication device 1100 can be used to interact with the access network equipment, send information to the access network equipment or receive information from the access network equipment.
  • the communication device 1100 provided in this example can be applied to an access network device to complete the method executed by the access network device, or applied to a terminal device to complete the method executed by the terminal device; for the technical effects that can be obtained, refer to the above method examples, which will not be described again here.
  • the present disclosure provides a communication system, including access network equipment and terminal equipment, wherein the access network equipment and terminal equipment can implement the communication methods provided in the examples shown in Figures 5 to 9 .
  • the technical solutions provided by this disclosure can be implemented in whole or in part through software, hardware, firmware, or any combination thereof.
  • When implemented using software, it may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions.
  • When the computer program instructions are loaded and executed on a computer, the processes or functions described in the present disclosure are produced in whole or in part.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, a terminal device, an access network device, or another programmable device.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center through wired (such as coaxial cable, optical fiber or digital subscriber line (DSL)) or wireless (such as infrared, radio or microwave) means.
  • the computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device, such as a server or data center, that integrates one or more available media.
  • the available media may be magnetic media (e.g., floppy disk, hard disk, tape), optical media (e.g., digital video disc (DVD)), or semiconductor media, etc.
  • the examples may refer to each other; for example, methods and/or terms between method examples may refer to each other, functions and/or terms between device examples may refer to each other, and functions and/or terms between device examples and method examples may refer to each other.
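
A minimal Python sketch of the model selection mentioned earlier in this list; it is not part of the disclosure. It shows how an access network device could pick the decoder-side AI model W2 that matches the AI model W1 used by the terminal device, based either on the reported rank (whose value equals N) or on the sequence length of the second channel status indication information. The names ModelPair, MODEL_REGISTRY, select_w2_by_rank and select_w2_by_length, the placeholder models, and the length ranges are all hypothetical.

```python
# Hypothetical registry mapping each supported value of N to a matched
# (AI model W1, AI model W2) pair, e.g. N1 -> (W11, W21), N2 -> (W12, W22).
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

@dataclass
class ModelPair:
    w1: Callable                   # encoder-side model used by the terminal device
    w2: Callable                   # decoder-side model used by the access network device
    length_range: Tuple[int, int]  # sequence lengths of the second channel status
                                   # indication information produced with this pair

MODEL_REGISTRY: Dict[int, ModelPair] = {
    # Placeholder models; real W1/W2 would be trained AI models.
    1: ModelPair(w1=lambda x: x, w2=lambda y: y, length_range=(1, 48)),
    2: ModelPair(w1=lambda x: x, w2=lambda y: y, length_range=(49, 96)),
}

def select_w2_by_rank(reported_rank: int) -> Callable:
    """The reported rank equals N, so it directly identifies the matching pair."""
    return MODEL_REGISTRY[reported_rank].w2

def select_w2_by_length(seq_len: int) -> Callable:
    """The sequence length range of the second channel status indication
    information implicitly indicates the (W1, W2) pair, and hence the value of N."""
    for pair in MODEL_REGISTRY.values():
        low, high = pair.length_range
        if low <= seq_len <= high:
            return pair.w2
    raise ValueError("no registered model pair matches this sequence length")

# Usage: decode the received second channel status indication information
# with the W2 selected by either rule.
w2 = select_w2_by_rank(2)          # or: select_w2_by_length(64)
```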
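A purely structural Python sketch of the communication device 1000 described in this list: a processing module 1001 plus a communication module 1002 split into a receiving unit and a sending unit. Class and method names are hypothetical illustrations of the module layout, not an implementation of the disclosed method.

```python
from typing import Any, Callable

class CommunicationModule:
    """Transceiver module 1002: a receiving unit and a sending unit."""

    def __init__(self, sending_unit: Callable[[Any], None], receiving_unit: Callable[[], Any]):
        self._send = sending_unit        # sending unit
        self._receive = receiving_unit   # receiving unit

    def send(self, message: Any) -> None:
        self._send(message)

    def receive(self) -> Any:
        return self._receive()

class CommunicationDevice1000:
    """Either the terminal-device side or the access-network-device side."""

    def __init__(self, processing_module: Callable[[Any], Any],
                 communication_module: CommunicationModule):
        self.processing_module = processing_module        # module 1001
        self.communication_module = communication_module  # module 1002

    def handle_once(self) -> None:
        received = self.communication_module.receive()  # receiving operation
        produced = self.processing_module(received)     # processing function
        self.communication_module.send(produced)        # sending operation
```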

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

The present disclosure relates to a communication method and apparatus, which are applied to the technical field of communications. The communication method comprises: determining N pieces of first channel status indication information by using a first encoder and channel information of N downlink transmission layers input into the first encoder, N being a positive integer greater than 1; determining second channel status indication information by using a second encoder and the N pieces of first channel status indication information input into the second encoder, the second channel status indication information corresponding to the N downlink transmission layers, and the sequence length corresponding to the second channel status indication information being less than the sum of the sequence lengths corresponding to the N pieces of first channel status indication information; and sending the second channel status indication information. According to the present disclosure, channel status information feedback overheads are reduced by using an AI model.
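
As a numerical illustration of the two-stage feedback summarized above (an assumption-laden sketch, not the disclosed AI models), the following Python snippet uses random linear projections as stand-ins for the first and second encoders; the values of N, CH_DIM, FIRST_LEN and SECOND_LEN are arbitrary choices for illustration.

```python
# Per-layer compression with a first encoder, then joint compression of the
# N first pieces with a second encoder into a single, shorter second piece.
import numpy as np

rng = np.random.default_rng(0)

N = 2                 # number of downlink transmission layers (N > 1)
CH_DIM = 256          # per-layer channel information length (illustrative)
FIRST_LEN = 64        # sequence length of each piece of first indication information
SECOND_LEN = 96       # sequence length of the second indication information (< N * FIRST_LEN)

first_encoder = rng.standard_normal((FIRST_LEN, CH_DIM))           # stand-in for the first encoder
second_encoder = rng.standard_normal((SECOND_LEN, N * FIRST_LEN))  # stand-in for the second encoder

# Channel information of the N downlink transmission layers.
channel_info = rng.standard_normal((N, CH_DIM))

# Step 1: N pieces of first channel status indication information.
first_csi = np.stack([first_encoder @ channel_info[i] for i in range(N)])

# Step 2: one piece of second channel status indication information whose
# length is smaller than the sum of the lengths of the N first pieces.
second_csi = second_encoder @ first_csi.reshape(-1)

assert second_csi.size < first_csi.size  # feedback overhead is reduced
# The terminal device would then send second_csi to the access network device.
```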
PCT/CN2023/105001 2022-07-06 2023-06-30 Procédé et appareil de communication WO2024008004A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210800531.3 2022-07-06
CN202210800531.3A CN117411526A (zh) 2022-07-06 2022-07-06 一种通信方法及装置

Publications (1)

Publication Number Publication Date
WO2024008004A1 true WO2024008004A1 (fr) 2024-01-11

Family

ID=89454790

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/105001 WO2024008004A1 (fr) 2022-07-06 2023-06-30 Procédé et appareil de communication

Country Status (2)

Country Link
CN (1) CN117411526A (fr)
WO (1) WO2024008004A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110350958A (zh) * 2019-06-13 2019-10-18 东南大学 一种基于神经网络的大规模mimo的csi多倍率压缩反馈方法
CN113796020A (zh) * 2019-03-08 2021-12-14 弗劳恩霍夫应用研究促进协会 用于无线通信系统中用于基于多普勒延迟码本的预编码的csi报告和码本结构
CN116193504A (zh) * 2023-04-18 2023-05-30 南京云程半导体有限公司 一种信道状态信息的上报方法、电子设备及存储介质
CN116260494A (zh) * 2021-12-10 2023-06-13 中兴通讯股份有限公司 反馈、获取及训练方法、终端、基站、电子设备和介质


Also Published As

Publication number Publication date
CN117411526A (zh) 2024-01-16

Similar Documents

Publication Publication Date Title
WO2022033456A1 (fr) Procédé de retour de mesure d'informations d'état de canal et appareil associé
JP2016510559A (ja) ジョイント信号処理における圧縮転送中継方式のためのシステムおよび方法
US20240137082A1 (en) Communication method and apparatus
WO2023126007A1 (fr) Procédé et appareil de transmission d'informations de canal
WO2023125660A1 (fr) Procédé et dispositif de communication
WO2024008004A1 (fr) Procédé et appareil de communication
WO2022012256A1 (fr) Procédé de communication et dispositif de communication
WO2022236785A1 (fr) Procédé de rétroaction d'informations de canal, dispositif d'extrémité de réception et dispositif d'extrémité de transmission
WO2024046215A1 (fr) Procédé et appareil de communication
WO2024046419A1 (fr) Procédé et appareil de communication
CN114492784A (zh) 神经网络的测试方法和装置
WO2023174108A1 (fr) Procédé et appareil de communication
WO2023006096A1 (fr) Procédé et appareil de communication
WO2023125996A1 (fr) Procédé et appareil de précodage de liaison montante
WO2024046288A1 (fr) Procédé et appareil de communication
WO2023036323A1 (fr) Procédé et appareil de communication
WO2023231881A1 (fr) Procédé et appareil d'application de modèle
CN118118133A (zh) 一种通信方法及装置
WO2023202514A1 (fr) Procédé et appareil de communication
WO2023231934A1 (fr) Procédé et appareil de communication
WO2024131900A1 (fr) Procédé de communication et appareil de communication
WO2023115254A1 (fr) Procédé et dispositif de traitement de données
WO2023279947A1 (fr) Procédé et appareil de communication
WO2023125699A1 (fr) Procédé et appareil de communication
WO2023198018A1 (fr) Procédé et appareil d'envoi de signal de liaison montante, ainsi que procédé et appareil de réception de signal de liaison montante

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23834771

Country of ref document: EP

Kind code of ref document: A1